Jenny M

Reputation: 1023

Lock file for updating one section of data or the entire file content

I have a file which can be updated by several processes, hence I want to use locking, and I found https://github.com/gofrs/flock which might help.

But I think the issue is a bit more complicated. For example, to update the file, the API needs to read the file/section first (we are providing an API to read the file or the application objects by name), get the data, modify it in memory, and then update the file.

There are two options:

  1. updating the whole file content
  2. updating one section of the file, i.e. one application's properties

The problem is the following:


 1. Process A and Process B (there can be more…) read the object of the
    application named `node1`.

 2. Process A updates the section (node1) with new data (for example,
    it changes the kind property and writes the file).

 3. Process B wants to do the same; the problem is that between the time
    it reads the data and the time it wants to write, the data is no
    longer valid, since another process has already updated it.

In addition, the same scenario applies to the whole file content.

Race condition issue…

This is a short example of the file, which any stateless process can update at any given time:

ID: demo
version: 0.0.1

applications:
 - name: node1
   kind: html5
   path: node1
   expose:
    - name: node_api
      properties:
         url: ${app-url}


 - name: node2
   kind: nodejs
   path: node2
   parameters:
      quota: 256M
      memory: 256M

How can we overcome this issue, or maybe simplify it, to avoid race conditions and collisions?

Upvotes: 1

Views: 542

Answers (3)

peterSO

Reputation: 166764

This is a common problem so look for known solutions. For example, optimistic locking.

Something like this pseudocode:

lock file for read
read file into data1
release file lock
hash data1 as hash1
update data1
lock file for update
read file into data2
hash data2 as hash2
if hash1 != hash2
    release file lock
    return error
write file from (updated) data1
release file lock
return success
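
For illustration, here is a minimal Go sketch of that pseudocode, assuming the gofrs/flock library mentioned in the question is used for the file lock and SHA-256 for the hash; the lock-file path, the package name, and the modify callback are placeholders, not part of the original answer:

package config

import (
	"bytes"
	"crypto/sha256"
	"errors"
	"os"

	"github.com/gofrs/flock"
)

// updateOptimistically reads the file, lets modify change the data in memory,
// and writes it back only if the file content has not changed in the meantime.
func updateOptimistically(path string, modify func([]byte) []byte) error {
	lock := flock.New(path + ".lock")

	// lock file for read, read file into data1, release file lock
	if err := lock.RLock(); err != nil {
		return err
	}
	data1, readErr := os.ReadFile(path)
	if err := lock.Unlock(); err != nil {
		return err
	}
	if readErr != nil {
		return readErr
	}

	hash1 := sha256.Sum256(data1) // hash data1 as hash1
	updated := modify(data1)      // update data1 (in memory)

	// lock file for update
	if err := lock.Lock(); err != nil {
		return err
	}
	defer lock.Unlock() // release file lock

	// read file into data2, hash data2 as hash2
	data2, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	hash2 := sha256.Sum256(data2)

	// if hash1 != hash2: someone else wrote in between, return error
	if !bytes.Equal(hash1[:], hash2[:]) {
		return errors.New("file changed since it was read, retry the update")
	}

	// write file from (updated) data1
	return os.WriteFile(path, updated, 0644)
}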

Upvotes: 1

Ankit Deshpande

Reputation: 3604

There can be multiple approaches to solve this problem.

1) Using Locks
You can create a read-write lock. If a process only wants to read the file, it can acquire a read lock. If a process wants to write, it acquires a write lock, and other processes have to wait till the write lock is released.
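
A minimal sketch of this approach, assuming the gofrs/flock library from the question provides the shared/exclusive locks; the lock-file path and function names are illustrative only:

package config

import (
	"os"

	"github.com/gofrs/flock"
)

// readConfig takes a shared (read) lock, so several readers can work in
// parallel while writers are blocked.
func readConfig(path string) ([]byte, error) {
	lock := flock.New(path + ".lock")
	if err := lock.RLock(); err != nil {
		return nil, err
	}
	defer lock.Unlock()
	return os.ReadFile(path)
}

// writeConfig takes an exclusive (write) lock; every other reader and writer
// waits until it is released.
func writeConfig(path string, data []byte) error {
	lock := flock.New(path + ".lock")
	if err := lock.Lock(); err != nil {
		return err
	}
	defer lock.Unlock()
	return os.WriteFile(path, data, 0644)
}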

2) Using Versioning
You can keep a counter for tracking the version. A lock will still be needed in this approach as well, for writing to the file.

The initial version is 1.
Process B reads the file, sees version 1.
Process A reads the file, sees version 1, and before writing increments the version to 2 and then updates the file.
So now Process B, before writing, will compare the versions. Since its version (version 1) is less than the current one (version 2), it will have to abort/retry its operation.

The process should update the file only if the contents of the file are the same as what it read. You can achieve it the way peterSO suggested in his answer.
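
For illustration, a small self-contained Go sketch of the version check; the Document struct, its Version field, and the merge helper are assumptions for the example, not part of the question's file format, and the write lock is assumed to be held around the merge:

package main

import "fmt"

// Document stands in for the parsed YAML file; the Version counter is an
// assumed extra field that would have to be added to the file.
type Document struct {
	Version int
	Body    string // placeholder for the real application sections
}

// merge overwrites current with changed only if nobody has bumped the version
// since changed was read; the caller is assumed to hold the write lock.
func merge(current *Document, changed Document) error {
	if current.Version != changed.Version {
		return fmt.Errorf("stale update: read version %d, file is now at version %d",
			changed.Version, current.Version)
	}
	changed.Version++
	*current = changed
	return nil
}

func main() {
	file := Document{Version: 1, Body: "node1: html5"}

	// Process A and Process B both read version 1.
	readByA, readByB := file, file

	// Process A writes first: the version check passes, version becomes 2.
	readByA.Body = "node1: nodejs"
	fmt.Println(merge(&file, readByA)) // <nil>

	// Process B now fails the check and has to re-read and retry.
	readByB.Body = "node1: java"
	fmt.Println(merge(&file, readByB)) // stale update: read version 1, file is now at version 2
}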

Upvotes: 1

flyx

Reputation: 39768

My advice in the comments is still valid: the use-case you describe is typically implemented by databases. They have been designed to solve this problem.

However, if you have to use this YAML file, you can implement your writing operation like this:

  1. create a lock.
  2. read the file.
  3. perform the changes in-memory.
  4. write back to the file.
  5. release the lock.

This ensures that no stale data is updated.
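
A minimal Go sketch of these five steps, assuming the gofrs/flock library from the question provides the lock and a caller-supplied modify function performs step 3; the lock-file path and function names are illustrative:

package config

import (
	"os"

	"github.com/gofrs/flock"
)

// updateUnderLock holds an exclusive lock for the whole read-modify-write
// cycle, so no other process can read stale data and overwrite the file.
func updateUnderLock(path string, modify func([]byte) ([]byte, error)) error {
	lock := flock.New(path + ".lock") // 1. create a lock
	if err := lock.Lock(); err != nil {
		return err
	}
	defer lock.Unlock() // 5. release the lock

	data, err := os.ReadFile(path) // 2. read the file
	if err != nil {
		return err
	}
	updated, err := modify(data) // 3. perform the changes in-memory
	if err != nil {
		return err
	}
	return os.WriteFile(path, updated, 0644) // 4. write back to the file
}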

Upvotes: 1
