Simone

Reputation: 2311

Google File System Consistency Model

I was reading about GFS and its consistency model, but I'm failing to grasp some of it. In particular, can someone provide a specific example scenario (or an explanation of why it cannot happen) of:

  1. a record append producing duplicated records without concurrent writes,

  2. a file region that ends up inconsistent (and hence undefined), and

  3. a file region that ends up consistent but undefined?

Upvotes: 8

Views: 6505

Answers (2)

Daniel Darabos

Reputation: 27455

I'm quoting from http://research.google.com/archive/gfs.html. Check out Table 1, which is a summary of the possible outcomes for writes/appends:

Table 1 from the GFS paper, "File Region State After Mutation":

                          Write                       Record Append
    Serial success        defined                     defined interspersed with inconsistent
    Concurrent successes  consistent but undefined    defined interspersed with inconsistent
    Failure               inconsistent                inconsistent

  1. "If a record append fails at any replica, the client retries the operation. As a result, replicas of the same chunk may contain different data possibly including duplicates of the same record in whole or in part." So any failure on a replica (e.g. timeout) will cause a duplicate record at least on the other replicas. This can happen without concurrent writes.

  2. The same situation that causes a duplicate record also causes an inconsistent (and hence undefined) region. If a replica failed to acknowledge the mutation, it may not have performed it. In that case, when the client retries the append, this replica will have to add padding in place of the missing data so that the record can be written at the right offset. So one replica will have padding while the others have the previously written record in this region (see the sketch after this list).

  3. A failed write can cause an inconsistent (hence undefined) region as well. More interestingly, successful concurrent writes can cause consistent but undefined regions. "If a write by the application is large or straddles a chunk boundary, GFS client code breaks it down into multiple write operations. They [...] may be interleaved with and overwritten by concurrent operations from other clients. Therefore, the shared file region may end up containing fragments from different clients, although the replicas will be identical because the individual operations are completed successfully in the same order on all replicas. This leaves the file region in consistent but undefined state [...]."
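To make points 1 and 2 concrete, here is a toy Python simulation (my own sketch, not GFS code; the replica map, padding byte, and function names are invented for illustration). One replica drops the first append attempt, the client retries, and the replicas end up with a duplicate record and a padded, inconsistent region respectively:

```python
def append_with_retry(replicas, record, drop_first_attempt):
    """Toy model of GFS record append with at-least-once retries."""
    # Attempt 1: the primary picks an offset (here, end of file) and
    # forwards the record; replicas in drop_first_attempt miss it.
    for name in replicas:
        if name not in drop_first_attempt:
            replicas[name] += record
    # The client saw a failure and retries. The record must land at
    # the same offset on every replica, so the new offset is past
    # every replica's current end; shorter replicas pad up to it.
    offset = max(len(data) for data in replicas.values())
    for name, data in replicas.items():
        replicas[name] = data + b"\x00" * (offset - len(data)) + record

replicas = {"A": b"", "B": b"", "C": b""}
append_with_retry(replicas, b"payload", drop_first_attempt={"C"})
for name, data in replicas.items():
    print(name, data)
# A: b'payloadpayload'      <- duplicate record (point 1)
# B: b'payloadpayload'
# C: b'\x00...\x00payload'  <- padding where A and B hold data:
#                              an inconsistent region (point 2)
```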

Upvotes: 11

Michael Deardeuff

Reputation: 10697

I don't think this really has to do with concurrent appends, but with the at-least-once semantics of their system.

Failure is a fundamental problem of large distributed systems. In the presence of failure, a sender may not know whether the computer on the other end of the network fully received its message.

For such occasions, distributed systems guarantee that a message is delivered either at most once or at least once.

In this case, it appears GFS chose at-least-once delivery to the storage nodes.
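
To illustrate the trade-off, here is a minimal Python sketch of at-least-once delivery (the names and failure probabilities are invented, not part of GFS): the sender retries until it sees an acknowledgement, so an ack dropped after a successful apply leaves a duplicate on the receiver.

```python
import random

def send(message, receiver_log):
    """One attempt: the message may be lost, and even when it is
    applied, the ack on the way back may be lost too. The sender
    cannot tell those two failures apart."""
    delivered = random.random() < 0.9
    if delivered:
        receiver_log.append(message)          # side effect happens
    ack_arrived = delivered and random.random() < 0.7
    return ack_arrived

def deliver_at_least_once(message, receiver_log):
    # Retry until an ack arrives: the message is applied at least
    # once, but every ack lost after a successful apply adds a
    # duplicate to the receiver's log.
    while not send(message, receiver_log):
        pass

log = []
deliver_at_least_once(b"record", log)
print(log)  # may contain b'record' more than once
```

At-most-once would be the opposite choice: never retry after an ambiguous failure, accepting possible loss instead of duplicates. The GFS paper instead leaves it to readers to discard padding and duplicate records, using checksums and record identifiers.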

Upvotes: 1
