Expand and clarify consistency/durability docs in store.wit #56
base: main
Changes from all commits
89c63fc
aaeec54
ee77cef
b4f3fff
5b3c65b
94183c2
cf03f8a
418d48f
a9f34a4
@@ -7,22 +7,52 @@
/// ensuring compatibility between different key-value stores. Note: the clients will be expecting
/// serialization/deserialization overhead to be handled by the key-value store. The value could be
/// a serialized object from JSON, HTML or vendor-specific data types like AWS S3 objects.
///
/// ## Consistency
///
/// Data consistency in a key value store refers to the guarantee that once a write operation
/// completes, all subsequent read operations will return the value that was written.
///
/// Any implementation of this interface must have enough consistency to guarantee "reading your
/// writes." In particular, this means that the client should never get a value that is older than
/// the one it wrote, but it MAY get a newer value if one was written around the same time. These
/// guarantees only apply to the same client (which will likely be provided by the host or an
/// external capability of some kind). In this context a "client" is referring to the caller or
/// guest that is consuming this interface. Once a write request is committed by a specific client,
/// all subsequent read requests by the same client will reflect that write or any subsequent
/// writes. Another client running in a different context may or may not immediately see the result
/// due to the replication lag. As an example of all of this, if a value at a given key is A, and
/// the client writes B, then immediately reads, it should get B. If something else writes C in
/// quick succession, then the client may get C. However, a client running in a separate context may
/// still see A or B.
/// An implementation of this interface MUST be eventually consistent, but is not required to
/// provide any consistency guarantees beyond that. Practically speaking, eventual consistency is
/// among the weakest of consistency models, guaranteeing only that values will not be produced
/// "from nowhere", i.e. any value read is guaranteed to have been written to that key at some
/// earlier time. Beyond that, there are no guarantees, and thus a portable component must neither
/// expect nor rely on anything else.
///
/// In the future, additional interfaces may be added to `wasi:keyvalue` with stronger guarantees,
/// which will allow components to express their requirements by importing whichever interface(s)
/// provides matching (or stronger) guarantees. For example, a component requiring strict
/// serializability might import a (currently hypothetical) `strict-serializable-store` interface
/// with a similar signature to `store` but with much stronger semantic guarantees. On the other
/// end, a host might either support implementations of both the `store` and
/// `strict-serializable-store` or just the former, in which case the host would immediately reject
/// a component which imports the unsupported interface.
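To give a concrete (and purely illustrative) picture of that option, such a hypothetical interface might simply mirror the shape of `store` while documenting much stronger semantics. The sketch below is not part of this PR; the error cases and function signatures are assumed to match the existing `store` interface, with only the doc comments carrying the stronger guarantees:

interface strict-serializable-store {
    /// Same error cases as `store` (reproduced here for illustration)
    variant error {
        no-such-store,
        access-denied,
        other(string)
    }

    resource bucket {
        /// Unlike `store`, a `get` here would be documented to observe the
        /// most recent committed `set` from any client, as if all operations
        /// executed atomically in a single global order.
        get: func(key: string) -> result<option<list<u8>>, error>;
        set: func(key: string, value: list<u8>) -> result<_, error>;
        delete: func(key: string) -> result<_, error>;
        exists: func(key: string) -> result<bool, error>;
    }

    /// Open a bucket by identifier, as in `store`
    open: func(identifier: string) -> result<bucket, error>;
}

A component needing strict serializability would then import this interface instead of (or in addition to) `store`, and a host unable to honor the stronger guarantees would reject the component at instantiation time, exactly as described above.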
///
/// Here are a few examples of behavior which a component developer might wish to rely on but which
/// are _NOT_ guaranteed by an eventually consistent system (e.g. a distributed system composed of
/// multiple replicas, each of which may receive writes in a different order, making no attempt to
/// converge on a global consensus):
///
/// - Read-your-own-writes: eventual consistency does _NOT_ guarantee that a write to a given key
/// followed by a read from the same key will retrieve the same or newer value.
///
/// - Convergence: eventual consistency does _NOT_ guarantee that any two replicas will agree on the
/// value for a given key -- even after all writes have had time to propagate to all replicas.
///
/// - Last-write-wins: eventual consistency does _NOT_ guarantee that the most recent write will
/// take precedence over an earlier one; old writes may overwrite newer ones temporarily or
/// permanently.
///
/// ## Durability
///
/// This interface does not currently make any hard guarantees about the durability of values | ||
I think it's okay to leave the durability wide open. I am wondering, in your case 3 - under async […]. Now, there is a question of "what happens if an async I/O error occurs right after the […]". In a strict interpretation of the spec, once […]. If the store experiences a critical I/O failure that causes data corruption or data loss, there are currently no instructions on how the store should respond. Should it return […]?

I think there are two possible ways to extend the specification to address the above concerns:

Handle defunct after errors
We could define that once a bucket handle experiences a critical I/O error, all further operations on that handle must return an error. That is, if a store fails after a […]

Best-effort guarantee tied to success conditions
The specification could define that "read your writes" holds as long as the store does not fail irrecoverably between operations. A […]

@Mossaka Based on the previous discussion above, I think there are performance reasons not to require "read your writes" (even when reads follow writes on the same […])
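One minimal way to picture the "handle defunct after errors" option above (purely illustrative, not something this PR proposes) is a dedicated, sticky error case. The name `store-defunct` is invented for this sketch; the other cases are the ones already defined by the `store` interface:

variant error {
    /// The host does not recognize the store identifier requested (existing case)
    no-such-store,
    /// The requesting component does not have access to the specified store (existing case)
    access-denied,
    /// Hypothetical addition: the backing store suffered an unrecoverable failure
    /// (e.g. a critical I/O error). Once any operation on a `bucket` returns this,
    /// every subsequent operation on that same handle would also return it.
    store-defunct,
    /// Some implementation-specific error has occurred (existing case)
    other(string)
}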
/// stored. A valid implementation might rely on an in-memory hash table, the contents of which are
For in-memory stores, we probably want to emphasize that the data might be lost if the store crashes, and the best-effort guarantee described in my comment above should apply to our specification - stating that the "read your writes" consistency contract should only apply to a store operating under normal conditions.
/// lost when the process exits. Alternatively, another implementation might synchronously persist
/// all writes to disk -- or even to a quorum of disk-backed nodes at multiple locations -- before
/// returning a result for a `set` call. Finally, a third implementation might persist values
/// asynchronously on a best-effort basis without blocking `set` calls, in which case an I/O error
/// could occur after the component instance which originally made the call has exited.
///
/// Future versions of `wasi:keyvalue` may provide ways to query and control the durability and
/// consistency provided by the backing implementation.
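To illustrate what that could eventually look like (entirely hypothetical; none of the names below exist in `wasi:keyvalue` today), a future revision might let a component discover which of the durability strategies described above a bucket actually provides:

enum durability {
    /// Values are held only in memory and may be lost when the process exits
    in-memory,
    /// Every `set` is persisted (possibly to a quorum of nodes) before it returns
    synchronous,
    /// Writes are persisted in the background on a best-effort basis
    asynchronous
}

// A `bucket` could then expose something like:
//   durability: func() -> durability;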
interface store {
/// The set of errors which may be raised by functions in this package
variant error {

@@ -56,7 +86,7 @@ interface store {
/// A bucket is a collection of key-value pairs. Each key-value pair is stored as a entry in the
/// bucket, and the bucket itself acts as a collection of all these entries.
///
/// It is worth noting that the exact terminology for bucket in key-value stores can very
/// It is worth noting that the exact terminology for bucket in key-value stores can vary
/// depending on the specific implementation. For example:
///
/// 1. Amazon DynamoDB calls a collection of key-value pairs a table

@@ -67,7 +97,14 @@ interface store {
/// 6. Memcached calls a collection of key-value pairs a slab
/// 7. Azure Cosmos DB calls a collection of key-value pairs a container
///
/// In this interface, we use the term `bucket` to refer to a collection of key-value pairs
There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. I found the wording "connection to a collection of key-value pairs" instead of "a collection of key-value pairs" to be a bit strange - it now implies a networked view instead of a logical container. What does this say to downstream implementation that does not involve networking, e.g. a filesystem implementation? There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. I used that wording to emphasize the fact that you can have to It might help to use two different terms for these concepts, e.g. "bucket" could refer to the collection while "bucket-view" refers to a specific view of the collection, similar the distinction between a value and a pointer to a value in a programing language. In the interest of minimizing further changes to this PR, though, would it help to change "connection to a collection of key-value pairs" to "view of a collection of key-value pairs" (and likewise replace "connection" with "view" anywhere else it appears)? There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. Thanks for clarifying. I am okay to merge this PR as is because we can always update the spec if other people find this confusing. |
||
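For what it is worth, the two-term idea floated in the thread above could be sketched roughly as follows. The name `bucket-view` and the split itself are hypothetical and were not adopted by this PR; the signatures and the `error` type are assumed to be those of the existing `store` interface:

/// Hypothetical alternative naming (not adopted): the resource returned by
/// `open` would be called `bucket-view`, making explicit that it is one
/// particular connection onto the underlying collection (the "bucket")
/// rather than the collection itself -- much like a pointer versus the
/// value it points to. Two views opened with the same identifier may
/// observe different replicas.
resource bucket-view {
    get: func(key: string) -> result<option<list<u8>>, error>;
    set: func(key: string, value: list<u8>) -> result<_, error>;
}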
/// In this interface, we use the term `bucket` to refer to a connection to a collection of
/// key-value pairs.
///
/// Note that opening two `bucket` resources using the same identifier MAY result in connections
/// to two separate replicas in a distributed database, and that writes to one of those
/// resources are not guaranteed to be readable from the other resource promptly (or ever, in
/// the case of a replica failure or message reordering). See the `Consistency` section of the
/// `store` interface documentation for details.
resource bucket {
/// Get the value associated with the specified `key`
///
I might be missing your point here, but I thought that Eventual Consistency did mean that eventually all replicas will converge on the same value... you just don't know how long it'll take.
I thought so too, but see @Mossaka's comment above:
Yeah, I can see how that could arise in a fully-weak consistency model; but in that case we should not say "Eventual Consistency" above. That being said, are we aware of any particular kv-store implementations we'd like to allow that aren't even eventually consistent? I had thought EC was sort of the "lower bound" for traditional KV Stores. If we start talking about "caches", then I can see this happening, but I guess that's a question: even if we're not making durability guarantees, do we want implementations that actively evict keys (as opposed to only losing them on crashes)?
@Mossaka seems to be saying that eventual consistency is that weak:
If someone can point me to an authoritative definition of what "eventual consistency" means and what it does and does not include, I'm happy to use that as a reference and update this document to be consistent with it. So far, it seems that everyone has their own, incompatible idea of what it means.
Maybe there isn't a precise, widely-accepted definition? In that case, I can note that in the docs here, e.g. "Although 'eventual consistency' has no precise, widely-accepted definition, here we define it to mean..." Or just not use the term at all?
I'd be happy to read any alternative definitions, but the Wikipedia article does clearly describe convergence over time.
Yeah, that's what I thought it meant and what I want it to mean. @Mossaka can you explain where your assertion that "a consistent state in eventual consistency does not guarantee that all replicas will converge to exactly the same value for every key" came from? It seems to contradict what the Wikipedia article is claiming.