From the Microsoft documentation I don't fully understand whether or not Cosmos DB using session consistency guarantees no out-of-order writes. The following quote makes it seem like it has the same guarantees as consistent prefix:
The reads are guaranteed to honor the consistent-prefix (assuming a single “writer” session), ...
Although from the baseball example further down the page it seems like a reader could get back a completely random order, similar to eventual consistency. From other sources online I also can't find a definitive answer, apart from the images shown in the Azure Portal, which seem to implicitly suggest the same order as the writer.
(I'm from the Cosmos DB team)
A given client using session consistency will see its own writes in order, but will see other clients' writes with eventual consistency (assuming they use different session tokens).
We're going to update the docs to make that clearer. The new text will read something like this:
Session: Within a single client session reads are guaranteed to honor the consistent-prefix (assuming a single “writer” session), monotonic reads, monotonic writes, read-your-writes, and write-follows-reads guarantees. Clients outside of the session performing writes will see eventual consistency.
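To make the session-token part concrete, here is a minimal sketch, assuming the Azure Cosmos DB Java SDK v4 (the endpoint, key, database/container names, and the Order type are all placeholders), of how a second client can echo the writer's session token so its reads keep the session guarantees instead of falling back to eventual consistency:

    import com.azure.cosmos.ConsistencyLevel;
    import com.azure.cosmos.CosmosClient;
    import com.azure.cosmos.CosmosClientBuilder;
    import com.azure.cosmos.CosmosContainer;
    import com.azure.cosmos.models.CosmosItemRequestOptions;
    import com.azure.cosmos.models.CosmosItemResponse;
    import com.azure.cosmos.models.PartitionKey;

    public class SessionTokenHandoff {

        // Cosmos DB items need an "id" property; the rest is illustrative.
        public static class Order {
            public String id;
            public String customerId;
            public Order() {}
            public Order(String id, String customerId) {
                this.id = id;
                this.customerId = customerId;
            }
        }

        public static void main(String[] args) {
            CosmosClient client = new CosmosClientBuilder()
                    .endpoint("https://<account>.documents.azure.com:443/") // placeholder
                    .key("<key>")                                           // placeholder
                    .consistencyLevel(ConsistencyLevel.SESSION)
                    .buildClient();
            CosmosContainer container = client.getDatabase("db").getContainer("orders");

            // Writer: every response carries a session token.
            CosmosItemResponse<Order> write = container.createItem(
                    new Order("order-1", "customer-7"), new PartitionKey("customer-7"), null);
            String sessionToken = write.getSessionToken();

            // A different client/process can echo that token so its reads honor the
            // writer's session guarantees instead of plain eventual consistency.
            CosmosItemRequestOptions options = new CosmosItemRequestOptions();
            options.setSessionToken(sessionToken);
            CosmosItemResponse<Order> read = container.readItem(
                    "order-1", new PartitionKey("customer-7"), options, Order.class);

            client.close();
        }
    }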
Per my research, I think the session consistency level can't guarantee that clients always read values in write order.
My evidence is from this link:
When the consistency level is set to bounded staleness, Cosmos DB guarantees that the clients always read the value of a previous write, with a lag bounded by the staleness window.
When the consistency level is set to strong, the staleness window is equivalent to zero, and the clients are guaranteed to read the latest committed value of the write operation.
For the remaining three consistency levels, the staleness window is largely dependent on your workload. For example, if there are no write operations on the database, a read operation with eventual, session, or consistent prefix consistency levels is likely to yield the same results as a read operation with strong consistency level.
As said above, the staleness window depends on your actual workload if you choose the session consistency level. So, if you do care about read order, I suggest using bounded staleness or even the strong consistency level.
Related
While going through the DataStax tutorial I learned that
1) A lower consistency level is quicker for reads and writes, whereas a higher consistency level takes much longer.
2) A lower consistency level also ensures high availability of data.
My question is: if a lower CL is good, then we could always set the CL to ONE. Why do we need the QUORUM and ALL consistency levels?
It ultimately depends on the application using Cassandra. If the application is ok with serving up data that may be under-replicated or slightly stale, then LOCAL_ONE should be fine. If the application absolutely cannot provide a wrong answer, or if written rows are not being successfully read consistently, then perhaps LOCAL_QUORUM may be more applicable.
I tell my application teams the same thing. Start with LOCAL_ONE, and work with it through testing. If you don't have any problems, then continue using it. If you do experience stale data, and your application is more read-sensitive, then try writing at LOCAL_QUORUM and continue reading at LOCAL_ONE. And if that doesn't help, then perhaps the application may need both at QUORUM.
Again, that's why application teams need to do thorough testing.
And just to address it, ALL is a useful consistency level because it invokes a read repair. Essentially, if you have a table which you know is missing data, and you don't want to run a costly nodetool repair on it, you can set your consistency to ALL and read from it. I find this trick most useful when addressing problems with system_auth on multi-DC clusters.
But you probably wouldn't want to use ALL from within an application. Or if you did, it'd be for a very specific edge case.
The real meat of a database like Cassandra is "eventual consistency": it won't enforce strong consistency when you first write data to the database. Rather, it gives you the option to choose a weaker consistency level like ONE to reach high write performance. Then later, when you query data, as long as the rule "read consistency level + write consistency level > replication factor (RF)" is satisfied, you won't read stale data.
It's risky if you can't fulfill the above rule, since you might get either stale or contradictory (sometimes new, sometimes old) data.
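As a concrete check of that rule, here is a tiny sketch (the helper name is made up); the point is that the set of replicas written and the set of replicas read must overlap in at least one node:

    /**
     * A read is guaranteed to see the most recent write when the replicas
     * written and the replicas read overlap in at least one node, i.e. when
     * nodesWritten + nodesRead > replicationFactor.
     */
    static boolean overlapsLatestWrite(int nodesWritten, int nodesRead, int replicationFactor) {
        return nodesWritten + nodesRead > replicationFactor;
    }

    // With RF = 3:
    //   QUORUM writes (2) + QUORUM reads (2) -> 4 > 3  => consistent
    //   ONE writes (1)    + ALL reads (3)    -> 4 > 3  => consistent
    //   ONE writes (1)    + ONE reads (1)    -> 2 <= 3 => possibly stale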
I have an application that performs only inserts/deletes in Cassandra. The application performs all write operations (inserts/deletes) using consistency level QUORUM, and read operations are currently executed using QUORUM as well, but I'm wondering if in this case (when there are no updates to data) consistency level ONE would give the same results as QUORUM.
Not necessarily. It could be that your read request goes to the one node which has not (yet) received/applied the updates. The QUORUM consistency level does allow for some nodes to not have the updated data just yet; by using a consistency level of ONE for your read, you might read stale data. Perhaps your application can deal with this situation -- that's for you to decide. But you should know that a consistency level of ONE for reads may not return the data you expect in your situation.
Hope this helps!
I have a 5-node cluster and a keyspace with a replication factor of 3. The nature of operations is such that writes are much more important than reads, but the frequency of read operations is about 10 times higher than writes. To achieve consistency while improving overall performance, I chose to set the consistency level for writes to ALL, and ONE for reads. But this causes operations to fail if even one node is down.
Is there a method by which I can simultaneously change the consistency levels for (write, read) from (ALL, ONE) to (QUORUM, QUORUM) if one node is detected down, or if there is a query-execution exception; and can this be done in a manner such that no operations pass through a temporary phase where they see a (QUORUM, ONE) setting?
We also plan to expand to twice the capacity: 3 datacenters with 4 nodes each. Is it possible to define custom consistency levels, like a level of ALL in any one datacenter and ONE in the others? I'm thinking that a level of (EACH_ONE) for reads, coupled with the above level for writes, will ensure consistency but allow the cluster to remain available even if a node goes down.
The flexibility is there, since you can set your consistency level on a per-request basis. Depending on the client you are using, there are some nice capabilities. For example, the java driver has something called a DowngradingConsistencyRetryPolicy such that if a request fails, it will be retried with the next lowest consistency level until the request succeeds. This pushes the complexity of retrying into the client so you don't have to write a bunch of code for it; it's really nice!
The java driver also allows you to configure consistency level per request with Statement#setConsistencyLevel()
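For instance, a minimal sketch with the DataStax Java driver 3.x (note DowngradingConsistencyRetryPolicy was removed in driver 4.x; the contact point, keyspace, and query are made up):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ConsistencyLevel;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;
    import com.datastax.driver.core.Statement;
    import com.datastax.driver.core.policies.DowngradingConsistencyRetryPolicy;

    public class TunableConsistency {
        public static void main(String[] args) {
            // Retry failed requests at the next lower consistency level.
            Cluster cluster = Cluster.builder()
                    .addContactPoint("127.0.0.1")
                    .withRetryPolicy(DowngradingConsistencyRetryPolicy.INSTANCE)
                    .build();
            Session session = cluster.connect("my_keyspace");

            // Consistency can also be chosen per request.
            Statement read = new SimpleStatement("SELECT * FROM users WHERE id = ?", 42)
                    .setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
            session.execute(read);

            cluster.close();
        }
    }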
As far as custom consistency levels, this is not an option available to you (without changing the cassandra source code), however I think what is made available should be sufficient.
For reads, I don't find much value in ensuring consistency between datacenters on read. I think LOCAL_QUORUM is more than sufficient, but if you really care, you can use something like EACH_QUORUM to ensure all datacenters agree, though that will severely impact your response time and availability. For example, if one of your datacenters goes down completely, you won't be able to do reads at all (unless downgrading).
For writes, I'd strongly recommend not using ALL in a multi-datacenter setup if you care about response time and availability. Depending on your requirements, LOCAL_QUORUM should likely be more than sufficient.
While one of the benefits of Cassandra is that consistency is tunable, you can have as much strong consistency as you like, but keep in mind that Cassandra is at its best as a Highly Available, Partition Tolerant system.
A really good presentation on consistency that I think nails a lot of these points is Christos Kalantzis' talk 'Eventual Consistency != Hopeful Consistency', which suggests that a consistency level of ONE is sufficient for a lot of use cases.
When both reads and writes are set to QUORUM, the client is guaranteed to always get the latest value when reading.
I realize this may be a novice question, but I'm not understanding how this setup doesn't provide consistency, availability, and partition tolerance.
With a quorum, you are unavailable (i.e. won't accept reads or writes) if there aren't enough replicas available. You can choose to relax and read / write on lower consistency levels granting you availability, but then you won't be consistent.
There's also the case where a quorum on reads and writes guarantees that the latest "written" data is retrieved. However, if a coordinator doesn't know about required partitions being down (i.e. gossip hasn't propagated after 2 of 3 nodes fail), it will issue a write to 3 replicas (assuming quorum consistency on a replication factor of 3). The one live node will write, and the other 2 won't (they're down). The write times out (it doesn't fail). A write timeout where even one node has written IS NOT a write failure; it's a write "in progress". Let's say the down nodes come up now. If a client next requests that data with quorum consistency, one of two things happens:
Request goes to one of the two downed nodes, and to the "was live" node. Client gets latest data, read repair triggers, all is good.
Request goes to the two nodes that were down. OLD data is returned (assuming repair hasn't happened). The coordinator gets the digest from the third node, and read repair kicks in. This is when the original write is considered "complete" and subsequent reads will get the fresh data. All is good, but one client will have received the old data, as the write was "in progress" but not "complete". This is a very rare scenario. One thing to note is that writes to Cassandra are upserts on keys, so retries are usually OK to get around this problem; however, in case nodes genuinely go down, the initial read may be a problem.
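Since writes are idempotent upserts on a key, the usual mitigation for a timed-out write is simply to replay it; a minimal sketch with the DataStax Java driver 3.x (the helper name is made up, and a real application would bound the retries):

    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.Statement;
    import com.datastax.driver.core.exceptions.WriteTimeoutException;

    static void upsertWithRetry(Session session, Statement write) {
        try {
            session.execute(write);
        } catch (WriteTimeoutException e) {
            // The write may still be "in progress" on some replicas; because
            // Cassandra writes are upserts on a key, replaying the same
            // statement is safe and completes the write.
            session.execute(write);
        }
    }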
Typically you balance your consistency and availability requirements. That's where the term tunable consistency comes from.
That said, the web is full of links that disprove (or at least try to disprove) Brewer's CAP theorem. From the theorem's point of view, the C says that
all nodes see the same data at the same time
That is quite different from the guarantee that a client will always retrieve fresh information. Strictly following the theorem, in your situation the C is not respected.
The DataStax documentation contains a section on Configuring Data Consistency. Looking through all of the available consistency configurations, for QUORUM it states:
Returns the record with the most recent timestamp after a quorum of replicas has responded regardless of data center. Ensures strong consistency if you can tolerate some level of failure.
Note that last part "tolerate some level of failure." Right there it's indicating that by using QUORUM consistency you are sacrificing availability (A).
The document referenced above also further defines the QUORUM level, stating that your replication factor comes into play as well:
If consistency is top priority, you can ensure that a read always reflects the most recent write by using the following formula:
(nodes_written + nodes_read) > replication_factor
For example, if your application is using the QUORUM consistency level for both write and read operations and you are using a replication factor of 3, then this ensures that 2 nodes are always written and 2 nodes are always read. The combination of nodes written and read (4) being greater than the replication factor (3) ensures strong read consistency.
In the end, it all depends on your application requirements. If your application needs to be highly-available, ONE is probably your best choice. On the other hand, if you need strong-consistency, then QUORUM (or even ALL) would be the better option.
I have a table counting around 1000 page views per second. What read and write ConsistencyLevel should I use with it? I am using the Cassandra Thrift client.
Carlo has more or less the right idea. But you have to balance it with your use case.
I work in the game industry and we use Cassandra for player data. It is quite heavily bound by the read-modify-write pattern, which is not the strong suit of Cassandra. But we also have some functionality that is write-heavy (thousands of writes for a few reads a day).
This is my opinion, based upon experience, of how you should use the consistency levels.
Write + Read at QUORUM means that, before returning, both operations will wait for a majority of the replicas for that key to confirm the operation. It is the solution I use when reads and writes are roughly at the same frequency. (Player data blob)
Write ONE + Read ALL is useful for something very write-heavy. We use this for high scores, for example (written often, read every 5 minutes to regenerate the high-score table of the whole game).
You could use Write ANY if you do not care about the data that much (non-critical logs come to mind).
The only use case I could come up with for Write ALL + Read ONE would be messaging or feeds with periodic checks for updates. Chats and messaging seem a good fit for that, since Cassandra does not have subscription/push functionality.
Write & Read at ALL is a bad implementation. It IS a WASTE of resources, as you will get the same consistency as if you were using one of the three setups I mentioned above.
A final note about Write ANY vs. Write ONE: ANY only confirms that something in the cluster has received the mutation (possibly only a hint), but ONE confirms that it has been applied by at least one replica. ANY is not safe, as it could return without error even if all the nodes responsible for that mutation are down, or under any other condition that could make the mutation fail after reception. It is also slightly quicker, which is its only advantage (I only use it as an async dump for logs that are not critical); do not trust the response 100%.
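As a small illustration of that fire-and-forget logging pattern, here is a sketch with the DataStax Java driver 3.x (the table and column names are made up):

    import com.datastax.driver.core.ConsistencyLevel;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;
    import com.datastax.driver.core.Statement;

    static void dumpLog(Session session, String message) {
        // CL ANY: succeeds as soon as anything (even just a hint) stores the
        // mutation. Fast, but the write may never reach a live replica.
        Statement logWrite = new SimpleStatement(
                "INSERT INTO logs (id, msg) VALUES (uuid(), ?)", message)
                .setConsistencyLevel(ConsistencyLevel.ANY);
        session.executeAsync(logWrite); // async dump; don't rely on the result
    }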
A good reference for studying this subject in Cassandra is http://www.datastax.com/docs/1.2/dml/data_consistency
If you want to always be consistent on any read, the rule is
(write consistency level + read consistency level) > replication factor.
So you could
Write All + Read All (worst solution)
Write One + Read All (second-worst solution)
Write All + Read One (probably faster solution)
Write Quorum + Read Quorum (imho, best solution)
I want to point out that if one of the RF nodes is down during the r/w operation, the operation will fail, so I'd avoid CL ALL.
Regards, Carlo
Based on their document (https://docs.datastax.com/en/cql/3.0/cql/ddl/ddl_counters_c.html), consistency level ONE is recommended. I guess some sort of merging is used to resolve conflicts for counter columns, instead of the usual last-write-wins. That's likely why setting a value directly is not allowed.
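For what it's worth, counter columns only accept relative increments/decrements, never direct assignment; a quick sketch with the DataStax Java driver 3.x (the table and column names are made up):

    import com.datastax.driver.core.ConsistencyLevel;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;

    static void countView(Session session, String page) {
        // Counters support only relative updates; "SET views = 5" would be rejected.
        session.execute(new SimpleStatement(
                "UPDATE page_views SET views = views + 1 WHERE page = ?", page)
                .setConsistencyLevel(ConsistencyLevel.ONE));
    }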