VoltDB is an in-memory, ACID-compliant database. How can we implement transactions in this database? Please suggest.
All operations against VoltDB are 100% strict-serializable ACID transactions.
There is no BEGIN, COMMIT or ROLLBACK.
If you want a multi-statement transaction, you must create a stored procedure. The run() method of the stored procedure becomes the transactional context. It implicitly starts with BEGIN and ends with COMMIT. If an exception is thrown, it will ROLLBACK.
This is covered in the tutorial: https://docs.voltdb.com/tutorial/index.php
Chapter 5 is on procedures: https://docs.voltdb.com/tutorial/Part5.php
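For illustration, here is a minimal sketch of such a multi-statement procedure; the table, column, and procedure names are made up, but the VoltProcedure skeleton follows the pattern from the tutorial:

```java
import org.voltdb.SQLStmt;
import org.voltdb.VoltProcedure;
import org.voltdb.VoltTable;

// Sketch of a multi-statement VoltDB procedure. The run() method is the
// transactional context: returning normally commits all queued work;
// throwing (e.g. VoltAbortException) rolls everything back.
public class TransferPoints extends VoltProcedure {

    public final SQLStmt debit =
        new SQLStmt("UPDATE accounts SET balance = balance - ? WHERE id = ?;");
    public final SQLStmt credit =
        new SQLStmt("UPDATE accounts SET balance = balance + ? WHERE id = ?;");

    public VoltTable[] run(long amount, long fromId, long toId)
            throws VoltAbortException {
        voltQueueSQL(debit, amount, fromId);
        voltQueueSQL(credit, amount, toId);
        // Executes both statements as one batch; any failure aborts
        // (rolls back) the whole procedure automatically.
        return voltExecuteSQL(true);
    }
}
```

There is no explicit BEGIN or COMMIT anywhere in the class: the commit/rollback decision is made entirely by how run() exits.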
Can DocumentDB stored procedures run in parallel and update the same object? Will DocumentDB process them sequentially?
Consider the following scenario.
I have an app and I have 10000 coins to give away to my users when they complete a task. And I have the following object
{
remainingPoints: 10000
}
I have a stored procedure that subtracts 10 points from this object and adds them to the user's points.
Now let's say 10 users complete the task at the same time and I call the stored procedure 10 times at the same time. Will DocDB execute them sequentially? Or will I have to execute the stored procedures sequentially?
I had similar questions when I first started using DocumentDB and got good answers here and in email from the DocumentDB product managers. Quoting:
Stored procedures ... get an isolated snapshot of the database for transactional support. The snapshot reflects the current state of the world (no stale data) at the time the sproc begins execution (strongly consistent).
Caveat – since stored procedures are operating on a snapshot, you can still get a stale read in a sproc if a new write comes in from the outside world during execution.
Also, stored procedures will ALWAYS read their own writes.
Sprocs are DocumentDB's mechanism for multi-document transactions. Sproc writes are committed when a sproc successfully completes execution. If an exception is thrown, all work done in a sproc gets rolled back.
So if two sprocs are running concurrently, they won't see each other's writes.
If both sprocs happen to write to the same document (replace) – then the 2nd one will fail due to an etag mismatch when it attempts to commit writes.
From that, I went forward with my design, making sure to use ETags in my writes as Julian suggests. I also automatically retry each sproc execution up to 3 times to handle cases where it fails due to parallel operations, among other reasons. In practice, I've never exceeded the 3 retries (except in cases where my sproc had a bug), and I rarely even get a single retry.
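The etag-plus-bounded-retry pattern described above can be simulated without any SDK. The following is a self-contained sketch: the in-memory store, document shape, and method names are illustrative and not DocumentDB's API, but the compare-and-swap-on-etag logic is the same idea:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Simulation of optimistic concurrency with etags: a replace succeeds
// only if the caller presents the etag it originally read, and callers
// retry a bounded number of times when they lose the race.
class EtagStore {
    static class Doc {
        final String etag;
        final int points;
        Doc(String etag, int points) { this.etag = etag; this.points = points; }
    }

    private final Map<String, Doc> docs = new ConcurrentHashMap<>();

    void put(String id, int points) {
        docs.put(id, new Doc(UUID.randomUUID().toString(), points));
    }

    Doc read(String id) { return docs.get(id); }

    /** Replace succeeds only if expectedEtag still matches (compare-and-swap). */
    boolean replace(String id, String expectedEtag, int newPoints) {
        Doc fresh = new Doc(UUID.randomUUID().toString(), newPoints);
        return docs.computeIfPresent(id,
            (k, cur) -> cur.etag.equals(expectedEtag) ? fresh : cur) == fresh;
    }

    /** Subtract amount, retrying up to maxRetries times on etag conflicts. */
    boolean subtractWithRetry(String id, int amount, int maxRetries) {
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            Doc cur = read(id);
            if (cur == null || cur.points < amount) return false;
            if (replace(id, cur.etag, cur.points - amount)) return true;
            // etag mismatch: another writer won the race; re-read and retry
        }
        return false;
    }
}
```

With this scheme, 10 concurrent callers each subtracting 10 points from a 10000-point document all eventually succeed (losers re-read and retry), and the store converges to 9900 without any lost updates.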
I assume from the behavior that I observe that it sends each new sproc execution to a different replica until it runs out of replicas and then it queues them for sequential execution, so it's a hybrid of parallel and serial execution.
One other tip that I learned through experimentation is that you are better off doing pure read operations (no writes and no significant aggregation) client-side rather than in a sproc when you are on a heavily loaded system. I assume the advantage is because DocumentDB can satisfy different reads from different replicas in parallel. I have modularized my sproc code using the expandScript functionality of documentdb-utils to make sure that I use the exact same code for write validation, intra-document consistency, and derived fields both client-side and server-side, which is possible using node.js. Even if you are mostly .NET, you may want to use expandScripts to build your sprocs in a modular DRY way. You'll still need to run node.js in your build process to pre-process your sprocs or use Edge.NET (node running inside of .NET) to do so on the fly.
It will depend on the consistency level you have chosen for your collection. But the idea is that DocumentDB handles concurrency using etags and executes a stored procedure on a snapshot of a document version, committing the result only if the execution succeeds.
See: https://azure.microsoft.com/en-us/documentation/articles/documentdb-faq/#develop
This thread may help too: Atomically increment an integer in a document in Azure DocumentDB
Is it possible to do multiple operations in a single stored procedure in Cosmos DB with bounded execution?
I have to perform the following operations in a single stored procedure:
A new record to be inserted
Few records to be deleted
Update Operation to be performed
How can data consistency be maintained with transaction support in this case?
Cosmos DB stored procedures work transactionally, within a single partition of a single collection. So, as long as your inserts, deletes, and updates (replacements, to be more accurate) are all within a single partition, they would all be handled transactionally within a single stored procedure call.
Hey Ramakrishna Reddy,
As David mentioned, transactions can only be achieved within a partition of a collection. See the documentation here: https://learn.microsoft.com/en-us/azure/cosmos-db/database-transactions-optimistic-concurrency. I have seen a case where multiple collections were merged into one collection to make transactions possible. You might need to do the same thing.
There are examples here of how to create stored procedures that achieve transactions: https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/cosmos-db/how-to-write-stored-procedures-triggers-udfs.md
In your particular situation, you will probably need to write a transaction that takes an array of items to upsert, and an array of items to delete. You can find an example of a deletion transaction here: https://github.com/Azure/azure-cosmos-dotnet-v2/blob/master/samples/clientside-transactions/DocDBClientBulk/DocDBClientBulk/bulkDelete.js
Alternatively, you can use the transactional batch support that Cosmos DB now provides in the .NET SDK. In addition to transaction support, you can see other long-awaited updates in this blog: https://devblogs.microsoft.com/cosmosdb/whats-new-in-azure-cosmos-db-nov-2019/. However, I am unclear whether it supports the deletion that you are seeking. I haven't gotten a chance to play with it. Maybe you can share when you figure it out!
How can I ensure each execution of a stored procedure can complete its entire transaction without the document state being modified by another simultaneous execution of the same procedure by another client?
Is there anything I need to do to ensure a lock on the documents that are being modified?
Is there a way to implement something like a T-SQL TRANSACTION to ensure that the entire operation is rolled back if an error occurs?
Stored procedures are executed on a database snapshot in isolation. Either everything in the operation works or it's all rolled back.
You should still use the etag option for any replaceDocument() or deleteDocument() calls to ensure that a document hasn't been changed out from under you:
// Pass the etag from the document you read so the replace fails
// if another writer has modified the document in the meantime.
var etag = oldDocument._etag;
var options = { etag: etag };
var stillQueueing = collection.replaceDocument(documentLink, newDocument, options);
I am currently working on an application using Neo4j as an embedded database.
And I am wondering how to make sure that separate threads use separate transactions. Normally, I would assign database operations to a transaction, but the code examples I found don't show how to make sure that write operations use separate transactions:
try (Transaction tx = graphDb.beginTx()) {
    Node node = graphDb.createNode();
    tx.success();
}
As graphDb is to be used as a thread-safe singleton, I really don't see how that is supposed to work... (E.g. for several users creating a shopping list in separate transactions.)
I would be grateful for pointing out where I misunderstand the concept of transactions in Neo4j.
Best regards and many thanks in advance,
Oliver
The code you posted will run in separate transactions if executed by multiple threads, one transaction per thread.
The way this is achieved (and it's quite a common pattern) is storing transaction state against ThreadLocal (read the Javadoc and things will become clear).
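To make this concrete, here is a sketch assuming the Neo4j 3.x embedded API (the same GraphDatabaseService API the question uses); the class and property names are made up:

```java
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;

// The graphDb singleton is shared by all threads, but each beginTx()
// call binds a new transaction to the *calling* thread (via ThreadLocal),
// so the workers below commit independently of each other.
class ShoppingListWriter implements Runnable {
    private final GraphDatabaseService graphDb;

    ShoppingListWriter(GraphDatabaseService graphDb) {
        this.graphDb = graphDb;
    }

    @Override
    public void run() {
        // This transaction belongs to the current thread only.
        try (Transaction tx = graphDb.beginTx()) {
            Node list = graphDb.createNode();
            list.setProperty("owner", Thread.currentThread().getName());
            tx.success(); // marks this thread's transaction for commit
        } // close() commits, or rolls back if success() was not called
    }
}

// Usage: both threads share graphDb but get separate transactions.
// new Thread(new ShoppingListWriter(graphDb), "alice").start();
// new Thread(new ShoppingListWriter(graphDb), "bob").start();
```

Nothing extra is needed in user code to separate the transactions; the separation falls out of the thread-bound transaction state.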
Neo4j Transaction Management
In order to fully maintain data integrity and ensure good transactional behavior, Neo4j supports the ACID properties:
atomicity: If any part of a transaction fails, the database state is left unchanged.
consistency: Any transaction will leave the database in a consistent state.
isolation: During a transaction, modified data cannot be accessed by other operations.
durability: The DBMS can always recover the results of a committed transaction.
Specifically:
All database operations that access the graph, indexes, or the schema must be performed in a transaction.
Here are some useful links to understand Neo4j transactions:
http://neo4j.com/docs/stable/rest-api-transactional.html
http://neo4j.com/docs/stable/query-transactions.html
http://comments.gmane.org/gmane.comp.db.neo4j.user/20442
I am storing all my data in Cassandra using the CLI. Is there any possibility of a rollback in Cassandra, or some other technique? Please tell me.
The closest you can get to the transactional behavior you are asking about is using BATCH. However, the semantics of BATCH are not equivalent to those of an RDBMS transaction. Mainly:
all updates in a BATCH belonging to a given partition key are performed atomically and in isolation
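As a sketch, a single-partition batch looks like this in CQL; the keyspace, table, and column names here are made up:

```sql
-- All statements target the same partition key (user_id = 'alice'),
-- so they are applied atomically and in isolation.
BEGIN BATCH
  UPDATE user_points SET balance = 90  WHERE user_id = 'alice' AND bucket = 1;
  UPDATE user_points SET balance = 110 WHERE user_id = 'alice' AND bucket = 2;
  DELETE FROM user_points WHERE user_id = 'alice' AND bucket = 3;
APPLY BATCH;
```

Note that this is still not a rollback mechanism: once APPLY BATCH is accepted, the mutations will be applied; there is no way to undo them afterwards.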