Multi-update on ledger using PutState() in ChaincodeStub - hyperledger-fabric

I have a scenario where I have to apply multiple updates to the ledger at the same time.
In the simplest case, two transactions have to be executed together in order for the use case to be valid; if either one of them fails, the other should be rolled back.
err = stub.PutState(key, tradeJSONasBytes)
I am using Hyperledger Fabric 1.1 and a Golang smart contract.

If you want to save multiple values, you can call PutState() multiple times, but there is nothing like reverting a transaction: even if a transaction fails, it is still recorded in a block, as part of the ledger's immutability guarantee.
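For what it's worth, when both updates are made inside the same chaincode invocation they share a single read-write set, so at commit time they are either both applied or neither is (for example, when the chaincode returns an error). A minimal sketch of that, assuming hypothetical keys and values and the Fabric 1.1 Go shim:

package trade

import (
    "github.com/hyperledger/fabric/core/chaincode/shim"
    pb "github.com/hyperledger/fabric/protos/peer"
)

// updateBoth writes two keys in one invocation; both writes belong to the same
// read-write set, so they are committed together or not at all.
func updateBoth(stub shim.ChaincodeStubInterface, keyA, keyB string, valA, valB []byte) pb.Response {
    if err := stub.PutState(keyA, valA); err != nil {
        // Returning an error fails the whole proposal, so neither write is applied.
        return shim.Error("failed to update " + keyA + ": " + err.Error())
    }
    if err := stub.PutState(keyB, valB); err != nil {
        return shim.Error("failed to update " + keyB + ": " + err.Error())
    }
    return shim.Success(nil)
}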

Related

Is there a best practice to invoke cross-channel transactions?

For example, there are 2 channels as below.
CHANNEL_A:
  chaincode name -> chaincode_a
  ledger data -> amount=150
CHANNEL_B:
  chaincode name -> chaincode_b
  ledger data -> amount=0
I want to withdraw 100 from CHANNEL_A's ledger data and deposit 100 to CHANNEL_B's ledger data.
According to the Fabric documentation, if the called chaincode is on a different channel, only the response is returned to the calling chaincode; any PutState calls from the called chaincode will not have any effect on the ledger.
So, if I call chaincode_b and it calls chaincode_a, I can update the amount on chaincode_b, but I can't update it on chaincode_a.
So I have to invoke 2 transactions, one for each channel, on the application side, and I have to consider error handling and so on for data consistency.
Is there a best practice to handle such a transaction on the application side?
To update something in channel A and also in channel B, you need to do 2 transactions and make them commit or not commit in an atomic manner.
A way to do this is to implement something equivalent to a 2-phase commit in the application layer.
However, this isn't trivial to do, as you may always get MVCC conflicts that get in the way.
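To make the shape of that concrete, here is a rough sketch in Go in which everything is hypothetical: submitTx stands in for an SDK helper that submits a transaction on a channel and waits for it to commit, and Prepare/Commit/Abort are assumed chaincode functions that reserve, finalize, or release the amount. Crash recovery, idempotent retries and MVCC-conflict handling are deliberately left out.

package transfer

// submitTx is a placeholder for an SDK helper that submits a transaction on the
// given channel/chaincode and waits for it to commit (hypothetical).
var submitTx func(channel, chaincode, fn, amount string) error

// transfer withdraws on channel A and deposits on channel B using an
// application-level two-phase commit: reserve on both sides, then finalize.
func transfer(amount string) error {
    // Phase 1: reserve the transfer on both channels.
    if err := submitTx("CHANNEL_A", "chaincode_a", "Prepare", amount); err != nil {
        return err
    }
    if err := submitTx("CHANNEL_B", "chaincode_b", "Prepare", amount); err != nil {
        // Roll back the side that already prepared.
        _ = submitTx("CHANNEL_A", "chaincode_a", "Abort", amount)
        return err
    }
    // Phase 2: finalize on both channels. A crash between these two calls leaves
    // the transfer in doubt, which is why a real implementation needs a recovery process.
    if err := submitTx("CHANNEL_A", "chaincode_a", "Commit", amount); err != nil {
        return err
    }
    return submitTx("CHANNEL_B", "chaincode_b", "Commit", amount)
}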

Hyperledger fabric - Concurrent transactions

I'm wondering how it is possible to execute concurrent transactions in Hyperledger Fabric using Hyperledger Composer.
When I try to submit two transactions at the same time against the same resource, I get this error:
Error trying invoke business network. Error: Peer has rejected transaction 'transaction-number' with code MVCC_READ_CONFLICT
Does anyone know of a workaround or a design pattern to avoid this?
Though I may not be providing the best solution, I hope to share some ideas and possible workarounds for this question.
First, let's briefly explain why you are getting this error. The underlying DB of Hyperledger Fabric employs an MVCC-like (Multi-Version Concurrency Control) model. An example of this would be two clients trying to update an asset at version 0 to a certain value. One would succeed (updating the value and incrementing the version number in the state DB to 1), while the other would fail with this error (MVCC_READ_CONFLICT) due to the version mismatch.
One possible solution, discussed here (https://medium.com/wearetheledger/hyperledger-fabric-concurrency-really-eccd901e4040), would be to implement a FIFO queue of your own between the business logic and the Fabric SDK. Retry logic could also be added in this case.
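A rough sketch of that idea in Go (assumptions: submit is whatever function hands the transaction to the SDK and waits for commit, and the MVCC conflict surfaces in the error text): a single worker drains a queue so submissions never race each other, and retries a bounded number of times on conflict. The application then pushes jobs onto the channel instead of calling the SDK directly.

package queue

import "strings"

// txJob is one queued transaction request (hypothetical shape).
type txJob struct {
    fn   string
    args []string
    done chan error
}

// startWorker serializes submissions: one goroutine pulls jobs off the queue
// and submits them one at a time, retrying on MVCC read conflicts.
func startWorker(submit func(fn string, args []string) error, jobs <-chan txJob, maxRetries int) {
    go func() {
        for job := range jobs {
            var err error
            for attempt := 0; attempt <= maxRetries; attempt++ {
                err = submit(job.fn, job.args)
                if err == nil || !strings.Contains(err.Error(), "MVCC_READ_CONFLICT") {
                    break
                }
            }
            job.done <- err
        }
    }()
}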
Another way would be using the delta concept. Suppose there is an asset A with value 10 (maybe it represents an account balance). This asset is updated frequently (say through the sequence of values 12 -> 19 -> 16) by multiple concurrent transactions, so the above-mentioned error would easily be triggered. Instead, we store the updates as deltas (+2 -> +7 -> -3), and the final aggregated value in the ledger is the same. But keep in mind this trick MAY NOT suit every case, and in this example you may also need to closely monitor the running total to avoid paying out money when the account is empty. So it depends heavily on the data type and use case.
For more information, you can take a look at this: https://github.com/hyperledger/fabric-samples/tree/release-1.1/high-throughput
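A minimal sketch of the delta idea (not the sample's actual code), assuming the Fabric 1.1 Go shim: every update writes its own composite key, so concurrent transactions never collide on a single key, and a separate query sums the deltas to produce the running total.

package deltas

import (
    "strconv"

    "github.com/hyperledger/fabric/core/chaincode/shim"
    pb "github.com/hyperledger/fabric/protos/peer"
)

// addDelta records one change under its own key: ("delta", account, txID) -> value.
func addDelta(stub shim.ChaincodeStubInterface, account, delta string) pb.Response {
    key, err := stub.CreateCompositeKey("delta", []string{account, stub.GetTxID()})
    if err != nil {
        return shim.Error(err.Error())
    }
    if err := stub.PutState(key, []byte(delta)); err != nil {
        return shim.Error(err.Error())
    }
    return shim.Success(nil)
}

// getBalance aggregates all deltas recorded for the account into a single total.
func getBalance(stub shim.ChaincodeStubInterface, account string) pb.Response {
    it, err := stub.GetStateByPartialCompositeKey("delta", []string{account})
    if err != nil {
        return shim.Error(err.Error())
    }
    defer it.Close()

    total := 0.0
    for it.HasNext() {
        kv, err := it.Next()
        if err != nil {
            return shim.Error(err.Error())
        }
        d, err := strconv.ParseFloat(string(kv.Value), 64)
        if err != nil {
            return shim.Error(err.Error())
        }
        total += d
    }
    return shim.Success([]byte(strconv.FormatFloat(total, 'f', -1, 64)))
}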
I recently ran into this problem and solved it by building an array of functions that each return a promise for a transaction call, then resolving them one at a time.
My transactions add items from arrays of asset2Ids and asset3Ids to an array field on asset1. My transactions are all acting on the same asset, so I was getting an MVCC_READ_CONFLICT error, as the read/write set changes before each transaction is committed. By forcing the transactions to resolve sequentially, this conflict is avoided:
// Create a function array
let funcArray = [];
for (const i of asset2Ids) {
    // Add this transaction to the array of promises to be resolved
    funcArray.push(() => transactionFunctionThatAddsAsset2IdToAsset1(i).toPromise());
}
for (const j of asset3Ids) {
    // Add this transaction to the array of promises to be resolved
    funcArray.push(() => transactionFunctionThatAddsAsset3IdToAsset1(j).toPromise());
}
// Resolve all transaction promises against the asset in a synchronous way
funcArray.reduce((p, fn) => p.then(fn), Promise.resolve());

Can 2 transactions of the same block update the same state key?

I believe the answer is no but I'd like confirmation.
With Fabric, endorsers simulate the transaction against the latest state and prepare the proposal, adding the read and write sets of keys.
At the commit phase, the peer receives a block from the ordering service, and a write set is only applied if its read set has not been updated in the meantime (versioning check).
So within the same block, the same key cannot be updated by 2 different transactions.
If that is the case, aggregating values and maintaining a balance on-chain might be problematic for high-frequency use cases; such operations should be left to an off-chain application layer.
So within the same block, the same key cannot be updated by 2 different transactions.
The above is correct. Hyperledger Fabric uses an MVCC-like model in order to prevent collisions (or "double spend"). You'll want to wait for the previous state change transaction to commit before attempting to update the state again.

How can I use parallel transactions in Neo4j?

I am currently working on an application using Neo4j as an embedded database.
And I am wondering how it would be possible to make sure that separate threads use separate transactions. Normally, I would assign database operations to a transaction, but the code examples I found don't show how to make sure that write operations use separate transactions:
try (Transaction tx = graphDb.beginTx()) {
    // Work done here belongs to this transaction
    Node node = graphDb.createNode();
    // Mark the transaction as successful so it commits when the block closes
    tx.success();
}
As graphDb is supposed to be used as a thread-safe singleton, I really don't see how that is supposed to work... (e.g. for several users creating shopping lists in separate transactions).
I would be grateful for pointing out where I misunderstand the concept of transactions in Neo4j.
Best regards and many thanks in advance,
Oliver
The code you posted will run in separate transactions if executed by multiple threads, one transaction per thread.
The way this is achieved (and it's quite a common pattern) is by storing transaction state in a ThreadLocal (read the Javadoc and things will become clear).
Neo4j Transaction Management
In order to fully maintain data integrity and ensure good transactional behavior, Neo4j supports the ACID properties:
atomicity: If any part of a transaction fails, the database state is left unchanged.
consistency: Any transaction will leave the database in a consistent state.
isolation: During a transaction, modified data cannot be accessed by other operations.
durability: The DBMS can always recover the results of a committed transaction.
Specifically, all database operations that access the graph, indexes, or the schema must be performed in a transaction.
Here are some useful links to understand Neo4j transactions:
http://neo4j.com/docs/stable/rest-api-transactional.html
http://neo4j.com/docs/stable/query-transactions.html
http://comments.gmane.org/gmane.comp.db.neo4j.user/20442

Node.js - Scaling with Redis atomic updates

I have a Node.js app that performs the following:
get data from Redis
perform a calculation on the data
write the new result back to Redis
This process may take place several times per second. The issue I now face is that I wish to run multiple instances of this process, and I am obviously seeing out-of-date data being written back, because each instance updates after another has already fetched the last value.
How would I make the above process atomic?
I cannot add the operation to a transaction within Redis as I need to get the data (which would force a commit) before I can process and update.
Can anyone advise?
Apologies for the lack of clarity with the question.
After further reading, indeed I can use transactions; the part I was struggling to understand was that I need to separate the read from the update, wrap only the update in the transaction, and use WATCH on the key that was read. This causes the update transaction to fail if another update has taken place in the meantime.
So the workflow is:
WATCH key
GET key
MULTI
SET key
EXEC
Hopefully this is useful for anyone else looking for an atomic get and update.
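For reference, here is the same optimistic-locking flow sketched in Go with the go-redis client (a sketch only; the question is about Node.js, but the WATCH/GET/MULTI/SET/EXEC sequence is the same in Node Redis clients). The update is retried when the watched key changes between the read and the EXEC.

package counter

import (
    "context"
    "errors"
    "strconv"

    "github.com/redis/go-redis/v9"
)

// updateWithRetry reads a key, applies calc to its value, and writes the result
// back under WATCH, retrying if another client modified the key in the meantime.
func updateWithRetry(ctx context.Context, rdb *redis.Client, key string, calc func(int) int, maxRetries int) error {
    txn := func(tx *redis.Tx) error {
        // The GET happens outside MULTI, but the key is WATCHed.
        n, err := tx.Get(ctx, key).Int()
        if err != nil && !errors.Is(err, redis.Nil) {
            return err
        }
        // The queued SET only runs if the watched key is unchanged at EXEC time.
        _, err = tx.TxPipelined(ctx, func(pipe redis.Pipeliner) error {
            pipe.Set(ctx, key, strconv.Itoa(calc(n)), 0)
            return nil
        })
        return err
    }
    for i := 0; i < maxRetries; i++ {
        err := rdb.Watch(ctx, txn, key)
        if err == nil {
            return nil
        }
        if errors.Is(err, redis.TxFailedErr) {
            continue // the key changed under us; try again
        }
        return err
    }
    return errors.New("update failed after max retries")
}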
Redis supports atomic transactions http://redis.io/topics/transactions
