I'm wondering how it is possible to execute concurrent transactions in Hyperledger Fabric using Hyperledger Composer.
When I try to submit two transactions at the same time against the same resource I get this error:
Error trying invoke business network. Error: Peer has rejected transaction 'transaction-number' with code MVCC_READ_CONFLICT
Does anyone know if there is a workaround or a design pattern to avoid this?
Though I may not be providing the best solution, I hope to share some ideas and possible workarounds for this problem.
First, let's briefly explain why you are getting this error. The underlying DB of Hyperledger Fabric uses an MVCC-like (Multi-Version Concurrency Control) model. An example would be two clients trying to update an asset at version 0 to some new value. One would succeed (updating the value and incrementing the version number in the stateDB to 1), while the other would fail with this error (MVCC_READ_CONFLICT) due to the version mismatch.
One possible solution, discussed here (https://medium.com/wearetheledger/hyperledger-fabric-concurrency-really-eccd901e4040), would be to implement a FIFO queue of your own between the business logic and the Fabric SDK. Retry logic could also be added in this case.
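As a rough illustration of that approach, here is a minimal sketch in Go of a single-worker queue with retry on MVCC conflicts. Note that submitTransaction is a placeholder for your actual call into the SDK; it is an assumption for this sketch, not a real Fabric or Composer API:

package main

import (
	"fmt"
	"strings"
	"time"
)

// submitTransaction is a placeholder for the real call into the
// Fabric SDK / Composer REST server; assumed for this sketch only.
func submitTransaction(payload string) error {
	// ...submit to the network here...
	return nil
}

// txRequest is one queued submission.
type txRequest struct {
	payload string
	done    chan error
}

// worker drains the queue one transaction at a time, retrying on
// MVCC_READ_CONFLICT, so concurrent callers never race on the same key.
func worker(queue <-chan txRequest, maxRetries int) {
	for req := range queue {
		var err error
		for attempt := 0; attempt < maxRetries; attempt++ {
			err = submitTransaction(req.payload)
			if err == nil || !strings.Contains(err.Error(), "MVCC_READ_CONFLICT") {
				break
			}
			time.Sleep(time.Duration(attempt+1) * 100 * time.Millisecond) // simple linear backoff
		}
		req.done <- err
	}
}

func main() {
	queue := make(chan txRequest, 100)
	go worker(queue, 5)

	done := make(chan error)
	queue <- txRequest{payload: "tx1", done: done}
	if err := <-done; err != nil {
		fmt.Println("transaction failed:", err)
	}
}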
Another way would be using the delta concept. Suppose there is an asset A with value 10 (maybe it's representing an account balance). This asset is updated frequently (say through the sequence of values 12 -> 19 -> 16) by multiple concurrent transactions, so the above-mentioned error would easily be triggered. Instead, we store each change as a delta (+2 -> +7 -> -3), and the final aggregated value in the ledger is the same. But keep in mind this trick MAY NOT suit every case; in this example, you would also need to closely monitor the running total to avoid paying out money when the account is already empty. So it depends heavily on the data type and use case.
For more information, you can take a look at this: https://github.com/hyperledger/fabric-samples/tree/release-1.1/high-throughput
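Condensed from the pattern used in that sample (this is a sketch of the idea, not the sample code itself), the Go chaincode side looks roughly like this: every update appends a new composite key instead of rewriting one shared key, and reads aggregate the rows:

import (
	"strconv"

	"github.com/hyperledger/fabric/core/chaincode/shim"
)

// update records one delta under its own composite key (name~op~txID),
// so concurrent transactions never write to the same key and therefore
// never trigger MVCC_READ_CONFLICT.
func update(stub shim.ChaincodeStubInterface, name, op, delta string) error {
	key, err := stub.CreateCompositeKey("varName~op~txID", []string{name, op, stub.GetTxID()})
	if err != nil {
		return err
	}
	return stub.PutState(key, []byte(delta))
}

// get aggregates all stored deltas into the current value at read time.
func get(stub shim.ChaincodeStubInterface, name string) (float64, error) {
	iter, err := stub.GetStateByPartialCompositeKey("varName~op~txID", []string{name})
	if err != nil {
		return 0, err
	}
	defer iter.Close()

	var total float64
	for iter.HasNext() {
		row, err := iter.Next()
		if err != nil {
			return 0, err
		}
		_, parts, err := stub.SplitCompositeKey(row.Key)
		if err != nil {
			return 0, err
		}
		delta, err := strconv.ParseFloat(string(row.Value), 64)
		if err != nil {
			return 0, err
		}
		if parts[1] == "+" { // parts = [name, op, txID]
			total += delta
		} else {
			total -= delta
		}
	}
	return total, nil
}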
I recently ran into this problem and solved it by creating an array of functions that return promises, then resolving them one at a time.
My transactions add items from arrays of asset2Ids and asset3Ids to an array field on asset1. My transactions all act on the same asset, so I was getting an MVCC_READ_CONFLICT error because the read/write set changes before each transaction is committed. By forcing the transactions to resolve sequentially, this conflict is avoided:
// Create an array of functions that each return a transaction promise
let funcArray = [];
for (const i of asset2Ids) {
    // Add this transaction to the array of promises to be resolved
    funcArray.push(() => transactionFunctionThatAddsAsset2IdToAsset1(i).toPromise());
}
for (const j of asset3Ids) {
    // Add this transaction to the array of promises to be resolved
    funcArray.push(() => transactionFunctionThatAddsAsset3IdToAsset1(j).toPromise());
}
// Chain the transactions so each one starts only after the previous commits
funcArray
    .reduce((p, fn) => p.then(fn), Promise.resolve())
    .catch(err => console.error('Transaction failed:', err));
I have an operation with X meters; the number of meters may vary.
For each meter, I have to set a percentage of allocation.
So let's say, in my Operation 1, I have 3 meters, m1, m2, m3: I will assign 10% to m1, 50% to m2, and 40% to m3.
So, in this case, when I receive data from m1, I will want to check that operation 1 and meter 1 exist, that meter 1 belongs to operation 1, and get the allocation for my meter.
All those settings are present in an external DB (Postgres), which I can query easily from Golang. The thing is, I have heard that chaincode must be deterministic, and that it is good practice not to have any external dependencies. I understand that if the result of your chaincode depends on an external DB, you will not be able to audit it, so the whole blockchain loses some of its value.
Should I hardcode the settings in an array or in a config file? Then each time the configuration changes, I must publish my chaincode again? I am not happy about having two configurations to keep in sync (the DB plus a config file); that could quickly lead to mistakes.
What is the recommended way of managing an external DB connection in chaincode?
You could place the "meter information" into the blockchain data store and query it from there.
For instance, the application could be used to:
maintain the state of all meters, with whatever information they require. This data is written to the Fabric state store, where it may be queried.
perform an additional transaction that contains the logic required to query the meter information and act accordingly.
In the above case the chaincode will be able to update meter information, and to act on that information via queries and subsequent actions.
Everything is then on-chain, and therefore accessible, updatable and auditable.
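A minimal sketch of that idea in Go chaincode (struct fields, keys and function names here are illustrative, not taken from the question):

import (
	"encoding/json"
	"fmt"

	"github.com/hyperledger/fabric/core/chaincode/shim"
)

// MeterConfig is an illustrative on-chain record replacing the Postgres row.
type MeterConfig struct {
	OperationID string  `json:"operationId"`
	MeterID     string  `json:"meterId"`
	Allocation  float64 `json:"allocation"` // e.g. 0.10 for 10%
}

// setMeter writes (or updates) a meter's configuration on the ledger.
// Config changes become ordinary audited transactions, with no chaincode redeploy.
func setMeter(stub shim.ChaincodeStubInterface, cfg MeterConfig) error {
	key, err := stub.CreateCompositeKey("meter", []string{cfg.OperationID, cfg.MeterID})
	if err != nil {
		return err
	}
	data, err := json.Marshal(cfg)
	if err != nil {
		return err
	}
	return stub.PutState(key, data)
}

// getMeter is what the data-processing transaction calls: it checks that
// the meter exists under the operation and returns its allocation.
func getMeter(stub shim.ChaincodeStubInterface, operationID, meterID string) (*MeterConfig, error) {
	key, err := stub.CreateCompositeKey("meter", []string{operationID, meterID})
	if err != nil {
		return nil, err
	}
	data, err := stub.GetState(key)
	if err != nil {
		return nil, err
	}
	if data == nil {
		return nil, fmt.Errorf("meter %s not found for operation %s", meterID, operationID)
	}
	var cfg MeterConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		return nil, err
	}
	return &cfg, nil
}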
For example, there are 2 channels as below.
CHANNEL_A:
chaincode name -> chaincode_a
ledger data -> amount=150
CHANNEL_B:
chaincode name -> chaincode_b
ledger data -> amount=0
I want to withdraw 100 from CHANNEL_A's ledger data and deposit 100 to CHANNEL_B's ledger data.
According to the Fabric documentation, if the called chaincode is on a different channel, only the response is returned to the calling chaincode; any PutState calls from the called chaincode will not have any effect on the ledger.
So, if I call chaincode_b and it calls chaincode_a, I can update the amount via chaincode_b, but I can't update it via chaincode_a.
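For reference, the cross-channel call from chaincode_b looks roughly like this with the Go shim (the function name getAmount is just an illustration):

import (
	"github.com/hyperledger/fabric/core/chaincode/shim"
	pb "github.com/hyperledger/fabric/protos/peer"
)

// Inside chaincode_b: query chaincode_a over on CHANNEL_A.
func queryChannelA(stub shim.ChaincodeStubInterface) pb.Response {
	args := [][]byte{[]byte("getAmount")}
	resp := stub.InvokeChaincode("chaincode_a", args, "CHANNEL_A")
	if resp.Status != shim.OK {
		return shim.Error("cross-channel query failed: " + resp.Message)
	}
	// resp.Payload is usable here, but any PutState performed by
	// chaincode_a is silently dropped because the call crossed channels.
	return shim.Success(resp.Payload)
}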
So, I have to invoke 2 transactions, one per channel, on the application side, and I have to consider error handling and so on to keep the data consistent.
Is there a best practice to handle such a transaction on the application side?
To update something in channel A and also in channel B, you need to do 2 transactions and make them commit or not commit in an atomic manner.
A way to do this is to implement something equivalent to a 2-phase commit in the application layer.
However, this isn't trivial to do, as you may always get MVCC conflicts that get in the way.
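A minimal sketch of what that application-layer 2-phase commit could look like, assuming hypothetical submitToChannel / submitWithRetry wrappers around the SDK, plus prepare/commit/abort functions that you would have to implement in each chaincode (e.g. holding the withdrawn amount under an escrow key until commit):

// submitToChannel and submitWithRetry are placeholders for calls into
// the Fabric SDK; they are assumptions for this sketch, not real APIs.
func submitToChannel(channel, fn, txID string, amount int) error { /* SDK call */ return nil }
func submitWithRetry(channel, fn, txID string) error             { /* SDK call, retried on MVCC conflicts */ return nil }

func transfer(txID string, amount int) error {
	// Phase 1: ask both channels to reserve the transfer under txID.
	if err := submitToChannel("CHANNEL_A", "prepare_withdraw", txID, amount); err != nil {
		return err
	}
	if err := submitToChannel("CHANNEL_B", "prepare_deposit", txID, amount); err != nil {
		// B failed to prepare: release the reservation on A (compensate).
		submitToChannel("CHANNEL_A", "abort", txID, 0)
		return err
	}
	// Phase 2: both sides prepared, so finalize each channel. A commit
	// can still hit an MVCC conflict, hence the retry wrapper; the
	// prepared escrow state guarantees it stays committable.
	if err := submitWithRetry("CHANNEL_A", "commit_withdraw", txID); err != nil {
		return err
	}
	return submitWithRetry("CHANNEL_B", "commit_deposit", txID)
}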
I have a scenario where I have to apply multiple updates to the ledger at the same time.
In the simple case, two updates have to be executed together for the use case to be valid; if either of them fails, the other one should be reverted.
err = stub.PutState(key, tradeJSONasBytes)
I am using Hyperledger Fabric 1.1 and a Golang smart contract.
If you want to save multiple updates, you can call PutState() multiple times within one transaction, and those writes are applied atomically: if the chaincode returns an error, none of them take effect. There is no explicit revert, though. Even a failed transaction is still recorded in the block (marked invalid), as that is part of the immutability guarantee; its writes are simply never applied to the world state.
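To make the atomicity point concrete, here is a sketch (the keys and variable names are illustrative). Both writes belong to the same transaction's write set, so they are applied together at commit, or not at all:

import (
	"github.com/hyperledger/fabric/core/chaincode/shim"
	pb "github.com/hyperledger/fabric/protos/peer"
)

// If either step fails and we return shim.Error, the whole transaction
// is marked invalid and NEITHER write reaches the state database, so
// there is no partial update to revert.
func updateBothTrades(stub shim.ChaincodeStubInterface, trade1JSON, trade2JSON []byte) pb.Response {
	if err := stub.PutState("TRADE1", trade1JSON); err != nil {
		return shim.Error(err.Error())
	}
	if err := stub.PutState("TRADE2", trade2JSON); err != nil {
		return shim.Error(err.Error())
	}
	return shim.Success(nil)
}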
I believe the answer is no but I'd like confirmation.
With Fabric, endorsers simulate the transaction against the latest state and prepare the proposal, adding the read and write sets of keys.
At the commit phase, the peer receives a block from the ordering service, and a transaction's write set is only applied if its read set has not been updated in the meantime (a versioning check).
So, within the same block, the same key cannot be updated by two different transactions.
If that is the case, aggregating values and maintaining a balance on-chain might be problematic for high-frequency use cases. Such operations should be left to an off-chain application layer.
"So, within the same block, the same key cannot be updated by two different transactions."
The above is correct. Hyperledger Fabric uses an MVCC-like model in order to prevent collisions (or "double spend"). You'll want to wait for the previous state change transaction to commit before attempting to update the state again.
I have a Node.js app that performs the following:
get data from Redis
perform a calculation on the data
write the new result back to Redis
This process may take place several times per second. The issue I now face is that I wish to run multiple instances of this process, and I am obviously seeing out-of-date data being written, because each node updates after another has already fetched the last value.
How would I make the above process atomic?
I cannot simply wrap the whole operation in a Redis transaction, as I need to get the data and process it before I can write the update.
Can anyone advise?
Apologies for the lack of clarity in the question.
After further reading, I found that I can indeed use transactions. The part I was struggling to understand was that I need to separate the read from the update: WATCH the key, GET it outside the transaction, and wrap only the update in the transaction. The transaction then fails at EXEC if another update has taken place in the meantime.
So the workflow is:
WATCH key
GET key
MULTI
SET key
EXEC
Hopefully this is useful for anyone else looking to do an atomic get-and-update.
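In code, the same check-and-set loop looks like this. The question is about Node.js, but here is a sketch in Go with the go-redis client (compute stands in for whatever calculation you perform): on a conflict the library returns redis.TxFailedErr and we retry with a fresh read.

import (
	"context"
	"strconv"

	"github.com/redis/go-redis/v9"
)

// compute stands in for the calculation performed on the value.
func compute(current int) int { return current + 1 }

// atomicUpdate implements WATCH / GET / MULTI / SET / EXEC: if another
// client writes the key between our GET and EXEC, EXEC fails with
// redis.TxFailedErr and we simply retry with the fresh value.
func atomicUpdate(ctx context.Context, rdb *redis.Client, key string) error {
	for {
		err := rdb.Watch(ctx, func(tx *redis.Tx) error {
			current, err := tx.Get(ctx, key).Int()
			if err != nil && err != redis.Nil {
				return err
			}
			next := compute(current)
			// MULTI ... EXEC: queued commands run only if key is unchanged.
			_, err = tx.TxPipelined(ctx, func(pipe redis.Pipeliner) error {
				pipe.Set(ctx, key, strconv.Itoa(next), 0)
				return nil
			})
			return err
		}, key)
		if err != redis.TxFailedErr {
			return err // nil on success, or a genuine error
		}
		// The key changed under us; loop and retry.
	}
}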
Redis supports atomic transactions http://redis.io/topics/transactions