For example, suppose there are two channels as below.
CHANNEL_A:
chaincode name -> chaincode_a
ledger data -> amount=150
CHANNEL_B:
chaincode name -> chaincode_b
ledger data -> amount=0
I want to withdraw 100 from CHANNEL_A's ledger data and deposit 100 to CHANNEL_B's ledger data.
According to the Fabric documentation, if the called chaincode is on a different channel, only the response is returned to the calling chaincode; any PutState calls from the called chaincode will not have any effect on the ledger.
So, if I call chaincode_b and it calls chaincode_a, I can update the amount via chaincode_b, but I can't update it via chaincode_a.
So, I have to invoke two transactions, one per channel, on the application side, and I have to handle errors and so on to keep the data consistent.
Is there a best practice to handle such a transaction on the application side?
To update something in channel A and also in channel B, you need two transactions and have to make them commit, or not commit, atomically.
A way to do this is to implement something equivalent to a 2-phase commit in the application layer.
However, this isn't trivial to do, as MVCC conflicts can always get in the way.
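For illustration, here is a minimal sketch in Go of what such an application-layer, two-phase-commit-like flow could look like. The submitTx helper and the PrepareWithdraw/PrepareDeposit/Confirm/Cancel chaincode functions are hypothetical; they stand in for "hold the funds, then either finalize or release them" and would have to be implemented in your chaincode and SDK layer.
// Sketch only: a two-phase-commit-like flow driven from the application layer.
// submitTx is a hypothetical helper wrapping whichever Fabric SDK you use; it is
// assumed to submit the invocation and return an error if the transaction does not commit.
package main

import (
	"fmt"
	"log"
)

func submitTx(channel, chaincode, fn string, args ...string) error {
	// ... SDK-specific submission and commit-event handling goes here ...
	return nil
}

func transfer(amount string) error {
	// Phase 1: "prepare" on both channels. The chaincode functions here are
	// hypothetical; they would place the amount in a pending state rather than applying it.
	if err := submitTx("CHANNEL_A", "chaincode_a", "PrepareWithdraw", amount); err != nil {
		return fmt.Errorf("prepare withdraw failed: %w", err)
	}
	if err := submitTx("CHANNEL_B", "chaincode_b", "PrepareDeposit", amount); err != nil {
		// Compensate: release the hold on channel A.
		if cerr := submitTx("CHANNEL_A", "chaincode_a", "Cancel", amount); cerr != nil {
			log.Printf("manual intervention needed: cancel on CHANNEL_A failed: %v", cerr)
		}
		return fmt.Errorf("prepare deposit failed: %w", err)
	}

	// Phase 2: confirm on both channels. A confirm that fails here (e.g. on an MVCC
	// conflict) has to be retried or escalated, which is why this is not trivial to get right.
	if err := submitTx("CHANNEL_A", "chaincode_a", "Confirm", amount); err != nil {
		return fmt.Errorf("confirm on CHANNEL_A failed: %w", err)
	}
	if err := submitTx("CHANNEL_B", "chaincode_b", "Confirm", amount); err != nil {
		return fmt.Errorf("confirm on CHANNEL_B failed: %w", err)
	}
	return nil
}

func main() {
	if err := transfer("100"); err != nil {
		log.Fatal(err)
	}
}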
Related
I am currently participating in the hyperledger fabric project.
My understanding of Hyperledger Fabric concepts is still limited. I understand that transactions are stored in blocks, and that a transaction records the chaincode that was executed.
For example, suppose I executed a "send" chaincode: A sends 500 to B, starting from A = 1000 and B = 1000, leaving A = 500 and B = 1500. Suppose this transaction has TxId = AAA.
In this situation I want to see, for TxId AAA, the history showing that "A sent 500 to B". I tried decoding the mychannel.block and mychannel.tx files in the channel-artifacts directory (created when running the current network) into JSON.
However, they contained nothing related to this. Is there any way I can see the contents of TxId = AAA?
I decoded the .tx and .block files, but I didn't get what I wanted.
If you want to see the history of transactions, you can use the ctx.stub.getHistoryForKey(id) function, where the id parameter is a record key. This is the Node.js chaincode API; I expect it is similarly named for Java and Go. I think the information that you require should be held in the contract, as the history only returns the different versions of a record over time. If you want to see that A transacted with B, you would need the contract code to show that funds came from A and landed with B during the transfer. Depending on the implementation, this might require a cross-contract call to a different contract (one containing clients A and B) so that 500 could be taken from Account A's fund and added to Account B's fund. In this scenario (if we are talking about a sale), the AssetTransfer contract could show the change of ownership, whereas the client contract would show two updates: one where A's fund decreases by 500 and another where B's fund increases by 500.
In the scenario above, there are now three updates that have history: an asset sale (which you don't mention, but which I am using as an example) with a change-of-ownership history; a client A, whose fund record will have decreased; and a client B, who will have a corresponding increase in funds. Therefore, it is not a block history that you require, but a history of the Client records for A and B. Even if you only had a single contract, e.g. Client, and you exchanged funds, you would still have two updates, one for A and the other for B. It is the records within the contract code that change. The block is the manifestation of the entire transaction, i.e. all rules have been satisfied by the different peers and whatever consensus policy is in place.
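For completeness, here is a minimal Go chaincode sketch of reading a key's history, the Go counterpart of the ctx.stub.getHistoryForKey call mentioned above. The import paths assume the fabric-chaincode-go shim, and the surrounding chaincode wiring (Init/Invoke) is omitted.
// Sketch: a chaincode helper that reads the full recorded history of one key.
package chaincode

import (
	"encoding/json"

	"github.com/hyperledger/fabric-chaincode-go/shim"
	pb "github.com/hyperledger/fabric-protos-go/peer"
)

// GetKeyHistory returns every recorded modification of the given key,
// including the TxId of the transaction that made it.
func GetKeyHistory(stub shim.ChaincodeStubInterface, key string) pb.Response {
	iter, err := stub.GetHistoryForKey(key) // e.g. key = "A"
	if err != nil {
		return shim.Error(err.Error())
	}
	defer iter.Close()

	type modification struct {
		TxID    string `json:"txId"`
		Value   string `json:"value"`
		Deleted bool   `json:"isDelete"`
	}
	var history []modification
	for iter.HasNext() {
		m, err := iter.Next()
		if err != nil {
			return shim.Error(err.Error())
		}
		// Each entry is one past version of the key plus the transaction that wrote it.
		history = append(history, modification{TxID: m.TxId, Value: string(m.Value), Deleted: m.IsDelete})
	}
	out, err := json.Marshal(history)
	if err != nil {
		return shim.Error(err.Error())
	}
	return shim.Success(out)
}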
In the documentation for the transaction flow of hyperledger fabric it is mentioned that
"The ordering service does not need to inspect the entire content of a transaction in order to perform its operation, it simply receives transactions from all channels in the network, orders them chronologically by channel, and creates blocks of transactions per channel."
I have a couple of questions here
What does "chronological ordering" mean?. Does it mean that the transactions for a channel are ordered depending on the time they are received at the Ordering service node (Leader) ?
If two client applications are submitting an update transaction for the same key on the ledger almost at the same time [let us call them tx1 (updating key x to value p) and tx2 (updating key x to value q)], all the endorsing peers will simulate the update transaction proposal and return the write set in the transaction proposal response. When the clients send these endorsed transactions to the ordering service nodes, in which order will these update transactions be ordered in a block?
The order of transactions in the block can be tx1, tx2 or tx2, tx1; assuming the update transaction has only a write set and no read set, the transactions are valid in either order. What will be the final value of the key on the ledger [p or q]?
I am trying to understand whether the final value of x is deterministic, and what factors would influence it.
What does "chronological ordering" mean?. Does it mean that the transactions for a channel are ordered depending on the time they are received at the Ordering service node (Leader)?
In general, the orderer makes no guarantees about the order in which messages will be delivered just that messages will be delivered in the same order to all peer nodes.
In practice, the following generally holds true for the current orderer implementations:
Solo - messages should be delivered in the order in which they were received
Kafka - messages should be delivered in the order in which they were received by each orderer node and generally even in the order they are received across multiple ordering nodes.
This holds true for the latest fabric version as well.
If two client applications are submitting an update transaction for the same key on the ledger almost at the same time [let us call them tx1 (updating key x to value p) and tx2 (updating key x to value q)], all the endorsing peers will simulate the update transaction proposal and return the write set in the transaction proposal response. When the clients send these endorsed transactions to the ordering service nodes, in which order will these update transactions be ordered in a block?
When you submit a transaction, the peer generates a read set and a write set. This read/write set is then used when the transaction is committed to the ledger. It contains the names of the keys to be read/written and the version of each key when it was read. If, between set creation and committing, a different transaction was committed and changed the version of a key, the original transaction will be rejected at commit time because the version when read is no longer the current version.
To address this, you will have to create data and transaction structures that avoid concurrently editing the same key; otherwise you might get an MVCC_READ_CONFLICT error.
I have a scenario where I have to make multiple updates to the ledger at the same time.
In the simple case, two transactions have to be executed at the same time for the use case to be valid; if either of them fails, the other should be reverted.
err = stub.PutState(key, tradeJSONasBytes)
I am using Hyperledger Fabric 1.1 and a Golang smart contract.
If you want to save multiple updates you can call PutState() multiple times within the same transaction, but there is nothing like reverting a transaction: even if a transaction fails, it is still stored in a block, as part of the immutability guarantee.
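To illustrate the first point, here is a minimal Go sketch (using the Fabric 1.1 import paths from the question; the function name, keys, and values are illustrative): all PutState() calls made in one chaincode invocation belong to the same transaction's write set, so at commit time they take effect together or not at all.
// Sketch: a chaincode fragment where two writes succeed or fail as one transaction.
package trade

import (
	"github.com/hyperledger/fabric/core/chaincode/shim"
	pb "github.com/hyperledger/fabric/protos/peer"
)

func transferWithinChannel(stub shim.ChaincodeStubInterface, fromKey, toKey string, fromJSON, toJSON []byte) pb.Response {
	if err := stub.PutState(fromKey, fromJSON); err != nil {
		return shim.Error("failed to update " + fromKey + ": " + err.Error())
	}
	if err := stub.PutState(toKey, toJSON); err != nil {
		// Returning an error fails the proposal, so neither write
		// (including the successful one above) reaches the ledger.
		return shim.Error("failed to update " + toKey + ": " + err.Error())
	}
	return shim.Success(nil)
}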
I'm wondering how it is possible to execute concurrent transactions in Hyperledger Fabric using Hyperledger Composer.
When I try to submit two transactions at the same time against the same resource, I get this error:
Error trying invoke business network. Error: Peer has rejected transaction 'transaction-number' with code MVCC_READ_CONFLICT
Does anyone know if there is a workaround or a design pattern to avoid this?
Though I may not be providing the best solution, I hope to share some ideas and possible workarounds for this question.
First, let's briefly explain why you are getting this error. The underlying DB of Hyperledger Fabric employs an MVCC-like (Multi-Version Concurrency Control) model. An example of this would be two clients trying to update an asset at version 0 to a certain value. One would succeed (updating the value and incrementing the version number in the state DB to 1), while the other would fail with this error (MVCC_READ_CONFLICT) due to the version mismatch.
One possible solution discussed here (https://medium.com/wearetheledger/hyperledger-fabric-concurrency-really-eccd901e4040) would be to implement a FIFO queue of your own between the business logic and the Fabric SDK. Retry logic could also be added in this case; a rough sketch follows below.
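Not Composer-specific, but in Go such a queue plus retry might look roughly like this; submitTx is a hypothetical wrapper around whatever call actually submits the transaction and waits for commit.
// Sketch: all submissions go through one worker goroutine, so transactions touching
// the same keys are never in flight at the same time, and MVCC conflicts are retried.
package main

import (
	"log"
	"strings"
	"time"
)

type txRequest struct {
	fn     string
	args   []string
	result chan error
}

// submitTx is assumed to submit the transaction and wait for commit, returning
// the peer's validation error (e.g. one containing "MVCC_READ_CONFLICT") on failure.
func submitTx(fn string, args ...string) error {
	// ... SDK-specific submission goes here ...
	return nil
}

// worker drains the queue one request at a time (FIFO), retrying version conflicts.
func worker(queue <-chan txRequest) {
	for req := range queue {
		var err error
		for attempt := 0; attempt < 3; attempt++ {
			err = submitTx(req.fn, req.args...)
			if err == nil || !strings.Contains(err.Error(), "MVCC_READ_CONFLICT") {
				break // success, or an error that retrying will not fix
			}
			time.Sleep(time.Second) // back off before retrying the conflicting update
		}
		req.result <- err
	}
}

func main() {
	queue := make(chan txRequest, 100)
	go worker(queue)

	// Concurrent updates to the same resource are now serialized by the queue.
	res := make(chan error, 1)
	queue <- txRequest{fn: "updateAsset", args: []string{"asset1", "newValue"}, result: res}
	if err := <-res; err != nil {
		log.Fatal(err)
	}
}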
Another way would be using the delta concept. Suppose there is an asset A with value 10 (perhaps representing an account balance). If this asset is updated frequently (say through the sequence of values 12 -> 19 -> 16) by multiple concurrent transactions, the above-mentioned error is easily triggered. Instead, we store the updates as deltas (+2 -> +7 -> -3), and the final aggregated value in the ledger is the same. But keep in mind this trick MAY NOT suit every case; in this example you may also need to closely monitor the running total to avoid paying out money when the account is empty. So it depends heavily on the data type and use case.
For more information, you can take a look at this: https://github.com/hyperledger/fabric-samples/tree/release-1.1/high-throughput
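Setting Composer aside again, here is a minimal Go chaincode sketch of that delta idea, loosely following the linked high-throughput sample; the function names and key layout are illustrative.
// Sketch: each update writes its own composite key, so concurrent transactions
// never touch the same key; the balance is recovered by summing the deltas.
package deltas

import (
	"strconv"

	"github.com/hyperledger/fabric-chaincode-go/shim"
	pb "github.com/hyperledger/fabric-protos-go/peer"
)

// AddDelta records one signed change (e.g. "+7" or "-3") under a key unique to this transaction.
func AddDelta(stub shim.ChaincodeStubInterface, account, delta string) pb.Response {
	key, err := stub.CreateCompositeKey("delta", []string{account, stub.GetTxID()})
	if err != nil {
		return shim.Error(err.Error())
	}
	if err := stub.PutState(key, []byte(delta)); err != nil {
		return shim.Error(err.Error())
	}
	return shim.Success(nil)
}

// GetBalance aggregates all deltas recorded for the account.
func GetBalance(stub shim.ChaincodeStubInterface, account string) pb.Response {
	iter, err := stub.GetStateByPartialCompositeKey("delta", []string{account})
	if err != nil {
		return shim.Error(err.Error())
	}
	defer iter.Close()

	total := 0
	for iter.HasNext() {
		kv, err := iter.Next()
		if err != nil {
			return shim.Error(err.Error())
		}
		d, err := strconv.Atoi(string(kv.Value)) // each stored value is one signed delta
		if err != nil {
			return shim.Error(err.Error())
		}
		total += d
	}
	return shim.Success([]byte(strconv.Itoa(total)))
}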
I recently ran into this problem and solved it by creating an array of functions that return promises for calls to async functions, then resolving them one at a time.
My transactions add items from arrays of asset2Ids and asset3Ids to an array field on asset1. My transactions are all acting on the same asset, so I was getting an MVCC_READ_CONFLICT error, as the read/write set changes before each transaction is committed. By forcing the transactions to resolve sequentially, this conflict is fixed:
// Create a function array
let funcArray = [];
for (const i of asset2Ids) {
// Add this transaction to array of promises to be resolved
funcArray.push(()=>transactionFunctionThatAddsAsset2IdToAsset1(i).toPromise());
}
for (const j of asset3Ids) {
// Add this transaction to array of promises to be resolved
funcArray.push(()=>transactionFunctionThatAddsAsset3IdToAsset1(j).toPromise());
}
// Resolve all transaction promises against asset in a synchronous way
funcArray.reduce((p,fn) => p.then(fn), Promise.resolve());
I believe the answer is no but I'd like confirmation.
With Fabric, endorsers simulate the transaction against the latest state and prepare the proposal response, adding the read and write sets of keys.
At the commit phase, the peer receives a block from the ordering service, and the write-set updates are only applied if the keys in the read set have not been updated in the meantime (version check).
So within the same block, the same key cannot be updated by two different transactions.
If that is the case, aggregating values and maintaining a balance on-chain might be problematic for high-frequency use cases. Such operations should be left to an off-chain application layer.
So within the same block, the same key cannot be updated by two different transactions.
The above is correct. Hyperledger Fabric uses an MVCC-like model in order to prevent collisions (or "double spend"). You'll want to wait for the previous state change transaction to commit before attempting to update the state again.