After enrolling, installing and instantiating the chaincode fabric/example/chaincode/go/chaincode_example02, I run the following steps.
peer chaincode instantiate --orderer orderer0:7050 --tls true --path example02 --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/orderer/localMspConfig/cacerts/ordererOrg0.pem --chainID mychannel --name example02cc --version 1.0 --ctor '{"Args":["init","A","1000","B","2000"]}'
peer chaincode query --chainID mychannel --name example02cc --ctor '{"Args":["query","A"]}'
peer chaincode query --chainID mychannel --name example02cc --ctor '{"Args":["query","B"]}'
So far, I confirm that A equals 1000 and B equals 2000. Afterwards, the result varies depending on the timing with which I run the following invoke.
peer chaincode invoke --orderer orderer0:7050 --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/orderer/localMspConfig/cacerts/ordererOrg0.pem --chainID mychannel --name example02cc --ctor '{"Args":["invoke","A","B","1"]}'
Specifically, with a 10-second pause between runs, A ends up at 998 and B at 2002 after running the invoke twice, and A at 990 and B at 2010 after running it ten times. Without any pause, however, A ends up at 999 and B at 2001 whether I run the invoke twice or ten times.
I have tested several times with different arguments, and I have tested other chaincodes as well. It seems that the chaincode only accepts the first invoke request and discards subsequent ones. So, my questions are:
Is this a mechanism to prevent double-spending, or just a weakness?
How can this problem, which limits the transaction rate, be solved?
I think chaincode should support concurrent invocations. Can chaincode actually support concurrent invocations?
Can a single chaincode invoke multiple requests in a single block period?
The chaincode invocation needs to be endorsed and then committed to the ledger state before the next invoke.
When you call peer chaincode invoke ..., Fabric responds quickly once endorsement has finished, but the commit still takes some time. If you run the second invoke directly after the first, the second invoke will be endorsed correctly, but against a state where the first commit has not yet happened.
So, for your questions:
You can try invoking the chaincode through the Java SDK or Node SDK. Taking the Java SDK as an example: the client first sends a transactionProposalRequest to the chaincode, which corresponds to the endorsement step in Fabric. After endorsement finishes and your endorsement policy is satisfied, the SDK client sends the transaction to Fabric. This API returns a CompletableFuture<BlockEvent.TransactionEvent>, similar to a Promise in JavaScript; when the transaction is committed, CompletableFuture.thenApply() is triggered. You may check src/test/java/org.hyperledger.fabric.sdkintergration/End2endIT.java for an example.
You may implement a batch in your chaincode, which supports multiple invokes and queries at once and solves your problem to some degree. The batch works as follows: create an m map[string]string and a toDelete []string in your chaincode. On invoke, put the key/value pairs into m. On query, look the key up in m first and fall back to the stub if it is not found. On delete, remove the key from m and append it to toDelete. After all requests are handled, commit all data in m and toDelete at once. I tried this mechanism in my project and it works well.
Blockchain is not designed for high concurrency; it is designed for confidentiality, scalability, and security.
See point 2 above.
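The batching idea in point 2 can be sketched in plain Go, with an ordinary map standing in for the real stub (shim.ChaincodeStubInterface in an actual chaincode); the type and method names here are illustrative, not Fabric API:

```go
package main

import "fmt"

// BatchCache buffers writes and deletes so they can be flushed at once,
// as described above. The "stub" field is just a map standing in for the
// real ledger state in this sketch.
type BatchCache struct {
	m        map[string]string // pending puts
	toDelete []string          // pending deletes
	stub     map[string]string // stands in for the ledger state
}

func NewBatchCache(stub map[string]string) *BatchCache {
	return &BatchCache{m: map[string]string{}, stub: stub}
}

// Put records a key/value pair in the cache instead of writing it immediately.
func (b *BatchCache) Put(key, val string) { b.m[key] = val }

// Get checks the cache first and falls back to the ledger state.
func (b *BatchCache) Get(key string) (string, bool) {
	if v, ok := b.m[key]; ok {
		return v, true
	}
	v, ok := b.stub[key]
	return v, ok
}

// Delete removes the key from the cache and remembers it for deletion.
func (b *BatchCache) Delete(key string) {
	delete(b.m, key)
	b.toDelete = append(b.toDelete, key)
}

// Commit flushes all cached deletes and writes to the ledger at once.
func (b *BatchCache) Commit() {
	for _, k := range b.toDelete {
		delete(b.stub, k)
	}
	for k, v := range b.m {
		b.stub[k] = v
	}
}

func main() {
	ledger := map[string]string{"A": "1000"}
	c := NewBatchCache(ledger)
	c.Put("B", "2000")
	v, _ := c.Get("B") // served from the cache before commit
	fmt.Println(v)
	c.Commit()
	fmt.Println(ledger["B"])
}
```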
From the point of view of Fabric, this is actually normal behavior, although it may seem a bit counterintuitive at first.
What you see follows logically from the documented transaction flow (plus the fact that the orderer performs some batching with batch timeouts). Assume that during simulation (endorsement), variable A is read and then marked for re-setting by the chaincode simulation. The value that was read and the value you want the variable set to both become part of the proposed transaction (along with whether the endorsers accept the transaction, plus the crypto material). The proposal then goes through submission to the orderer, distribution to the channel peers, and the channel peers checking, among other things, whether the original "read value assumptions" of the proposal still hold. (See step 5, Validation, on the referenced peers.) If the value of A has changed "since simulation", the transaction will not be valid.
The impact of timing is the following. If there is enough "leeway" between two invokes, this happens:
Call1 - Proposal created with original A value
Call1 proposal gets endorsements, sent to orderer, gets batched
After some time, peers get the ordered-endorsed Tx from the orderer, check it, find it fine, and the value of A is modified (committed to the Ledger) as requested
Call2 - Proposal created with new A value read etc.
The crucial thing is what happens if you create and submit the second proposal before the effects of the first have been committed to the Ledger (i.e., without "waiting enough"):
Call2 simulation still sees the original A value, endorsements are computed based on that
This is bundled into the "read values" property of the transaction
By the time it reaches the channel peers for committing, the value of A has been already modified by Call1
Thus, this transaction will be invalid and have no effect on the ledger
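The read-set check described above can be illustrated with a toy Go simulation; the type names and version scheme here are simplified stand-ins, not the real Fabric data structures:

```go
package main

import "fmt"

// Versioned pairs a value with a version counter, mimicking how each key
// in the Fabric world state carries a version.
type Versioned struct {
	Value   int
	Version int
}

// Tx records what a proposal read (and at which version) and what it
// wants to write, mimicking a read-write set.
type Tx struct {
	ReadSet  map[string]int // key -> version observed at simulation time
	WriteSet map[string]int // key -> new value
}

// Simulate builds a proposal against the current ledger state by applying
// the given deltas to the values it reads.
func Simulate(ledger map[string]Versioned, delta map[string]int) Tx {
	tx := Tx{ReadSet: map[string]int{}, WriteSet: map[string]int{}}
	for k, d := range delta {
		cur := ledger[k]
		tx.ReadSet[k] = cur.Version
		tx.WriteSet[k] = cur.Value + d
	}
	return tx
}

// Commit validates the read set and applies the write set only if valid.
func Commit(ledger map[string]Versioned, tx Tx) bool {
	for k, ver := range tx.ReadSet {
		if ledger[k].Version != ver {
			return false // stale read: transaction invalidated
		}
	}
	for k, v := range tx.WriteSet {
		ledger[k] = Versioned{Value: v, Version: ledger[k].Version + 1}
	}
	return true
}

func main() {
	ledger := map[string]Versioned{"A": {1000, 0}, "B": {2000, 0}}
	// Both proposals are simulated against the same state (no pause)...
	tx1 := Simulate(ledger, map[string]int{"A": -1, "B": +1})
	tx2 := Simulate(ledger, map[string]int{"A": -1, "B": +1})
	fmt.Println(Commit(ledger, tx1)) // first transaction commits
	fmt.Println(Commit(ledger, tx2)) // stale read set, invalidated
	fmt.Println(ledger["A"].Value)   // 999, not 998
}
```

This reproduces the observed behavior: without a pause, only the first invoke takes effect.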
Such effects are not exactly new; we got bitten by similar issues with Fabric 0.6. Fortunately, there are a few "clean" ways out of this; I suggest reading up on "tokenization" in the blockchain context. In the context of example02, you can give every "unit" a GUID and essentially track ownership vectors, or go full UTXO, Bitcoin style. Option b) is to write the Tx requests themselves into the Ledger (Tx1: +20, Tx2: -40, Tx3: +65, ...) and base the computation of the "current account state" on the Ledger-stored Tx logs, although this can get a bit messy.
And do note that massive (and painless) concurrency is eminently possible, as long as the "working sets" of Tx requests do not overlap much. For many domains this holds; e.g. in a cryptocurrency, it is not a single currency unit being passed around like a hot potato, but a large set of currency units, each with many transactions levelled out across them.
On your specific questions:
see far above
see the two suggestions above
Depends on what you mean by concurrent invocation, but at the heart of the fabric 1.0 architecture is that a full ordering is set up on all endorsed requests and this ordering is enforced during validation and Ledger update. Massive amounts of peers requesting things concurrently - sure; "reentrant chaincode" - no-no-no.
Chaincodes don't "invoke requests". If you mean whether a single chaincode can have multiple transactions in a single block: sure, e.g. selling 100 different cars using a "sell_cars" chaincode.
Related
I want to write a timed chaincode: after a certain period, the chaincode should not respond to any invoke. In Ethereum I can do this by counting blocks; for example, I can make a smart contract valid only before block 100000. But in Hyperledger Fabric, I don't know how to do it.
I had a similar problem. Even though block information is available via the system chaincode, I was not able to query it from my own chaincode elegantly. For my purposes, however, the following implementation was enough:
Introduce a counter state and initialize it with '0'.
Check this state in all your important chaincode functions. If it is lower than a certain limit, increase it by 1 and proceed with the function logic; otherwise, throw an error/print a message/do nothing.
Based on the expected frequency of your transactions, you can set a more or less appropriate limit.
Maybe this solution will work for you as well.
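A minimal sketch of this counter guard, with a plain map standing in for the world state and an arbitrary limit of 100 (in a real chaincode the counter would be read and written via the stub):

```go
package main

import (
	"errors"
	"fmt"
)

// invokeLimit is an arbitrary example limit, as in the answer above.
const invokeLimit = 100

// guardedInvoke runs fn only while the counter state is below the limit,
// incrementing the counter on each successful call.
func guardedInvoke(state map[string]int, fn func()) error {
	if state["counter"] >= invokeLimit {
		return errors.New("chaincode expired: invoke limit reached")
	}
	state["counter"]++
	fn()
	return nil
}

func main() {
	state := map[string]int{"counter": 0}
	calls := 0
	for i := 0; i < invokeLimit+5; i++ {
		guardedInvoke(state, func() { calls++ })
	}
	fmt.Println(calls) // only the first 100 invokes ran
}
```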
I would like to understand the advantage of the execute-order-validate architecture of Hyperledger Fabric compared to the order-execute architecture in terms of efficiency.
The execute-order-validate approach allows peers to execute the transactions without considering their order. This allows the peers to run the transactions in parallel during the execute phase.
However, based on my understanding, in the validate phase, all transactions (except the first one) that will update the same set of keys in the world state will be invalidated to avoid double-spending.
Given this, will the execute-order-validate architecture potentially produce a lot of invalidated transactions?
Below is a sample smart contract that will illustrate my concern:
reserveTicket(eventId, ticketingAgencyId, ticketCount) {
    // check whether there are enough tickets left
    if (worldState[eventId] < ticketCount)
        throw "there are not enough tickets";
    worldState[eventId] -= ticketCount;
    if (worldState[eventId + ":" + ticketingAgencyId] == null)
        worldState[eventId + ":" + ticketingAgencyId] = 0;
    worldState[eventId + ":" + ticketingAgencyId] += ticketCount;
}
In this smart contract, a ticket reservation system is implemented. For a particular event, there can be many ticketing agencies that can reserve tickets by calling the reserveTicket function.
If there are 10 ticketing agencies (e.g., agency1 to agency10) that will make a reservation on the same event (e.g., event9999) at the same time (i.e., the orderer will make the 10 transactions part of the same block), does this mean that 9 of the transactions will be invalidated in the validate phase since all of them will update the same key in the world state:
worldState["event9999"] -= ticketCount;
Will this make the execute-order-validate less efficient since 9 of the 10 transactions need to be retried?
In an order-execute approach, the 10 transactions will not be executed in parallel, however, as long as there are enough tickets left, all the transactions will be successful.
Is my understanding of execute-order-validate correct? If yes, is there a way to avoid invalidating 9 of the 10 transactions?
Here is a link to a chaincode example (from the official samples repository) that may help you achieve your goal.
HL Fabric Samples - High Throughput
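The core idea of that sample, writing a unique delta key per transaction instead of updating one shared key and aggregating the deltas on read, can be sketched like this (the key naming scheme here is my own illustration, not the sample's):

```go
package main

import (
	"fmt"
	"strings"
)

// reserve records a reservation under a key unique to this transaction,
// so concurrent reservations on the same event have disjoint write sets
// and none of them gets invalidated.
func reserve(state map[string]int, eventID, agencyID, txID string, count int) {
	state[eventID+":delta:"+txID] = -count
	state[eventID+":"+agencyID] += count
}

// remaining aggregates the base ticket supply plus all delta keys for
// the event. In the real pattern this aggregation happens on query (or
// in a periodic pruning transaction).
func remaining(state map[string]int, eventID string, base int) int {
	total := base
	for k, v := range state {
		if strings.HasPrefix(k, eventID+":delta:") {
			total += v
		}
	}
	return total
}

func main() {
	state := map[string]int{}
	// Ten agencies reserve 5 tickets each from a supply of 100; all ten
	// writes land on distinct keys, so all ten can commit in one block.
	for i := 1; i <= 10; i++ {
		reserve(state, "event9999",
			fmt.Sprintf("agency%d", i), fmt.Sprintf("tx%d", i), 5)
	}
	fmt.Println(remaining(state, "event9999", 100)) // 50 tickets left
}
```

The trade-off is that enforcing the "not enough tickets" check now requires either aggregating on read or a separate reconciliation step, since no single transaction sees the up-to-date total.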
I have a question regarding the piece selection strategy. A paper mentions a "Strict Priority" policy, quoted below:
BitTorrent’s first policy for piece selection is that once a single sub-piece has been requested, the remaining sub-pieces from that particular piece are requested before sub-pieces from any other piece. This does a good job of getting complete pieces as quickly as possible.
The strategy above is easy to understand, but it does not describe things from the peer's point of view.
So here is my question:
Is it true that all the blocks for a piece should be requested from the same peer?
If the answer to question 1 is true, will the client request the same block from different peers at the same time in case some peer fails to respond?
Is it true that all the blocks for a piece should be requested from the same peer?
False. The idea is that by requesting different blocks from different peers, the bandwidth of all the peers adds up to complete the piece quickly.
Will the client request the same block from different peers at the same time in case some peer fails to respond?
Normally a specific block is only requested from one peer. Only if that peer hasn't sent the block after a long timeout is it requested from another peer.
An exception to this is the "End Game" mode. When the download is almost complete and all remaining blocks have been requested, blocks may be requested in parallel from more than one peer, to avoid waiting for the slowest peers to finish their blocks and letting the download slow to a crawl.
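The two request modes can be sketched as a toy scheduler (this is an illustration of the policy, not a real BitTorrent client):

```go
package main

import "fmt"

// assignRequests maps each missing block to the peers it should be
// requested from. In normal mode each block goes to exactly one peer;
// in end-game mode every remaining block is requested from all peers
// in parallel, and the first response wins.
func assignRequests(missing []int, peers []string, endGame bool) map[int][]string {
	req := map[int][]string{}
	for i, b := range missing {
		if endGame {
			req[b] = append([]string{}, peers...)
		} else {
			// round-robin assignment: one peer per block
			req[b] = []string{peers[i%len(peers)]}
		}
	}
	return req
}

func main() {
	peers := []string{"peerA", "peerB", "peerC"}
	normal := assignRequests([]int{1, 2, 3, 4}, peers, false)
	fmt.Println(len(normal[1])) // 1: each block from a single peer
	endgame := assignRequests([]int{4}, peers, true)
	fmt.Println(len(endgame[4])) // 3: last block requested from every peer
}
```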
I have 2 transactions, transaction A and transaction B, each with one event:
transaction A -- Event A
transaction B -- Event B
When I call transaction A inside/from transaction B, it makes all the changes defined in transaction A, but there is no sign of Event A in the transaction history.
I know that there will be only one transaction and no separate transaction for transaction A, but is there a way to get Event A? It must have been triggered.
Thanks
Edit: we can now see Event A in transaction B. No answer required. Thanks.
Do you mean in the same business network? (Also, shouldn't the title mention Hyperledger Composer, if you've tagged Composer?) If yes to 'same network': you cannot call one transaction from another in Composer. However, as commented on here -> https://github.com/hyperledger/composer/issues/4375, a transaction in Composer (as modeled in your model file) can call other functions to allow for code modularisation, but it will only be registered as a single transaction request in the transaction registry. That transaction can be, for example, multiple smart contract updates from one or more transaction functions (if a, then update b and c, then update d, add e, and emit events, as a unit of work), updating/adding/deleting from different asset or participant registries as a unit of work.
On events: while you can issue an 'emit' at any point in your transaction logic, the event will not actually be emitted until the transaction is committed. Throwing an error in the transaction makes it roll back: changes made by transactions are atomic, so either the transaction succeeds and all changes are applied, or it fails and no changes are applied (the same applies to event emission).
With a loose endorsement policy, it seems that for the same transaction, even correctly-behaving committers might reach different validation results because of their different world states.
And how does a committer know whose validation result is correct (the majority) or wrong (the minority)?
Hint: at a given moment, due to network delay, correct committers may have different world states.
Thanks for the answer.
I have got it!
Because each correct committer receives blocks/transactions in exactly the same order and applies the same endorsement policy, at the moment it validates a given transaction, every correct committer has exactly the same world state, even though that moment is not the same wall-clock time for each of them!
The key points are:
(1) at the same moment, different committers may have different world states due to network delay, etc.;
(2) but different committers validate the same transaction at different moments.
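This determinism can be illustrated with a toy simulation: two replicas that validate the same ordered transactions against the same starting state reach identical verdicts and identical world states, regardless of when each runs (types and names here are illustrative, not Fabric internals):

```go
package main

import "fmt"

// tx is a simplified transaction: the version of the key it read during
// simulation, and the value it wants to write.
type tx struct {
	key      string
	readVer  int // version observed at simulation time
	newValue int
}

// apply validates and applies one ordered transaction; it returns
// whether the transaction was valid. Invalid transactions leave the
// state untouched.
func apply(state map[string]int, versions map[string]int, t tx) bool {
	if versions[t.key] != t.readVer {
		return false // stale read set: marked invalid
	}
	state[t.key] = t.newValue
	versions[t.key]++
	return true
}

// run processes the same ordered list of transactions on a fresh replica.
func run(txs []tx) (map[string]int, []bool) {
	state := map[string]int{"A": 1000}
	versions := map[string]int{"A": 0}
	var verdicts []bool
	for _, t := range txs {
		verdicts = append(verdicts, apply(state, versions, t))
	}
	return state, verdicts
}

func main() {
	// Two concurrent proposals, both simulated against version 0 of A.
	ordered := []tx{{"A", 0, 999}, {"A", 0, 998}}
	s1, v1 := run(ordered) // committer 1
	s2, v2 := run(ordered) // committer 2, possibly much later
	fmt.Println(v1, s1["A"])
	fmt.Println(v2, s2["A"])
}
```

Both replicas accept the first transaction, reject the second, and end with the same value of A, which is exactly the point: validation is a deterministic function of the ordered input and the starting state.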