I am currently working on a Hyperledger Fabric project. My understanding of Fabric concepts is still limited. I understand that transactions are stored in blocks, and that a transaction records the execution of chaincode.
For example, suppose I executed a send chaincode: starting with A = 1000 and B = 1000, A sends 500 to B, resulting in A = 500 and B = 1500. Suppose this transaction has TxId = AAA.
In this situation, I want to see AAA's history of "A sent 500 to B". I decoded the mychannel.block and mychannel.tx files in the channel-artifact directory (created by running the current network) into JSON files.
However, I found that there was no content related to this. Is there any way I can see the contents of TxId = AAA?
I decoded the .tx and .block files, but I didn't get what I wanted.
If you want to see the history of a record, you can use the ctx.stub.getHistoryForKey(id) function, where the id parameter is a record key. This is the Node.js SDK method; I expect it is similarly named for Java and Go. The information you require should be held in the contract, as the history only returns the different versions of a record over time. If you want to see that A transacted with B, you would need the contract code to show that funds came from A and landed with B during the transfer. Depending on the implementation, this might require a cross-contract call to a different contract (one containing clients A and B) so that 500 could be taken from Account A's fund and added to Account B's fund. In this scenario (if we are talking about a sale), the AssetTransfer contract could show the change of ownership, whereas the client contract would show two updates: one where A's fund decreases by 500 and another where B's fund increases by 500.
In the scenario above, there are now three updates that have history: an asset sale (which you don't mention, but which I am using as an example that will have a change-of-ownership history), a client A whose fund record will have decreased, and a client B with a corresponding increase in funds. Therefore, it's not a block history that you require, but a history of the Client records for A and B. Even if you only had a single contract (e.g. Client) and you exchanged funds, you would still have two updates, one for A and the other for B. It's the records within the contract code that change. The block is the manifestation of the entire transaction, i.e. all rules have been satisfied by the different peers and whatever consensus policy is in place.
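A minimal sketch of the query pattern described above. The names and values here are illustrative, not from the original post; the real ctx.stub.getHistoryForKey(key) returns an async iterator from the fabric-shim library, so a plain array stands in for it to keep this runnable standalone:

```typescript
interface KeyModification {
  txId: string;
  value: string;     // serialized record state as of that transaction
  isDelete: boolean;
}

// Stand-in for the iterator that ctx.stub.getHistoryForKey('A') would return.
const mockHistoryOfA: KeyModification[] = [
  { txId: 'BBB', value: JSON.stringify({ balance: 1000 }), isDelete: false }, // before the send
  { txId: 'AAA', value: JSON.stringify({ balance: 500 }), isDelete: false },  // after A sent 500 to B
];

// Collect the version history of a record key, as a contract query method would.
function collectHistory(mods: KeyModification[]): Array<{ txId: string; balance: number }> {
  const history: Array<{ txId: string; balance: number }> = [];
  for (const mod of mods) {
    if (!mod.isDelete) {
      history.push({ txId: mod.txId, balance: JSON.parse(mod.value).balance });
    }
  }
  return history;
}
```

Note that this only shows the versions of A's record over time; to see "A sent 500 to B" as one fact, the contract itself would need to record the transfer, as the answer explains.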
I'm trying to design a double-entry ledger with DDD and running into some trouble with defining aggregate roots. There are three domain models:
LedgerLine: individual line items that have data such as amount, timestamp they are created at, etc.
LedgerEntry: entries into the ledger. Each entry contains multiple LedgerLines where the debit and credit lines must balance.
LedgerAccount: accounts in the ledger. There are two types of accounts: (1) internal accounts (e.g. cash) (2) external accounts (e.g. linked bank accounts). External accounts can be added/removed.
After reading some articles online (e.g. this one: https://lorenzo-dee.blogspot.com/2013/06/domain-driven-design-accounting-domain.html?m=0), it seems like LedgerEntry should be one aggregate root, holding references to LedgerLines, and LedgerAccount should be the other aggregate root. LedgerLines would hold the corresponding LedgerAccount's ID.
While this makes a lot of sense, I'm having trouble figuring out how to update the balance of ledger accounts when ledger lines are added. The above article suggests that the balance should be calculated on the fly, which means it wouldn't need to be updated when LedgerEntrys are added. However, I'm using Amazon QLDB for the ledger, and their solutions engineer specifically recommended that the balance should be computed and stored on the LedgerAccount since QLDB is not optimized for such kind of "scanning through lots of documents" operation.
Now the dilemma ensues:
If I update the balance field synchronously when adding LedgerEntrys, then I would be updating two aggregates in one operation, which violates the consistency boundary.
If I update the balance field asynchronously after receiving the event emitted by the "Add LedgerEntry" operation, then I could be reading a stale balance on the account if I add another LedgerEntry that spends the balance on the account, which could lead to overdrafts.
If I subsume the LedgerAccount model into the LedgerEntry aggregate, then I lose the ability to add/remove individual LedgerAccounts since I can't query them directly.
If I get rid of the balance field and compute it on the fly, then there could be performance problems given (1) QLDB limitation (2) the fact that the number of ledger lines is unbounded.
So what's the proper design here? Any help is greatly appreciated!
You could use Saga Pattern to ensure the whole process completes or fails.
Here's a primer: https://medium.com/@lfgcampos/saga-pattern-8394e29bbb85
I'd add a 'reserved funds' collection owned by the Ledger Account.
A Ledger Account will have an 'Actual' balance and an 'Available' balance.
The 'Available' balance is the 'Actual' balance less the total value of the 'reserved funds'.
Using a Saga to manage the flow:
Try to reserve funds on the Account aggregate. The Ledger Account will check its available balance (actual minus total of reserved funds) and, if sufficient, add another reservation to its collection. If the reservation succeeds, the Account aggregate will return a unique reservation id. If the reservation fails, then the entry cannot be posted.
Try to complete the double entry bookkeeping. If it fails, send a 'release reservation' command to the Account aggregate quoting the reservation unique id, which will remove the reservation and we're back to where we started.
After double entry bookkeeping is complete, send a command to Account to 'complete' reservation with reservation unique id. The Account aggregate will then remove the reservation and adjust its actual balance.
In this way, you can manage a distributed transaction without the possibility of an account going overdrawn.
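The reservation steps above can be sketched as follows. This is an illustrative model under assumed names (LedgerAccount, reserve, release, complete are mine, not from the post), not a real saga framework:

```typescript
class LedgerAccount {
  // Reservations held against this account: reservation id -> reserved amount.
  private reservations = new Map<string, number>();
  private nextId = 0;

  constructor(public actualBalance: number) {}

  // Available balance = actual balance less the total of reserved funds.
  availableBalance(): number {
    let reserved = 0;
    this.reservations.forEach((amt) => { reserved += amt; });
    return this.actualBalance - reserved;
  }

  // Step 1: try to reserve funds; returns a reservation id, or null if insufficient.
  reserve(amount: number): string | null {
    if (amount > this.availableBalance()) return null;
    const id = `res-${this.nextId++}`;
    this.reservations.set(id, amount);
    return id;
  }

  // Compensation: the bookkeeping step failed, so release the reservation.
  release(id: string): void {
    this.reservations.delete(id);
  }

  // Step 3: bookkeeping succeeded; remove the reservation and adjust the actual balance.
  complete(id: string): void {
    const amount = this.reservations.get(id);
    if (amount === undefined) throw new Error(`unknown reservation ${id}`);
    this.reservations.delete(id);
    this.actualBalance -= amount;
  }
}
```

Because a second reservation is checked against the available balance (not the actual balance), a concurrent entry cannot overdraw the account while the saga is in flight.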
An aggregate root should serve as a transaction boundary. A multi-legged transaction spans multiple accounts, hence an account cannot be the aggregate root.
So the ledger itself is an aggregate root. An accounting transaction should correspond to a database transaction.
Note that "the ledger itself" doesn't mean a singleton. There can be one ledger per org branch per time period, and there usually is in non-computer event-sourcing systems.
Update.
A ledger account balance is merely a view into the ledger, and as a view it has a state as of some known event. When deciding whether to accept an operation, you should make sure that the balance view has processed the latest state of the ledger. If it has not, the newer events should be processed first, and then the account operation should be tried again.
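The "catch up, then decide" rule above can be sketched like this. The names (BalanceView, tryWithdraw) and the sequence-number scheme are illustrative assumptions, not part of the answer:

```typescript
interface LedgerEvent { seq: number; account: string; delta: number; }

// A balance view: derived state, valid as of the last event it has processed.
class BalanceView {
  private balances = new Map<string, number>();
  lastSeq = 0;

  apply(e: LedgerEvent): void {
    this.balances.set(e.account, (this.balances.get(e.account) ?? 0) + e.delta);
    this.lastSeq = e.seq;
  }

  balanceOf(account: string): number {
    return this.balances.get(account) ?? 0;
  }
}

// Accept a withdrawal only once the view has processed every ledger event.
// Events are appended with seq = 1, 2, 3, ... so ledger[i] holds seq i + 1.
function tryWithdraw(ledger: LedgerEvent[], view: BalanceView, account: string, amount: number): boolean {
  const latestSeq = ledger.length === 0 ? 0 : ledger[ledger.length - 1].seq;
  while (view.lastSeq < latestSeq) {
    view.apply(ledger[view.lastSeq]); // process newer events first
  }
  if (view.balanceOf(account) < amount) return false; // stale-free check, no overdraft
  ledger.push({ seq: latestSeq + 1, account, delta: -amount });
  view.apply(ledger[ledger.length - 1]);
  return true;
}
```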
I have an operation with X meters, where the number of meters may vary.
For each meter, I have to set a percentage of allocation.
So let's say, in my Operation 1, I have 3 meters, m1, m2, m3, I will assign 10% for m1, 50% for m2, and 40% for m3.
So, in this case, when I receive data from m1, I will want to check that operation 1 and meter 1 exists, that meter 1 belongs to operation 1, and get the repartition for my meter.
All those settings are present in an external DB (Postgres), and I can get them easily in Golang. The thing is, I have heard that chaincode must be deterministic, and that it is good practice not to have any external dependencies. I understand that if the result of your chaincode depends on an external DB, you will not be able to audit it, so the blockchain loses much of its value.
Should I hardcode the settings in an array or in a config file? Then each time the configuration changes, I would have to publish my chaincode again. I am not happy about having two configs to keep in sync (the DB and a config file); that could quickly lead to mistakes.
What is the recommended way of managing external DB connection in a chaincode ?
You could place the "meter information" into the blockchain data store and query it from there.
For instance, the application may be used to:
maintain the state of all meters, with whatever information they require. This data is written to the Fabric state store, where it may be queried.
perform an additional transaction that contains the logic required to query meter information and act accordingly.
In this case the chaincode will be able to update meter information and act on information stored, via queries and subsequent actions.
Everything is then on-chain, and therefore accessible, updatable and auditable.
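A sketch of the on-chain configuration idea above. A Map stands in for the Fabric world state, and the function and key names (putOperationConfig, allocationFor, operation:&lt;id&gt;) are assumptions of mine, not a real chaincode API:

```typescript
// Operation configuration kept on-chain: meter id -> allocation percentage.
interface OperationConfig { meters: Record<string, number>; }

// Stand-in for the Fabric state store (PutState / GetState).
const worldState = new Map<string, string>();

// "Transaction" that maintains meter configuration on-chain.
function putOperationConfig(opId: string, config: OperationConfig): void {
  const total = Object.values(config.meters).reduce((a, b) => a + b, 0);
  if (total !== 100) throw new Error('allocations must sum to 100%');
  worldState.set(`operation:${opId}`, JSON.stringify(config));
}

// "Transaction" that checks a meter belongs to an operation and returns its share.
// Deterministic: it reads only from the state store, never from an external DB.
function allocationFor(opId: string, meterId: string): number {
  const raw = worldState.get(`operation:${opId}`);
  if (raw === undefined) throw new Error(`unknown operation ${opId}`);
  const config: OperationConfig = JSON.parse(raw);
  const pct = config.meters[meterId];
  if (pct === undefined) throw new Error(`meter ${meterId} not in operation ${opId}`);
  return pct;
}
```

Config changes then become ordinary transactions rather than chaincode redeployments, and every change is audited on the ledger.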
In Fabric, if two transactions in a block conflict (e.g., two users try to buy the same asset at almost the same time), will both of them fail to be committed to the ledger, or only the last one?
Only one is going to be successfully committed (the first one as ordered by the ordering service), as the version of the read set of the second transaction is not going to match the expected one.
It is very well explained in: https://medium.com/@arora.aditya520/chaincode-writing-best-practices-hyperledger-fabric-43d2adffbeec
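The read-set version check described above can be sketched like this, simplified (real Fabric versions are block/transaction-number pairs; an integer version stands in here):

```typescript
// Each transaction carries the versions of the keys it read at endorsement
// time, plus the writes it wants to apply.
interface Tx {
  readSet: Array<{ key: string; version: number }>;
  writeSet: Array<{ key: string; value: string }>;
}

// Validate and commit a block of transactions in order. A transaction is
// valid only if every key it read still has the version it saw; a valid
// transaction's writes bump those versions, invalidating later conflicting txs.
function commitBlock(state: Map<string, { value: string; version: number }>, txs: Tx[]): boolean[] {
  return txs.map((tx) => {
    const valid = tx.readSet.every((r) => (state.get(r.key)?.version ?? 0) === r.version);
    if (valid) {
      for (const w of tx.writeSet) {
        const cur = state.get(w.key);
        state.set(w.key, { value: w.value, version: (cur?.version ?? 0) + 1 });
      }
    }
    return valid;
  });
}
```

With two buyers of the same asset, the first transaction in ordering-service order commits and bumps the key's version, so the second fails its version check and is marked invalid.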
For example, there are 2 channels as below.
CHANNEL_A:
chaincode name -> chaincode_a
ledger data -> amount=150
CHANNEL_B:
chaincode name -> chaincode_b
ledger data -> amount=0
I want to withdraw 100 from CHANNEL_A's ledger data and deposit 100 to CHANNEL_B's ledger data.
According to the Fabric documentation, if the called chaincode is on a different channel, only the response is returned to the calling chaincode; any PutState calls from the called chaincode will not have any effect on the ledger.
So, if I call chaincode_b and it calls chaincode_a, I can update the amount on chaincode_b's channel, but I can't update it on chaincode_a's channel.
So I have to invoke two transactions, one per channel, on the application side, and I have to handle errors and so on to keep the data consistent.
Is there a best practice to handle such a transaction on the application side?
To update something in channel A and also in channel B - you need to do 2 transactions and make them commit or not commit in an atomic manner.
A way to do this is to implement something equivalent to a 2-phase commit in the application layer.
However this isn't trivial to do, as you may always get MVCC conflicts that will get in the way.
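The application-layer flow above can be sketched as a withdraw-then-deposit with compensation. The Channel class and its failure flag are illustrative stand-ins for submitting a transaction to each Fabric channel, not a real SDK API, and real code would also have to retry on MVCC conflicts:

```typescript
// Stand-in for a ledger value on one channel; deposit failures are simulated
// via a flag so the compensation path can be exercised.
class Channel {
  constructor(public amount: number, private failDeposits = false) {}

  withdraw(n: number): void {
    if (this.amount < n) throw new Error('insufficient funds');
    this.amount -= n;
  }

  deposit(n: number): void {
    if (this.failDeposits) throw new Error('simulated commit failure');
    this.amount += n;
  }
}

// Returns true if both legs committed; compensates channel A otherwise.
function transfer(a: Channel, b: Channel, n: number): boolean {
  a.withdraw(n);        // leg 1: transaction on CHANNEL_A
  try {
    b.deposit(n);       // leg 2: transaction on CHANNEL_B
    return true;
  } catch (e) {
    a.deposit(n);       // compensation: undo leg 1 so neither channel changes
    return false;
  }
}
```

A true two-phase commit would additionally hold the withdrawn funds in a "prepared" state on channel A until channel B confirms, along the lines of the reservation pattern discussed earlier in this thread.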
In a Hyperledger Fabric network, the ledgers held by all peers (endorsing peers and committing peers) are replicated ledgers.
This seems to imply that there is a unique 'real/original/genuine' ledger per channel.
I'd like to ask these:
Is there a real ledger? If so, where is it (or where is it defined?) and who owns it?
Those replicated ledgers are updated by each peer after VSCC and MVCC validation. Then who updates the 'real' ledger?
Does 'World State' refer only to the 'real' ledger?
I'd really appreciate it if you could answer these questions.
Please tell me if any of them need clarification. Thank you!
I don't understand what exactly you mean by a 'real' ledger. There is one and only one ledger per channel, replicated across all participants of that channel. By participants, I mean all peers (both endorsing and committing) of an organization's MSP belonging to a given channel.
The state DB (a.k.a. world state) refers to the database that maintains the current value of a given key. Let me give you an example. You know that a blockchain is a linked list on steroids (with added security, immutability, etc.). Say you have a key A with value 100 in Block 1, and you transact in the following manner.
Block 2 -- A := A-10
Block 15 -- A := A-12
.
.
.
Block 10,000 -- A := A-3
So, after Block 10,000, if you need the current value of key A, you would have to recalculate it starting from Block 1. To manage this efficiently, the Fabric folks implemented a state database that updates the value of a key after every transaction. Its sole responsibility is to improve efficiency. If your state gets corrupted, Fabric will automatically rebuild it from Block 0.
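The rebuild described above can be sketched in a few lines. The Block shape here is a simplification of mine (each block just lists key writes), not the real Fabric block structure:

```typescript
// Simplified block: its number and the key writes it contains.
interface Block { num: number; writes: Array<{ key: string; value: number }>; }

// Rebuild the world state by replaying every block from the start of the
// chain; the last write to each key wins, giving its current value.
function rebuildState(chain: Block[]): Map<string, number> {
  const state = new Map<string, number>();
  for (const block of chain) {
    for (const w of block.writes) state.set(w.key, w.value);
  }
  return state;
}
```

This is why the state DB is disposable: it holds no information that is not already on the chain, only a cache of each key's latest value.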