Hyperledger Fabric 1.4 CouchDB Txn logs

I have created a Hyperledger Fabric network with 2 orgs and 1 solo orderer. On the peer I configured CouchDB as the state database and launched the peer (after creating and joining the channel). I can see CouchDB creating these databases:
mychannel_
mychannel_mycc
mychannel_lscc
I installed and instantiated the chaincode_example02 Go chaincode on mychannel. I can successfully run query and invoke commands from the peer, and CouchDB gets updated on invoke (the "revpos" field in mychannel_mycc changes), but I can't see transaction logs anywhere like I saw in many tutorials. Where can I see the history of transactions with their IDs? The mychannel_mycc database only has the current data for keys A and B, not the transaction details such as how much I transferred from A to B.

CouchDB only stores the current state, not the transactions.
Transactions (and events...) are ordered into blocks and appended to the chain, which is saved in files under /var/hyperledger/production on the peers that have joined the channel.
You can see the logs in the peer container...
docker logs -f --tail 100 mypeercontainer
...or use the client SDK to inspect your channel's chain elements: https://hyperledger.github.io/fabric-sdk-node/release-1.4/Channel.html.
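If you want the per-key history, including the transaction IDs, from within the chaincode itself, GetHistoryForKey reads it from the peer's block store. A minimal Go sketch, assuming the Fabric 1.4 Go shim and meant to be added alongside chaincode_example02's other functions; the name queryHistory is just illustrative:

import (
	"bytes"
	"fmt"

	"github.com/hyperledger/fabric/core/chaincode/shim"
	pb "github.com/hyperledger/fabric/protos/peer"
)

// queryHistory returns every committed modification of a key, together with
// the transaction ID that produced it.
func queryHistory(stub shim.ChaincodeStubInterface, key string) pb.Response {
	iter, err := stub.GetHistoryForKey(key)
	if err != nil {
		return shim.Error(err.Error())
	}
	defer iter.Close()

	var buf bytes.Buffer
	for iter.HasNext() {
		mod, err := iter.Next()
		if err != nil {
			return shim.Error(err.Error())
		}
		// Each entry carries the tx ID, the value that was written and a delete marker.
		buf.WriteString(fmt.Sprintf("txID=%s value=%s isDelete=%t\n",
			mod.TxId, string(mod.Value), mod.IsDelete))
	}
	return shim.Success(buf.Bytes())
}

Note that this only returns data if the peer's history database is enabled in core.yaml (it is by default).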

Related

Expected behavior of peer in case of all orderers down

In Hyperledger Fabric, what is the expected behavior of a peer when all orderer nodes are down?
Should the peer also go down, stop serving requests from clients, or continue to serve query requests?
In our test, after the orderers are stopped, the peer keeps logging "failed to create connection to orderer". When we query a key by calling chaincode, the value is still returned.
Can you help clarify whether this is the expected behavior? Thank you.
I am working on a distributed Hyperledger Fabric network. I would recommend the Raft ordering service: https://hyperledger-fabric.readthedocs.io/en/release-2.2/orderer/ordering_service.html#ordering-service-implementations.
I have solved this in such a way that, in my case, three orderers run independently in different environments.
If I crash all of these orderers, the peer containers of the other participants in the network continue to run. As you said, they cannot process any transactions.
If one of my orderers crashes, that is not so bad thanks to Raft consensus; the containers keep running. If a second one fails, no transactions can be made. In that case I let the peers keep running and check whether the orderers become available again.
The behaviour you described comes down to the fact that the peer reads the value from its own ledger; it doesn't need an orderer for that. https://hyperledger.github.io/fabric-chaincode-node/master/api/fabric-shim.ChaincodeStub.html#getState
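For example, a read-only query like the sketch below is answered entirely from the peer's local state database; the orderer is never contacted because nothing is written. A minimal Go sketch assuming the Fabric Go shim (the linked page shows the equivalent Node getState); the name queryValue is illustrative:

import (
	"fmt"

	"github.com/hyperledger/fabric/core/chaincode/shim"
	pb "github.com/hyperledger/fabric/protos/peer"
)

// queryValue reads a key from the peer's local state database (LevelDB or CouchDB).
// No orderer is involved: a pure read produces no block.
func queryValue(stub shim.ChaincodeStubInterface, key string) pb.Response {
	value, err := stub.GetState(key)
	if err != nil {
		return shim.Error(fmt.Sprintf("failed to read %s: %s", key, err))
	}
	if value == nil {
		return shim.Error(fmt.Sprintf("key %s does not exist", key))
	}
	return shim.Success(value)
}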
Have a read of this: https://github.com/hyperledger/fabric/blob/master/docs/source/peers/peers.md. It is the best documentation I've found for how the system works, and there is more in the docs directory of the repo for orderers, etc.
My understanding is: the peers are there to sign (endorse) transaction proposals. The ordering service exists to order, package and distribute transactions to peers, which then validate them on commit. The peers can also distribute their knowledge of validated transactions via the gossip channel.
If all orderers go down, the transactions will not be validated/packaged/distributed so the blockchain will be out of action until the orderers are restored.
When we query a key by calling chaincode the value is returned.
Peers will still remain up and ready to sign/endorse transaction proposals, and querying the blockchain held at the peers will still work. Chaincodes are hosted by the peers. Orderers do not host chaincode.
Also see here https://github.com/hyperledger/fabric/blob/master/docs/source/orderer/ordering_service.md#ordering-service-implementations for the various modes the orderer can be run in: Raft mode, Kafka ordering, Solo ordering.
I think the currently observed behaviour is expected and, in my view, it is just fine.
Let's check the purpose of the orderer:
Order the transactions.
Cut the block and distribute it amongst the orgs when the criteria are met (min txn count/size or timeout).
This also means the orderer is needed when your Fabric network is processing transactions that intend to write data into the ledger, isn't it? A query is not a transaction that writes into the ledger, so it doesn't need the orderer. For a query, the peer picks up the data from its local database.
So I think what could be done is to send out an alert to production support when your application detects that the orderer nodes are down (with some health check, as sketched below). While your application displays a diminished-capacity/limited-operations message and you work on bringing the orderer network back up, the system can still serve search queries.
From my view, that's just fine. But it's finally up to you. Cheers!
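One way to implement the health check mentioned above is a periodic reachability probe against the orderer endpoint. A rough Go sketch; the address orderer.example.com:7050 and the 30-second interval are placeholders, and in Fabric 1.4 you could instead poll the orderer's operations /healthz endpoint for a more meaningful check:

package main

import (
	"log"
	"net"
	"time"
)

// checkOrderer does a plain TCP dial against the orderer endpoint. It only
// proves the port is reachable, not that ordering is healthy, but it is a
// cheap signal for raising an alert.
func checkOrderer(address string) bool {
	conn, err := net.DialTimeout("tcp", address, 3*time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	for range time.Tick(30 * time.Second) {
		if !checkOrderer("orderer.example.com:7050") {
			// Hook your alerting here; the network can still serve queries,
			// but writes will fail until the orderer is back.
			log.Println("orderer unreachable: running in query-only mode")
		}
	}
}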

Hyperledger Fabric mychannel.block in peer gets deleted once the container is down

I have a Hyperledger Fabric setup with 2 organisations which works well. I keep separate storage for the block state on the file system. When I bring down all the organisations' containers, all the state inside the containers is deleted, but I still have the state stored at my file path. Next, when I bring Docker back up using the existing file storage, all the peers and the orderer load correctly from the state where I stopped. The problem is that I am unable to re-initiate channel transactions and unable to join the same channel from the peer. Where does mychannel.block get stored? When I try to join the channel I get this error:
2019-11-27 03:49:01.631 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialised
Error: genesis block file not found open mychannel.block: no such file or directory
You need to know which volumes you are using to persist that file.
You should persist:
/var/hyperledger/production in your orderers and peers.
/opt/couchdb/data in your CouchDB containers.
Wherever you store your MSP, TLS files and other configuration files (genesis block, etc.). Only you know about your configuration.
/var/lib/postgresql/data in your CA's PostgreSQL container.
Whatever other file/folder you want to persist.
Anyway, I don't know if I have understood you correctly, but if you persist all of these, you don't need to join the channel again; the peers remain joined after restarting the network.

How can a peer node retrieve old data after it crashes in Hyperledger Fabric?

I am pretty new to Hyperledger Fabric. I read a bit about gossip protocol but did not get a clear idea. Please help me with these questions.
How could a node recover old data from a channel after a crash?
What if the channel had only a single peer node and this node crashed?
A peer can get the old data for a channel from other peers when it recovers. Alternatively, if the peer points to a volume where the ledger information and all its credentials are stored, it can read them from there when it recovers; that's why persistent storage is recommended.
That's bad practice, as you are not offering high availability: without peers you stop serving requests and your ledger is not available. But, as you can read in the documentation, you can recover the blocks from the orderer.
The ledger, blocks, etc. are stored in the following location inside the peer container:
/var/hyperledger/production
All you have to do is create a backup volume and map it.
Sample docker-compose snippet below.
Create the volume (at the top level of the compose file):
volumes:
  backup_peer1:
Add the volume to the peer's service definition:
    volumes:
      - backup_peer1:/var/hyperledger/production

State sync for Hyperledger Fabric using LevelDB?

Reading the docs, I understand that there are two options to maintain the world state: LevelDB and CouchDB.
In the case of LevelDB, "LevelDB is the default state database embedded in the peer node", so I am assuming it is local to a peer, i.e. every peer has its own copy of LevelDB running.
In the case of CouchDB there is a separate container to run it, and all the peers can use it to execute transactions (all the peers see the same data).
In the first case, for LevelDB, how is the version of the data synced across all the peers?
Is this a plug-and-play feature? Can I, for example, use an etcd cluster instead of CouchDB?
It's actually the same for both LevelDB and CouchDB. The database is basically used as a persistent last-known-value store. It can be rebuilt at any time from the ledger (the blockchain), which is what is actually replicated across all peers via the ordering service.
EDIT To clarify: peers receive blocks from the orderer. Each peer then validates the transactions in each block and, for each valid transaction, updates the state in the database. In the case of LevelDB it's an embedded call, and in the case of CouchDB the peer communicates with CouchDB via the CouchDB HTTP API. Of course it also writes the blocks to the ledger files on disk (see the conceptual sketch after this answer).
It's not plug and play ... only LevelDB and CouchDB are currently supported. The codebase itself supports adding additional databases, but someone would need to implement the support in the actual codebase.
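Coming back to the rebuild mentioned above: conceptually it is just a replay of the chain, walking the blocks in order and applying the write set of every valid transaction to a key-value store. The Go sketch below is purely illustrative; the Block, Tx and Write types are stand-ins, not Fabric's real data structures:

// Stand-in types for illustration only.
type Write struct {
	Key     string
	Value   []byte
	Deleted bool
}

type Tx struct {
	Valid  bool
	Writes []Write
}

type Block struct {
	Txs []Tx
}

// rebuildState replays the chain into a fresh key-value map, which is in
// essence what a peer does when its state database is dropped and it restarts.
func rebuildState(chain []Block) map[string][]byte {
	state := make(map[string][]byte)
	for _, block := range chain {
		for _, tx := range block.Txs {
			if !tx.Valid { // invalid transactions stay on the chain but are skipped
				continue
			}
			for _, w := range tx.Writes {
				if w.Deleted {
					delete(state, w.Key)
				} else {
					state[w.Key] = w.Value
				}
			}
		}
	}
	return state
}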

How is your data safe in Hyperledger Fabric when one can make changes to CouchDB data directly?

I am wondering how your data is safe when an admin can change the latest state in CouchDB directly, using Fauxton or cURL.
According to my understanding, Hyperledger Fabric provides immutable data and is well suited for fraud prevention (a blockchain feature).
The issue is: I can easily change the data in CouchDB, and when I query from my chaincode it shows the changed data. But when I query the ledger using GetHistoryForKey() it does not show the change I made in CouchDB. Is there any way I can prevent such fraud? Because the user will always see the latest state, i.e. the data from CouchDB, not from the ledger.
Any answer would be appreciated.
Thanks
You should not expose the CouchDB port beyond the peer's network, to avoid the data getting tampered with. Only the peer's administrator should be able to access CouchDB, and the administrator has no incentive to tamper with their own data. Let me explain further...
The Hyperledger Fabric state database is similar to the bitcoin unspent transaction database, in that if a peer administrator tampers with their own peer’s database, the peer will not be able to convince other peers that transactions coming from it are valid. In both cases, the database can be viewed as a cache of current blockchain state. And in both cases, if the database becomes corrupt or tampered, it can be rebuilt on the peer from the blockchain. In the case of bitcoin, this is done with the -reindex flag. In the case of Fabric, this is done by dropping the state database and restarting the peer.
In Fabric, peers from different orgs as specified in the endorsement policy must return the same chaincode execution results for transactions to be validated. If ledger state data had been altered or corrupted (in CouchDB or the LevelDB file system) on a peer, then the chaincode execution results would be inconsistent across endorsing peers, the 'bad' peer/org will be found out, and the application client should throw out the results from the bad peer/org before submitting the transaction for ordering/commit. If a client application tries to submit a transaction with inconsistent endorsement results regardless, this will be detected on all the peers at validation time and the transaction will be invalidated.
You must secure your couchdb from modification by processes other than the peer, just as you must generally protect your filesystem or memory.
If you make your filesystem world writable, other users could overwrite ledger contents. Similarly, if you do not put access control on couchdb writes, then you lose the immutability properties.
In Hyperledger Fabric v1.2, each peer has its own CouchDB. So even if you change the data directly in the CouchDB of one peer, the endorsement would fail. If the endorsement fails, your data will not be written to the world state (the current state).
That's the beauty of a decentralized, distributed system. Even if you or someone else changes the state of your database/ledger, it will not match the state of the others in the network, nor will it match the transaction block hash, rendering any transactions invalid at endorsement, unless you can restore the actual agreed-upon state of the ledger from the network participants or the orderer.
To take advantage of the immutability of the ledger, you must query the ledger. Querying the state database does not utilize the power of the blockchain, and hence it must be protected in a fashion similar to how access to any other database is protected.
You need to understand 2 things here:
Although the data in a peer's CouchDB may be tampered with, you should set up your endorsement policy in such a way that transactions must be endorsed by all the peers.
You should not expose your CouchDB where it can be altered; I recommend looking at Cilium.
As explained by others, endorsements/consensus are the key. Even though the ledger state of an endorsing peer can be modified externally, in that event all transactions endorsed by that peer would get discarded, because the other endorsing peers would be sending correct results (assuming their world state was not tampered with either), and consensus plays the key role here in selecting the correct transaction.
In the worst case, all transactions would fail.
Hyperledger Fabric's world state (ledger state) can be regenerated from the blockchain (the transaction log) at any time. And in the event of peer failure, this regeneration happens automatically. With some careful configuration, one can build a self-healing network where a peer at fault automatically rises from the ashes (pun intended).
The key point to consider here is the gossip data dissemination protocol, which can be considered the mystical healer. All peers within the network continuously connect to and exchange data with other peers in the network.
To quote the documentation -
Peers affected by delays, network partitions, or other causes resulting in missed blocks will eventually be synced up to the current ledger state by contacting peers in possession of these missing blocks.
and ...
Any peer with data that is out of sync with the rest of the channel identifies the missing blocks and syncs itself by copying the correct data.
That's why it is always recommended to have more endorsing peers and organizations within the network. The bigger the network, the harder it is to beat with malicious intent.
I hope I could be of some help. Ref link: https://hyperledger-fabric.readthedocs.io/en/release-1.4/gossip.html
Even though this is plausible, the endorsement policy is the general means by which you protect yourself (the system) from the effects of such an act.
"a state reconciliation process synchronizes world state across peers on each channel. Each peer continually pulls blocks from other peers on the channel, in order to repair its own state if discrepancies are identified."
