How to Enable Full Logging in Balance Transfer? - hyperledger-fabric

I am trying to enable peer/orderer logging in the Balance Transfer sample of Hyperledger Fabric, so I can understand the step-by-step consensus and transaction process of my Hyperledger Fabric project.
---------
If we start the blockchain network of chaincode-docker-devmode and go to Terminal 1, where this command is executed:
docker-compose -f docker-compose-simple.yaml up
we can easily see all the peer/orderer/cli logs in the terminal, for example:
peer | 2018-07-26 08:58:07.426 UTC [chaincode] Execute -> DEBU 73d Entry
peer | 2018-07-26 08:58:07.426 UTC [chaincode] Execute -> DEBU 73e chaincode canonical name: escc:1.1.0
orderer | 2018-07-26 08:58:07.434 UTC [policies] Evaluate -> DEBU 3c4 Signature set satisfies policy /Channel/Orderer/SampleOrg/Writers
orderer | 2018-07-26 08:58:07.434 UTC [policies] Evaluate -> DEBU 3c5 == Done Evaluating *cauthdsl.policy Policy /Channel/Orderer/SampleOrg/Writers
orderer | 2018-07-26 08:58:07.434 UTC [policies] Evaluate -> DEBU 3c6 Signature set satisfies policy /Channel/Orderer/Writers
peer | 2018-07-26 08:58:07.426 UTC [chaincode] sendExecuteMessage -> DEBU 73f [82a18317]Inside sendExecuteMessage. Message TRANSACTION
peer | 2018-07-26 08:58:07.426 UTC [chaincode] setChaincodeProposal -> DEBU 740 Setting chaincode proposal context...
orderer | 2018-07-26 08:58:07.435 UTC [policies] Evaluate -> DEBU 3c7 == Done Evaluating *policies.implicitMetaPolicy Policy /Channel/Orderer/Writers
orderer | 2018-07-26 08:58:07.435 UTC [policies] Evaluate -> DEBU 3c8 Signature set satisfies policy /Channel/Writers
orderer | 2018-07-26 08:58:07.435 UTC [policies] Evaluate -> DEBU 3c9 == Done Evaluating *policies.implicitMetaPolicy Policy /Channel/Writers
orderer | 2018-07-26 08:58:07.436 UTC [orderer/common/blockcutter] Ordered -> DEBU 3ca Enqueuing message into batch
orderer | 2018-07-26 08:58:07.436 UTC [orderer/common/broadcast] Handle -> DEBU 3cb [channel: myc] Broadcast has successfully enqueued message of type ENDORSER_TRANSACTION from 172.23.0.5:57804
peer | 2018-07-26 08:58:07.426 UTC [chaincode] setChaincodeProposal -> DEBU 741 Proposal different from nil. Creating chaincode proposal context...
In my case, I want to replicate that behavior in the Balance Transfer sample, so that when I run ./runApp.sh it shows all the logs.
How can I do that? What environment variables should I put in Balance Transfer's docker-compose.yaml file?
Thanks!

In balance-transfer, go to the artifacts directory and run one of these commands:
docker-compose -f docker-compose.yaml logs -f for live (follow) logging.[1][2]
docker-compose -f docker-compose.yaml logs for the logs up to this point, with no live logging.
NOTE: When you run the script in balance-transfer, docker-compose starts the containers in detached mode (run containers in the background, print new container names). That is why the logs are not shown as soon as the containers are orchestrated.[3][4]
References :
[1] : Docker - How do I view real time logging of Docker containers? (https://success.docker.com/article/view-realtime-container-logging)
[2] : docker container logs | Docker Documentation (https://docs.docker.com/engine/reference/commandline/container_logs/)
[3] : docker run | Docker Documentation (https://docs.docker.com/engine/reference/commandline/run/)
[4] : docker-compose up | Docker Documentation (https://docs.docker.com/compose/reference/up/)
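The commands above show how to view the logs; to also raise their verbosity to DEBUG (as in chaincode-docker-devmode), set the logging environment variable on each peer/orderer service in artifacts/docker-compose.yaml. A sketch, with assumed service names — match them to the services in the actual compose file. Fabric 1.4+ reads FABRIC_LOGGING_SPEC, while earlier releases use the CORE_LOGGING_LEVEL variable it replaced:

```yaml
services:
  orderer.example.com:
    environment:
      - FABRIC_LOGGING_SPEC=DEBUG   # pre-1.4 images: CORE_LOGGING_LEVEL=DEBUG
  peer0.org1.example.com:
    environment:
      - FABRIC_LOGGING_SPEC=DEBUG
```

After editing, recreate the containers (docker-compose up -d --force-recreate) so the new environment takes effect.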

Related

Chaincode is instantiated but doesn't appear in the list of instantiated codes

I am running Hyperledger Fabric 1.4.0
I have 1 org (Org1), 2 peers (peer0, peer1) and two orderers (ord0, ord1). The peers use couchdb as a storage backend.
I am able to successfully install my chaincode, then instantiate it.
Looking at peer0 logs, the docker image is built and the container started. peer0 also receives and acknowledges the REGISTER request sent by the chaincode binary within the container:
2019-06-24 10:15:57.003 UTC [dockercontroller] createContainer -> DEBU b563 created container {"imageID": "nid1-peer0-mynet-mychain-v1-613158e6e99c2c9e7d567e8b57fe2dfb56444f7fdcbc263dd1f61626a374843d", "containerID": "nid1-peer0-mynet-mychain-v1"}
2019-06-24 10:15:57.160 UTC [dockercontroller] Start -> DEBU b564 Started container nid1-peer0-mynet-mychain-v1
2019-06-24 10:15:57.160 UTC [container] unlockContainer -> DEBU b565 container lock deleted(mychain-v1)
2019-06-24 10:15:57.181 UTC [chaincode] handleMessage -> DEBU b566 [] Fabric side handling ChaincodeMessage of type: REGISTER in state created
2019-06-24 10:15:57.181 UTC [chaincode] HandleRegister -> DEBU b567 Received REGISTER in state created
2019-06-24 10:15:57.182 UTC [chaincode] Register -> DEBU b568 registered handler complete for chaincode mychain:v1
2019-06-24 10:15:57.182 UTC [chaincode] HandleRegister -> DEBU b569 Got REGISTER for chaincodeID = name:"mychain:v1" , sending back REGISTERED
2019-06-24 10:15:57.182 UTC [chaincode] HandleRegister -> DEBU b56a Changed state to established for name:"mychain:v1"
2019-06-24 10:15:57.182 UTC [chaincode] sendReady -> DEBU b56b sending READY for chaincode name:"mychain:v1"
2019-06-24 10:15:57.182 UTC [chaincode] sendReady -> DEBU b56c Changed to state ready for chaincode name:"mychain:v1"
2019-06-24 10:15:57.182 UTC [chaincode] Launch -> DEBU b56d launch complete
2019-06-24 10:15:57.182 UTC [chaincode] Execute -> DEBU b56e Entry
2019-06-24 10:15:57.182 UTC [chaincode] handleMessage -> DEBU b56f [1a98f442] Fabric side handling ChaincodeMessage of type: COMPLETED in state ready
Despite this, the chaincode is not registered in couchdb:
$ peer chaincode list --instantiated -C mychannel
2019-06-24 11:26:29.317 BST [main] InitCmd -> WARN 001 CORE_LOGGING_LEVEL is no longer supported, please use the FABRIC_LOGGING_SPEC environment variable
2019-06-24 11:26:29.332 BST [main] SetOrdererEnv -> WARN 002 CORE_LOGGING_LEVEL is no longer supported, please use the FABRIC_LOGGING_SPEC environment variable
Get instantiated chaincodes on channel mychannel:
peer0 logs immediately after submitting the list command:
2019-06-24 10:26:30.057 UTC [couchdb] ReadDocRange -> DEBU c02e [mychannel_lscc] HTTP/1.1 200 OK
Transfer-Encoding: chunked
Cache-Control: must-revalidate
Content-Type: application/json
Date: Mon, 24 Jun 2019 10:26:30 GMT
Server: CouchDB/2.1.1 (Erlang OTP/18)
X-Couch-Request-Id: 20d0beb9c3
X-Couchdb-Body-Time: 0
2a
{"total_rows":0,"offset":0,"rows":[
]}
0
If I try to invoke a method on the chaincode, I get this error:
Error: endorsement failure during invoke. response: status:500 message:"make sure the chaincode mychain has been successfully instantiated and try again: chaincode mychain not found"
which just confirms that the chaincode has not been registered within the network.
Update
I realised I had missed an important detail: the peer logs repeatedly report errors connecting to the orderer, e.g.:
2019-06-24 11:30:35.931 UTC [ConnProducer] NewConnection -> ERRO 100e6 Failed connecting to ord0.mynet.example.com , error: context deadline exceeded
which might be the reason why the "chaincode instantiated" message doesn't get propagated...
After much debugging, it turned out the issue was pretty simple: the peers could not communicate with the orderers.
In my particular case the addresses of the orderers were wrong in configtx.yaml. Fixing them allowed the chaincode instantiation process to fully succeed.
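Peers learn the orderer endpoints from the Addresses list under the Orderer section of configtx.yaml, which configtxgen bakes into the channel configuration, so a wrong entry there produces exactly the "context deadline exceeded" dial failures shown above. An illustrative fragment (hostnames and port are placeholders for this network):

```yaml
Orderer:
  Addresses:
    - ord0.mynet.example.com:7050
    - ord1.mynet.example.com:7050
```

Note that editing configtx.yaml only affects newly generated artifacts; the genesis block and channel transaction have to be regenerated and the channel recreated for the fix to apply.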

Hyperledger Fabric - Orderer logs show error during broadcast even though the transaction is successful and committed to all peers

Orderer logs show following error(highlighted) during broadcast even though the transaction is successful and committed to all peers:
2018-12-19 13:43:42.724 UTC [policies] Evaluate -> DEBU 997 == Done Evaluating *policies.implicitMetaPolicy Policy /Channel/Writers
2018-12-19 13:43:42.724 UTC [orderer/common/broadcast] Handle -> DEBU 998 [channel: mychannel] Broadcast has successfully enqueued message of type ENDORSER_TRANSACTION from 172.19.0.7:40978
2018-12-19 13:43:42.724 UTC [orderer/common/blockcutter] Ordered -> DEBU 999 Enqueuing message into batch
2018-12-19 13:43:42.724 UTC [orderer/consensus/solo] main -> DEBU 99a Just began 2s batch timer
2018-12-19 13:43:42.730 UTC [grpc] infof -> DEBU 99b transport: loopyWriter.run returning. connection error: desc = "transport is closing"
2018-12-19 13:43:42.730 UTC [orderer/common/broadcast] Handle -> WARN 99c Error reading from 172.19.0.7:40978: rpc error: code = Canceled desc = context canceled
2018-12-19 13:43:42.731 UTC [orderer/common/server] func1 -> DEBU 99d Closing Broadcast stream
2018-12-19 13:43:44.724 UTC [orderer/consensus/solo] main -> DEBU 99e Batch timer expired, creating block
2018-12-19 13:43:44.724 UTC [msp] GetDefaultSigningIdentity -> DEBU 99f Obtaining default signing identity
Fabric Config: All configs and setup are the same as specified in the Build Your First Network tutorial at https://hyperledger-fabric.readthedocs.io/en/release-1.3/build_network.html
Query
What does this error mean? What does the orderer read from the cli (IP of the cli: 172.19.0.7:40978) during broadcast?

Hyperledger fabric channel create failed: Principal deserialization failure

I'm trying to create a channel to test my Fabric environment. I'm not using Docker; instead I'm running the actual executables themselves. However, the creation failed with errors.
Error on the orderer:
2018-09-04 20:36:55.034 CST [cauthdsl] deduplicate -> ERRO 251 Principal deserialization failure (MSP OrdererOrg is unknown) for identity 0a0a4f7264657265724f72671281062d2d2d2d2d424547494e2043455254494649434154452d2d2d2d2d0a4d4949434444434341624b67417749424167495158346c644c424e55705271796451705845475767446a414b42676771686b6a4f50515144416a42704d5173770a435159445651514745774a56557a45544d4245474131554543424d4b5132467361575a76636d3570595445574d4251474131554542784d4e5532467549455a790a5957356a61584e6a627a45554d4249474131554543684d4c5a586868625842735a53356a62323078467a415642674e5642414d54446d4e684c6d5634595731770a62475575593239744d423458445445344d446b774e4445794d6a55784d466f58445449344d446b774d5445794d6a55784d466f775744454c4d416b47413155450a42684d4356564d78457a415242674e5642416754436b4e6862476c6d62334a7561574578466a415542674e564241635444564e6862694247636d467559326c7a0a593238784844416142674e5642414d54453239795a4756795a5849755a586868625842735a53356a623230775754415442676371686b6a4f50514942426767710a686b6a4f50514d4242774e434141514670514e4b7665305a332f5059377369315730537550553347326d6a61366b3744495756727a41766541516d6a4169415a0a6e4b6d6c4c4a742b5164655a4e4342446c6558742b72384b69656c4d72556b6e7159554e6f303077537a414f42674e56485138424166384542414d43423441770a44415944565230544151482f424149774144417242674e5648534d454a444169674341716e446c77684434524f6e6f6b424d72476249496d51724871697934680a6e514279524657617a5233774f54414b42676771686b6a4f5051514441674e4941444246416945416b6c356336596a64515038565352384679694462393553310a325130633032727269593662764a454d53544d4349435445346f6f79627843414a577a5a4f5069507173766d4c53667a316238676a337572692b5a434d664e470a2d2d2d2d2d454e442043455254494649434154452d2d2d2d2d0a
2018-09-04 20:36:55.035 CST [cauthdsl] func1 -> DEBU 252 0xc00000f0e8 gate 1536064615035064391 evaluation starts
2018-09-04 20:36:55.035 CST [cauthdsl] func2 -> DEBU 253 0xc00000f0e8 signed by 0 principal evaluation starts (used [false])
2018-09-04 20:36:55.035 CST [cauthdsl] func2 -> DEBU 254 0xc00000f0e8 principal evaluation fails
2018-09-04 20:36:55.035 CST [cauthdsl] func1 -> DEBU 255 0xc00000f0e8 gate 1536064615035064391 evaluation fails
2018-09-04 20:36:55.035 CST [policies] Evaluate -> DEBU 256 Signature set did not satisfy policy /Channel/Orderer/OrdererOrg/Writers
2018-09-04 20:36:55.035 CST [policies] Evaluate -> DEBU 257 == Done Evaluating *cauthdsl.policy Policy /Channel/Orderer/OrdererOrg/Writers
2018-09-04 20:36:55.035 CST [policies] func1 -> DEBU 258 Evaluation Failed: Only 0 policies were satisfied, but needed 1 of [ OrdererOrg.Writers ]
2018-09-04 20:36:55.035 CST [policies] Evaluate -> DEBU 259 Signature set did not satisfy policy /Channel/Orderer/Writers
2018-09-04 20:36:55.035 CST [policies] Evaluate -> DEBU 25a == Done Evaluating *policies.implicitMetaPolicy Policy /Channel/Orderer/Writers
2018-09-04 20:36:55.035 CST [policies] func1 -> DEBU 25b Evaluation Failed: Only 0 policies were satisfied, but needed 1 of [ Orderer.Writers Consortiums.Writers ]
2018-09-04 20:36:55.035 CST [policies] Evaluate -> DEBU 25c Signature set did not satisfy policy /Channel/Writers
2018-09-04 20:36:55.035 CST [policies] Evaluate -> DEBU 25d == Done Evaluating *policies.implicitMetaPolicy Policy /Channel/Writers
2018-09-04 20:36:55.035 CST [orderer/common/broadcast] Handle -> WARN 25e [channel: roberttestchannel] Rejecting broadcast of config message from 192.168.136.100:54494 because of error: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining: permission denied
2018-09-04 20:36:55.035 CST [orderer/common/server] func1 -> DEBU 25f Closing Broadcast stream
2018-09-04 20:36:55.037 CST [common/deliver] Handle -> WARN 260 Error reading from 192.168.136.100:54492: rpc error: code = Canceled desc = context canceled
2018-09-04 20:36:55.037 CST [orderer/common/server] func1 -> DEBU 261 Closing Deliver stream
2018-09-04 20:36:55.037 CST [grpc] infof -> DEBU 262 transport: loopyWriter.run returning. connection error: desc = "transport is closing"
2018-09-04 20:36:55.037 CST [grpc] infof -> DEBU 263 transport: loopyWriter.run returning. connection error: desc = "transport is closing"
Error on the peer:
2018-09-04 20:36:55.007 CST [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
Error: got unexpected status: FORBIDDEN -- Failed to reach implicit threshold of 1 sub-policies, required 1 remaining: permission denied
I have tried rebooting and deleting all the project files, but it didn't work.
Thank you for your attention!
Here is the situation from your description:
you modified the crypto-config.yaml and configtx.yaml files to create your own Fabric network,
you used cryptogen and configtxgen to generate the channel artifacts and key/cert files,
then you used the fabric tools or the Node SDK to operate the network: enroll the user, create the channel, and join the channel,
and the error was thrown while you were creating the channel.
The head of the log shows that MSP OrdererOrg is unknown. This typically happens when the channel-creation request is signed with the orderer's certs and then sent to the orderer by the client. Signing the channel-creation request with the orderer identity is only fine if your Signature policy is configured to accept that member, for example by setting a Rule like:
Policies:
Readers:
Type: Signature
Rule: "OR('OrgOrderer.admin','OrgOrderer.client')"
Writers:
Type: Signature
Rule: "OR('OrgOrderer.admin')"
If you leave the Policies setting blank, it will follow the default policy.
Logs like Signature set did not satisfy policy /Channel/Orderer/OrdererOrg/Writers do show that your signature is not valid. To see exactly which policies are configured, use configtxgen to inspect the channel.tx or genesis.block:
configtxgen -inspectBlock ./channel-artifacts/genesis.block -configPath ./crypto-config/example.com/ > genesis.json
configtxgen -inspectChannelCreateTx ./channel-artifacts/channel.tx -configPath ./crypto-config/example.com/ > channel.json
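The long hex blob in the orderer log is a serialized identity (a protobuf with the MSP ID in field 1 and the PEM certificate in field 2), so you can also check which MSP actually signed by decoding its first field. A minimal sketch in Python, assuming the MSP ID is under 128 bytes so its length fits in a single byte:

```python
# First bytes of the identity hex from the orderer log above.
blob = bytes.fromhex("0a0a4f7264657265724f7267")

# Protobuf wire format: 0x0a = field 1, length-delimited; next byte = length.
assert blob[0] == 0x0A
length = blob[1]                          # 10 -> the MSP ID is 10 bytes long
msp_id = blob[2:2 + length].decode("ascii")
print(msp_id)                             # OrdererOrg
```

Here the blob names OrdererOrg, confirming the request was signed with the orderer's identity rather than one the channel policy accepts.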

Signature set did not satisfy policy /Channel/Application/Org1/Admins when building balance-transfer from fabric samples

I am trying to build a Hyperledger network like balance-transfer from fabric-samples.
I get this error:
status: 'BAD_REQUEST',
info: 'Error authorizing update: Error validating DeltaSet: Policy for [Groups] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining'
docker logs orderer.example.com output:
2018-06-26 14:41:04.631 UTC [policies] Evaluate -> DEBU 120 Signature set did not satisfy policy /Channel/Application/Gov1MSP/Admins
2018-06-26 14:41:04.631 UTC [policies] Evaluate -> DEBU 121 == Done Evaluating *cauthdsl.policy Policy /Channel/Application/Gov1MSP/
2018-06-26 14:41:04.631 UTC [policies] func1 -> DEBU 122 Evaluation Failed: Only 0 policies were satisfied, but needed 1 of [ Gov1MSP.Admins ]
2018-06-26 14:41:04.631 UTC [policies] Evaluate -> DEBU 123 Signature set did not satisfy policy /Channel/Application/ChannelCreationPolicy
2018-06-26 14:41:04.631 UTC [policies] Evaluate -> DEBU 124 == Done Evaluating *policies.implicitMetaPolicy Policy /Channel/Application/ChannelCreationPolicy
2018-06-26 14:41:04.631 UTC [orderer/common/broadcast] Handle -> WARN 125 [channel: usachannel] Rejecting broadcast of config message from 172.18.0.1:46638 because of error: Error authorizing update: Error validating DeltaSet: Policy for [Groups] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining
2018-06-26 14:41:04.631 UTC [orderer/common/server] func1 -> DEBU 126 Closing Broadcast stream
I generated the artifacts with cryptogen and configtxgen.
I know that this error comes from wrong certificates when trying to create a channel.
However, I can create a channel, join peers, add new peers, and add new channels with the cli. For example, the same step with the cli:
docker exec -e "CORE_PEER_LOCALMSPID=Org1MSP" -e "CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/users/Admin@org1.example.com/msp" peer0.org1.example.com peer channel create -o orderer.example.com:7050 -c usachannel -f /etc/hyperledger/configtx/channel.tx
works properly.
Thank you for your help

Running an invoke or query operation forces the chaincode container to restart

I am not sure if the community is aware of this problem, but I tried to run the balance-transfer application from fabric-samples (https://github.com/hyperledger/fabric-samples). Everything seems to run smoothly. However, when running a query or an invoke operation, the docker container running the chaincode crashes and gets restarted. You can check this by running docker ps -a: the status will show that the container has just started.
I looked up the logs of the peer that was queried and it seems the problem resides somewhere here:
2018-01-17 07:06:33.654 UTC [container] lockContainer -> DEBU 891 waiting for container(dev-peer0.org1.example.com-mycc-v0) lock
2018-01-17 07:06:33.654 UTC [container] lockContainer -> DEBU 892 got container (dev-peer0.org1.example.com-mycc-v0) lock
2018-01-17 07:06:33.655 UTC [dockercontroller] Start -> DEBU 893 Cleanup container dev-peer0.org1.example.com-mycc-v0
2018-01-17 07:06:33.693 UTC [chaincode] processStream -> ERRO 894 Error handling chaincode support stream: rpc error: code = Canceled desc = context canceled
2018-01-17 07:06:33.693 UTC [chaincode] deregisterHandler -> DEBU 895 Deregister handler: mycc:v0
2018-01-17 07:06:34.343 UTC [dockercontroller] stopInternal -> DEBU 896 Stopped container dev-peer0.org1.example.com-mycc-v0
2018-01-17 07:06:34.343 UTC [dockercontroller] stopInternal -> DEBU 897 Kill container dev-peer0.org1.example.com-mycc-v0 (API error (409): {"message":"Cannot kill container: dev-peer0.org1.example.com-mycc-v0: Container d818357f76068ab0a9efbf70be9b9a19fd7f6cc7bbe11eaba95c0a61d208ceac is not running"})
2018-01-17 07:06:34.459 UTC [dockercontroller] stopInternal -> DEBU 898 Removed container dev-peer0.org1.example.com-mycc-v0
2018-01-17 07:06:34.459 UTC [dockercontroller] Start -> DEBU 899 Start container dev-peer0.org1.example.com-mycc-v0
2018-01-17 07:06:34.459 UTC [dockercontroller] createContainer -> DEBU 89a Create container: dev-peer0.org1.example.com-mycc-v0
2018-01-17 07:06:34.724 UTC [dockercontroller] createContainer -> DEBU 89b Created container: dev-peer0.org1.example.com-mycc-v0-f021beca29998638e0bb7caa7af8fda7f1e709518214a3181d259abcb2347093
Any idea what is going on?
There are two types of chaincode:
System chaincode (part of the peer binary)
Application chaincode (developed by smart contract/chaincode developers)
System chaincodes are part of the peer binary itself, whereas application chaincodes are instantiated in Docker containers. There are various system chaincodes like LSCC, CSCC, QSCC, etc.
LSCC is the Lifecycle System Chaincode; it manages the life cycle of application chaincode.
LSCC contains the various chaincode-related logic: installation, instantiation, upgrade, invocation, and querying.
So, the point is:
If the chaincode container is already up and running, the Query/Invoke function gets executed and the result is returned to the caller.
Otherwise, if for any reason the chaincode container was killed and is not running to serve requests over gRPC, the LSCC invoking/querying logic launches the chaincode container again until it is ready to serve the requests made by the caller.
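The branching above can be sketched in a few lines (an illustrative sketch only, with toy stand-ins for the container registry and the gRPC call; this is not the actual peer source):

```python
def execute(chaincode_name, request, running, launch):
    """LSCC-style dispatch: relaunch the chaincode container if needed."""
    if not running(chaincode_name):
        # container was killed: rebuild/start it and wait for REGISTER/READY
        launch(chaincode_name)
    # container is up: forward the Query/Invoke over gRPC and return the result
    return request(chaincode_name)

# toy doubles standing in for the real container registry and gRPC call
state = {"mycc": False}          # chaincode container not running
calls = []
result = execute(
    "mycc",
    request=lambda n: f"response from {n}",
    running=lambda n: state[n],
    launch=lambda n: (state.update({n: True}), calls.append(f"launched {n}")),
)
print(result)   # response from mycc
print(calls)    # ['launched mycc']
```

This is why the restart you observed is expected behavior: the caller only sees a slower first response, not a failure.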
