Feasibility of Dynamically Setting Environment Variables in Hyperledger Fabric

Can we dynamically change, at runtime, the environment variables that we set before bringing up the Hyperledger Fabric components? For instance, is it possible to change FABRIC_LOGGING_SPEC from debug to info while the orderer or peer is running, with or without a Docker image?

Yes, the peer's log levels can be changed dynamically through CLI access to the Docker container.
There are several helpful commands that illustrate the usage:
To get the log level for logger peer:
peer logging getlevel peer
To get the active logging spec for the peer:
peer logging getlogspec
To set the log level for loggers matching logger name prefix gossip to log level INFO:
peer logging setlevel gossip info
To revert the logging spec to the start-up value:
peer logging revertlevels
A more detailed explanation and usage guide can be found in the docs.
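For example, a minimal session might look like this (the peer endpoint is an assumption, and the CLI also needs the usual MSP environment to talk to the peer):

# Point the CLI at the peer whose logging should change (assumed address)
export CORE_PEER_ADDRESS=peer0.org1.example.com:7051
peer logging getlogspec           # show the active logging spec
peer logging setlevel gossip info
peer logging revertlevels         # back to the start-up value

Note that in newer Fabric releases (2.x) the peer logging CLI was removed; the same effect is achieved through the operations service, e.g. curl -X PUT -d '{"spec":"info"}' http://localhost:9443/logspec (the operations listen address is an assumption).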

I tried to achieve the same thing in the past, but found that once you create a Docker container from the service defined in the YAML file, you can't modify its environment parameters. Using 'export' you can change a variable, but only for as long as you are shelled into that container; once you exit, the old default value takes effect again. One solution is to spin up a new container with the desired environment parameters and port all the data from the old container to the new one. Depending on what you change, this may also require updates to the channel's config blocks.
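As a sketch of that approach with docker-compose (the service name is an assumption, and the ledger data survives only if it lives on a mounted volume):

# 1. Edit the service's environment in docker-compose.yaml:
#      - FABRIC_LOGGING_SPEC=info    # was: debug
# 2. Recreate just that container, leaving the rest of the network up:
docker-compose up -d --no-deps --force-recreate peer0.org1.example.com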

Related

How to change the block size in Hyperledger Fabric 2.x?

I want to adjust the size of newly created blocks. I found that there is an AbsoluteMaxBytes parameter in configtx.yaml, but I do not understand how to change it. I have Docker images, including peer and orderer, and I suppose both have default values, including a default for AbsoluteMaxBytes. Should I rebuild the Docker images after I change configtx.yaml, or should I somehow modify AbsoluteMaxBytes inside the running container?
What is the procedure?
This requires a channel config update.
Please refer here:
https://hyperledger-fabric.readthedocs.io/en/release-2.2/config_update.html?highlight=channel%20config%20update
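In outline, changing AbsoluteMaxBytes follows the standard fetch/decode/edit/encode/submit flow from that guide (the channel name, orderer address, and the 10 MB value below are assumptions):

# Fetch the latest config block and pull out the config JSON
peer channel fetch config config_block.pb -o orderer.example.com:7050 -c mychannel
configtxlator proto_decode --input config_block.pb --type common.Block \
  | jq .data.data[0].payload.data.config > config.json

# Edit the batch size
jq '.channel_group.groups.Orderer.values.BatchSize.value.absolute_max_bytes = 10485760' \
  config.json > modified_config.json

# Compute the delta between old and new config
configtxlator proto_encode --input config.json --type common.Config --output config.pb
configtxlator proto_encode --input modified_config.json --type common.Config --output modified_config.pb
configtxlator compute_update --channel_id mychannel --original config.pb \
  --updated modified_config.pb --output update.pb

# Wrap the update in an envelope and submit (Orderer group changes need orderer admin signatures)
configtxlator proto_decode --input update.pb --type common.ConfigUpdate | jq . > update.json
echo '{"payload":{"header":{"channel_header":{"channel_id":"mychannel","type":2}},"data":{"config_update":'$(cat update.json)'}}}' \
  | jq . > update_envelope.json
configtxlator proto_encode --input update_envelope.json --type common.Envelope --output update_envelope.pb
peer channel update -f update_envelope.pb -c mychannel -o orderer.example.com:7050

No image rebuild is needed: configtx.yaml only seeds the genesis configuration, and later changes are made through config transactions like the one above.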

Hyperledger Fabric Orderer - TLS Handshake Bad Certificate Issue

I'm developing an insurance application project that utilizes a Hyperledger Fabric network. I currently have a problem where my orderer nodes do not stay online for more than about 10 seconds before they crash. Inspecting the logs, there are a multitude of error messages suggesting that the TLS certificates do not agree. The error messages do not specify exactly which certificates are at fault, but further up the logs is an error saying it could not match an expected certificate with the certificate it found instead (shown in this screenshot). While this error was also vague, I deduced that it was comparing the orderer's public key certificate with the certificate inside the locally stored genesis block. Upon inspection of the genesis block on the orderer node, it is indeed a completely different certificate. I have also noticed that even after destroying the whole network in Docker and rebuilding it, the certificate inside the genesis block stored on the orderer nodes always remains exactly the same.
In terms of my network layout, I have 2 organizations. One for my orderer(s) and one for my peers. I also have 4 CA Servers (A CA and TLS CA Server for both the orderer organization and peer organization).
I have 3 peer nodes and 3 orderer nodes.
Included below is a pastebin of the logs from orderer1 node before it crashes, and the GitHub repo
orderer1 logs - https://pastebin.com/AYcpAKHn
repo - https://github.com/Ewan99/InsurApp/tree/remake
NOTE: When running the app, first run ./destroy.sh, then ./build.sh
"iaorderer" is the org for my orderers
"iapeer" is the org for my peers
I've tried re-naming certificates in case they were being overwritten by one another on creation.
I tried reducing down from 3 orderers to 1 to see if it made any differences.
Of course, in going from 3 orderers to 1 I changed from Raft to solo, and still encountered the same problems.
As per david_k's comment suggestion:
It looks like you are using persistent volumes in Docker. Do you prune these volumes when you start again from scratch? If you don't, you can pick up data that doesn't match the newly created crypto material.
I had to prune my old Docker volumes, as they were conflicting and supplying the newly built containers with old certificates.
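For reference, the cleanup amounts to something like this (destructive; volume names are project-specific):

docker-compose down --volumes   # stop containers and remove their named volumes
docker volume prune -f          # remove any remaining dangling volumes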
I looked at the docker-compose.yaml file and there is something there that by all accounts should cause a failure. Each peer definition uses the same port, e.g.:
CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.iapeer.com:7051
CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer2.iapeer.com:7051
CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer3.iapeer.com:7051
To my mind, this cannot possibly work while running on a single server, but perhaps I am missing something.
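For what it's worth, on a single host each peer would normally get its own port, with CORE_PEER_LISTENADDRESS and the published port matching; a sketch (one line per peer's service definition):

CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.iapeer.com:7051
CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer2.iapeer.com:8051
CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer3.iapeer.com:9051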

Options for getting logs in kubernetes pods

We have a few developer logs in Kubernetes pods. What is the best method to make these logs available for the developers to see?
Are there any specific tools that we can use?
I have the option of Graylog, but I am not sure whether it can be customized to ingest the developer logs.
The most basic method would be to simply use the kubectl logs command:
Print the logs for a container in a pod or specified resource. If the
pod has only one container, the container name is optional.
Here you can find more details regarding the command and its flags, alongside some useful examples.
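For example (pod and container names are placeholders):

kubectl logs my-pod                     # single-container pod
kubectl logs my-pod -c my-container     # pick a container in a multi-container pod
kubectl logs -f my-pod --tail=100       # stream, starting from the last 100 lines
kubectl logs my-pod --previous          # previous instance, useful after a crash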
Also, you may want to use:
Logging Using Elasticsearch and Kibana
Logging Using Stackdriver
Both should do the trick in your case.
Please let me know if that is what you had in mind and if my answer was helpful.
If you want to see the application logs, then from the development side you just need to print logs to the STDOUT and STDERR streams.
The container runtime (I guess Docker in your case) will redirect those streams
to /var/log/containers.
(So if you ssh into the node, you can run docker logs <container-id> and you will see all the relevant logs.)
Kubernetes provides an easier way to access them via kubectl logs <pod-name>, as @Wytrzymały_Wiktor described.
(Notice that the logs are rotated automatically every 10MB, so the kubectl logs command will only show entries since the last rotation.)
If you want to send the logs to a central logging system (such as ELK, Splunk, or Graylog), you will have to forward the logs from your cluster by running log forwarders inside it.
This can be done, for example, by running a DaemonSet that manages a pod on each node; each pod accesses the logs path (/var/log/containers) through a hostPath volume and forwards the logs to the remote endpoint, as sketched below. See the example here.
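A rough sketch of that pattern (the forwarder image is a placeholder, not a production config):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-forwarder
spec:
  selector:
    matchLabels:
      app: log-forwarder
  template:
    metadata:
      labels:
        app: log-forwarder
    spec:
      containers:
      - name: forwarder
        image: fluent/fluentd:v1.16-1   # assumed image; configure its output to your endpoint
        volumeMounts:
        - name: varlog
          mountPath: /var/log/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log/containers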

CORE_PEER_ADDRESS in chaincode-docker-devmode

I am following the Chaincode for Developers tutorial, and in the section Testing Using dev mode, in Terminal 2, there is the following instantiation of an environment variable:
CORE_PEER_ADDRESS=peer:7052
Could you please tell me what the purpose of this variable is, and why the peer's port is 7052?
I couldn't find a container running on this port in the docker-compose file.
Generally, chaincode runs in a containerized environment, but for dev activities like code/test/deploy there is a sample folder called chaincode-docker-devmode in fabric-samples. It is optimized for development, with a minimal orderer, peer, and CLI. Normally the chaincode address is specified as 7052, 8052, and so on, and the chaincode containers are maintained by the peers (you can check these parameters in the docker-compose-base.yaml files). Here, however, in dev mode (--peer-chaincodedev), the chaincode is run by the user rather than managed by the peer, so these variables are exported by the user.
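Concretely, the 1.4-era tutorial has you start the peer in dev mode and then launch the chaincode binary yourself, pointing it at the peer's chaincode listen port (commands paraphrased from the sample):

# Terminal 1 (inside the sample network): start the peer in dev mode,
# so it does not spawn chaincode containers itself
peer node start --peer-chaincodedev=true

# Terminal 2: build and run the chaincode yourself; CORE_PEER_ADDRESS tells the
# chaincode process where to reach the peer's chaincode endpoint (7052 here)
cd chaincode/sacc && go build
CORE_CHAINCODE_ID_NAME=mycc:0 CORE_PEER_ADDRESS=peer:7052 ./sacc

That is also why no separate container is listed for 7052 in docker-compose: 7052 is the peer's default chaincode listen port, so it belongs to the peer container rather than to a dedicated service.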

Does Orderer have Block(Ledger) data?

I built hyperledger fabric network using Kafka-based Ordering Service.
I thought that the orderer doesn't hold block data.
But when I checked /var/hyperledger/production/orderer/chains/mychannel on the orderer server, I found a blockfile_000000 file.
I checked this file using the "less" command.
There, I found the key-value data that I had registered by invoking chaincode.
What is this file?
Does this mean that the orderer also maintains block data (i.e., the ledger)?
The orderer has the blockchain of all channels it is part of.
However, it doesn't have the world state, and it doesn't inspect the content of the transactions.
It just writes the blocks to disk, to serve peers that pull blocks from it.
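You can observe this serving role from a peer or the CLI by pulling a block directly from the orderer (the orderer address and channel name are assumptions):

peer channel fetch newest newest.block -o orderer.example.com:7050 -c mychannel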
