How do I upgrade from Hyperledger Fabric 1.0 to 1.1? - hyperledger-fabric

I have an existing Hyperledger Fabric 1.0.x installation. How do I upgrade it to the new 1.1 release?

At a high level, upgrading a Fabric network can be performed with the following sequence:
1. Update orderers, peers, and fabric-ca. These updates may be done in parallel.
2. Update client SDKs.
3. Enable the v1.1 channel capability requirements.
4. (Optional) Update the Kafka cluster.
The details of each step in the process are described in the documentation.
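For a Docker-based deployment, step 1 for a single peer might look like the sketch below. The container name, compose file, and image tags are illustrative and depend on how your network was deployed; the ledger data must live on a volume that survives the container being recreated.

```shell
# Hedged sketch: upgrade one peer in place (names and tags are examples).
docker stop peer0.org1.example.com

# Pull the 1.1 images matching your platform
docker pull hyperledger/fabric-peer:x86_64-1.1.0
docker pull hyperledger/fabric-ccenv:x86_64-1.1.0

# Recreate the peer container from the new image, reusing the existing
# ledger volume so state is preserved
docker rm peer0.org1.example.com
docker-compose -f docker-compose.yaml up -d peer0.org1.example.com
```

Repeat for each orderer and peer; only after every node runs 1.1 binaries should you enable the v1.1 channel capability (step 3).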

For my development environment, I deleted the 1.0.5 images, emptied the bin folder, and re-ran the download command to fetch the 1.1 images and binaries.

Related

How to patch GKE Managed Instance Groups (Node Pools) for package security updates?

I have a GKE cluster running multiple nodes across two zones. My goal is to have a job scheduled to run once a week that runs sudo apt-get upgrade to update the system packages. Doing some research, I found that GCP provides a tool called "OS patch management" that does exactly that. I tried to use it, but the Patch Job execution raised an error:
Failure reason: Instance is part of a Managed Instance Group.
I also noticed that during the creation of the GKE node pool there is an option for enabling "Auto upgrade", but according to its description it will only upgrade the Kubernetes version.
According to the Blog Exploring container security: the shared responsibility model in GKE:
For GKE, at a high level, we are responsible for protecting:
The nodes’ operating system, such as Container-Optimized OS (COS) or Ubuntu. GKE promptly makes any patches to these images available. If you have auto-upgrade enabled, these are automatically deployed. This is the base layer of your container—it’s not the same as the operating system running in your containers.
Conversely, you are responsible for protecting:
The nodes that run your workloads. You are responsible for any extra software installed on the nodes, or configuration changes made to the default. You are also responsible for keeping your nodes updated. We provide hardened VM images and configurations by default, manage the containers that are necessary to run GKE, and provide patches for your OS—you’re just responsible for upgrading. If you use node auto-upgrade, it moves the responsibility of upgrading these nodes back to us.
The node auto-upgrade feature DOES patch the OS of your nodes; it does not just upgrade the Kubernetes version.
OS Patch Management only works for GCE VMs, not for GKE nodes.
You should refrain from doing OS-level upgrades in GKE yourself; that could cause unexpected behavior (for example, a package gets upgraded and changes something that breaks the GKE configuration).
You should let GKE auto-upgrade both the OS and Kubernetes. Auto-upgrade will upgrade the OS, as GKE releases are intertwined with the OS releases.
One easy option is to enroll your clusters in release channels; this way they get upgraded as often as you want (depending on the channel) and your OS will be patched regularly.
You can also follow the GKE hardening guide, which provides steps to make sure your GKE clusters are as secure as possible.
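The two recommendations above can be applied with gcloud; cluster name, node pool name, and zone below are placeholders:

```shell
# Hedged sketch: enroll the cluster in a release channel so masters and
# nodes (including the node OS image) are patched automatically.
gcloud container clusters update my-cluster \
    --zone us-central1-a \
    --release-channel regular

# Or, for an existing node pool, make sure auto-upgrade is enabled:
gcloud container node-pools update default-pool \
    --cluster my-cluster \
    --zone us-central1-a \
    --enable-autoupgrade
```

The `regular` channel balances feature freshness against stability; `rapid` and `stable` are the other options.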

Does a 2.x external chaincode container depend on capabilities or binaries?

I was following the upgrade guide at https://hyperledger-fabric.readthedocs.io/en/release-2.2/upgrading_your_components.html. Currently I am running a 1.4.1 network. If I upgrade my binaries to 2.2 and leave the capabilities as they are (1.4), will it be possible to run an external chaincode container (chaincode as a service) with 1.4 capabilities?
No. If you run with the 1.4 capabilities, your chaincodes are launched by the peer and then connect back to the peer, regardless of the binary version. External chaincode (chaincode as a service) is a 2.0 feature, so you should run with the 2.0 capability.
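One way to check which application capability a channel is actually running is to decode the channel config. This is a hedged sketch; the channel name, orderer address, and TLS CA path are placeholders, and it assumes configtxlator and jq are available:

```shell
# Hedged sketch: inspect the application capability of a channel.
peer channel fetch config config_block.pb \
    -o orderer.example.com:7050 -c mychannel \
    --tls --cafile "$ORDERER_CA"

configtxlator proto_decode \
    --input config_block.pb --type common.Block --output config_block.json

# If this prints V1_4_2 (or lower) rather than V2_0, chaincode-as-a-service
# is not yet available on the channel.
jq -r '.data.data[0].payload.data.config.channel_group.groups.Application.values.Capabilities.value.capabilities | keys[]' config_block.json
```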

Can fabric-node-sdk 1.4 be used with a Fabric 1.2 network?

I used fabric-node-sdk 1.4 to build an API server with Fabric 1.4.4 locally, and it works normally. But when using it with a managed blockchain service on AWS, I get this error:
error: [Remote.js]: Error: Failed to connect before the deadline URL:grpcs:.......
So I'm not sure whether the Fabric version on AWS is 1.2 or not.
(Or is there a way to test/ping the grpcs: URL?)
I solved it by fixing the connection profile. I can conclude that fabric-node-sdk 1.4 can be used with Fabric 1.2.
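As for testing a grpcs: endpoint: a grpcs:// URL is just gRPC over TLS, so at minimum you can verify TLS reachability with openssl. A sketch, with host and port as placeholders:

```shell
# Hedged sketch: check that the TLS endpoint behind a grpcs:// URL is
# reachable and serving a certificate (host:port are placeholders).
openssl s_client -connect peer0.org1.example.com:7051 -showcerts </dev/null
```

If the handshake fails here, the problem is connectivity or TLS configuration, not the SDK version.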

hyperledger fabric setup with more than one orderer

I have set up a fabric network with more than one orderer and am analyzing a few scenarios of how it works. I have two questions.
One of the advantages of a multi-orderer network is avoiding a single point of failure: if one orderer fails, another orderer should automatically take over and continue the work. But in the actual scenario, for peer chaincode invoke through the CLI we pass the orderer address and the orderer's CA file to make a transaction. Since we are passing a specific orderer's info, if the orderer we choose is down the transaction will not be done. My question is: this defeats the objective of a multi-orderer network, so why do we need to pass the orderer-related arguments?
I deployed this network with 4 Kafka brokers and 3 ZooKeepers. Even after stopping all three ZooKeepers, the fabric network is giving correct responses. What is the significance of ZooKeeper?
The point of multiple orderers is to eliminate a single point of failure and to allow the ordering service to scale horizontally. The peer CLI is really not intended to be used for invokes in a production application. Typically, an SDK such as Node or Java would be used, and on failure the invoke would be retried against another orderer.
The Kafka brokers use ZooKeeper to manage leader election and generally orchestrate changes in the Kafka cluster. I would expect that with ZooKeeper down, you will eventually experience problems with the cluster: the network may run properly as long as Kafka has no issues, but when Kafka does hit an issue, it is ZooKeeper that takes care of the next steps.
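The retry-on-failure behaviour that an SDK provides can be sketched even at the CLI level. The orderer addresses and invoke arguments below are placeholders for illustration:

```shell
#!/bin/sh
# Hedged sketch: try the transaction against each orderer in turn,
# falling back to the next one if the current orderer is unreachable.
ORDERERS="orderer0.example.com:7050 orderer1.example.com:7050 orderer2.example.com:7050"

invoke_with_failover() {
  for orderer in $ORDERERS; do
    # Placeholder channel, chaincode, and arguments
    if peer chaincode invoke -o "$orderer" \
         --tls --cafile "$ORDERER_CA" \
         -C mychannel -n mycc -c '{"Args":["invoke","a","b","10"]}'; then
      echo "submitted via $orderer"
      return 0
    fi
    echo "orderer $orderer unavailable, trying next" >&2
  done
  return 1
}
```

An SDK does essentially the same thing internally, which is why a single orderer going down does not break a properly written client application.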

For Hyperledger Fabric production deployments what is the recommended number of kafka nodes and orderer nodes?

I have recently been able to deploy a production environment for Fabric, and I am looking to find out what should be considered when deploying a Fabric network in production. Are there any considerations I need to take into account when deploying the orderers and Kafka nodes, i.e. the number of nodes and their configuration? I cannot find much information on production-grade Fabric networks.
Quoting the Hyperledger Fabric documentation (Docs » Bringing up a Kafka-based Ordering Service):
Let K and Z be the number of nodes in the Kafka cluster and the ZooKeeper ensemble respectively:
At a minimum, K should be set to 4. (As we will explain in Step 4 below, this is the minimum number of nodes necessary in order to exhibit crash fault tolerance, i.e. with 4 brokers, you can have 1 broker go down, all channels will continue to be writeable and readable, and new channels can be created.)
Z will either be 3, 5, or 7. It has to be an odd number to avoid split-brain scenarios, and larger than 1 in order to avoid single points of failure. Anything beyond 7 ZooKeeper servers is considered overkill.
Update 14 Nov 2020: Please note that Hyperledger has deprecated the Kafka-based ordering service. Use of the Raft ordering service is recommended for production.
Based on the guide Bringing up a Kafka-based Ordering Service, I configured 4 Kafka nodes for production.
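For the legacy Kafka-based setup, the sizing rules quoted above can be captured in a small sanity check. This is an illustrative sketch, not part of any Fabric tooling:

```shell
# Hedged sketch: validate Kafka/ZooKeeper sizing against the documented
# rules (K >= 4 brokers; Z an odd number of 3, 5, or 7).
check_sizing() {
  K=$1; Z=$2
  if [ "$K" -lt 4 ]; then
    echo "K=$K too small: need at least 4 brokers for crash fault tolerance"
    return 1
  fi
  if [ $((Z % 2)) -ne 1 ] || [ "$Z" -lt 3 ] || [ "$Z" -gt 7 ]; then
    echo "Z=$Z invalid: use 3, 5, or 7 ZooKeeper servers"
    return 1
  fi
  echo "ok: K=$K brokers, Z=$Z zookeepers"
}

check_sizing 4 3   # the minimal production-grade layout from the docs
```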