I was following the upgrade guide at https://hyperledger-fabric.readthedocs.io/en/release-2.2/upgrading_your_components.html. I am currently running a 1.4.1 network. If I upgrade my binaries to 2.2 but leave the capabilities as they are (1.4), will it be possible to run an external chaincode container, or chaincode as a service, with the 1.4 capabilities?
If you run the 1.4 images and binaries with the 1.4 capabilities, your chaincodes are launched by the peer and then connect back to it. Chaincode as a service is a 2.0 feature, so you should run with the 2.0 capabilities.
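For reference, once you are on the 2.0 capabilities, chaincode as a service is wired up through the peer's external builder plus a small connection.json that tells the peer where the running chaincode server listens. A minimal sketch (the address and TLS setting here are placeholder assumptions):

```json
{
  "address": "mycc.example.com:9999",
  "dial_timeout": "10s",
  "tls_required": false
}
```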
I have a GKE cluster running multiple nodes across two zones. My goal is to schedule a job to run once a week that executes sudo apt-get upgrade to update the system packages. Doing some research, I found that GCP provides a tool called "OS patch management" that does exactly that. I tried to use it, but the patch job execution failed with this error:
Failure reason: Instance is part of a Managed Instance Group.
I also noticed that during the creation of the GKE node pool there is an option to enable "Auto upgrade". But according to its description, it only upgrades the Kubernetes version.
According to the blog post Exploring container security: the shared responsibility model in GKE:
For GKE, at a high level, we are responsible for protecting:
The nodes’ operating system, such as Container-Optimized OS (COS) or Ubuntu. GKE promptly makes any patches to these images available. If you have auto-upgrade enabled, these are automatically deployed. This is the base layer of your container—it’s not the same as the operating system running in your containers.
Conversely, you are responsible for protecting:
The nodes that run your workloads. You are responsible for any extra software installed on the nodes, or configuration changes made to the default. You are also responsible for keeping your nodes updated. We provide hardened VM images and configurations by default, manage the containers that are necessary to run GKE, and provide patches for your OS—you’re just responsible for upgrading. If you use node auto-upgrade, it moves the responsibility of upgrading these nodes back to us.
The node auto-upgrade feature DOES patch the OS of your nodes; it does not just upgrade the Kubernetes version.
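If auto-upgrade is off on an existing node pool, it can be turned on after the fact. A minimal sketch, assuming hypothetical cluster, pool, and zone names:

```sh
# Enable auto-upgrade on an existing node pool (all names are placeholders)
gcloud container node-pools update my-pool \
    --cluster my-cluster \
    --zone us-central1-a \
    --enable-autoupgrade
```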
OS Patch Management only works for GCE VMs, not for GKE.
You should refrain from doing OS-level upgrades in GKE yourself; they could cause unexpected behavior (for example, a package gets upgraded and changes something that breaks the GKE configuration).
You should let GKE auto-upgrade the OS and Kubernetes. Auto-upgrade will upgrade the OS, since GKE releases are intertwined with the OS releases.
One easy way to go is to sign your clusters up for release channels; this way they get upgraded as often as you want (depending on the channel) and your OS will be patched regularly.
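For example, enrolling an existing cluster in a channel is a one-liner (cluster name, zone, and channel choice are placeholders; older gcloud releases may gate this flag behind the beta track):

```sh
# Enroll an existing cluster in the "regular" release channel (names are placeholders)
gcloud container clusters update my-cluster \
    --zone us-central1-a \
    --release-channel regular
```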
You can also follow the GKE hardening guide, which provides steps to make sure your GKE clusters are as secure as possible.
I use fabric-node-sdk 1.4 to build an API server with Fabric 1.4.4 locally, and it works normally. But when using it with the managed blockchain service on AWS, I get this error:
error: [Remote.js]: Error: Failed to connect before the deadline URL:grpcs:.......
So I'm not sure whether this is because the Fabric version on AWS is 1.2.
(Or is there a way to test connectivity to a grpcs:// URL?)
Solved: I fixed the connection profile. I can conclude that fabric-node-sdk 1.4 can be used with Fabric 1.2.
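To answer the side question about probing a grpcs:// URL: it is just gRPC over TLS, so you can at least confirm the endpoint is reachable and presents a certificate with openssl (host and port below are placeholders):

```sh
# TLS handshake check against a grpcs endpoint (host/port are placeholders)
openssl s_client -connect peer0.example.com:7051
```

For reference, these are the peer fields in a 1.4 connection profile that have to line up with the remote endpoints (all values are placeholders):

```json
{
  "peers": {
    "peer0": {
      "url": "grpcs://peer0.example.com:7051",
      "tlsCACerts": { "path": "/path/to/ca-cert.pem" },
      "grpcOptions": { "ssl-target-name-override": "peer0.example.com" }
    }
  }
}
```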
I have an existing Hyperledger Fabric 1.0.x install, how do I perform an upgrade to the new 1.1 release(s)?
At a high level, upgrading a Fabric network can be performed with the following sequence:
Update orderers, peers, and fabric-ca. These updates may be done in parallel.
Update client SDKs.
Enable v1.1 channel capability requirements (see the sketch after this list).
(Optional) Update the Kafka cluster.
The details of each step in the process are described in the documentation.
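For step 3 specifically, the capability flags live in configtx.yaml and are applied to an existing channel via a config update transaction. A minimal sketch of the relevant section (consult the upgrade docs for the full config-update flow):

```yaml
# configtx.yaml - v1.1 capability flags (sketch; applied via a channel config update)
Capabilities:
  Channel: &ChannelCapabilities
    V1_1: true
  Orderer: &OrdererCapabilities
    V1_1: true
  Application: &ApplicationCapabilities
    V1_1: true
```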
For my development environment, I deleted the 1.0.5 images (and emptied the bin folder) and executed the command:
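(The command itself appears to have been cut off above. For illustration only, pulling a specific release's images looks like this; the arch-prefixed tags were the convention for 1.1-era images:)

```sh
# Pull the 1.1.0 release images (tags are illustrative)
docker pull hyperledger/fabric-peer:x86_64-1.1.0
docker pull hyperledger/fabric-orderer:x86_64-1.1.0
docker pull hyperledger/fabric-ca:x86_64-1.1.0
docker pull hyperledger/fabric-tools:x86_64-1.1.0
```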
I want to be able to run Spark 2.0 and Spark 1.6.1 in cluster mode on a single cluster so they can share resources. What are the best practices for doing this? The reason is that I want to shield a certain set of applications that rely on 1.6.1 from code changes, while others rely on Spark 2.0.
Basically, the cluster could rely on dynamic allocation for Spark 2.0 but maybe not for 1.6.1; this is flexible.
This is possible by using Docker: you can run various versions of Spark applications, since Docker runs each application in isolation.
Docker is an open platform for developing, shipping, and running applications. With Docker you can separate your applications from your infrastructure and treat your infrastructure like a managed application.
Industry is adopting Docker since it provides the flexibility to run applications of various versions on a single host, among many other benefits.
Mesos also allows you to run Docker containers using Marathon.
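As a sketch, a Marathon app definition that runs a job under a specific Spark version inside a Docker container might look like this (the image name, ZooKeeper address, and sizing are placeholder assumptions):

```json
{
  "id": "spark-1.6.1-job",
  "cmd": "/opt/spark/bin/spark-submit --master mesos://zk://zk1:2181/mesos /app/job.py",
  "cpus": 2,
  "mem": 4096,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "myrepo/spark:1.6.1" }
  }
}
```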
For more information, please refer to:
https://www.docker.com/
https://mesosphere.github.io/marathon/docs/native-docker.html
Hope this helps!
I'm running jobs through a Jenkins server, which basically sends requests to one of the GridGain nodes for test execution. The tests execute successfully, but when it tries to unlock the nodes it hangs; it gets no response from GridGain. There are no detailed logs on either the GridGain or the Jenkins side. This started happening when I upgraded Jenkins and Java to 1.7; Tomcat and GridGain stayed on their existing (old) versions.
GridGain: 2.1.1
Apache Tomcat: apache-tomcat-6.0.24
Jenkins: 1.549
GridGain v2.1.1 was not tested against Java 1.7, and it has been a while since that version was released. You should upgrade to the latest version of GridGain, which was released under the Apache 2.0 license (the current GridGain version is 6.0.1).