Using Hyperledger Fabric in production

I have been using HF for some time, trying different things with business network specification and configuration.
But I have a couple of questions regarding best practices (if there are any yet) for using HF in production.
When we talk about using HF in production, should we use docker-compose-base.yaml, docker-compose-cli.yaml, configtx.yaml, etc. as the files for setting up and configuring our business network? If not, can you please describe the best-practice approach?
Thank you for your answers.

You could use Docker Swarm/Compose with derivatives of the sample compose files you referenced, or you could use Kubernetes to manage a network (or a subset of one). Project Cello is working on delivering such capability; its Ansible driver in particular has been demonstrated to work effectively, though it is far from a 1.0 level of maturity.
The reality is that for a production deployment you will likely want to manage more than just four peer nodes on a single VM or host: you will be running multiple peers across multiple VMs/hosts, possibly even across multiple networks.
Further, you will obviously need to add management and monitoring to the deployed containers for a true production experience. The Hyperledger chat and mailing lists can be good sources of help and insight.

Related

Hyperledger Fabric and Caliper. Question about performance

Hi experts!
I am researching the performance of Hyperledger Fabric using Caliper.
Today I had an idea for improving the read/write-set TPS of a benchmark.
As a first step with Caliper, we usually install and run it locally, which means it shares a single computer's resources with the network under test.
So I thought about using two different computers to run the TPS test with Caliper; that way we can utilize the resources of both machines.
However, I have no idea how to connect Caliper to a blockchain network running on a separate computer.
Question 1: Is this setup even possible?
Question 2: If so, how would I build this kind of testing environment?
I would really appreciate any answers to my questions.
Thank you
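
For reference, the key enabler here is that Caliper reaches the network through an ordinary Fabric connection profile, so nothing forces the benchmark client and the network onto one machine: the peer and orderer URLs just have to point at the remote host instead of localhost. A hedged sketch of the relevant fragment (the address 192.168.1.20 and all names and paths below are illustrative, not from the original post):

```typescript
// Hypothetical Fabric connection-profile fragment for a network running on
// a different machine than Caliper. The address 192.168.1.20, the cert
// paths, and the org/peer names are placeholders.
export const remoteNetworkProfile = {
  name: "remote-fabric-network",
  version: "1.0",
  channels: {
    mychannel: {
      orderers: ["orderer.example.com"],
      peers: { "peer0.org1.example.com": {} },
    },
  },
  orderers: {
    "orderer.example.com": {
      url: "grpcs://192.168.1.20:7050", // remote host, not localhost
      tlsCACerts: { path: "./crypto/orderer-tls-ca.pem" },
    },
  },
  peers: {
    "peer0.org1.example.com": {
      url: "grpcs://192.168.1.20:7051", // remote host, not localhost
      tlsCACerts: { path: "./crypto/peer0-org1-tls-ca.pem" },
      grpcOptions: { "ssl-target-name-override": "peer0.org1.example.com" },
    },
  },
};
```

The Fabric host's gRPC ports (7050/7051 by default) must be reachable from the Caliper machine, and the TLS certificates generated on the network host need to be copied across. Recent Caliper releases also document running multiple worker processes, potentially on separate machines, which is another way to spread load generation.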

Hyperledger Cello vs Minifabric

I am a newcomer to the Hyperledger world. While exploring options for building a blockchain network, I came across two:
Hyperledger Cello
Minifabric (https://github.com/hyperledger-labs/minifabric/)
Both projects are hosted under Hyperledger GitHub organizations.
I would like to understand the purposes of these two projects. I am looking for an option that lets me focus on the application layer/chaincode rather than getting bogged down in multi-host network management.
Eventually I will need to deploy the project to production, possibly using both on-premises machines and cloud VMs, so I am also looking for options for laying down the production network.
In addition to the excellent response by Kekomal, you might also want to consider the free IBM Blockchain Platform extension for VSCode (available in the VSCode extensions marketplace). It is designed to help you focus on the application/chaincode layer and provides a built-in Fabric network you can use for testing.
For your purpose, use Minifabric. I find it the simplest, fastest, and lightest way to start a development network for testing your chaincodes.
Cello is more oriented toward building and maintaining your pre-production or production network as a service.
By the way, there is a third option: building your network directly from the Fabric Docker images and Fabric tools, which is the best way to learn how your Fabric network really works. But for your purpose, I would use Minifabric.

What are reasons for the deprecation of Hyperledger Composer?

Hyperledger Composer is a platform for accelerating the development of Business Network Applications (BNAs). Why was it deprecated, and what are the alternatives to Composer for developing BNAs?
According to IBM, there are three main problems with Hyperledger Composer:
Composer has been designed from the start to support multiple blockchain platforms, not just Fabric - but this design has come at a cost. This design has meant that there are two completely different programming models - the Fabric programming model (chaincode) and the Composer programming model (business networks). This has caused significant confusion to users, with them needing to make a "choice" between the two programming models, with very few similarities between the two. In this particular case choice has been a bad thing, with many users opting not to use the "optional" part past the initial exploration or POC stage.
This design has also made it a lot harder for us to adopt and expose the latest Fabric features. For example, one of the questions we are constantly getting at the moment is "when can I use the Fabric v1.2 private data feature with Composer?". Whilst we've taken some steps (getNativeAPI) to assist with this problem, it is extremely difficult for us to keep up with and aligned with the latest features in Fabric when we are trying to maintain a design that keeps us blockchain platform independent. This has meant that users have understandably stopped using Composer and instead have reverted to developing with Fabric.
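
For illustration, the getNativeAPI escape hatch mentioned above let a Composer transaction processor function reach the underlying fabric-shim stub directly. A rough sketch of how it was used (the transaction type and key names are illustrative):

```typescript
/**
 * Hypothetical Composer transaction processor function using the
 * getNativeAPI() escape hatch to reach the underlying fabric-shim stub.
 * getNativeAPI is injected into Composer's TP-function sandbox, so it is
 * not imported here.
 * @param {org.example.HistoryQuery} tx - illustrative transaction type
 * @transaction
 */
async function queryAssetHistory(tx) {
  // getNativeAPI() exposes native Fabric chaincode APIs that Composer
  // itself did not surface, e.g. per-key history.
  const key = getNativeAPI().createCompositeKey(
    'Asset:org.example.SampleAsset', [tx.assetId]);
  const iterator = await getNativeAPI().getHistoryForKey(key);
  // ...walk the iterator with iterator.next(), as in plain chaincode
}
```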
Finally, those of you that have used Composer will likely be fans of our simple, easy-to-use APIs (JavaScript and REST) for building applications that interact with a blockchain network. There is a lot of code behind the scenes to enable these APIs that doesn't really belong in Composer. What we have ended up doing is glossing over the underlying, low-level Fabric APIs instead of pushing improvements directly into these Fabric APIs. Today it takes ~50 lines of code to submit a transaction using the Fabric APIs, whilst in Composer it takes ~5 lines of code, and that's wrong - Composer's value should not come from just making Fabric easier to use.
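
To make the line-count comparison concrete, client-side submission through composer-client looked roughly like this (the card name 'admin@my-network' and the Trade transaction are illustrative, not from the original answer):

```typescript
// Rough sketch of Composer's ~5-line submission path; card and
// transaction names are examples only.
const { BusinessNetworkConnection } = require('composer-client');

async function submitTrade(): Promise<void> {
  const connection = new BusinessNetworkConnection();
  await connection.connect('admin@my-network'); // connect via a business network card
  const factory = connection.getBusinessNetwork().getFactory();
  const trade = factory.newTransaction('org.example.trading', 'Trade');
  await connection.submitTransaction(trade);    // a single call to submit
  await connection.disconnect();
}
```

The later fabric-network Gateway API eventually brought plain Fabric client code down to a similar size, which is essentially the improvement the author argues should have gone into Fabric in the first place.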
Please read this for details.
The only problem with Composer is that IBM et al. abandoned it. Composer was (and to an extent still is) an effective way for users of Fabric to proof-of-concept (POC) business solutions for prospective customers -- and for users wanting to justify internal budgets to attempt to deploy projects internally -- using real-world business logic.
Composer should be the business logic stack that sits on top of Fabric and allows users to deploy without having to get down into the weeds.
I don't need to know that I need an orderer or CA for every org -- but I do need to know that I have 6 orgs who will participate in my network, that two of them need to communicate using private data on a separate channel from the others, and what my business use-case rules are (see the sketch below). An automated tool or script should allow me to launch an internal network *locally* and go from there. Yes, I will need to know Fabric details, or have someone on hand who does, to be able to tweak my networks -- but Composer let me POC these.
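
For what it's worth, the "two orgs sharing private data" requirement described above maps to a Fabric private-data collection definition along these lines (the org MSP IDs and collection name are placeholders, not from the original answer):

```typescript
// Hypothetical private-data collection definition -- the JSON normally
// passed to the peer CLI via --collections-config. Org MSP IDs and the
// collection name are placeholders.
const collectionsConfig = [
  {
    name: "org1Org2PrivateDetails",
    // only members of Org1 or Org2 may persist/read this data
    policy: "OR('Org1MSP.member', 'Org2MSP.member')",
    requiredPeerCount: 1, // endorsement must disseminate to >= 1 peer
    maxPeerCount: 2,      // gossip to at most 2 peers for redundancy
    blockToLive: 0,       // 0 = never purge the private data
    memberOnlyRead: true  // non-members cannot read, even via chaincode
  }
];
```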
There is no -- as in zero -- equivalent for Fabric -- in fact there is no tool to allow one to easily clone fabric samples for their own use and easily plug in their own network / org settings.
And the IBM VS Code plugin tool is garbage if you want to set up an internal, standalone network without going to the IBM cloud. Really? Seriously?
Without Composer -- or a tool like it -- investing in Hyperledger Fabric is a huge financial and resource gamble and a time sink. Period. The code changes almost weekly, there are significant bugs, and the community is reluctant to fix what are at times glaring documentation issues or to address hardware-sizing questions. Not to mention the cost of assigning engineers and software architects to test what is not-yet-ready-for-prime-time software. Forget the amount of time needed just to get familiar enough with the documentation and Fabric components to architect business-grade networks.
Regarding the points made in the answer above:
There should be two distinct programming models, because the BNA approach works from a business-deployment point of view. To say that having a Composer stack with its API on top of Fabric "confuses" users is like the old saying "if the customer is too stupid to know how to use the deeply technical product, the customer is too stupid" -- that's fundamentally wrong.
I shouldn't have to refresh my knowledge of combustion engines every time I get into a vehicle and press the start button -- I know where I have to go, how I will get there and know how to operate the vehicle to do so. And if I want to tweak or otherwise modify the vehicle, its engine, electrical system, etc. I get out the equivalent of fabric documentation and learn to use those tools or hire a mechanic that already knows how to use them.
And the design did not make it harder to adopt and expose the latest features of Fabric -- what the development team failed to do was implement those features in Composer in lockstep with releases of Fabric. This was a dev-team delivery issue, not an end-user issue. And to say -- not imply, say -- that the community didn't step up to the plate is a load of crap. If IBM wanted to support it, it could have -- it has the personnel, financial, and global resources to do so.
Within real-world settings, the business perspective on blockchain / distributed-ledger viability for enterprise applications is less than enthusiastic -- in fact it's doubtful at best. The number one complaint we get from prospects globally (NA, EMEA) is that no one can adequately demo this. No kidding -- showing a prospect via a terminal window that car ownership can move from one user to another is going to solve their business needs? Really? Via a terminal window, no less.
For us to POC a complex use case and be able to demo it back-of-the-napkin, we now have to write entire Fabric apps, or hope we can cobble through fixing a fabric-samples example -- and in the process work through the bugs in the samples.
We've spent hundreds upon hundreds of hours building out POC use cases only to have Composer go by the wayside, Fabric version x not work with the just-released Fabric version xx, prerequisite software versions change, or issues crop up with (god forbid) Raft or Kafka that haven't been fully tested prior to the next-greatest-thing "alpha" Fabric release. Etc., etc., etc.
And to the writer's last point above -- the value of Composer should absolutely be making Fabric easier to use for basic network stand-up and POCs. No one is suggesting that getting into the weeds with Fabric is a bad thing -- but from a BUSINESS point of view, having something like Composer to POC with before committing to projects is essential.
Will we continue to work with Fabric and hope the development team catches up with real-world business needs? Probably. But all those IBM and other Composer training sessions we've put employees through have, for the most part, been a waste.
So, from a team who is trying so very hard to justify what is good about Hyperledger and Fabric -- please don't just sack something like Composer in the future. Because we're not going to invest in personnel and train them if this is just the next big thing to go by the wayside. I have 15 teams deployed with prospects globally working prospective use cases and implementations -- trying to tweak and push customer-centric use case demos to them has been Hyperledger Fabric hell.
One person's rather small opinion. GR
I think the reasons are clear from the previous comments, but for your last question, one option hundreds of devs are taking is Convector. Convector is a Hyperledger Labs project that was created before Hyperledger Composer was deprecated, and it feels similar to developers coming from Composer. It follows a model/controller pattern (similar to Composer's assets and transactions); however, it compiles to native Fabric chaincode and does not introduce a separate runtime.
Code created with Convector can be taken to production, and the project includes all sorts of helpers: an API generator, a development-environment bootstrapper (one command to create a local network), decorators for making models more predictable, unit tests by default (CI/CD friendly), and dozens of code samples and real-life projects to use as reference.
Convector has a community of hundreds of devs; some migrated from Composer rather easily, while for others it is the first Fabric tool they learned. The main reason Convector won't go away anytime soon, even though it looks and feels similar to Composer, is its decoupled architecture and its ability to run natively on Fabric.
If you'd like to join the community, people there will help you migrate from Composer to Convector. You can join here.
Here's a blog post mapping concepts from Hyperledger Composer to Convector.
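
To make the model/controller comparison concrete, a Convector chaincode is plain TypeScript along these lines (the package names are Convector's published modules; the Car model and transfer logic are illustrative):

```typescript
import * as yup from 'yup';
import { ConvectorModel, ReadOnly, Required, Validate } from '@worldsibu/convector-core-model';
import { Controller, ConvectorController, Invokable, Param } from '@worldsibu/convector-core-controller';

// Model: roughly the counterpart of a Composer asset definition.
export class Car extends ConvectorModel<Car> {
  @ReadOnly()
  public readonly type = 'io.example.car';

  @Required()
  @Validate(yup.string())
  public owner: string;
}

// Controller: roughly the counterpart of a Composer transaction processor,
// but it compiles to native Fabric chaincode rather than running inside a
// Composer runtime.
@Controller('car')
export class CarController extends ConvectorController {
  @Invokable()
  public async transfer(
    @Param(Car) car: Car,
    @Param(yup.string()) newOwner: string
  ) {
    car.owner = newOwner;
    await car.save(); // persists to the ledger via the Fabric stub
  }
}
```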
A quick recap of Convector:
Feels familiar to Hyperledger Composer users.
The same code can be taken to production.
Runs and scales natively with Fabric.
An ecosystem of tools: unit tests, a developer environment, an API generator, etc.
A great and friendly community on Discord.
--
Disclaimer: I work with Covalent, the developers of Convector. Convector is a free open source Apache 2.0 group of projects.

Hyperledger Fabric: discover network topology graphically

I have just started working with Hyperledger Fabric. I was looking for a tool that would demonstrate graphically the entire network and its operations.
Is there any existing project that does that?
If not, I intend to create one. Would anyone like to collaborate?
As a starting point, I am looking at using http://hyperledger-fabric.readthedocs.io/en/latest/build_network.html. I understand that it will be worth including the following (a rough data-model sketch follows the list):
Organizations
MSPs
CAs
Peers
Channels
(please add to this list)
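
A sketch of the kind of data model such a tool might maintain (purely illustrative; none of these types come from the Fabric SDKs):

```typescript
// Purely illustrative data model for a Fabric topology visualizer.
interface Organization {
  mspId: string;           // e.g. "Org1MSP"
  peers: Peer[];
  ca?: CertificateAuthority;
}

interface Peer {
  name: string;            // e.g. "peer0.org1.example.com"
  url: string;             // gRPC endpoint
  channels: string[];      // names of channels the peer has joined
}

interface CertificateAuthority {
  name: string;
  url: string;
}

interface Channel {
  name: string;
  organizations: string[]; // MSP IDs of participating orgs
}

interface NetworkTopology {
  organizations: Organization[];
  channels: Channel[];
}
```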
Looking through the APIs, I am trying to figure out a starting point. I was hoping to find something like Client.getOrganisations() to get a list of all organisations on the network, but it doesn't seem to exist. Any ideas how to discover all organisations using the Node SDK?
Entities in a Hyperledger Fabric network are usually run in Docker, Docker Swarm, or Kubernetes environments, so I'd suggest taking a look at the Weave Scope project, which can provide quite good visualization of your Docker container deployment.
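
To complement that: there is indeed no Client.getOrganisations(), but fabric-client's service discovery (v1.2+) can enumerate the MSPs a channel knows about. A hedged sketch, assuming a connection profile and an enrolled identity are already in place (paths and names are placeholders; error handling is elided):

```typescript
// Hedged sketch: using fabric-client (v1.x) service discovery to list the
// organizations (MSPs) known on a channel.
const Client = require('fabric-client');

async function listOrganisations(): Promise<void> {
  const client = Client.loadFromConfig('./connection-profile.yaml');
  await client.initCredentialStores();
  // Assumes an enrolled admin/user context is available in the credential
  // stores; discovery requires a signing identity.

  const channel = client.getChannel('mychannel');
  await channel.initialize({ discover: true, asLocalhost: true });

  const results = await channel.getDiscoveryResults();
  console.log(Object.keys(results.msps));         // e.g. [ 'Org1MSP', 'Org2MSP' ]
  console.log(Object.keys(results.peers_by_org)); // peers grouped by org
}
```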

Geo Redundancy in Azure Service Fabric Applications

I'm trying to come up with a solution for achieving geo-redundancy (2+ datacentres) while using Service Fabric reliable Actors/Services to manage state. It is implied here that geo-replication is possible:
This may happen when, for example, if you aren’t geo replicated and your entire cluster is in one data center, and the entire data center goes down.
but doesn't explain how to switch it on.
Does anybody know if it's a planned feature for ASF that just hasn't been released yet, or whether it's present but not fully explored yet?
Alternatively does anybody have any recommended approaches for cross DC resilience when the state required to run the app is stored using ASF's StateManager?
thanks,
Alex
Alex,
Apparently the Service Fabric team has yet to crack this problem (more info below). However, you should be able to build a geo-HA Service Fabric cluster on Azure yourself. Here's an example:
https://alexandrebrisebois.wordpress.com/2016/05/31/deploy-a-geo-ha-service-fabric-cluster-on-azure/
Not today, but this is a common request that we continue to investigate.
The core Service Fabric clustering technology knows nothing about Azure regions and can be used to combine machines running anywhere in the world, so long as they have network connectivity to each other. However, the Service Fabric cluster resource in Azure is regional, as are the virtual machine scale sets that the cluster is built on. In addition, there is an inherent challenge in delivering strongly consistent data replication between machines spread far apart. We want to ensure that performance is predictable and acceptable before supporting cross-regional clusters. Source: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-common-questions
Cheers,
Paulo
There is no reason you cannot install a series of nodes in different regions as part of the same Fabric, and use placement constraints to control service allocation. As long as the nodes can properly communicate with each other, there should be no problem with this.
If you're using Azure, you should deploy them to Virtual Networks, and link them together using VPNs. You could even cross to on-prem.
I believe the answer would be to use a custom replicator implementation and bridge multiple clusters with ExpressRoute.
