Hyperledger Fabric setup with multiple orgs - hyperledger-fabric

I want to set up a Fabric network with multiple organisations in the financial industry.
Is there any detailed guide that helps me decide what kind of implementation makes sense? E.g. how many orderer nodes, how many channels, when to use private data, who has which rights, etc...
Kind of searching for a decision tree/flow chart, whatever...
If you can recommend anything I'd be really thankful
Cheers

The number of channels, use of private data, access control etc. depend on the use case. I would recommend going through the official Fabric documentation. Also, if you want a highly available Kubernetes network setup, there is already a sample available in fabric-samples.
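To make the private-data decision a bit more concrete: in Fabric, which organisations may hold a piece of private data is declared in a collection definition, a JSON file supplied when the chaincode is deployed. A minimal hypothetical example for a collection shared by two of the organisations; the MSP IDs and the collection name are placeholders:

```json
[
  {
    "name": "org1Org2SharedData",
    "policy": "OR('Org1MSP.member', 'Org2MSP.member')",
    "requiredPeerCount": 1,
    "maxPeerCount": 2,
    "blockToLive": 0,
    "memberOnlyRead": true
  }
]
```

Peers of the other organisations on the channel store only a hash of this data; `blockToLive: 0` means the private data is never purged.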

Related

What are the reasons for the deprecation of Hyperledger Composer?

Hyperledger Composer is a platform for accelerating the development of Business Network Applications (BNAs). Why is it deprecated, and what are the alternatives to Composer for developing BNAs?
According to IBM, there are the following three problems with Hyperledger Composer:
Composer has been designed from the start to support multiple blockchain platforms, not just Fabric - but this design has come at a cost. This design has meant that there are two completely different programming models - the Fabric programming model (chaincode) and the Composer programming model (business networks). This has caused significant confusion to users, with them needing to make a "choice" between the two programming models, with very few similarities between the two. In this particular case choice has been a bad thing, with many users opting not to use the "optional" part past the initial exploration or POC stage.
This design has also made it a lot harder for us to adopt and expose the latest Fabric features. For example, one of the questions we are constantly getting at the moment is "when can I use the Fabric v1.2 private data feature with Composer?". Whilst we've taken some steps (getNativeAPI) to assist with this problem, it is extremely difficult for us to keep up with and aligned with the latest features in Fabric when we are trying to maintain a design that keeps us blockchain platform independent. This has meant that users have understandably stopped using Composer and instead have reverted to developing with Fabric.
Finally, those of you that have used Composer will likely be fans of our simple, easy-to-use APIs (JavaScript and REST) for building applications that interact with a blockchain network. There is a lot of code behind the scenes to enable these APIs that doesn't really belong in Composer. What we have ended up doing is glossing over the underlying, low-level Fabric APIs instead of pushing improvements directly into these Fabric APIs. Today it takes ~50 lines of code to submit a transaction using the Fabric APIs, whilst in Composer it takes ~5 lines of code, and that's wrong - Composer's value should not come from just making Fabric easier to use.
Please read this for details.
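For context on the line counts quoted above: the simpler programming model eventually arrived in Fabric itself. The fabric-network Node SDK (introduced with Fabric 1.4) exposes a Gateway API that gets transaction submission down to a handful of lines. A minimal sketch using the 2.x API, assuming an already-enrolled identity; the connection profile, wallet path, identity label, channel, chaincode and function names are all placeholders:

```typescript
import { Gateway, Wallets } from 'fabric-network';
import * as fs from 'fs';

async function main(): Promise<void> {
  // Connection profile describing the network endpoints (placeholder path).
  const connectionProfile = JSON.parse(
    fs.readFileSync('connection-org1.json', 'utf8')
  );
  // Wallet holding the enrolled identity's certificate and private key.
  const wallet = await Wallets.newFileSystemWallet('./wallet');

  const gateway = new Gateway();
  await gateway.connect(connectionProfile, {
    wallet,
    identity: 'appUser', // placeholder label in the wallet
    discovery: { enabled: true, asLocalhost: true },
  });
  try {
    const network = await gateway.getNetwork('mychannel');
    const contract = network.getContract('asset-transfer');
    // Endorsement, ordering and waiting for commit happen in this one call.
    await contract.submitTransaction('CreateAsset', 'asset1', 'blue');
  } finally {
    gateway.disconnect();
  }
}

main().catch(console.error);
```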
The only problem with Composer is that IBM et al. abandoned it. Composer was (and to an extent kind of still is) an effective way for users of Fabric to proof-of-concept (POC) business solutions for prospective customers -- and for users wanting to justify internal budgets to attempt to deploy projects internally -- using real-world business logic.
Composer should be the business logic stack that sits on top of Fabric and allows users to deploy without having to get down into the weeds.
I don't need to know that I need an orderer or CA for every org -- but I do need to know that I have 6 orgs who will participate in my network, that two of them need to communicate using private data on a separate channel from the others, and what my business use case rules are. An automated tool or script should allow me to launch an internal network locally and go from there. Yes, I will need to know Fabric details, or have someone on hand who does, to be able to tweak my networks -- but Composer let me POC these.
There is no -- as in zero -- equivalent for Fabric; in fact there is no tool that lets one easily clone the fabric-samples for their own use and plug in their own network/org settings.
And the IBM VS Code plugin tool is garbage if you want to set up an internal, standalone network without going to the IBM cloud. Really? Seriously?
Without Composer -- or a tool like it -- investing in Hyperledger Fabric is a huge financial and resource gamble and time sink. Period. The code changes almost weekly, there are significant bugs, and the community is reticent to fix what are at times glaring documentation issues or to address hardware sizing questions. Not to mention the cost of assigning engineers and software architects to test not-yet-ready-for-prime-time software. Forget the amount of time needed just to get familiar with the documentation and Fabric components in order to architect business-grade networks.
Regarding the points made in the answer above:
There should be two distinct programming models, because the BNA approach works from a business deployment point of view. To say that having a Composer stack with its API on top of Fabric "confuses" users is like the old saying that "if the customer is too stupid to know how to use the deeply technical product, the customer is too stupid" -- that's fundamentally wrong.
I shouldn't have to refresh my knowledge of combustion engines every time I get into a vehicle and press the start button -- I know where I have to go, how I will get there, and how to operate the vehicle to do so. And if I want to tweak or otherwise modify the vehicle, its engine, electrical system, etc., I get out the equivalent of the Fabric documentation and learn to use those tools, or hire a mechanic who already knows how to use them.
And the design did not make it harder to adopt and expose the latest features of Fabric -- what the development team failed to do was implement those features in Composer in lockstep with releases of Fabric. This was a dev-team deployment issue, not an end-user issue. And to say -- not imply, say -- that the community didn't step up to the plate is a load of crap. If IBM wanted to support it, it could have -- it has the personnel, financial and global resources to do so.
Within real-world settings, the business perspective on blockchain / distributed ledger viability for enterprise applications is less than enthusiastic -- in fact it's doubtful at best. The number one complaint we get from prospects globally (NA, EMEA) is that no one can adequately demo this. No kidding -- showing a prospect via a terminal window that car ownership can move from one user to another is going to solve their business needs? Really? Via a terminal window, no less.
For us to POC a complex use case and do a back-of-the-napkin demo of it, we now have to write entire Fabric apps or hope we can cobble through fixing a fabric-samples example -- and in the process work through the bugs in the samples.
We've spent hundreds upon hundreds of hours building out POC use cases, only to have Composer go by the wayside, Fabric version x not work with the just-released Fabric version xx, prerequisite software versions change, or issues crop up with (god forbid) Raft or Kafka that haven't been fully tested prior to the "alpha" next-greatest-thing Fabric release. Etc., etc., etc.
And to the writer's last point above -- the value of Composer should absolutely be making Fabric easier to use for basic network stand-up and POCs. No one is suggesting that getting into the weeds with Fabric is a bad thing -- but from a BUSINESS point of view, having something like Composer to POC with before committing to projects is essential.
Will we continue to work with Fabric and hope that the development team catches up with real-world business needs? Probably. All those IBM and other training sessions for Composer we've put employees through have, for the most part, been a waste.
So, from a team that is trying so very hard to justify what is good about Hyperledger and Fabric -- please don't just sack something like Composer in the future. Because we're not going to invest in personnel and train them if this is just the next big thing to go by the wayside. I have 15 teams deployed with prospects globally, working prospective use cases and implementations -- trying to tweak and push customer-centric use case demos to them has been Hyperledger Fabric hell.
One person's rather small opinion. GR
I think the reasons are clear from the previous comments, but for your last question: one option hundreds of devs are taking is Convector. Convector is a Hyperledger Labs project that was created before Hyperledger Composer was deprecated but looks similar to developers. It follows a model/controller pattern (similar to Composer's assets and transactions); however, it compiles natively to Fabric code and does not create a runtime.
Code created with Convector can be taken to production and comes with all sorts of helpers: an API generator, a development-environment bootstrapper (one command to create a local network), decorators for making models more predictable, unit tests by default (CI/CD friendly), and dozens of code samples and real-life projects to use as reference.
Convector has a community of hundreds of devs; some of them migrated from Composer rather easily, while for others it is the first tool they get to know for Fabric. The main reason Convector won't go away anytime soon, even though it looks and feels similar to Composer, is its decoupled architecture and its ability to run natively with Fabric.
If you'd like to join the community, people there will help you migrate from Composer to Convector. You can join here.
Here's a blog post mapping concepts from Hyperledger Composer to Convector.
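For a flavour of the model/controller pattern, here is a rough sketch based on Convector's documented decorators; the asset, its attributes, and the controller logic are illustrative rather than taken from a real project:

```typescript
import * as yup from 'yup';
import { ConvectorModel, ReadOnly, Required, Validate } from '@worldsibu/convector-core-model';
import { Controller, ConvectorController, Invokable, Param } from '@worldsibu/convector-core-controller';

// A model plays the role of a Composer asset: a typed, validated document.
export class Car extends ConvectorModel<Car> {
  @ReadOnly()
  public readonly type = 'io.example.car';

  @Required()
  @Validate(yup.string())
  public make: string;

  @Required()
  @Validate(yup.string())
  public owner: string;
}

// A controller plays the role of Composer transactions, but it compiles to
// plain Fabric chaincode instead of running inside a separate runtime.
@Controller('car')
export class CarController extends ConvectorController {
  @Invokable()
  public async register(@Param(Car) car: Car) {
    await car.save(); // persists the model to the Fabric world state
  }

  @Invokable()
  public async transfer(
    @Param(yup.string()) carId: string,
    @Param(yup.string()) newOwner: string
  ) {
    const car = await Car.getOne(carId);
    car.owner = newOwner;
    await car.save();
  }
}
```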
Small recap about Convector:
Looks familiar to Hyperledger Composer developers.
The same code can be taken to production.
Runs and scales natively with Fabric.
An ecosystem of tools: unit tests, developer environment, API generator, etc.
A great and friendly community on Discord.
--
Disclaimer: I work with Covalent, the developers of Convector. Convector is a free, open-source (Apache 2.0) group of projects.

Hyperledger Fabric private data collection to distribute large files

We are currently researching Hyperledger Fabric, and from the documentation we know that a private data collection can be set up among a subset of organizations. There would be a private state DB (a.k.a. side DB) on each of these organizations, and per my understanding the side DB is just like a normal state DB, which normally uses CouchDB.
One of our main requirements is that we have to distribute files (e.g. PDFs) among some subset of the peers. Each file has to be disseminated to and stored at the related peers, so centralized storage like AWS S3 or other cloud/server storage is not acceptable. As the files may be large, the physical copies must be stored and disseminated off-chain; the transaction block may only store the hash of these documents.
My idea is that we may make use of the private data collection and the side DB. The physical files could be stored in the side DB (maybe as a base64 string?) and distributed via the gossip protocol (a P2P protocol), which is a built-in feature of Hyperledger Fabric. The hash of the document, along with other transaction details, can be stored in a block as usual. As these are all native Hyperledger Fabric features, I expect the transfer of the files via the gossip protocol and the creation of the corresponding block to be in sync.
My question is:
Is this a feasible way to achieve the requirement (distribution of the files to different peers while creating a new block)? I kinda feel like it is hacky.
Is this a good way / practice to achieve what we want? I have been doing research but I cannot find any implementation similar to this.
Most of the tutorials I found online presume that the files can be stored in a single centralized store like the cloud or some sort of server, while our requirement demands distribution of the files as well. Is my idea described above acceptable and feasible? We are very new to blockchain and any advice is appreciated!
Is this a feasible way to achieve the requirement (distribution of the files to different peers while creating a new block)? I kinda feel like it is hacky.
The workflow of private data distribution is that the orderer bundles the private data transaction, which contains only a hash used to verify the data, into a new block. So you don't have to build a workaround for this; private data provides it by default. The data itself gets distributed between authorized peers via the gossip data dissemination protocol.
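As a sketch of how that looks from chaincode -- assuming the Node fabric-contract-api and a collection named filesCollection in your collection config (the collection, contract and function names here are placeholders) -- the file bytes travel in the transient map so they never reach the orderer:

```typescript
import { Context, Contract } from 'fabric-contract-api';
import { createHash } from 'crypto';

export class FileContract extends Contract {
  public async StoreFile(ctx: Context, fileId: string): Promise<void> {
    // The file bytes arrive in the transient map, so they are never part
    // of the transaction payload that reaches the ordering service.
    const transient = ctx.stub.getTransient();
    const fileBytes = transient.get('file');
    if (!fileBytes || fileBytes.length === 0) {
      throw new Error('the file must be passed in the transient map');
    }
    // Stored only in the side DBs of peers authorized by the collection
    // policy, and distributed between them via gossip.
    await ctx.stub.putPrivateData('filesCollection', fileId, fileBytes);

    // Optionally record the hash on the public ledger so any channel
    // member can later verify a copy of the file.
    const hash = createHash('sha256').update(fileBytes).digest('hex');
    await ctx.stub.putState(fileId, Buffer.from(hash));
  }
}
```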
Is this a good way / practice to achieve what we want? I have been doing research but I cannot find any implementation similar to this.
Yes and no, sorry to say, because it depends on your file sizes and volume. Fabric is capable of providing really high throughput. I would test things out and see whether it meets the requirements.
The other approach would be a workaround using IPFS (a P2P file system). You can read more about that approach here.
And here is an article discussing storing 'larger files' on chain. Maybe this gives some constructive insights as well, but keep in mind it is an older article.
Check out IBM Blockchain Document Store; it is an implementation of storing any document (PDF or otherwise) both on and off chain. It has been done.
And while the implementation isn't publicly available, there is vast documentation on its usage, so you can probably glean some useful information from it.

Which is the best solution for a supply chain application: Fabric or Sawtooth?

I am a little confused between Fabric and Sawtooth for supply chain application development. From the documentation it appears that Sawtooth is best for supply chain, but every validator node keeps a copy of the distributed ledger, and there is no concept of channels or private data.
Sawtooth has an open source solution for supply chain.
Fabric channels may solve your use case, although that approach doesn't scale well if you need several channels. You can also have multiple blockchains with Sawtooth, which is essentially what a channel is anyway.
The blockchain concept is that all data is transparent, viewable, and auditable by everyone. That doesn't always work for some use cases and is an active research area. Encrypting payload data and storing some data off-chain are two possible solutions.

Main differences Hyperledger Fabric & BigchainDB

Both Hyperledger Fabric and BigchainDB offer the possibility of a private, permissioned blockchain database. With their concepts they try to address the main disadvantages of public blockchains, such as lack of privacy and lack of performance (e.g. low throughput).
What are the main differences between the two technologies?
If you try out the example applications of both frameworks, you will quickly notice that BigchainDB is easier to start with. Hyperledger Fabric requires a lot more knowledge to master.
Fabric has different kinds of nodes (peers, peers that additionally act as endorsers, orderers) and thereby allows a very flexible setup, depending on the consortium design and the organisations themselves. BigchainDB has one kind of node that can be deployed, and every involved organisation runs one node, of course.
Fabric has richer capabilities to model assets and all kinds of transactions. A transaction kind can always be implemented by a custom chaincode function doing whatever is needed to query or modify the state of the ledger. BigchainDB only knows CREATE and TRANSFER transactions on each defined asset: you can create something (which may also be divisible, e.g. amounts of tokens) and transfer it completely or partly.
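To make that contrast concrete, here is a rough sketch of BigchainDB's entire transaction vocabulary using the official bigchaindb-driver package; the server URL, asset data and metadata payloads are placeholders:

```typescript
import * as driver from 'bigchaindb-driver';

const conn = new driver.Connection('http://localhost:9984/api/v1/');
const alice = new driver.Ed25519Keypair();
const bob = new driver.Ed25519Keypair();

async function main(): Promise<void> {
  // CREATE: register a new asset owned by Alice.
  const createTx = driver.Transaction.makeCreateTransaction(
    { serial: 'abc123' },          // immutable asset data
    { note: 'created for demo' },  // mutable metadata
    [driver.Transaction.makeOutput(
      driver.Transaction.makeEd25519Condition(alice.publicKey))],
    alice.publicKey
  );
  const signedCreate = driver.Transaction.signTransaction(createTx, alice.privateKey);
  await conn.postTransactionCommit(signedCreate);

  // TRANSFER: move the asset from Alice to Bob by spending output 0 of
  // the CREATE transaction. There are no other transaction types.
  const transferTx = driver.Transaction.makeTransferTransaction(
    [{ tx: signedCreate, output_index: 0 }],
    [driver.Transaction.makeOutput(
      driver.Transaction.makeEd25519Condition(bob.publicKey))],
    { note: 'handed over to Bob' }
  );
  const signedTransfer = driver.Transaction.signTransaction(transferTx, alice.privateKey);
  await conn.postTransactionCommit(signedTransfer);
}

main().catch(console.error);
```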
Both seem to have pretty low-level APIs. Fabric has more APIs and config models that need to be mastered, but Fabric is complemented by frameworks such as Composer (with all its nice libraries, like the playground and rest-server) that really improve the programming model. As far as I know there is nothing like that for BigchainDB, also because it is pretty simple from the start.
Consensus: BigchainDB uses Tendermint, which is Byzantine fault tolerant. Fabric 0.6 used PBFT (Practical Byzantine Fault Tolerance, based on the work of Miguel Castro and Barbara Liskov); in Fabric 1.x, ordering is pluggable, with crash-fault-tolerant implementations such as Solo and Kafka.
In general I would say that Fabric is intended for complex business use cases, while BigchainDB is simpler and nice for assets that can be divided (financial stuff like coins/tokens, perhaps).

Using Hyperledger Fabric in production

I have been using HF for some time, trying different things regarding business network specification and configuration.
But I have a couple of questions regarding best practices (if there are any yet) for using HF in production.
When we talk about using HF in production, should we use docker-compose-base.yaml, docker-compose-cli.yaml, configtx.yaml, etc. as the files used to set up and configure our business network? And if not, can you please describe the best practice?
Thank you for your answers.
You could use Docker Swarm/Compose with derivatives of the sample compose files you referenced, or you could use Kubernetes to manage a network (or a subset of one). Project Cello is working on delivering such capability; the Ansible driver in particular has been demonstrated to work effectively, though it is far from a 1.0 level of maturity.
The reality is that for a production deployment you'll likely want to manage more than just four peer nodes on a single VM or host: you'll manage multiple peers on multiple VMs/hosts, possibly even across multiple networks.
Further, you will obviously need to add management and monitoring to the deployed containers for a true production experience. The Hyperledger chat and mailing lists can be good sources of help and insight.
