I have generated an Angular app using the Hyperledger Composer generator, and now I want to be sure: is it suitable for building a large-scale application?
The Hyperledger Composer Angular generator is used to generate an example web application from a provided business network archive (.bna) or a running business network. For the business network in question, it discovers the defined schema (e.g. assets, participants, and transaction classes) and produces a skeleton Angular application that connects to a Composer REST server. Building a large-scale application, by contrast, usually requires a deliberate design and architectural approach (scaling, capacity, zones, recovery, and so on).
The main purpose of the generator is to provide a sample web application which allows developers to understand how their business network can become a complete application.
The generator is a useful tool because it provides a functioning use case, with a business network as its basis, to demonstrate interaction with the blockchain to interested parties. There are plenty of resources on Google to help you, depending on what approach you wish to take from an Angular app-design perspective.
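To give a feel for what the generated skeleton does, here is a hedged sketch of the kind of Angular data service it produces, one per modelled class, each a thin wrapper around the Composer REST server's CRUD endpoints. The Vehicle asset and the REST server URL here are assumptions for illustration, not the generator's exact output:

```typescript
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';

// Sketch of a generated data service for a hypothetical Vehicle asset.
// composer-rest-server exposes one CRUD endpoint per modelled class.
@Injectable()
export class VehicleService {
  // Default composer-rest-server address; adjust to your deployment.
  private readonly apiUrl = 'http://localhost:3000/api/Vehicle';

  constructor(private http: HttpClient) {}

  // GET /api/Vehicle -- list all Vehicle assets in the registry
  getAll(): Observable<any[]> {
    return this.http.get<any[]>(this.apiUrl);
  }

  // POST /api/Vehicle -- add a new Vehicle asset
  add(vehicle: any): Observable<any> {
    return this.http.post<any>(this.apiUrl, vehicle);
  }
}
```

Everything beyond this thin CRUD layer (state management, error handling, scaling concerns) is left to you, which is why the output is best treated as a starting point rather than a production architecture.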
I have a question.
Let us consider a scenario: a large OEM, a few large vendors, and many small vendors are participating in a blockchain network. The small vendors may not be able to afford to set up separate infrastructure or a dedicated node to participate in the network. In that case:
1) Will we be able to enable the OEM or any of the large vendors to provide access for the small vendors to participate in the blockchain network via their own node (i.e. infrastructure as a service), for a small subscription fee?
2) Will we be able to create private channels for those small vendors, providing privacy from the large vendor whose infrastructure is being used?
3) Will they be set up as sub-organisations of the large vendor (logically, in the Membership Service Provider, not a real-world acquisition or anything like that), like what we commonly see in LDAP?
If a small vendor is using infrastructure provided by a larger vendor, then unless the small vendor has the necessary permissions to generate its own certificates and configure its own peers, and the infrastructure provider has no access to those certificates, the small vendor is implicitly trusting that larger vendor with its data. Since this removes one of the advantages of a blockchain solution, it might be preferable for the small vendor to use its own cloud-hosted infrastructure.
This could be done with only a single VM for a small, low-throughput implementation. VM templates could even be provided for popular cloud providers to make it easy for the smaller vendors to configure their peers.
I recently started trying to grasp the concepts of Hyperledger Composer.
Based on what I understand, Hyperledger Composer is just a layer on top of Hyperledger Fabric with the purpose of simplifying how things are done.
The confusion came when I tried to understand the difference between participants (a Composer term) and peers (a Fabric term). Based on the definition of the former, I understand that participants are some kind of clients of the blockchain network (e.g. a car manufacturer, a car buyer) that have a user interface and interact with the blockchain through a REST API. Peers, on the other hand, are the actual nodes in the network. Intuitively, these concepts seem related to each other, in the sense that an organization (participant) needs to contact its own node (peer) in the network, where this peer has specific read/write rights.
In their example networks they use a default network configuration (crypto-config.yaml) in which they define, among other things, a single peer. However, I am allowed to create different types of participants with only a single peer in the network. Moreover, a single REST API is generated for the entire network.
For a network of two parties (e.g. a car manufacturer and a car quality-assurance guy), it would make sense to me to have 2 participants (clients with a UI), 2 peers (one with read/write rights and one with read-only rights), and 2 REST APIs (one for the car manufacturer and one for the car QA guy). However, that doesn't seem to be how Composer works.
1) Is my understanding that different types of participants need to have their own peer in the network wrong?
2) Why do they generate a single REST API including methods for every participant in the network, and not multiple APIs that can be used by different clients with different rights?
To answer your questions first:
1) Your description that
I understand that participants are some kind of clients of the blockchain network (e.g. a car manufacturer, a car buyer) that have a user interface and interact with the blockchain through a REST API. Peers, on the other hand, are the actual nodes in the network.
is indeed correct and that is how I understand it after using Composer for more than half a year in multiple projects. However, the statement that
different types of participants need to have their own peer in the network
is not quite true. As you correctly stated, Composer is an abstraction over Fabric and aims to simplify prototype development on Fabric significantly. As a result, some of the nuances of Fabric are lost. For instance, it is incredibly complicated to run Composer with support for multiple channels (in the Fabric sense).
In the case of participants vs. peers, they are different concepts with little to no relation to each other. Peers belong to the Fabric world, and they are responsible for running the Fabric blockchain infrastructure. In the basic tutorial (for Fabric, which is also used in Composer), you have just one peer in the entire Fabric network. Once you have a Fabric network running, you can use Composer to model and deploy business networks however you wish.

Note the distinction between a Fabric network and a business network. A Fabric network refers to the underlying blockchain infrastructure built with Fabric, while a business network is a model built with Composer. Participants live in the business network modelled and deployed using Composer, while peers are the backbone running the blockchain infrastructure. Hence, the two are weakly related in that without the peers you simply can't have any business network at all. However, once you have a network running, the participants are almost entirely independent of the Fabric peers.
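To make that distinction concrete, here is a hedged sketch using the composer-client API (the network name, namespace, and card name are hypothetical): the participant is created in a registry inside the business network, and nothing in this code knows or cares how many Fabric peers run underneath.

```typescript
import { BusinessNetworkConnection } from 'composer-client';

async function addManufacturer(): Promise<void> {
  // Connect using a business network card (hypothetical card name).
  const connection = new BusinessNetworkConnection();
  await connection.connect('admin@car-network');

  // Participants live in registries inside the business network,
  // regardless of how many Fabric peers run underneath.
  const factory = connection.getBusinessNetwork().getFactory();
  const manufacturer = factory.newResource('org.example', 'Manufacturer', 'MANU_001');

  const registry = await connection.getParticipantRegistry('org.example.Manufacturer');
  await registry.add(manufacturer);

  await connection.disconnect();
}
```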
2) You have generated a single REST API most likely because the tutorial is worded that way. If you recall, when you bring up the REST API, you need to specify a business network card. Hence, each owner of a business network card can very well run their own REST API. In practice, you would issue an identity and a business network card for each and every participant in the business network. Each participant will have different permissions, granted by the access controls you created when you modelled the business network (recall that these access controls are written in ACL). Hence, even though every participant and every REST API can see all the available methods, they can't invoke the ones they are not supposed to invoke. Of course, you have to model the access control policies properly in ACL.
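As a rough sketch of that flow (participant type and identifiers are hypothetical), issuing an identity with composer-client looks something like this; the returned user ID and secret are then used to build that participant's own business network card:

```typescript
import { BusinessNetworkConnection } from 'composer-client';

async function issueVendorIdentity(): Promise<void> {
  const connection = new BusinessNetworkConnection();
  await connection.connect('admin@car-network'); // admin card (hypothetical)

  // Bind a new identity to an existing participant; at runtime the
  // identity gets only the permissions the ACL grants that participant.
  const result = await connection.issueIdentity(
    'org.example.Manufacturer#MANU_001', 'manu-001-user');

  // result.userID and result.userSecret are used to build the
  // participant's own business network card.
  console.log(result.userID, result.userSecret);

  await connection.disconnect();
}
```

Starting the REST server with such a per-participant card then gives that participant an API whose effective permissions come from your ACL rules.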
Here are some of my thoughts on Composer.
Hyperledger Composer is just a layer on top of Hyperledger Fabric with the purpose of simplifying how things are done.
This is indeed true, but it is a pity that they will be dropping support for Composer (see this update by the authors). Therefore, it is recommended that production software not be run on Composer. However, I personally find it extremely easy and quick to create prototypes (with a nice UI) using Composer, and I would continue using it for prototypes despite its deprecation, simply because it is incredibly easy to use and free from major problems.
When I was trying out Fabric, I found many other blockchain projects, like Composer, Cello, and Explorer. They all belong to Hyperledger. I'm very confused that there are so many projects. Should I learn all of them? It seems each project plays a role in the blockchain. But:
What's the relation between them?
I drew a picture to explain my question. The picture is not correct; I just want to make my question clear.
If I can figure this out, then when we want to use blockchain in our project, I can study just a few of them and use them appropriately.
Fabric provides a framework to set up a blockchain network. It is data/application agnostic.
Composer provides a set of tools to define a business network on top of Fabric. This provides a higher level of abstraction than Fabric where the data are essentially just bits. With Composer you can define assets, transactions, etc.
Cello helps with provisioning of the network.
Explorer simply provides a web based interface to explore what's on a blockchain.
Fabric is a permissioned blockchain distributed ledger; in the end, it is an implementation of a permissioned blockchain.
Within the Hyperledger project there is a suite of tools for building blockchain business networks, which they call "Hyperledger Composer". It is used for developing, testing, and deploying applications on the blockchain, and also for integrating your blockchain with external systems.
Cello is a blockchain provisioning and operations system that helps manage blockchain networks and enables Blockchain-as-a-Service (BaaS). It is not a blockchain itself; it is used to manage blockchain networks.
Check this link for a better understanding of Cello: Hyperledger Cello
Explorer provides KPIs that show what is on the blockchain.
Cello is an admin tool that helps you monitor the hosts and networks, and the containers inside them. Cello actually uses Fabric scripts: if you look around in Fabric, you will find the byfn (build your first network) script, which sets up containers and an example peer.
Cello uses this script to create containers, show you their status, and edit or delete them.
Composer provides tools to help you create your business network cards and your smart contract. To write your smart contract, you need to write the ACL (access control language, which defines the rules), your logic, and the definitions of your participants and assets.
Composer will then create a .bna archive file and install it on the blockchain.
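For example, the "logic" part is an ordinary JavaScript transaction processor function. Here is a minimal hedged sketch, where the Trade transaction, Commodity asset, and namespace are hypothetical, modelled on Composer's own commodity-trading tutorial:

```javascript
/**
 * Sample transaction processor for a hypothetical Trade transaction.
 * The @transaction annotation tells Composer to run this function
 * whenever a Trade transaction is submitted.
 * @param {org.example.Trade} trade - the submitted transaction
 * @transaction
 */
async function tradeCommodity(trade) {
  // Reassign ownership of the traded asset.
  trade.commodity.owner = trade.newOwner;

  // getAssetRegistry is a global provided by the Composer runtime.
  const registry = await getAssetRegistry('org.example.Commodity');
  await registry.update(trade.commodity);
}
```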
There are some modules in Composer, like Playground and the REST server, and there is also a generator for the front end.
Fabric is the framework that sets up the blockchain network; it is the base that Cello and the other modules build on.
I hope this helps.
I have been assigned to design a layered microservices architecture for Azure Service Fabric, but since my experience has mostly been with monolithic architectures, I can't come up with a specific solution.
What I have thought of so far is:
Data Layer - This is where all the Code First entities reside, along with the DbContext.
Business Layer - This is where the Service Managers perform and enforce the Business Logic, e.g. UserManager (IUserManager), OrderManager (IOrderManager), InvoiceManager (IInvoiceManager), etc.
WebAPI (Self-Hosted Inside Service Fabric) - This WebAPI does nothing except receive requests and call the respective services in Service Fabric. The WebAPI layer would also handle authentication and authorization (ASP.NET Identity) before passing the call on to other services.
Service Fabric Services - UserService, OrderService, InvoiceService. These services are invoked from the WebAPI layer and have the Business Layer (IUserManager, IOrderManager, IInvoiceManager) injected via DI to perform their operations.
Do you think this is okay to proceed with?
One theoretical issue, though: while reading several microservices architecture resources, I found that all of them suggest keeping the business logic inside the service, so that the specific service can be scaled independently. So I believe I'm violating a basic tenet of microservices.
I'm doing this because the customer's requirement is to use this Business Layer across several projects, such as batch jobs (Azure WebJobs) and a backend dashboard for internal employees (ASP.NET MVC). If I don't keep the Business Layer the same, I have to write the same business logic again for the WebJobs and the backend dashboard, which I feel is not a good idea, since a simple change in business logic would then require code changes in several places.
One more concern: in that case, I have to use service-to-service communication for ACID transactions. For example, while creating an Order, both an Order and an Invoice must be created. So I thought of using event-driven programming, i.e. the Order Service emits an event which the Invoice Service subscribes to, creating the Invoice on creation of the Order. But the complication is: if the Invoice Service fails to create the invoice, it can either keep retrying indefinitely (which I think is a bad idea), or emit another event for the Order Service to subscribe to and roll back the order. There can be a lot of confusion with this.
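To make the flow I have in mind concrete, here is a rough sketch (all names are hypothetical, and a real implementation would use a durable message bus rather than in-memory calls):

```typescript
// Rough sketch of the compensation flow (all names hypothetical).
// A real implementation would use a durable message bus, not in-memory calls.
interface Bus { publish(topic: string, payload: unknown): Promise<void>; }

interface OrderCreated { orderId: string; amount: number; }

class InvoiceHandler {
  constructor(
    private bus: Bus,
    private createInvoice: (orderId: string, amount: number) => Promise<void>,
  ) {}

  async onOrderCreated(event: OrderCreated): Promise<void> {
    try {
      await this.createInvoice(event.orderId, event.amount);
      await this.bus.publish('InvoiceCreated', { orderId: event.orderId });
    } catch {
      // Compensating event instead of retrying forever: the Order Service
      // subscribes to this and rolls the order back.
      await this.bus.publish('InvoiceFailed', { orderId: event.orderId });
    }
  }
}

class OrderHandler {
  constructor(private cancelOrder: (orderId: string) => Promise<void>) {}

  async onInvoiceFailed(event: { orderId: string }): Promise<void> {
    // Compensation on the Order side: cancel rather than delete, so the
    // order's history stays auditable.
    await this.cancelOrder(event.orderId);
  }
}
```

This is essentially the saga pattern with compensating actions, as far as I understand it.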
Also, I should mention that we are using a single database as of now.
So my questions are...
What issue do you see with my approach? Is it okay?
If not, please suggest me a better approach. You can guide me to some resources for implementation details or conceptual details too.
NOTE: The client's requirement is to be able to scale specific modules as needed. For example, UserService might not be used much, as there won't be many signups or user-profile changes daily, but OrderService may need to scale out, as there can be lots of orders coming in daily.
I'll be glad to learn, as this is my first chance to get hands-on experience designing a microservices architecture.
First of all, why does the customer want to use Service Fabric and a microservices architecture when, at the same time, it sounds like there are other parts of the solution (WebJobs etc.) that will not be part of that architecture but rather live in their own ecosystem (yet share logic)? I think it would be good for you to first understand the underlying requirements that should guide the architecture. What is most important?
Scalability? Flexibility?
Development and deployment? Maintainability?
Modularity, i.e. the ability to compose new solutions from autonomous microservices?
The list could go on. Until you figure this out, there is really no point in designing further, as you don't know what you are designing for...
As for sharing business logic with WebJobs, there is nothing preventing you from sharing code packages containing the same BL; it doesn't have to be a shared service, and it doesn't mean it has to be packaged the same way in relation to its interface or persistence. Another thing to consider: why do you want to run WebJobs when you can build similar functionality in SF services?
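To illustrate the shared-package idea, here is a sketch with hypothetical names; in your stack it would be a shared .NET class library rather than TypeScript, but the shape is the same:

```typescript
// Sketch of the idea, shown in TypeScript for illustration; in this stack
// it would be a shared .NET class library. All names are hypothetical.

// --- shared business-logic package, written and versioned once ---
export interface IOrderManager {
  placeOrder(userId: string, amount: number): Promise<string>;
}

export class OrderManager implements IOrderManager {
  async placeOrder(userId: string, amount: number): Promise<string> {
    // Business rules live here, in exactly one place.
    if (amount <= 0) throw new Error('amount must be positive');
    return `order-${userId}-${Date.now()}`;
  }
}

// --- consumer 1: a Service Fabric service gets the manager injected ---
export class OrderService {
  constructor(private manager: IOrderManager) {}
  handleRequest(userId: string, amount: number): Promise<string> {
    return this.manager.placeOrder(userId, amount);
  }
}

// --- consumer 2: a batch job reuses the identical package ---
export class NightlyOrderJob {
  constructor(private manager: IOrderManager) {}
  run(): Promise<string> {
    return this.manager.placeOrder('batch-user', 42);
  }
}
```

A change to the business rules then happens once, in the package, and both consumers pick it up on their next build.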
Background
Having run through the Getting started with API Apps and ASP.NET in Azure App Service tutorial (https://azure.microsoft.com/en-gb/documentation/articles/app-service-api-dotnet-get-started/), an architecture question arose for us today around the design decision to split the To Do List application's API layers into a middle-tier API app and a data-tier API app.
When approaching the build of an application using a distributed architecture, what considerations should determine when this type of separation should occur in your API layers?
Another way of asking this question: what are the pros and cons of having a separate middle-tier API app and data-tier API app when building your application?
Other Questions
I had a read of Web apps architecture: 1 or n API (see the link that follows), which, while insightful, was slightly different from the question we are asking. We are talking about a single domain that has separate API layers for the middle (logic) tier and the data tier.
Web apps architecture: 1 or n API
This, of course, depends. Deciding whether to build out what I call "infrastructure services" is very strongly dependent on your needs and your application(s).
Infrastructure tier services generally get much more re-use than business logic tier services. They are very easy to recompose into new applications. The most common instance of this is building an admin interface as a separate application.
If you have already built several applications in your organization and have found reuse occurring regularly, then I would seriously contemplate infrastructure services. If your organization is writing its first application, and you don't see this fanning out to additional interfaces, then maybe just isolate your data access behind a DAO pattern; it's fairly straightforward to refactor it out into a stand-alone service later.
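As a hedged sketch of that isolation (the entity and endpoints are hypothetical): the rest of the application depends only on the interface, so swapping the in-process implementation for a client of a stand-alone data service later touches a single class.

```typescript
// Hypothetical example: callers depend only on the interface, so the
// implementation can later move behind a stand-alone data service.
interface User { id: string; name: string; }

interface UserDao {
  findById(id: string): Promise<User | null>;
  save(user: User): Promise<void>;
}

// Today: an in-process implementation backed by a local store.
class InMemoryUserDao implements UserDao {
  private users = new Map<string, User>();

  async findById(id: string): Promise<User | null> {
    return this.users.get(id) ?? null;
  }

  async save(user: User): Promise<void> {
    this.users.set(user.id, user);
  }
}

// Later: a drop-in replacement that calls a remote data service
// (hypothetical URL; assumes a fetch-capable runtime). Callers don't change.
class RemoteUserDao implements UserDao {
  constructor(private baseUrl: string) {}

  async findById(id: string): Promise<User | null> {
    const res = await fetch(`${this.baseUrl}/users/${id}`);
    return res.ok ? ((await res.json()) as User) : null;
  }

  async save(user: User): Promise<void> {
    await fetch(`${this.baseUrl}/users/${user.id}`, {
      method: 'PUT',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(user),
    });
  }
}
```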
I think the example design is somewhat confusing. In the real world, I have not seen such a design, because it seems to turn every function into an HTTP/RPC call.
In my experience, the SPA uses a public API (or gateway API), which then calls your internal APIs / microservices to aggregate results. It is your microservices that have the DAOs and, most importantly, the business logic.
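A rough sketch of that shape (endpoints, ports, and response forms are assumptions for illustration): the gateway fans out to the internal services and aggregates, while the DAOs and business rules stay inside the microservices.

```typescript
// Hypothetical gateway endpoint aggregating two internal microservices.
// Internal URLs and response shapes are assumptions for illustration;
// assumes Node 18+ (global fetch) and the express package.
import express from 'express';

const app = express();

app.get('/api/orders/:id/summary', async (req, res) => {
  const id = req.params.id;

  // Fan out to the internal services; each owns its DAOs and business logic.
  const [order, invoice] = await Promise.all([
    fetch(`http://order-service:8081/orders/${id}`).then(r => r.json()),
    fetch(`http://invoice-service:8082/invoices/by-order/${id}`).then(r => r.json()),
  ]);

  // The gateway only aggregates; it holds no business rules of its own.
  res.json({ order, invoice });
});

app.listen(3000);
```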