Fabric CA Server DB in the Client - hyperledger-fabric

I am building a Hyperledger Fabric blockchain application (docker containers), and I have also created an application gateway (docker container) with Node.js that interacts with it as a backend service.
To my surprise I witnessed the following case:
I registered new users and enrolled them through the application gateway and the official Fabric SDK, using the CAs of the network, and stored them in the Fabric Wallet. I then realized that they seem to be stored in a different fabric-ca-server.db than the one that lives under the /etc/hyperledger/fabric-ca-server/ directory of the Fabric CA docker container.
What I mean is that when I tried to list the identities through the CLI I got zero identities, even though they had been registered through the SDK. And when I tried to register the same identities from the SDK I got the usual message that the user is already registered.
But I was not able to find where the fabric-ca-server.db that the application gateway (SDK) reads from is actually stored.

The DB file can be found in the CA container at the following path:
/etc/hyperledger/fabric-ca-server
You will find that this path is also mentioned in the docker compose file.

The correct path, as stated in the answer above, is the directory inside the Fabric CA container!
My mistake was that, through the application SDK, I was only enrolling some admins without registering them, and enrolling only issues certificates without adding anything to the actual db. After I registered some users and then enrolled them, I was able to view them all in the db inside the docker container.
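For anyone hitting the same confusion, here is a minimal sketch of the register-then-enroll flow with the Node SDK (fabric-ca-client / fabric-network, 2.x style). The CA URL, wallet path, identity labels, affiliation and MSP ID below are assumptions for illustration, not values from my setup.

// Minimal sketch: register a user with the CA (this is what writes it into
// fabric-ca-server.db), then enroll it (issues the certificate) and store the
// identity in the wallet. CA URL, MSP ID, wallet path and labels are assumptions.
const FabricCAServices = require('fabric-ca-client');
const { Wallets } = require('fabric-network');

async function registerAndEnroll() {
  const ca = new FabricCAServices('https://localhost:7054'); // hypothetical CA endpoint
  const wallet = await Wallets.newFileSystemWallet('./wallet');

  // The registrar must already be enrolled (e.g. the bootstrap admin).
  const adminIdentity = await wallet.get('admin');
  const provider = wallet.getProviderRegistry().getProvider(adminIdentity.type);
  const adminUser = await provider.getUserContext(adminIdentity, 'admin');

  // 1) register: adds the identity to the CA's database
  const secret = await ca.register(
    { enrollmentID: 'appUser1', affiliation: 'org1.department1', role: 'client' },
    adminUser
  );

  // 2) enroll: only issues the X.509 certificate for the registered identity
  const enrollment = await ca.enroll({ enrollmentID: 'appUser1', enrollmentSecret: secret });

  await wallet.put('appUser1', {
    credentials: { certificate: enrollment.certificate, privateKey: enrollment.key.toBytes() },
    mspId: 'Org1MSP', // assumed MSP ID
    type: 'X.509',
  });
}

registerAndEnroll().catch(console.error);

Only the register call touches fabric-ca-server.db; enroll on its own just returns a certificate for an identity that must already exist there.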

Related

Run Docker in production with environment variables that are secret and cannot be seen on the server

I need to send environment variables to my application running in a container, but I understand it is bad practice to keep the ".env" file on the server, since the "root" user could read it. What would be the best way to use these variables in my application while leaving no trace on the server, and without using Kubernetes?
There are several solutions, depending on your actual production stack:
(1) Running on a k8s cluster
Kubernetes supports uploading binary data as a Secret. You can mount the Secret into your production pod to decouple your docker image from the secret.
https://kubernetes.io/docs/concepts/configuration/secret/
(2) Docker on a standalone server
This is an equivalent approach to (1), but without native support from k8s: mount the secret into the container as a volume and read it as a file (see the sketch below).
https://docs.docker.com/storage/volumes/
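To illustrate the idea behind (1) and (2): rather than shipping a ".env" file, the secret is mounted as a file and read at startup. This is only a sketch; the mount directory /run/secrets and the secret name are assumptions and depend on how you mount the volume or secret.

// Sketch: read a secret that was mounted into the container as a file
// (a Kubernetes Secret volume or a Docker secret/volume) instead of a .env file.
// The mount path and secret name are assumptions.
const fs = require('fs');

function loadSecret(name, mountDir = '/run/secrets') {
  // Trim the trailing newline most secret files carry.
  return fs.readFileSync(`${mountDir}/${name}`, 'utf8').trim();
}

const dbPassword = loadSecret('db_password'); // hypothetical secret name
// use dbPassword to build the DB connection; never write it to disk or logs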
(3) External Key management service
If you are hosting your application in the cloud, there are many more options to consider. Taking Azure as an example, if you are hosting your application on a virtual machine you could use a service like Azure Key Vault:
https://learn.microsoft.com/en-us/azure/key-vault/general/basic-concepts
The concept is that all your keys are stored in the service and obtained by connecting your server to it. Your application can fetch secrets from Key Vault on the fly, which prevents leaving a secret footprint on your service instance. The connection between the key management service and your virtual machine can be configured in a passwordless way (IAM roles in AWS / managed identity in Azure) so that no secret has to live on your server.
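As a hedged sketch of option (3) using the Node.js Azure SDK packages @azure/identity and @azure/keyvault-secrets: DefaultAzureCredential picks up the VM's managed identity, so no credential is stored on the server. The vault URL and secret name are placeholders.

// Sketch: fetch a secret from Azure Key Vault at runtime via a managed identity.
// Vault URL and secret name are placeholders; DefaultAzureCredential resolves the
// VM's managed identity automatically, so nothing secret is kept on the server.
const { DefaultAzureCredential } = require('@azure/identity');
const { SecretClient } = require('@azure/keyvault-secrets');

async function getDbPassword() {
  const credential = new DefaultAzureCredential();
  const client = new SecretClient('https://my-vault.vault.azure.net', credential); // placeholder vault
  const secret = await client.getSecret('db-password'); // placeholder secret name
  return secret.value;
}

getDbPassword().then((pwd) => {
  // wire pwd into the app config here; avoid logging it
});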

How do clients connect to hyperledger fabric network?

If I have a Fabric network with multiple peers, how can, say, a mobile app (representing the user) query data from that network?
It would need the IP address of at least one peer, but how do I deliver this to the app as dynamically as possible?
You can develop a gateway application using one of the SDKs below.
Fabric comes with:
Node.js SDK
Go SDK
Java SDK
This gateway application will expose APIs that the mobile app can consume whenever required; a rough sketch follows below.
As a reference, there is a sample open source fabcar application here.
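As a rough sketch of what such a gateway can look like with the Node.js SDK (fabric-network) and Express: the connection profile path, wallet, identity label, channel and chaincode names below are assumptions borrowed from the fabcar sample, not values from the question.

// Sketch of a gateway service: the mobile app calls this HTTP API, and the gateway
// talks to the Fabric network on its behalf. Connection profile path, identity label,
// channel and chaincode names are assumptions (fabcar-style).
const express = require('express');
const fs = require('fs');
const { Gateway, Wallets } = require('fabric-network');

const app = express();

app.get('/cars', async (req, res) => {
  const gateway = new Gateway();
  try {
    const ccp = JSON.parse(fs.readFileSync('./connection-org1.json', 'utf8')); // assumed profile
    const wallet = await Wallets.newFileSystemWallet('./wallet');
    await gateway.connect(ccp, {
      wallet,
      identity: 'appUser1', // assumed enrolled identity
      discovery: { enabled: true, asLocalhost: false },
    });
    const network = await gateway.getNetwork('mychannel'); // assumed channel
    const contract = network.getContract('fabcar');        // assumed chaincode
    const result = await contract.evaluateTransaction('queryAllCars');
    res.json(JSON.parse(result.toString()));
  } catch (err) {
    res.status(500).json({ error: err.message });
  } finally {
    gateway.disconnect();
  }
});

app.listen(3000, () => console.log('Gateway API listening on :3000'));

The mobile app then only needs the address of this gateway, not of any peer, which is what makes the peer topology dynamic from the client's point of view.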

Hyperledger Fabric: Unable to Invoke using Node SDK

I am having an issue with the Hyperledger Fabric Node SDK.
Network Details:
The network consists of 4 organisations, each deployed on a different Kubernetes cluster.
Each organisation has 2 peers, all of which have joined a single channel, say mychannel.
Each organisation has 1 CA running.
The ordering service is Raft.
CouchDB is used as the state db.
Invokes from the CLI are working fine and the data is being synced between all 8 peers.
Hyperledger Explorer is up and running with one organisation's details and is able to list all the other 6 peers in the dashboard.
Now, back to the issue: I tried to deploy the Node.js SDK for Org1.
I created a connection profile with the details of the Org1 peers, orderer and CA.
The users are enrolled (Admin and user1).
Now when I try to invoke a transaction there are two cases:
Service discovery enabled: in this case, the SDK tries to communicate with the other peers in the network and the creator org's peers, but all of them return a context deadline exceeded error.
Service discovery disabled: invokes are successful.
I have no idea why the invoke transactions fail when service discovery is enabled.
The above issue was resolved when I added host aliases to my /etc/hosts file.
I needed to add the host name and IP of each peer to the /etc/hosts file.
After adding the host names, the SDK started working; the relevant connect options are sketched after this answer.
Thanks
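For reference, a hedged sketch of the discovery-related connect options in the fabric-network 2.x SDK. With discovery enabled, the client must be able to resolve the peer host names that discovery returns (hence the /etc/hosts entries); asLocalhost should only be true when every peer is reachable through localhost port-forwards. All values here are placeholders.

// Sketch: discovery-related connect options in the fabric-network SDK (v2.x).
// With discovery enabled, the client must resolve the peer host names returned
// by discovery (DNS or /etc/hosts). Profile path and identity are placeholders.
const { Gateway, Wallets } = require('fabric-network');
const fs = require('fs');

async function connectWithDiscovery() {
  const ccp = JSON.parse(fs.readFileSync('./connection-org1.json', 'utf8')); // assumed profile
  const wallet = await Wallets.newFileSystemWallet('./wallet');
  const gateway = new Gateway();
  await gateway.connect(ccp, {
    wallet,
    identity: 'user1', // assumed enrolled identity
    discovery: {
      enabled: true,
      // true only if all peers are reachable via localhost port-forwards;
      // otherwise the discovered host names must resolve from the client
      asLocalhost: false,
    },
  });
  return gateway;
}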
It could be that your anchor peers are configured incorrectly, since discovery reports those and some clients will use them.

Deploying NodeJs to Service Fabric Cluster

Has anyone had any experience deploying a RESTful Node.js service to Service Fabric?
What tools are possible, Jenkins or even Codeship?
Found a knowledge base article from Microsoft here; it looks like you just package it up and drop it into a folder location for Service Fabric to consume:
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-deploy-multiple-apps
To deploy a RESTful Node.js service to Service Fabric you need a Node.js host to actually run your JavaScript. There are three options for that (a minimal server sketch follows the list):
Create a custom executable and deploy it as a Service Fabric guest executable. Example: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-deploy-multiple-apps
Use Docker containers on Service Fabric: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-tutorial-package-containers
Use SupercondActor to host a Node.js API natively inside a Service Fabric stateless service. See the GitHub repo: https://github.com/SupercondActor/platform-app-angular
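For the guest-executable and container options, the packaged binary is just an ordinary Node.js server; a minimal sketch is below. How the port is supplied (environment variable, hard-coded, or an endpoint declared in your ServiceManifest) depends on your packaging, so PORT here is an assumption.

// Minimal sketch of the Node.js service that would be packaged as a Service Fabric
// guest executable (or inside a container). The PORT env variable is an assumption;
// align it with the endpoint declared in your ServiceManifest / container config.
const http = require('http');

const port = process.env.PORT || 8080;

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ status: 'ok', path: req.url }));
});

server.listen(port, () => console.log(`REST service listening on port ${port}`));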

Create one more peer node using hyperledger-composer

I followed the Hyperledger Composer tutorial, created a simple business network definition, deployed it through Composer, and exposed its REST APIs through composer-rest-server. Now I want to add one more peer on a different local machine that can access the blockchain I created previously. My question is: how can I connect a different peer node (another local machine) to the blockchain I created in the Composer tutorial?
You can check the Stack Overflow link provided by Ahmed Nasser relating to adding your peer to an existing Fabric network.
Once you have your Fabric network up and running, and all of your network configuration / resolution / docker configuration tested and working, you can come to Composer to define the connection info, such as adding additional peers (and therefore create the requisite business network cards that contain that info).
This single-organisation tutorial can give you an idea of what's involved; it builds upon a Fabric network that was already created (a simple, one-peer Fabric blockchain dev environment), see here -> https://hyperledger.github.io/composer/tutorials/deploy-to-fabric-single-org.html
It refers to 'localhost' in this scenario; since you are creating something on an IP network, you will need IP addresses / host resolution as appropriate.
