Not able to create a Hyperledger Fabric network using Cello - hyperledger-fabric

I started Cello and I am able to open the user dashboard on 8081, but not the operator dashboard (I checked the port and found a service named docker-proxy running on it). Now on the user dashboard I am trying to create a new network, but it throws the error "Apply Chain myorg fail"; I have attached a screenshot. I checked the POST request response and it shows {"success":false,"message":"System maintenance, please try again later!"}

Cello is designed to be deployed on multiple servers: at least 1 Master Node + 1 Worker Node.
Make sure that you have completed the installation of the worker node:
http://hyperledger-cello.readthedocs.io/en/latest/installation_worker_docker/
Then make sure you have added the worker hosts in the operator dashboard and that the chains are active:
http://localhost:8080/view/hosts (add hosts)
http://localhost:8080/view/clusters?type=active (check whether the chains are active)
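If the operator dashboard itself stays unreachable, a quick sanity check on the master node is worth doing first (a sketch; adjust the host and the 8080 port if you changed the defaults):
curl -I http://localhost:8080/view/hosts
docker ps --format '{{.Names}}\t{{.Ports}}' | grep -i dashboard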
For more information on Cello installation, refer to: Hyperledger Cello Installation

Related

Minifabric problem with apiserver and Fabric 2.2.1: error joining the channel in a multihost configuration

I had created a custom network with my organizations and my peers running entirely on a single host, which I was interfacing with via an API server based on this: https://kctheservant.medium.com/rework-an-implementation-of-api-server-for-hyperledger-fabric-network-fabric-v2-2-a747884ce3dc
The problem came when I switched to running the network on 2 hosts using Docker Swarm: when I join the channel from the second host I get the error "unable to contact the endpoint". So I switched to Minifabric, which promises easy use, and indeed the network is customized in a short time; there too it gave me an error when joining the channel from the second host, solved by setting the variable EXPOSE_ENDPOINT = true.
The problem is that now I can no longer get my API server to work. What I did was (as indicated in the Readme) replace the contents of the main.js file with my server code and run the apprun command. This gives an error if I leave the server listening on a port, while it succeeds if I comment out the last 2 lines of the code. But I have no way to query the server if it isn't listening on a port. Summarizing, my questions are:
how can I create an API server like that on Minifabric?
alternatively, how can I solve the problem on base Fabric (I can't find an EXPOSE_ENDPOINT variable to set)? Probably the problem will be the same as the one I had on Minifabric. Thanks to those who will help me.

How does the Fabric CLI get the IPs of peers/orderers in the byfn example?

Could anybody tell me how the CLI knows the IPs of the other peers and orderers just from the Host entries in configtx.yaml?
When is the DNS information generated?
Can anybody also tell me some more about the configuration "CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock"?
When you run a Fabric example, it always refers to default credentials or an already configured Fabric configuration.
For example, if you use the basic Fabric example, you will run [your directory]/fabric-dev-servers/startFabric.sh.
This script refers to already configured information; one piece of it is the connection profile. If you look at the createPeerAdmin.sh file, you can find DevServer_connection.json. This file contains the connection information for the Fabric network.
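Such a connection profile typically looks like this (a trimmed sketch using common fabric-dev-servers defaults; hosts and ports may differ in your setup):
{
  "name": "hlfv1",
  "version": "1.0.0",
  "peers": {
    "peer0.org1.example.com": { "url": "grpc://localhost:7051" }
  },
  "orderers": {
    "orderer.example.com": { "url": "grpc://localhost:7050" }
  },
  "certificateAuthorities": {
    "ca.org1.example.com": { "url": "http://localhost:7054" }
  }
}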
As you are using byfn.sh, you can add the host IP addresses using "extra_hosts" in the docker-compose.yaml file.
If there is no such definition, it will use localhost as the default.
https://medium.com/1950labs/setup-hyperledger-fabric-in-multiple-physical-machines-d8f3710ed9b4
like this:
extra_hosts:
  - "peer0.org1.example.com:192.168.1.10"
  - "ca.org1.example.com:192.168.1.15"
  - "peer0.org2.example.com:192.168.1.20"
  - "ca.org2.example.com:192.168.1.25"

How to get the cf ssh-code password

We are using the CF Diego API, version 2.89. Currently I am able to use it and see the vcap and app resources when running cf ssh myApp.
Now it's become harder :-)
I want to deploy App1 that will "talk" with App2
and have access to its file system (as it is available on the command line when you run ls...) via code (node.js). Is it possible?
I've found this lib, which provides the ability to connect over SSH via code, but I'm not sure what I should put inside host, port, etc.
In the connect I provided the password, which should be retrieved via code.
EDIT
const { Client } = require('ssh2');  // the lib mentioned above
const conn = new Client();
conn.on('ready', () => {
  console.log('SSH connection established');
  conn.end();
}).connect({
  host: 'ssh.cf.mydomain.com',
  port: 2222,
  username: 'cf:181c32e2-7096-45b6-9ae6-1df4dbd74782/0',
  password: 'qG0Ztpu1Dh'
});
Now when I use cf ssh-code (to get the password) I see a lot of requests, which I tried to simulate via Postman without success.
Could someone assist? I need to get the password value somehow...
If I don't provide it, I get the following error:
SSH Error: All configured authentication methods failed
Btw, let's say that I cannot use CF Networking functionality, volume services and I know that the container is ephemeral....
The process of what happens behind the scenes when you run cf ssh is documented here.
It obtains an SSH token; this is the same as running cf ssh-code, which is just getting an auth code from UAA. If you run CF_TRACE=true cf ssh-code you can see exactly what it's doing behind the scenes to get that code.
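In essence, the trace boils down to one authorization request against UAA. A rough sketch (confirm the exact UAA host and the ssh-proxy client id via cf curl /v2/info, fields authorization_endpoint and app_ssh_oauth_client):
curl -sk "https://uaa.mydomain.com/oauth/authorize?client_id=ssh-proxy&response_type=code" \
  -H "Authorization: $(cf oauth-token)" \
  -o /dev/null -D -
The one-time password is the code query parameter in the Location header of the 302 response. Each code is single-use, which is also why a previously captured value stops working.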
You would then need an SSH client (probably a programmatic one) to connect using the following details:
port -> 2222
user -> cf:<app-guid>/<app-instance-number> (ex: cf:54cccad6-9bba-45c6-bb52-83f56d765ff4/0)
host -> ssh.system_domain (look at cf curl /v2/info if you're not sure)
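Both of those can be read straight from the platform, for example:
cf curl /v2/info    (look for app_ssh_endpoint and app_ssh_oauth_client)
cf app myApp --guid    (prints the app GUID used in the username)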
Having said this, don't go this route. It's a bad idea. The file system for each app instance is ephemeral. Even if you're connecting from other app instances to share the local file system, you can still lose the contents of that file system pretty easily (cf restart) and for reasons possibly outside of your control (unexpected app crash, platform admin does a rolling upgrade, etc).
Instead store your files externally, perhaps on S3 or a similar service, or look at using Volume services.
I have exclusively worked with PCF, so please take my advice with a grain of salt given your Bluemix platform.
If you have a need to look at files created by App2 from App1, what you need is a common resource.
You can expose an S3 resource as a CUPS (user-provided) service: create a service instance and bind it to both apps. That way both will read / write to the same S3 endpoint, as sketched below.
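A sketch of the binding steps (the credential keys are illustrative; they only need to match what your apps read from VCAP_SERVICES):
cf create-user-provided-service shared-s3 -p '{"endpoint":"...","access_key_id":"...","secret_access_key":"...","bucket":"..."}'
cf bind-service App1 shared-s3
cf bind-service App2 shared-s3
cf restage App1 && cf restage App2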
Quick Google search for Bluemix S3 Resource shows - https://console.bluemix.net/catalog/infrastructure/cloud_object_storage
Ver 1.11 of Pivotal Cloud Foundry comes with Volume Services.
Seems like Bluemix has a similar resource - https://console.bluemix.net/docs/containers/container_volumes_ov.html#container_volumes_ov
You may want to give that a try.

Service Fabric app not starting

I keep getting errors related to conflicting ports. When I set a breakpoint inside Program.cs at the line containing
ServiceRuntime.RegisterServiceAsync
it actually stops there more than once per service in the Service Fabric project, which is obviously why it's trying to bind to the same port more than once! Why is it doing this all of a sudden?!
HttpListenerException: Failed to listen on prefix 'https://+:446/' because it conflicts with an existing registration on the machine.
The problem is that the HttpListener is trying to bind to a port that is already in use. The cause of this problem can be one of the following (the netsh commands after this list show what is already registered with http.sys):
Another process is already using the port. Try netstat -ano to find out which process is using the port, and then tasklist /fi "pid eq <pid of process>" to find the process name.
Maybe you are starting your development cluster as a multi-node instance; that way, several nodes on one machine try to claim the same port.
Maybe you have a frontend and an API that you want to run on the same port; then you have to use the path-based binding capabilities of http.sys (if you are using the WebListener).
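To see what is already registered or reserved with http.sys (the "existing registration" the exception complains about), you can run:
netsh http show servicestate view=requestq
netsh http show urlacl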
If this fails, could you please post a snippet of your ServiceManifest.xml?
There should be a line defining your endpoint: <Endpoint Protocol="https" Type="Input" Port="446" />
In your ApplicationManifest you define how many instances of your service you want. The common mistake people make is to set this number to more than 1; it will fail because your local cluster shows 5 nodes, but they all run on the same machine, so the port can only be bound by the first instance that starts.
Set the number of instances to 1 and you won't see multiple entries into the main entry point in Program.cs.
Make it configurable via ApplicationParameters, so you can define this number per environment.
You say that you didn't have to set the instance count before; that could be because you use publish profiles that differ between Cloud and Local deployments. The profile points to the corresponding ApplicationParameters file, in which you can set the instance count to 1 for local deployments.
Perhaps something happened to your publish profiles?
ApplicationParameters/Local.1Node.xml:
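A minimal sketch of such a file (the application and parameter names are illustrative and must match the ones declared in your ApplicationManifest.xml):
<?xml version="1.0" encoding="utf-8"?>
<Application xmlns="http://schemas.microsoft.com/2011/01/fabric" Name="fabric:/MyApp">
  <Parameters>
    <!-- 1 instance for the local cluster; -1 would mean "run on every node" -->
    <Parameter Name="MyService_InstanceCount" Value="1" />
  </Parameters>
</Application>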

What is the gcloud command to allow http traffic on a VM instance? (It's not create firewall rule!)

First, I wish to use purely gcloud commands to achieve my objective - NOT the GCE interface - so please don't provide answers using the GUI!
I created an image from a disk attached to a VM instance. In order to do so, I had to delete the instance, per the Google documentation for creating images. After that, I recreated my instance using the image.
Almost everything seems to have worked perfectly from that process, except that HTTP and HTTPS traffic is now disabled on the instance! I can no longer browse to the website hosted on the VM. I also cannot get a response by pinging the domain anymore.
When I look in the GCE gui (just looking - not modifying anything!) I can see that the checkboxes for the "Allow http traffic" and "Allow https traffic" are not checked for the instance. It seems that must be related to my problem.
I checked the firewall rules on the server (iptables) and on the Google network associated with the VM. There is nothing wrong with either of those (and the VM is definitely associated with that network). If I listen on port 80 using tcpdump on the server and browse to my domain, I can see the requests reaching the server, so they aren't blocked by an incoming firewall. I also explicitly restarted Apache, just to make sure that wasn't the problem.
So, is there something I need to do to unblock ports 80 and 443 on an outgoing basis instead? Is this possibly an SELinux thing? Since the image should represent exactly what was on the disk, it shouldn't be. It seems this must be on the GCE side...
What do those checkboxes actually do for the instance if they don't edit iptables on the server or the firewall rules on the Google network? What is the gcloud command to set those switches, or ideally specify that with an instance create command?
Solved. I don't entirely understand what is going on behind the scenes, but the solution requires the use of "tags", which associate firewall rules on the network with the VM instance. As far as I can see at this point, this is only pertinent for HTTP and HTTPS; other ports that are open on the network and the VM seem to work without this additional piece.
If you view your firewall rules, you'll probably see the port 80 and 443 rules have the tags "http-server" and "https-server" respectively. If they don't, you'll need to add those (or other tags of your choosing). It turns out the instance needs those tags added to it as well.
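You can check this from gcloud as well (default-allow-http is the name the console gives its default rule; yours may differ):
gcloud compute firewall-rules list
gcloud compute firewall-rules describe default-allow-http
The describe output includes a targetTags field, and that is what has to line up with the tags on the instance.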
To add the tags to an existing VM instance, use this gcloud command:
gcloud compute instances add-tags [YOUR_INSTANCE_NAME] --tags http-server,https-server
To add the tags at the time of the instance creation, include that flag in your statement:
gcloud compute instances create [YOUR_INSTANCE_NAME] --tags http-server,https-server
If you look in the GCE gui, you'll see those "Allow http traffic" and "Allow https traffic" checkboxes are checked after doing that. Requests and responses then flow across ports 80 and 443 as expected.
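To confirm the tags actually landed on the instance:
gcloud compute instances describe [YOUR_INSTANCE_NAME] --format="value(tags.items)"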
One of the super helpful things the Google Cloud Console offers is a link at the bottom of the create page for most resources showing the REST API call and the command line to create the same resource. I am challenging myself to be able to do everything I can do in the console from the SDK command line, so I use this often when I have a question like yours.
Having the same question as above, in the console I created a VM and selected "Allow HTTP traffic". Looking at the command line for this, you will see two commands. The first is the create command with the tag as noted above (http-server):
gcloud beta compute --project=XXXX instances create cgapperi-vm1 \
--zone=XXXXX --machine-type=f1-micro --subnet=default \
--tags=http-server --image=debian-10-buster-v20200413 \
--image-project=debian-cloud --boot-disk-size=10GB \
--boot-disk-type=pd-standard --boot-disk-device-name=cgapperi-vm1 \
--no-shielded-secure-boot --shielded-vtpm --shielded-integrity-monitoring \
--reservation-affinity=any
The second actually creates the firewall rule (default-allow-http) for you, and sets the target for requests to the http-server tag (--target-tags=http-server) on tcp port 80 (--rules=tcp:80) from incoming requests (--direction=INGRESS) from all sources (--source-ranges=0.0.0.0/0):
gcloud compute --project=XXXX firewall-rules create default-allow-http \
--direction=INGRESS --priority=1000 --network=default --action=ALLOW \
--rules=tcp:80 --source-ranges=0.0.0.0/0 --target-tags=http-server
I hope this is helpful for anyone else.
NOTE: I did reduce the output of the gcloud compute instance create to relevant bits in order to reduce the clutter.
As per the details in this link:
https://cloud.google.com/vpc/docs/special-configurations
"By selecting these checkboxes, the VPC network automatically creates a default-http or default-https rule that applies to all instances with either the http-server or https-server tags. Your new instance is also tagged with the appropriate tag depending your checkbox selection."
So ticking these boxes tags your server and creates the necessary firewall rule for you, which will apply to all servers with that tag. From a gcloud perspective, I guess you would need to ensure the tag is created and applied, and that the rule is also created, for it to do what the console option does for you.
