Deploying SimpleStorage contract with Kaleido - Truffle

I have been following this tutorial to connect the consortium I have created using the Kaleido UI to Truffle: link. When I finally run ./truffle_migrate.sh,
it gets stuck; here is the output:
$ ./truffle_migrate.sh
+ truffle migrate --network supnode --reset
Using network 'supnode'.
Running migration: 1_initial_migration.js
Saving artifacts...
Running migration: 2_deploy_simplestorage.js
  Deploying SimpleStorage...
  ... 0xd6d9cfe1ab5b01abb759fb8280920d8f7ba0cef73340af22e47a9c7e40120c14
I don't understand where the problem is; I'm sure I have followed the tutorial carefully and created the same scenario. If anyone has any idea, I would appreciate it. Thanks.

So I went through it step by step in an attempt to recreate your scenario - 3 nodes running Quorum + Raft and a private transaction between nodes 1 & 3.
On the initial migration attempt I came across the same hung state as you observed. Inspection of the block explorer revealed that both contracts were actually deployed; however, neither was invoked (i.e. no state was set for SimpleStorage via the migration file).
I then changed the truffle_migrate.sh file to target the original privateFor node (3) and used the original targeted node (1) as the new privateFor recipient. This worked immediately. The question is why :)
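To make the flip concrete, here is a minimal sketch of the migration with a Quorum-style privateFor option (this assumes the Kaleido sample uses the standard Quorum deploy options; the constructor argument and the key are placeholders, not values from the sample):

    // migrations/2_deploy_simplestorage.js -- sketch only
    const SimpleStorage = artifacts.require("SimpleStorage");

    module.exports = function (deployer) {
      // privateFor lists the private-transaction public key(s) of the recipient.
      // After the flip, we deploy via node 3 and list node 1's key here.
      deployer.deploy(SimpleStorage, 42, {
        privateFor: ["<node 1 public key, base64>"],
      });
    };

The corresponding truffle.js network entry would then point at node 3's RPC endpoint instead of node 1's.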
Truffle is finicky sometimes, especially when using Raft with private transactions.
I would suggest:
Check your block explorer to ensure that the connection to the network was successful and the contracts were deployed.
Kill the running migrate process and just kick it off again.
If that doesn't work, try the flip flop process I described.
I'm curious whether there is a correlation with targeting the Raft leader, or if perhaps Truffle just needs a few extra nudges sometimes. We will investigate.
FYI, I originally tried with a public transaction and a truffle.js file containing only a single node. This also worked immediately. So my supposition is that there is some nuance to private transactions with Raft.
In the meantime this should give you a potential workaround; you'll just have the original Migrations and SimpleStorage contracts as orphans in your environment.

Related

Minio Distributed Mode Error All ServerPool Must Have Same Deployment ID

I've been trying to set up a MinIO server in distributed mode using 2 nodes, but every time I try, I get the error "All serverPools should have same deployment ID expected xxx, got yyy".
I'm setting up MinIO on Ubuntu servers.
I followed the instructions in the official MinIO docs here, but I can't find any mention of this error or any tutorial for making the deployment ID the same.
Does anybody know what this error means or how to make the deployment IDs the same?
Thanks!
I've typically seen this happen when users attempt to do something similar to the following:
Start MinIO with minio server http://minio.example.net/mnt/disk-{1...4}
Later try to 'migrate' to distributed mode as minio server http://minio-{1...4}.example.net/mnt/disk-{1...4}
The second command is an entirely different topology, and results in a new deployment ID. When MinIO checks the existing backend disks, it sees that there is existing metadata, with a deployment ID that was generated based on the original topology. It then throws an error.
We would need to know quite a bit more about what you are attempting here - whether this is a fresh deployment, what MinIO version you are using, what the startup command was, etc. - before being able to debug any further. But the above would be my guess as to what the issue is.
If this is a fresh deployment and you have no data to be concerned with, you can completely clean the backend drives of all data - including the .minio.sys folder at each path - and go from there. If you keep having the same issue with completely empty backend drives, then that is a little more unusual, and might be better suited as a GitHub issue so we can try to track it down further.
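For example, if the drives follow the layout from the commands above, a full reset might look like this (a sketch only - run it on every node, and only if there is truly no data to keep; the paths come from the example topology, not from your deployment):

    # WARNING: destroys all MinIO data and metadata on these drives
    for i in 1 2 3 4; do
        rm -rf "/mnt/disk-$i/.minio.sys"   # per-drive metadata, including the deployment ID
        rm -rf "/mnt/disk-$i/"*            # any existing buckets and objects
    done

After that, starting all nodes with the same distributed-mode command should generate a single fresh deployment ID.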

Fabric 1.4.2 -> 2.0 alpha System Minimum HW requirements?

We are currently trying to deploy a POC Fabric network with 5 orgs participating across two channels, with private data on each channel, using Raft. This mirrors the HLF sample network for 4 orgs outlined in the docs.
It has been suggested on RocketChat and elsewhere that we may have a system resource issue; however, there is no, as in ZERO, info on minimum system requirements for HLF installs -- so we are asking for input based on the scenario/issues below. We are running on current macOS. Docker is dedicated to HLF test cases only, so there are no other containers running -- ever.
We have run through all the tutorials successfully with a couple of exceptions:
-- When we spin up BYFN per the HLF tutorial steps, the console (terminal) output indicates success for START and END, yet we see via Kitematic that two entities -- sometimes orderers, sometimes peers, sometimes one of each -- are stopped. These containers exhibit no errors. Containers for other running entities may have errors listed, but eventually recover (see the diagnostic sketch after this list).
-- Commercial Paper seems to run as per the docs. We will be testing FabCar again today.
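For reference, this is how we are checking whether the stopped containers exited cleanly or were killed for memory (a diagnostic sketch; orderer.example.com is a container name from the BYFN sample):

    # List every container, including stopped ones, with its status
    docker ps -a --format 'table {{.Names}}\t{{.Status}}'
    # For a stopped container, check the exit code and whether Docker OOM-killed it
    docker inspect --format 'ExitCode={{.State.ExitCode}} OOMKilled={{.State.OOMKilled}}' orderer.example.com
    # Tail the last log lines before it stopped
    docker logs --tail 50 orderer.example.com
    # Compare live usage against the memory/CPU allocated to Docker Desktop
    docker stats --no-stream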
So my ask: what are the minimum and suggested system configurations (memory, available disk, etc.) for a viable HLF POC test case? We are more than confident in all the prerequisite software installed.
One final note -- we do not believe this is a system sizing issue but wanted to confirm, as HLF's dev team has confirmed to us that there were in fact issues with mixed 1.4.x and 2.0 builds on GitHub, which appear to have since been resolved.
We want to take system config off the table so we can move forward with actual HLF implementation issues.
Thanks for any suggestions in advance -- the AWS et al. guides do not apply as we are running locally in a private network.

Azure Ethereum (PoW) blockchain issues

I'm trying to run an Ethereum blockchain network with Azure Blockchain Service, but I am stuck on some issues. Below I describe one of them.
I would really appreciate any advice and help from people who already have experience with Azure Blockchain Service.
Could you please help me to eliminate this error:
"Error: Contract transaction couldn't be found after 50 blocks"
I created a new Ethereum blockchain network and it worked fine for approximately 1 day (24 hours), but after this period I was no longer able to deploy my smart contract and got this error. When I restart my virtual machines (mining and transaction nodes), it starts working for a while but later fails again.
What could be the cause of this error: "Error: Contract transaction couldn't be found after 50 blocks"? I tried increasing the gasPrice but it didn't help.
What is the recommended server hardware configuration for mining and transaction nodes running Azure Ethereum Blockchain Service? Maybe my virtual machines are running out of RAM or SSD?
Also, could you please tell me where I can see error logs related to my blockchain network in Azure?
Thanks!
Please take a look at this GitHub issue detailed in the Truffle Suite (issue) repository. The issue you are experiencing is caused by Truffle Suite, and can be quickly summarized as:
... now believe this may be caused by Infura closing the connection while truffle-hdwallet-provider continues to try to poll them.
Contract transaction couldn't be found after 50 blocks
using infura in truffle and get Error: Invalid JSON RPC response:""
Investigating...
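In the meantime, one setting worth trying (my assumption based on the error text, not a confirmed Azure-specific fix): the "50 blocks" in the message is Truffle's default transaction block timeout, which can be raised per network in the Truffle config:

    // truffle.js -- sketch; the network name, endpoint, and mnemonic are placeholders
    const HDWalletProvider = require("truffle-hdwallet-provider");
    const mnemonic = "<your 12-word mnemonic>";

    module.exports = {
      networks: {
        azure: {
          provider: () => new HDWalletProvider(mnemonic, "https://<transaction-node-endpoint>"),
          network_id: "*",
          timeoutBlocks: 200, // wait up to 200 blocks instead of the default 50
        },
      },
    };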

Hyperledger Composer gets stuck when upgrading the network

Here are the steps that I went through:
Stop and tear down Fabric
Start Fabric
Create a business network using yo hyperledger-composer
Create a .bna archive and install it
Start the network with version 0.0.1
Import the card to the Playground
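For context, the archive/install/start/import steps map roughly to these commands (a sketch based on the Developer tutorial; the network name and admin credentials are placeholders):

    composer archive create -t dir -n .    # produces my-network@0.0.1.bna
    composer network install --card PeerAdmin@hlfv1 --archiveFile my-network@0.0.1.bna
    composer network start --networkName my-network --networkVersion 0.0.1 --networkAdmin admin --networkAdminEnrollSecret adminpw --card PeerAdmin@hlfv1 --file networkadmin.card
    composer card import --file networkadmin.card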
All these steps work fine, but when I start the Playground and try to upgrade the business network with my changes, the browser gets stuck on
Please Wait: Your new business network is being upgraded
Upgrading business network using PeerAdmin#hlfv1 (2/2)
and never responds
Here is what I see in the logs of composer-playground:
info: [Hyperledger-Composer] :ConnectionProfileManager :getConnectionManagerByTyp Looking up a connection manager for type 0=hlfv1
Maybe someone has already faced this kind of issue and knows how to solve it? Or should I upgrade it manually in the local environment?
P.S. I am new to Composer; I found all these steps in the Developer tutorial.
The composer network upgrade command and its equivalent action in the Composer Playground generate a new docker "chaincode image" and "chaincode container". Creating the image and starting the container is what takes the time. You will see that you now have redundant docker containers and images of previous versions of the Business Network. This is intended behaviour of Hyperledger Fabric (and Composer) but you may want to do some housekeeping to remove the old versions.
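For reference, the CLI equivalent of the Playground upgrade is roughly (a sketch; the network name and version are placeholders):

    # Install the new version of the BNA, then upgrade the running network
    composer network install --card PeerAdmin@hlfv1 --archiveFile my-network@0.0.2.bna
    composer network upgrade --card PeerAdmin@hlfv1 --networkName my-network --networkVersion 0.0.2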
If you are in the early stages of development and experimentation, generating lots of versions of networks, you can use the 'Web Profile' in the Playground, which simulates a Fabric in the LocalStorage of the browser. It is much faster, but if you use it, be sure to periodically export to a BNA; otherwise you might lose work if there is a browser issue or upgrade.
Updated following Comment
The command docker ps can be used to see all running containers (docker ps -a will also show stopped containers). docker stop is used to stop a container and docker rm to remove it.
Docker containers are running (or stopped) instances of Docker images, so you will also want to remove the redundant images. You can list the images with docker images and remove them with docker rmi.
The Docker web site has a full list of commands.
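Putting that together, the housekeeping might look like this (a sketch; dev-* is the naming pattern Fabric uses for chaincode containers and images):

    # Remove old chaincode containers (named dev-<peer>-<network>-<version>)
    docker rm -f $(docker ps -aq --filter "name=dev-")
    # Remove the now-redundant chaincode images
    docker rmi $(docker images "dev-*" -q)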
Interestingly, the process of upgrading the network took more time than I thought, so the solution turned out to be simple:
Wait 3-4 minutes until the process finishes and do not click anywhere in the browser (by mistake I tried to reconnect to the card, and in that case the upgrade process fails).
Additionally, it is important to mention that the manual process of upgrading the card (using the CLI) takes the same amount of time.

Cannot start multiple nodes for RabbitMQ cluster on Windows

I am trying to set up multiple RabbitMQ nodes in a Windows environment. Based on the official guide, I am setting up 2 nodes, but that's where the problem starts.
My first node is successfully created and up and running, but I cannot start the 2nd node.
Check the output below. (All the commands are executed from an admin cmd. Erlang and Python are also present. All precautionary steps were taken as per the guide, along with the management plugin.)
You can see above that my "hare" node is running, but the second node "rabbit" fails to start.
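For reference, the two nodes are defined per the "cluster on a single machine" section of the guide, roughly like this (a sketch, run from an admin cmd; if the management plugin is enabled, its listener port must also differ per node):

    set RABBITMQ_NODENAME=rabbit
    set RABBITMQ_NODE_PORT=5672
    rabbitmq-server.bat -detached

    set RABBITMQ_NODENAME=hare
    set RABBITMQ_NODE_PORT=5673
    rabbitmq-server.bat -detached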
I also replaced the Erlang cookie as per a similar Stack Overflow question. Still the problem persists.
Any help is appreciated. Thanks.
For anyone facing a similar problem: I changed my approach and was able to run a RabbitMQ cluster successfully.
I moved my cluster to Linux and faced no problems. Although this satisfies my current needs, any solution to the above Windows problem is still welcome.
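For anyone who also goes the Linux route, the joining step looked roughly like this (a sketch following the official clustering guide; node1 is a placeholder hostname, and /var/lib/rabbitmq/.erlang.cookie must already match on both machines):

    # On the second node, join it to the cluster formed by the first
    rabbitmqctl stop_app
    rabbitmqctl join_cluster rabbit@node1
    rabbitmqctl start_app
    rabbitmqctl cluster_status   # verify both nodes are listed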
Cheers.
