Old versions of chaincode being run unintentionally - node.js

Old chaincode is being run even after I do the following:
1. stop and remove all docker containers with
docker stop $(docker ps -aq) && docker rm $(docker ps -aq)
2. remove shared volume
sudo rm -r prod/
After restarting the network I then try to install chaincode with the same chaincode ID and the same version number as on the old network. Somehow the old chaincode that was deployed on the previous network gets instantiated instead of the new one. There must be a cache somewhere that I am not clearing. These are the volumes set in my docker-compose.yaml. Any help would be great. Thanks
- ../crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/fabric/msp
- ../crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls:/etc/hyperledger/fabric/tls
- ../prod/peer0.org1.example.com:/var/hyperledger/production

You seem to have old images created for the chaincode that were never removed.
I personally run
docker rmi $(docker images | grep 'dev-peer' | awk '{print $3}')
to delete the dev-peer images, which contain the chaincode, before bringing up the network when I don't want to change the chaincode version. (The awk step is needed so that only the IMAGE ID column is passed to docker rmi; piping whole `docker images` lines into it would fail.) Try this out, but note that it removes EVERY image matching the dev-peer string, so any other images named that way will be removed as well.
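The filtering step can be sketched in isolation against a hard-coded sample of `docker images` output (the image names and IDs below are made up for illustration):

```shell
# Hypothetical `docker images` output; real columns are REPOSITORY, TAG, IMAGE ID, SIZE
sample='REPOSITORY                            TAG     IMAGE ID       SIZE
dev-peer0.org1.example.com-mycc-1.0   latest  0df9b6f1e6a9   176MB
hyperledger/fabric-peer               1.4     a840ed7ca8cf   157MB'

# Keep only rows whose repository starts with dev-peer, then print column 3 (the ID)
ids=$(printf '%s\n' "$sample" | awk '$1 ~ /^dev-peer/ {print $3}')
echo "$ids"   # → 0df9b6f1e6a9
```

Against a live daemon the same filter becomes `docker rmi $(docker images | awk '$1 ~ /^dev-peer/ {print $3}')`, which leaves the base Fabric images untouched.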

Related

Hyperledger-Fabric network.sh not working

When I run the network.sh up command I get as result " ERROR: for peer0.org1.example.com Cannot start service peer0.org1.example.com: driver failed programming external connectivity on endpoint peer0.org1.example.com (9dace0451ce23579ca2750b24f788c04c566e9007534c6cf6e472c0bd204ba28): Error starting userland proxy: listen tcp4 0.0.0.0:7051: bind: address already in use"
Can someone help me, please?
As I see it, there are two possible cases:
1. Some other process is also using port 7051. In that case, find and stop that process (the exact steps depend on your OS).
2. More commonly, it is the same container still running from another instance of the network. Verify with docker ps, which lists the running containers; if the same container names are already in use from another directory, remove all the containers before running again.
Command:
docker kill $(docker ps -q)
docker rm -f $(docker ps -aq)
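To distinguish the two cases, it helps to first check whether anything is listening on the port at all. A minimal sketch using bash's built-in /dev/tcp (tools like `lsof -i :7051` or `netstat -tlnp` give more detail, including the owning process):

```shell
port=7051
# Try to open a TCP connection to the port; success means something is listening.
# The subshell closes fd 3 automatically when it exits.
if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
  echo "port $port is in use"
else
  echo "port $port is free"
fi
```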

How to delete a volume in docker if it is being used by containers

I was given a task that asks us to delete a Docker volume which has a container associated with it. I would like to know if there is any way to delete the volume without deleting the container that is using it.
I have tried the following, but it won't let me:
sudo docker volume rm -f vol1
It returns the following.
Error response from daemon: remove vol1: volume is in use - [a90d72c647bf7cdf2a3d8d8f0005163f072e4f6da80f27bca7b81f437f2f21d3]
To remove the volume, you will have to remove/stop the container first. Deleting volumes will wipe out their data. Back up any data that you need before deleting a container.
Stop/Remove the container(s)
Delete the volume(s)
Restart the container(s) (if you did stop)
I have found a way, although it requires stopping the container that uses the volume and deleting it. I leave it here in case anyone is interested, although I would still like to know if there is a way to remove the volume without touching the container:
sudo docker stop <container>
sudo docker rm <container>
sudo docker volume rm <volume>
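Before removing anything, you can list exactly which containers (running or stopped) still hold the volume, which is what the "volume is in use" error is complaining about. A guarded sketch; vol1 is the questioner's volume name, and the script only lists, it removes nothing:

```shell
vol=vol1
# Skip gracefully when no Docker daemon is reachable
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  # Every container shown here must be stopped/removed before `docker volume rm` succeeds
  docker ps -a --filter "volume=$vol" --format '{{.ID}} {{.Names}}'
else
  echo "no docker daemon available"
fi
```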
If the volume was created by Compose, see this question, and also Docker is in volume in use, but there aren't any Docker containers. In the Compose case:
docker-compose down --volumes
To delete all containers including their associated volumes, use:
sudo docker rm -vf $(sudo docker ps -a -q)

Container error when creating channel during running fabcar (fabric-sample)

Getting up to speed with Hyperledger and trying to run through the Hyperledger-Fabric tutorial fabcar, but hitting an error every time that I try to create the channel in the startFabric.sh script.
Here's the error:
Error response from daemon: Container 3640f4fca98aef120a2069292a3fc613954a0fbe7c625a31c2843ec643462 is not running
I ran all the prerequisites and the commands listed, cloned the latest fabric-samples, updated node, and tried longer start times, but I still get this error. If anyone knows where I am going wrong, I would really appreciate some help resolving it. Thanks in advance.
Perhaps worth mentioning that I am running on Windows 7 and using Docker Toolbox.
startFabric.sh output is shown below.
$ ./startFabric.sh node
# don't rewrite paths for Windows Git Bash users
export MSYS_NO_PATHCONV=1
docker-compose -f docker-compose.yml down
Stopping ca.example.com ... done
Stopping couchdb ... done
Removing peer0.org1.example.com ... done
Removing orderer.example.com ... done
Removing ca.example.com ... done
Removing couchdb ... done
Removing network net_basic
docker-compose -f docker-compose.yml up -d ca.example.com orderer.example.com peer0.org1.example.com couchdb
Creating network "net_basic" with the default driver
Creating ca.example.com ... done
Creating couchdb ... done
Creating orderer.example.com ... done
Creating peer0.org1.example.com ... done
# wait for Hyperledger Fabric to start
# incase of errors when running later commands, issue export FABRIC_START_TIMEOUT=<larger number>
export FABRIC_START_TIMEOUT=10
#echo ${FABRIC_START_TIMEOUT}
sleep ${FABRIC_START_TIMEOUT}
# Create the channel
docker exec -e "CORE_PEER_LOCALMSPID=Org1MSP" -e "CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/users/Admin@org1.example.com/msp" peer0.org1.example.com peer channel create -o orderer.example.com:7050 -c mychannel -f /etc/hyperledger/configtx/channel.tx
Error response from daemon: Container 4ebfce361f3e71dd2d678efca1dbf1853cc5387b491f706917b8c54013ec6a80 is not running
docker ps output:
$ docker ps
CONTAINER ID   IMAGE                        COMMAND                  CREATED          STATUS          PORTS                                        NAMES
2d93296f3cb1   hyperledger/fabric-couchdb   "tini -- /docker-ent…"   13 minutes ago   Up 13 minutes   4369/tcp, 9100/tcp, 0.0.0.0:5984->5984/tcp   couchdb
6b8638d0ecaf   hyperledger/fabric-ca        "sh -c 'fabric-ca-se…"   13 minutes ago   Up 13 minutes   0.0.0.0:7054->7054/tcp                       ca.example.com
docker ps -a output:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4ebfce361f3e hyperledger/fabric-peer "peer node start" 15 minutes ago Exited (1) 15 minutes ago peer0.org1.example.com
1187120cdcd0 hyperledger/fabric-orderer "orderer" 15 minutes ago Exited (1) 15 minutes ago orderer.example.com
2d93296f3cb1 hyperledger/fabric-couchdb "tini -- /docker-ent…" 15 minutes ago Up 15 minutes 4369/tcp, 9100/tcp, 0.0.0.0:5984->5984/tcp couchdb
6b8638d0ecaf hyperledger/fabric-ca "sh -c 'fabric-ca-se…" 15 minutes ago Up 15 minutes 0.0.0.0:7054->7054/tcp ca.example.com
For me this issue was caused by the volume handoff from WSL (Windows Subsystem for Linux, available on Windows 10) to Docker. Use Kitematic to view your Docker containers and you will see that the folder path is mangled. Click the container, go to "Settings" and "Volumes", and change the volume manually; that made it run properly.
In my case, docker automatically is changing \c to C:
I was able to resolve it with this trick: https://superuser.com/questions/1195215/docker-client-on-wsl-creates-volume-path-problems/1203233
WSL mounts the windows drives at /mnt/ (so /mnt/c/ = C:). If we create a /c/ bind point, this allows Docker to properly interpret this as C:\
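The /mnt/c → /c mapping above can be sketched as a one-line path rewrite (the path below is a made-up example):

```shell
# Rewrite a WSL mount path (/mnt/c/...) to the /c/... bind-point form
wsl_path='/mnt/c/Users/someuser/blockchain'   # hypothetical example path
bound_path=$(printf '%s\n' "$wsl_path" | sed 's|^/mnt/c/|/c/|')
echo "$bound_path"   # → /c/Users/someuser/blockchain
```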
Here's the process:
Bind the drive and download fabric to your windows user directory by using the following set of commands (one at a time)
sudo mkdir /c
sudo mount --bind /mnt/c /c
cd /c/Users/YOUR_WINDOWS_USERNAME #(go to C:\Users to see what its called)
mkdir blockchain #(or whatever you want to call it)
cd blockchain
curl -O https://raw.githubusercontent.com/hyperledger/composer-tools/master/packages/fabric-dev-servers/fabric-dev-servers.tar.gz
tar -xvf fabric-dev-servers.tar.gz
cd ~/fabric-dev-servers
export FABRIC_VERSION=hlfv12
./downloadFabric.sh
Now you should be able to successfully run Fabric
./startFabric.sh
It may ask you to share the C drive with Docker; you should allow it. You can then check the volume mount points and containers in Kitematic.
Try to do:
Remove all containers - docker rm -f $(docker ps -aq)
Remove network - docker network prune
Remove the chaincode image (if existing) - docker rmi dev-peer0.org1.example.com
Restart docker
Run startFabric.sh again.
Hope this helps.
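The steps above can be collected into one script. A sketch that dry-runs by default, so nothing is destroyed accidentally (set DRY_RUN=0 to actually execute; the image name dev-peer0.org1.example.com comes from this thread and will differ for other chaincodes):

```shell
DRY_RUN=${DRY_RUN:-1}
# Print the command in dry-run mode, otherwise evaluate it
run() {
  if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else eval "$*"; fi
}

run 'docker rm -f $(docker ps -aq)'          # 1. remove all containers
run 'docker network prune -f'                # 2. remove unused networks
run 'docker rmi dev-peer0.org1.example.com'  # 3. remove the old chaincode image
```

After running it for real, restart Docker and run startFabric.sh again, as the answer describes.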
The actual problem is that the location /etc/hyperledger/msp/peer/signcerts is not accessible. For some reason it seems Docker Toolbox looks for this location under the home directory. So, check the home directory in the Docker VM using
echo $HOME (in my case it is C:/Users/knwny) and then place the fabric-samples folder in that location.
This fixed the problem for me.
Got this working by re-running through the fabric installation guidelines: http://hyperledger-fabric.readthedocs.io/en/latest/prereqs.html
The following procedure was entirely performed in a windows account with admin rights
Initial steps:
Set the GOPATH environment variable
Checked that Go is building executables correctly
All next steps in powershell in Admin mode
Ran npm install npm@5.6.0 -g
sudo apt-get install python (note that this command failed for me since PowerShell has no sudo, but Python was already installed anyway and everything seemed to work OK)
Checked that python 2.7 is installed correctly
Set git config --global core.autocrlf false
Set git config --global core.longpaths true
Ran npm install --global windows-build-tools
Ran npm install --global grpc
Ran the following in Git Bash because cURL didn't work in PowerShell:
curl -sSL http://bit.ly/2ysbOFE | bash -s 1.2.0
Note that for the above command to work successfully I needed to have Docker running in a shell, i.e. running the Docker Toolbox installed start.sh script (I ran in Git Bash and worked OK).
Finally, in registerUser.js and enrollAdmin.js, changed localhost to the IP address that Docker reports on startup.
fabric_ca_client = new Fabric_CA_Client('http://localhost:7054', null , '', crypto_suite);
After doing this I re-ran the fabric-samples 1) first-network and 2) fabcar examples and they worked as expected! I think I had missed setting the GOPATH and hadn't successfully run the cURL command the first time I tried.
Thanks to Nhat Duy and knwny for your help to resolve this issue.

Is there a way to create a copy of the docker image on the same host?

I have a requirement where I need to delete old images during the build, but there is a chance they are used by containers on the same host that I cannot stop and remove during the build. So I am wondering if there is a way to duplicate an image and use the duplicate for those external containers.
I have already tried exploring tag and commit, but they do not seem to fit my need.
The dangling filter only matches untagged, unused images, so this removes just the inactive ones:
sudo docker rmi $(sudo docker images -f "dangling=true" -q)
see How to use docker images filter
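A guarded sketch of that answer which first only lists the dangling image IDs, so you can inspect them before piping into docker rmi:

```shell
# Skip gracefully when no Docker daemon is reachable
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  # -q prints only IDs; "dangling=true" matches untagged images
  docker images -f "dangling=true" -q
else
  echo "no docker daemon available"
fi
```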

Does Docker purge old/unused images?

Many organizations are using Docker specifically for the advantage of being able to seamlessly roll back deployed software. For instance, given an image called newapi, deployment looks like this:
# fetch latest
docker pull newapi:latest
# stop old one and terminate it
docker stop -t 10 newapi-container
docker rm -f newapi-container
# start new one
docker run ... newapi:latest
If something goes wrong, we can revert back to the previous version like this:
docker stop -t 10 newapi-container
docker rm -f newapi-container
docker run ... newapi:0.9.2
The problem becomes that over time, our local Docker images index will get huge. Does Docker automatically get rid of old, unused images from its local index to save disk space, or do I have to manually manage these?
It doesn't do this for you, but you can use the following commands to do it manually.
#!/bin/bash
# Delete all containers
sudo docker rm $(sudo docker ps -a -q)
# Delete all images
sudo docker rmi $(sudo docker images -q)
The documentation relating to the docker rm and rmi commands is here: https://docs.docker.com/reference/commandline/cli/#rm
The additional commands are standard bash.
Update Sept. 2016, for the then-upcoming Docker 1.13: PR 26108 and commit 86de7c0 introduce a few new commands that help visualize how much disk space the Docker daemon data is taking, and allow easily cleaning up the "unneeded" excess.
docker system prune will delete ALL dangling data (in order: stopped containers, volumes not used by any container, and images not referenced by any container). With the -a option it removes unused data as well, not just dangling data.
You also have:
docker container prune
docker image prune
docker network prune
docker volume prune
Taken from here.
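Alongside the prune commands, Docker 1.13 also added docker system df, which shows how much space each category (images, containers, volumes) occupies before you decide what to prune. A guarded sketch:

```shell
# Skip gracefully when no Docker daemon is reachable
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker system df   # per-category disk usage, including a RECLAIMABLE column
else
  echo "no docker daemon available"
fi
```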
