I am following the "Writing Your First Application" tutorial to build the sample Hyperledger Fabric application. I am using Ubuntu 16.04 and have installed the prerequisites as well as the binaries and Docker images. After moving into fabric-samples/fabcar and running npm install, I run:
./startFabric.sh
I get the following error:
docker-compose -f docker-compose.yml down
./start.sh: line 13: docker-compose: command not found
I looked into ./startFabric.sh with nano; around line 13 it reads:
starttime=$(date +%s)
LANGUAGE=${1:-"golang"}
It may be irrelevant, but I also have issues running ./byfn.sh -m up, as I have posted in a separate question about byfn. I am not sure whether the two are related, but as it stands I can neither start Fabric nor build a network.
I appreciate any help to solve the issue.
Thank you for your attention.
The error means docker-compose is not installed; it ships as a separate binary from Docker itself. If you have already installed it, check that the folder containing the docker-compose binary is referenced in your PATH environment variable.
https://docs.docker.com/install/
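A quick way to check, plus a minimal install sketch; the release version pinned below is an assumption, so substitute whatever Compose release is current:

which docker-compose      # prints a path if the binary is on your PATH
docker-compose --version  # confirms it runs

# install a standalone docker-compose binary (version number assumed)
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose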
Related
I recently got a new work PC with Ubuntu 22.04 on it. I pulled down my repo from GitHub and decided to give running everything through Docker containers a try.
I found a script in the Laravel docs for running composer install through a container:
docker run --rm \
-u "$(id -u):$(id -g)" \
-v $(pwd):/var/www/html \
-w /var/www/html \
laravelsail/php81-composer:latest \
composer install --ignore-platform-reqs
Instantly I'm getting an error saying:
In Filesystem.php line 254:
/var/www/html/vendor does not exist and could not be created.
I've read through the documentation, but I can't seem to find anything. I've been googling for hours and keep finding people saying stuff about chown etc. but that just messes things up even worse.
I tried installing a brand new application through
curl -s "https://laravel.build/example-app" | bash
Here Sail is installed, and sail up works, but when I try to run composer install or npm install, I get permission issues again.
I'm about to lose my mind here, so I hope someone will be able to help me out.
I had this exact problem today, and I was able to determine that Docker Desktop was the cause (mainly due to this GitHub issue). I was able to resolve my issue by switching my docker context to use Docker Engine:
$ docker context use default
If you are not using Docker Desktop, then I do not know what the solution might be. Setting the folder permissions to 777 did work for me, but like you I was not satisfied with that as an answer.
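If you are unsure which context is currently active, you can list them first; the asterisk in the output marks the one in use:

docker context ls           # shows available contexts, e.g. default, desktop-linux
docker context use default  # switch back to the Docker Engine socket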
Update 3/23: I used the author's package.json, ran npm install on my Mac, upgraded react-scripts to 3.4.0, and updated the Dockerfile to fix a few issues; this version now works: https://github.com/harrywang/my-flask-react-auth/tree/6e65a7deaf89244a41a7c91843f07f4756956f95. However, this does not explain why the previous version did not work.
Update 3/23: if I only replace package.json and package-lock.json at https://github.com/harrywang/my-flask-react-auth/tree/master/services/client with the author's versions at https://gitlab.com/testdriven/flask-react-auth/-/tree/master/services%2Fclient, it works. I don't know why.
Following the tutorial at https://testdriven.io/courses/auth-flask-react/
Docker Desktop 2.2.0.4 on Mac
My code repo is at https://github.com/harrywang/my-flask-react-auth, where you can see the Dockerfile and docker-compose.yml; you can clone it and run docker-compose up -d --build to reproduce my problem.
When I run docker-compose up -d --build, the Flask and database containers work well, but the node container exits with code 0 right after printing "Starting the development server..."
One thing I noticed is that I don't see the [wds] webpack-related messages locally on my Mac; I don't know what they are. But when I go to /services/client and run npm start, the node server starts and works well locally.
There is no error message during the docker build process. I have spent a few hours on this and cannot figure it out. Please help!! Thanks a lot!!
However, the author's repo at https://gitlab.com/testdriven/flask-react-auth, which uses older package versions, does not have this issue.
I had the same problem; I am currently working through the same course. I did everything exactly as written in the guide. I was able to start two of the containers, but the third one (flask-react-auth_client_1) stopped with exit code 0.
I first got a shell in the container with docker run -it --entrypoint sh <image> and then ran npm start.
No problem was detected, so I did a little research.
Apparently, with react-scripts@3.4.0 or react-scripts@3.4.1+, it is no longer possible to run the dev server in the background without the CI flag enabled.
My fix was to add CI=true to the docker-compose.yml under client > environment, as shown in the sketch below.
I got the idea from https://github.com/facebook/create-react-app/issues/8688
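A minimal sketch of that change; the service name, build path, and port mapping are assumptions based on the repo layout described in the question:

version: '3.7'
services:
  client:
    build: ./services/client
    environment:
      - CI=true        # keeps react-scripts from exiting when run non-interactively
    ports:
      - "3000:3000"    # assumed dev-server port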
It seems that the following part is missing from the client app's Dockerfile:
COPY src /usr/src/app/src
COPY public /usr/src/app/public
These copy the source folders into the Docker image; set the two lines above the instruction that starts the dev server in your Dockerfile.
The way you can troubleshoot such issues is to run docker run -it --entrypoint sh <image> to spin up a new container and get a shell into it. Then you can run the command the server intended to run (npm start in our case). From there, you can spot errors that may not have been propagated to docker-compose. In our case, the error was the following:
react-scripts start
Could not find a required file.
  Name: index.html
  Searched in: /usr/src/app/public
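For context, a minimal sketch of how the client Dockerfile might look with those lines in place; the base image, working directory, and start command here are assumptions, not the author's exact file:

# hypothetical client Dockerfile, for illustration only
FROM node:12-alpine
WORKDIR /usr/src/app
# install dependencies first so they are cached across source changes
COPY package.json package-lock.json ./
RUN npm install
# the previously missing lines
COPY src /usr/src/app/src
COPY public /usr/src/app/public
CMD ["npm", "start"]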
In my case, simply adding CI=true to the npm start script in package.json was enough:
"start": "CI=true react-scripts start",
Thanks to p1umyi's answer above.
You must include the command line in your docker-compose.yml:
version: '3'
services:
  redis-server:
    image: 'redis'
  node-app:
    command: npm start # this line
    build: .
    ports:
      - "4001:8081"
I previously installed and worked with Hyperledger Fabric on Ubuntu 16.04, but somehow I deleted the packages. I want to reinstall it and get it working again, but it keeps showing an error in first-network itself.
Can anyone suggest how to start over from the beginning?
Make sure you add go.mod and go.sum for the vendored packages; chaincode in 2.0.0 does not ship with the packages. Follow the go.mod and go.sum structure of the chaincode here:
https://github.com/hyperledger/fabric-samples/tree/master/chaincode/marbles02/go
Then execute the following command to fetch the packages:
go mod download
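As a sketch, the files can be generated from scratch like this; the module path below is a hypothetical placeholder, so mirror the layout the marbles02 sample uses:

go mod init github.com/example/fabcar-chaincode   # hypothetical module path
go mod tidy       # writes go.mod and go.sum from the chaincode's imports
go mod download   # fetches the dependencies into the module cache
go mod vendor     # optional: copies them into a vendor/ folder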
Try pruning the Docker volumes and system:
docker volume prune -f
docker system prune -f
I am using Ubuntu 16.04.2 LTS as a VM, and Composer v0.19.1. I have installed all the prerequisites as well as Hyperledger Composer and Fabric by following the documents at Hyperledger Composer Playground. I have followed the tutorial line by line to make a business network. When I try to install the business network with the following command:
composer network install --card PeerAdmin@hlfv1 --archiveFile tutorial-network@0.0.1.bna
It gives me the following error:
No connection type provided, probably because the connection profile has no 'x-type' property defined.
I have checked and made sure that the PeerAdmin card exists by running:
composer card list
And of course Fabric is started. I would highly appreciate it if someone could point out what I am doing wrong. Thank you.
The second problem you are having is with the createPeerAdminCard.sh script: you are using an 'old' version of the script where the default is Fabric v1.0.
The default assumes hlfv1 because the environment variable FABRIC_VERSION is not set. So the createPeerAdminCard.sh script assumes you want an hlfv1 card and creates the files, but Composer v0.19 can't import that old card at the end of the script.
The quick solution is to export FABRIC_VERSION="hlfv11" and then run createPeerAdminCard.sh (see the sequence below).
I suspect you may also have an hlfv1 / hlfv11 mismatch with the Fabric itself. You can check the Fabric version by running docker ps or docker images: if the images are tagged 1.0, you need to remove them all and run downloadFabric.sh in the same window where you exported the FABRIC_VERSION variable, then run startFabric.sh.
You need to remember to export that environment variable every time you run one of those Fabric tools scripts, so the better answer might be to delete the Fabric tools folder and all Docker images and containers, then download a new version of the Fabric tools, which defaults to Fabric v1.1.
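Putting the quick fix together, the sequence looks roughly like this; it assumes you run the scripts from the Fabric tools folder, all in a single shell session:

export FABRIC_VERSION="hlfv11"   # must stay set for every script below
./downloadFabric.sh              # pulls the matching Fabric v1.1 images
./startFabric.sh
./createPeerAdminCard.sh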
I was getting exactly the same error as you. It turns out that if you updated Composer from version 0.16 or earlier, the card store still has the old PeerAdmin card, which is no longer compatible with version 0.19. Even deleting the card using composer card delete --card <CARD_NAME_HERE> doesn't work. The quick and dirty solution is to manually delete the card store. It is normally in ${HOME}/.composer, so deleting this directory should work:
rm -fr ${HOME}/.composer
For your other problems, the easiest solution is to replace your older version of Hyperledger and do a new install from scratch. That means removing composer as well as killing and removing all previous Docker containers:
docker kill $(docker ps -q)
docker rm $(docker ps -aq)
docker rmi $(docker images "dev-*" -q)
Basically start from a clean slate if you can!
When you upgrade the Composer modules from an earlier version to the latest, the connection profile will no longer be compatible with the system. An ideal solution is to remove the .composer folder from your home directory, recreate it, and create the PeerAdmin card again, as sketched below. Once that is done, you are good to go for installing and starting the new business network application.
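A sketch of those steps; createPeerAdminCard.sh is the Fabric tools script mentioned earlier, and the path assumes the default card store location:

rm -rf ${HOME}/.composer    # remove the incompatible card store
mkdir ${HOME}/.composer     # recreate it empty
./createPeerAdminCard.sh    # create the PeerAdmin card again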
I have a perplexing Docker problem. I am running Docker on my Mint laptop and on an Ubuntu VPS. I have been able to build images locally in the past, send them to the server, and have them run there. However, for clarity, the ones that worked were probably built when I was still running Ubuntu locally (more on that later).
I have an example based on Alpine:
FROM alpine:3.5
# Do a system update
RUN apk update
ENTRYPOINT ["sleep", "3"]
I build like so, and send to the remote:
docker build -t alpine-sleep .
docker save alpine-sleep | gzip > alpine-sleep.tgz
rsync --progress alpine-sleep.tgz myserver.example.com:/path/to/images/
I then unpack/import on the remote, and run, thus:
docker import /path/to/images/alpine-sleep.tgz alpine-sleep
docker run -it alpine-sleep
I get this console reply:
docker: Error response from daemon: No command specified.
See 'docker run --help'.
However, if I copy the Dockerfile to the remote, then do this:
docker build -t alpine-sleep-localbuild .
docker run -it alpine-sleep-localbuild
then I get the sleep working fine.
My Docker and kernel versions locally:
jon@jvb ~/alpine_test $ uname -r
4.4.0-79-generic
jon@jvb ~/alpine_test $ docker -v
Docker version 1.12.6, build 78d1802
And remotely:
root@vps:~/alpine-sleep# uname -r
3.13.0-24-generic
root@vps:~/alpine-sleep# docker -v
Docker version 17.05.0-ce, build 89658be
I wonder, does the major difference in the kernel make a difference? I expect 3.13 to 4.4 is quite a big jump. I don't recall what kernel version I was using when I built things while running Ubuntu locally, but it would not surprise me if it was 3.x.
The other thing that strikes me as unexpected is the large gap between the Docker version numbers. How do I have version 1.x locally and 17.x remotely? Has the project been through a version renumbering?
Update
I've just checked the kernel version when I was running Ubuntu locally, and that was:
4.4.0-75-generic
So, this makes me think that a major kernel discrepancy could not be to blame.
The issue is that Docker won't warn you when you use the wrong combination of save/load and export/import. You save/load an image, but you export/import a tar file made from a container. Since you did a docker save to save your image, you need to do a docker load to restore it on the other host:
docker load < /path/to/images/alpine-sleep.tgz
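For completeness, the corrected end-to-end transfer, reusing the commands from the question:

# on the laptop
docker build -t alpine-sleep .
docker save alpine-sleep | gzip > alpine-sleep.tgz
rsync --progress alpine-sleep.tgz myserver.example.com:/path/to/images/

# on the server: load (not import) so the image keeps its ENTRYPOINT metadata
docker load < /path/to/images/alpine-sleep.tgz
docker run -it alpine-sleep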
I have found this very old issue: https://github.com/moby/moby/issues/1826
An image imported via docker import won't know what command to run. Any image will lose all of its associated metadata on export, so the default command won't be available after importing it somewhere else.
So, run it with the entrypoint:
docker run --entrypoint sleep alpine-sleep 3