When I start my network (./byfn.sh up), I get an error saying that peer cannot run:
Cannot run peer because cannot init crypto, folder "/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp" does not exist
!!!!!!!!!!!!!!! Channel creation failed !!!!!!!!!!!!!!!!
========= ERROR !!! FAILED to execute End-2-End Scenario ===========
What can I do to solve my problem? So far I have:
- deleted the samples and installed them again
- removed the images
- stopped and restarted Docker
- checked all the requirements (all are installed)
I was able to solve it now.
@AdityaJoshi, thank you for your suggestion (the first step).
So what I did:
docker volume prune
docker ps -aq | xargs docker kill
docker ps -aq | xargs docker rm
./byfn.sh up
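For reference, the steps above can be combined into one cleanup sketch. This is only a sketch, with two small tweaks: `xargs -r` skips the kill/rm step entirely when no containers exist, and the volume prune is moved after container removal so that volumes still attached to containers can actually be reclaimed.

```shell
# Cleanup sketch combining the steps above (run before ./byfn.sh up).
# Guarded so it is a no-op on machines without docker installed.
if command -v docker >/dev/null 2>&1; then
  docker ps -aq | xargs -r docker kill   # -r: do nothing on empty input
  docker ps -aq | xargs -r docker rm
  docker volume prune -f                 # -f: skip the confirmation prompt
else
  echo "docker not found; nothing to clean up" >&2
fi
```

After the cleanup, re-run ./byfn.sh up as in the answer above.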
I'm using GitHub Codespaces and I have prebuilt my dev container with the feature kubernetes-helm-minikube, which ships with minikube pre-installed in the dev container.
My question is: even with prebuilding, my dev container users still have to download the gcr.io/k8s-minikube/kicbase image when they run minikube start in a newly created codespace. Is there a way to cache the minikube start resources in the prebuild process, or at least cache the gcr.io/k8s-minikube/kicbase image with the prebuild?
Could anyone point out the right way to do that? Greatly appreciate any suggestions.
What I've tried:
I've read the docs at https://docs.github.com/en/codespaces/prebuilding-your-codespaces/configuring-prebuilds and https://containers.dev/implementors/json_reference/#_lifecycle-scripts
I tried running minikube start on container create:
"onCreateCommand": [
"nohup bash -c 'minikube start &' > minikube.log 2>&1"
]
and the prebuild workflow failed with this error:
$ nohup bash -c 'minikube start &' > minikube.log 2>&1
OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: "nohup bash -c 'minikube start &' > minikube.log 2>&1": executable file not found in $PATH: unknown
I tried running docker pull gcr.io/k8s-minikube/kicbase:v0.0.36 on container create, and a similar error occurred:
starting container process caused: exec: "docker pull gcr.io/k8s-minikube/kicbase:v0.0.36": stat docker pull gcr.io/k8s-minikube/kicbase:v0.0.36: no such file or directory: unknown
I tried running minikube start with postCreateCommand, and the result made it clear that postCreateCommand is only executed after the dev container has been assigned to a user, not during prebuild (as the prebuild docs state).
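A likely cause of both "executable file not found" / "no such file or directory" errors above: in devcontainer.json, a lifecycle command given as an array of strings is invoked directly, without a shell, so the entire string "nohup bash -c 'minikube start &' > minikube.log 2>&1" is looked up as a single executable name. A command given as one string is run in a shell instead. A sketch of the string form (whether minikube's downloads actually get captured in the prebuild snapshot is a separate question):

```json
// devcontainer.json fragment: the string form runs via a shell,
// so quoting, backgrounding, and redirection are interpreted.
"onCreateCommand": "nohup bash -c 'minikube start &' > minikube.log 2>&1"
```

The same applies to the docker pull attempt: as an array element it would need to be split into separate argument strings, or given as a single string so a shell parses it.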
I just installed Docker on my Linux machine, and the first command I ran, docker run alpine -d,
gives this error:
docker: Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "-d": executable file not found in $PATH: unknown.
ERRO[0000] error waiting for container: context canceled
I'm new to Docker, please help!
Move the -d flag before the image name, right after the run command. Everything after the image name is interpreted as the command to run inside the container, which is why Docker tried to execute "-d" as a program.
docker run -d alpine
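The underlying rule is the docker run synopsis: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]. A small sketch of the difference (guarded so it is a no-op where docker is unavailable; the sleep command is just an illustrative container command):

```shell
# docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
#   docker run alpine -d   -> "-d" becomes the container COMMAND (fails)
#   docker run -d alpine   -> -d is parsed as an option (detached mode)
if command -v docker >/dev/null 2>&1; then
  docker run -d alpine sleep 1 || true   # detached; container runs "sleep 1"
else
  echo "docker not available here" >&2
fi
```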
I'm an Ubuntu user. I use the following Docker image: tensorflow/tensorflow:nightly-gpu
If I try to run this command
$ docker run -it --rm --gpus all tensorflow/tensorflow:nightly-gpu bash
I get a permission denied error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: Running hook #0:: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: mount error: open failed: /sys/fs/cgroup/devices/user.slice/devices.allow: permission denied: unknown.
Of course, I can run this command with sudo, but I want to use the GPU without sudo.
Is there any good solution? Any leads, please?
Since your problem only appears when running with --gpus:
Add/update these two sections of /etc/nvidia-container-runtime/config.toml:
[nvidia-container-cli]
no-cgroups = true
[nvidia-container-runtime]
debug = "/tmp/nvidia-container-runtime.log"
Source: https://github.com/containers/podman/issues/3659#issuecomment-543912380
If you can't use docker without sudo at all
If you are running in a Linux environment, add your user to the docker group so you won't need to use sudo every time. The steps:
$ sudo groupadd docker
$ sudo usermod -aG docker $USER
$ newgrp docker
Source: https://docs.docker.com/engine/install/linux-postinstall/
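Note that the usermod step above only takes effect in a new login session (or after newgrp docker in the current one). A quick sketch to check whether the current session already has the group:

```shell
# Does the current session's group list include "docker"?
# id -nG prints the group names of the current user.
if id -nG | grep -qw docker; then
  echo "docker group active in this session"
else
  echo "docker group not active; log out/in or run: newgrp docker"
fi
```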
I am trying to create and join a channel by entering the cli container using the command docker exec -it cli bash.
But I am getting the following error response:
Error response from daemon: Container dasdjha343343xxxxx is not running.
First stop and remove all your running containers, then re-run the exact container; lastly, when you bash into a container on Windows 10, use winpty:
$ docker stop $(docker ps -a -q)
$ docker ps -qa | xargs docker rm
$ cd fabric-samples/first-network
$ docker-compose -f docker-compose-cli.yaml up -d
$ winpty docker exec -it cli bash
In your working folder, update to the latest version by running:
$ git clone https://github.com/hyperledger/fabric-samples.git
A newer version should resolve this issue.
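Either way, before exec'ing in again you can confirm whether the cli container is actually running (the name cli comes from the compose file above); this is a sketch, guarded so it is a no-op where docker is unavailable:

```shell
# Show the name and status of the cli container, whether running or not.
if command -v docker >/dev/null 2>&1; then
  docker ps -a --filter name=cli --format '{{.Names}}: {{.Status}}'
else
  echo "docker not available here" >&2
fi
```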
My Dockerfile is
FROM node:4
RUN npm install -g yarn
WORKDIR /app
I run docker run -d and mount my current working directory as a volume. All the deps are installed by yarn. I have an npm script to lint the files.
If I do docker exec -it [container] npm run lint, it works as expected and I can see all the logs. But if I do docker exec -itd [container] npm run lint, it exits immediately, which is expected; however, I can't see the logs by running docker logs [container]. How do I reattach to the exec, or just see its logs?
I tried docker attach [container], but it goes to the Node.js REPL. Why is that?
As mentioned in "Docker look at the log of an exited container", you can use docker logs
docker logs -t <container>
That will show stdout/stderr (with timestamps because of the -t option).
For the last 50 lines of those logs:
docker logs -t <container id> | tail -n 50
Note: that will only work if npm run lint is run as your container's main process (docker run <image> npm run lint).
If your docker exec exits immediately, then yes, there would be no logs produced by the container itself.
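On the attach behaviour: docker logs and docker attach only see the container's main process (PID 1), and with no command given the node image drops into the interactive node REPL, which matches what you observed. A detached exec is a separate process whose output is not captured anywhere, so one workaround (a sketch; the container name and log path are placeholders) is to have the exec'ed command write its own log file inside the container:

```shell
# Run lint detached, writing its output to a file inside the container,
# then read that file with a second exec. "mycontainer" is a placeholder.
if command -v docker >/dev/null 2>&1; then
  docker exec -d mycontainer sh -c 'npm run lint > /tmp/lint.log 2>&1' || true
  docker exec mycontainer cat /tmp/lint.log || true
else
  echo "docker not available here" >&2
fi
```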