docker run is failing - linux

I'm seeing the below error with Docker running on RHEL 7 on top of VirtualBox.
I'm just trying to use the hello-world image:
[root@localhost ~]# docker run hello-world
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"/hello\": stat /hello: no such file or directory": unknown.

It seems you may not have execute rights on the directory where you are trying to run Docker.
Try using a directory that you have execute rights to, such as your home directory.
Or you may need to run chmod +x on the directory you are running the docker command from.
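A quick sketch of that suggestion (the paths here are only examples):
cd ~                 # switch to a directory you own
ls -ld .             # confirm the execute (x) bit is set on the directory
docker run hello-world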

Try this command:
sudo service docker restart
It usually helps on Ubuntu.
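Since the question is about RHEL 7, which uses systemd, the equivalent there should be:
sudo systemctl restart docker
sudo systemctl status docker    # check that the daemon came back up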

The / cannot be used as a volume.
Change "/hello" to "hello".
I suppose it is in the Dockerfile?
Check fromlatest.io for Dockerfile errors.

Related

gitlab local docker instance: "vboxmanage": executable file not found in $PATH

I used
docker run --rm -it -v /srv/gitlab-runner/config:/etc/gitlab-runner gitlab/gitlab-runner register
to install a VirtualBox runner on my GitLab, which also runs on Docker. My host computer has access to the vboxmanage command, but the CI gives this error:
ERROR: Preparation failed: exec: "vboxmanage": executable file not found in $PATH
Could it be that it's trying to access vboxmanage from inside the container and therefore not finding it? If so, how can I give it access to the vboxmanage that lives outside Docker?

Installation tools such as make/apt/apt-get/dpkg/rpm are not found in the image from 'docker run xxx docker'; how do I install an OpenSSH server?

Docker in Docker is great, but the biggest problem for me is that I can't get sshd working.
What I want is to use 'docker run xxxx docker' to start a container as a virtual instance, and then 'ssh xx@hostIP -p XXX' to connect to this container.
But it is hard to install third-party software in the image, because make/apt/apt-get/dpkg/rpm are all missing; it is similar to a minimal Linux system. So how should I install an OpenSSH server and start the service?
Thanks for your help!
docker pull docker
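For what it's worth, the official docker image is based on Alpine Linux, so its package manager is apk rather than apt or rpm. A minimal, untested sketch of baking sshd into it might look like this (the root password and image name are only placeholders):
FROM docker
# apk is Alpine's package manager; the openssh package provides sshd
RUN apk add --no-cache openssh \
    && ssh-keygen -A \
    && echo 'root:changeme' | chpasswd \
    && sed -i 's/#PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
# clear the base image's entrypoint so sshd becomes the main process
ENTRYPOINT []
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Build and run it roughly like docker build -t docker-sshd . && docker run -d -p 2222:22 docker-sshd, then ssh root@hostIP -p 2222.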

unable to evaluate symlinks in Dockerfile path: lstat <path> no such file or directory

I'm trying to run tacotron2 on Docker within Ubuntu 20.04 on WSL2, on a Win10 2004 build. Docker is installed and running, and I can run hello-world successfully.
(There's a nearly identical question here, but nobody has answered it.)
When I try to run docker build -t tacotron-2_image docker/ I get the error:
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /home/nate/docker/Dockerfile: no such file or directory
So then I navigated in bash to where Docker is installed (/var/lib/docker) and tried to run it there, and got the same error. In both cases I created a docker directory, but I kept getting that error.
How can I get this to work?
As mentioned here, the error might have nothing to do with symlinks, and everything to do with the lack of a Dockerfile, which should be in the Tacotron-2/docker folder.
docker build does mention:
The docker build command builds Docker images from a Dockerfile and a “context”.
A build’s context is the set of files located in the specified PATH or URL.
In your case, docker build -t tacotron-2_image docker/ is supposed to be executed in the path where you have cloned the Rayhane-mamah/Tacotron-2 repository.
To be sure, you could specify said Dockerfile, but that should not be needed:
docker build -t tacotron-2_image -f docker/Dockerfile docker/
Or:
cd
git clone https://github.com/Rayhane-mamah/Tacotron-2
cd Tacotron-2
cd docker
docker build -t tacotron-2_image .
I thought the commands I'm executing were for the purpose of installing it.
To build the image, you need the sources (the repository to clone).
If your Dockerfile is named with a capital F in the wrong place (for example DockerFile), rename it to Dockerfile.
For others like me who somehow couldn't get it to work because of a symlink:
just copy your files out to a new directory that hasn't been symlinked and build your image from there,
but only if you've confirmed that your Dockerfile isn't named dockerfile, .Dockerfile, DockerFile, or dockerfile.txt.
My OS is elementary OS, which is based on Ubuntu.

Parent Docker Containers using Docker in Docker

I am working on a Jenkins SSH agent for my builds.
I want to have Docker installed so it can run and build Docker images.
I currently have the following in my Dockerfile:
RUN curl -fsSL get.docker.com -o /opt/get-docker.sh
RUN chmod +x /opt/get-docker.sh
RUN sh /opt/get-docker.sh
This works fine when I run docker with
docker run -v /var/run/docker.sock:/var/run/docker.sock <image>
The issue I'm having is that when I run docker ps within the container, it shows all my parent containers as well. Is there a way to prevent this?
If you mount the host's /var/run/docker.sock, your docker client will connect to the host's docker daemon, and so it sees everything that is running on the host.
To make it so your containers can run Docker in a way that appears isolated from the host, you should investigate Docker-in-Docker.
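As a rough sketch of that alternative (the container name here is arbitrary), the official docker:dind image runs its own isolated daemon, although it needs the --privileged flag:
# start an isolated Docker daemon in a container
docker run --privileged -d --name build-daemon docker:dind
# the docker CLI inside it talks to that daemon, not the host's
docker exec -it build-daemon docker ps    # shows an empty list, not the host's containers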

Issue docker commands on Jenkins slave

I have a Jenkins master running on Windows Server 2016. I need to be able to run Linux containers to run some automated e2e tests. For reasons I won't get into, I cannot enable Hyper-V on this machine. This is preventing me from installing LCOW and Docker on my Jenkins master.
What I've done instead is set up an Ubuntu 18.04 VM in VirtualBox and installed Docker there. I've configured the VM as a Jenkins slave, using SSH to log in as the jenkins user. I've set up and configured everything for this user to be able to run docker commands without using sudo. If I manually SSH into the server as the jenkins user, I can run docker commands without an issue. Everything works the way you would expect.
I've then set up a test build to check that everything was working correctly. The problem is that when I try to run docker commands using the Execute Shell build step, I get a docker: not found error. From what I can tell, the build is running as the correct user. I added who -u to the build step so I could check which user the build was running as.
Here is the output from my build:
[TEST - e2e - TEST] $ /bin/sh -xe /tmp/jenkins16952572249375249520.sh
+ who -u
jenkins pts/0 2018-08-10 16:43 . 10072 (10.0.2.2)
+ docker run hello-world
/tmp/jenkins16952572249375249520.sh: 3: /tmp/jenkins16952572249375249520.sh: docker: not found
As I mentioned, the jenkins user has been added to the docker group and Docker has been added to $PATH (/snap/bin/):
jenkins@jenkins-docker-slave:~$ which docker
/snap/bin/docker
jenkins@jenkins-docker-slave:~$ $PATH
-bash:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin: No such file or directory
jenkins@jenkins-docker-slave:~$ who -u
jenkins pts/0 2018-08-10 16:43 . 10072 (10.0.2.2)
jenkins@jenkins-docker-slave:~$ cat /etc/group | grep docker
docker:x:1001:qctesting,jenkins
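As an aside, typing $PATH on its own makes the shell try to execute the expanded value, which is why the "No such file or directory" line appears above; the usual way to inspect it is:
echo "$PATH"    # prints the search path instead of trying to execute it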
As you can see from this snippet, I can successfully run docker commands by logging into the server as the jenkins user:
jenkins@jenkins-docker-slave:~$ docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/engine/userguide/
I have also configured the path to docker in the slave's node properties, as I thought it would fix my issue. As you can see, I have both Git and Docker listed. Git commands are working just fine; it is only the docker commands that are giving me problems. I have tried both /snap/bin and /snap/bin/docker with no luck.
I am trying to build a Jenkins job that will clone a git repo, spin up the containers I need using docker-compose and some build parameters I pass in at build time, and run my e2e tests against any environment (qa, staging, production, etc.). I just can't get the Jenkins slave to run the docker commands. What am I missing? How can I get the slave to recognize that Docker is already installed on the system and that the user has the correct permissions to execute those commands?
NOTE: I am NOT trying to run docker in docker. Practically all questions/documentation I've found on running docker commands on a jenkins slave describe how to solve this issue by running the slave in a docker container and installing the docker client in the slave container. That is not what I'm trying to accomplish. I am trying to ssh from a jenkins master into a jenkins slave that already has docker installed and run docker commands on that server as the jenkins user.
I finally figured this out thanks to the answer to this question. After reading that answer I realized I had installed the wrong version of Docker on Ubuntu. I removed the previous installation and installed the correct Docker package using sudo curl -sSL https://get.docker.com/ | sh. I then restarted my Jenkins slave and everything started working.
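A sketch of those steps, assuming the snap package was the installation that had to be removed (adjust names to your setup):
sudo snap remove docker                       # drop the snap client that lived in /snap/bin
curl -sSL https://get.docker.com/ | sudo sh   # install the upstream Docker package
sudo usermod -aG docker jenkins               # let the jenkins user talk to the daemon
sudo systemctl restart docker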
