I know how to -v the docker socket into a container to make the host's docker daemon available inside a container. Fine.
I have a dockerized application A that can operate on files on the host.
I have another dockerized application B that wants to use that application A to operate on files on the host, but is hardcoded to call /usr/bin/A filename.
How do I alias /usr/bin/A within container B, so that it will call out to the other container, like
docker run -ti --rm A filename
You could just replace /usr/bin/A in container B with a shell script:
#!/bin/sh
docker run -ti --rm A "$@"
You could do this in the image itself (via your Dockerfile), or you could bind-mount the script when you start container B (docker run -v /path/to/my/script:/usr/bin/A ...).
Now you run /usr/bin/A some_filename and it should do the right thing, assuming that you can successfully run docker inside the container.
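If you go the bind-mount route, the docker run for container B could look something like this (the image name B and /path/to/my/script are placeholders, and it assumes the docker CLI is installed in image B so the wrapper script can actually call docker):

docker run -ti --rm \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /path/to/my/script:/usr/bin/A \
    B

Make sure the script is marked executable (chmod +x) on the host before mounting it.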
Related
Hi, I've got a problem with Docker. I'm using it on s390x Debian. Everything was working fine, but now I can't start my containers. Old containers still work, but when I create a new container, for example with docker run ubuntu, and then try docker start [CONTAINER], the container doesn't start. With docker ps -a I can see all of my containers, but with docker ps I can't see the new one. For example, I created a container named practical_spence with ID 3e8562694e9f, but when I use docker start it doesn't start. Please help.
Since you do not specify a CMD or entrypoint to run, the image's default is used, which for ubuntu is bash. But you are not running the container in interactive terminal mode, so bash exits immediately. Run:
docker run -it ubuntu:latest
to attach the running container to your terminal, or specify the command you want the container to run.
Your container did start, but it exited instantly because it had nothing to do. You can give it something to do, e.g. docker run -d ubuntu sleep infinity. Then use docker ps to see the running container. You can of course exec into it to do something: docker exec -it <container> bash. You can stop it with docker stop <container>, restart it with docker start <container>, and finally delete the stopped container once you no longer need it with docker container rm <container>.
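Putting those commands together, the whole lifecycle might look like this (the container name demo is just an illustration):

docker run -d --name demo ubuntu sleep infinity   # stays running because sleep never exits
docker ps                                         # demo now shows up as running
docker exec -it demo bash                         # get a shell inside it
docker stop demo
docker start demo
docker stop demo
docker container rm demo                          # remove it once it is stopped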
I have two containers running on a host. When I'm in container A I want to run a diff on container B compared to its image, to see what has changed in the filesystem. I know this can be run easily from the host itself, but I'm wondering: is there any way of doing this from inside container A, to see the difference on container B?
You can run any docker command from within a container, and it will talk to the host docker daemon, if:
You have access to the docker socket inside the container
You have the docker client inside the container
You can satisfy the first condition by mounting the docker socket into the container; add the following to your docker run call:
-v /var/run/docker.sock:/var/run/docker.sock
The second condition depends on your docker image.
If you are running a bare Ubuntu image, you can get a shell inside the container that is able to do what you want with the following command:
docker run -it -v /var/run/docker.sock:/var/run/docker.sock ubuntu:latest sh -c "apt-get update ; apt-get install docker.io -y ; bash"
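With both conditions in place, the original goal, diffing container B against its image from inside container A, is just the normal client command (containerB stands in for the real container name or ID):

docker diff containerB

docker diff lists the files that were added, changed, or deleted relative to the container's image, exactly as it would if you ran it on the host.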
I am using Vagrant to build a docker host, and I have a shell script which installs all the required packages for the host. That script also builds and runs containers.
In the Vagrantfile:
config.vm.provision :shell, :inline => "sudo /vagrant/bootstrap.sh"
Inside that script I run containers like
docker run -d ... bla bla ...
This works fine, but I have to ssh into the container and run make deploy to install all the stuff.
Is there any way I can run that make deploy from within my bootstrap.sh?
One way is to make it the entrypoint, but then it would run on every start.
I just want that, when I provision the host, the command runs inside the container and shows me its output the way Vagrant shows output for the host.
Use docker exec; see the docs:
http://docs.docker.com/reference/commandline/exec/
For example:
docker exec -it container_id make deploy
or
docker exec -it container_id bash
and then
make deploy
inside your container.
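Applied to the bootstrap.sh from the question, that could look something like this (the container name my_app and the image name my_image are placeholders):

docker run -d --name my_app my_image
docker exec my_app make deploy

The -it flags are left off on purpose here: Vagrant's shell provisioner has no TTY, and docker exec still streams the command's output to stdout, so the make deploy output shows up in the vagrant provision output just like the host-level steps.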
I have an Ubuntu host with a mounted network folder located at /mnt/mynetworkshare.
When I run my docker image, I want to expose the /mnt/mynetworkshare folder on the host to the container, so I run the container with the command:
docker run -it --rm -v /mnt/mynetworkshare/:/opt/shared_folder/ brspurri/my-app:latest /bin/bash
However, when I'm inside the bash shell of the container, the folder /opt/shared_folder/ exists, but no files are present or visible.
When I change the host folder to a local one using:
docker run -it --rm -v /mnt/mylocalfolder/:/opt/shared_folder/ brspurri/my-app:latest /bin/bash
all the contents of /mnt/mylocalfolder/ are visible and accessible in /opt/shared_folder/.
Does anyone have any idea why this could be happening? Is it possible to share a network folder with a Docker container?
Looking at shipyard, I noticed that the deploy container launches containers on the host (redis, router, database, load balancer, shipyard).
This is done by using the -H flag.
So I decided to try this to deploy my apps, as it would make deployment tons easier (versus systemd, init.d).
I was able to get about 70% there, but the thing that broke was the --volumes-from flag.
The container starts, but the volume it mounts is empty. I have a simple example posted here:
http://goo.gl/a558XL
If you run these commands on the host, it works fine:
on_host$ docker run --name data joshuacalloway/data
on_host$ docker run --volumes-from data ubuntu cat /data/hello.txt
However, if you do this in a container, it is broken:
on_host$ docker run -it --entrypoint=/bin/bash -v /var/run/docker.sock:/var/run/docker.sock joshuacalloway/deploy -s
in_container:/# docker ps -----> this shows the docker processes on the host
in_container:/# docker rm data ---> this removes the data container that was created above
in_container:/# docker run --name data joshuacalloway/data
in_container:/# docker run --volumes-from data ubuntu cat /data/hello.txt