Maybe a trivial question, but here is my problem:
I attached to a running Docker container; after some use I needed to run a unit test and gdb at "the same time".
So I opened another shell tab (Konsole tab) and attached again to the same Docker container with $ docker attach container_name, but everything I did echoed in both attachments. If I executed cd /home/user/folder_foo, the other tab would "do the same", and both Konsole tabs ended up in the same folder, as if the same command were echoed to both tabs. Maybe it's a single-session structure and what I want isn't even possible.
I really need to do two things in parallel in the same Docker container. How can it be done?
$ docker --version
Docker version 20.10.9, build c2ea9bc
I am using Ubuntu 21.04
Run multiple services in a container
It is generally recommended that you separate areas of concern by using one service per container.
But for development purposes, you can follow the guide mentioned above.
Additionally, a similar answer has already been provided:
You can run docker exec -it <container> bash from multiple terminals to launch several independent sessions connected to the same container. By contrast, docker attach connects every terminal to the stdin/stdout of the container's main process, which is why your two Konsole tabs mirrored each other.
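For example (using the container name from your question), each of these gives an independent shell with its own working directory:
# Konsole tab 1
docker exec -it container_name bash
# Konsole tab 2
docker exec -it container_name bash
One tab can then run the unit test while the other runs gdb, without the keystrokes being mirrored.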
Related
I have a Dockerfile that is currently using amazonlinux as the base image.
The purpose of the image is to run two binaries in the container. Consequently, the CMD instruction of the Dockerfile currently looks like this:
CMD [ "/bin/sh", "-c", "/binary1 & /binary2"]
I am looking to modify this Dockerfile to migrate it to a "distroless" image. This entails modifying the Dockerfile FROM to be built on top of a stripped-down base image (which will itself be Linux-based).
My problem is that this new stripped-down base image will no longer contain the "&" that previously came with the shell in the prior Linux image. It does not have "&&" either, or for that matter any operator that would enable me to run both binaries from within the Dockerfile.
I am wondering if there is some way to run multiple binaries in a stripped down image like this?
For example, perhaps I can install the files containing "&", "&&", or some similar command in my Dockerfile to accomplish this, since the new "distroless" image will still be Linux based? If so, how can I determine which specific files I would need, and how can I install them?
Any pointers would be appreciated, as I am quite new to Docker.
In general, don't try running multiple binaries in a single container like this. In almost all cases, it is more flexible and manageable to run two separate containers: if you were to build a "distroless" image containing your two binaries, you would start two containers from the same image (e.g. docker run myimage binary1 and docker run myimage binary2).
When you do something like...
CMD [ "/bin/sh", "-c", "/binary1 & /binary2"]
...you have made failures of binary1 invisible to Docker: if the command fails, your container will merrily keep running, and you can't use a restart policy to restart it for you automatically.
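For instance, a sketch of the two-container approach (assuming the image is tagged myimage and the binaries sit at /binary1 and /binary2, as in your CMD), where each process gets its own restart policy:
docker run -d --restart on-failure --name binary1 myimage /binary1
docker run -d --restart on-failure --name binary2 myimage /binary2
If either binary exits with an error, Docker sees it and can restart just that container.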
Alternatively, if you really want to do the thing you're trying to do, rather than using a "distroless" base image, consider using a minimal image like busybox or alpine instead: these will provide you with a shell and common Unix utilities for debugging work, but are still quite small.
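A rough sketch of that approach, assuming your two binaries are statically linked (or compatible with the libc of the busybox variant you pick) and are copied in from the build context:
FROM busybox
COPY binary1 binary2 /
# /bin/sh comes from busybox, so the original shell-based CMD keeps working
CMD ["/bin/sh", "-c", "/binary1 & exec /binary2"]
The exec on the second binary keeps it as the container's main process, so at least its exit remains visible to Docker.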
I am writing software on macOS. As a subroutine I would like to call certain Linux-only CLI tools, e.g., > mytool inputfile. Can I use Docker for Mac to compile the Linux tool inside a container and call it from outside the container (after copying input files into the container)? And if I can, is it a good idea, or will there be issues installing and compiling Linux packages?
From my understanding of docker as basically a lightweight VM that uses a stripped down version of a Linux distribution, this approach seems to make sense, but the stripped down aspect might be an impediment.
Can Docker be used to run Linux CLI tools from macOS?
Docker supports macOS, according to the documentation.
Can I use Docker for Mac to compile the Linux tool inside a container and call it from outside the container (after copying input files into the container)?
Yes.
And if I can, is it a good idea
Depends on what you mean by "good": it's subjective and highly depends on the specific case.
or will there be issues installing and compiling Linux packages?
No.
From my understanding of docker as basically a lightweight VM
Yes.
that uses a stripped down version of a Linux distribution, this approach seems to make sense, but the stripped down aspect might be an impediment.
What is in a Docker container depends on the image. Usually only things like man pages and the package manager's repository metadata are removed from images. So I would disagree with the "stripped down" concern: most Docker images ship a fairly complete Linux userland and can be used as such.
You can do it as follows:
docker run --rm -v /:/host -ti ubuntu ... your command referring to /host...
Explanation of the parameters:
--rm : removes the container after it exits (but keeps the image cached for subsequent runs).
-t : allocates a pseudo-terminal.
-i : runs in interactive mode (keeps STDIN open).
-v /:/host : mounts your root folder at /host inside the container.
ubuntu : the ubuntu image (pulled on first use), which you can swap for any other image you prefer.
As the last parameter, put the command to run inside the container, with paths relative to /host.
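For example, to run the Linux-only tool from the question on a file that lives on your Mac (here mytool-image is a hypothetical image you have built that contains the compiled tool, and the mount is narrowed to just the folder you need):
docker run --rm -i -v "$HOME/data:/host" mytool-image mytool /host/inputfile
The tool sees the file under /host, and anything it writes there ends up back in ~/data on the host.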
I am currently working on automating commands for a Docker container with a Python script on the host machine. For now, this Python script builds and runs a docker-compose file, with the commands for the containers written into the docker-compose file and the Dockerfile itself.
What I want to do is have the Python script action all commands to run within the container, so if I have different scripts I want to run, I am not changing the container. I have tried 2 ways.
The first was to run an os.system() command within the Python script; however, this only works as far as opening the shell for the container, and the os.system() command does not execute code in the Docker container itself.
The second way uses CMD within the Dockerfile; however, this is limited and hard-coded into the container. If I have multiple scripts, I have to change the Dockerfile, and I don't want this. What I want is to build a default container with all services running, then run Python scripts on the host to run a sequence of commands on the container.
I am fairly new to Docker and think there must be something I am overlooking to run scripted commands on the container. One possible solution I have come across is nsenter. Is this a reliable solution, and how does it work? Or is there a much simpler way? I have also used a Docker volume to copy the Python files into the container to be run on build; however, I still cannot find a way to automate accessing and running these Python scripts from the host machine.
If the scripts need to be copied into a running container, you can do this via the docker cp command, e.g. docker cp myscript.sh mycontainer:/working/dir.
Once the scripts are in the container, you can run them via a docker exec command, e.g. docker exec -it mycontainer /working/dir/myscript.sh.
Note, this isn't a common practice. Typically the script(s) you need would be built (not copied) into container image(s). Then when you want to execute the script(s), within a container, you would run the container via a docker run command. e.g. docker run -it mycontainerimage /working/dir/myscript.sh
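For example, to push a script into a running container named mycontainer and execute it non-interactively (run_tests.py is a hypothetical script name, and this assumes /working/dir already exists and python is available inside the container):
docker cp run_tests.py mycontainer:/working/dir/
docker exec mycontainer python /working/dir/run_tests.py
From your host-side Python script, each of these command lines can be passed to subprocess.run() (e.g. subprocess.run(["docker", "exec", "mycontainer", "python", "/working/dir/run_tests.py"])); the point is to give docker exec the command to run directly, rather than opening an interactive shell the way os.system with docker exec -it ... bash does.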
I am working on a project for an online Python compiler. When a user sends a Python script, the server will execute it. What I want to do is create a sandbox with a virtual filesystem, execute the script inside it, keep that sandbox isolated from the real server's filesystem, and still let Node.js control the stdin and stdout of that sandbox.
How to make it possible?
Docker is a great way to sandbox things.
You can run
docker run --network none python:3
from your node.js server. Look at other switches of docker run to plug in as many security holes as possible.
The shtick is, you run the docker command from your node.js server and pass the user's python code via stdin.
Now, if your node.js server is on one machine and the sandbox should run on another machine, you tell docker to connect to the other machine using the DOCKER_HOST environment variable.
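A minimal sketch of that pattern from a shell (your Node.js server would do the same thing by spawning docker and writing the user's code to its stdin):
echo 'print("hello from the sandbox")' | docker run --rm -i --network none python:3 python -
The -i flag keeps stdin open so the piped script reaches the interpreter, and python - tells Python to read the program from stdin; stdout and stderr come back to the caller as usual.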
Docker containers wrap up the software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries — basically anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in.
This might be worth reading: https://instabug.com/blog/the-difference-between-virtual-machines-and-containers/
The user guide states that an image should be run as follows:
docker run -t -i ubuntu /bin/bash
I get that -t creates the pseudo-terminal and -i makes it interactive. But it seems that the /bin/bash part is unnecessary. Whether I run it with or without /bin/bash, I'm given an interactive prompt that I can read and write from both times.
root@77eeb1f4ac2a:/#
Why do we need /bin/bash?
Part 2
I'm running on Docker for Mac. When I download the hello-world binary and run it, it's only 1kb. Obviously a Linux image wasn't downloaded with it. Is the small hello-world binary running off my Mac kernel or off of a small Linux kernel that comes with Docker for Mac?
Why do we need /bin/bash?
Because while the ubuntu image may be configured to run /bin/bash by default, that's not going to be true of every image. If you have an image that starts a webserver by default, and you want to run bash... you need to make that explicit. Some images (typically ones built FROM scratch with no CMD or ENTRYPOINT) don't specify any default command at all, leading to:
$ docker run -it <image-with-no-default-command>
docker: Error response from daemon: No command specified.
It never hurts to be explicit when starting a container, especially when using an image that you didn't build yourself.
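To illustrate the webserver case: the official nginx image starts nginx by default, so getting a shell means saying so explicitly:
$ docker run -it --rm nginx        # runs the web server (the image's default command)
$ docker run -it --rm nginx bash   # overrides the default and drops you into a shell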
When I download the hello-world binary and run it...
Which hello-world binary?
but is a VM of Linux executing it or is my mac executing it?
Docker only runs under Linux. When you are using Docker under OS X or Windows, you are running containers inside a Linux VM spawned for that purpose by Docker for Mac / Docker Desktop (or, in older setups, by docker-machine or boot2docker). Under Windows, Docker uses Hyper-V; on OS X it previously used VirtualBox, and more recent versions of Docker for Mac run the VM with HyperKit (it's been a while since I've run Docker under OS X).
Part 1:
Whatever you pass after docker run -t -i ubuntu is the first command that your container will run. You can try /bin/bash, /bin/sh, or even echo hello and see it in action. Ubuntu uses bash by default, but other images use other default commands based on their Dockerfiles.
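For instance:
$ docker run -it ubuntu echo hello
hello
Here echo hello replaces the default /bin/bash: the container prints the message and exits.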
Part 2:
When you run hello-world, a Docker container is created from the hello-world image. Containers "include the application and all of its dependencies, but share the kernel with other containers, running as isolated processes in user space on the host operating system."
The hello-world image in particular is built from scratch: https://hub.docker.com/_/scratch/.