docker attach vs lxc-attach - linux

UPDATE: Docker 0.9.0 uses libcontainer now, diverging from LXC. See: Attaching process to Docker libcontainer container
I'm running an instance of elasticsearch:
docker run -d -p 9200:9200 -p 9300:9300 dockerfile/elasticsearch
Checking the process shows the following:
$ docker ps --no-trunc
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
49fdccefe4c8c72750d8155bbddad3acd8f573bf13926dcaab53c38672a62f22 dockerfile/elasticsearch:latest /usr/share/elasticsearch/bin/elasticsearch java About an hour ago Up 8 minutes 0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp pensive_morse
Now, when I try to attach to the running container, I get stuck:
$ sudo docker attach 49fdccefe4c8c72750d8155bbddad3acd8f573bf13926dcaab53c38672a62f22
[sudo] password for lsoave:
the tty doesn't connect and the prompt doesn't come back. Doing the same with lxc-attach works fine:
$ sudo lxc-attach -n 49fdccefe4c8c72750d8155bbddad3acd8f573bf13926dcaab53c38672a62f22
root@49fdccefe4c8:/# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 49 20:37 ? 00:00:20 /usr/bin/java -Xms256m -Xmx1g -Xss256k -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMa
root 88 0 0 20:38 ? 00:00:00 /bin/bash
root 92 88 0 20:38 ? 00:00:00 ps -ef
root@49fdccefe4c8:/#
Does anybody know what's wrong with docker attach?
NB. dockerfile/elasticsearch ends with:
ENTRYPOINT ["/usr/share/elasticsearch/bin/elasticsearch"]

You're attaching to a container that is running elasticsearch, which isn't an interactive command. You don't get a shell to type in because the container is not running a shell. The reason lxc-attach works is that it gives you a default shell. Per man lxc-attach:
If no command is specified, the current default shell of the user
running lxc-attach will be looked up inside the container and
executed. This will fail if no such user exists inside the container
or the container does not have a working nsswitch mechanism.
docker attach is behaving as expected.

As Ben Whaley notes, this is expected behavior.
It's worth mentioning though that if you want to monitor the process you can do a number of things:
Start bash as the front process: e.g. $ES_DIR/bin/elasticsearch && /bin/bash will give you your shell when you attach. Mainly useful during development. Not so clean :)
Install an ssh server. Although I've never done this myself it's a good option. Drawback is of course overhead, and maybe a security angle. Do you really want ssh on all of your containers? Personally, I like to keep them as small as possible with single-process as the ultimate win.
Use the log files! You can use docker cp to get the logs locally, or better, the docker logs $CONTAINER_ID command. The latter gives you the accumulated stdout/stderr output for the entire lifetime of the container each time, though.
Mount the log directory. Just mount a directory on your host and have elasticsearch write to a logfile in that directory. You can have syslog on your host, Logstash, or whatever turns you on ;). Of course, the drawback here is that you are now using your host more than you might like. I also found a nice experiment using logstash in this blog.
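For the log-based options, for example (a rough sketch; the host log path is just an illustration and the in-container log path may differ for your image):
# follow the container's accumulated stdout/stderr
docker logs -f 49fdccefe4c8
# or mount a host directory for the log files
docker run -d -p 9200:9200 -p 9300:9300 \
  -v /var/log/elasticsearch:/usr/share/elasticsearch/logs \
  dockerfile/elasticsearch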

FWIW, now that Docker 1.3 is released, you can use "docker exec" to open up a shell or other process on a running container. This should allow you to effectively replace lxc-attach when using the native driver.
http://blog.docker.com/2014/10/docker-1-3-signed-images-process-injection-security-options-mac-shared-directories/
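For example, with the elasticsearch container from the question still running, something like this should drop you into a shell (a sketch; use /bin/sh if the image has no bash):
docker exec -it 49fdccefe4c8 /bin/bash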


How to control host from docker container?
For example, how do I execute a bash script that has been copied to the host?
This answer is just a more detailed version of Bradford Medeiros's solution, which for me as well turned out to be the best answer, so credit goes to him.
In his answer, he explains WHAT to do (named pipes) but not exactly HOW to do it.
I have to admit I didn't know what named pipes were when I read his solution. So I struggled to implement it (while it's actually very simple), but I did succeed.
So the point of my answer is just detailing the commands you need to run in order to get it working, but again, credit goes to him.
PART 1 - Testing the named pipe concept without docker
On the main host, choose the folder where you want to put your named pipe file, for instance /path/to/pipe/, and a pipe name, for instance mypipe, then run:
mkfifo /path/to/pipe/mypipe
The pipe is created.
Type
ls -l /path/to/pipe/mypipe
And check that the access rights start with "p", such as
prw-r--r-- 1 root root 0 mypipe
Now run:
tail -f /path/to/pipe/mypipe
The terminal is now waiting for data to be sent into this pipe
Now open another terminal window.
And then run:
echo "hello world" > /path/to/pipe/mypipe
Check the first terminal (the one with tail -f), it should display "hello world"
PART 2 - Run commands through the pipe
On the host, instead of running tail -f, which just outputs whatever is sent as input, run this command, which will execute what is sent as commands:
eval "$(cat /path/to/pipe/mypipe)"
Then, from the other terminal, try running:
echo "ls -l" > /path/to/pipe/mypipe
Go back to the first terminal and you should see the result of the ls -l command.
PART 3 - Make it listen forever
You may have noticed that in the previous part, right after ls -l output is displayed, it stops listening for commands.
Instead of eval "$(cat /path/to/pipe/mypipe)", run:
while true; do eval "$(cat /path/to/pipe/mypipe)"; done
(you can nohup that)
Now you can send an unlimited number of commands one after the other; they will all be executed, not just the first one.
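For instance, one hedged way to keep that loop running in the background with nohup would be:
nohup bash -c 'while true; do eval "$(cat /path/to/pipe/mypipe)"; done' >/dev/null 2>&1 &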
PART 4 - Make it work even when reboot happens
The only caveat is that if the host reboots, the "while" loop will stop working.
To handle reboots, here is what I've done:
Put the while true; do eval "$(cat /path/to/pipe/mypipe)"; done in a file called execpipe.sh with #!/bin/bash header
Don't forget to chmod +x it
Add it to crontab by running
crontab -e
And then adding
@reboot /path/to/execpipe.sh
At this point, test it: reboot your server, and when it's back up, echo some commands into the pipe and check if they are executed.
Of course, you aren't able to see the output of commands, so ls -l won't help, but touch somefile will help.
Another option is to modify the script to put the output in a file, such as:
while true; do eval "$(cat /path/to/pipe/mypipe)" &> /somepath/output.txt; done
Now you can run ls -l and the output (both stdout and stderr using &> in bash) should be in output.txt.
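For reference, a minimal sketch of what execpipe.sh could look like, using the pipe and output paths from above (adjust them to your setup):
#!/bin/bash
# execpipe.sh - listen on the named pipe forever and log each command's output
PIPE=/path/to/pipe/mypipe
OUT=/somepath/output.txt
# recreate the pipe if it is missing (e.g. after a cleanup)
[ -p "$PIPE" ] || mkfifo "$PIPE"
while true; do
  eval "$(cat "$PIPE")" &> "$OUT"
done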
PART 5 - Make it work with docker
If you are using both docker compose and dockerfile like I do, here is what I've done:
Let's assume you want to mount the mypipe's parent folder as /hostpipe in your container
Add this:
VOLUME /hostpipe
in your dockerfile in order to create a mount point
Then add this:
volumes:
  - /path/to/pipe:/hostpipe
in your docker compose file in order to mount /path/to/pipe as /hostpipe
Restart your docker containers.
PART 6 - Testing
Exec into your docker container:
docker exec -it <container> bash
Go into the mount folder and check you can see the pipe:
cd /hostpipe && ls -l
Now try running a command from within the container:
echo "touch this_file_was_created_on_main_host_from_a_container.txt" > /hostpipe/mypipe
And it should work!
WARNING: If you have an OSX (Mac OS) host and a Linux container, it won't work (explanation here https://stackoverflow.com/a/43474708/10018801 and issue here https://github.com/docker/for-mac/issues/483), because the pipe implementation is not the same: what you write into the pipe from Linux can only be read by Linux, and what you write into the pipe from Mac OS can only be read by Mac OS (this sentence might not be very accurate, but just be aware that a cross-platform issue exists).
For instance, when I run my docker setup in DEV from my Mac OS computer, the named pipe as explained above does not work. But in staging and production, I have Linux host and Linux containers, and it works perfectly.
PART 7 - Example from Node.JS container
Here is how I send a command from my Node.JS container to the main host and retrieve the output:
const fs = require("fs")
const pipePath = "/hostpipe/mypipe"
const outputPath = "/hostpipe/output.txt"
const commandToRun = "pwd && ls -l"
console.log("delete previous output")
if (fs.existsSync(outputPath)) fs.unlinkSync(outputPath)
console.log("writing to pipe...")
const wstream = fs.createWriteStream(pipePath)
wstream.write(commandToRun)
wstream.close()
console.log("waiting for output.txt...") // there are better ways to do this than setInterval
let timeout = 10000 // stop waiting after 10 seconds (something might be wrong)
const timeoutStart = Date.now()
const myLoop = setInterval(function () {
    if (Date.now() - timeoutStart > timeout) {
        clearInterval(myLoop)
        console.log("timed out")
    } else {
        // if output.txt exists, read it
        if (fs.existsSync(outputPath)) {
            clearInterval(myLoop)
            const data = fs.readFileSync(outputPath).toString()
            fs.unlinkSync(outputPath) // delete the output file
            console.log(data) // log the output of the command
        }
    }
}, 300)
Use a named pipe.
On the host OS, create a script that loops and reads commands, then calls eval on them.
Have the docker container write to that named pipe.
To be able to access the pipe, you need to mount it via a volume.
This is similar to the SSH mechanism (or a similar socket-based method), but it restricts you properly to the host device, which is probably better. Plus you don't have to pass around authentication information.
My only warning is to be careful about why you are doing this. It's a reasonable thing to do if you want to create a method to self-upgrade with user input or whatever, but you probably don't want to call a command just to get some config data; the proper way is to pass that in as args or a volume into docker. Also be careful about the fact that you are evaling, so give the permission model a thought.
Some of the other answers, such as running a script under a volume, won't work generically since they won't have access to the full system resources, but they might be more appropriate depending on your usage.
The solution I use is to connect to the host over SSH and execute the command like this:
ssh -l ${USERNAME} ${HOSTNAME} "${SCRIPT}"
UPDATE
As this answer keeps getting upvotes, I would like to remind (and highly recommend) that the account being used to invoke the script should have no permissions at all, and only be allowed to execute that script via sudo (this can be done from the sudoers file).
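As a hedged illustration (the user name and script path are made up), the sudoers entry could look something like this, added with visudo:
# /etc/sudoers.d/scriptrunner
scriptrunner ALL=(root) NOPASSWD: /usr/local/bin/myscript.sh
The SSH call then becomes ssh -l scriptrunner ${HOSTNAME} "sudo /usr/local/bin/myscript.sh".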
UPDATE: Named Pipes
The solution I suggested above was only the one I used while I was relatively new to Docker. Now, in 2021, take a look at the answers that talk about named pipes. This seems to be a better solution.
However, nobody there mentioned anything about security. The script that evaluates the commands sent through the pipe (the script that calls eval) must actually not eval the whole pipe output; it should handle specific cases and call the required commands according to the text sent, otherwise any command that can do anything can be sent through the pipe.
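For example, a minimal sketch of such a restricted listener (the allowed commands and log path here are purely illustrative) could look like:
while true; do
  cmd="$(cat /path/to/pipe/mypipe)"
  case "$cmd" in
    restart-nginx) systemctl restart nginx ;;
    renew-certs)   certbot renew ;;
    *)             echo "refused: $cmd" >> /var/log/pipe-refused.log ;;
  esac
done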
That REALLY depends on what you need that bash script to do!
For example, if the bash script just echoes some output, you could just do
docker run --rm -v $(pwd)/mybashscript.sh:/mybashscript.sh ubuntu bash /mybashscript.sh
Another possibility is that you want the bash script to install some software, say a script to install docker-compose. You could do something like
docker run --rm -v /usr/bin:/usr/bin --privileged -v $(pwd)/mybashscript.sh:/mybashscript.sh ubuntu bash /mybashscript.sh
But at this point you're really getting into having to know intimately what the script is doing to allow the specific permissions it needs on your host from inside the container.
My laziness led me to find the easiest solution that wasn't published as an answer here.
It is based on the great article by luc juggery.
All you need to do in order to gain a full shell to your linux host from within your docker container is:
docker run --privileged --pid=host -it alpine:3.8 \
nsenter -t 1 -m -u -n -i sh
Explanation:
--privileged: grants additional permissions to the container; it allows the container to gain access to the devices of the host (/dev)
--pid=host: allows the container to use the process tree of the Docker host (the VM in which the Docker daemon is running)
nsenter utility: allows you to run a process in existing namespaces (the building blocks that provide isolation to containers)
nsenter (-t 1 -m -u -n -i sh) runs the process sh in the same isolation context as the process with PID 1.
The whole command thus provides an interactive sh shell in the VM.
This setup has major security implications and should be used with caution (if at all).
Write a simple python server listening on a port (say 8080), bind the port to the container with -p 8080:8080, and make an HTTP request to localhost:8080 to ask the python server to run shell scripts with popen. To try it, run curl or write code that makes an HTTP request, e.g. curl -d '{"foo":"bar"}' localhost:8080
#!/usr/bin/python
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
import subprocess
import json

PORT_NUMBER = 8080

# This class handles any incoming request from the browser
class myHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        content_len = int(self.headers.getheader('content-length'))
        post_body = self.rfile.read(content_len)
        self.send_response(200)
        self.end_headers()
        data = json.loads(post_body)
        # Use the post data
        cmd = "your shell cmd"
        p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
        p_status = p.wait()
        (output, err) = p.communicate()
        print "Command output : ", output
        print "Command exit status/return code : ", p_status
        self.wfile.write(cmd + "\n")
        return

try:
    # Create a web server and define the handler to manage the incoming request
    server = HTTPServer(('', PORT_NUMBER), myHandler)
    print 'Started httpserver on port ', PORT_NUMBER
    # Wait forever for incoming http requests
    server.serve_forever()
except KeyboardInterrupt:
    print '^C received, shutting down the web server'
    server.socket.close()
If you are not worried about security and you're simply looking to start a docker container on the host from within another docker container like the OP, you can share the docker server running on the host with the docker container by sharing its listening socket.
Please see https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface and see if your personal risk tolerance allows this for this particular application.
You can do this by adding the following volume args to your start command
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
or by sharing /var/run/docker.sock within your docker compose file like this:
version: '3'
services:
  ci:
    command: ...
    image: ...
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
When you run the docker start command within your docker container,
the docker server running on your host will see the request and provision the sibling container.
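For example, assuming the docker CLI is installed inside the container and the socket is mounted as above, running this from within the container starts a sibling container on the host:
docker run --rm alpine echo "hello from a sibling container"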
credit: http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
As Marcus reminds us, docker is basically process isolation. Starting with docker 1.8, you can copy files both ways between the host and the container; see the docs for docker cp:
https://docs.docker.com/reference/commandline/cp/
Once a file is copied, you can run it locally
docker run --detach-keys="ctrl-p" -it -v /:/mnt/rootdir --name testing busybox
# chroot /mnt/rootdir
#
I have a simple approach.
Step 1: Mount /var/run/docker.sock:/var/run/docker.sock (so you will be able to execute docker commands inside your container)
Step 2: Execute the command below inside your container. The key part here is the --network host flag, as the command will execute in the host's network context.
docker run -i --rm --network host -v /opt/test.sh:/test.sh alpine:3.7
sh /test.sh
test.sh should contain whatever commands you need (ifconfig, netstat, etc.).
Now you will be able to get host context output.
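As a rough example (adjust to whatever you actually need; ifconfig and netstat are provided by busybox in alpine), test.sh could be:
#!/bin/sh
# illustrative host-context checks
ifconfig
netstat -tln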
You can use the pipe concept, but use a file on the host and fswatch to accomplish the goal of executing a script on the host machine from a docker container. Like so (use at your own risk):
#! /bin/bash
touch .command_pipe
chmod +x .command_pipe
# Use fswatch to execute a command on the host machine and log result
fswatch -o --event Updated .command_pipe | \
xargs -n1 -I "{}" .command_pipe >> .command_pipe_log &
docker run -it --rm \
--name alpine \
-w /home/test \
-v $PWD/.command_pipe:/dev/command_pipe \
alpine:3.7 sh
rm -rf .command_pipe
kill %1
In this example, inside the container send commands to /dev/command_pipe, like so:
/home/test # echo 'docker network create test2.network.com' > /dev/command_pipe
On the host, you can check if the network was created:
$ docker network ls | grep test2
8e029ec83afe test2.network.com bridge local
In my scenario I just ssh into the host (via the host IP) from within a container, and then I can do anything I want on the host machine.
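For example (the user and address are illustrative; on the default bridge network the host is often reachable from the container at 172.17.0.1):
ssh admin@172.17.0.1 'hostname && uptime'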
I found answers using named pipes awesome. But I was wondering if there is a way to get the output of the executed command.
The solution is to create two named pipes:
mkfifo /path/to/pipe/exec_in
mkfifo /path/to/pipe/exec_out
Then, the solution using a loop, as suggested by @Vincent, would become:
# on the host
while true; do eval "$(cat exec_in)" > exec_out; done
And then on the docker container, we can execute the command and get the output using:
# on the container
echo "ls -l" > /path/to/pipe/exec_in
cat /path/to/pipe/exec_out
If anyone is interested: my need was to use a failover IP on the host from the container, so I created this simple Ruby method:
def fifo_exec(cmd)
exec_in = '/path/to/pipe/exec_in'
exec_out = '/path/to/pipe/exec_out'
%x[ echo #{cmd} > #{exec_in} ]
%x[ cat #{exec_out} ]
end
# example
fifo_exec "curl https://ip4.seeip.org"
Depending on the situation, this could be a helpful resource.
This uses a job queue (Celery) that can be run on the host, commands/data could be passed to this through Redis (or rabbitmq). In the example below, this is occurring in a django application (which is commonly dockerized).
https://www.codingforentrepreneurs.com/blog/celery-redis-django/
To expand on user2915097's response:
The idea of isolation is to be able to restrict what an application/process/container (whatever your angle at this is) can do to the host system very clearly. Hence, being able to copy and execute a file would really break the whole concept.
Yes. But it's sometimes necessary.
No. That's not the case, or Docker is not the right thing to use. What you should do is declare a clear interface for what you want to do (e.g. updating a host config), and write a minimal client/server to do exactly that and nothing more. Generally, however, this doesn't seem to be very desirable. In many cases, you should simply rethink your approach and eradicate that need. Docker came into existence when basically everything was a service reachable over some protocol. I can't think of any proper use case of a Docker container getting the rights to execute arbitrary stuff on the host.

How to start single process using service script passed to ENTRYPOINT

I am passing the service script to ENTRYPOINT. The service is started, but then the container exits.
I have to start one process per container using a service script from ENTRYPOINT or CMD. This way, I can reload the configuration inside the container using the service script. I tried with a CMD statement as well, but it starts the service and then immediately exits the container.
ENTRYPOINT ["/etc/init.d/elasticsearch", "start"]
/etc/init.d/elasticsearch script has below code to start the service as daemon.
cd $ES_HOME
echo -n $"Starting $prog: "
daemon --user elasticsearch --pidfile $pidfile $exec -p $pidfile -d
retval=$?
echo
[ $retval -eq 0 ] && touch $lockfile
return $retval
Is it not possible to start the service using startup script and keep the container running?
Commands used to create and run the containers:
docker build -f Dockerfile -t="elk/elasticsearch" .
docker run -d elk/elasticsearch
docker run -it elk/elasticsearch bash
The sysv initscripts are of type "forking", speaking in terms of a service manager, so they detach from the start script. The container then needs some init process on PID 1 that controls the background process(es).
If you do not want to extract the relevant command from the initscript, then you could still use docker-systemctl-replacement to do both things for you. If it is run as CMD, it will start enabled service scripts just as you are used to from a normal machine.
In general you do not use service scripts with Docker. Also in general, you never restart the service inside a container; instead, you stop the existing container, delete it, and start a new one.
The standard pattern is to launch whatever service it is you are trying to run, directly, as a foreground process. (No /etc/init.d, service, or systemctl anything.) You can extract the relevant command from the init script you show. I would replace your ENTRYPOINT command with
CMD ["elasticsearch"]
(but also double-check the Elasticsearch documentation just in case there are some other command-line options that matter).
The second part of this is to make sure database data is stored outside the container. Usually you use the docker run -v option to mount some alternate storage into the container. For example:
docker run \
--name elasticsearch \
-p 9200:9200 \
-v ./elasticsearch:/var/data/elasticsearch \
imagename
Once you’ve done this, you are free to stop, delete, and recreate the container, which is the right way to restart the service. (You need to do this if the underlying image ever changes; this happens if there is a bug fix or security issue in the image software or in the underlying Linux distribution.)
docker stop elasticsearch
docker rm elasticsearch
docker run --name elasticsearch ...
You can write a simple shell script to hold the docker run command, or use an orchestration tool like Docker Compose that lets you declare the container parameters.
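For instance, a minimal wrapper script around the docker run command above (the image name and paths are the placeholders from that example) might look like:
#!/bin/sh
# recreate the elasticsearch container with the same parameters
docker stop elasticsearch 2>/dev/null
docker rm elasticsearch 2>/dev/null
exec docker run \
  --name elasticsearch \
  -p 9200:9200 \
  -v "$PWD/elasticsearch:/var/data/elasticsearch" \
  imagename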

Docker container on Alpine Linux 3.7: Strange pid 1 not visible within the container's pid namespace

I am currently tracking a weird issue we are experiencing using dockerd 17.10.0-ce on an Alpine Linux 3.7 host. It seems for all the containers on this host, the process tree initiated as the entrypoint/command of the Docker image is NOT visible within the container itself. In comparison, on an Ubuntu host, the same image will have the process tree visible as PID 1.
Here is an example.
Run a container with an explicit known entrypoint/command:
% docker run -d --name testcontainer --rm busybox /bin/sh -c 'sleep 1000000'
Verify the processes are seen by dockerd properly:
% docker top testcontainer
PID USER TIME COMMAND
6729 root 0:00 /bin/sh -c sleep 1000000
6750 root 0:00 sleep 1000000
Now, start a shell inside that container and check the process list:
% docker exec -t -i testcontainer /bin/sh
/ # ps -ef
PID USER TIME COMMAND
6 root 0:00 /bin/sh
12 root 0:00 ps -ef
As can be observed, our entrypoint command (/bin/sh -c 'sleep 1000000') is not visible inside the container itself. Even running top will yield the same results.
Is there something I am missing here? On an Ubuntu host with the same docker engine version, the results are as I would expect. Could this be related to Alpine's hardened kernel causing an issue with how the container PID space is separated?
Any help appreciated for areas to investigate.
-b
It seems this problem is related to the grsecurity module, which the Alpine kernel implements. In this specific case, the GRKERNSEC_CHROOT_FINDTASK kernel setting is used to limit what processes can do outside of the chroot environment. This is controlled by the kernel.grsecurity.chroot_findtask sysctl variable.
From the grsecurity docs:
kernel.grsecurity.chroot_findtask
If you say Y here, processes inside a chroot will not be able to kill,
send signals with fcntl, ptrace, capget, getpgid, setpgid, getsid, or
view any process outside of the chroot. If the sysctl option is
enabled, a sysctl option with name "chroot_findtask" is created.
The only workaround I have found so far is to disable this flag, as well as the chroot_deny_mknod and chroot_deny_chmod flags, in order to get the same behaviour as with a non-grsecurity kernel.
kernel.grsecurity.chroot_deny_mknod=0
kernel.grsecurity.chroot_deny_chmod=0
kernel.grsecurity.chroot_findtask=0
Of course this is less than ideal since it bypasses and disables security features of the system but might be a valid workaround for a development environment.
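For example, a hedged way to apply these on the Alpine host at runtime (persisting them is distribution-specific, e.g. via a file under /etc/sysctl.d/):
sysctl -w kernel.grsecurity.chroot_deny_mknod=0
sysctl -w kernel.grsecurity.chroot_deny_chmod=0
sysctl -w kernel.grsecurity.chroot_findtask=0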

Can't attach to bash running the Docker container

I'm having trouble attaching to the bash instance that keeps the container running.
To be more detailed, I am running a container like this:
$ docker run -dt --name test ubuntu bash
Now it should actually be running, not finished.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
f3596c613cfe ubuntu "bash" 4 seconds ago Up 2 seconds test
After this, I am trying to attach to that instance of bash that keeps the container running. Like this:
$ docker attach test
Running this command I am able to write something to stdin, but nothing happens. I am not sure if bash is getting the lines I typed.
Is there some other way to attach to the bash that keeps the container running?
I know that I can run a different instance of bash and use it: docker exec -it test bash. But more generally, is there a way to connect to a process that's running in a Docker container?
Sometimes it can be useful to save the session of a process running inside the container.
SOLUTION
Thanks to user2915097 for pointing out the missing -i flag.
So now we can have a persistent bash session. For example, let's set an alias and reuse it after stopping and restarting the container.
$ docker run -itd --name test ubuntu bash
To attach to bash instance just run
$ docker attach test
root@3534cbe1e994:/# alias test="echo Hello, world!"
To detach from the container without stopping it, press Ctrl+p, Ctrl+q
Then we can stop and restart the container
$ docker stop test
$ docker start test
Now we can attach to the same bash instance and check our alias
$ docker attach test
root@3534cbe1e994:/# test
Hello, world!
Everything is working perfectly!
As I pointed out in my comment, a use-case for this can be running interactive shells such as bash, octave, or ipython in a Docker container, persisting all the history, imports, variables, and temporary settings just by reattaching to the same instance.
Your container is running, it is not finished, as you can see:
it appears in docker ps, so it is a running container
it shows up as Up n seconds
You launched it with -dt, so you asked for it to be:
detached (d)
with a tty allocated (t)
but not interactive, as you did not add -i
Usually you nearly always provide -it together; it may be -idt
See this thread
When would I use `--interactive` without `--tty` in a Docker container?
As you want bash, I think you should add -i.
I am not sure why you use -d.
Usually it is
docker run -it --rm --name=mytest ubuntu bash
and you can test
A container's running lifecycle is determined by its root process, which is bash in your example. When you start your ubuntu container with bash as the process, bash exits immediately because it has nothing to keep it running. That's why the container immediately exits and there's nothing to attach to.

Why does "docker attach" hang?

I can run an ubuntu container successfully:
# docker run -it -d ubuntu
3aef6e642327ce7d19c7381eb145f3ad10291f1f2393af16a6327ee78d7c60bb
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3aef6e642327 ubuntu "/bin/bash" 3 seconds ago Up 2 seconds condescending_sammet
But executing docker attach hangs:
# docker attach 3aef6e642327
Until I press any key, such as Enter:
# docker attach 3aef6e642327
root@3aef6e642327:/#
root@3aef6e642327:/# ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
Why does docker attach hang?
Update:
After reading the comments, I think I get the answers:
prerequisite:
"docker attach" reuse the same tty, not open new tty.
(1) Executing the docker run without daemon mode:
# docker run -it ubuntu
root@eb3c9d86d7a2:/#
Everything is OK, then run ls command:
root@eb3c9d86d7a2:/# ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
root@eb3c9d86d7a2:/#
(2) Run docker run in daemon mode:
# docker run -it -d ubuntu
91262536f7c9a3060641448120bda7af5ca812b0beb8f3c9fe72811a61db07fc
Actually, the following should have been output to stdout by the running container:
root@91262536f7c9:/#
So executing docker attach seems to hang, but actually it is waiting for your input:
# docker attach 91262536f7c9
ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
root@91262536f7c9:/#
It does not really hang. As you can see in the comment below ("You are running "/bin/bash" as command"), it seems to be expected behaviour when attaching.
As far as I understand, you attach to the running shell and just the stdin/stdout/stderr - depending on the options you pass along with the run command - will show you whatever goes in/out from that moment. (Someone with a bit more in-depth knowledge can hopefully explain this on a higher level.)
As I wrote in my comment on your question, there are several people who have opened an issue on the docker github repo describing similar behaviour:
docker attach [container] hangs, requires input #8521
docker attach hangs setting terminal state when attaching to container
Since you mention shell, I assume you have a shell already running. attach doesn't start a new process, so what is the expected behavior of connecting to the in/out/err streams of a running process?
I didn't think about this. Of course this is the expected behavior of attaching to a running shell, but is it desirable?
Would it be at all possible to flush stdout/stderr on docker attach thereby forcing the shell prompt to be printed or is it a bit more complex than that? That's what I personally would "expect" when attaching to an already running shell.
Feel free to close this issue if necessary, I just felt the need to document this and get some feedback.
Taken from a comment on this github issue. You can find more insight in the comments of this issue.
If instead of Enter you were to start typing a command, you would not see the extra empty prompt line. If you were to run
$ docker exec -it <container-ID-or-name> bash
where <container-ID-or-name> is the ID or name of the container after you run docker run -it -d ubuntu (so 3aef6e642327 or condescending_sammet in your question) it would run a new command, thus not having this "stdout problem" of attaching to an existing one.
Example
If you would have a Dockerfile in a directory containing:
FROM ubuntu:latest
ADD ./script.sh /timescript.sh
RUN chmod +x /timescript.sh
CMD ["/timescript.sh"]
And have a simple bash script script.sh in the same directory containing:
#!/bin/bash
#trap ctrl-c and exit, couldn't get out
#of the docker container once attached
trap ctrl_c INT
function ctrl_c() {
exit
}
while true; do
time=$(date +%N)
echo $time;
sleep 1;
done
Then build (in this example in the same directory as the Dockerfile and script.sh) and run it with
$ docker build -t nan-xiao/time-test .
..stuff happening...
$ docker run -itd --name time-test nan-xiao/time-test
Finally attach
$ docker attach time-test
You will end up attached to a container printing out the time every second. (CTRL-C to get out)
Example 2
Or if you would have a Dockerfile containing for example the following:
FROM ubuntu:latest
RUN apt-get -y install irssi
ENTRYPOINT ["irssi"]
Then run in the same directory:
$ docker build -t nan-xiao/irssi-test .
Then run it:
$ docker run -itd --name irssi-test nan-xiao/irssi-test
And finally
$ docker attach irssi-test
You would end up in a running irssi window without this particular behaviour. Of course you can substitute irssi for another program.
I ran into this issue as well when attempting to attach to a container that was developed by someone else and already running a daemon. (In this case, it was LinuxServer's transmission docker image).
Problem:
What happened was that the terminal appeared to 'hang': typing anything didn't help and wouldn't show up, and only Ctrl-C would kick me back out.
docker run, docker start, and docker attach were all unsuccessful; it turns out the command I needed (after the container had been started with run or start) was to execute bash, as chances are the container you pulled from doesn't have bash already running.
Solution:
docker exec -it <container-id> bash
(you can find the container-id from running docker ps -a).
This will pull you into the instance with a functional bash as root (assuming there was no other explicit set up done by the image you pulled).
I know the accepted answer has captured this as well, but decided to post another one that is a little more terse and obvious, as the solution didn't pop out for me when I was reading it.
When I ran docker attach container-name, nothing was output and even Ctrl-C had no effect. So, first try
docker attach --sig-proxy=false container-name
and then Ctrl-C can stop it. Why didn't it output anything?
Simply because the container doesn't output anything. Actually, I needed to enter my container and run some shell commands, so the correct command is
docker exec -ti container-name bash
This happened to me once for the following reason:
It could be that the bash command inside the container is executing a "cat" command.
So when you attach to the container (the bash command) you are actually inside the cat command, which is expecting input (text and/or Ctrl-D to write the file).
If you cannot access command line, just make sure you run your container with -i flag at start.
I just had a similar problem today and was able to fix it:
Here is what was happening for me:
docker-compose logs -f nginx
Attaching to laradock_nginx_1
Then it would hang there until I quit via CTRL-C: ^CERROR: Aborting.
docker ps -a showed that what SHOULD have been called laradock_nginx did not exist with that Image Name, so I figured I'd just remove and re "up" that container:
docker stop cce0c32f7556
docker rm cce0c32f7556
docker-compose up -d laradock_nginx
Unfortunately: ERROR: No such service: laradock_nginx
So I did a sudo reboot and then docker ps -a, but laradock_nginx still wasn't there.
Luckily, docker-compose up -d nginx then worked and docker-compose logs -f nginx now works.
Use: docker exec -it CONTAINER_ID/NAME bash
instead of: docker attach ...
