Getting all logs from Stdout in Docker - linux

I run a simple Flask application in a docker container.
To run it, I do:
docker run --name my_container_name my_image_name
The logs are redirected to stdout, so after this command I see the following output:
* Serving Flask app 'main'
* Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:8080
* Running on http://172.17.0.2:8080
Press CTRL+C to quit
When I want to get the logs with a separate command, I run:
docker container logs my_container_name
It returns the logs correctly: exactly the same output as above.
But if I try to redirect the output to a file:
docker container logs my_container_name > mylogfile.log
I don't get all the logs! I only get:
* Serving Flask app 'main'
* Debug mode: off
Why is that?

Running the container with a dedicated pseudo-TTY solves the problem:
docker run -t --name my_container_name my_image_name
The -t parameter solved my issue... but I don't understand why.
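The answer above doesn't say why -t helps, but a likely explanation is output buffering: most runtimes, including Python, line-buffer stdout only when it is attached to a terminal. Without -t, stdout inside the container is a pipe, so output is block-buffered and short log lines can sit in the buffer instead of reaching the Docker logging driver. A sketch you can reproduce locally (the PYTHONUNBUFFERED alternative is Python-specific):

```shell
# Python line-buffers stdout only when it is a terminal. Inside
# `docker run` without -t, stdout is a pipe, so it is block-buffered
# and log lines can lag behind. Reproduce the check locally:
in_pipe=$(python3 -c 'import sys; print(sys.stdout.isatty())')
echo "stdout inside a pipe is a tty: $in_pipe"
# With `docker run -t`, Docker allocates a pseudo-TTY, so the same
# check inside the container prints True and output flushes per line.
# A Python-specific alternative to -t is to disable buffering:
#   docker run -e PYTHONUNBUFFERED=1 --name my_container_name my_image_name
```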

No need to redirect logs, just use:
tail -f $(docker inspect --format='{{.LogPath}}' my_container_name)
or, if you don't like that, you can try:
docker logs -f my_container_name &> my_container.log
The trick is the &>, which redirects both stdout and stderr to the file.

Related

Docker container STDOUT not showing in Docker Logs

I am trying to configure my PHP errors to output to docker logs. All documentation I have read indicates that docker logs are tied to the container's stdout and stderr, which come from /proc/self/fd/1 and /proc/self/fd/2. I created a symlink from my PHP error log file /var/log/php_errors.log to /proc/self/fd/1 with this command:
ln -sf /proc/self/fd/1 /var/log/php_errors.log
After linking the error log I have tested its functionality by running this php script:
<?php
error_log("This is a custom error message constructed to test the php error logging functionality of this site.\n");
?>
The output echoes the error message to the console, so I can see that PHP error logging is now redirected to stdout in the container. But when I run docker logs -f <containername> I never see the error message in the logs. Echoing from inside the container doesn't show in the logs either, which is confusing, because my understanding is that echo writes to stdout.
Further reading informed me that docker logs will only show output from PID 1, which could be the issue. If this is the case, how can I correctly configure my PHP error logging to show in docker logs outside the container?
Also I have checked that I am using the default json-file docker driver, and have tried this both on my local environment and a web server.
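One possible explanation (an assumption on my part, not confirmed in the question): /proc/self/fd/1 is re-resolved by whichever process opens it, so the symlink only reaches the container's log stream when the writer's own fd 1 is that stream. A symlink created or tested from a `docker exec` shell writes to the exec session instead, which `docker logs` never sees. A sketch:

```shell
# /proc/self is resolved by whichever process opens the link, so the
# symlink target depends on who writes to it:
readlink /proc/self/fd/1    # where *this* shell's stdout goes
# A `docker exec` shell has its own fd 1, so writes made there never
# reach `docker logs`. Linking to PID 1's stdout removes the ambiguity
# (PID 1 is the process whose streams `docker logs` records):
#   ln -sf /proc/1/fd/1 /var/log/php_errors.log
# Local demonstration with a throwaway path:
ln -sf /proc/self/fd/1 /tmp/php_errors_demo.log
echo "test message" > /tmp/php_errors_demo.log   # lands on this shell's stdout
```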

How to debug docker restart not restarting in node.js app?

I have a container with a docker-compose file like this:
services:
  app:
    build:
      context: app
    restart: always
version: '3.5'
It launches a Node app with docker-compose run -d --name my-app app node myapp.js
The app is made to either run to completion or throw; the goal is then to have Docker restart it in an infinite loop, regardless of the exit code. I'm unsure why, but it doesn't restart it.
How can I debug this? I have no clue what exit code node is sending, nor do I know which exit code docker uses to decide to restart or not.
I am also on Mac; I haven't tested on Linux yet. Edit: it does restart on Linux. I don't have another Mac to see if the behavior is isolated to mine.
It is important to understand the following two concepts:
Ending your Node app doesn't mean the end of your container. Your container runs a shared process from your OS, and your Node app is only a subprocess of that (assuming your application runs with the daemon).
The restart option indicates the "starting" policy; it will never terminate and then start your container again on its own.
Having said that, what you need is a way to really restart your container from within the application. One way to do this is via Docker healthchecks:
https://docs.docker.com/engine/reference/builder/#healthcheck
Or, here are some answers on restarting a container from within the application.
Stopping docker container from inside
From a GitHub issue it seems like docker-compose run does not respect --restart, and from @Charlie's comment it seems the behavior varies from platform to platform.
The docker-compose run command is for running "one-off" or "ad-hoc" tasks. The run command acts like docker run -ti in that it opens an interactive terminal to the container and returns an exit status matching the exit status of the process in the container.
docker-compose run
Also, if it acts like docker run -it, then there is no option for restart=always, but it should still respect the restart option in the compose file.
Usage:
    run [options] [-v VOLUME...] [-p PORT...] [-e KEY=VAL...] [-l KEY=VALUE...]
        SERVICE [COMMAND] [ARGS...]

Options:
    -d, --detach          Detached mode: Run container in the background, print
                          new container name.
    --name NAME           Assign a name to the container
    --entrypoint CMD      Override the entrypoint of the image.
    -e KEY=VAL            Set an environment variable (can be used multiple times)
    -l, --label KEY=VAL   Add or override a label (can be used multiple times)
    -u, --user=""         Run as specified username or uid
    --no-deps             Don't start linked services.
    --rm                  Remove container after run. Ignored in detached mode.
    -p, --publish=[]      Publish a container's port(s) to the host
    --service-ports       Run command with the service's ports enabled and mapped
                          to the host.
    --use-aliases         Use the service's network aliases in the network(s) the
                          container connects to.
    -v, --volume=[]       Bind mount a volume (default [])
    -T                    Disable pseudo-tty allocation. By default `docker-compose run`
                          allocates a TTY.
    -w, --workdir=""      Working directory inside the container
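If the GitHub issue above is right that docker-compose run ignores the restart policy, the practical consequence is: start the service with up instead of run when you want restart: always honored. A sketch (the service name app and container name my-app come from the question):

```shell
# One-off container: the compose file's restart policy is NOT applied
docker-compose run -d --name my-app app node myapp.js

# Service container: `up` applies restart: always on every exit
docker-compose up -d app

# Verify which restart policy a container actually received
docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' my-app
```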

How to redirect all docker-compose logs automatically

I have Node APIs and I run them with the help of docker-compose, hosted on EC2. Whenever I want to check the logs I type docker-compose logs and it prints all the logs to the screen, but how can I save all the logs to a file automatically? What I mean is: when I deploy a new docker container on the server, it should start saving all the logs to a specific file so I can check them later.
I can save docker-compose logs manually by executing this command:
docker-compose logs > logs.txt
You can try a scheduled crontab entry.
For example:
0 1 * * * /bin/sh backup.sh
In your case, I guess that will be something like this:
0 1 * * * docker-compose logs > logs.txt
You can also read more about crontabs here
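Two caveats with that crontab line: cron jobs start in the user's home directory, so you need to cd into the compose project first, and > truncates the file on every run. A sketch that appends instead (the project path below is a placeholder):

```shell
# /path/to/project is hypothetical -- use your compose project directory.
# --no-color keeps ANSI escape codes out of the file; --timestamps
# prefixes each line with when it was logged.
0 1 * * * cd /path/to/project && docker-compose logs --no-color --timestamps >> logs.txt 2>&1
```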

Why is the RUN command in this Dockerfile not working?

I have a docker image (MyBaseImage) with the OpenSSH server installed. In the Dockerfile, I have the following:
#Download base image
FROM MyBaseImage
RUN service ssh start
I build the new image by typing
docker build .
Docker builds the image fine, giving the following information.
Step 1/2 : FROM MyBaseImage
---> 56f88e347f77
Step 2/2 : RUN service ssh start
---> Running in a1afe0c2ce71
* Starting OpenBSD Secure Shell server sshd [ OK ]
Removing intermediate container a1afe0c2ce71
---> 7879cebe8b6a
But when I run the new image by typing
docker run -it 7879cebe8b6a
Typing the following in the terminal of the container
service ssh status
gives
* sshd is not running
I then have to manually start the OpenSSH server by typing service ssh start.
What could be the reason for this?
If you look at your build output, you can see the ssh service start in an intermediate container, which is deleted in the next build step:
---> Running in a1afe0c2ce71
* Starting OpenBSD Secure Shell server sshd [ OK ]
Removing intermediate container a1afe0c2ce71
To start a service in a Dockerfile, you should use either a CMD or an ENTRYPOINT statement as the last line (depending on whether you want to be able to pass arguments on the docker run ... command line).
Generally, however, a service will start in the background as a daemon, so having this as your last line:
CMD ["service", "ssh", "start"]
will not work: the service forks into the background and the container exits because it has nothing left to do.
What you probably want (from the Docker docs) is this:
CMD ["/usr/sbin/sshd", "-D"]
which starts sshd in the foreground, so the container stays alive.
This link has useful info about the difference between CMD and ENTRYPOINT, and also the difference between the exec and shell formats.
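To make the build-time vs run-time split concrete, here is a minimal sketch (base image and package choices are illustrative, not from the question). RUN only persists filesystem changes into the image; any process it starts dies with the intermediate container, while CMD runs as PID 1 of every container started from the image:

```shell
# Write an illustrative Dockerfile (sketch only)
cat > /tmp/Dockerfile.sketch <<'EOF'
FROM ubuntu:22.04
# Build time: filesystem changes persist into the image layer
RUN apt-get update && apt-get install -y openssh-server
# Build time: sshd starts, then dies with the intermediate container
RUN service ssh start
# Run time: executed as PID 1 of each `docker run`, kept in the foreground
CMD ["/usr/sbin/sshd", "-D"]
EOF
```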
Depending on which Linux distro you are using, the command changes slightly.
If you are using Ubuntu, your start command should work.
But if your base image is CentOS/RHEL, try service sshd start.

Error response from daemon: driver failed programming external connectivity on endpoint modest_aryabhata

I'm going through this tutorial
I build the Docker image with: docker build -t myapp_back .
and then want to run the container with: docker run -p 3000:3000 -d myapp_back
It's a simple Node/Express app.
But I'm getting an error:
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error
response from daemon: driver failed programming external connectivity
on endpoint wizardly_wescoff
(a7c53e0d168f915f900e3d67ec72805c2f8e4f5e595f6ae3c7fed8e097886a8b):
Error starting userland proxy: mkdir
/port/tcp:0.0.0.0:3000:tcp:172.17.0.2:3000: input/output error.
What's wrong?
my dockerfile:
FROM node:carbon
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ['npm', 'start']
and start in package.json:
"start": "nodemon src/app.js --exec babel-node"
To solve the following error on Windows, just restart Docker (from the tray menu, or by selecting the 'Restart Docker...' option in Settings/Reset):
Cannot start service YOUR_SERVICE: driver failed programming external connectivity on endpoint ...
Looks like it is a known issue from docker: https://github.com/docker/for-win/issues/573
Try:
disabling "Experimental Features" in the Settings/Daemon menu
restarting docker
stopping all containers.
To stop all containers, run: docker ps -a -q | ForEach { docker stop $_ }
EDIT: working shell script to stop all containers:
for a in `docker ps -a -q`
do
echo "Stopping container - $a"
docker stop $a
done
I just restarted my computer and it works now.
I was able to get Docker working on my Windows 10 PC by resetting Docker to the factory defaults. Restarting Docker and restarting my machine did not work.
On Mac Mojave, run the following command to find which processes are using the port:
sudo lsof -i @localhost:<port_no>
In my case I was checking port 8080, so I ran:
sudo lsof -i @localhost:8080
I found that http-alt was running on port 8080. After getting the process id with the above command, you can kill the process with:
sudo kill -9 <process_id>
However, in my case four applications (ArtemisSe, Mail, Google and Slack) were using http-alt on port 8080. Since they look like important applications, I changed my port and ran the container on 8888 instead of 8080, i.e.:
docker run -it --rm -p 8888:8080 <imageid or image name>
Restarting the computer is not the actual fix, just a workaround that you would need to repeat on a frequent basis.
The problem is related to the default Windows 10 shutdown behaviour.
The actual fix is to disable the Windows fast startup setting:
Control Panel -> Power Options -> Choose what the power button does -> Change settings that are currently unavailable -> Toggle "Turn on fast startup"
I am running under Linux. If I run Docker as root with the sudo command, it works fine.
Just restart Docker: right-click its icon, then Restart. That solved my problem.
In my case it was the same error in a PHP container. I solved it by changing the public port, and it works.
This command threw the error after I restarted Windows 10:
docker run -d -p 8080:80 --name php_apache php_app
Solution:
docker run -d -p 8081:80 --name php_apache php_app
Just run this command to stop all your containers; it worked for me:
for a in $(docker ps -a -q)
do
  echo "Stopping container - $a"
  docker stop $a
done
In some cases, restarting your computer solves the problem, but it is not really the best solution, especially on UNIX-like operating systems.
First of all, you should find out whether another process is already running on the specific port. If the port is already in use by another resource, you should kill the process using it. To do that, run:
sudo lsof -i @localhost:<port number>
The output looks like this:
COMMAND PID USER TYPE SIZE ...
<command> <pid number>
We need the PID, which identifies the process.
Then kill that process by its process id:
sudo kill -9 <pid>
After killing that process, you can run your new container on the port you want.
