All combined docker logs with container name - linux

So I am trying to get the combined output of all the container logs, each line prefixed with its container name, into a log file. Something like:
docker logs --tail 3 <container name> >> logs.log

docker logs takes a single container, so you would have to run it once per container. I guess I could do something along the lines of:
docker ps --format='{{.Names}}' | xargs -P0 -d '\n' -n1 sh -c 'docker logs "$1" | sed "s/^/$1: /"' _
docker ps --format='{{.Names}}' - print the container names
xargs - build and run a command for each input item
-d '\n' - treat each input line as one argument
-P0 - execute in parallel, with no limit on the number of parallel jobs
remove this option if you don't intend to do docker logs --follow
parallel output may interleave; consider adding stdbuf -oL and sed -u to unbuffer the streams (see the sketch after this list)
-n1 - pass one argument to the underlying process
sh -c 'script' _ - execute the script for each line, with the line passed as the first positional argument
docker logs "$1" - get the logs
sed "s/^/$1: /" - prepend the container name to each log line
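For the --follow case mentioned above, a sketch with the unbuffering applied (assuming GNU sed for -u and GNU coreutils for stdbuf):
docker ps --format='{{.Names}}' | xargs -P0 -d '\n' -n1 sh -c 'stdbuf -oL docker logs -f "$1" 2>&1 | sed -u "s/^/$1: /"' _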
But a far better, industrial-grade solution would be to forward the Docker logs to journald or another logging backend, and use that system's tooling to aggregate and filter the logs.
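For example, a minimal sketch of the journald route (the journald log driver and the CONTAINER_NAME journal field are standard Docker/systemd features; mycontainer is a placeholder name):
# /etc/docker/daemon.json
{
  "log-driver": "journald"
}
# restart the daemon, then filter by container name:
sudo systemctl restart docker
journalctl -f CONTAINER_NAME=mycontainer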

Got it.
for i in $(docker ps -a -q); do { docker ps -a --format "table {{.ID}}\t{{.Names}}" | grep "$i"; docker logs --timestamps --tail 1 "$i"; } >> logs.log 2>&1; done
logs.log is a generic file name.
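A variant that avoids the grep round-trip, since docker ps --format can print the ID and name together (a sketch, not the poster's exact command):
docker ps -a --format '{{.ID}} {{.Names}}' | while read -r id name; do
  { echo "== $id ($name) =="; docker logs --timestamps --tail 1 "$id" 2>&1; } >> logs.log
done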

Related

Is it possible to suppress specific messages from docker logs?

I have thousands of INFO reaped unknown pid XXXXX messages in my docker logs when I do a docker logs -f. Is there any way to suppress these messages via a regex?
Example format: [DATE] [TIME] INFO reaped unknown pid XXXX
You could use grep -v -e '<REGEXP>'
Example:
docker logs <CONTAINER_ID> -f 2>&1 | grep -v -e "2019.*INFO"
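If the date-based pattern is too broad, a sketch that targets just the reaper lines (the exact message format is assumed from the question):
docker logs <CONTAINER_ID> -f 2>&1 | grep -v -E 'INFO reaped unknown pid [0-9]+'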

Issue in removing docker images remotely on CentOS

I've installed a Kubernetes cluster using Rancher on 5 different CentOS nodes (let's say node1, node2, ..., node5). For our CI runs, we need to clean up stale docker images before each run. I created a script that runs on node1, and password-less ssh is enabled from node1 to the rest of the nodes. The relevant section of the script looks something like this:
#!/bin/bash
helm ls --short --all | xargs -L1 helm delete --purge
echo "Deleting old data and docker images from Rancher host node."
rm -rf /var/lib/hadoop/* /opt/ci/*
docker images | grep localhost | awk '{print $3}' | xargs docker rmi -f
hosts=(node2 node3 node4 node5)
for host in ${hosts[*]}
do
echo "Deleting old data and docker images from ${host}"
ssh root@${host} docker images | grep localhost | awk '{print $3}' | xargs docker rmi -f
ssh root@${host} rm -rf /var/lib/hadoop/* /opt/ci/*
done
echo "All deletions are complete! Proceeding with installation."
sleep 2m
The problem is, while the docker rmi command inside the for loop runs for all other 4 nodes, I get the error Error: No such image: <image-id> for each of the images. But if I execute the same command directly on that node, it succeeds. I'm not sure what the issue is here. Any help is appreciated.
The problem is that only the first command in the ssh pipeline is executed remotely:
ssh root@${host} docker images | grep localhost | awk '{print $3}' | xargs docker rmi -f
The shell parses it as
ssh ssh-arguments | grep grep-arguments | awk awk-arguments | xargs xargs-arguments
so only docker images is executed remotely. The output of the remote docker images is then transferred to the local machine, where it is filtered by grep and awk, and docker rmi is executed on the local machine.
It is necessary to add quotes to tell the shell that the whole pipeline is a single ssh argument (note that $3 must also be escaped, so it is expanded by the remote awk rather than by the local shell):
ssh root@${host} "docker images | grep localhost | awk '{print \$3}' | xargs docker rmi -f"
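An alternative that sidesteps the quoting and escaping entirely is to feed the commands to a remote shell on stdin (a sketch; the quoted 'EOF' delimiter prevents any local expansion):
ssh root@${host} bash -s <<'EOF'
docker images | grep localhost | awk '{print $3}' | xargs docker rmi -f
rm -rf /var/lib/hadoop/* /opt/ci/*
EOF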

docker - pull image and get its id (linux cli)

I am running this command in a CentOS 7 terminal:
docker pull www.someRepository.com/authorization:latest
Now I want to run the docker run command, but I need to know the ID of the image that was created:
id=$(commandThatParsesTheId)
Is there a command that gets the ID back from the docker images list?
You can use the name of the image and its tag.
Or you can use:
docker images -q | grep yourimagename
docker images | grep yourimagename | awk '{print $3}'
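Note that docker images -q also accepts a name:tag reference and prints just the ID, so a sketch like this (reusing the repository from the question) avoids grep entirely:
id=$(docker images -q www.someRepository.com/authorization:latest)
docker run -i -t "$id" "/bin/bash"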
You might be able to clean this up a bit, but the following should work:
docker pull <someimage> | grep "Digest:" | cut -f2 -d " " > container_digest
docker images --digests | grep $(cat container_digest) | sed -Ee 's/\s+/ /g' | cut -f4 -d " "
It maps the digest you receive when you pull an image to the image ID.
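Another option is docker inspect, which prints the full image ID straight from the name:tag reference (a sketch using the repository from the question):
docker inspect --format '{{.Id}}' www.someRepository.com/authorization:latest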
You can run the docker run command with the image name instead of the image ID:
docker pull www.someRepository.com/authorization:latest
docker run -i -t www.someRepository.com/authorization:latest "/bin/bash"
The above runs the container in interactive mode.
You can also get the image ID and pass that to docker run:
docker images
docker run -i -t <image_id> "/bin/bash"

How to get current foldername and remove characters from the name in bash

I'm trying to write a single command in my makefile to get the current folder name and remove all "." characters from it.
I can get the current folder with $${PWD##*/} and I can remove the "."s with $${PWD//.}, but I can't figure out how to combine the two into one expression.
The reason I need this is to kill my docker containers based on name of project. This is my command:
docker ps -q --filter name="mycontainer" | xargs -r docker stop
and i was hoping I could inject the project name before my container name like this:
docker ps -q --filter name=$${PWD##*/}"_mycontainer" | xargs -r docker stop
You can try:
var=$(echo ${PWD##*/} | sed "s/\.//g")
or:
var=$(tmp=${PWD##*/} && printf '%s' "${tmp//./}")
In your use case will be something like:
docker ps -q --filter name=$(tmp=${PWD##*/} && printf "%s_mycontainer" "${tmp//./}") | xargs -r docker stop
Note that there are more ways to do this (some even more efficient).
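In a makefile this could look like the sketch below; SHELL is set to bash because the ${var//.} expansion is a bashism and make's default shell is /bin/sh (the stop target name is just an example):
SHELL := /bin/bash

stop:
	name=$${PWD##*/}; docker ps -q --filter name="$${name//./}_mycontainer" | xargs -r docker stop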

tail command with dynamic file parameter

I am now using the tail command as below
show_log.sh:
LOGFILE=`ls -1 -r ./myservice.log.????????.?????? | head -n 1`
tail -v -f -s 1 -n 100 ${LOGFILE}
to monitor the log file.
The problem with it is that after each service restart a new log file is created and the prior log file is compressed, so the tail command stops working.
I need to change the script so that it continues tailing the new file.
Found a way. ojblass's suggestion of the capital -F parameter helped.
Actually I created a link to the latest log file by the following command after each service restart:
ln -sfn service-blabla.log log_lnk
and changed the tail command like this:
tail -v -F -s 1 -n 100 log_lnk
Note the capital F in the tail command. Lowercase f doesn't work in this situation.
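Putting it together, the restart hook could refresh the link from the newest matching file, reusing the glob from the original script (a sketch; ls -1t sorts by modification time, newest first):
ln -sfn "$(ls -1t ./myservice.log.????????.?????? | head -n 1)" log_lnk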
done.
