Issue in removing docker images remotely on CentOS - linux

I've installed a Kubernetes cluster using Rancher on 5 different CentOS nodes (let's say node1, node2, ..., node5). For our CI runs, we need to clean up stale docker images before each run. I created a script that runs on node1, and password-less ssh is enabled from node1 to the rest of the nodes. The relevant section of the script looks something like this:
#!/bin/bash
helm ls --short --all | xargs -L1 helm delete --purge
echo "Deleting old data and docker images from Rancher host node."
rm -rf /var/lib/hadoop/* /opt/ci/*
docker images | grep localhost | awk '{print $3}' | xargs docker rmi -f
hosts=(node2 node3 node4 node5)
for host in ${hosts[*]}
do
echo "Deleting old data and docker images from ${host}"
ssh root@${host} docker images | grep localhost | awk '{print $3}' | xargs docker rmi -f
ssh root@${host} rm -rf /var/lib/hadoop/* /opt/ci/*
done
echo "All deletions are complete! Proceeding with installation."
sleep 2m
The problem is, while the docker rmi command inside the for loop runs on all 4 of the other nodes, I get the error Error: No such image: <image-id> for each of the images. But if I execute the same command directly on that node, it succeeds. I'm not sure what the issue is here. Any help is appreciated.

The problem is that only the first command in the ssh pipeline is executed remotely:
ssh root@${host} docker images | grep localhost | awk '{print $3}' | xargs docker rmi -f
The shell parses this as
ssh ssh-arguments | grep grep-arguments | awk awk-arguments | xargs xargs-arguments
The result is that only docker images is executed remotely. Its output is then transferred to the local machine, where it is filtered by grep and awk, and docker rmi runs on the local machine against image ids that only exist on the remote node, hence the No such image errors.
Add quotes to tell the shell that the whole pipeline is a single argument to ssh, and escape the awk field reference as \$3 so the local shell does not expand it inside the double quotes:
ssh root@${host} "docker images | grep localhost | awk '{print \$3}' | xargs docker rmi -f"
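Applied to the loop from the question, the quoted version might look like the sketch below (a sketch only; hostnames are taken from the question, and \$3 is escaped so awk receives it instead of the local shell):
hosts=(node2 node3 node4 node5)
for host in "${hosts[@]}"
do
echo "Deleting old data and docker images from ${host}"
ssh root@${host} "docker images | grep localhost | awk '{print \$3}' | xargs docker rmi -f"
ssh root@${host} "rm -rf /var/lib/hadoop/* /opt/ci/*"
done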

Related

Why does updating the docker image base version from node 14.19.0 to node 14.19.1 change the linux machine id?

Problem: I'm using the node-machine-id library inside a container.
This library reads the machine id from one of the files /var/lib/dbus/machine-id or /etc/machine-id, or creates a new one if neither exists.
While updating my Docker node image from 14.19.0 to 14.19.1, I found that the two images behave differently with respect to machine-id.
My observations:
The same OS version is used in both images, so there is no difference in OS behaviour.
In the image for Node version 14.19.1, the file /etc/machine-id does not exist while the container is running.
In the image for Node version 14.19.0, the file /etc/machine-id does exist while the container is running.
My code:
Dockerfile:
FROM node:14.19.1
CMD uname -a && node --version && cat /etc/machine-id /var/lib/dbus/machine-id
The command I use to test (build the image, run the container, delete the container, and delete the image):
temp_image_name=nodeimageupdate ; docker rm $(docker ps --filter status=exited | grep $temp_image_name) 2>/dev/null ; docker build -t $temp_image_name . ; docker run -d -it $temp_image_name ; sleep 1 ; docker logs $(docker ps -a | grep $temp_image_name | awk '{print $1;}' | xargs echo ) ; docker rm $(docker ps --filter status=exited | grep $temp_image_name | awk '{print $1;}' | xargs echo); docker rmi -f $(docker images | grep $temp_image_name | awk '{print $3;}' | xargs echo ) ;
Result for node 14.19.1 (no machine-id file), so running the container twice yields a non-static machine id.
Result for node 14.19.0 (machine-id file exists), so running the container twice yields the same static machine id.
I cannot figure out why that happens. Any help?
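For what it's worth, a quicker way to compare the two images than the full build/run/rm cycle above is to run the stock node images directly and check for the file (this only confirms whether /etc/machine-id is present in each image; it does not explain why the base images differ):
docker run --rm node:14.19.0 sh -c 'ls -l /etc/machine-id && cat /etc/machine-id'
docker run --rm node:14.19.1 sh -c 'ls -l /etc/machine-id && cat /etc/machine-id'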

Can't execute a command stored in a txt file in bash successfully

I have to make ssh requests to different nodes based on a given IP.
The different ssh commands are stored in node.txt as below:
0.0.0.0 a ssh -t user@0.0.0.0 sudo -u node0 /path/script.sh
0.0.0.1 b ssh -t user@0.0.0.1 sudo -u node1 /path/script.sh
0.0.0.2 c ssh -t user@0.0.0.2 sudo -u node2 /path/script.sh
I try to grep the needed ssh command like this:
comm=$(grep 0.0.0.2 node.txt | grep c | cut -f3)
when I run
status=$($comm)
the following error appears:
/path/script.sh not found
while if I hard-code the command in the script itself, it works correctly:
comm='ssh -t user@0.0.0.2 sudo -u node2 /path/script.sh'
status=$($comm)
what could be the problem here?
@Karim Ater
I cannot confirm the contents of node.txt, so I don't know which delimiter it uses; cut -f3 splits on tabs by default, so it may not be extracting the full ssh command. Try the following on your system and check what gets printed:
    comm=$(grep 0.0.0.2 node.txt | grep c | cut -f3)
    echo $comm
Because the delimiter is uncertain, I used sed instead of cut:
    comm=$(grep 0.0.0.2 node.txt | grep c | sed "s/.*ssh/ssh/;")
    echo $comm
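If the fields in node.txt are separated by single spaces rather than tabs (an assumption; the delimiter is not confirmed above), cut can also work with an explicit delimiter and an open-ended field range, roughly like this:
comm=$(grep 0.0.0.2 node.txt | grep c | cut -d' ' -f3-)
echo "$comm"
status=$($comm)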

docker - pull image and get its id (linux cli)

I am running this command in a centos7 terminal:
docker pull www.someRepository.com/authorization:latest
Now I want to run the "docker run" command, but I need to know the id of the image that was created:
id=$(commandThatParsesTheId)
Is there a command that gets the id back from the "docker images" list?
You can use the name of the image and its tag.
Or you can use:
docker images -q | grep yourimagename
docker images | grep yourimagename | awk '{print $3}'
You might be able to clean this up a bit, but the following should work:
docker pull <someimage> | grep "Digest:" | cut -f2 -d " " > container_digest
docker images --digests | grep $(cat container_digest) | sed -Ee 's/\s+/ /g' | cut -f4 -d " "
It maps the digest you receive when you pull an image to the image ID.
You can also run the "docker run" command with the image name instead of the image id:
docker pull www.someRepository.com/authorization:latest
docker run -i -t www.someRepository.com/authorization:latest "/bin/bash"
The above runs the container in interactive mode.
You can also get the image id and use it in the docker run command:
docker images
docker run -i -t <image-id> "/bin/bash"
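If you specifically want the id in a variable, a sketch using docker's --format option (with the repo:tag from the question) avoids the grep/awk parsing entirely:
id=$(docker images --format '{{.ID}}' www.someRepository.com/authorization:latest)
docker run -i -t "$id" "/bin/bash"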

Update (pull) all docker images at once

Is there a command to update (pull) all the downloaded Docker images at once in the terminal?
No, there is no built-in command to pull all docker images at once.
But you can try this (multi-line) bash loop using docker's --format option:
for image in $(docker images --format "{{.Repository}}:{{.Tag}}" | grep -v '<none>')
do
docker pull $image
done
Or in one line:
for image in $(docker images --format "{{.Repository}}:{{.Tag}}" | grep -v '<none>'); do docker pull $image; done;
You can use this:
docker images | awk '{print $1":"$2}' | grep -v REPOSITORY | xargs -L1 docker pull
The most "dockerist" way to do it is:
docker images --format "{{.Repository}}:{{.Tag}}" | xargs -L1 docker pull
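If GNU xargs is available, the pulls can also run in parallel by adding -P, for example four at a time (adjust the number to taste):
docker images --format "{{.Repository}}:{{.Tag}}" | grep -v '<none>' | xargs -L1 -P4 docker pull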

Problems with fish shell and ssh remote commands

I use fish shell on my desktop.
We use many servers running nginx within docker. I've tried to create a function so I can ssh to the servers and then log into the docker container.
The problem is fish is complaining about the $ in the command, but the command is the one to be executed on the remote server (running bash), not on my machine running fish. I've simplified the script to make it easier to see.
config.fish snippet
function ssh-docker-nginx
ssh -t sysadmin@10.10.10.10 "sudo bash && docker exec -it $(docker ps | grep -i nginx | awk '{print $1}') bash"
end
Fish error:
$(...) is not supported. In fish, please use '(docker)'.
~/.config/fish/config.fish (line 59): ssh -t sysadmin@10.10.10.10 "sudo bash && docker exec -it $(docker ps | grep -i nginx | awk '{print $1}') bash"
^
from sourcing file ~/.config/fish/config.fish
called during startup
Is there a way to get fish to ignore this?
You'll want to single-quote that argument.
In double-quotes (") fish will try to expand everything that starts with a $, so it will see that $( and then print the error for it. But it will also see the $1 in your arguments to awk and expand that.
And when you want single-quotes to go to the called command (like here, where you want the argument to awk to be single-quoted because this'll go through bash's expansion), you need to escape the quotes with \.
Try
ssh -t sysadmin@10.10.10.10 'sudo bash && docker exec -it $(docker ps | grep -i nginx | awk \'{print $1}\') bash'
Thanks for the great advice and tip above about the single/double quotes. Unfortunately the escaped quotes in awk did not play nicely being passed to ssh.
After trying various options, I settled on this approach (which needed a forced tty):
function ssh-docker-nginx
cat docker-bash.sh | ssh -t -t sysadmin@10.10.10.10
end
# docker-bash.sh
#!/bin/bash
sudo chmod 777 /var/run/docker.sock
sudo docker exec -it $(docker ps | grep -i nginx | awk '{print $1}') bash
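As an aside, if the container's name contains "nginx" (an assumption about this setup), letting docker do the filtering avoids the awk quoting problem in docker-bash.sh altogether:
sudo docker exec -it $(docker ps -q -f name=nginx) bash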
