How can I delete partially built Docker images? (Linux)

I have three parallel Jenkins builds. If one of them (the Linux build) fails, the remaining builds (two Windows builds that take much longer and run on a single Windows Jenkins node) are interrupted.
If they are interrupted after their docker build -t mytag:myname ... has finished executing, everything is fine: the images are deleted in the Jenkins post always section by docker rmi mytag:myname.
However, if the Windows builds are interrupted before docker build finishes, the partially built images are left unnamed/untagged and remain undeleted after the failed Jenkins job, and I am afraid of eating up all the storage on the Windows Jenkins node.
I must also mention that I cannot run parallel docker prune commands in Jenkins on the Windows node: because these commands end up executing at the same time on the single Windows node, I get Error response from daemon: a prune operation is already running.
The only idea I have so far is a weekly cron job on the Linux/Windows Jenkins nodes that runs docker system prune -a every Sunday.
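For reference, a minimal sketch of that weekly cron entry on the Linux node (standard crontab syntax; the -f flag is added here to skip the confirmation prompt):
0 3 * * 0 docker system prune -a -f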
I would really appreciate any ideas on how these partially built images could be eliminated on the Windows node.

To remove untagged/dangling images, you can try the command:
docker rmi $(docker image ls -q -f dangling=true)
A quick explanation of what this command does:
docker rmi - removes the images with the given IDs.
$(...) - executes a subcommand and substitutes its output.
docker image ls - lists all images.
-q - shows only the image IDs.
-f dangling=true - -f applies a filter; here we filter for dangling (untagged) images.
As a whole, the subcommand yields the IDs of all unused, untagged images, and the main command removes the images with those IDs.
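Note that the $(...) substitution is Bourne-shell syntax and is not available in cmd on the Windows node. The built-in prune subcommand achieves the same result there (without -a it removes dangling images only; -f skips the confirmation prompt), though, as the question notes, two prune operations cannot run concurrently against one daemon:
docker image prune -f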
Sources:
https://docs.docker.com/engine/reference/commandline/image_ls/
https://docs.docker.com/engine/reference/commandline/rmi/
https://levelup.gitconnected.com/docker-remove-dangling-aka-none-images-8593ef60deda

Related

How to make multiple independent attachments to the same docker container?

Maybe a trivial question, but here is my problem:
I attached to a running Docker container; after some use, I needed to run a unit test and gdb at the same time.
So I opened another shell tab (Konsole tab) and attached again to the same container with $ docker attach container_name, but everything I did echoed in both attachments. If I executed cd /home/user/folder_foo, the other tab would "do the same": both Konsole tabs ended up in the same folder, as if the same command were echoed to both tabs. Maybe it is a single-session structure and what I want isn't even possible.
I really need to do two things in parallel in the same Docker container. How can that be done?
$ docker --version
Docker version 20.10.9, build c2ea9bc
I am using Ubuntu 21.04
Run multiple services in a container
It is generally recommended that you separate areas of concern by using one service per container.
But for development purposes, you can follow the guide mentioned above.
Additionally, a similar answer has already been provided:
You can run docker exec -it <container> bash from multiple terminals to launch several sessions connected to the same container.
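The echoing happens because docker attach connects every attached terminal to the container's single main process, so all attachments share the same stdin/stdout. docker exec, by contrast, starts a new process on each invocation. A minimal sketch, assuming the container is named container_name and has bash installed:
# terminal 1
docker exec -it container_name bash
# terminal 2 - an independent shell with its own working directory
docker exec -it container_name bash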

Python Script to run commands in a Docker container

I am currently working on automating commands for a Docker container with a Python script on the host machine. For now, this Python script builds and runs a docker-compose file, with the commands for the containers written into the docker-compose file and the Dockerfile itself.
What I want is for the Python script to drive all the commands that run within the container, so that if I have different scripts to run, I do not have to change the container. I have tried two approaches.
The first was to call os.system() from the Python script; however, this only gets as far as opening a shell for the container: the os.system() call does not execute code inside the Docker container itself.
The second uses CMD within the Dockerfile; however, this is limited and hard-coded into the container. If I have multiple scripts, I have to change the Dockerfile, which I don't want. What I want is to build a default container with all services running, then run Python scripts on the host that execute a sequence of commands in the container.
I am fairly new to Docker and think there must be something I am overlooking to run scripted commands on the container. One possible solution I have come across is nsenter. Is this a reliable solution, and how does it work? Or is there a much simpler way? I have also used a Docker volume to copy the Python files into the container to be run on build; however, I still cannot find a way to automate accessing and running these Python scripts from the host machine.
If the scripts need to be copied into a running container, you can do this via the docker cp command, e.g. docker cp myscript.sh mycontainer:/working/dir.
Once the scripts are in the container, you can run them via a docker exec command, e.g. docker exec -it mycontainer /working/dir/myscript.sh.
Note that this isn't common practice. Typically the script(s) you need would be built (not copied) into the container image(s). Then, when you want to execute a script within a container, you would run the container via a docker run command, e.g. docker run -it mycontainerimage /working/dir/myscript.sh
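Tying this back to the question's Python script on the host, the same docker cp / docker exec flow can be driven from Python with subprocess. A minimal sketch, where mycontainer and myscript.sh are the hypothetical names from the answer above:
import subprocess

CONTAINER = "mycontainer"   # hypothetical: name of your running container
SCRIPT = "myscript.sh"      # hypothetical: script on the host to run
WORKDIR = "/working/dir"    # hypothetical: target directory inside the container

# Copy the script into the running container (docker cp).
subprocess.run(["docker", "cp", SCRIPT, f"{CONTAINER}:{WORKDIR}/{SCRIPT}"], check=True)

# Execute it inside the container (docker exec) and capture its output.
result = subprocess.run(
    ["docker", "exec", CONTAINER, f"{WORKDIR}/{SCRIPT}"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)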

Some questions on Boot2docker setup for build and run

I'm a complete beginner in bioinformatics. Recently, I started learning it with the book "Bioinformatics with Python Cookbook" (by Antao, Tiago). I ran into some issues while setting up Docker for Linux. Please see below:
I was trying to set up the Docker files following the author's instructions, but I found some files "failed to download".
docker build -t bio https://raw.githubusercontent.com/tiagoantao/bioinf-python/master/docker/2/Dockerfile
Then I still went ahead and set up the container following the instruction:
"Now, you are ready to run the container, as follows: docker run -ti -p 9875:9875 -v YOUR_DIRECTORY:/data bio"
I typed: docker run -ti -p 9875:9875 -v C:/Users/guangliang/Desktop/Bioinformation/data bio
However, it gave me an error saying "Unable to find image 'bio:latest' locally".
Can anyone give me any suggestions on this? My guess is that in the first step I failed to download some of the files needed to set up Docker, but I am not sure how to fetch them.
Thank you so much for any comments!
Best regards
Johnny
I tried downloading the Docker files a few times, but the error still appears:
docker build -t bio https://raw.githubusercontent.com/tiagoantao/bioinf-python/master/docker/2/Dockerfile
docker run -ti -p 9875:9875 -v C:/Users/guangliang/Desktop/Bioinformation/data bio
In the first issue, some files "failed to download".
In the second issue, the error "Unable to find image 'bio:latest' locally" appears.
Here you have a couple of problems:
1) It looks like you did not download that Dockerfile and build the required Docker image locally.
2) You are getting the error about not finding the image locally because of the previous problem.
So, you should do the following:
1) Download the Dockerfile (https://raw.githubusercontent.com/tiagoantao/bioinf-python/master/docker/2/Dockerfile). If you can't download the file for some reason, just open it on GitHub, select all the content, copy it, then create a new file named "Dockerfile" in a folder on your computer and paste the content into it.
2) Build the image locally: go to the folder where you downloaded the Dockerfile and execute the following command:
docker build -t bio .
3) Run your container with the docker run ... command, for example as sketched below.
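Putting the steps together (a sketch; the Windows path is the one from the question, with the :/data mount target from the book's instruction restored):
docker build -t bio .
docker run -ti -p 9875:9875 -v C:/Users/guangliang/Desktop/Bioinformation/data:/data bio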

Docker run error: "Thin Pool has free data blocks which is less than minimum required"

We are trying to run a Docker container in a way that used to work, but now we get a "Thin Pool" lack-of-space error:
docker run --privileged -d --net=host --name=fat-redis -v /fat/deploy:/fat/deploy -v /fat/fat-redis/var/log:/var/log -v /home:/home fat-local.indy.xiolab.myserv.com/fat-redis:latest /fat/deploy/docker/fat-redis/fat_start_docker_inner.sh
docker: Error response from daemon: devmapper: Thin Pool has 486 free data blocks which is less than minimum required 163840 free data blocks. Create more free space in thin pool or use dm.min_free_space option to change behavior.
See 'docker run --help'.
What does this error mean?
We tried 'docker rmi' and the advice from here, but all in vain.
Any ideas?
Thank you
Running with data/metadata over loopback devices was the default on older versions of Docker. There are problems with this, and newer versions have changed the default. If Docker was configured this way, normal updates (e.g. through rpm/apt) don't change the configuration, which is why a full reinstall was required to fix it.
Here's an article with instructions on how to configure older versions to not use loopback devices:
http://www.projectatomic.io/blog/2015/06/notes-on-fedora-centos-and-docker-storage-drivers/
You don't have to reinstall Docker. Rather, you can clean up all the containers, images, volumes, etc. under the /var/lib/docker directory.
The images can be pulled from your Docker repositories again. (This assumes you only use this Docker host for building Docker images.)
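A hedged sketch of that cleanup on a systemd-based host (destructive: it deletes all local containers, images, and volumes, so stop the daemon first):
sudo systemctl stop docker
sudo rm -rf /var/lib/docker
sudo systemctl start docker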
My issue was unrelated to the loopback device problem but was generating the same error condition. "docker images -a" showed a number of <none>/<none> images taking up space. These images were not "dangling"; they were referenced by a current active image and could not be deleted.
My solution was to run "docker save" and write the active image to a tar file, delete the active image (which deleted all of its child images), then run "docker load -i" from the tar file to create a single new image, as sketched below. No more errors related to Thin Pool space.
Reinstalling Docker would have corrected it, simply because a reinstall clears out all images, but the images would have built up again and I would have re-encountered this issue in the future.
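A sketch of that save/delete/load workflow, with myimage:latest standing in for the actual active image name:
docker save -o myimage.tar myimage:latest
docker rmi myimage:latest
docker load -i myimage.tar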
Use the following to clean up unnecessary images:
docker image prune -a --force --filter "until=240h"
Refer to this document for more details: https://docs.docker.com/engine/reference/commandline/image_prune/
TL;DR
Sometimes you just need more space. Increase the data file with the truncate command.
Explanation:
The reason that a reinstall or a purge of all your images works is that Docker uses a loopback-backed data file (a "ramdisk" of sorts) as the space in which images are built and run, and that space is not reclaimed afterwards. If you run several different images, you can fill up this scratch space, and then a newer image doesn't have enough room to run in.
The docker system prune command won't help because that space is legitimately consumed. You need to increase the size of the data file.
Make sure you have extra physical space on disk
df
Figure out the size of your data file
docker info | grep 'Data Space'
Find the location of your data file
docker info | grep 'loop file'
Increase the size of your data file (+50G or whatever)
sudo truncate -s 150G /var/lib/docker/devicemapper/devicemapper/data
Restart the machine. The guide talks about a bunch of commands to "cascade" the resize through the layers, but a restart handled them automatically:
sudo reboot
References:
{all the SO posts that complained about the loopback driver being outdated}
https://docs.docker.com/storage/storagedriver/device-mapper-driver/#use-operating-system-utilities
It turned out that re-installing Docker did the trick.
Use the following link: https://docs.docker.com/engine/installation/linux/centos/
Cheers

Running Docker Image

The user guide states that an image should be run as follows:
docker run -t -i ubuntu /bin/bash
I get that -t creates the pseudo-terminal and -i makes it interactive. But it seems that the /bin/bash part is unnecessary: whether I run it with or without /bin/bash, I'm given an interactive prompt that I can read from and write to.
root@77eeb1f4ac2a:/#
Why do we need /bin/bash?
Part 2
I'm running on Docker for Mac. When I download the hello-world binary and run it, it's only 1 kB. Obviously a Linux image wasn't downloaded with it. Is the small hello-world binary running off my Mac kernel, or off a small Linux kernel that comes with Docker for Mac?
Why do we need /bin/bash?
Because while the ubuntu image may be configured to run /bin/bash by default, that's not going to be true of every image. If you have an image that starts a web server by default and you want to run bash, you need to make that explicit. Some images don't specify any default command at all, leading to:
$ docker run -it alpine
docker: Error response from daemon: No command specified.
It never hurts to be explicit when starting a container, especially with an image that you didn't build yourself.
When I download the hello-world binary and run it...
Which hello-world binary?
"but is a VM of Linux executing it or is my mac executing it?"
Docker only runs under Linux. When you use Docker on OS X or Windows, you are running containers inside a Linux VM spawned for that purpose by docker-machine (or, previously, boot2docker). Under Windows, Docker uses Hyper-V; on OS X it previously used VirtualBox and in more recent versions may use something else (it's been a while since I've run Docker under OS X).
Part 1:
Whatever you pass after docker run -t -i ubuntu is the first command that your container will run. You can try /bin/bash, /bin/sh, or even echo hello and see it in action. Ubuntu uses bash by default, but other containers use other commands based on their Dockerfiles.
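For example, overriding the default with a one-off command:
$ docker run -it ubuntu echo hello
hello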
Part 2:
When you run hello-world, a Docker container is created from the hello-world image. Containers "include the application and all of its dependencies, but share the kernel with other containers, running as isolated processes in user space on the host operating system".
hello-world specifically is built from scratch: https://hub.docker.com/_/scratch/
