docker copy all files in directory with given extension - linux

I can copy one file from a docker container to the server with
docker cp docker_session_name:/root/mydir/ .
I would like to know how to copy only the files from mydir with a given extension, say, pdf.

I don't think you can do this with the docker cp command.
To do this, you can mount the directory into the container and then run a regular cp command with a wildcard to copy the files to another directory.
Mount:
docker run -d --name containerName -v myvol2:/app imageName:tag
Inside Container:
cp app/*.pdf /destination
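Note that for the copied files to actually reach the host, /destination itself needs to be a bind mount. A minimal sketch combining both mounts (the host path /host/output is an assumption):
docker run -d --name containerName \
  -v /host/output:/destination \
  -v myvol2:/app \
  imageName:tag
# run the copy from the host, without entering the container
docker exec containerName sh -c 'cp /app/*.pdf /destination/'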

it looks like you can't just use a wildcard as you would in Linux (see this similar thread)
Not like this:
docker cp docker_session_name:/root/mydir/*.pdf .
simple answer
use this script:
path="/root/mydir"
for file in $(docker exec docker_session_name sh -c "ls ${path}/*.pdf"); do
    docker cp docker_session_name:${file} .
done
credits to this thread
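An alternative sketch that avoids the loop, assuming tar is available inside the container, is to stream the matching files out in one go:
docker exec docker_session_name sh -c 'cd /root/mydir && tar -cf - *.pdf' | tar -xf -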
cumbersome answer with easier use (no script)
you could, however, set up a bind mount between the host and the wanted path, like so in the docker run command:
docker run -v /host/path/:/root/mydir/ my-image
then run cp with wildcard *.pdf from the host path of /host/path/ used in the docker run command
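For example, from the host (the destination directory is a placeholder):
cp /host/path/*.pdf /some/destination/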

Related

create the file or copy the file from docker container to host

Below is my code:
docker container run -it ubuntu /bin/bash -c "touch /root/test.txt"
What I want is for this file to be created on my host. Can someone help me, please?
One option is to mount a directory from the host into the container, and touch the .txt file at the mounted location from inside the container.
This example will create the test.txt file on the docker host, in the current working directory:
docker run -it -v $(pwd):/root/mount-from-host ubuntu /bin/bash -c "touch /root/mount-from-host/test.txt"
ls
test.txt <<--file on host, created by the container

Docker: mount image's original /docker-entrypoint.sh to a volume in read/write mode

I am trying to mount the image's original /docker-entrypoint.sh to a volume in read/write mode, so that I can change it easily from outside (without entering the container) and then restart the container to observe the changes.
I do it (in ansible) like this:
/app/docker-entrypoint.sh:/docker-entrypoint.sh:rw
If /app/docker-entrypoint.sh doesn't exist on the host, a directory named /app/docker-entrypoint.sh (not a file, as intended) is created, and I get the following error:
Error starting container e40a90eef1525f554e6078e20b3ab5d1c4b27ad2a7d73aa3bf4f7c6aa337be4f: 400 Client Error: Bad Request ("OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused "rootfs_linux.go:58: mounting "/app/docker-entrypoint.sh" to rootfs "/var/lib/docker/devicemapper/mnt/4d3e4f4082ca621ab5f3a4ec3f810b967634b1703fd71ec00b4860c59494659a/rootfs" at "/var/lib/docker/devicemapper/mnt/4d3e4f4082ca621ab5f3a4ec3f810b967634b1703fd71ec00b4860c59494659a/rootfs/docker-entrypoint.sh" caused "not a directory"""): unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
If I touch /app/docker-entrypoint.sh (and set proper permissions) before launching the container, the container fails to start up and keeps restarting (I assume because /app/docker-entrypoint.sh, and therefore the internal /docker-entrypoint.sh, is empty).
How can I mount the original content of container's /docker-entrypoint.sh to the outside?
If you want to override the entrypoint, it must be executable: you have to chmod +x your_mount_entrypoint.sh on the host before you can mount it, otherwise it will throw a permission error, since an entrypoint script has to be executable.
Second, as mentioned in the comments, it is better to keep the entrypoint script in a directory, like docker-entrypoint/entrypoint.sh, and mount that.
Or, if you want to mount the specific file, both names should be the same; otherwise the entrypoint script will not be overridden.
docker run --name test -v $PWD/entrypoint.sh:/docker-entrypoint/entrypoint.sh --rm my_image
or
docker run --name test -v $PWD/entrypoint.sh:/docker-entrypoint/entrypoint.sh:rw --rm my_image
See this example: the entrypoint is generated inside the Dockerfile, and you can override it from any script, but that script must be executable and must be mounted over docker-entrypoint.
Dockerfile
FROM alpine
RUN mkdir -p /docker-entrypoint
RUN echo -e $'#!/bin/sh \n\
echo "hello from docker generated entrypoint" >> /test.txt \n\
tail -f /test.txt ' >> /docker-entrypoint/entrypoint.sh
RUN chmod +x /docker-entrypoint/entrypoint.sh
ENTRYPOINT ["/docker-entrypoint/entrypoint.sh"]
If you build and run it, you will see:
docker build -t my_image .
docker run -t --rm my_image
#output
hello from docker generated entrypoint
Now, if you want to override it:
Create the script and set execute permissions:
host_path/entrypoint/entrypoint.sh
For example, entrypoint.sh:
#!/bin/sh
echo "hello from entrypoint using mounted"
Now run
docker run --name test -v $PWD/:/docker-entrypoint/ --rm my_image
#output
hello from entrypoint using mounted
Update:
If you mount a host directory over the entrypoint's directory, it will hide the content of the docker image at that path.
The workaround:
Mount some directory other than the entrypoint's (the example below uses /for_hostsystem).
Add an instruction in the entrypoint to copy the entrypoint to that location at run time.
This way a new file will be created in the host directory instead.
FROM alpine
RUN mkdir -p /docker-entrypoint
RUN touch /docker-entrypoint/entrypoint.sh
RUN echo -e $'#!/bin/sh \n\
echo "hello from entrypoint" \n\
cp /docker-entrypoint/entrypoint.sh /for_hostsystem/ ' >> /docker-entrypoint/entrypoint.sh
RUN chmod +x /docker-entrypoint/entrypoint.sh
ENTRYPOINT ["/docker-entrypoint/entrypoint.sh"]
Now if you run it, you will have the docker entrypoint on the host, which is the opposite direction to what you originally wanted:
docker run --name test -v $PWD/:/for_hostsystem/ --rm my_image

How to set a size limit on a folder mounted into a Docker container

I am running a script in a docker container which creates some files and logs information to them.
The command is:
docker run -t --name a6f97966d3a2552283df -v "/temp/a6f97966d3a2552283df":/usercode ubuntu_16_04:firsttry /usercode/script.sh
I want to limit the size of the folder which I have mounted using this command, because the log size may grow very large.
One solution may be to mount a virtual filesystem in the container, using the following commands:
mkdir -p /quota
mkdir -p /var/virtual_disks
# create a fixed-size ext3 image file (51200 x 512-byte blocks = 25 MB)
touch /var/virtual_disks/directory_with_size_limit.ext3
dd if=/dev/zero of=/var/virtual_disks/directory_with_size_limit.ext3 count=51200
mkfs.ext3 /var/virtual_disks/directory_with_size_limit.ext3
# loop-mount it; writes under /quota cannot exceed the image size
mount -o loop,rw,usrquota,grpquota /var/virtual_disks/directory_with_size_limit.ext3 /quota
It's working fine on my local system, but not in the container.
Is there any other way of achieving this?
It is now working fine. The host path mounted into the container should be the mount point of the virtual filesystem (/quota here).
So the modified command is:
docker run -t --name a6f97966d3a2552283df -v "/quota":/usercode ubuntu_16_04:firsttry /usercode/script.sh
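A quick way to confirm the cap, assuming the container is still running (container name taken from above):
docker exec a6f97966d3a2552283df df -h /usercode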

Why are my mounted docker volume files turning into folders inside the container?

The scenario is docker inside/beside docker via a sock binding for the purpose of having an easily deployable and scalable runner agent for C.I./C.D. tools (in this particular case, VSTS). The reason for this set up is that the various projects that I want to test use docker/compose to run tests, and configuring a C.I./C.D. worker to be compatible with docker/compose a bunch of times gets cumbersome and time consuming. (This'll eventually be deployed to 4+ Kubernetes Clusters)
Anyway, the problem:
Steps to replicate
Run the vsts-agent image
docker run \
-it \
-v /var/run/docker.sock:/var/run/docker.sock \
nullvoxpopuli/vsts-agent-with-aws-ecr:latest \
/bin/bash
Run another image (to emulate docker/compose running tests)
echo 'test' > test-file.txt
docker run -it -v file-test.txt:/file-test.txt busybox /bin/sh
Check for existence of test-file.txt
cd /
ls -la # shows that test-file.txt is a directory
So,
- why are files being mounted as folders inside containers?
- what do I need to do to make the volumes mount correctly?
Solution A - thanks to @BMitch
# On Host machine
docker run -it \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /tmp/vsts/work/:/tmp/vsts/work \
nullvoxpopuli/vsts-agent-with-aws-ecr:latest \
/bin/bash
# In vsts-agent-with-aws-ecr
cd /tmp/vsts/work/
git clone https://NullVoxPopuli@bitbucket.org/group/project.git
cd project/
./scripts/run/eslint.sh
# Success! (this uses docker-compose to map files to the node-based docker image)
Docker creates containers and mounts volumes from the docker host. Any time a file or directory in a volume mount doesn't exist, it gets initialized as an empty directory. So if you are running docker commands from inside of a container to the docker socket those commands get interpreted outside the container on the docker host, where the file doesn't exist. Additionally, the docker run command requires a full path to the volume being mounted when you want a host volume, otherwise it's interpreted as a named volume.
What you likely want to do at this point is:
docker volume rm file-test.txt
docker run -it -v $(pwd)/file-test.txt:/file-test.txt busybox /bin/sh
If instead you are trying to include a file from inside the container to another container, you can initialize a named volume with input redirection like this:
tar -cC . . | docker run -i --rm -v file-test:/target busybox tar -xC /target
docker run -it -v file-test:/data busybox /bin/sh
That uses tar to copy the contents of the current directory to stdout which is processed by the interactive docker command which then extracts those directory contents into /target inside the container which is a named volume. Note that I didn't mount the volume in root in this second example since named volumes are directories and I didn't want to replace the root filesystem.
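To double-check what landed in the named volume, a throwaway container works (a sketch using the names from above):
docker run --rm -v file-test:/target busybox ls -la /target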
Another option is to share a volume mount point between multiple containers on the docker host so that files you edit inside one container go to the host where they are mounted into the other container and visible there:
docker run \
-it \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /container-data:/container-data \
nullvoxpopuli/vsts-agent-with-aws-ecr:latest \
/bin/bash
echo 'test' > /container-data/test-file.txt
docker run -it -v /container-data:/container-data busybox /bin/sh
I don't recommend mounting individual files into a container if these files may be modified while the container is running. File changes often result in a changed inode and docker will have the old inode mounted into the container. As a result, changes either inside or outside of the container to the file may not be seen on the other side, and if you modify the file inside the container, that change may be lost when you delete the container. The solution to the inode issue is to mount the entire directory into the container.
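A minimal sketch of that directory-mount approach, assuming a hypothetical ./config directory holding the file:
mkdir -p config
echo 'test' > config/file-test.txt
# mount the directory, not the file; edits that replace the file's inode stay visible
docker run -it -v "$(pwd)/config":/config busybox /bin/sh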

How to copy files from dockerfile to host?

I want to get some files out when the Dockerfile builds successfully, but docker build doesn't copy files from the image to the host.
That is, once the Dockerfile has been built, I want the files to already be on the host.
This is now possible since Docker 19.03.0 in July 2019 introduced "custom build outputs". See the official docs about custom build outputs.
To enable custom build outputs from the build image into the host during the build process, you need to activate BuildKit, the newer, recommended, backwards-compatible builder back-end. See the official docs for enabling BuildKit.
This can be done in 2 ways:
Set the environment variable DOCKER_BUILDKIT=1, or
Set it in the docker engine by default by adding "features": { "buildkit": true } to the root of the config json.
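For example (the daemon.json path below assumes a standard Linux install of Docker):
# per invocation:
DOCKER_BUILDKIT=1 docker build .
# or enable it by default in /etc/docker/daemon.json, then restart the daemon:
# { "features": { "buildkit": true } }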
From the official docs about custom build outputs:
custom exporters allow you to export the build artifacts as files on the local filesystem instead of a Docker image, which can be useful for generating local binaries, code generation etc.
...
The local exporter writes the resulting build files to a directory on the client side. The tar exporter is similar but writes the files as a single tarball (.tar).
If no type is specified, the value defaults to the output directory of the local exporter.
...
The --output option exports all files from the target stage. A common pattern for exporting only specific files is to do multi-stage builds and to copy the desired files to a new scratch stage with COPY --from.
e.g. an example Dockerfile
FROM alpine:latest AS stage1
WORKDIR /app
RUN echo "hello world" > output.txt
FROM scratch AS export-stage
COPY --from=stage1 /app/output.txt .
Running
DOCKER_BUILDKIT=1 docker build --file Dockerfile --output out .
The tail of the output is:
=> [export-stage 1/1] COPY --from=stage1 /app/output.txt .    0.0s
=> exporting to client                                        0.1s
=> => copying files 45B                                       0.1s
This produces a local file out/output.txt that was created by the RUN command.
$ cat out/output.txt
hello world
All files are output from the target stage
The --output option will export all files from the target stage. So using a non-scratch stage with COPY --from will cause extraneous files to be copied to the output. The recommendation is to use a scratch stage with COPY --from.
Copying files "from the Dockerfile" to the host is not supported. The Dockerfile is just a recipe specifying how to build an image.
When you build, you have the opportunity to copy files from the host to the image you are building (with the COPY or ADD directives).
You can also copy files from a container (an image that has been docker run'd) to the host with docker cp (actually, docker cp can copy from the host to the container as well).
If you want to get back to your host some files that were generated during the build (for example, by a script that generates SSL certificates), you can run a container, mounting a folder from your host and executing cp commands.
See for example this getcrt script.
docker run -u root --entrypoint=/bin/sh --rm -i -v ${HOME}/b2d/apache:/apache apache << COMMANDS
pwd
cp crt /apache
cp key /apache
echo Changing owner from \$(id -u):\$(id -g) to $(id -u):$(id -u)
chown -R $(id -u):$(id -u) /apache/crt
chown -R $(id -u):$(id -u) /apache/key
COMMANDS
Everything between the COMMANDS markers is executed in the container, including the cp commands, which copy into the host's ${HOME}/b2d/apache folder, mounted within the container as /apache with -v ${HOME}/b2d/apache:/apache.
That means each time you copy anything to /apache in the container, you are actually copying into ${HOME}/b2d/apache on the host!
Although it's not directly supported via the Dockerfile functionality, you can copy files from a built docker image.
containerId=$(docker create example:latest)
docker cp "$containerId":/source/path /destination/path
docker rm "$containerId"
Perhaps you could mount a folder from the host as a volume and put the files there? Mounted volumes can usually be written to from within the container, unless you specify the read-only option while mounting.
docker run -v ${PWD}:/<somewritablepath> -w <somewritablepath> <image> <action>
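A concrete sketch of that pattern (the image name my-image and the artifact path /app/generated.txt are placeholders):
# copy a build artifact into a host-mounted folder
docker run -v "${PWD}/out":/out -w /out my-image sh -c 'cp /app/generated.txt .'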
On completion of a docker build using a Dockerfile, the following worked for me (where /var/lib/docker/ is the Docker Root Dir on my setup):
sudo find /var/lib/docker/ -name DESIRED_FILE -type f | xargs sudo ls -hrt | tail -1 | grep tgz | xargs -i sudo cp -v '{}' PATH_ON_HOST
You can also use the network. Starting with the script from this answer: https://stackoverflow.com/a/58255859/836450
I run this script on my host, in the folder where I want my output to go.
Inside my docker I have:
RUN apt-get install -y curl
RUN curl -F 'file=@/path/to/file' http://172.17.0.1:44444/
Note that, for me at least, 172.17.0.1 is always the IP of the host while building the Dockerfile.
