How to unpack a file on one docker volume into another? - linux

I have two questions:
What command can I use to move a file to another Docker volume?
What command can I use to extract a file into another volume?
I have Docker running on a VPS with 160GB Disk space.
I downloaded a snapshot .tar file on that VPS and the next step would be to unpack it. However, because the unpacked file is 88GB, I added an additional volume with 100GB to my droplet.
My plan is to move that .tar file to the 100GB volume and then unpack it back onto the main 160GB volume.
This would be the code to unpack the file:
cd /tmp
and then:
sudo tar xvC /var/lib/docker/volumes/NAME_OF_YOUR_VOLUME/_data/data/tomo/ -f 20190617.tar
But I am a newbie and I don't understand that command, and I don't know how it works when you have two volumes.

This is how I solved it:
Find the new volume: fdisk -l
Create a new directory and then mount the volume on it: sudo mount /dev/something /new/dir
Extract the .tar into that mounted directory: sudo tar xvC /new/dir -f 20190617.tar
Move it to the Docker volume (after making room by deleting the .tar): cp -R /var/lib/docker/volumes/...
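Putting these steps together, a minimal sketch could look like the following. The device name (/dev/sda), mount point and archive path are placeholders for this example; for reference, tar xvC <dir> -f <file> extracts (x) verbosely (v) into the directory given after C, reading from the archive named with -f.
# Identify the new block device (the name /dev/sda below is only a placeholder)
sudo fdisk -l
# Mount the extra 100GB volume on a fresh directory
sudo mkdir -p /mnt/extra
sudo mount /dev/sda /mnt/extra
# Move the archive onto the extra volume to free space on the main disk
sudo mv /tmp/20190617.tar /mnt/extra/
# Extract straight into the Docker volume on the main disk
sudo tar xvC /var/lib/docker/volumes/NAME_OF_YOUR_VOLUME/_data/data/tomo/ -f /mnt/extra/20190617.tar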

Related

Copying files to an unmounted ext4 image file

With FAT32 images, copying files to an unmounted image is possible with the mcopy command:
mcopy -v -i [image file] -s [files to copy] ::/
However, this is not possible with an ext4-based image file. Is there an equivalent command to achieve the same result for an ext4-based image file? Mounting is not an option because the image build tool being used (Yocto) does not have sudo access.
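For what it's worth, one approach sometimes used for ext4 images is debugfs from e2fsprogs, which can write files into an unmounted image without root. This is only a sketch; the image name, target path and file name are illustrative:
# Create the target directory inside the image, then write a local file into it (no mount, no sudo)
debugfs -w -R "mkdir /files" image.ext4
debugfs -w -R "write ./local-file /files/local-file" image.ext4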

Increasing Amazon Linux 2 AMI /tmp size

I'm trying to install tensorflow on an Amazon Linux AMI EC2 micro instance, but I keep getting EnvironmentError: [Errno 28] No space left on device even though the disk is empty.
On Ubuntu Server, I fix this by increasing the /tmp size with the command sudo mount -o remount,size=4G,noatime /tmp; however, this command fails on Amazon Linux, telling me that /tmp is not mounted at all.
How can I increase /tmp size on Amazon Linux 2?
Thx!
The following can help in getting more /tmp space.
cd # change to your home directory
fallocate -l 20G mydrive.img # create the virtual drive file
mkfs -t ext3 -F mydrive.img # format the virtual drive (-F avoids the prompt about mydrive.img not being a block device)
sudo umount /tmp # unmount /tmp if it is currently a separate mount (skip this step otherwise)
sudo mount -t auto -o loop mydrive.img /tmp # mount the virtual drive
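To confirm the new size, restore the usual /tmp permissions, and optionally make the mount survive a reboot, something like the following should work; the fstab line and image path are assumptions about the setup:
df -h /tmp # should now report roughly 20G
sudo chmod 1777 /tmp # a freshly formatted filesystem is root-owned, so restore the sticky, world-writable mode /tmp needs
# Optional, to persist across reboots (image path is illustrative):
echo '/home/ec2-user/mydrive.img /tmp ext3 loop 0 0' | sudo tee -a /etc/fstab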

How to deploy files to /boot partition with Yocto

I'm trying to deploy some binary files to /boot in a Yocto image for RPi CM3 but it deploys them to the wrong location.
do_install() {
    install -d ${D}/boot/overlays
    install -m 0664 ${WORKDIR}/*.dtb ${D}/boot/overlays/
    install -m 0664 ${WORKDIR}/*.dtbo ${D}/boot/overlays/
}
The files are deployed to /boot in the / partition of the final image, but not to the /boot partition. So they are not available at boot time.
I already googled and studied the kernel recipes (and classes) of the Poky distribution, but I didn't find the mechanism they use to ensure that the files are deployed to the boot image (and not to the /boot directory in the root image).
Any help is appreciated :)
Update #1
In my local.conf I did:
IMAGE_BOOT_FILES_append = " \
overlays/3dlab-nano-player.dtbo \
overlays/adau1977-adc.dtbo \
...
"
And in my rpi3-overlays.bb
do_deploy() {
    install -d ${DEPLOYDIR}/${PN}
    install -m 0664 ${WORKDIR}/*.dtb ${DEPLOYDIR}/${PN}
    install -m 0664 ${WORKDIR}/*.dtbo ${DEPLOYDIR}/${PN}
    touch ${DEPLOYDIR}/${PN}/${PN}-${PV}.stamp
}
With this the image builds, but the files still don't get deployed to the /boot partition.
Using RPI_KERNEL_DEVICETREE_OVERLAYS I get a build error because the kernel recipe tries to build the dtbo files as if they were dts files.
RPi images are created with the sdimage-raspberrypi.wks WIC kickstart file. It contains:
part /boot --source bootimg-partition ...
so it uses the bootimg-partition.py WIC plugin to generate the /boot partition. That plugin copies every file defined by the IMAGE_BOOT_FILES variable.
It seems you want to add some devicetree overlays, so you need to modify the machine configuration, more specifically the RPI_KERNEL_DEVICETREE_OVERLAYS variable. The IMAGE_BOOT_FILES variable is set in rpi-base.inc.
If you don't have any custom machine or custom distro defined, you can add it in local.conf:
RPI_KERNEL_DEVICETREE_OVERLAYS_append = " <deploy-path>/<dto-path>"
You can see here how to add files to the deploy directory.
After too many hours of investigation it turned out that deploying files to partitions other than / is not easily possible. I went with a post-processing script that mounts the final image, deploys the additional files and unmounts it again.
# Ensure the first loopback device is free to use
sudo -n losetup -d /dev/loop0 || true
# Create a loopback device for the given image
sudo -n losetup -Pf ../deploy/images/bapi/ba.rootfs.rpi-sdimg
# Mount the loopback device
mkdir -p tmp
sudo -n mount /dev/loop0p1 tmp
# Deploy files
sudo -n cp -n ../../meta-ba-rpi-cm3/recipes-core/rpi3-overlays/files/* tmp/overlays/
sudo -n cp ../../conf/config.txt tmp/config.txt
sudo -n cp ../../conf/cmdline.txt tmp/cmdline.txt
# Unmount the image and free the loopback device
sudo -n umount tmp
sudo -n losetup -d /dev/loop0
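A small variation (an untested sketch) avoids hard-coding /dev/loop0 by letting losetup pick a free device and report it via --show:
# Attach the image and capture the loop device that losetup assigned
LOOPDEV=$(sudo -n losetup --show -Pf ../deploy/images/bapi/ba.rootfs.rpi-sdimg)
mkdir -p tmp
sudo -n mount "${LOOPDEV}p1" tmp
# ... deploy files as above ...
sudo -n umount tmp
sudo -n losetup -d "$LOOPDEV"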

How to make an incremental backup with rsync from ext4 to xfs network drive?

I'm on Ubuntu 14.04.
I'm trying to make an incremental backup of some files from my Ubuntu HD (ext4) to a Buffalo network HD (XFS).
My script mounts the Buffalo HD with this command :
sudo mount.cifs //192.168.1.12/Sauvegardes /mnt/Sauvegardes -o username=myusername,password=mypassword
After the disk is mounted, I use rsync, trying to make an incremental backup with --link-dest. Each day, when the script is launched, the folder names change according to the current date. Here is an example when the script is launched on 2017-03-09; it should check against the 2017-03-08 backup whether files already exist:
sudo rsync -arR --link-dest="/mnt/Sauvegardes/racine_2017-03-08" --timeout=30 /home/flooder/Sauvegardes/ /mnt/Sauvegardes/racine_2017-03-09/
The problem: rsync doesn't seem to check against the --link-dest destination. It copies all the files every day, so the disk will fill up quickly and the backup takes a very long time each day...
Would you have an idea for me?
Should I mount the network drive another way?
Do I have the right rsync command?
I mounted my network disk with this line instead, and it works well now. If a file already exists in the --link-dest directory, only a hard link is created. The second pass is very quick!
sudo mount -t cifs //192.168.1.12/Sauvegardes /mnt/Sauvegardes -o username=myusername,password=mypassword,uid=1000,gid=1000
uid and gid are those of my logged-in user.
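To check that the hard-linking actually works, something like the following can help (the file name is a placeholder; the paths follow the example above, since rsync -R reproduces the full source path under the destination):
# A link count greater than 1 means the file is shared with a previous day's backup
stat -c %h /mnt/Sauvegardes/racine_2017-03-09/home/flooder/Sauvegardes/somefile
# du counts hard-linked files only once, so the combined total should be far below the sum of the parts
du -sh /mnt/Sauvegardes/racine_2017-03-*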

docker container size much greater than actual size

I am trying to build an image from debian:latest. After the build, the reported virtual size of the image from the docker images command is 1.917 GB. I logged in to check the size (du -sh /) and it's 573 MB. I am pretty sure that this huge size is not possible normally. What is going on here? How do I get the correct size of the image? More importantly, when I push this repository the size is 1.9 GB and not 573 MB.
Output of du -sh /*
8.9M /bin
4.0K /boot
0 /dev
1.1M /etc
4.0K /home
30M /lib
4.0K /lib64
4.0K /media
4.0K /mnt
4.0K /opt
du: cannot access '/proc/11/task/11/fd/4': No such file or directory
du: cannot access '/proc/11/task/11/fdinfo/4': No such file or directory
du: cannot access '/proc/11/fd/4': No such file or directory
du: cannot access '/proc/11/fdinfo/4': No such file or directory
0 /proc
427M /root
8.0K /run
3.9M /sbin
4.0K /srv
0 /sys
8.0K /tmp
88M /usr
15M /var
Do you build that image via a Dockerfile? If so, take care with your RUN statements. Every RUN statement creates a new image layer, which remains in the image's history and counts towards the image's total size.
So for instance, if one RUN statement downloads a huge archive file, the next one unpacks that archive, and a later one deletes the archive again, the archive and its extracted files still remain in the image's history:
RUN curl <options> http://example.com/my/big/archive.tar.gz
RUN tar xvzf <options>
RUN <do whatever you need to do with the unpacked files>
RUN rm archive.tar.gz
It is more efficient in terms of image size to combine multiple steps into one RUN statement using the && operator, like:
RUN curl <options> http://example.com/my/big/archive.tar.gz \
&& tar xvzf <options> \
&& <do whatever you need to do with the unpacked files> \
&& rm archive.tar.gz
That way you can clean up files and folders that you need during the build but not in the resulting image, and keep them out of the image's history as well. That is quite a common pattern to keep image sizes small.
But of course you then lose the fine-grained image history that you could otherwise reuse.
Update:
Like RUN statements, ADD statements also create new image layers. Whatever you add to an image that way stays in the history and counts towards the total image size. You cannot temporarily ADD things and then remove them so that they do not count towards the total size.
Try to ADD as little as possible to the image, especially when you work with large files. Are there other ways to fetch those files temporarily within a RUN statement, so that you can clean them up during the same RUN execution? E.g. RUN git clone <your repo> && <do stuff> && rm -rf <clone dir>?
A good practice would be to ADD only those things that are meant to stay in the image. Temporary things should be added and cleaned up within a single RUN statement instead, where possible.
The 1.9GB size is not just the image, it's the image and its history. Use docker history to check what takes so much space.
See also Why are Docker container images so large?
To reduce the size, you can change the way you build the image (it will depend on what you do; see the answers at the link above), use docker export (see How to flatten a Docker image?) or use other tools.
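For reference, a quick way to inspect the layers and to flatten an image; the image and container names below are illustrative:
# Show each layer of the image and how much space it adds
docker history my-debian-image
# Flatten: export a container's filesystem and re-import it as a single-layer image
docker create --name tmp-flatten my-debian-image
docker export tmp-flatten | docker import - my-debian-image:flat
docker rm tmp-flatten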
