"No space left on device" when using dd to create a disk image - linux

I am trying to create a disk image of my Raspberry Pi Model 3 B+ onto a USB drive using dd. I know there are easier ways to do this on a Raspberry Pi, but I want to test the procedure on a 'sacrificial' system first, and then use it on another Linux computer running a much larger Ubuntu disk to create a backup. The OS is Raspbian Buster 10.
I have been following a procedure I found on an article here: https://www.makeuseof.com/tag/easily-clone-restore-linux-disk-image-dd/
The USB drive has a 64GB capacity and has been formatted, initially as exFAT, but I also tried NTFS thinking maybe that was the issue. The command ended with the same error each time; however, the amount of data transferred has varied between 2GB and 8GB before the error occurred.
This is to identify my drives - the SD card is "mmcblk" and my USB drive is "sda", called "NINJA":
pi@raspberrypi:~ $ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 1 57.9G 0 disk
└─sda1 8:1 1 57.9G 0 part
mmcblk0 179:0 0 14.9G 0 disk
├─mmcblk0p1 179:1 0 256M 0 part /boot
└─mmcblk0p2 179:2 0 14.6G 0 part /
This is the command I tried to use:
pi@raspberrypi:~ $ sudo dd bs=4M if=/dev/mmcblk0 of=/media/pi/NINJA/raspibackup.img
and this is the output:
dd: error writing '/media/pi/NINJA/raspibackup.img': No space left on device
605+0 records in
604+0 records out
2535124992 bytes (2.5 GB, 2.4 GiB) copied, 325.617 s, 7.8 MB/s

Check how much disk space is "Avail" on the target device.
Example:
[jack@server1 ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 484M 0 484M 0% /dev
tmpfs 496M 41M 456M 9% /dev/shm
tmpfs 496M 6.9M 489M 2% /run
tmpfs 496M 0 496M 0% /sys/fs/cgroup
/dev/mapper/centos-root 6.2G 6.2G 172K 100% /
/dev/sda1 1014M 166M 849M 17% /boot
tmpfs 100M 24K 100M 1% /run/user/1000
/dev/sr0 552M 552M 0 100% /run/media/jack/CentOS 7 x86_64
Terminology:
df: DiskFree
-h: Human Readable Sizes (Ex: 6.2G instead of 6485900)
In this example, let's say I want to make a backup of the Boot drive (/dev/sda1) and save it in my Local User Home Folder on my Root Drive (/dev/mapper/centos-root).
When I do this, I get an error that looks like:
[jack@server1 ~]$ sudo dd if=/dev/sda1 of=boot.img
dd: error writing 'boot.img': No space left on device
1736905+0 records in
1736904+0 records out
889294848 bytes (889 MB) copied, 4.76575 s, 187 MB/s
Terminology:
sudo: Super User Do
dd: Disk Duplicate
if: Input File (source)
of: Output File (destination)
The system is trying to copy ALL of /dev/sda1 (including free space) to boot.img, which is impossible at this point because /dev/sda1 is 1014M and there is only 172K of space left on /dev/mapper/centos-root.
That said, the actual size of /dev/sda is 16G in total, which means roughly 8G of it is not yet allocated to any logical volume.
My /dev/sda1 should be 1G and my /dev/sda2 (which holds centos-root) should be 15G, yet centos-root is currently only 6.2G.
[jack@server1 ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 16G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 15G 0 part
├─centos-root 253:0 0 6.2G 0 lvm /
└─centos-swap 253:1 0 820M 0 lvm [SWAP]
sr0 11:0 1 552M 0 rom /run/media/jack/CentOS 7 x86_64
This logical volume (and its filesystem) can be extended by doing the following:
[jack@server1 ~]$ sudo lvextend -L +8G /dev/mapper/centos-root
[jack@server1 ~]$ sudo xfs_growfs /dev/mapper/centos-root
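Before extending, it is worth confirming that the volume group actually has free extents to hand out, and that the grow command matches the filesystem (xfs_growfs is for XFS, the CentOS 7 default; an ext4 root would use resize2fs instead). A quick check, assuming the LVM layout shown above:
[jack@server1 ~]$ sudo vgs    # the VFree column shows unallocated space in the volume group
[jack@server1 ~]$ sudo lvs    # current sizes of the logical volumes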
Now that the volume is extended, I can do another DiskFree command to double-check.
[jack@server1 ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 484M 0 484M 0% /dev
tmpfs 496M 33M 463M 7% /dev/shm
tmpfs 496M 6.9M 489M 2% /run
tmpfs 496M 0 496M 0% /sys/fs/cgroup
/dev/mapper/centos-root 15G 7.0G 7.3G 49% /
/dev/sda1 1014M 166M 849M 17% /boot
tmpfs 100M 24K 100M 1% /run/user/1000
/dev/sr0 552M 552M 0 100% /run/media/jack/CentOS 7 x86_64
My root partition is now 15G! Now I can perform my backup of the /dev/sda1 partition!
[jack@server1 ~]$ sudo dd if=/dev/sda1 of=boot.img
2097152+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB) copied, 5.59741 s, 192 MB/s
Mission Complete!
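If extending the root filesystem is not an option, another common approach (not part of the answer above, just a sketch) is to compress the image as it is written, so it ends up smaller than the source partition:
[jack@server1 ~]$ sudo dd if=/dev/sda1 bs=4M | gzip -c > boot.img.gz    # compressed image of the partition
[jack@server1 ~]$ gunzip -c boot.img.gz | sudo dd of=/dev/sda1 bs=4M    # restore it later (with the partition unmounted)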

sda1 is not mounted at /media/pi/NINJA/, so the image you create is actually being stored on the mmcblk0p2 partition.
Since mmcblk0 is by definition larger than mmcblk0p2, you logically run out of space on it.
Solution :
You need to mount sda1 first, using sudo mount /dev/sda1 /media/pi/NINJA/, and then try your dd command again.
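Putting it together, the sequence would look roughly like this (same paths as in the question; status=progress is optional but helpful on a long copy):
pi@raspberrypi:~ $ sudo mount /dev/sda1 /media/pi/NINJA/
pi@raspberrypi:~ $ df -h /media/pi/NINJA    # confirm the ~58G USB partition is now the target
pi@raspberrypi:~ $ sudo dd bs=4M if=/dev/mmcblk0 of=/media/pi/NINJA/raspibackup.img status=progress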

Related

How to check the swap partition in Ubuntu

I found something strange. The result of free shows:
> free
total used free shared buff/cache available
Mem: 6060516 1258584 3614828 34340 1187104 4469908
Swap: 2097148 0 2097148
But the result of df:
> df -h
Filesystem Size Used Avail Use% Mounted on
udev 2.9G 0 2.9G 0% /dev
tmpfs 592M 1.9M 591M 1% /run
/dev/sda1 98G 85G 8.8G 91% /
tmpfs 2.9G 40K 2.9G 1% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 2.9G 0 2.9G 0% /sys/fs/cgroup
/dev/loop0 161M 161M 0 100% /snap/gnome-3-28-1804/116
...
/dev/loop18 2.5M 2.5M 0 100% /snap/gnome-calculator/730
tmpfs 592M 32K 592M 1% /run/user/1000
There is no swap partition ... I used the default configuration to build Ubuntu 18.04 in VMware.
> uname -a
Linux ubuntu 5.3.0-51-generic #44~18.04.2-Ubuntu SMP Thu Apr 23 14:27:18 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Is the swap partition enabled in the system?
The swap partition is not a file system, and as a consequence it is not displayed by df, which works only on file systems.
Instead you can use swapon:
$ swapon
NAME TYPE SIZE USED PRIO
/swap.img file 2G 0B -2
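Two other standard checks (not part of the answer above, just common commands) are reading /proc/swaps and looking at the Swap row of free:
$ cat /proc/swaps    # the kernel's list of active swap areas, whether files or partitions
$ free -h            # the "Swap:" row shows total, used and free swap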
Besides swapon, as suggested by pifor, you can also check your partitions, swap included, with the lsblk command:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 80G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 79G 0 part
├─centos-root 253:0 0 50G 0 lvm /
├─centos-swap 253:1 0 2G 0 lvm [SWAP]
└─centos-home 253:2 0 27G 0 lvm /home

(linux) gzip fails with No space left on device

I need to compress a directory on my Ubuntu server. The directory is about 3.2 GB, and I have 15 GB free out of the 20 GB available on my server.
I'm using the command: tar -zcvf test src_directory
The command fails with the message:
gzip: stdout: No space left on device
tar: test: Wrote only 6144 of 10240 bytes
tar: Child returned status 1
tar: Error is not recoverable: exiting now
Why is it failing when I have enough space on my server? (15 GB should be enough.)
Thanks.
EDIT
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 463M 0 463M 0% /dev
tmpfs 98M 2.2M 96M 3% /run
/dev/xvda1 20G 17G 2.2G 89% /
tmpfs 490M 0 490M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 490M 0 490M 0% /sys/fs/cgroup
tmpfs 24K 0 24K 0% /var/gandi
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 20G 0 disk
└─xvda1 202:1 0 20G 0 part /
xvdz 202:6400 0 512M 0 disk
├─xvdz1 202:6401 0 502M 0 part [SWAP]
└─xvdz2 202:6402 0 10M 0 part
With df, I can see there is only 2.2G left on the disk. How can I get details about what is taking up so much space? I know that my application files only take 4.6G.
Thanks.
Check whether you have enough disk space available on the partition where you want the archive to be stored:
$: df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 476G 300G 176G 63% /
Also check if you have the permissions necessary to archive the desired directory.
If the problem still persists, I would suggest a filesystem check:
$: lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 698.7G 0 disk
├─sda1 8:1 0 500M 0 part /boot
├─sda2 8:2 0 5.8G 0 part [SWAP]
├─sda3 8:3 0 50G 0 part /
├─sda4 8:4 0 1K 0 part
└─sda5 8:5 0 642.4G 0 part /home
# Find the device that matches the mountpoint of the directory in question
# Replace X with the appropriate device
# Replace Y with the appropriate partition
$: sudo fsck -vcck /dev/sdXY
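To answer the edit's question about what is actually using the space, a rough sketch (the starting directory and depth are just examples) is to walk the filesystem with du, staying on one filesystem:
$: sudo du -xh --max-depth=1 / | sort -h      # per-directory usage on the root filesystem, largest last
$: sudo du -xh --max-depth=1 /var | sort -h   # then drill into whichever directory turns out largest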

Setting up a swapfile in local SSD (temporary drive) in Azure VM

I'm using a DS4 Azure VM (Ubuntu 14.04). It comes with a 56GB local SSD.
I need to set up a 25GB swapfile in this local SSD. When I do df -h in the VM, I can see that it seems to be mapped to the /mnt/ folder. Following is the entire output:
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 29G 22G 6.4G 77% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 14G 4.0K 14G 1% /dev
tmpfs 2.8G 472K 2.8G 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 14G 0 14G 0% /run/shm
none 100M 0 100M 0% /run/user
none 64K 0 64K 0% /etc/network/interfaces.dynamic.d
/dev/sdb1 56G 97M 56G 1% /mnt
However, if I try to initialize a swapfile in /mnt, it still gets counted against the available disk space on /dev/sda1.
What do I need to do to set up my swap file? An illustrative example would be great. Thanks in advance.
I normally use the following commands to set up a swapfile:
sudo fallocate -l 25G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
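On an ordinary disk these commands would usually be followed by an /etc/fstab entry so the swapfile survives a reboot (a generic sketch, not specific to Azure; the resource-disk case is handled by the agent instead, as the answer below explains):
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab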
Update:
I went into /etc/waagent.conf and tweaked the following:
# Format if unformatted. If 'n', resource disk will not be mounted.
ResourceDisk.Format=y
# File system on the resource disk
# Typically ext3 or ext4. FreeBSD images should use 'ufs2' here.
ResourceDisk.Filesystem=ext4
# Mount point for the resource disk
ResourceDisk.MountPoint=/mnt
# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y
# Size of the swapfile.
ResourceDisk.SwapSizeMB=26000
After this, I resized (and consequently rebooted) my Azure VM from the portal. Currently I can't tell whether the settings have taken effect. Are my settings correct and what's the best way to ensure they've taken effect?
You are right, we should modify /etc/waagent.conf to add a swap file.
By modifying the /etc/waagent.conf file and setting the following three parameters, a swap file will be created in the directory defined by ResourceDisk.MountPoint:
ResourceDisk.Format=y  
ResourceDisk.EnableSwap=y    
ResourceDisk.SwapSizeMB=26000
Then we should restart walinuxagent:
service walinuxagent restart
Commands to show the new swap space in use after agent restart:
dmesg | grep swap
root@ubuntu:~# swapon -s
Filename Type Size Used Priority
/mnt/swapfile file 26623996 0 -1
root@ubuntu:~# df -Th
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 3.4G 12K 3.4G 1% /dev
tmpfs tmpfs 697M 412K 697M 1% /run
/dev/sda1 ext4 29G 869M 27G 4% /
none tmpfs 4.0K 0 4.0K 0% /sys/fs/cgroup
none tmpfs 5.0M 0 5.0M 0% /run/lock
none tmpfs 3.5G 0 3.5G 0% /run/shm
none tmpfs 100M 0 100M 0% /run/user
/dev/sdb1 ext4 99G 26G 68G 28% /mnt
I resized (and consequently rebooted) my Azure VM from the portal
I resized my VM too, and the swap file was not lost.
Are my settings correct and what's the best way to ensure they've
taken effect?
After modifying /etc/waagent.conf and restarting walinuxagent, we can use swapon -s to check it.

How does docker map host partitions?

I'm relatively new to docker, and when I started a container (an ubuntu base image), I noticed the following:
On the host,
$ df -h
...
/dev/sdc1 180M 98M 70M 59% /boot
/dev/sdc2 46G 20G 24G 46% /home
/dev/sdc5 37G 7.7G 27G 23% /usr
/dev/sdc6 19G 13G 5.3G 70% /var
$ lsblk
...
sdc 8:32 0 232.9G 0 disk
├─sdc1 8:33 0 190M 0 part /boot
├─sdc2 8:34 0 46.6G 0 part /home
├─sdc3 8:35 0 18.6G 0 part /
├─sdc4 8:36 0 1K 0 part
├─sdc5 8:37 0 37.3G 0 part /usr
├─sdc6 8:38 0 18.6G 0 part /var
├─sdc7 8:39 0 29.8G 0 part [SWAP]
└─sdc8 8:40 0 42.8G 0 part
On the container
$ df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 19G 13G 5.3G 70% /
none 19G 13G 5.3G 70% /
tmpfs 7.8G 0 7.8G 0% /dev
shm 64M 0 64M 0% /dev/shm
/dev/sdc6 19G 13G 5.3G 70% /etc/hosts
tmpfs 7.8G 0 7.8G 0% /proc/kcore
tmpfs 7.8G 0 7.8G 0% /proc/latency_stats
tmpfs 7.8G 0 7.8G 0% /proc/timer_stats
$ lsblk
sdc 8:32 0 232.9G 0 disk
|-sdc1 8:33 0 190M 0 part
|-sdc2 8:34 0 46.6G 0 part
|-sdc3 8:35 0 18.6G 0 part
|-sdc4 8:36 0 1K 0 part
|-sdc5 8:37 0 37.3G 0 part
|-sdc6 8:38 0 18.6G 0 part /var/lib/cassandra
|-sdc7 8:39 0 29.8G 0 part [SWAP]
`-sdc8 8:40 0 42.8G 0 part
Question 1: why is sdc6 mounted on different places between the host and the container?
The contents of the two mount points are different, so I assume Docker must have done some kind of device mapping in the container, and that sdc6 in the container isn't the same as the one on the host. However, the partition capacity and usage are the same, so I'm confused here.
Question 2: why is the container's root dir usage so high? The docker image doesn't have much stuff on it.
Thanks for any help.
Addition
The Dockerfile has a line
VOLUME /var/lib/cassandra
Question 1: why is sdc6 mounted on different places between the host and the container?
/dev/sdc6 on your host is /var, which is where /var/lib/docker resides and where Docker keeps certain data, such as the hosts file allocated to your container.
The hosts file is exposed as a bind mount inside the container, which is why you see:
/dev/sdc6 19G 13G 5.3G 70% /etc/hosts
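You can confirm where that hosts file lives on the host with docker inspect (the container name here is just a placeholder):
$ docker inspect --format '{{ .HostsPath }}' my_container    # typically /var/lib/docker/containers/<id>/hosts, i.e. on /var (/dev/sdc6)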
Question 2: why is the container's root dir usage so high? The docker image doesn't have much stuff on it.
Take a look at the df output inside the container:
rootfs 19G 13G 5.3G 70% /
Now look at the df output on your host, and you'll see:
/dev/sdc6 19G 13G 5.3G 70% /var
The df inside the container is reflecting the state of the host filesystem. This suggests that you are using the aufs or overlay storage driver, both of which create "overlay" filesystems for containers on top of the host filesystem. The output from df would look different if you were using the devicemapper storage driver, which relies on device mapper block devices instead of overlay filesystems.
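You can check which storage driver the daemon is actually using (a quick check; the exact name will vary by setup):
$ docker info | grep -i 'storage driver'    # e.g. "Storage Driver: aufs" or "Storage Driver: overlay2"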

Number of inodes in a partition not matching up to the maximum number of inodes the partition should support

We are using Amazon EBS to store a large number of small files (<10KB) in a 3-level directory structure.
~/lists# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 9.9G 3.9G 5.5G 42% /
tmpfs 854M 0 854M 0% /lib/init/rw
varrun 854M 64K 854M 1% /var/run
varlock 854M 0 854M 0% /var/lock
udev 854M 80K 854M 1% /dev
tmpfs 854M 0 854M 0% /dev/shm
/dev/sda2 147G 80G 60G 58% /mnt
/dev/sdj 197G 60G 128G 32% /vol
The partition in question is /vol (size: 200GB)
~/lists# df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda1 655360 26541 628819 5% /
tmpfs 186059 3 186056 1% /lib/init/rw
varrun 186059 31 186028 1% /var/run
varlock 186059 2 186057 1% /var/lock
udev 186059 824 185235 1% /dev
tmpfs 186059 1 186058 1% /dev/shm
/dev/sda2 19546112 17573097 1973015 90% /mnt
/dev/sdj 13107200 13107200 0 100% /vol
~/lists# sudo /sbin/dumpe2fs /dev/sdj | grep "Block size"
dumpe2fs 1.41.4 (27-Jan-2009)
Block size: 4096
The number of inodes for the partition /vol is 13 million+. The block size is 4096. Taking the block size as 4096, the number of inodes the 200GB partition (ext3) should support is 52 million+ (maximum inode calculation: volume size in bytes / 2^12). So why does the partition only support 13 million inodes?
I'm pretty sure that inodes are allocated statically when you create the volume (using mkfs.ext3 in this case). For whatever reason, mkfs.ext3 decided to reserve 13 million inodes and now you can't create any more files.
See this 2001 discussion of inodes
The Wikipedia ext3 page has a footnote explaining this more concisely: wiki link
Also, inodes are allocated per file (not per block), which is why there are only 13M inodes: mkfs.ext3 must have used a bytes-per-inode ratio of about 16 KB (200GB / 16KB ≈ 13 million), which would account for the issue you're seeing.
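If you need more inodes, the ratio has to be chosen at mkfs time; it cannot be changed on an existing ext3 filesystem without reformatting (which destroys the data). A sketch using the device from the question:
~/lists# sudo dumpe2fs -h /dev/sdj | grep -i 'inode count'    # confirm the current inode count
~/lists# sudo mkfs.ext3 -i 4096 /dev/sdj                      # reformat with one inode per 4 KB -- destroys all data on /vol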
