OSError: [Errno 28] No space left on device in Amazon EC2 instance - python-3.x

I am downloading some files from S3 to my EC2 instance. I got the error OSError: [Errno 28] No space left on device, so I created a new 100 GiB volume and attached it to the EC2 instance, but the same error persists, as if the new volume is not being used.
I am using an Amazon Linux image.
lsblk:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdb 202:16 0 8G 0 disk
xvdf 202:80 0 100G 0 disk
df -h:
Filesystem Size Used Avail Use% Mounted on
devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 2.0G 420K 2.0G 1% /run
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/xvda1 8.0G 8.0G 63M 100% /
tmpfs 393M 0 393M 0% /run/user/1000
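The lsblk output shows why: the new volume (xvdf) has no mount point, so nothing is written to it, and / on xvda1 is still 100% full. Attaching an EBS volume only makes the block device visible; you still have to create a filesystem on it and mount it. A minimal sketch, assuming the device really is /dev/xvdf as shown by lsblk above (/data is an arbitrary mount point chosen for this example, and mkfs destroys any data already on the volume):

sudo file -s /dev/xvdf        # output "data" means no filesystem exists yet
sudo mkfs -t xfs /dev/xvdf    # or ext4
sudo mkdir -p /data
sudo mount /dev/xvdf /data
df -h /data                   # verify the 100G volume is now usable

The S3 download then needs to target a path under /data instead of a directory on the full root filesystem.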

Related

How to use SDB when sda is full

I am a beginner to Linux, but I need to install a blockchain node on an Ubuntu server (4T SSD).
sda6 is only 500G and sdb is 3.5T, so I have to use sdb when sda is full.
root@UK270-1G:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 32G 0 32G 0% /dev
tmpfs 6.3G 1.4M 6.3G 1% /run
/dev/sda6 437G 2.2G 413G 1% /
tmpfs 32G 0 32G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/sda5 1.9G 81M 1.8G 5% /boot
tmpfs 6.3G 0 6.3G 0% /run/user/0
root@UK270-1G:~#
sdb is not mounted yet. For my problem, I want to know the underlying principle, and I need detailed instructions since I am a beginner with Linux.
Thanks in advance.
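The underlying principle: Linux never spills data from a full disk onto another one automatically. Each filesystem is attached (mounted) at a directory, and programs only use the new disk if their files live under that directory. So sdb needs a filesystem, a mount point, and the blockchain node must be pointed at that path. A minimal sketch, assuming the whole disk is used without partitioning and /mnt/data is a placeholder mount point (mkfs destroys anything already on sdb):

sudo mkfs -t ext4 /dev/sdb
sudo mkdir -p /mnt/data
sudo mount /dev/sdb /mnt/data

# make the mount survive reboots
echo '/dev/sdb /mnt/data ext4 defaults 0 2' | sudo tee -a /etc/fstab

The node's data directory would then be configured to live under /mnt/data, so the 3.5T disk is used from the start rather than waiting for sda to fill up.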

AWS AMI cannot execute file from attached volume

I have created an AMI with two volumes attached as follows:
[ec2-user@ip-192-***** ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:0 0 100G 0 disk
├─nvme0n1p1 259:1 0 100G 0 part /
└─nvme0n1p128 259:2 0 1M 0 part
nvme2n1 259:3 0 320G 0 disk
├─hardenedpartitions-tmp 253:0 0 25G 0 lvm /var/tmp
├─hardenedpartitions-home 253:1 0 25G 0 lvm /home
├─hardenedpartitions-var 253:2 0 35G 0 lvm /var
├─hardenedpartitions-varlog 253:3 0 25G 0 lvm /var/log
└─hardenedpartitions-varlogaudit 253:4 0 16G 0 lvm /var/log/audit
nvme1n1 259:4 0 320G 0 disk
[root@ip-192-**** ec2-user]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 3.8G 0 3.8G 0% /dev
tmpfs 3.8G 0 3.8G 0% /dev/shm
tmpfs 3.8G 520K 3.8G 1% /run
tmpfs 3.8G 0 3.8G 0% /sys/fs/cgroup
/dev/nvme0n1p1 100G 1.7G 99G 2% /
/dev/mapper/hardenedpartitions-home 25G 436M 23G 2% /home
/dev/mapper/hardenedpartitions-var 35G 407M 33G 2% /var
/dev/mapper/hardenedpartitions-tmp 25G 64K 24G 1% /tmp
/dev/mapper/hardenedpartitions-varlog 25G 42M 24G 1% /var/log
/dev/mapper/hardenedpartitions-varlogaudit 16G 880K 15G 1% /var/log/audit
tmpfs 774M 0 774M 0% /run/user/1000
tmpfs 774M 0 774M 0% /run/user/0
I am trying to boot an instance from this AMI in OpsWorks, but it gets stuck during boot in OpsWorks (EC2 still shows that it starts). After SSHing into the instance and inspecting the logs in /var/log/aws/opsworks/ I see the following:
[Tue, 28 Jun 2022 14:44:13 +0000] opsworks-init: Starting: Download Installer.
/var/lib/cloud/instance/scripts/part-002: line 433: /tmp/opsworks-agent-downloader.sh: Permission denied
Then doing something like this does not work:
[root@ip-192-**** ec2-user]# chmod 777 /tmp/opsworks-agent-downloader.sh
[root@ip-192-**** ec2-user]# ls -la /tmp/opsworks-agent-downloader.sh
-rwxrwxrwx 1 root root 7045 Jun 28 14:44 /tmp/opsworks-agent-downloader.sh
[root@ip-**** ec2-user]# /tmp/opsworks-agent-downloader.sh
bash: /tmp/opsworks-agent-downloader.sh: Permission denied
Any ideas why I cannot run this file as root from the attached volume?
So the problem was with the way the volume had been mounted on the instance.
Specifically, the line that had been added to the /etc/fstab file, something like this:
mount /dev/hardenedpartitions/tmp ..... noexec .....
The noexec option specifies that no files on that filesystem can be executed, even if you have the correct permissions. Removing it allowed the instance to boot in OpsWorks.
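For illustration, a sketch of what the fix might look like, assuming the LVM volume from the df output above; the fstab fields other than the device and mount point are placeholders:

# /etc/fstab: drop noexec from the /tmp entry (remaining options illustrative)
/dev/mapper/hardenedpartitions-tmp /tmp ext4 defaults,nosuid,nodev 0 2

Until the next reboot, the flag can also be cleared in place with a remount:

sudo mount -o remount,exec /tmp

Note that hardening guides add noexec to /tmp on purpose; OpsWorks simply cannot work with it, because its installer downloads the agent bootstrap script to /tmp and executes it from there.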

How to check the swap partition in Ubuntu

I found a strange thing, the result of free shows:
> free
total used free shared buff/cache available
Mem: 6060516 1258584 3614828 34340 1187104 4469908
Swap: 2097148 0 2097148
But the result of df:
> df -h
Filesystem Size Used Avail Use% Mounted on
udev 2.9G 0 2.9G 0% /dev
tmpfs 592M 1.9M 591M 1% /run
/dev/sda1 98G 85G 8.8G 91% /
tmpfs 2.9G 40K 2.9G 1% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 2.9G 0 2.9G 0% /sys/fs/cgroup
/dev/loop0 161M 161M 0 100% /snap/gnome-3-28-1804/116
...
/dev/loop18 2.5M 2.5M 0 100% /snap/gnome-calculator/730
tmpfs 592M 32K 592M 1% /run/user/1000
There is no swap partition ... I used the default config to build Ubuntu 18.04 in VMware.
> uname -a
Linux ubuntu 5.3.0-51-generic #44~18.04.2-Ubuntu SMP Thu Apr 23 14:27:18 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Is the swap partition enabled in the system?
A swap partition is not a file system and, as a consequence, is not displayed by df, which works only on file systems.
Instead you can use swapon:
$ swapon
NAME TYPE SIZE USED PRIO
/swap.img file 2G 0B -2
Other than swapon, suggested by pifor, you can also check your partitions, swap included, with the lsblk command:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 80G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 79G 0 part
├─centos-root 253:0 0 50G 0 lvm /
├─centos-swap 253:1 0 2G 0 lvm [SWAP]
└─centos-home 253:2 0 27G 0 lvm /home
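Two more standard cross-checks: /proc/swaps lists every active swap device or file, and /etc/fstab shows what is activated at boot:

$ cat /proc/swaps
$ grep swap /etc/fstab

Note that in the swapon output above the TYPE column says file: a default Ubuntu 18.04 install uses a 2G swap file (/swap.img) rather than a swap partition, which is another reason neither df nor a partition listing will show it.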

How to combine two partitions in CentOS

When I run the command df it shows:
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 40G 38G 135M 100% /
devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 2.0G 17M 2.0G 1% /run
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/vdc1 99G 60M 94G 1% /mnt/onapp-disk
tmpfs 395M 0 395M 0% /run/user/0
Now when I try to install GNOME Desktop (size 1.5 GB) using the command
yum groupinstall "GNOME DESKTOP"
it prompts an error:
Disk Requirements:
At least 1496MB more space needed on the / filesystem.
It is not using the vdc1 partition.
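That is expected: yum installs onto whatever filesystem each target path lives on, and /usr, /var, etc. are all on the full / (vda1); the vdc1 filesystem is only used for files placed under /mnt/onapp-disk. One hedged workaround for the download side, assuming /etc/yum.conf is the standard CentOS config file and /mnt/onapp-disk/yum-cache is a directory made up for this sketch:

sudo mkdir -p /mnt/onapp-disk/yum-cache
# in /etc/yum.conf, point the package cache at the large disk:
# cachedir=/mnt/onapp-disk/yum-cache/$basearch/$releasever
sudo yum groupinstall "GNOME DESKTOP"

This only relocates the downloaded packages; the installed files still land on /, so roughly 1.5 GB must still be freed (or moved and bind-mounted) on the root filesystem before the group install can finish.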

How does docker map host partitions?

I'm relatively new to Docker, and when I started a container (an Ubuntu base image), I noticed the following:
On the host,
$ df -h
...
/dev/sdc1 180M 98M 70M 59% /boot
/dev/sdc2 46G 20G 24G 46% /home
/dev/sdc5 37G 7.7G 27G 23% /usr
/dev/sdc6 19G 13G 5.3G 70% /var
$ lsblk
...
sdc 8:32 0 232.9G 0 disk
├─sdc1 8:33 0 190M 0 part /boot
├─sdc2 8:34 0 46.6G 0 part /home
├─sdc3 8:35 0 18.6G 0 part /
├─sdc4 8:36 0 1K 0 part
├─sdc5 8:37 0 37.3G 0 part /usr
├─sdc6 8:38 0 18.6G 0 part /var
├─sdc7 8:39 0 29.8G 0 part [SWAP]
└─sdc8 8:40 0 42.8G 0 part
On the container
$ df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 19G 13G 5.3G 70% /
none 19G 13G 5.3G 70% /
tmpfs 7.8G 0 7.8G 0% /dev
shm 64M 0 64M 0% /dev/shm
/dev/sdc6 19G 13G 5.3G 70% /etc/hosts
tmpfs 7.8G 0 7.8G 0% /proc/kcore
tmpfs 7.8G 0 7.8G 0% /proc/latency_stats
tmpfs 7.8G 0 7.8G 0% /proc/timer_stats
$ lsblk
sdc 8:32 0 232.9G 0 disk
|-sdc1 8:33 0 190M 0 part
|-sdc2 8:34 0 46.6G 0 part
|-sdc3 8:35 0 18.6G 0 part
|-sdc4 8:36 0 1K 0 part
|-sdc5 8:37 0 37.3G 0 part
|-sdc6 8:38 0 18.6G 0 part /var/lib/cassandra
|-sdc7 8:39 0 29.8G 0 part [SWAP]
`-sdc8 8:40 0 42.8G 0 part
Question 1: why is sdc6 mounted in different places between the host and the container?
The contents of the two mount points are different, so I assume Docker must have done some kind of device mapping on the container, meaning sdc6 in the container isn't the same as the one on the host. However, the partition capacity and usage are the same, so I'm confused here.
Question 2: why is the container's root dir usage so high? The docker image doesn't have much stuff on it.
Thanks for any help.
Addition
The Dockerfile has a line
VOLUME /var/lib/cassandra
Question 1: why is sdc6 mounted in different places between the host and the container?
/dev/sdc6 on your host is /var, which is where /var/lib/docker resides and where Docker keeps certain data, such as the hosts file allocated to your container.
The hosts file is exposed as a bind mount inside the container, which is why you see:
/dev/sdc6 19G 13G 5.3G 70% /etc/hosts
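You can confirm this from the host: docker inspect exposes the host-side path of a container's hosts file, which lives under /var/lib/docker (my-container is a placeholder name, and the hash in the output is abbreviated):

$ docker inspect --format '{{.HostsPath}}' my-container
/var/lib/docker/containers/3f4a.../hosts

Since /var/lib/docker sits on the host's /var partition (sdc6), bind-mounting that file into the container is exactly why /dev/sdc6 shows up against /etc/hosts in the container's df.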
Question 2: why is the container's root dir usage so high? The docker image doesn't have much stuff on it.
Take a look at the df output inside the container:
rootfs 19G 13G 5.3G 70% /
Now look at the df output on your host, and you'll see:
/dev/sdc6 19G 13G 5.3G 70% /var
The df inside the container is reflecting the state of the host filesystem. This suggests that you are using the aufs or overlay storage driver, both of which create "overlay" filesystems for containers on top of the host filesystem. The output from df would look different if you were using the devicemapper storage driver, which relies on device-mapper block devices instead of overlay filesystems.
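To check which storage driver the daemon is actually using, docker info reports it directly (overlay2 is just an example result):

$ docker info --format '{{.Driver}}'
overlay2

With aufs/overlay the container's / is a thin writable layer stacked on directories under /var/lib/docker, so df reports the size and usage of the backing host filesystem (/var on sdc6 here), not the amount of data the container itself has written.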
