Hello, I have a Raspberry Pi with an 8GB SD card on which I have installed Arch Linux.
Now I was curious how much space I have used so far, after installing all the packages I needed for a private dev server. This is the result:
Filesystem Size Used Avail Use% Mounted on
/dev/root 1.7G 899M 690M 57% /
devtmpfs 83M 0 83M 0% /dev
tmpfs 231M 0 231M 0% /dev/shm
tmpfs 231M 256K 231M 1% /run
tmpfs 231M 0 231M 0% /sys/fs/cgroup
tmpfs 231M 176K 231M 1% /tmp
/dev/mmcblk0p1 90M 9.0M 81M 10% /boot
As you can see, it shows me only 1.7G instead of roughly 8G. I think this is because I installed it once on the SD card, but after I messed something up I tried again. Could it be that the old installation is still on the SD card? How can I check, and delete it if that's the case? Or is this normal?
Thanks in advance
Beware: Arch has moved to a slightly different partitioning scheme, which will cause the above to fail.
See this blog post for details, but the short version is that there are now three partitions. p02 is an extended partition containing the p05 logical partition.
so, in fdisk:
d 2             (delete partition 2, the extended partition)
n e 2 <cr><cr>  (recreate it as an extended partition spanning the free space)
n l <cr><cr>    (create a logical partition inside it)
w               (write the table and exit)
then reboot and resize (p05 instead of p02):
resize2fs /dev/mmcblk0p5
It's possible the rest of the SD card is unpartitioned space.
You can use GParted to view the partitions on the SD card. You can then either create an additional partition or extend your current one.
From GParted you will also be able to see if there are any old installations on other partitions, as you suggest; however, I think this is unlikely.
When installing the Raspbian distro, fixing the partitioning is the first thing you do after booting. Since you installed Arch Linux you won't get this "guided" step, so you have to do it manually as explained above.
This is how I solved this
As root:
fdisk /dev/mmcblk0
Delete the second partition /dev/mmcblk0p2
d
2
Create a new primary partition and accept the default sizes when prompted. This will create a partition that fills the disk:
n
p
2
enter
enter
Save and exit fdisk:
w
Now reboot. Once rebooted:
resize2fs /dev/mmcblk0p2
Your main / partition should be the full size of the disk now.
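For a scriptable version of the same delete-and-recreate dance, sfdisk can grow a partition in place. Here is a sketch against a throwaway image file, so nothing real is touched (the sizes are made up for the demo; on the Pi you would target /dev/mmcblk0 as root, and the resize2fs step is still needed afterwards):

```shell
# Demo on a scratch image file instead of a real device.
img=$(mktemp)
truncate -s 64M "$img"

# Lay out a small boot partition and a root partition, leaving free space.
sfdisk -q "$img" <<'EOF'
label: dos
size=8M, type=c
size=8M, type=83
EOF

# Grow partition 2 in place: keep its start, extend to the maximum size.
# The ", +" idiom is documented in the sfdisk man page for exactly this.
echo ', +' | sfdisk -q -N 2 "$img"

sfdisk -l "$img"
rm -f "$img"
```

The `, +` input to `sfdisk -N 2` has the same effect as the fdisk delete/recreate sequence above: same start sector, maximum size.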
I have Azure Linux VMs for which I want to configure Azure Monitor alerts when my /root, /etc and /var volumes are more than 90% utilized. Please suggest a way to achieve this.
Do you want to enlarge the disk space of your Azure VM? I assume you want to enlarge your VM partition.
Run all of the commands below in a terminal on the virtual machine.
Use df -iTh to check your VM disk space; the output looks like this:
Filesystem Type Inodes IUsed IFree IUse% Mounted on
udev devtmpfs 117K 378 116K 1% /dev
tmpfs tmpfs 123K 721 122K 1% /run
/dev/sda1 ext4 2.3M 297K 2.0M 13% /
tmpfs tmpfs 123K 1 123K 1% /dev/shm
tmpfs tmpfs 123K 4 123K 1% /run/lock
tmpfs tmpfs 25K 120 25K 1% /run/user/1000
Which partitions are mounted at your /root, /etc and /var?
You can find that in the output of df -iTh.
To enlarge (extend/resize) your Azure VM disk, follow these steps:
First, find which partition is mounted at the directory that is low on space, e.g. /root;
Second, enlarge that target partition;
Third, expand the disk on the Azure side; please refer to:
Expand virtual hard disks on a Linux VM with the Azure CLI (Azure Linux VM documentation)
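If you just want a quick local check before wiring up Azure Monitor, a small shell loop over df can flag mount points above a threshold. A sketch, with the mount list and the 90% figure taken from the question (adjust both to taste):

```shell
#!/bin/sh
# Warn when any watched mount point exceeds the usage threshold.
threshold=90
for mnt in / /var /tmp; do
    # --output=pcent prints just the Use% column; strip everything but digits.
    pct=$(df --output=pcent "$mnt" 2>/dev/null | tail -n 1 | tr -dc '0-9')
    [ -z "$pct" ] && continue   # skip mount points that do not exist here
    if [ "$pct" -gt "$threshold" ]; then
        echo "WARNING: $mnt is at ${pct}% usage"
    fi
done
```

This checks data-block usage; swap --output=pcent for --output=ipcent to watch inode usage instead.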
You can run a KQL query to achieve this.
For example:
InsightsMetrics
| where Computer in (
"CH1-UBNTVM", //Linux
"DC10.na.contosohotels.com" // Windows
)
| where Namespace == "LogicalDisk"
| where Name == "FreeSpacePercentage"
| extend Disk=tostring(todynamic(Tags)["vm.azm.ms/mountId"])
| summarize arg_max(TimeGenerated, *) by Disk, Computer
| project TimeGenerated, Disk, Computer, Val
| where Val < 10 // FreeSpacePercentage below 10 means the disk is more than 90% utilized
More info can be found here: https://learn.microsoft.com/en-us/answers/questions/831491/how-to-setup-azure-alert-when-disk-space-is-90-for.html
Thanks
I don't know if this is more a "classic" Linux question or a Docker question, but:
On a VM where some of my Docker containers are running, I'm seeing something strange. /var/lib/docker is its own partition with 20GB. When I look at the partition with df -h I see this:
eti-gwl1v-dockerapp1 root# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs 7.8G 0 7.8G 0% /dev/shm
tmpfs 7.8G 815M 7.0G 11% /run
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/sda2 12G 3.2G 8.0G 29% /
/dev/sda7 3.9G 17M 3.7G 1% /tmp
/dev/sda5 7.8G 6.8G 649M 92% /var
/dev/sdb2 20G 47M 19G 1% /usr2
/dev/sdb1 20G 2.9G 16G 16% /var/lib/docker
So usage is at 16%. But when I navigate to /var/lib and run du -sch docker I see this:
eti-gwl1v-dockerapp1 root# cd /var/lib
eti-gwl1v-dockerapp1 root# du -sch docker
19G docker
19G total
eti-gwl1v-dockerapp1 root#
So the same directory/partition shows two different sizes? How can that be?
This is really a question for unix.stackexchange.com, but there is filesystem overhead that makes the partition larger than the total size of the individual files within it.
du and df show you two different metrics:
du shows you the (estimated) file space usage, i.e. the sum of all file sizes
df shows you the disk space usage, i.e. how much space on the disk is actually used
These are distinct values and can often diverge:
disk usage may be bigger than the mere sum of file sizes due to additional metadata: e.g. the disk usage of 1000 empty files (file size = 0) is >0, since their file names and permissions need to be stored
the space used by one or multiple files may be smaller than their reported file size due to:
holes in the file - blocks consisting only of null bytes are not actually written to disk; see sparse files
automatic file system compression
deduplication through hard links or copy-on-write
Since Docker uses image layers as a means of deduplication, the latter is most probably the cause of your observation, i.e. the sum of the file sizes is much bigger because most of them are shared/deduplicated through hard links.
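The sparse-file and hard-link effects are easy to reproduce in a scratch directory; a quick sketch (the sizes are arbitrary):

```shell
tmp=$(mktemp -d)

# A 100 MB sparse file: large apparent size, almost no blocks on disk.
truncate -s 100M "$tmp/sparse"
echo "apparent size: $(du -sh --apparent-size "$tmp" | cut -f1)"
echo "disk usage:    $(du -sh "$tmp" | cut -f1)"

# Hard links: two names, one set of blocks; du counts the data only once.
echo data > "$tmp/a"
ln "$tmp/a" "$tmp/b"
ls -l "$tmp/a" "$tmp/b"

rm -rf "$tmp"
```

The first du reports roughly 100M (the declared file size), the second almost nothing (no blocks were allocated), which is the df/du divergence in miniature.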
du estimates filesystem usage through summing the size of all files in it. This does not deal well with the usage of overlay2: there will be many directories which contain the same files as contained in another, but overlaid with additional layers using overlay2. As such, du will show a very inflated number.
I have not tested this since my Docker daemon is not using overlay2, but using du -x to avoid descending into overlays might give the right number. However, this wouldn't work for other Docker storage drivers, such as btrfs.
I was trying to add more storage to my device. When I run
df -h
I get:
[root#ip-172-x-x-x ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 3.8G 44K 3.8G 1% /dev
tmpfs 3.8G 0 3.8G 0% /dev/shm
/dev/nvme0n1p1 7.8G 3.6G 4.2G 46% /
I want to add all the remaining storage to /dev/nvme0n1p1.
When I run
lsblk
I get:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:0 0 300G 0 disk
├─nvme0n1p1 259:1 0 8G 0 part /
└─nvme0n1p128 259:2 0 1M 0 part
I was trying to Google AWS instructions, but I'm still quite confused, since most of them are for setting up a brand-new instance, while in my case I cannot stop the instance.
I cannot run
mkfs
Also, it seems the disk is already mounted?? I guess I may be misunderstanding the meaning of mounting...
since the filesystem is already there.
I just want to use all the existing space.
Thanks for the help in advance!
Your lsblk output shows that you have a 300G disk but nvme0n1p1 is only 8G. You need to first grow the partition to fill the disk and then expand the filesystem to fill the partition:
Snapshot all ebs volumes you care about before doing any resize operations on them.
Install growpart
sudo yum install cloud-utils-growpart
Resize the partition: growpart /dev/nvme0n1 1
Reboot: reboot now
Run lsblk and verify that the partition is now the full disk size
You may still have to run sudo resize2fs /dev/nvme0n1p1 (note: the partition, not the whole disk) to expand the filesystem
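One gotcha with the last step: resize2fs only handles ext filesystems, and on Amazon Linux 2 the root filesystem is typically xfs, which is grown with xfs_growfs instead. A sketch that picks the right tool (the device name is the one from the question; the echo lines are placeholders so this dry run doesn't modify anything):

```shell
#!/bin/sh
# Decide how to grow the root filesystem after the partition was enlarged.
dev=/dev/nvme0n1p1                 # partition from the question; adjust for your system
fstype=$(findmnt -n -o FSTYPE /)   # filesystem type of the root mount
case "$fstype" in
    ext2|ext3|ext4) echo "would run: sudo resize2fs $dev" ;;
    xfs)            echo "would run: sudo xfs_growfs /" ;;
    *)              echo "unhandled filesystem type: $fstype" ;;
esac
```

Replace the echo lines with the real commands once you have confirmed the filesystem type.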
I am currently using the latest version of ownCloud. Since the installation, I cannot log in anymore. A quick look at /var/log/apache2/error.log explains why:
WARNING: could not create relation-cache initialization file "global/pg_internal.init.7826": No space left on device
DETAIL: Continuing anyway, but there's something wrong.
WARNING: could not create relation-cache initialization file "base/17999/pg_internal.init.7826": No space left on device
DETAIL: Continuing anyway, but there's something wrong.
WARNING: could not create relation-cache initialization file "global/pg_internal.init.7827": No space left on device
DETAIL: Continuing anyway, but there's something wrong.
WARNING: could not create relation-cache initialization file "base/17999/pg_internal.init.7827": No space left on device
DETAIL: Continuing anyway, but there's something wrong.
WARNING: could not create relation-cache initialization file "global/pg_internal.init.7828": No space left on device
But I cannot figure out where I am short on space. If I run df -h as root, everything seems OK to me:
:~$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 20G 20G 0 100% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 4.0K 3.9G 1% /dev/shm
tmpfs 3.9G 82M 3.8G 3% /run
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sda2 898G 912M 851G 1% /home
tmpfs 788M 0 788M 0% /run/user/0
Except for the first line, which I hardly understand. I installed ownCloud into /home/owncloud, so I'd bet everything should be OK.
Any idea?
Edit :
Results of findmnt :
~# findmnt /
TARGET SOURCE FSTYPE OPTIONS
/ /dev/sda1 ext4 rw,relatime,errors=remount-ro,data=ordered
~# findmnt /dev/sda1
TARGET SOURCE FSTYPE OPTIONS
/ /dev/sda1 ext4 rw,relatime,errors=remount-ro,data=ordered
~# findmnt /dev/sda2
TARGET SOURCE FSTYPE OPTIONS
/home /dev/sda2 ext4 rw,relatime,data=ordered
Often, these programs store their data under /var. In your case, you don't have a separate mount point for /var, so it is a directory on your root filesystem /. That filesystem is full, and so the program is not working.
Before you attempt a resize or anything else, I think you should find out what is hogging 20GB. du / | sort -n should give you a rough idea of the guilty parties, or you can use a graphical tool like xdiskusage. Clean it up and you'll be good to go.
The other alternative is to look through the config files for owncloud and make it use your home directory to store its data. That way, it will work. But you should clean up your /. Various things will misbehave if you don't.
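A slightly friendlier variant of that du invocation: -x keeps du on one filesystem (so the separate /home mount is not counted), -d1 limits the depth, and sort -h sorts the human-readable sizes (GNU coreutils). A sketch:

```shell
# Largest first-level directories on the root filesystem only.
# Errors from unreadable directories are discarded.
du -xhd1 / 2>/dev/null | sort -h | tail -n 10
```

The biggest offenders appear at the bottom of the output; descend into them with the same command to narrow things down.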
Maybe you are out of inodes: No space left on device – running out of Inodes.
Use df -i to check that. It happened to me because my backup contained millions of small files, so there was space left but no inodes left.
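The inode effect is easy to see: every file costs one inode regardless of its size. A sketch that burns 100 inodes with empty files (the exact counts will vary with other filesystem activity):

```shell
tmp=$(mktemp -d)
before=$(df --output=iused "$tmp" | tail -n 1 | tr -dc '0-9')

# 100 empty files: zero bytes of data, but 100 inodes consumed.
for i in $(seq 1 100); do
    : > "$tmp/file$i"
done

after=$(df --output=iused "$tmp" | tail -n 1 | tr -dc '0-9')
echo "inodes consumed: $((after - before))"

rm -rf "$tmp"
```

Scale this up to millions of files and a filesystem can hit IUse% 100% while df -h still shows plenty of free space.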
We have an issue where our CentOS 7 server will not generate a kernel dump file in /var/crash upon Kernel panic. It appears the crash kernel never boots. We’ve followed the Rhel guide (http://red.ht/1sCztdv) on configuring crash dumps and at first glance everything appears to be configured correctly. We are triggering a panic like this:
echo 1 > /proc/sys/kernel/sysrq
echo c > /proc/sysrq-trigger
This causes the system to freeze. We get no messages on the console and the console becomes unresponsive. At this point I would imagine the system would boot a crash kernel and begin writing a dump out to /var/crash. I’ve left it in this frozen state for up to 30 minutes to give it time to complete the entire dump. However after a hard cold reboot /var/crash is empty.
Additionally, I've replicated the configuration in a KVM virtual machine and kdump works as expected. So either there is something wrong with my configuration on the physical system, or something odd about that hardware config causes the hang rather than the dump.
Our server is an HP G9 with 24 cores and 128GB of memory. Here are some other details:
[user#host]$ cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.10.0-123.el7.x86_64 root=UUID=287798f7-fe7a-4172-a35a-6a78051af4d2 ro rd.lvm.lv=vg_sda/lv_root vconsole.font=latarcyrheb-sun16 rd.lvm.lv=vg_sda/lv_swap crashkernel=auto vconsole.keymap=us rhgb nosoftlockup intel_idle.max_cstate=0 mce=ignore_ce processor.max_cstate=0 idle=mwait isolcpus=2-11,14-23
[user#host]$ systemctl is-active kdump
active
[user#host]$ cat /etc/kdump.conf
path /var/crash
core_collector makedumpfile -l --message-level 1 -d 31 -c
[user#host]$ cat /proc/iomem |grep Crash
2b000000-357fffff : Crash kernel
[user#host]$ dmesg|grep Reserving
[ 0.000000] Reserving 168MB of memory at 688MB for crashkernel (System RAM: 131037MB)
[user#host]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_sda-lv_root 133G 4.7G 128G 4% /
devtmpfs 63G 0 63G 0% /dev
tmpfs 63G 0 63G 0% /dev/shm
tmpfs 63G 9.1M 63G 1% /run
tmpfs 63G 0 63G 0% /sys/fs/cgroup
/dev/sda1 492M 175M 318M 36% /boot
/dev/mapper/vg_sdb-lv_data 2.8T 145G 2.6T 6% /data
After modifying the following parameters we were able to reliably get crash dumps:
Changed crashkernel=auto to crashkernel=1G. I'm not sure why we need 1G, as the formula indicated 128M plus 64M for every 1TB of RAM.
/etc/sysconfig/kdump: Removed everything from KDUMP_COMMANDLINE_APPEND except irqpoll nr_cpus=1, resulting in: KDUMP_COMMANDLINE_APPEND="irqpoll nr_cpus=1"
/etc/kdump.conf: Added compression (-c) to makedumpfile
Not 100% sure why this works, but it does. Would love to know what others think.
Eric
Eric,
1G seems a bit large. I've never seen anything larger than 200M for a normal server. Not sure about the sysconfig settings. Compression is a good idea, but I don't think it would affect the issue, since your target is close to total memory and you're only dumping the kernel ring.