Ubuntu wrongly shows low disk space [closed] - linux

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 4 years ago.
I installed Ubuntu 14.04 in VirtualBox. Initially I allocated 10 GB for the .vdi; I then increased it to 25 GB. When I check the size in the VirtualBox settings, it correctly shows 25 GB. See below:
But I keep getting Low Disk Space warnings in Ubuntu.
I checked System Monitor > File Systems and see that it is not picking up the newly allocated space; it still shows only the old 6.2 GB. See below:
What should I do to solve this? Please help.

I encountered the same problem...
I used the following to solve it. First, resize the virtual drive:
vboxmanage modifyhd "/path/to/virtualdrive.vdi" --resize <NewSize>
Then open the virtual machine and resize the partition (easily done using GParted); in my case the drive was resized to 100 GB. After that, extend the logical volume and the filesystem:
# df -h /home/
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/ubuntu--vg-root 24G 22G 1.1G 96% /
# lvextend -L +100G /dev/mapper/ubuntu--vg-root
Size of logical volume ubuntu-vg/root changed from 24.00 GiB (6145
extents) to 124.00 GiB (31745 extents).
Logical volume ubuntu-vg/root successfully resized.
# resize2fs /dev/mapper/ubuntu--vg-root
The filesystem on /dev/mapper/ubuntu--vg-root is now 32506880 (4k)
blocks long.
# df -h
Filesystem Size Used Avail Use% Mounted on
udev 3.9G 0 3.9G 0% /dev
tmpfs 798M 1.4M 797M 1% /run
/dev/mapper/ubuntu--vg-root 122G 22G 96G 19% /

df -h (or your GUI system monitor app) shows the actual filesystem size, not the volume size.
You should first check the /dev/sda device, then make sure the /dev/sda1 partition has been enlarged (fdisk or other software can be used for this), and after that grow the filesystem with the resize2fs utility.
Then you'll be able to use the whole disk.
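The point made above, that a filesystem does not grow on its own when the device underneath it grows, can be demonstrated without touching a real disk. A minimal sketch using a throwaway file-backed ext4 image (assumes mkfs.ext4, dumpe2fs, and resize2fs from e2fsprogs are installed; no root needed):

```shell
# A filesystem does not grow automatically when its underlying device grows.
img=$(mktemp)
truncate -s 16M "$img"                      # the "device" starts at 16 MiB
mkfs.ext4 -q -F "$img"                      # create a filesystem on it
truncate -s 64M "$img"                      # grow the "device" to 64 MiB
dumpe2fs -h "$img" 2>/dev/null | grep 'Block count'   # fs still sized for 16 MiB
resize2fs "$img" >/dev/null 2>&1            # grow the fs to fill the device
dumpe2fs -h "$img" 2>/dev/null | grep 'Block count'   # fs now sized for 64 MiB
rm -f "$img"
```

Between the two truncate calls, df-style tools would keep reporting the old, smaller filesystem size, which is exactly what the question is seeing.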

You should run df -h in a terminal to see which partitions are large and what their paths are. Then use du -csh /path/to/dir/ to see which files are so big, and deal with them.
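The df/du workflow described above can be sketched safely on a throwaway directory (the file names and sizes are illustrative; on a real system you would start from / or from a mount point df flagged as full):

```shell
# Find out where space is going: df for the partition, du for the contents.
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/big.bin" bs=1024 count=2048 2>/dev/null   # ~2 MiB file
df -h "$tmp"                       # which partition the directory lives on
du -csh "$tmp"                     # total size of the directory
du -ah "$tmp" | sort -rh | head    # largest entries first
rm -rf "$tmp"
```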

Related

write error disk full in EC2 Machine [closed]

Closed 5 years ago.
I have an EC2 Linux instance with some software installed.
I downloaded a new zip and was trying to unzip it.
I got this error: write error (disk full?). Continue? (y/n/^C) n
The zip is not corrupted and I can unzip it on other instances.
I changed the instance type from small to medium and then to large; nothing worked.
I ran df -h:
$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 16G 56K 16G 1% /dev
tmpfs 16G 0 16G 0% /dev/shm
/dev/xvda1 9.8G 9.7G 0 100% /
I think /dev/xvda1 is the culprit. How can I increase its size? And what is /dev/xvda1?
It is not a matter of instance type; you must increase the size of the volume (EBS).
1) Go to the console, select that instance's EBS volume, click the Actions drop-down menu, then click Modify Volume (a form will appear with the current volume size; increase it).
2) If the disk is completely full, remove a few kilobytes so that step 3 can run; rm -rf /tmp/* for example.
3) Grow/expand your filesystem:
sudo growpart /dev/xvda 1
sudo resize2fs /dev/xvda1
NOTES:
Verify step 1 with the lsblk command and step 3 with df -h.
Scale your instance back down before you receive a huge bill at the end of the month 😅 (leave it as small as it was).
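As a quick way to spot a nearly-full filesystem automatically, df output can be filtered with awk. A minimal sketch using the sample output quoted in the question (the 90% threshold is an arbitrary choice; on a live system you would pipe df -h straight into the filter):

```shell
# Flag any filesystem at or above 90% usage.
df_output='Filesystem      Size  Used Avail Use% Mounted on
devtmpfs         16G   56K   16G   1% /dev
tmpfs            16G     0   16G   0% /dev/shm
/dev/xvda1      9.8G  9.7G     0 100% /'
echo "$df_output" | awk 'NR > 1 { sub(/%/, "", $5); if ($5 + 0 >= 90) print $1, $5 "%" }'
# prints: /dev/xvda1 100%
```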

upgrade from Centos 6.7 to 7 without compromise the data in a LVM [closed]

Closed 6 years ago.
A CentOS 6.7 server with 4 disks and this configuration:
"/" in an LVM (physical disk 1)
"/data" in LVM (physical disk 1 + fake RAID 0 of disks 2 and 3)
"/data1" ext4 (physical disk 4)
The server is a Supermicro (motherboard model X8DTL) with 8 GB of RAM.
I need to upgrade to CentOS 7 because the dependencies of the newer software are only in this distro, but I'm afraid of messing up the data in "/data".
How can I upgrade safely without damaging "/data"?
PS:
I can't make a backup; there is more than 5 TB of data.
"/data" and "/data1" contain only standalone files (text, spreadsheet, and multimedia files). The programs and their associations are only in "/".
Edit:
Here is how the disks are arranged:
# lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
NAME FSTYPE SIZE MOUNTPOINT LABEL
sda linux_raid_member 931,5G GLaDOS:0
└─md0 LVM2_member 1,8T
└─vg_glados_media-lv_data (dm-3) ext4 3,6T /data
sdc linux_raid_member 931,5G GLaDOS:0
└─md0 LVM2_member 1,8T
└─vg_glados_media-lv_data (dm-3) ext4 3,6T /data
sdb 1,8T
├─sdb1 ext4 500M /boot
├─sdb2 LVM2_member 97,7G
│ ├─vg_glados-lv_root (dm-0) ext4 50G /
│ ├─vg_glados-lv_swap (dm-1) swap 7,8G [SWAP]
│ └─vg_glados-lv_home (dm-2) ext4 39,9G /home
└─sdb3 LVM2_member 1,7T
└─vg_glados_media-lv_data (dm-3) ext4 3,6T /data
sdd 931,5G
└─sdd1 ext4 931,5G /data1 /data1
sr0 1024M
# df -H
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_glados-lv_root
53G 44G 6,6G 87% /
tmpfs 4,2G 78k 4,2G 1% /dev/shm
/dev/sdb1 500M 132M 342M 28% /boot
/dev/mapper/vg_glados_media-lv_data
3,9T 3,7T 28G 100% /data
/dev/mapper/vg_glados-lv_home
42G 862M 39G 3% /home
/dev/sdd1 985G 359G 576G 39% /data1
You have two options:
1) Upgrade the existing installation. You could follow this RHEL manual, for example.
2) Do a fresh install, but a) tell Anaconda that you wish to do the partitioning manually, and b) carefully pick the correct partitions to format and to install the OS on.
The latter option is much riskier than the former. Also, you will lose any history/credentials/etc. and will need to configure everything again.
If you have some spare disks to make a backup of your /data partition, better to do so in both cases.
I think you should add a new disk to your volume group. With this new space you can create a new logical volume where you can try experimental installations without compromising the rest of the system.

ubuntu 14.04 disk full [closed]

Closed 7 years ago.
When I run df -h on my Ubuntu 14.04 laptop I see the following:
pdp2907#pdp2907-Satellite-C655:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 933M 4.0K 933M 1% /dev
tmpfs 189M 1.1M 188M 1% /run
/dev/mapper/ubuntu--vg-root 228G 215G 1.1G 100% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 5.0M 0 5.0M 0% /run/lock
none 943M 11M 933M 2% /run/shm
none 100M 36K 100M 1% /run/user
/dev/sda1 236M 44M 180M 20% /boot
The /dev/mapper/ubuntu--vg-root filesystem is full.
How do I correct the problem, please?
Thanks for all your support.
You need to know what data is on it. So far I assume you have the whole OS in / only. What you can do, for example, is move some content to another volume (disk) and either mount it or make a symbolic link. I personally place /usr on a separate volume, and my /opt is a link; then the root partition does not need to be so huge. But in your case the root holds over 200 GB, which seems a bit more than the OS alone :). Explore the files over there; perhaps you'll also find some movies if the users' home directories are there too.
find / -size +100M
The command above can help to search for files over 100 MB in size (such files normally should not appear in the root filesystem).
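To rank what find turns up by size, it can be combined with du and sort. A minimal sketch with small thresholds on a throwaway directory so it is safe to run anywhere (file names are illustrative; on a real system you would use find / -size +100M):

```shell
# Find files over a size threshold and list them largest-first.
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/a.bin" bs=1024 count=300 2>/dev/null   # ~300 KiB
dd if=/dev/zero of="$tmp/b.bin" bs=1024 count=100 2>/dev/null   # ~100 KiB
find "$tmp" -type f -size +200k -exec du -h {} + | sort -rh     # only a.bin matches
rm -rf "$tmp"
```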
In order to free up disk space in /dev/mapper/ubuntu--vg-root you can remove cached package files with the following command:
sudo apt-get clean
You can still free up more space by uninstalling packages that are no longer required:
sudo apt-get autoremove

Linux and Hadoop : Mounting a disk and increasing the cluster capacity [closed]

Closed 7 years ago.
First of all, I'm a total noob at Hadoop and Linux. I have a cluster of five nodes which, when started, shows each node's capacity as only 46.6 GB, while each machine has around 500 GB of space that I don't know how to allocate to these nodes.
(1) Do I have to change the datanode and namenode file size (I checked these and they show the same space remaining as in the Datanode Information tab)? If so, how should I do that?
(2) Also, this 500 GB disk only shows up when I run lsblk, not when I run df -H. Does that mean it's not mounted? These are the results of the commands. Can someone explain what this means?
[hadoop#hdp1 hadoop]$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
sda 8:0 0 50G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 49.5G 0 part
  ├─VolGroup-lv_root (dm-0) 253:0 0 47.6G 0 lvm /
  └─VolGroup-lv_swap (dm-1) 253:1 0 2G 0 lvm [SWAP]
sdb 8:16 0 512G 0 disk
[hadoop#hdp1 hadoop]$ sudo df -H
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
51G 6.7G 41G 15% /
tmpfs 17G 14M 17G 1% /dev/shm
/dev/sda1 500M 163M 311M 35% /boot
Please help. Thanks in advance.
First, can someone help me understand why it shows these different disks, what that means, and where each one resides? I can't seem to figure it out.
You are right. Your second disk (sdb) is not mounted anywhere. If you are going to dedicate the whole disk to hadoop data, here is how you should format and mount it:
Format your disk:
mkfs.ext4 -m1 -O dir_index,extent,sparse_super /dev/sdb
For mounting, edit the file /etc/fstab and add this line:
/dev/sdb /hadoop/disk0 ext4 noatime 1 2
After that, create the directory /hadoop/disk0 (it doesn't have to be named like that; you could use a directory of your choice):
mkdir -p /hadoop/disk0
Now you are ready to mount the disk:
mount -a
Finally, you should let Hadoop know that you want to use this disk as Hadoop storage. Your /etc/hadoop/conf/hdfs-site.xml should contain these config parameters:
<property>
  <name>dfs.name.dir</name>
  <value>/hadoop/disk0/nn</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/hadoop/disk0/dn</value>
</property>

Amazon EC2 micro instance - ran out of space? [closed]

Closed 8 years ago.
df -h shows that only 71% of the space is used:
root#ip-xxx-xxx-xxx-xxx:/home/myuser# df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 7.9G 5.3G 2.2G 71% /
udev 10M 0 10M 0% /dev
tmpfs 60M 88K 60M 1% /run
/dev/xvda1 7.9G 5.3G 2.2G 71% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 120M 0 120M 0% /run/shm
However, nothing can create a file anymore; even mc does not start:
#mc
Cannot create temporary directory /tmp/mc-root: No space left on device (28)
PHP cannot create files either:
PHP Warning: fopen(/home/.../file.json): failed to open stream: No space
left on device in /webdev/www/..../my.php on line 10
What could it be?
I use Debian 7 on a micro instance.
df -h shows you disk free space in human-readable format, but this sounds like an inode table issue, which you can check via df -i. For example, here is the inode usage on my own Amazon EC2 micro instance running Ubuntu 12.04:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/xvda1 524288 116113 408175 23% /
udev 73475 379 73096 1% /dev
tmpfs 75540 254 75286 1% /run
none 75540 5 75535 1% /run/lock
none 75540 1 75539 1% /run/shm
Depending on the output, I'd bet your inode table is filled to the brim. The inode table records metadata for each individual file, not just how much space is used. So you might have 71% of the space in use, yet that 71% can be made up of thousands of files; with tons of small files you can still technically have free space while the inode table is full, and you have to clear it out to get your system fully functional again.
I'm not too clear on the best way to clean this up, but if you know of a directory with tons of files you can toss right away, I'd recommend removing those first. For what it's worth, this question-and-answer thread looks like it has some decent ideas.
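One way to locate the inode hogs is to count entries under each top-level directory. A minimal sketch, demonstrated on a temporary tree so it runs anywhere (on a real system you would point base at / and run as root):

```shell
# Count entries (files + subdirs) under each top-level directory;
# directories with huge counts are the usual inode hogs.
base=$(mktemp -d)
mkdir -p "$base/many" && touch "$base/many/f"{1..50}
mkdir -p "$base/few"  && touch "$base/few/only"
for d in "$base"/*/; do
    printf '%s %s\n' "$(find "$d" | wc -l)" "$d"
done | sort -rn          # "many" sorts first with 51 entries
rm -rf "$base"
```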
