Increase size of filesystem /dev/sdb1 after increasing the size of /dev/sdb - linux

For the FS
df -kh /store
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1        50G   45G  1.9G  97% /store
I have increased the size of /dev/sdb by 20G and then added that space to /dev/sdb1 by deleting the partition and recreating it:
sdb               8:16   0   70G  0 disk
└─sdb1            8:17   0   70G  0 part
However, I am not sure how to make the extra space visible in /store.
pvresize gives the error "no physical volume found".
It is a cloud VM.
vgs and vgdisplay do not show any VG either.
The fstab entry is:
/dev/sdb1 /store/ ext4 defaults 0 0

From the output you posted, I don't see any indication that the system is using LVM; the filesystem sits directly on /dev/sdb1, which is why pvresize and vgs find nothing. So simply try umount /store followed by resize2fs /dev/sdb1, then mount it again.
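A minimal sketch of that sequence (assuming the ext4 filesystem from the fstab entry above and that no process is still using /store):
# Unmount the filesystem; this fails if something still has files open under /store
sudo umount /store
# Optional but recommended: check the filesystem before resizing
sudo e2fsck -f /dev/sdb1
# Grow the ext4 filesystem to fill the enlarged partition
sudo resize2fs /dev/sdb1
# Mount it again and confirm the new size
sudo mount /dev/sdb1 /store
df -h /store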

Related

EC2: how to add more volume to an existing device

I was trying to add more volume to my device
df -h
I get:
[root@ip-172-x-x-x ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 3.8G 44K 3.8G 1% /dev
tmpfs 3.8G 0 3.8G 0% /dev/shm
/dev/nvme0n1p1 7.8G 3.6G 4.2G 46% /
I want to add all the existing storage to /dev/nvme0n1p1.
lsblk
I get
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:0 0 300G 0 disk
├─nvme0n1p1 259:1 0 8G 0 part /
└─nvme0n1p128 259:2 0 1M 0 part
I was trying to search around the AWS instructions but am still quite confused, since most of them cover setting up a brand new instance, while in my case I cannot stop the instance.
So I cannot run
mkfs
Also, it seems like the disk is already mounted? I guess I may misunderstand the meaning of mount,
since the filesystem is already there.
I just want to use all the existing space.
Thanks in advance for the help!
Your lsblk output shows that you have a 300G disk but your nvme0n1p1 partition is only 8G. You need to first grow the partition to fill the disk and then expand the filesystem to fill the partition (the commands are collected in the sketch below):
Snapshot all EBS volumes you care about before doing any resize operations on them.
Install growpart:
sudo yum install cloud-utils-growpart
Resize the partition: growpart /dev/nvme0n1 1
Reboot: reboot now
Run lsblk and verify that the partition is now the full disk size
You may still have to run sudo resize2fs /dev/nvme0n1p1 to expand the filesystem
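A consolidated sketch of those steps (assuming an ext4 root filesystem, as on most Amazon Linux AMIs; partition index 1 refers to nvme0n1p1):
# Install the partition-growing tool and grow partition 1 to fill the disk
sudo yum install -y cloud-utils-growpart
sudo growpart /dev/nvme0n1 1
# Verify the partition now spans the whole 300G disk
lsblk /dev/nvme0n1
# Grow the ext4 filesystem to fill the partition
# (if the root filesystem were XFS instead, sudo xfs_growfs / would be the equivalent)
sudo resize2fs /dev/nvme0n1p1
df -h /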

Not able to resize the AWS EC2 volume

I created an AWS EC2 Linux instance with 8GB root volume. Then I increased the EBS volume to 9GB and it went to the completed state. It's a small volume, so the resize took a couple of minutes to complete.
Now I am trying to extend the Linux file system after resizing the volume, using the instructions mentioned here. But I get the below error message. I tried the entire process twice, but it's all the same.
The filesystem is already 2096635 (4k) blocks long. Nothing to do!
Can someone help me?
Just reboot the instance because it automatically resizes your root filesystem on boot.
I tried it myself. Here is the instance with an 8GB volume:
[ec2-user@ip-172-31-15-216 ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
[ec2-user@ip-172-31-15-216 ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 236M 56K 236M 1% /dev
tmpfs 246M 0 246M 0% /dev/shm
/dev/xvda1 7.8G 985M 6.7G 13% /
After modifying the EBS Volume:
[ec2-user@ip-172-31-15-216 ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 9G 0 disk
└─xvda1 202:1 0 8G 0 part /
[ec2-user@ip-172-31-15-216 ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 236M 56K 236M 1% /dev
tmpfs 246M 0 246M 0% /dev/shm
/dev/xvda1 7.8G 985M 6.7G 13% /
After the reboot:
[ec2-user@ip-172-31-15-216 ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 9G 0 disk
└─xvda1 202:1 0 9G 0 part /
[ec2-user@ip-172-31-15-216 ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 236M 56K 236M 1% /dev
tmpfs 246M 0 246M 0% /dev/shm
/dev/xvda1 8.8G 984M 7.7G 12% /
See also: increase EC2 EBS volume after cloning - resize2fs not working
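If a reboot is not convenient, the underlying fix can usually be applied online as well; a sketch (assuming an ext4 root filesystem on xvda1, as in the outputs above):
# Grow partition 1 of xvda to fill the enlarged 9G volume
# (growpart comes from the cloud-utils-growpart package if it is not already installed)
sudo growpart /dev/xvda 1
# Grow the ext4 filesystem to fill the partition; resize2fs then has something to do
sudo resize2fs /dev/xvda1
df -h /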
# http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/storage_expand_partition.html
# Before doing this, detach the root volume and attach it to another instance
# Note:
# 1) Before detaching the volume, make a note of the device name it is currently
#    attached as, because the same name must be used when attaching it back; otherwise data will be lost
# 2)
# Identify the device we want to expand
lsblk
# Run parted on the device
sudo parted /dev/xvdf
# Change the parted unit of measure to sectors
unit s
# Run the print command to list the partitions on the device
print
# If it shows a warning, choose Fix
# Delete the partition entry for the partition using the number (1) from the previous step
rm 1 # the number changes based on the partition we want to delete
# Create a new partition that extends to the end of the volume
mkpart Linux 4096s 100%
# Run the print command again to verify your partition
print
# Check to see that any flags that were present earlier are still
# present for the partition that you expanded. In some cases the boot
# flag may be lost. If a flag was dropped from the partition when it was expanded,
# add the flag with the following command, substituting your partition number and the flag name.
# For example, the following command adds the boot flag to partition 1
set 1 boot on
# Run the quit command to exit parted
quit
# Verify the filesystem on the expanded partition
sudo e2fsck -f /dev/xvdf1
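The linked guide's remaining step is the actual filesystem resize; a sketch of that tail end (assuming ext4 on the expanded xvdf1):
# Grow the ext4 filesystem to fill the new, larger partition
sudo resize2fs /dev/xvdf1
# Then detach the volume from the helper instance, attach it back to the
# original instance under the device name noted earlier, boot, and verify with lsblk / df -h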

My mounted EBS volume is not showing up

Trying to mount a 384G volume from an old instance onto a newly configured instance (8G). The attached 384G volume shows up in lsblk, but in df -h it doesn't come up at all. What am I doing wrong?
[ec2-user@ip-10-111-111-111 ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvdf 202:80 0 384G 0 disk
xvda1 202:1 0 8G 0 disk /
[ec2-user@ip-10-111-111-111 ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.9G 1.5G 6.4G 19% /
tmpfs 1.9G 0 1.9G 0% /dev/shm
Note: On EC2 instance dashboard it displays
Root device: /dev/sda1
Block devices: /dev/sda1 /dev/sdf
df -k will only show mounted volumes.
You will need to mount your volume first, like this: mount /dev/xvdf /mnt. Then you will be able to access its content from /mnt and see it when typing df -k.
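A sketch of the full check-and-mount sequence (assuming the volume already carries a filesystem from the old instance; /data is just an example mount point):
# Confirm the device actually contains a filesystem (a raw, unformatted volume shows just "data")
sudo file -s /dev/xvdf
# Create a mount point and mount the volume
sudo mkdir -p /data
sudo mount /dev/xvdf /data
# The volume now shows up in df output
df -h /data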
For those landing here after not finding their xvdf devices on AWS EC2 C5 or M5 instances: they are renamed to /dev/nvme... as per the docs:
For C5 and M5 instances, EBS volumes are exposed as NVMe block
devices. The device names that you specify are renamed using NVMe
device names (/dev/nvme[0-26]n1). For more information, see Amazon EBS
and NVMe.
If this is Windows, follow this:
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/recognize-expanded-volume-windows.html#recognize-expanded-volume-windows-disk-management

Hudson: returned status code 141: fatal: write error: No space left on device

I copied one of the existing projects and created a new project in Hudson. While running the build it says "returned status code 141: fatal: write error: No space left on device".
As suggested in other forums, I checked the free space and inode usage on the filesystem, and nothing seems problematic there. Hudson is running as a service and the Hudson user has been given sudo privileges. The older job can still be run, so nothing is different in the new cloned job.
Disk Space
bash-4.1$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_dev-lv_root
20G 19G 28K 100% /
tmpfs 1.9G 192K 1.9G 1% /dev/shm
/dev/sda1 485M 83M 377M 19% /boot
/dev/mapper/vg_dev-lv_home
73G 26G 44G 38% /home
i-nodes used
bash-4.1$ df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/vg_dev-lv_root
1310720 309294 1001426 24% /
tmpfs 490645 4 490641 1% /dev/shm
/dev/sda1 128016 46 127970 1% /boot
/dev/mapper/vg_dev-lv_home
4833280 117851 4715429 3% /home
Hudson build log
bash-4.1$ cat log
Started by user anonymous
Checkout:workspace / /var/lib/hudson/jobs/Demo/workspace - hudson.remoting.LocalChannel@1d4ab266
Using strategy: Default
Checkout:workspace / /var/lib/hudson/jobs/Demo/workspace - hudson.remoting.LocalChannel@1d4ab266
Fetching changes from the remote Git repository
Fetching upstream changes from ssh://demouser@10.10.10.10:20/home/git-repos/proj.git
ERROR: Problem fetching from origin / origin - could be unavailable. Continuing anyway
ha:AAAAWB+LCAAAAAAAAABb85aBtbiIQSmjNKU4P08vOT+vOD8nVc8DzHWtSE4tKMnMz/PLL0ldFVf2c+b/lb5MDAwVRQxSaBqcITRIIQMEMIIUFgAAckCEiWAAAAA=ERROR: (Underlying report) : Error performing command: git fetch -t ssh://demouser@10.10.10.10:20/home/git-repos/proj.git +refs/heads/*:refs/remotes/origin/*
Command "git fetch -t ssh://demouser@10.10.10.10:20/home/git-repos/proj.git +refs/heads/*:refs/remotes/origin/*" returned status code 141: fatal: write error: No space left on device
ha:AAAAWB+LCAAAAAAAAABb85aBtbiIQSmjNKU4P08vOT+vOD8nVc8DzHWtSE4tKMnMz/PLL0ldFVf2c+b/lb5MDAwVRQxSaBqcITRIIQMEMIIUFgAAckCEiWAAAAA=ERROR: Could not fetch from any repository
ha:AAAAWB+LCAAAAAAAAABb85aBtbiIQSmjNKU4P08vOT+vOD8nVc8DzHWtSE4tKMnMz/PLL0ldFVf2c+b/lb5MDAwVRQxSaBqcITRIIQMEMIIUFgAAckCEiWAAAAA=FATAL: Could not fetch from any repository
ha:AAAAWB+LCAAAAAAAAABb85aBtbiIQSmjNKU4P08vOT+vOD8nVc8DzHWtSE4tKMnMz/PLL0ldFVf2c+b/lb5MDAwVRQxSaBqcITRIIQMEMIIUFgAAckCEiWAAAAA=hudson.plugins.git.GitException: Could not fetch from any repository
at hudson.plugins.git.GitSCM$3.invoke(GitSCM.java:887)
at hudson.plugins.git.GitSCM$3.invoke(GitSCM.java:845)
at hudson.FilePath.act(FilePath.java:758)
at hudson.FilePath.act(FilePath.java:740)
at hudson.plugins.git.GitSCM.gerRevisionToBuild(GitSCM.java:845)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:622)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1483)
at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:507)
at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:424)
at hudson.model.Run.run(Run.java:1366)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:145)
Your error message is quite clear: There is no space left on device.
This is verified by your df output:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_dev-lv_root 20G 19G 28K 100% /
This tells you that you have a root partition / with a total size of 20GB which is 100% in use.
20GB is probably a bit small in your case. As this "partition" is managed by LVM (/dev/mapper/vg...), it is possible to extend it to create more space for your data (see the sketch below).
Otherwise you have to check whether there is some "garbage" lying around which can be removed.
You can use something like xdiskusage / to find out what is occupying your precious disk space.
But if you don't understand the concept of a file system, maybe it is easier to find someone else to do it for you.
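If the volume group still has free extents (or another disk can be added to it), extending the root LV might look like this sketch; the vg_dev / lv_root names come from the df output above, and the +10G figure is only an example:
# Check whether the volume group has free space to hand out
sudo vgs vg_dev
# Grow the root logical volume by 10G and then grow the filesystem inside it
# (resize2fs assumes an ext2/3/4 filesystem on the LV)
sudo lvextend -L +10G /dev/vg_dev/lv_root
sudo resize2fs /dev/vg_dev/lv_root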
I had a very similar issue; it turned out to be a 40 GB log file from a "never-ending" build which had been running for 8 hours.

Understanding Linux partitions with Amazon EC2

I am relatively new to Linux. In one of our projects, we use Amazon's EC2 instances for processing some files. We upload the files to S3 after processing. The EC2 instance is booted using an existing AMI.
Recently I got a "no space left on disk" error, so the processing of files was halted. I cleaned up some older files and the processing continued.
Now when I look at the available space using df -h:
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 9.9G 5.7G 3.7G 61% /
none 3.7G 0 3.7G 0% /dev/shm
/dev/xvdb 414G 199M 393G 1% /mnt
/dev/xvdc 414G 199M 393G 1% /data
I can see my files are affecting only /dev/xvda1.
I have the following queries:
What is the use of the other partitions when I can see my files affecting only /dev/xvda1?
It looks like we are effectively using only 10 GB of space and the rest is being wasted. How can I use the other space? Can I move some disk space to /dev/xvda1, or directly store files in the other areas?
As you can see from the output of df -h, there are two large partitions mounted on /mnt and /data respectively. I suggest that you use those partitions by processing the files in one of those directories. If you cannot move where the processing happens for some reason, you can remount the partitions in the appropriate place.
If for example your files are processed in the directory /var/mydir and you cannot change that, do the following (as root):
umount /mnt
mount /dev/xvdb /var/mydir
You can of course use the other partition as well if you prefer that.
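After remounting, a quick check and an optional way to make the change survive a reboot (a sketch; /var/mydir is carried over from the example above, and the ext4 type is only an assumption about how the volume was formatted):
# Confirm the space is now available at the processing directory
df -h /var/mydir
# Optional: update the corresponding line in /etc/fstab so the mount persists, e.g.
# /dev/xvdb  /var/mydir  ext4  defaults,nofail  0  2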
