Backup entire disk in Ubuntu - linux

I would like to make a backup of the entire HDD disk.
Step by step, what I'm trying to do:
1) Check the storage capacity (that is going to be backed up):
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 455G 157G 275G 37% /
2) Mount extra, empty hdd to /mnt/backup/
/dev/sdb 294G 63M 279G 1% /mnt/backup
3) Run backup (using lzop as the fastest compressor)
dd if=/dev/sda1 bs=4M conv=noerror iflag=noatime,nofollow | lzop -1 > /mnt/backup/dev-sda1.lzo
But the backup fails with error: lzop: No space left on device: <stdout>
The extra HDD gets completely filled with dev-sda1.lzo. But the used space of /dev/sda1 (157G) is obviously less than the 279G available on /dev/sdb, even without compression.
In /etc/fstab, /dev/sda1 is mounted to "/":
UUID=8a49b90e-6115-43a6-9702-7620182bbbf5 / ext4 errors=remount-ro 0 1
Is it possible that "dd" is doing a recursive copy of the "/mnt/backup/" folder and that this leads to the failure?
Please advise.

Thanks to Mark Setchell for pointing me in the right direction.
The reason the dd approach overflowed the target is that dd copies the raw block device, free blocks included, so it was trying to write the full 455 G partition rather than just the 157 G of used data. The solution that dumps the whole partition without the free space is:
dump -0a -z1 -f /mnt/hdd1/dev-sda1.dump.gz /dev/sda1
For a 157 G partition holding Ubuntu 14.04 plus development and database files, dump took 45 minutes (on a 7200 rpm HDD) and the resulting file was 80 G (compression level 1).
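For completeness, a hedged sketch of restoring such a dump onto a freshly formatted target (the device name /dev/sdX1 and mount point /mnt/restore are placeholders; restore comes from the same dump package and reads the compressed dumps it created):
mkfs.ext4 /dev/sdX1                       # format the target partition (placeholder device name)
mount /dev/sdX1 /mnt/restore              # mount it where the tree should be rebuilt
cd /mnt/restore
restore -rf /mnt/hdd1/dev-sda1.dump.gz    # rebuild the dumped filesystem in the current directory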

Related

Increase size of filesystem /dev/sdb1 after increasing the size of /dev/sdb

For the FS:
df -kh /store
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1        50G   45G  1.9G  97% /store
I increased the size of /dev/sdb by 20G and then added the 20G to /dev/sdb1 by deleting the partition and recreating it:
sdb               8:16   0   70G  0 disk
└─sdb1            8:17   0   70G  0 part
However, I am not sure how to make the extra space visible in /store.
pvresize gives the error "no physical volume found".
It is a cloud VM.
vgs and vgdisplay do not show any VG either.
Output of fstab below:
/dev/sdb1 /store/ ext4 defaults 0 0
From the output you posted, I don't see any indication that the system is using LVM; rather, the filesystem is directly on /dev/sdb1. So simply try umount /store followed by resize2fs /dev/sdb1.
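A hedged sketch of that sequence (the device and mount point are taken from the question; the e2fsck pass is a common precaution before an offline resize, not something the original answer requires):
sudo umount /store
sudo e2fsck -f /dev/sdb1     # check the filesystem before resizing (precaution)
sudo resize2fs /dev/sdb1     # grow the ext4 filesystem to fill the enlarged partition
sudo mount /dev/sdb1 /store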

Expand virtual hard disks on a Linux VM with the Azure CLI

I am trying to extend a disk in my VM (Azure). I used to do it like this:
sudo umount /dev/sdc1
(sdc1 as an example)
sudo parted /dev/sdc
after typing print, I should see something like this:
GNU Parted 3.2
Using /dev/sdc1
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: Unknown Msft Virtual Disk (scsi)
Disk /dev/sdc1: 215GB
Sector size (logical/physical): 512B/4096B
Partition Table: loop
Disk Flags:
Number Start End Size File system Flags
1 0.00B 107GB 107GB ext4
I can't go any further because in my case after typing this command I see:
GNU Parted 3.3
Using /dev/sdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: Msft Virtual Disk (scsi)
Disk /dev/sdc: 550GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos
Disk Flags:
As you can see, there are no partitions, so I can't use resizepart command.
lsblk -o NAME,HCTL,SIZE,MOUNTPOINT | grep -i "sd"
sda 1:0:1:0 16G
└─sda1 16G /mnt
sdb 0:0:0:0 30G
├─sdb1 29.9G /
├─sdb14 4M
└─sdb15 106M /boot/efi
sdc 3:0:0:0 512G
As you can see, there are no partitions, so I can't use the resizepart command.
You need to partition and format the disk sdc, using either the XFS or ext4 file system, before you can proceed to resize/expand the disk partition and file system.
Commands to partition and format the disk with the XFS file system:
sudo parted /dev/sdc --script mklabel gpt mkpart xfspart xfs 0% 100%
sudo mkfs.xfs /dev/sdc1
sudo partprobe /dev/sdc1
Here we are formatting the disk with the XFS file system and using the partprobe utility to make sure the kernel is aware of the new partition and filesystem.
See the reference documentation on formatting the disk; you can also refer to this blog on how to create an ext4 file system partition in Linux.
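For reference, a hedged ext4 equivalent of the commands above (the partition label "ext4part" is just an illustrative name, and the new partition is assumed to appear as /dev/sdc1):
sudo parted /dev/sdc --script mklabel gpt mkpart ext4part ext4 0% 100%
sudo mkfs.ext4 /dev/sdc1
sudo partprobe /dev/sdc1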
We have tested this in our local environment by creating a disk partition on a newly attached disk on a Linux machine running an Ubuntu 20.04 image and initializing the partition with the XFS file system.
When we created the new disk and attached it to the virtual machine, lsblk showed that the disk was not mounted and had no partitions.
After running the disk format and partition commands above, lsblk shows that a new partition sdc1 was created.
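Once the partition exists, a hedged sketch of mounting and checking it (the mount point /datadrive is an assumption, not something from the question):
sudo mkdir -p /datadrive
sudo mount /dev/sdc1 /datadrive
df -h /datadrive    # confirm the new filesystem and its size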

df -h giving fake data?

When I run df -h on my instance I get this data:
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.7G 0 7.7G 0% /dev
tmpfs 7.7G 0 7.7G 0% /dev/shm
tmpfs 7.7G 408K 7.7G 1% /run
tmpfs 7.7G 0 7.7G 0% /sys/fs/cgroup
/dev/nvme0n1p1 32G 24G 8.5G 74% /
tmpfs 1.6G 0 1.6G 0% /run/user/1000
But when I run sudo du -sh / I get:
11G /
So df -h reports 24G used on /, but du -sh on the same directory reports 11G.
I'm trying to free up some space on my instance and can't find the files that are using it.
What am I missing?
Is df -h really giving fake data?
This question comes up quite often. The file system allocates disk blocks in the file system to record its data. This data is referred to as metadata which is not visible to most user-level programs (such as du). Examples of metadata are inodes, disk maps, indirect blocks, and superblocks.
The du command is a user-level program that isn't aware of filesystem metadata, while df looks at the filesystem disk allocation maps and is aware of file system metadata. df obtains true filesystem statistics, whereas du sees only a partial picture.
There are many reasons why the disk space reported as used or available by du and df can differ.
Perhaps the most common is deleted files. Files that have been deleted may still be open by at least one process. The entry for such files is removed from the associated directory, which makes the file inaccessible. Therefore du, which only counts files, does not take them into account and comes up with a smaller value. As long as a process still has the deleted file open, however, the associated blocks are not released in the file system, so df, which works at the kernel level, correctly displays them as occupied. You can find out if this is the case by running the following:
lsof | grep '(deleted)'
The fix for this issue would be to restart the services that still have those deleted files open.
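A hedged illustration of the whole fix (lsof +L1 is an alternative way to list open-but-deleted files; the rsyslog service name is only an example of what lsof might report, not something from the question):
sudo lsof +L1                     # open files with a link count of 0, i.e. deleted, and the processes holding them
sudo systemctl restart rsyslog    # restart whichever service lsof actually points at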
The second most common cause is a partition or drive mounted on top of a directory that already contains data. For example, if you have a directory under / called backup which contains data, and you then mount a new, empty drive on top of that directory as /backup, the space used by the hidden data will still show up in the df output even though the du command shows no files.
To determine if there are any files or directories hidden under an active mount point, you can use a bind mount to remount your / filesystem elsewhere, which will let you inspect underneath the other mount points. Note, this is recommended only for experienced system administrators.
mkdir /tmp/tmpmnt
mount -o bind / /tmp/tmpmnt
du /tmp/tmpmnt
After you have confirmed that this is the issue, the bind mount can be removed by running:
umount /tmp/tmpmnt/
rmdir /tmp/tmpmnt
Another possible cause might be filesystem corruption. If this is suspected, please make sure you have good backups, and at your convenience, please unmount the filesystem and run fsck.
Again, this should be done by experienced system administrators.
You can also check the calculation by running:
strace -e statfs df /
This will give you output similar to:
statfs("/", {f_type=XFS_SB_MAGIC, f_bsize=4096, f_blocks=20968699, f_bfree=17420469,
f_bavail=17420469, f_files=41942464, f_ffree=41509188, f_fsid={val=[64769, 0]},
f_namelen=255, f_frsize=4096, f_flags=ST_VALID|ST_RELATIME}) = 0
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/vda1 83874796 14192920 69681876 17% /
+++ exited with 0 +++
Notice the difference between f_bfree and f_bavail? These are the free blocks in the filesystem versus the free blocks available to an unprivileged user; the Used and Available columns are simply calculated from these fields.
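As a quick check, the df columns above can be reproduced from the statfs fields (f_frsize is 4096 bytes, i.e. 4 KiB per block):
echo $(( 20968699 * 4 ))                # 1K-blocks: f_blocks * 4          = 83874796
echo $(( (20968699 - 17420469) * 4 ))   # Used: (f_blocks - f_bfree) * 4   = 14192920
echo $(( 17420469 * 4 ))                # Available: f_bavail * 4          = 69681876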
Hope this makes things clearer. Let me know if you still have any doubts.

How To Mount A Hard Disk Of File-System Type "devtmpfs"

I'm trying to recover some data from a hard drive extracted from a broken laptop, and I'm having problems mounting the disk on my current system (Linux Mint). The hard disk I'm recovering from ran Debian. Simply put, I'm confused as to how I can mount the hard drive to access the files; it's not as simple as any other mount I've done. The following details the struggles and information I've encountered.
I get the following outputs when trying to mount the hard drive with different file-system types. I should add that the file-system type isn't automatically detected when using auto, and "sdb" is definitely the correct device for the disk (taken from dmesg).
$ mount /dev/sdb /mnt/usb -t ntfs
NTFS signature is missing.
Failed to mount '/dev/sdb': Invalid argument
The device '/dev/sdb' doesn't seem to have a valid NTFS.
Maybe the wrong device is used? Or the whole disk instead of a
partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?
The following returns the same message when all other common file-system types are used:
$ sudo mount /dev/sdb usb -t ext2
mount: wrong fs type, bad option, bad superblock on /dev/sdb,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
The results from these commands led me to believe that there was an issue with the hard disk and its partitions; however, fdisk showed that its partitions do seem to be valid and correct:
$ sudo fdisk /dev/sdb -l
Disk /dev/sdb: 250.1 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0002da94
Device Boot Start End Blocks Id System
/dev/sdb1 * 2048 475920383 237959168 83 Linux
/dev/sdb2 475922430 488396799 6237185 5 Extended
/dev/sdb5 475922432 488396799 6237184 82 Linux swap / Solaris
I then decided to try to verify the file-system type of the hard drive, which seems to be "devtmpfs", based on the following df command:
$ df /dev/sdb -T
Filesystem Type 1K-blocks Used Available Use% Mounted on
udev devtmpfs 1014764 4 1014760 1% /dev
And so finally, I mounted the hard drive using -t devtmpfs, which succeeds, but I'm left with a confusing file system very unlike what I would expect from what was a standard Debian setup.
It contains file folders such as "block","bus","char","disk","dri","mapper"... and files like "sda1","sdb","sdb1","tty","vcs".
I'm totally stumped as to how I should progress, and I'm pretty convinced the hard disk isn't broken and that I'm just mounting it incorrectly. How can I successfully mount the disk so I can access my files? Any help would be greatly appreciated.
OK, you are trying to mount the entire disk instead of an individual partition, which is why you are getting the error. In short, the command you need is:
mount /dev/sdb1 /mnt/usb
The file /dev/sdb references the entire disk as a block device. This includes the partition table at the start, which is why mount can't find a filesystem there. The file /dev/sdb1 references the first partition, which is where your filesystem will be. From the looks of your fdisk output, this is not an NTFS partition, since NTFS is a Windows filesystem and the partition is marked as Linux (most likely you will have ext4 unless you specifically set up something different).
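If you want to confirm the filesystem type before mounting, a hedged check (blkid and lsblk -f are standard utilities; the mount point /mnt/usb follows the question):
sudo blkid /dev/sdb1            # prints the detected filesystem type and UUID of the partition
sudo mkdir -p /mnt/usb
sudo mount /dev/sdb1 /mnt/usb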
To add a quick explanation of devtmpfs: this is a special filesystem containing the device nodes (block files) managed by udev. You can google both for more information, but by now I'm sure you know it's not what you are looking for.

Understanding Linux partitions with Amazon EC2

I am relatively new to Linux. In one of our projects, we use an Amazon EC2 instance for processing some files. We upload the files to S3 after processing. The EC2 instance is booted from an existing AMI.
Recently I got a "no space left on device" error, so the processing of files halted. I cleaned up some older files and the processing continued.
Now when I look at available space using df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 9.9G 5.7G 3.7G 61% /
none 3.7G 0 3.7G 0% /dev/shm
/dev/xvdb 414G 199M 393G 1% /mnt
/dev/xvdc 414G 199M 393G 1% /data
I can see my files are affecting only /dev/xvda1.
I have the following queries:
What is the use of the other partitions when I can see my files affecting only /dev/xvda1?
It looks like we are effectively using only 10 GB of space and the rest is being wasted. How can I use the other space? Can I move some disk space to /dev/xvda1, or directly store files in the other areas?
As you can see from the output of df -h, there are two large partitions mounted on /mnt and /data respectively. I suggest that you use those partitions by processing the files in one of those directories. If you cannot move where the processing happens for some reason, you can remount a partition in the appropriate place.
If for example your files are processed in the directory /var/mydir and you cannot change that, do the following (as root):
umount /mnt
mount /dev/xvdb /var/mydir
You can of course use the other partition as well if you prefer.
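If the remount should survive a reboot, a hedged /etc/fstab sketch (the ext4 type and the nofail option are assumptions; check the actual filesystem with blkid and adjust):
/dev/xvdb  /var/mydir  ext4  defaults,nofail  0  2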
