Get physical location of a file and write contents directly (Linux)

I want to get the physical on-disk location of the file /root/f.txt and overwrite some of its contents directly.
lsblk command output:
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
fd0 2:0 1 4K 0 disk
sda 8:0 0 16G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 15G 0 part
├─rhel-root 253:0 0 13.4G 0 lvm /
└─rhel-swap 253:1 0 1.6G 0 lvm [SWAP]
sdb 8:16 0 1G 0 disk
sr0 11:0 1 1024M 0 rom
Contents of file:
# cat /root/f.txt
This is new file ha ha ha
From the filefrag command I get the physical location of the file:
# filefrag -v /root/f.txt
Filesystem type is: 58465342
File size of /root/f.txt is 26 (1 block of 4096 bytes)
ext: logical_offset: physical_offset: length: expected: flags:
0: 0.. 0: 1761827.. 1761827: 1: eof
/root/f.txt: 1 extent found
Here the physical block starts at 1761827 and one block is 4096 bytes. So the physical byte offset of the file should be: 1761827 * 4096 = 7216443392
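For reference, the same arithmetic can be checked in the shell (the block number and block size are the ones reported by filefrag above):
# byte offset = physical block number * filesystem block size
echo $((1761827 * 4096))    # prints 7216443392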
I have only /dev/sda, and I am trying to write at offset 7216443392 with dd as follows:
# sudo dd seek=7216443392 if=/dev/zero of=/dev/sda count=1 obs=1
1+0 records in
512+0 records out
512 bytes (512 B) copied, 0.00699863 s, 73.2 kB/s
But when I check the contents of /root/f.txt, the output is still the same:
# cat /root/f.txt
This is new file ha ha ha
So either the physical location is not correct, or I am doing something wrong with dd. Please suggest.

The initial cat pulls the file into the page cache. Then you write directly to the block device using dd. At this point the kernel has no reason to believe the cached page is inconsistent with the disk, so the new contents you wrote to the block device are not reflected when you cat the file after dd.
To see the new data written using dd, sync(1) and drop the page cache before running dd:
sync
sudo sh -c 'echo 1 > /proc/sys/vm/drop_caches'
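Putting the whole sequence together, a minimal sketch, assuming the filefrag offset is relative to the device being written (note that on LVM, filefrag's physical_offset is relative to the logical volume holding the filesystem, here /dev/mapper/rhel-root, not to the whole disk):
sync                                             # flush dirty pages to disk
sudo sh -c 'echo 1 > /proc/sys/vm/drop_caches'   # drop clean page-cache pages
# seek= counts in output-block-size units, so bs=1 makes it a byte offset;
# conv=fsync forces the write out so the next drop_caches cannot discard it
sudo dd if=/dev/zero of=/dev/mapper/rhel-root bs=1 seek=7216443392 count=4 conv=fsync
sudo sh -c 'echo 1 > /proc/sys/vm/drop_caches'   # force cat to re-read from disk
cat /root/f.txt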

Related

How do I assign a new disk/LV to the root folder "/", which is already mounted from the sdb2 partition?

[root@my-linux-vm ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
fd0 2:0 1 4K 0 disk
sda 8:0 0 16G 0 disk
└─sda1 8:1 0 16G 0 part
sdb 8:16 0 10G 0 disk
├─sdb1 8:17 0 2M 0 part
└─sdb2 8:18 0 10G 0 part /
sdc 8:32 0 12G 0 disk
└─sdc1 8:33 0 12G 0 part
└─vg_new_root-lv0 252:0 0 11G 0 lvm
sr0 11:0 1 1024M 0 rom
Given the above partition/disk situation, can I mount the vg_new_root-lv0 LV onto the root ("/") folder in order to extend the root capacity beyond the sdb2 space?
The short answer is no, based on your current configuration.
Because the / root filesystem is not part of LVM, there is no easy way to expand its capacity.
My suggestion would be to run a disk-usage script to confirm which directory or service is using a significant amount of disk space, and then (if possible) move that data onto the new sdc1 drive / vg_new_root-lv0 logical volume. The LV needs to be formatted and mounted to be ready for use (e.g. at /mnt/data). Once it is mounted, stop your application, move the data to the new filesystem, and after you confirm that the data has been moved, start your application again, test, and then remove the data from the original location under the sdb2 disk / root filesystem to free up space. A sketch of these steps follows the one-liner below.
Run the one-liner below to get a disk usage report and confirm what you can remove, compress, or move.
echo -n "Type Filesystem: ";read FS;NUMRESULTS=20;resize;clear;date;df -h $FS;echo "Largest Directories:"; du -x $FS 2>/dev/null| sort -rnk1| head -n $NUMRESULTS| awk '{printf "%d MB %s\n", $1/1024,$2}';echo "Largest Files:"; nice -n 19 find $FS -mount -type f -ls 2>/dev/null| sort -rnk7| head -n $NUMRESULTS|awk '{printf "%d MB\t%s\n", ($7/1024)/1024,$NF}'
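As mentioned above, a minimal sketch of preparing the LV and relocating data (the ext4 filesystem, the /mnt/data mount point, and the myapp service and data path are illustrative assumptions):
# format and mount the new logical volume (pick your preferred filesystem)
mkfs.ext4 /dev/vg_new_root/lv0
mkdir -p /mnt/data
mount /dev/vg_new_root/lv0 /mnt/data
# stop the application, copy its data over, then repoint it at the new path
systemctl stop myapp                    # hypothetical service name
rsync -a /var/lib/myapp/ /mnt/data/     # hypothetical data directory
systemctl start myapp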

Mount docker logical volume

I'm trying to access a logical volume that was previously used by Docker. This is the output of various commands:
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:2 0 80G 0 disk
├─nvme0n1p1 259:3 0 80G 0 part /
└─nvme0n1p128 259:4 0 1M 0 part
nvme1n1 259:0 0 80G 0 disk
└─nvme1n1p1 259:1 0 80G 0 part
├─docker-docker--pool_tdata 253:1 0 79G 0 lvm
│ └─docker-docker--pool 253:2 0 79G 0 lvm
└─docker-docker--pool_tmeta 253:0 0 84M 0 lvm
└─docker-docker--pool 253:2 0 79G 0 lvm
fdisk
Disk /dev/nvme1n1: 85.9 GB, 85899345920 bytes, 167772160 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00029c01
Device Boot Start End Blocks Id System
/dev/nvme1n1p1 2048 167772159 83885056 8e Linux LVM
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Disk /dev/nvme0n1: 85.9 GB, 85899345920 bytes, 167772160 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: 358A5F86-3BCA-4FB2-8C00-722B915A71AB
# Start End Size Type Name
1 4096 167772126 80G Linux filesyste Linux
128 2048 4095 1M BIOS boot BIOS Boot Partition
lvdisplay
--- Logical volume ---
LV Name docker-pool
VG Name docker
LV UUID piD2Wx-aDjf-CkpN-b4s4-YXWE-6ERm-GWTcOz
LV Write Access read/write
LV Creation host, time ip-172-31-39-159, 2020-02-16 09:18:57 +0000
LV Pool metadata docker-pool_tmeta
LV Pool data docker-pool_tdata
LV Status available
# open 0
LV Size 79.03 GiB
Allocated pool data 80.07%
Allocated metadata 31.58%
Current LE 20232
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
But when I try to mount the volume docker-docker--pool_tdata, I get the following error:
mount /dev/mapper/docker-docker--pool_tdata /mnt/test
mount: /dev/mapper/docker-docker--pool_tdata is already mounted or /mnt/test busy
I've also tried rebooting the machine, uninstalling Docker, and checking with lsof whether any files are open on that volume.
Do you have any clue about how I can mount that volume?
Thanks
Uninstalling Docker does not really help, as purge and autoremove only delete the installed packages, not the images, containers, volumes and config files.
To delete those you have to delete a bunch of directories contained in etc, var/lib, bin and var/run.
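As a rough illustration of that cleanup (these are the usual Docker state directories; treat the exact paths as assumptions and double-check each one before deleting anything):
# common Docker state locations; verify each path exists and is safe to remove
sudo rm -rf /var/lib/docker /etc/docker /var/run/docker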
Clean up the env:
Try running docker system prune -a to remove unused containers, images, etc.
Remove the volume with docker volume rm {volumeID}
Create the volume again with docker volume create docker-docker--pool_tdata
Kill the process:
Run lsof +D /mnt/test or cat ../docker/../tasks
This should display the PIDs of alive tasks.
Kill the task with kill -9 {PID}

Linux read external disk data, cannot mount

I just received a hard disk from someone else, and it includes some data. I want to read the data on this disk. However, when I try to mount it, I get:
~ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 232.9G 0 disk
├─sda1 8:1 0 217G 0 part /
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 16G 0 part [SWAP]
sdc 8:32 0 74.6G 0 disk
sr0 11:0 1 1024M 0 rom
~ sudo mount /dev/sdc /media/new
mount: /dev/sdc: can't read superblock
~ udisksctl mount -b /dev/sdc
Object /org/freedesktop/UDisks2/block_devices/sdc is not a mountable filesystem.
So what can I do to read the data on this disk? I am not sure whether I can use a command like mkfs.ext4 /dev/sdc, which seems like it would initialize the disk and erase the data.
Have you tried using gparted to get some info?
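In the same spirit, a few read-only ways to inspect the device before changing anything (none of these write to the disk; mkfs.ext4 would indeed create a new filesystem and destroy the existing data):
sudo parted /dev/sdc print    # show the partition table, if any
sudo blkid /dev/sdc           # probe for a filesystem signature
sudo file -s /dev/sdc         # identify the data at the start of the device
lsblk -f /dev/sdc             # list filesystems the kernel can see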

pvcreate failing to create PV. Device not found /dev/sdxy (or ignored by filtering)

I have an oVirt installation with CentOS Linux release 7.3.1611.
I want to add a new drive (sdb) to the oVirt volume group to work with VMs.
Here is the result of fdisk on the drive:
[root@host1 ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): p
Disk /dev/sdb: 300.1 GB, 300069052416 bytes, 586072368 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x7a33815f
Device Boot Start End Blocks Id System
/dev/sdb1 2048 586072367 293035160 8e Linux LVM
The partitions show up in /proc/partitions:
[root@host1 ~]# cat /proc/partitions
major minor #blocks name
8 0 293036184 sda
8 1 1024 sda1
8 2 1048576 sda2
8 3 53481472 sda3
8 4 1 sda4
8 5 23072768 sda5
8 6 215429120 sda6
8 16 293036184 sdb
8 17 293035160 sdb1
When I execute pvcreate /dev/sdb1 to create the PV, the result is:
[root@host1 ~]# pvcreate /dev/sdb1
Device /dev/sdb1 not found (or ignored by filtering).
I have reviewed the filters in /etc/lvm/lvm.conf, but I do not have any filter that would make LVM discard the drive. I have rebooted the computer after trying to create the PV with pvcreate. I researched the error on Google but found nothing.
Thanks. Any help would be appreciated. Manuel
Try to edit lvm.conf: uncomment global_filter and set it like this:
global_filter = [ "a|/dev/sdb|" ]
After that, check multipath. In my case lsblk showed the disk was claimed by a multipath device:
[root@ovirtnode2 ~]# lsblk /dev/sdb
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 0 200G 0 disk
└─3678da6e715b018f01f1abdb887594aae 253:2 0 200G 0 mpath
Edit the multipath configuration:
vi /etc/multipath.conf
and append the following to multipath.conf:
blacklist {
    wwid 3678da6e715b018f01f1abdb887594aae
}
service multipathd restart
It worked for me; I had the same problem when trying this on oVirt. After that:
[root@ovirtnode2 ~]# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created.
[root@ovirtnode2 ~]#
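Since the original goal was to add the drive to the oVirt volume group, a minimal sketch of the remaining step once pvcreate succeeds (the VG name ovirt_vg is a placeholder; find the real one with vgs):
pvs                          # confirm the new physical volume is visible
vgs                          # look up the name of the target volume group
vgextend ovirt_vg /dev/sdb   # add the new PV to that volume group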

Missing partition in CentOS 6.1 server

I used the command df -h on my CentOS 6.1.
here's the output
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
50G 2.3G 45G 5% /
tmpfs 5.9G 0 5.9G 0% /dev/shm
/dev/sda1 485M 35M 425M 8% /boot
/dev/mapper/VolGroup-lv_home
2.0T 199M 1.9T 1% /home
I found out that the hard disk is two terabytes. But when I used the command cat /proc/partitions | more,
here's the output
[root@localhost sysconfig]# cat /proc/partitions | more
major minor #blocks name
8 0 4293656576 sda
8 1 512000 sda1
8 2 2146970624 sda2
253 0 52428800 dm-0
253 1 14417920 dm-1
253 2 2080120832 dm-2
As you can see on the first line, it is 4396.7 GB. Why is it I can only see 2 TB? How can I find my missing 2 TB and make it a partition?
I also used the command lsblk;
here is the output:
[root@localhost ~]# lblsk
-bash: lblsk: command not found
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO MOUNTPOINT
sda 8:0 0 4T 0
├─sda1 8:1 0 500M 0 /boot
└─sda2 8:2 0 2T 0
  ├─VolGroup-lv_root (dm-0) 253:0 0 50G 0 /
  ├─VolGroup-lv_swap (dm-1) 253:1 0 13.8G 0 [SWAP]
  └─VolGroup-lv_home (dm-2) 253:2 0 2T 0 /home
sr0 11:0 1 1024M 0
Using parted /dev/sda, I typed the print free command;
here's the output
(parted) print free
Model: DELL PERC 6/i (scsi)
Disk /dev/sda: 4397GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
32.3kB 1049kB 1016kB Free Space
1 1049kB 525MB 524MB primary ext4 boot
2 525MB 2199GB 2198GB primary lvm
2199GB 4397GB 2198GB Free Space
I was wrong, sorry. As you can see in the parted print free output, you have 2 MBR partitions (boot and LVM) and 2198 GB of free space (the last row).
If you want to use all of your space, you have to use a GPT partition table. Unlike MBR partitions, which can only address up to 2 TiB (with 512-byte sectors), GPT partitions can address your whole disk, up to 8 ZiB (zebibytes).
You can try to convert the MBR partition table to GPT (example 1, example 2), though I strongly recommend backing up your data first.
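A hedged sketch of such a conversion with gdisk's sgdisk tool (this rewrites the partition table in place; a BIOS-booted system will also need a bios_grub partition for GRUB afterwards, so treat this as an outline, not a recipe, and back up first):
sgdisk --mbrtogpt /dev/sda    # rewrite the existing MBR partition table as GPT
partprobe /dev/sda            # ask the kernel to re-read the partition table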
You are using tools that show info from different layers of your system, which is easy to misinterpret.
df, according to its man page, displays the space available on all currently mounted file systems - it says nothing about unpartitioned space.
/proc/partitions holds info about the partitions on your drive - the physical device. The #blocks column in this file is in 1024-byte (1 KiB) blocks, not in 512-byte sectors.
So, sda's size of 4293656576 blocks is 4293656576 KiB = 4094.75 GiB, or about 4396.7 GB, which matches the parted output: the disk really is about 4.4 TB, but only 2198 GB of it is partitioned.
Assuming 1 GiB = 2^30 bytes and 1 GB = 10^9 bytes.
If you want to see the size of your disk, use fdisk -l <device name>.
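As a worked check of that arithmetic in the shell (the block count is the sda figure from /proc/partitions above):
# /proc/partitions reports 1 KiB blocks; convert to GiB and to GB
echo "scale=2; 4293656576 / 1024 / 1024" | bc        # 4094.75 GiB
echo "scale=1; 4293656576 * 1024 / 1000000000" | bc  # 4396.7 GB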
