Yocto wic Creates Unexpected Small Partition

I am using Yocto and its wic tool to build my embedded Linux image.
The wic configuration file looks like this:
part /boot --source bootimg-partition --ondisk mmcblk --fstype=msdos --label boot --align 1024 --fixed-size 64
part / --source rootfs --ondisk mmcblk --fstype=ext4 --label root_a --fixed-size 256 --active
part / --source rootfs --ondisk mmcblk --fstype=ext4 --label root_b --fixed-size 256
part /permanent-storage --ondisk mmcblk --fstype=ext4 --label permanent-storage --fixed-size 300
part swap --ondisk mmcblk --size 64 --label swap --fstype=swap
I burn the resulting image to my SD card and boot successfully, but there is an unexpected small (1K) partition:
root@eval:/dev# ls -lrt /dev/mmcblk0*
brw-rw---- 1 root disk 179, 0 Feb 27 21:55 /dev/mmcblk0
brw-rw---- 1 root disk 179, 4 Feb 27 21:55 /dev/mmcblk0p4
brw-rw---- 1 root disk 179, 2 Feb 27 21:55 /dev/mmcblk0p2
brw-rw---- 1 root disk 179, 3 Feb 27 21:55 /dev/mmcblk0p3
brw-rw---- 1 root disk 179, 5 Feb 27 21:55 /dev/mmcblk0p5
brw-rw---- 1 root disk 179, 1 Feb 27 21:55 /dev/mmcblk0p1
brw-rw---- 1 root disk 179, 6 Feb 27 21:55 /dev/mmcblk0p6
root@eval:/dev# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
mmcblk0 179:0 0 59.6G 0 disk
|-mmcblk0p1 179:1 0 64M 0 part
|-mmcblk0p2 179:2 0 256M 0 part /
|-mmcblk0p3 179:3 0 256M 0 part
|-mmcblk0p4 179:4 0 1K 0 part
|-mmcblk0p5 179:5 0 300M 0 part
`-mmcblk0p6 179:6 0 64M 0 part
Why is wic creating this partition and how can I get rid of it with my wic file? Thanks.

The mmcblk0p4 (1K) partition is an extended partition. When using a master boot record (MBR) to divide storage into more than 4 partitions, one must use 3 primary partitions and 1 extended partition, because MBR supports a maximum of 4 primary partitions. The extended partition may then hold multiple logical partitions.
mmcblk0 <- Entire Storage
|--mmcblk0p1 <- Primary Partition 1
|--mmcblk0p2 <- Primary Partition 2
|--mmcblk0p3 <- Primary Partition 3
|--mmcblk0p4 <- Extended Partition
|--mmcblk0p5 <- Logical Partition 1
|--mmcblk0p6 <- Logical Partition 2
This is not Yocto specific. I use Buildroot and have a similar layout. The commonality is the disk partitioning method, not the Linux distribution.
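If the extended partition is a problem for your use case, one way to avoid it entirely is to switch the image to a GPT partition table, which has no four-primary-partition limit. A minimal sketch of the .wks change, assuming your boot ROM and bootloader can read GPT:
bootloader --ptable gpt
# the part lines stay the same; under GPT there are no extended or logical
# partitions, so no 1K placeholder partition is created
part /boot --source bootimg-partition --ondisk mmcblk --fstype=msdos --label boot --align 1024 --fixed-size 64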
Wikipedia: Disk Partitioning
Serverfault: Primary vs extended partitions on Linux

Related

python3/bash + how to automate creation of disk partitions in Linux

Here is an example from our RHEL server machine:
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 0 20G 0 disk /data/sdb
sdc 8:32 0 20G 0 disk /data/sdc
sdd 8:48 0 20G 0 disk /data/sdd
sde 8:64 0 20G 0 disk /data/sde
sdf 8:80 0 42G 0 disk
sdg 8:96 0 42G 0 disk
sdh 8:112 0 42G 0 disk
We want to create disk partitions on the other disks (sdf, sdg, sdh), but the whole process should be done from a bash script; we want to automate it.
First, here is an example of how to create 2 partitions on the sdf disk; in this example we create two partitions, each 10G in size.
Step 1 (create partitions, each partition taking 10G):
parted /dev/sdf
GNU Parted 3.1
Using /dev/sdf
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel msdos <-- sending text (1)
(parted) mkpart primary 0 10024MB <-- sending text (2)
Warning: The resulting partition is not properly aligned for best performance.
Ignore/Cancel? I <-- sending text (3)
(parted) mkpart primary 10024MB 20048MB <-- sending text (4)
(parted) quit <-- sending text (5)
Information: You may need to update /etc/fstab.
And now we get the expected results:
lsblk
sdf 8:80 0 42G 0 disk
├─sdf1 8:81 0 9.3G 0 part
└─sdf2 8:82 0 9.3G 0 part
Can we automate the parted process, or maybe use another approach (for example fdisk), so that we can use this automated process in a python3/bash script?
Note: we do not have expect on our Linux machines.
reference - https://www.tecmint.com/create-disk-partitions-in-linux/
The parted option --script is what you are looking for.
Pass the parted commands you would otherwise type interactively (simulating the interactive session) on the command line, together with that option, in the following manner:
parted --script ${device} ${parted_commands}
I would do that for only one partition at a time until the command content is verified to be correct and reliable.
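For example, a minimal sketch for the sdf disk from the question (the 50%/50% split is an assumption standing in for the 10G/10G layout above):
#!/bin/bash
# Partition /dev/sdf non-interactively: msdos label, two equal partitions.
# Percentage boundaries let parted choose well-aligned sector positions,
# which also avoids the alignment warning seen in the interactive session.
set -euo pipefail
DISK=/dev/sdf
parted --script "$DISK" mklabel msdos
parted --script "$DISK" mkpart primary 0% 50%
parted --script "$DISK" mkpart primary 50% 100%
parted --script "$DISK" print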
One observation: you may want to modify
mkpart primary 0 10024MB
to show
mkpart primary 0 10080MB
to eliminate the mis-alignment being reported (disk access performance hit from mis-alignment).
The approach is to pick an end position whose size in bytes is a multiple not only of 512 (the logical sector size) but also of 2048 or 4096, depending on what the disk reports as its physical sector size. For example:
4096 * 1024 * (2048 + 256 + 128 + 64 + 32 - 16 + 8) = 4096 * 1024 * 2520 = 10569646080 bytes
10569646080 / (1024 * 1024) = 10080 MB
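To see which physical sector size you should align to, a quick check (sdf is the disk from the question):
cat /sys/block/sdf/queue/physical_block_size   # physical sector size in bytes
blockdev --getpbsz /dev/sdf                    # same value via blockdev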

Mount docker logical volume

I'm trying to access a logical volume that was previously used by Docker. This is the output of various commands:
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:2 0 80G 0 disk
├─nvme0n1p1 259:3 0 80G 0 part /
└─nvme0n1p128 259:4 0 1M 0 part
nvme1n1 259:0 0 80G 0 disk
└─nvme1n1p1 259:1 0 80G 0 part
├─docker-docker--pool_tdata 253:1 0 79G 0 lvm
│ └─docker-docker--pool 253:2 0 79G 0 lvm
└─docker-docker--pool_tmeta 253:0 0 84M 0 lvm
└─docker-docker--pool 253:2 0 79G 0 lvm
fdisk
Disk /dev/nvme1n1: 85.9 GB, 85899345920 bytes, 167772160 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00029c01
Device Boot Start End Blocks Id System
/dev/nvme1n1p1 2048 167772159 83885056 8e Linux LVM
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Disk /dev/nvme0n1: 85.9 GB, 85899345920 bytes, 167772160 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: 358A5F86-3BCA-4FB2-8C00-722B915A71AB
# Start End Size Type Name
1 4096 167772126 80G Linux filesyste Linux
128 2048 4095 1M BIOS boot BIOS Boot Partition
lvdisplay
--- Logical volume ---
LV Name docker-pool
VG Name docker
LV UUID piD2Wx-aDjf-CkpN-b4s4-YXWE-6ERm-GWTcOz
LV Write Access read/write
LV Creation host, time ip-172-31-39-159, 2020-02-16 09:18:57 +0000
LV Pool metadata docker-pool_tmeta
LV Pool data docker-pool_tdata
LV Status available
# open 0
LV Size 79.03 GiB
Allocated pool data 80.07%
Allocated metadata 31.58%
Current LE 20232
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
But when I try to mount the volume docker-docker--pool_tdata I get the following error:
mount /dev/mapper/docker-docker--pool_tdata /mnt/test
mount: /dev/mapper/docker-docker--pool_tdata is already mounted or /mnt/test busy
I've also tried rebooting the machine, uninstalling Docker, and checking with lsof whether any files are open on that volume.
Do you have any clue about how I can mount that volume?
Thanks
Uninstalling Docker does not really help, as purge and autoremove only delete the installed packages and not the images, containers, volumes, and config files.
To delete those you have to delete a bunch of directories contained in etc, var/lib, bin and var/run.
Clean up the environment (see the consolidated sketch after these steps):
try running docker system prune -a to remove unused containers, images, etc.
remove the volume with docker volume rm {volumeID}
create the volume again with docker volume create docker-docker--pool_tdata
Kill the process:
run lsof +D /mnt/test or cat ../docker/../tasks
this should display the PIDs of alive tasks.
Kill the task with kill -9 {PID}
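Putting those steps together, a minimal sketch (the volume name and mount point are taken from the question; note that prune is destructive to unused Docker data):
#!/bin/bash
# Cleanup sequence from the steps above; review before running.
set -euo pipefail
docker system prune -a                          # remove unused containers, images, etc.
docker volume rm docker-docker--pool_tdata      # remove the volume
docker volume create docker-docker--pool_tdata  # create the volume again
lsof +D /mnt/test || true                       # list PIDs still using the mount point
# kill -9 <PID>                                 # then kill any remaining task by hand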

Linux extended disk

A disk on VMware (/dev/sda) was extended from 20G to 40G (it is RHEL5, so I can't use LVM). If I use fdisk /dev/sda I can create /dev/sda7, but the partition only gets 2G. Why does the partition only get 2G, and how do I fix it? Thanks.
I tried:
fdisk /dev/sda and create /dev/sda7
df -Th
...
/dev/sda2 ext3 6.8G 6.0G 478M 93% /
/dev/sda7 ext3 2.0G 36M 1.9G 2% /home
fdisk -l
Disk /dev/sda: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 131 1052226 83 Linux
/dev/sda2 132 1045 7341705 83 Linux
/dev/sda3 1046 1567 4192965 82 Linux swap / Solaris
/dev/sda4 1568 2610 8377897+ 5 Extended
/dev/sda5 1568 2089 4192933+ 83 Linux
/dev/sda6 2090 2350 2096451 83 Linux
/dev/sda7 2351 2610 2088418+ 83 Linux
I also used parted:
Number Start End Size Type File system Flags
1 32.3kB 1078MB 1077MB primary ext3 boot
2 1078MB 8595MB 7518MB primary ext3
3 8595MB 12.9GB 4294MB primary linux-swap
4 12.9GB 21.5GB 8579MB extended
5 12.9GB 17.2GB 4294MB logical ext3
6 17.2GB 19.3GB 2147MB logical ext3
19.3GB 21.5GB 2139MB Free Space
21.5GB 42.9GB 21.5GB Free Space
Warning: You requested a partition from 21.5GB to 42.9GB.
The closest location we can manage is 21.5GB to 21.5GB. Is this still acceptable to you?
Yes/No? no
(parted) mkpart
Partition type? [logical]?
File system type? [ext2]? ext3
Start? 22G
End? 40G
Warning: You requested a partition from 22.0GB to 40.0GB.
The closest location we can manage is 21.5GB to 21.5GB. Is this still acceptable to you?
Yes/No? no
(parted)
The problem is that I can't create a partition larger than 2G; new logical partitions are confined to the free space left inside the extended partition.
type:
parted /dev/sd?
(parted) print free
Number Start End Size Type File system Flags
32.3kB 1049kB 1016kB Free Space
1 1049kB 1075MB 1074MB primary ext3
2 1075MB 2149MB 1074MB primary ext3
3 2149MB 3222MB 1074MB primary ext3
4 3222MB 7443MB 4221MB extended
5 3223MB 4297MB 1074MB logical ext3
6 4298MB 5372MB 1074MB logical ext3
7 5373MB 5897MB 524MB logical ext3
5897MB 7443MB 1546MB Free Space
7443MB 8590MB 1147MB Free Space
(parted) resizepart 4
End? [7443MB]? 8590MB
(parted) print free
Number Start End Size Type File system Flags
32.3kB 1049kB 1016kB Free Space
1 1049kB 1075MB 1074MB primary ext3
2 1075MB 2149MB 1074MB primary ext3
3 2149MB 3222MB 1074MB primary ext3
4 3222MB 8590MB 5368MB extended
5 3223MB 4297MB 1074MB logical ext3
6 4298MB 5372MB 1074MB logical ext3
7 5373MB 5897MB 524MB logical ext3
5897MB 8590MB 2693MB Free Space
(parted) quit
Now the extended partition is resized, and the free space inside it can be used to create larger logical partitions.
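For reference, the same resize can be scripted non-interactively; a sketch assuming a parted version that supports resizepart (older releases, such as the one shipped with RHEL5, may not):
parted --script /dev/sda resizepart 4 100%   # grow extended partition 4 to the end of the disk
parted --script /dev/sda print free          # confirm the new layout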

pvcreate failing to create PV. Device not found /dev/sdxy (or ignored by filtering)

I have an oVirt installation with CentOS Linux release 7.3.1611.
I want to add a new drive (sdb) to the oVirt volume group to work with VMs.
Here is the result of fdisk on the drive:
[root@host1 ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): p
Disk /dev/sdb: 300.1 GB, 300069052416 bytes, 586072368 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x7a33815f
Device Boot Start End Blocks Id System
/dev/sdb1 2048 586072367 293035160 8e Linux LVM
The partitions show up in /proc/partitions:
[root@host1 ~]# cat /proc/partitions
major minor #blocks name
8 0 293036184 sda
8 1 1024 sda1
8 2 1048576 sda2
8 3 53481472 sda3
8 4 1 sda4
8 5 23072768 sda5
8 6 215429120 sda6
8 16 293036184 sdb
8 17 293035160 sdb1
When I execute pvcreate /dev/sdb1 to create the PV, the result is:
[root@host1 ~]# pvcreate /dev/sdb1
Device /dev/sdb1 not found (or ignored by filtering).
I have reviewed /etc/lvm/lvm.conf for filters, but I do not have any filter that would make LVM discard the drive. I have rebooted the computer after running pvcreate. I researched the error on Google but have no idea.
Thanks. Any help would be appreciated. Manuel
Try editing lvm.conf: uncomment global_filter and set it like this:
global_filter = [ "a|/dev/sdb|" ]
After that, check whether multipath has claimed the device:
[root@ovirtnode2 ~]# lsblk /dev/sdb
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 0 200G 0 disk
└─3678da6e715b018f01f1abdb887594aae 253:2 0 200G 0 mpath
Then edit the multipath configuration:
vi /etc/multipath.conf
and append the following to multipath.conf:
blacklist {
wwid 3678da6e715b018f01f1abdb887594aae
}
service multipathd restart
It worked for me; I had the same problem when trying this on oVirt:
[root@ovirtnode2 ~]# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created.
[root@ovirtnode2 ~]#
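Once pvcreate succeeds, the new PV still has to be added to the oVirt volume group; a sketch, where the volume group name is a placeholder (list the real name with vgs):
vgs                               # find the existing volume group name
vgextend my_ovirt_vg /dev/sdb1    # my_ovirt_vg is an assumed placeholder
pvs                               # verify the PV now belongs to the VG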

missing partition in server centos 6.1

I used the command df -h on my CentOS 6.1.
here's the output
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
50G 2.3G 45G 5% /
tmpfs 5.9G 0 5.9G 0% /dev/shm
/dev/sda1 485M 35M 425M 8% /boot
/dev/mapper/VolGroup-lv_home
2.0T 199M 1.9T 1% /home
I found out that the hard disk is two terabytes. But when I used the command cat /proc/partitions | more,
here's the output
[root@localhost sysconfig]# cat /proc/partitions | more
major minor #blocks name
8 0 4293656576 sda
8 1 512000 sda1
8 2 2146970624 sda2
253 0 52428800 dm-0
253 1 14417920 dm-1
253 2 2080120832 dm-2
You can see on the first line that it is 4396.7 GB. Why is it that I can only see 2TB? How can I find the missing 2TB and make it a partition?
I also used the command lsblk;
here is the output
[root@localhost ~]# lblsk
-bash: lblsk: command not found
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO MOUNTPOINT
sda 8:0 0 4T 0
├─sda1 8:1 0 500M 0 /boot
└─sda2 8:2 0 2T 0
  ├─VolGroup-lv_root (dm-0) 253:0 0 50G 0 /
  ├─VolGroup-lv_swap (dm-1) 253:1 0 13.8G 0 [SWAP]
  └─VolGroup-lv_home (dm-2) 253:2 0 2T 0 /home
sr0 11:0 1 1024M 0
Using parted /dev/sda, I typed the print free command.
here's the output
(parted) print free
Model: DELL PERC 6/i (scsi)
Disk /dev/sda: 4397GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
32.3kB 1049kB 1016kB Free Space
1 1049kB 525MB 524MB primary ext4 boot
2 525MB 2199GB 2198GB primary lvm
2199GB 4397GB 2198GB Free Space
I was wrong, sorry. As you can see in the parted print free output, you have 2 MBR partitions (boot and lvm) and 2198GB of free space (last row).
If you want to use all of your space, you have to use GPT partitioning. As opposed to an MBR partition table, which can only address up to 2TB, GPT can address your whole disk, up to 8 ZiB (zebibytes).
You can try to convert the MBR partition table to GPT (example 1, example 2), though I strongly recommend backing up your data first.
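As a sketch of one common conversion route (my assumption, not necessarily what the linked examples do): gdisk converts an MBR table to GPT in memory when it opens the disk and writes nothing until you confirm. Note that a BIOS-booted system may additionally need a small BIOS boot partition for the bootloader after converting:
gdisk /dev/sda
# inside gdisk: p to review the converted layout, w to write the GPT table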
You are using tools that show info from different layers of your system and are interpreting it wrongly.
df, according to its man page, will display the space available on all currently mounted file systems.
/proc/partitions holds info about the partitions on your drive, the physical device. This file shows you the size of your drive as a number of blocks. Usually, on an HDD, the block size is the size of a sector: 512 bytes.
So, the sda size of 4293656576 is a size in blocks, not kilobytes:
4293656576 blocks = (4293656576 / 2) kilobytes = 2146828288 KiB = 2047.375 GiB, or 2198.352 GB
(assuming 1 GiB = 2^30 bytes and 1 GB = 10^9 bytes).
If you want to see the size of your disk, use fdisk -l <device name>.
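To cross-check the kernel's idea of the raw disk size, a quick sketch (device name from the question):
blockdev --getsize64 /dev/sda   # disk size in bytes
cat /sys/block/sda/size         # disk size in 512-byte sectors
fdisk -l /dev/sda               # partition table plus size summary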
