How to Resize Root File System Partition Debian Jessie BeagleBone Black - linux

I started a new Debian install on my BeagleBone Black Version C using a 16 GB microSD card. The image I wrote with Win32DiskImager created only one Linux partition, 3.3 GB in size, holding the boot files, the root file system, and everything else. The rest of the microSD card is unused. I want to resize this partition to take advantage of the unused space.
All of the resize tutorials I have read deal with a system that has two partitions, but again, my install has only one. I don't know why; I didn't delete any partitions, which is why I didn't notice until I decided to check how much room I had left.
Is it possible to resize mmcblk0p1 to make use of the entire microSD card, or should I break it up like a traditional install?
Here is my lsblk output:
root@beaglebone:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
mmcblk1boot0 179:16 0 1M 1 disk
mmcblk1boot1 179:24 0 1M 1 disk
mmcblk0 179:0 0 14.7G 0 disk
`-mmcblk0p1 179:1 0 3.3G 0 part /
mmcblk1 179:8 0 3.7G 0 disk
|-mmcblk1p1 179:9 0 96M 0 part
`-mmcblk1p2 179:10 0 3.6G 0 part
Here is my fdisk -l output:
root@beaglebone:~# fdisk -l
Disk /dev/mmcblk0: 14.7 GiB, 15811477504 bytes, 30881792 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xa2911fde
Device Boot Start End Sectors Size Id Type
/dev/mmcblk0p1 * 2048 6963199 6961152 3.3G 83 Linux
Disk /dev/mmcblk1: 3.7 GiB, 3925868544 bytes, 7667712 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000
Device Boot Start End Sectors Size Id Type
/dev/mmcblk1p1 * 2048 198655 196608 96M e W95 FAT16 (LBA)
/dev/mmcblk1p2 198656 7667711 7469056 3.6G 83 Linux
Disk /dev/mmcblk1boot1: 1 MiB, 1048576 bytes, 2048 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mmcblk1boot0: 1 MiB, 1048576 bytes, 2048 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

nsilent22's suggestion did the trick. I did not need to worry about losing data when I deleted the partition and then created a new one using fdisk. Thank you nsilent22!!
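For anyone finding this later, the procedure that worked can be sketched roughly as follows. This is a sketch only, based on the fdisk output above: it assumes an ext4 root on /dev/mmcblk0p1 starting at sector 2048, the recreated partition must start at exactly the same sector, and a backup beforehand is still wise.

```shell
# Grow the single root partition in place: delete it, recreate it spanning
# the whole card with the SAME start sector, then grow the file system.
fdisk /dev/mmcblk0
#   d          delete partition 1
#   n, p, 1    new primary partition 1
#   2048       first sector -- must match the old start exactly
#   <Enter>    last sector  -- accept the default, i.e. end of the card
#   a          re-enable the boot flag on partition 1
#   w          write the table and exit
partprobe /dev/mmcblk0      # or reboot, so the kernel rereads the table
resize2fs /dev/mmcblk0p1    # grow ext4 to fill the enlarged partition
```

Deleting and recreating a partition does not touch the data blocks; as long as the start sector is unchanged, resize2fs only grows the file system into the new space.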

Related

Restore backup from 'dd' (possibly problem with softRAID HDDs)

I have two servers running Ubuntu 18.04. The first is my main server, which I want to back up. The second is a test server (KS-4 Server, Atom N2800, 4 GB DDR3 1066 MHz, SoftRAID 2x 2 TB SATA) where I want to test my backup.
I made the backup with the dd command and then downloaded it (with wget) to server 2 (490 GB, about 24 hours of downloading).
Now I want to test my backup, so I tried:
dd if=sdadisk.img of=/dev/sdb
I get:
193536+0 records in
193536+0 records out
99090432 bytes (99 MB, 94 MiB) copied, 5.06239 s, 19.6 MB/s
But nothing changed.
fdisk -l
Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x8efed6c9
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 4096 1050623 1046528 511M fd Linux raid autodetect
/dev/sda2 1050624 3905974271 3904923648 1.8T fd Linux raid autodetect
/dev/sda3 3905974272 3907020799 1046528 511M 82 Linux swap / Solaris
GPT PMBR size mismatch (879097967 != 3907029167) will be corrected by w(rite).
Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: B3B13223-6382-4EB6-84C3-8E66C917D396
Device Start End Sectors Size Type
/dev/sdb1 2048 1048575 1046528 511M EFI System
/dev/sdb2 1048576 2095103 1046528 511M Linux RAID
/dev/sdb3 2095104 878039039 875943936 417.7G Linux RAID
/dev/sdb4 878039040 879085567 1046528 511M Linux swap
Disk /dev/md1: 511 MiB, 535756800 bytes, 1046400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/md2: 1.8 TiB, 1999320842240 bytes, 3904923520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
lsblk -l
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 1.8T 0 disk
├─sda1 8:1 0 511M 0 part
│ └─md1 9:1 0 511M 0 raid1 /boot
├─sda2 8:2 0 1.8T 0 part
│ └─md2 9:2 0 1.8T 0 raid1 /
└─sda3 8:3 0 511M 0 part [SWAP]
sdb 8:16 0 1.8T 0 disk
├─sdb1 8:17 0 511M 0 part
├─sdb2 8:18 0 511M 0 part
├─sdb3 8:19 0 417.7G 0 part
└─sdb4 8:20 0 511M 0 part
I think the problem is with the configuration of the disks on server 2, specifically with the 'Linux raid' setup between them. I have searched for how to change it and tested commands like mdadm, but it's not working as I expected. So I have these questions:
How can I change the 'Linux raid' setup from 2 HDDs to 1 HDD, keeping the current system and leaving the other HDD clear, so that I can test my backup properly?
Is it generally possible to restore a 490 GB backup onto a 1.8 TB disk?
Did I select the best option for a full Linux backup?
To touch on question #3, "Did I select the best option for a full Linux backup?":
You are using the correct tool, but not the correct items to run the dd command on.
It appears you created a .img file of your sda drive (which is normally only done to create a bootable disk, not a full backup of the drive).
If you want to create a full backup, for example of /dev/sda you would run the following:
dd if=/dev/sda of=/dev/sdb
But in your case the disk you imaged is not the same size as the /dev/sdb drive you are restoring to (hence the "GPT PMBR size mismatch" warning), so you might also want to consider backing up important files using tar and compressing the backup using either gzip or bzip2.
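As a safe way to get a feel for the image-and-restore round trip, here is a file-backed miniature (no real devices are touched; the /tmp paths are arbitrary):

```shell
# Create a 4 MiB file-backed "disk", image it with dd piped through gzip,
# then restore it and verify the copy is bit-identical.
dd if=/dev/urandom of=/tmp/src.img bs=1M count=4 status=none
dd if=/tmp/src.img bs=1M status=none | gzip -c > /tmp/src.img.gz
gunzip -c /tmp/src.img.gz | dd of=/tmp/restored.img bs=1M status=none
cmp /tmp/src.img /tmp/restored.img && echo "restore is bit-identical"
```

The same shape works for real devices (dd if=/dev/sda | gzip -c > disk.img.gz), with the usual caveat that the target of a restore must be at least as large as the imaged source.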

I don't see all Azure resources on my Azure portal

I have a Linux VM in my Azure account. I am using the fdisk -l command to list all the drives on my VM as follows:
[root@thermo-breast-cancer-devvm Python-3.7.8]# fdisk -l
Disk /dev/sda: 64 GiB, 68719476736 bytes, 134217728 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 4504F03A-B9F1-4B8B-88BD-4EDC60947270
Device Start End Sectors Size Type
/dev/sda1 1026048 2050047 1024000 500M Linux filesystem
/dev/sda2 2050048 134215679 132165632 63G Linux LVM
/dev/sda14 2048 10239 8192 4M BIOS boot
/dev/sda15 10240 1024000 1013761 495M EFI System
Partition table entries are not in disk order.
Disk /dev/sdb: 16 GiB, 17179869184 bytes, 33554432 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x6d61cffb
Device Boot Start End Sectors Size Id Type
/dev/sdb1 128 33552383 33552256 16G 7 HPFS/NTFS/exFAT
Disk /dev/mapper/rootvg-tmplv: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/mapper/rootvg-usrlv: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/mapper/rootvg-homelv: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/mapper/rootvg-varlv: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/mapper/rootvg-rootlv: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Actually, in addition to sda, which is 64 GiB, it is showing me several additional drives that I cannot see on my Azure portal. When I list all my resources on the portal, it shows just one disk, 64 GB in size. Why doesn't the portal list all the other disks shown by the fdisk command?
First of all, this is the wrong pane for checking the disks attached to a VM; you can have other disks under the resource group as well. You should check under the Disks section of the virtual machine.
Now coming back to the point. An Azure virtual machine has a temporary disk: the NTFS-formatted /dev/sdb is that temp disk, and it isn't visible on the portal. You are also using LVM, which is why multiple "disks" (the /dev/mapper/rootvg-* logical volumes) appear in that output even though you have only one OS disk.
sda14 is the BIOS boot partition and sda15 the EFI system partition, both hidden by default (as in Windows); sda1 is your boot partition, and sda2 is the LVM physical volume the logical volumes are carved from. The article below talks about LVM in detail:
https://opensource.com/business/16/9/linux-users-guide-lvm
Hope this helps.
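To see how those extra "disks" map back to a single physical disk, the LVM reporting commands (from the lvm2 package, run as root; read-only) lay out the hierarchy. The names below match the rootvg output shown above:

```shell
pvs              # physical volumes: /dev/sda2 should back the volume group
vgs rootvg       # the volume group, with its total and free space
lvs rootvg       # logical volumes: tmplv, usrlv, homelv, varlv, rootlv
lsblk /dev/sda   # tree view of sda's partitions and the LVs on sda2
```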

setting 4k block size for NVMe in QEMU

I want to use a 4k block size for an emulated NVMe device in QEMU, so I ended up with a script like this:
#!/bin/bash
QEMU_EXE="path/to/bin/qemu-system-x86_64"
SYSTEM_IMG="path/to/ubuntu1604.qcow2"
NVME_IMG="./nvme_8G.img"
$QEMU_EXE -m 2G \
-machine q35 \
-hda ${SYSTEM_IMG} \
-drive file=${NVME_IMG},format=raw,if=none,id=drv0 \
-device nvme,drive=drv0,serial=foo,opt_io_size=4096,min_io_size=4096,logical_block_size=4096,physical_block_size=4096 \
-smp 4 \
-enable-kvm \
-net nic \
-net user,hostfwd=tcp::2222-:22
However, when I enter the guest machine, the block/IO sizes from the "-device nvme ..." options seem not to have any effect:
lifeng@node0:~$ sudo fdisk -l
[sudo] password for lifeng:
Disk /dev/nvme0n1: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sda: 30 GiB, 32212254720 bytes, 62914560 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xbbb77ab7
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 58722303 58720256 28G 83 Linux
/dev/sda2 58724350 62912511 4188162 2G 5 Extended
/dev/sda5 58724352 62912511 4188160 2G 82 Linux swap / Solaris
I am using QEMU 2.11, compiled from source (./configure with only PREFIX changed). These are the nvme device properties it reports:
lifeng@1wk300:~/software/qemu-2.11.1$ qemu-system-x86_64 -device nvme,help
nvme.serial=str
nvme.rombar=uint32
nvme.logical_block_size=uint16 (A power of two between 512 and 32768)
nvme.discard_granularity=uint32
nvme.drive=str (ID of a drive to use as a backend)
nvme.bootindex=int32
nvme.multifunction=bool (on/off)
nvme.opt_io_size=uint32
nvme.min_io_size=uint16
nvme.romfile=str
nvme.command_serr_enable=bool (on/off)
nvme.addr=int32 (Slot and optional function number, example: 06.0 or 06)
nvme.physical_block_size=uint16 (A power of two between 512 and 32768)
Any help or comments are appreciated!
I found that qemu-nvme can have a 4k block and sector size for NVMe.
After the installation, see more details using:
qemu/install/dir/bin/qemu-system-x86_64 -device nvme,help
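Inside the guest, one quick way to check whether the advertised sizes actually took effect is to read the block queue attributes from sysfs (assuming the device appears as nvme0n1, as in the fdisk output above):

```shell
cat /sys/block/nvme0n1/queue/logical_block_size    # 4096 if it took effect
cat /sys/block/nvme0n1/queue/physical_block_size
cat /sys/block/nvme0n1/queue/minimum_io_size
cat /sys/block/nvme0n1/queue/optimal_io_size
```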

Using SD card as external storage for Beaglebone Black

After following instructions found here:
http://elinux.org/Beagleboard:MicroSD_As_Extra_Storage
and here:
http://electronicsembedded.blogspot.com/2014/10/beaglebone-black-using-sd-card-as-extra.html?showComment=1434418179676#c2761158033046523777
I am still having trouble. I used the code it says to use and followed the instructions, but I get 3 solid LEDs on the board at boot with the SD card inserted, and Windows 7 doesn't detect the board at all.
The board works fine without the SD card inserted: I can boot up, log in via SSH, and the board is detected by Windows.
The code for my uEnv.txt is as follows:
mmcdev=1
bootpart=1:2
mmcroot=/dev/mmcblk1p2 ro
optargs=quiet
and I also added to the fstab file:
/dev/mmcblk0p1 /media/card auto auto,rw,async,user,nofail 0 0
Here are some results from checking the file system; my drive is called 'BBB_Ext'. This is after booting without the SD card inserted and then inserting it after bootup:
root@beaglebone:~# df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 3.5G 1.8G 1.5G 55% /
/dev/root 3.5G 1.8G 1.5G 55% /
devtmpfs 250M 0 250M 0% /dev
tmpfs 250M 4.0K 250M 1% /dev/shm
tmpfs 250M 248K 250M 1% /run
tmpfs 250M 0 250M 0% /sys/fs/cgroup
tmpfs 250M 4.0K 250M 1% /tmp
/dev/mmcblk0p1 70M 54M 16M 78% /media/card
/dev/mmcblk1p1 15G 16K 15G 1% /media/BBB_Ext_
Here are more details from fdisk:
root@beaglebone:~# fdisk -l
Disk /dev/mmcblk0: 3867 MB, 3867148288 bytes, 7553024 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/mmcblk0p1 * 63 144584 72261 c W95 FAT32 (LBA)
/dev/mmcblk0p2 144585 7550549 3702982+ 83 Linux
Disk /dev/mmcblk0boot1: 2 MB, 2097152 bytes, 4096 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mmcblk0boot0: 2 MB, 2097152 bytes, 4096 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mmcblk1: 15.9 GB, 15931539456 bytes, 31116288 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/mmcblk1p1 2048 31115263 15556608 c W95 FAT32 (LBA)
Any help would be appreciated. I figured that since this is Linux-related the question is applicable to SO; if it's better off on SE, please let me know.
From what I got, uEnv.txt should be as follows (saved on your SD card):
mmcdev=1
bootpart=1:2
mmcroot=/dev/mmcblk1p2 ro
optargs=quiet
And add below line in /etc/fstab
/dev/mmcblk0p1 /media/data auto rw 0 0
OK, I believe I resolved the issue (for anyone who cares).
After looking at the fdisk output, I realized that I needed to change the fstab line to:
/dev/mmcblk1p1 /media/card auto rw 0 0
and then I also changed the uEnv.txt to be:
mmcdev=1
bootpart=1:2
mmcroot=/dev/mmcblk0p2 ro
optargs=quiet
From my understanding, it was trying to boot off a disk that wasn't there, and that caused the problem. In addition, I hadn't pointed fstab at the correct device for the drive, which can be seen at the bottom of the fdisk output:
Device Boot Start End Blocks Id System
/dev/mmcblk1p1 2048 31115263 15556608 c W95 FAT32 (LBA)
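When changing fstab entries like this, the entry can be checked without a reboot (assuming the mount point exists, or creating it first):

```shell
mkdir -p /media/card    # make sure the mount point exists
mount -a                # mount everything listed in /etc/fstab; errors print here
findmnt /media/card     # confirm what is mounted there and from which device
df -h /media/card       # and that the expected size shows up
```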

vmware enlarging space for linux

It's maybe a stupid question, but how can I enlarge my Linux machine's disk from 20 to 40 GB? I need to increase my / space. I resized it in VMware and it says 40 GB now, but if I run:
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/zabbix-root 19G 17G 789M 96% /
udev 489M 4.0K 489M 1% /dev
tmpfs 200M 276K 199M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 498M 0 498M 0% /run/shm
/dev/sda1 228M 25M 192M 12% /boot
or
fdisk -l
Disk /dev/mapper/zabbix-root: 20.1 GB, 20124270592 bytes
255 heads, 63 sectors/track, 2446 cylinders, total 39305216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/zabbix-root doesn't contain a valid partition table
Disk /dev/mapper/zabbix-swap_1: 1069 MB, 1069547520 bytes
255 heads, 63 sectors/track, 130 cylinders, total 2088960 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
I still see only 20 GB... How can I use the other 20 GB?
Thanks a lot.
Depending on your distribution, probably something like this:
Reboot, or run echo "- - -" > /sys/class/scsi_host/host0/rescan so the kernel rescans the SCSI bus and notices the larger disk.
Now that Linux is aware of the disk size change, use pvresize to extend your PV.
Using vgdisplay, make sure that you have free space on your VG.
Extend your LV using lvextend and, finally, use resize2fs to grow your file system.
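Put together, the sequence might look like the sketch below. The device and VG/LV names match the zabbix-root output above but should be verified with pvs/vgs/lvs before running anything, and the PV device (/dev/sda2 here) is an assumption:

```shell
# 1. Make the kernel notice the grown VMware disk (or simply reboot)
echo "- - -" > /sys/class/scsi_host/host0/rescan
# 2. If the PV lives on a partition, grow that partition first
#    (fdisk/parted/growpart), then grow the PV to match
pvresize /dev/sda2                    # assumed PV device -- check with pvs
vgdisplay zabbix                      # "Free PE / Size" should now be > 0
# 3. Hand the free space to the root LV and grow the file system
lvextend -l +100%FREE /dev/mapper/zabbix-root
resize2fs /dev/mapper/zabbix-root     # ext2/3/4; use xfs_growfs for XFS
```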
