This may be a stupid question, but how can I enlarge my Linux machine from 20 GB to 40 GB? I need to increase my / space. I resized it in VMware and it now says 40 GB, but if I run:
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/zabbix-root 19G 17G 789M 96% /
udev 489M 4.0K 489M 1% /dev
tmpfs 200M 276K 199M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 498M 0 498M 0% /run/shm
/dev/sda1 228M 25M 192M 12% /boot
or
fdisk -l
Disk /dev/mapper/zabbix-root: 20.1 GB, 20124270592 bytes
255 heads, 63 sectors/track, 2446 cylinders, total 39305216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/zabbix-root doesn't contain a valid partition table
Disk /dev/mapper/zabbix-swap_1: 1069 MB, 1069547520 bytes
255 heads, 63 sectors/track, 130 cylinders, total 2088960 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
I still see only 20 GB... How can I make the other 20 GB available?
Thanks a lot.
Depending on your distribution, probably something like this:
reboot, or echo "- - -" > /sys/class/scsi_host/host0/rescan
Now that Linux is aware of the disk size change, grow the partition that holds your PV, then use pvresize to extend the PV.
Using vgdisplay, make sure that you have free space on your VG.
Extend your LV using lvextend and, finally, use resize2fs to grow your filesystem.
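As a sketch, the whole sequence might look like this. The VG/LV names are taken from your df output (zabbix-root); the assumption that the PV sits on /dev/sda2 is mine, so check with pvs first. growpart comes from cloud-utils; you can equally grow the partition with fdisk or parted.

```shell
# Make the kernel rescan the disk so it sees the new 40 GB size
echo "- - -" > /sys/class/scsi_host/host0/rescan

# Grow the partition holding the PV (assumed /dev/sda2 here; verify with pvs)
growpart /dev/sda 2

# Grow the PV to fill the enlarged partition
pvresize /dev/sda2

# Confirm the VG now shows free extents
vgdisplay zabbix

# Give all free space to the root LV, then grow the filesystem
lvextend -l +100%FREE /dev/zabbix/root
resize2fs /dev/zabbix/root
```

resize2fs can grow a mounted ext4 filesystem online, so no reboot is needed for the last step.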
I have 2 servers, both Ubuntu 18.04. The first is my main server, which I want to back up. The second is a test server (KS-4 Server - Atom N2800 - 4GB DDR3 1066 MHz - SoftRAID 2x 2To SATA) where I want to test my backup.
I made the backup with the 'dd' command and then downloaded it to server 2 with wget (490 GB, ~24 hours of downloading).
Now I want to test my backup, so I tried:
dd if=sdadisk.img of=/dev/sdb
I get:
193536+0 records in
193536+0 records out
99090432 bytes (99 MB, 94 MiB) copied, 5.06239 s, 19.6 MB/s
But nothing changes.
fdisk -l
Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x8efed6c9
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 4096 1050623 1046528 511M fd Linux raid autodetect
/dev/sda2 1050624 3905974271 3904923648 1.8T fd Linux raid autodetect
/dev/sda3 3905974272 3907020799 1046528 511M 82 Linux swap / Solaris
GPT PMBR size mismatch (879097967 != 3907029167) will be corrected by w(rite).
Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: B3B13223-6382-4EB6-84C3-8E66C917D396
Device Start End Sectors Size Type
/dev/sdb1 2048 1048575 1046528 511M EFI System
/dev/sdb2 1048576 2095103 1046528 511M Linux RAID
/dev/sdb3 2095104 878039039 875943936 417.7G Linux RAID
/dev/sdb4 878039040 879085567 1046528 511M Linux swap
Disk /dev/md1: 511 MiB, 535756800 bytes, 1046400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/md2: 1.8 TiB, 1999320842240 bytes, 3904923520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
lsblk -l
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 1.8T 0 disk
├─sda1 8:1 0 511M 0 part
│ └─md1 9:1 0 511M 0 raid1 /boot
├─sda2 8:2 0 1.8T 0 part
│ └─md2 9:2 0 1.8T 0 raid1 /
└─sda3 8:3 0 511M 0 part [SWAP]
sdb 8:16 0 1.8T 0 disk
├─sdb1 8:17 0 511M 0 part
├─sdb2 8:18 0 511M 0 part
├─sdb3 8:19 0 417.7G 0 part
└─sdb4 8:20 0 511M 0 part
I think the problem is with the configuration of the disks on server 2, specifically with the 'Linux raid' between them. I searched for how to change it and tested commands like 'mdadm...', but it doesn't work the way I expected. So I have questions:
How do I change the 'Linux raid' setup from 2 HDDs to 1 HDD holding the current system, leaving the other HDD clear, so that I can test my backup properly?
Is it generally possible to restore a 490 GB backup onto a 1.8 TB disk?
Did I select the best option for a full Linux backup?
To touch on question #3, "Did I select the best option for a full Linux backup?":
You are using the correct tool, but not the correct items to run the dd command on.
It appears you created a .img file of your sda drive (which is normally only done to create a bootable disk, not a full backup of the drive).
If you want to create a full backup, for example of /dev/sda you would run the following:
dd if=/dev/sda of=/dev/sdb
But in your case your /dev/sda drive is twice the size of your /dev/sdb drive, so you might want to consider backing up important files using tar and compressing the backup with either gzip or bzip2.
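One common pattern for such a file-level backup might look like this. The archive path and exclude list are examples, not something from your setup; adjust them to what you actually need to keep.

```shell
# Archive the root filesystem to a compressed tarball, preserving
# permissions (-p), while skipping pseudo-filesystems and the
# archive file itself so tar does not try to back up its own output
tar --exclude=/backup.tar.gz \
    --exclude=/proc --exclude=/sys --exclude=/dev \
    --exclude=/run --exclude=/tmp \
    -czpf /backup.tar.gz /
```

Unlike a raw dd image, this only stores the files that exist, so a 490 GB filesystem with mostly free space produces a much smaller archive and can be restored onto a differently sized disk.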
I only have one disk, and it has lots of unused free space, with only one primary partition. I want to create a .img image as a backup, without the free space.
I did some searching on creating an .img image (without free space); most answers give:
dd if=/dev/sdb | gzip > backup.img.gz
Now I have two questions:
I only have one disk (sda) with all the space in one partition (sda1). Is it possible to create a .img image with the dd command without adding another disk? If yes, then what should I do? Will running dd if=/dev/sda | gzip > backup.img.gz in /home/xxx be OK? Won't dd copy the .img file itself into the image?
Is it possible, in my situation (one disk), to get a plain .img file rather than a .img.gz file, without the free space?
Here's the state of my computer (sorry for the system language):
root#act-Precision-T1500:/home/act# df
df: "/mnt/udisk": Input/output error
Filesystem 1K-blocks Used Available Use% Mounted on
udev 1983824 4 1983820 1% /dev
tmpfs 400464 1252 399212 1% /run
/dev/sda1 476502040 64401980 387872068 15% /
none 4 0 4 0% /sys/fs/cgroup
none 5120 0 5120 0% /run/lock
none 2002320 180 2002140 1% /run/shm
none 102400 40 102360 1% /run/user
root#act-Precision-T1500:/home/act# fdisk -l
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000d1403
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 968466431 484232192 83 Linux
/dev/sda2 968468478 976771071 4151297 5 Extended
/dev/sda5 968468480 976771071 4151296 82 Linux swap / Solaris
Am I expressing myself clearly? I'm a beginner to Linux and my English is awful. Thank you for helping me!
I SSH'd in remotely and resized my primary partition with parted (rebooted as well and updated /etc/fstab) to give it more space.
Why on earth is my primary ext4 partition not reflecting the free space?
~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 985M 0 985M 0% /dev
tmpfs 200M 3.5M 197M 2% /run
/dev/sda1 7.8G 7.5G 0 100% /
tmpfs 999M 0 999M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 999M 0 999M 0% /sys/fs/cgroup
tmpfs 200M 0 200M 0% /run/user/0
~# fdisk -l
Disk /dev/sda: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x010920b9
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 41943039 41940992 20G 83 Linux
Any ideas? I must have forgotten something really simple.
Execute parted and then print free. This will show the free space on your disk.
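Once parted confirms the partition was actually enlarged, the usual missing step is growing the filesystem itself to fill the new partition boundary. A sketch, assuming the ext4 root on /dev/sda1 shown in your fdisk output:

```shell
# Show the partition table plus any unallocated gaps
parted /dev/sda print free

# Grow the ext4 filesystem to fill the enlarged partition;
# resize2fs can do this online on the mounted root filesystem
resize2fs /dev/sda1

# Verify that df now reports the new size
df -h /
```

df reporting 7.8G on a 20G partition is exactly the symptom of a resized partition whose filesystem was never grown.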
I created a thin-provisioned VM (CentOS 7) with a 50 GB hard disk, but it doesn't automatically grow when more space is needed. Can someone please tell me how to increase the space of the "/" directory?
[oracle#localhost ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 14G 14G 16K 100% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 912M 985M 49% /dev/shm
tmpfs 1.9G 17M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/sda1 497M 147M 351M 30% /boot
tmpfs 380M 0 380M 0% /run/user/1001
tmpfs 380M 0 380M 0% /run/user/1002
Below is the output of the pvs command.
[root#inches-rmdev01 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 centos lvm2 a-- 15.51g 40.00m
Below is the output of the vgs command.
[root#inches-rmdev01 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
centos 1 2 0 wz--n- 15.51g 40.00m
Below is the output of the lvs command.
[root#inches-rmdev01 ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root centos -wi-ao---- 13.87g
swap centos -wi-ao---- 1.60g
Below is the output of the fdisk command.
[root#inches-rmdev01 ~]# fdisk -l
Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0009a61a
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 1026047 512000 83 Linux
/dev/sda2 1026048 33554431 16264192 8e Linux LVM
/dev/sda3 33554432 104857599 35651584 8e Linux LVM
Disk /dev/mapper/centos-root: 14.9 GB, 14889779200 bytes, 29081600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/centos-swap: 1719 MB, 1719664640 bytes, 3358720 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
In the fdisk -l output you can see that you have a 35 GB partition, /dev/sda3. To extend your root volume you can add this partition to LVM (the Logical Volume Manager):
pvcreate /dev/sda3
This will add the unused partition /dev/sda3 as a new PV (physical volume) to LVM.
The next step is to extend your root VG (volume group). In your case this is easy, since you've got only one VG:
vgextend centos /dev/sda3
Now you have added the 35 GB partition to your VG and you can distribute it to your LVs (logical volumes).
Finally, you can add as much space as you need (up to 35 GB) to your root volume with the lvextend command.
If you want to use the whole 35 GB, you can use:
lvextend -l +100%FREE /dev/mapper/centos-root
If you only want to add a certain amount (e.g. 1G), you can use this:
lvextend -L +1G /dev/mapper/centos-root
And finally, resize your filesystem:
resize2fs /dev/mapper/centos-root
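One caveat worth adding: resize2fs only handles ext2/3/4 filesystems, and a default CentOS 7 install formats the root LV as XFS (check with df -T or lsblk -f). If that is the case here, the equivalent online grow step is:

```shell
# For an XFS root filesystem, grow it via its mount point instead
xfs_growfs /
```

Like resize2fs, xfs_growfs works while the filesystem is mounted, so the lvextend steps above stay exactly the same.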
The LVM logic is:
1. Hard disk: fdisk -l
2. Physical volume: pvs
3. Volume group: vgs
4. Logical volume: lvs
After following the instructions found here:
http://elinux.org/Beagleboard:MicroSD_As_Extra_Storage
and here:
http://electronicsembedded.blogspot.com/2014/10/beaglebone-black-using-sd-card-as-extra.html?showComment=1434418179676#c2761158033046523777
I am still having trouble. I used the code it says to use and followed the instructions; I get 3 solid LEDs on the board on power-up with the SD card inserted, and Windows 7 doesn't detect it at all.
The board works fine without the SD card inserted: I can boot up and log in via SSH, and it is detected by Windows.
The code for my uEnv.txt is as follows:
mmcdev=1
bootpart=1:2
mmcroot=/dev/mmcblk1p2 ro
optargs=quiet
and I also added to the fstab file:
/dev/mmcblk0p1 /media/card auto auto,rw,async,user,nofail 0 0
Some results from checking the filesystem (my drive is called 'BBB_Ext'). This is after booting without the SD card inserted and then inserting it after bootup:
root#beaglebone:~# df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 3.5G 1.8G 1.5G 55% /
/dev/root 3.5G 1.8G 1.5G 55% /
devtmpfs 250M 0 250M 0% /dev
tmpfs 250M 4.0K 250M 1% /dev/shm
tmpfs 250M 248K 250M 1% /run
tmpfs 250M 0 250M 0% /sys/fs/cgroup
tmpfs 250M 4.0K 250M 1% /tmp
/dev/mmcblk0p1 70M 54M 16M 78% /media/card
/dev/mmcblk1p1 15G 16K 15G 1% /media/BBB_Ext_
Here are more details from fdisk:
root#beaglebone:~# fdisk -l
Disk /dev/mmcblk0: 3867 MB, 3867148288 bytes, 7553024 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/mmcblk0p1 * 63 144584 72261 c W95 FAT32 (LBA)
/dev/mmcblk0p2 144585 7550549 3702982+ 83 Linux
Disk /dev/mmcblk0boot1: 2 MB, 2097152 bytes, 4096 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mmcblk0boot0: 2 MB, 2097152 bytes, 4096 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mmcblk1: 15.9 GB, 15931539456 bytes, 31116288 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/mmcblk1p1 2048 31115263 15556608 c W95 FAT32 (LBA)
Any help would be appreciated. I figured since this is Linux-related the question is applicable to SO; if it's better off on SE, please let me know.
From what I gathered, the uEnv.txt should be (saved on your SD card):
mmcdev=1
bootpart=1:2
mmcroot=/dev/mmcblk1p2 ro
optargs=quiet
And add the line below to /etc/fstab:
/dev/mmcblk0p1 /media/data auto rw 0 0
OK, I believe I resolved the issue (for anyone who cares).
After looking at the fdisk output I realized that I needed to change the fstab line to:
/dev/mmcblk1p1 /media/card auto rw 0 0
and then I also changed the uEnv.txt to be:
mmcdev=1
bootpart=1:2
mmcroot=/dev/mmcblk0p2 ro
optargs=quiet
From my understanding, it was trying to boot off a disk that wasn't there, and that caused the problem. In addition, I hadn't set the correct device for the drive in fstab, which can be seen at the bottom of the fdisk output:
Device Boot Start End Blocks Id System
/dev/mmcblk1p1 2048 31115263 15556608 c W95 FAT32 (LBA)