I have Windows, Linux and FreeBSD on my computer, but I accidentally "forgot" to allocate 17 GB of free disk space at the end of the disk (and "bad" Windows is using a small primary restore partition, so I now have 4 primary partitions). Can I resize my FreeBSD partition to capture the free space?
Here is my disk partitioning:
$ gpart show
=>        63  488397105  ada0  MBR  (232G)
          63       1985        - free -  (992k)
        2048     716800     1  ntfs  (350M)
      718848  313856000     2  ntfs  (149G)
   314574848       2046        - free -  (1M)
   314576894   83996674     3  ebr  (40G)
   398573568         27        - free -  (13k)
   398573595   52428726     4  freebsd  [active]  (25G)
   451002321   37394847        - free -  (17G)   <-- the free space I want to allocate

=>         0  83996674  ada0s3  EBR  (40G)
           0  29997058        1  linux-data  (14G)
    29997058      2028           - free -  (1M)
    29999086  49997844   476176  linux-data  (23G)
    79996930      1980           - free -  (990k)
    79998910   3997764  1269824  linux-swap  (1.9G)

=>         0  52428726  ada0s4  BSD  (25G)
           0  52428725       1  freebsd-ufs  (25G)
    52428725          1         - free -  (512B)
Thanks in advance
Have a look at gpart's resize command. That should enable you to grow the partition. You can then grow the UFS filesystem in the partition with growfs(8).
Do make a backup of your filesystem before trying this!
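A minimal sketch of the sequence, assuming the indices from the gpart output above (slice 4 of ada0, and partition 1 inside ada0s4); this typically has to be done from a live/rescue environment because the filesystem is otherwise in use:

gpart resize -i 4 ada0        # grow the MBR slice into the trailing 17G of free space
gpart resize -i 1 ada0s4      # grow the BSD partition inside the slice
growfs /dev/ada0s4a           # grow UFS to fill the partition (device name may differ on your system)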
growfs(8) works well for the UFS case. If you are using ZFS, you will want to zpool online -e the zpool after expanding the partition (slice).
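For the ZFS case, a sketch assuming a hypothetical single-disk pool named zroot sitting on ada0s4:

zpool online -e zroot ada0s4     # expand onto the newly grown slice
zpool set autoexpand=on zroot    # optional: auto-expand on future resizes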
I am using the following command, where sda (500 GB) is my laptop HD (unmounted) and sdc (500 GB) is my external USB HD:
dd if=/dev/sda of=/dev/sdc bs=4096
When complete, this returns:
122096647+0 records in
122096646+0 records out
500107862016 bytes (500 GB) copied, 10975.5 s, 45.6 MB/s
This shows records in != records out
fdisk -l
returns
Device     Boot      Start        End     Blocks  Id  System
/dev/sda1  *          2048     718847     358407   7  HPFS/NTFS/exFAT
/dev/sda2            718848  977102847  488192000   7  HPFS/NTFS/exFAT
/dev/sdc1  *          2048     718847     358407   7  HPFS/NTFS/exFAT
/dev/sdc2            718848  977102847  976384000   7  HPFS/NTFS/exFAT
This also shows differences between the block counts.
Another question: is it normal for dd to take 3 hours for a 500 GB copy (laptop SSD to a normal non-SSD USB HD)?
On Windows, my physical sector size is 4096 while the logical sector size is 512.
Is it normal for dd to take 3 hours? Yes. dd can take very long because you are copying everything off the drive bit by bit. You also need to consider how the source (sda) is connected to the destination (sdc). You mention sdc is your external USB hard drive, so what is the maximum transfer speed of that USB link? It is unlikely that the transfer will always run at that maximum; if it is USB 2.0, then yes, it can take very long.
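As a sanity check against the numbers in the question: roughly 500 GB at the reported 45.6 MB/s is about 500,000 MB / 45.6 MB/s ≈ 11,000 s, which is right around 3 hours, so the runtime is consistent with the measured transfer rate rather than a sign that anything went wrong.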
Which is why I hate dd. It is often used when it should not be, and differences between source and destination such as partition sizes, types, and block sizes cause problems.
In most cases you are better off using cp -rp or tar.
If you are trying to clone a drive that has a bootable Linux operating system, you do not need to use dd; there are better ways.
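For example, a file-level copy sketch, assuming the source and destination filesystems are mounted at the hypothetical /mnt/src and /mnt/dst, plus a GNU dd variant with a larger block size and progress output if a raw clone is really required:

tar -C /mnt/src -cpf - . | tar -C /mnt/dst -xpf -    # copies files, preserving permissions and ownership

dd if=/dev/sda of=/dev/sdc bs=1M conv=noerror,sync status=progress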
I am working with an NVMe card on Linux (Ubuntu 14.04).
I am seeing performance degradation on an Intel NVMe card when it is formatted with an XFS filesystem at its default sector size (512), or any other sector size smaller than 4096.
In the experiment I formatted the card with an XFS filesystem with default options and ran fio with a 64k block size on an arm64 platform with a 64k page size.
This is the command used:
fio --rw=randread --bs=64k --ioengine=libaio --iodepth=8 --direct=1 --group_reporting --name=Write_64k_1 --numjobs=1 --runtime=120 --filename=new --size=20G
I could only get the values below:
Run status group 0 (all jobs):
READ: io=20480MB, aggrb=281670KB/s, minb=281670KB/s, maxb=281670KB/s, mint=74454msec, maxt=74454msec
Disk stats (read/write):
nvme0n1: ios=326821/8, merge=0/0, ticks=582640/0, in_queue=582370, util=99.93%
I tried formatting as follows:
mkfs.xfs -f -s size=4096 /dev/nvme0n1
then the values were:
Run status group 0 (all jobs):
READ: io=20480MB, aggrb=781149KB/s, minb=781149KB/s, maxb=781149KB/s, mint=26847msec, maxt=26847msec
Disk stats (read/write):
nvme0n1: ios=326748/7, merge=0/0, ticks=200270/0, in_queue=200350, util=99.51%
I find no performance degradation with:
a 4k page size
any fio block size smaller than 64k
ext4 with its default configuration
What could be the issue? Is this an alignment issue? What am I missing here? Any help appreciated.
The issue is your SSD's native sector size is 4K. So your file system's block size should be set to match so that reads and writes are aligned on sector boundaries. Otherwise you'll have blocks that span 2 sectors, and therefore require 2 sector reads to return 1 block (instead of 1 read).
If you have an Intel SSD, the newer ones have a variable sector size you can set using their Intel Solid State Drive DataCenter Tool. But honestly 4096 is still probably the drive's true sector size anyway and you'll get the most consistent performance using it and setting your file system to match.
On ZFS on Linux the setting is ashift=12 for 4K blocks.
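A quick sketch of checking and matching the sector size, assuming the device is /dev/nvme0n1 as above; the last line is only relevant if you use ZFS instead of XFS, and "tank" is just a placeholder pool name:

cat /sys/block/nvme0n1/queue/logical_block_size      # what the kernel reports for the drive
cat /sys/block/nvme0n1/queue/physical_block_size
mkfs.xfs -f -s size=4096 /dev/nvme0n1                # XFS with 4096-byte sectors, as in the question
zpool create -o ashift=12 tank /dev/nvme0n1          # ZFS: force 4K alignment at pool creation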
I have been playing around with BTRFS on a few drives I had lying around. At first I created BTRFS using the entire drive, but eventually I decided I wanted to use GPT partitions on the drives and recreated the filesystem I needed on the partitions that resulted. (This was so I could use a portion of each drive as Linux swap space, FYI.)
When I got this all done, BTRFS worked a treat. But I have annoying messages saying that I have some old filesystems from my previous experimentation that I have actually nuked. I worry this means that BTRFS is confused about what space on the drives is available, or that some sort of corruption might occur.
The messages look like this:
$ sudo btrfs file show
Label: 'x'  uuid: 06fa59c9-f7f6-4b73-81a4-943329516aee
        Total devices 3 FS bytes used 159.20GB
        devid    3 size 931.00GB used 134.01GB path /dev/sde
        *** Some devices missing

Label: 'root'  uuid: 5f63d01d-3fde-455c-bc1c-1b9946e9aad0
        Total devices 4 FS bytes used 1.13GB
        devid    4 size 931.51GB used 1.03GB path /dev/sdd
        devid    3 size 931.51GB used 2.00GB path /dev/sdc
        devid    2 size 931.51GB used 1.03GB path /dev/sdb
        *** Some devices missing

Label: 'root'  uuid: e86ff074-d4ac-4508-b287-4099400d0fcf
        Total devices 5 FS bytes used 740.93GB
        devid    4 size 911.00GB used 293.03GB path /dev/sdd1
        devid    5 size 931.51GB used 314.00GB path /dev/sde1
        devid    3 size 911.00GB used 293.00GB path /dev/sdc1
        devid    2 size 911.00GB used 293.03GB path /dev/sdb1
        devid    1 size 911.00GB used 293.00GB path /dev/sda1
As you can see, I have an old filesystem labeled 'x' and an old one labeled 'root', and both of these have "Some devices missing". The real filesystem, the last one shown, is the one that I am now using.
So how do I clean up the old "Some devices missing" filesystems? I'm a little worried, but mostly just OCD and wanting to tidy up this messy output.
Thanks.
To wipe the leftover superblocks from disks that are NOT part of the BTRFS filesystem you want to keep, I found this:
How to clean up old superblock ?
...
To actually remove the filesystem use:
wipefs -o 0x10040 /dev/sda
8 bytes [5f 42 48 52 66 53 5f 4d] erased at offset 0x10040 (btrfs)
from: https://btrfs.wiki.kernel.org/index.php/Problem_FAQ#I_can.27t_mount_my_filesystem.2C_and_I_get_a_kernel_oops.21
I actually figured this out for myself. Maybe it will help someone else.
I poked around in the code to see what was going on. When the btrfs filesystem show command is used to show all filesystems on all devices, it scans every device and partition in /proc/partitions. Each device and each partition is examined to see if there is a BTRFS "magic number" and associated valid root data structure found at 0x10040 offset from the beginning of the device or partition.
I then used hexedit on a disk that was showing up wrong in my own situation and sure enough there was a BTRFS magic number (which is the ASCII string _BHRfS_M) there from my previous experiments.
I simply nailed that magic number by overwriting a couple of the characters of the string with "**", also using hexedit, and the erroneous entries magically disappeared!
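If you'd rather not hand-edit the disk, a sketch of the same check and cleanup with standard tools, assuming the stale superblock lives on /dev/sde (substitute your own device):

dd if=/dev/sde bs=1 skip=65600 count=8 2>/dev/null | hexdump -C    # 65600 = 0x10040; expect _BHRfS_M
wipefs /dev/sde                # list every signature wipefs can see, with offsets
wipefs -o 0x10040 /dev/sde     # erase the btrfs magic at that offset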
I'm running a few tube sites and I'm planning a new server for the next 3 years.
I already have a Norco 1204 case with a Supermicro 600W PSU.
The plan is to use:
1. 4X3TB SATA HD's in RAID6 for video streaming.
2. 2x120GB SSD in RAID1 for system, sql and php files.
(Some may think that it will not fit, but it does)
I want to use a motherboard with 2 CPUs, as much RAM as possible, and a RAID card that supports SSDs, but I don't know which hardware to pick.
My budget for the motherboard, 2 CPUs and RAID card is about $1900.
Please help me with this.
Thanks,
Avi.
This is a quick update. Half a year ago I finished building the server; here is the final spec:
Case:
NORCO RPC-1204
PSU:
SuperMicro PWS-563-1H 600w
The PSU does not fit with screws
Motherboard:
Supermicro MBD-X9DRL-IF-B
2X Xeon E5 2609 with Dynatron R18 1u cooling
DDR3 ECC:
4X Hynix DDR3 1333MHz PC3L-10600R
Raid:
LSI 3Ware 9750-8i + LSIiBBU07 battery
2 SFF-8087 raid cables
HD's:
4X Intel 330 120GB SSD (RAID 10) - OS, SQL, PHP files
4X Hitachi 4TB (RAID 5) - video files
And,
A picture of the server build:
http://img18.imageshack.us/img18/7788/serverx.jpg
I'm working on an embedded platform (Broadcom's BCM5358U processor with a MIPS core), where I need extra partitions for a future upgrade procedure. The filesystem used is SquashFS, so I modified the struct mtd_partition array that is passed to the MTD code accordingly, and I ended up with this:
#cat /proc/partitions
major minor  #blocks  name
  31     0      128   mtdblock0
  31     1     6016   mtdblock1
  31     2     4573   mtdblock2
  31     3     6016   mtdblock3
  31     4     4445   mtdblock4
  31     5     4160   mtdblock5
  31     6       64   mtdblock6
Now I want to be able to mount /dev/mtdblock4 as temporary storage during a system upgrade, but I can't, because the mtdblock4 partition doesn't have any filesystem on it. The kernel image and the filesystem are integrated into one image, which is flashed to /dev/mtdblock2 (which is supplied to the kernel as the root filesystem).
I see only one solution: create an empty SquashFS image and write it to /dev/mtdblock4, and maybe it will work the way I want(?). Is there a way to, say, format the partition on the fly whenever the kernel boots, or does that violate the MTD concepts?
Thanks.
You can mount a JFFS2 filesystem on an empty (erased) flash partition. It will automatically "format" the flash partition at mount time. SquashFS is not a good candidate, because it is a read-only filesystem.
Is there a reason you can't create a mount a new FS on the fly?
You definitely do not want an empty squashFS image. If you want temporary writeable storage you can use something like a tmpfs volume. If you need to support a system reboot, you can use JFFS on a raw flash device. You should be able to format/mount the MTD devices just like any other block device.
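A sketch of both options, assuming JFFS2 and tmpfs support are in the kernel, mtd4 is the raw MTD device behind mtdblock4, and /mnt/upgrade is a hypothetical mount point:

flash_eraseall /dev/mtd4                        # erase the partition first (mtd-utils)
mount -t jffs2 /dev/mtdblock4 /mnt/upgrade      # JFFS2 "formats" itself on first mount

mount -t tmpfs -o size=4m tmpfs /mnt/upgrade    # RAM-backed alternative if data need not survive a reboot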
Thanks for the responses.
Yes, SquashFS is read-only, but nevertheless I'm able to update my system via the web interface provided by the platform vendor. The platform SDK provides an API to directly access MTD from user space.