How can I discover whether a remote machine is configured with hardware RAID, software RAID, or neither? All I know is that it currently has 256 GB; I need to order more space, but before I can, I need to know how the drives are configured.
df lists the drive as:
/dev/sdb1 287826944 273086548 119644 100% /mnt/db
and hdparm:
/dev/sdb:
HDIO_GET_MULTCOUNT failed: Invalid argument
readonly = 0 (off)
readahead = 256 (on)
geometry = 36404/255/63, sectors = 299439751168, start = 0
What else should I run and what should I look for?
Software RAID would appear as /dev/md0, not /dev/sdb. Nor is it LVM.
So it's either real hardware RAID, or a raw disk.
lspci might show you any RAID controllers plugged in.
dmesg | grep sdb might tell you some more about the disk.
sdparm /dev/sdb might tell you something, particularly if it really is a SCSI disk.
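Those checks can be rolled into one script; a minimal sketch that skips any tool that isn't installed (the device name sdb is the one from the question):

```shell
# Probe a disk for hints of hardware or software RAID.
# Every tool is optional: anything missing is skipped rather than failing.
probe_raid() {
    dev="$1"

    if command -v lspci >/dev/null 2>&1; then
        echo "== RAID-capable controllers =="
        lspci 2>/dev/null | grep -i raid || echo "(none reported by lspci)"
    fi

    echo "== kernel messages for $dev =="
    dmesg 2>/dev/null | grep -i "$dev" || echo "(no messages, or dmesg needs root)"

    if [ -r /proc/mdstat ]; then
        echo "== software RAID status =="
        cat /proc/mdstat
    fi
    return 0
}

probe_raid sdb
```

Each section degrades gracefully, so the script is safe to run on any box you have shell access to.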
To check for software RAID:
cat /proc/mdstat
On my box, this shows:
Personalities : [raid1]
md0 : active raid1 sda1[0] sdb1[1]
96256 blocks [2/2] [UU]
md1 : active raid1 sda2[0] sdb2[1]
488287552 blocks [2/2] [UU]
unused devices: <none>
You get the names of all software RAID arrays, the RAID level for each, the partitions that are part of each RAID array, and the status of the arrays.
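If you want to pull just the array names, state, and RAID level out of that format, a small awk filter does it; a sketch, using a heredoc to stand in for /proc/mdstat (on a real system you would read the file itself):

```shell
# Extract array name, state, and RAID level from /proc/mdstat-style text.
# Array lines start with "mdN : <state> <level> <members...>".
parse_mdstat() {
    awk '/^md/ { print $1, $3, $4 }'
}

# The heredoc reproduces the sample output shown above.
parse_mdstat <<'EOF'
Personalities : [raid1]
md0 : active raid1 sda1[0] sdb1[1]
      96256 blocks [2/2] [UU]
md1 : active raid1 sda2[0] sdb2[1]
      488287552 blocks [2/2] [UU]
unused devices: <none>
EOF
```

On the sample this prints `md0 active raid1` and `md1 active raid1`; against the real file you would pipe `cat /proc/mdstat` into it.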
dmesg might help.
On a system where we do have software RAID, dmesg shows things like:
SCSI device sda: 143374744 512-byte hdwr sectors (73408 MB)
sda: Write Protect is off
sda: Mode Sense: ab 00 10 08
SCSI device sda: write cache: enabled, read cache: enabled, supports DPO and FUA
SCSI device sda: 143374744 512-byte hdwr sectors (73408 MB)
sda: Write Protect is off
sda: Mode Sense: ab 00 10 08
SCSI device sda: write cache: enabled, read cache: enabled, supports DPO and FUA
sda: sda1 sda2
sd 0:0:0:0: Attached scsi disk sda
SCSI device sdb: 143374744 512-byte hdwr sectors (73408 MB)
sdb: Write Protect is off
sdb: Mode Sense: ab 00 10 08
SCSI device sdb: write cache: enabled, read cache: enabled, supports DPO and FUA
SCSI device sdb: 143374744 512-byte hdwr sectors (73408 MB)
sdb: Write Protect is off
sdb: Mode Sense: ab 00 10 08
SCSI device sdb: write cache: enabled, read cache: enabled, supports DPO and FUA
sdb: sdb1 sdb2
sd 0:0:1:0: Attached scsi disk sdb
A bit later we see:
md: md0 stopped.
md: bind&lt;sda2&gt;
md: bind&lt;sdb2&gt;
md: raid0 personality registered for level 0
md0: setting max_sectors to 512, segment boundary to 131071
raid0: looking at sda2
raid0: comparing sda2(63296000) with sda2(63296000)
raid0: END
raid0: ==> UNIQUE
raid0: 1 zones
raid0: looking at sdb2
raid0: comparing sdb2(63296000) with sda2(63296000)
raid0: EQUAL
raid0: FINAL 1 zones
raid0: done.
raid0 : md_size is 126592000 blocks.
raid0 : conf->hash_spacing is 126592000 blocks.
raid0 : nb_zone is 1.
raid0 : Allocating 4 bytes for hash.
and a df shows:
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 7.8G 3.3G 4.2G 45% /
tmpfs 2.0G 0 2.0G 0% /dev/shm
/dev/md0 117G 77G 35G 69% /scratch
So part of sda and all of sdb have been bound as one raid volume.
What you have could be a single disk, or it could be hardware RAID; dmesg should give you some clues.
It is always possible that a hardware RAID controller just looks like a single SATA (or SCSI) drive. For example, on our systems with fibre-channel RAID arrays, Linux only sees a single device, and you control the RAID configuration and disk assignment by connecting to the fibre RAID array directly.
You can try mount -v, or you can look in /sys/ or /dev/ for hints. dmesg might reveal information about the drivers used, and lspci could list any add-in hardware RAID cards, but in general there is no generic method you can rely on to find out the exact hardware and driver setup.
You might try using mdadm, which gives much more detail. If the mount command does not show /dev/md*, chances are you are not using (or seeing) software RAID.
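If mdadm is available, it can settle the question in both directions: --examine looks for an md superblock on a raw component device, while --detail describes an assembled /dev/md* array. A guarded sketch (the device name is an example):

```shell
# Query software RAID state with mdadm, if it is installed.
check_md() {
    dev="$1"
    if ! command -v mdadm >/dev/null 2>&1; then
        echo "mdadm not installed"
        return 0
    fi
    # --examine looks for an md superblock on a component device;
    # use --detail instead for an assembled /dev/md* array.
    mdadm --examine "$dev" 2>/dev/null || echo "$dev: no md superblock found"
}

check_md /dev/sdb
```

A clean "no md superblock found" on every raw device, together with an empty /proc/mdstat, rules software RAID out.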
This is really a system administration, not programming related question, I'll tag it as such.
I'm going to write a disk-partition creation program for removable flash-memory devices, which are mostly controlled by SCSI-based I/O and accessed by LBA address.
For reference, I'm examining the partition tables of SD cards that were partitioned and formatted by Ubuntu's disk utility.
I used the 'unit' command of the 'parted' program in Linux to view the cards' parameters in CHS units and in bytes.
The following is for an 8 GB SD card with 15122432 LBA sectors:
pi#raspberrypi:~ $ sudo parted /dev/sda
GNU Parted 3.2
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) unit chs print
Model: Generic CRM01 SD Reader (scsi)
Disk /dev/sda: 1020,130,11
Sector size (logical/physical): 512B/512B
BIOS cylinder,head,sector geometry: 1020,239,62. Each cylinder is 7587kB.
Partition Table: msdos
Disk Flags:
Number Start End Type File system Flags
1 0,1,0 1019,238,61 primary ext3
(parted) unit b print
Model: Generic CRM01 SD Reader (scsi)
Disk /dev/sda: 7742685184B
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 31744B 7738552319B 7738520576B primary ext3
The following is for a 4 GB SD card with 7585792 LBA sectors:
(parted) unit chs print
Model: Generic CRM01 SD Reader (scsi)
Disk /dev/sda: 1019,71,29
Sector size (logical/physical): 512B/512B
BIOS cylinder,head,sector geometry: 1019,120,62. Each cylinder is 3809kB.
Partition Table: msdos
Disk Flags:
Number Start End Type File system Flags
1 0,1,0 1018,119,61 primary ext3
(parted) unit b print
Model: Generic CRM01 SD Reader (scsi)
Disk /dev/sda: 3883925504B
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 31744B 3881656319B 3881624576B primary ext3
From my observation, the disk geometry values (C/H/S) differ between SD cards of different capacities, and the geometry values seem associated with the CHS address of the end of the partition. It seems that
for a card whose partition ends at the CHS tuple (c, h, s), the disk geometry will be (c + 1, h + 1, s + 1). Are they related?
And how are these values determined? Do they depend on the OS or the file system?
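The relationship can be checked arithmetically; a quick shell sketch using the 8 GB card's numbers from the parted output above (the reading that the geometry is simply the partition-end CHS plus one in each coordinate is my interpretation, not something parted states):

```shell
# Sanity-check parted's numbers for the 8 GB card:
# geometry 1020,239,62 and partition end 1019,238,61, both from the output above.

heads=239
sectors=62
cyl_bytes=$((heads * sectors * 512))
echo "cylinder size: $cyl_bytes bytes"   # 7586816 bytes, i.e. the 7587 kB parted reports

# The partition end CHS is exactly (C-1, H-1, S-1) of the geometry:
# parted's CHS addresses here are zero-based, and the partition runs to the
# last addressable cylinder, head, and sector.
end_c=1019; end_h=238; end_s=61
test $((end_c + 1)) -eq 1020 &&
test $((end_h + 1)) -eq 239 &&
test $((end_s + 1)) -eq 62 &&
echo "partition end CHS + (1,1,1) matches the reported geometry"
```

The same arithmetic holds for the 4 GB card (geometry 1019,120,62 versus partition end 1018,119,61).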
Disk geometry is stored in the on-board device controller, and the OS requests it from the controller through the driver. The request/answer format is specified in the protocol definition for that kind of device.
A long time ago I wrote an IDE driver for the PDP-11, and I remember something of the IDE/ATA protocol. I do not know the details of modern SATA or SCSI devices,
so I can answer about ATA/IDE only.
An IDE device has a special operation, IDENTIFY (code 0xEC), which the driver sends to the device. The driver writes this opcode to the command register and then, once the device sets the DRDY (device ready) flag, reads a 512-byte block containing the answer. The answer contains disk info (manufacturer, serial, etc.) and the geometry.
See for example this code, where a program sends the request to an ATA device and parses the answer containing the disk geometry.
What I can say additionally:
1) An IDE device can accept an "uploaded geometry" (code 0x91, INITIALIZE DEVICE PARAMETERS). That is, the driver can send a request to the device saying "you will have X sectors, Y heads, Z cylinders", and thereafter the device accepts and uses this "virtual geometry".
2) Some devices do not know their own geometry, and during startup the BIOS must tell the device which geometry it has. Otherwise it simply does not work.
3) Some devices store the external virtual geometry specified by the computer in (1), and remember and use it even across power cycles.
4) If you set up a "virtual geometry" different from the default, some devices automatically revert to the default geometry after an I/O error. The result is a destroyed file system.
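On Linux you do not have to issue IDENTIFY yourself: hdparm -I sends the ATA IDENTIFY DEVICE command (the same 0xEC opcode) and decodes the 512-byte response, including the model, serial, and geometry words. A guarded sketch; the device path is an example:

```shell
# Print the decoded ATA IDENTIFY data (model, geometry) for a disk.
show_identify() {
    dev="$1"
    if ! command -v hdparm >/dev/null 2>&1; then
        echo "hdparm not installed"
        return 0
    fi
    # -I issues IDENTIFY DEVICE (opcode 0xEC) and decodes the 512-byte
    # response; grep keeps the model- and geometry-related lines.
    hdparm -I "$dev" 2>/dev/null | grep -iE 'model|cylinders|heads|sectors' \
        || echo "$dev: IDENTIFY not available (not an ATA device, or needs root)"
}

show_identify /dev/sda
```

Note that hdparm -I shows both the "default" and "current" CHS values, which correspond to the factory geometry and any uploaded virtual geometry described in (1) above.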
I am using the following command, where sda (500 GB) is my laptop hard drive (unmounted) and sdc (500 GB) is my external USB hard drive:
dd if=/dev/sda of=/dev/sdc bs=4096
When complete this returns
122096647+0 records in
122096646+0 records out
500107862016 bytes (500 GB) copied, 10975.5 s, 45.6 MB/s
This shows records in != records out
fdisk -l
returns
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 718847 358407 7 HPFS/NTFS/exFAT
/dev/sda2 718848 977102847 488192000 7 HPFS/NTFS/exFAT
/dev/sdc1 * 2048 718847 358407 7 HPFS/NTFS/exFAT
/dev/sdc2 718848 977102847 976384000 7 HPFS/NTFS/exFAT
This also shows a difference between the block counts of sda2 and sdc2.
Another question: is it normal for dd to take 3 hours for a 500 GB copy (laptop SSD to an ordinary non-SSD USB hard drive)?
On Windows my physical sector size is 4096 while the logical sector size is 512.
Is it normal for dd to take 3 hours? Yes. dd can take very long because you are copying everything off the drive bit by bit. You also need to consider how the connection is made from the source (sda) to the destination (sdc). You mention sdc is your external USB hard drive, so what is the maximum transfer speed of that USB link? And it is unlikely that the transfer will always run at that maximum. If it is USB 2.0 then yes, it can take very long.
Which is why I hate dd: it is often used when it should not be, and differences between source and destination, such as partition sizes, types, and block sizes, cause problems.
In most cases you are better off using cp -rp or tar.
If you are trying to clone a drive that has a bootable Linux operating system, you do not need to use dd; there are better ways.
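For file-level copying, the classic tar pipe preserves permissions, ownership, and timestamps without dd's sensitivity to device sizes. A self-contained sketch, using temporary directories in place of real mount points:

```shell
# Copy a directory tree with a tar pipe, preserving permissions and times.
# Temporary directories stand in for the real source and destination mounts.
src=$(mktemp -d)
dst=$(mktemp -d)

mkdir -p "$src/etc"
echo "hello" > "$src/etc/motd"
chmod 600 "$src/etc/motd"

# -C changes directory first, so paths inside the archive are relative;
# -p on extraction preserves the recorded permissions.
tar -C "$src" -cf - . | tar -C "$dst" -xpf -

cat "$dst/etc/motd"    # prints "hello": the file arrived intact
```

On a real clone you would point -C at the mounted source and destination file systems and run the pipe as root so ownership is preserved too.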
I am working with an NVMe card on Linux (Ubuntu 14.04).
I am seeing performance degradation on an Intel NVMe card when it is formatted with the xfs file system at its default sector size (512), or any other sector size smaller than 4096.
In the experiment I formatted the card with the xfs file system using default options. I ran fio with a 64k block size on an arm64 platform with a 64k page size.
This is the command used:
fio --rw=randread --bs=64k --ioengine=libaio --iodepth=8 --direct=1 --group_reporting --name=Write_64k_1 --numjobs=1 --runtime=120 --filename=new --size=20G
I could get only the below values
Run status group 0 (all jobs):
READ: io=20480MB, aggrb=281670KB/s, minb=281670KB/s, maxb=281670KB/s, mint=74454msec, maxt=74454msec
Disk stats (read/write):
nvme0n1: ios=326821/8, merge=0/0, ticks=582640/0, in_queue=582370, util=99.93%
I tried formatting as follows:
mkfs.xfs -f -s size=4096 /dev/nvme0n1
then the values were :
Run status group 0 (all jobs):
READ: io=20480MB, aggrb=781149KB/s, minb=781149KB/s, maxb=781149KB/s, mint=26847msec, maxt=26847msec
Disk stats (read/write):
nvme0n1: ios=326748/7, merge=0/0, ticks=200270/0, in_queue=200350, util=99.51%
I find no performance degradation with:
a 4k page size
any fio block size smaller than 64k
ext4 with its default configuration
What could be the issue? Is this an alignment issue? What am I missing here? Any help is appreciated.
The issue is your SSD's native sector size is 4K. So your file system's block size should be set to match so that reads and writes are aligned on sector boundaries. Otherwise you'll have blocks that span 2 sectors, and therefore require 2 sector reads to return 1 block (instead of 1 read).
If you have an Intel SSD, the newer ones have a variable sector size you can set using their Intel Solid State Drive DataCenter Tool. But honestly 4096 is still probably the drive's true sector size anyway and you'll get the most consistent performance using it and setting your file system to match.
On ZFS on Linux the setting is ashift=12 for 4K blocks.
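Before formatting, you can check what sector sizes the kernel reports for the device via sysfs. A guarded sketch; the device name is the one from the question, and the mkfs.xfs invocation is only echoed, never executed:

```shell
# Report the logical and physical block sizes the kernel sees for a device,
# then print the matching mkfs.xfs invocation (echoed, not executed).
sector_info() {
    dev="$1"
    q="/sys/block/$dev/queue"
    if [ ! -d "$q" ]; then
        echo "$dev: no such block device on this system"
        return 0
    fi
    echo "logical:  $(cat "$q/logical_block_size") bytes"
    echo "physical: $(cat "$q/physical_block_size") bytes"
    # Match the file system sector size to the physical block size:
    echo "would run: mkfs.xfs -f -s size=$(cat "$q/physical_block_size") /dev/$dev"
}

sector_info nvme0n1
```

If physical_block_size reports 4096 while logical_block_size reports 512, the drive is exactly the misaligned-by-default case the answer describes.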
I wrote a simple PCIe driver and I want to test whether it works; for example, whether it is possible to write to and read from the memory exposed by the device.
How can I do that?
And what else should be tested?
You need to find the sysfs entry for your device, for example
/sys/devices/pci0000:00/0000:00:07.0/0000:28:00.0
(It can be easier to get there via the symlinks in other subdirectories of /sys, e.g. /sys/class/...)
In this directory there should be (pseudo-)files named resource... which correspond to the various address ranges (Base Address Registers) of your device. I think these can be mmap()ed (but I've never done that).
There's a lot of other stuff you can do with the entries in /sys. See the kernel documentation for more details.
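Finding the sysfs entries and their resource files can be scripted; a guarded sketch that lists the BAR-backed resource files for each PCI device (it only reads sysfs, nothing is mapped):

```shell
# List the BAR (resource) files for every PCI device sysfs entry.
list_pci_resources() {
    if [ ! -d /sys/bus/pci/devices ]; then
        echo "no PCI sysfs tree on this system"
        return 0
    fi
    for d in /sys/bus/pci/devices/*; do
        [ -e "$d" ] || continue
        # resource0, resource1, ... correspond to BAR0, BAR1, ...
        echo "$d:"
        ls "$d"/resource* 2>/dev/null || echo "  (no mappable resources)"
    done
}

list_pci_resources
```

Once you have found your device's directory, its `vendor` and `device` pseudo-files confirm you have the right entry before you try mmap()ing a resource file.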
To test the memory you can follow this approach:
1) Run lspci -v
The output will look something like this:
0002:03:00.1 Ethernet controller: QUALCOMM Corporation Device ABCD (rev 11)
Subsystem: QUALCOMM Corporation Device 8470
Flags: fast devsel, IRQ 110
Memory at 11d00f1008000 (64-bit, prefetchable) [disabled] [size=32K]
Memory at 11d00f0800000 (64-bit, prefetchable) [disabled] [size=8M]
Capabilities: [48] Power Management version 3
Capabilities: [50] Vital Product Data
Capabilities: [58] MSI: Enable- Count=1/8 Maskable- 64bit+
Capabilities: [a0] MSI-X: Enable- Count=1 Masked-
Capabilities: [ac] Express Endpoint, MSI 00
Capabilities: [100] Advanced Error Reporting
Capabilities: [13c] Device Serial Number 00-00-00-00-00-00-00-00
Capabilities: [150] Power Budgeting <?>
Capabilities: [180] Vendor Specific Information: ID=0000 Rev=0 Len=028 <?>
Capabilities: [250] #12
2) We can see in the above output that the memory regions are disabled. To enable them, execute the following:
setpci -s 0002:03:00.1 COMMAND=0x02
This command enables the memory at address 11d00f1008000.
Now try to read that memory from the processor; it should be accessible.
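Note that writing COMMAND=0x02 replaces the whole register, clearing any other enabled bits; reading the register first lets you check (or OR in) just the Memory Space Enable bit, which is bit 1. A guarded sketch that reads the bit back (the bus address is the example one from above):

```shell
# Read a device's PCI COMMAND register and report whether memory space
# decoding (bit 1) is enabled. The address argument is an example.
mem_enabled() {
    addr="$1"
    if ! command -v setpci >/dev/null 2>&1; then
        echo "setpci not installed"
        return 0
    fi
    cmd=$(setpci -s "$addr" COMMAND 2>/dev/null) || { echo "$addr: device not found"; return 0; }
    [ -n "$cmd" ] || { echo "$addr: device not found"; return 0; }
    # setpci prints the value in hex without a 0x prefix; test bit 1.
    if [ $(( 0x$cmd & 0x2 )) -ne 0 ]; then
        echo "$addr: memory space enabled"
    else
        echo "$addr: memory space disabled"
    fi
}

mem_enabled 0002:03:00.1
```

Running it before and after the setpci write above confirms the enable actually took effect.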
How do I register a user-space callback function with the USB mass-storage driver in Linux?
I get the following messages on the console when a USB stick is attached.
usb 1-1: new high speed USB device using ehci_hcd and address 2
usb 1-1: Product: DataTraveler G2
usb 1-1: Manufacturer: Kingston
usb 1-1: SerialNumber: 0019E06B07F7A961877C02A9
usb 1-1: configuration #1 chosen from 1 choice
scsi0 : SCSI emulation for USB Mass Storage devices
scsi 0:0:0:0: Direct-Access Kingston DataTraveler G2 1.00 PQ: 0 ANSI: 2
SCSI device sda: 7818240 512-byte hdwr sectors (4003 MB)
sda: Write Protect is off
sda: assuming drive cache: write through
SCSI device sda: 7818240 512-byte hdwr sectors (4003 MB)
sda: Write Protect is off
sda: assuming drive cache: write through sda:sda1
sd 0:0:0:0: Attached scsi removable disk sda
sd 0:0:0:0: Attached scsi generic sg0 type 0
You could create a udev rule which executes a command when the device is inserted. Basically, you create a file containing a set of matching rules and the path to a program or script to run. It will look something like this:
KERNEL=="sd?1", ATTRS{serial}=="0019E06B07F7A961877C02A9", RUN+="/path/to/script arg1 arg2 ... argN"
This will run /path/to/script with the arguments arg1 to argN when a device node named sd?1 is created (where ? is any single character) with the serial number given in your data. You can get a lot of information from udevadm info (formerly udevinfo) to incorporate in the rule if you need finer control over when it fires, for instance to fire for all Kingston drives; then you'd need the vendor ID and perhaps some more information unique to those drives.
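Installing and testing such a rule can be sketched as a script. This version writes the rule into a temporary directory so it is safe to run anywhere; on a real system the file belongs in /etc/udev/rules.d/ and is activated with udevadm control --reload-rules (the serial number and script path are the example values from above):

```shell
# Write a udev rule file and show how it would be installed.
# A temp dir keeps the sketch harmless; the real location is
# /etc/udev/rules.d/99-usbstick.rules.
rulesdir=$(mktemp -d)
rulefile="$rulesdir/99-usbstick.rules"

cat > "$rulefile" <<'EOF'
KERNEL=="sd?1", ATTRS{serial}=="0019E06B07F7A961877C02A9", RUN+="/path/to/script arg1 arg2"
EOF

echo "rule written to $rulefile"
# On a real system:
#   sudo cp "$rulefile" /etc/udev/rules.d/
#   sudo udevadm control --reload-rules
#   udevadm info -a -n /dev/sda1   # inspect attributes to refine the match
```

The commented udevadm info invocation is how you would discover additional match keys (vendor ID, model) for broader rules.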