How to read/write sectors in range 0 - 2048 under Linux?

I have a small task: to read/write sectors in the "free area" between the MBR sector (LBA=0) and the first sector of the first partition (typically LBA=2048).
I can read/write the first 128 sectors, but past LBA=127 the write operation ends successfully without anything actually being written to disk.
So is there some kernel limitation?

It was a logic bug in the C code, so the problem is resolved:
the file offset passed to the vfs_read() routine was computed incorrectly.
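For the record, a minimal kernel-style sketch of that bug class, assuming the offset was widened to 64 bits only after the multiplication; the read_sector() helper and lba parameter are illustrative, not the original code:

#include <linux/fs.h>            /* vfs_read(), loff_t */

#define SECTOR_SIZE 512

static ssize_t read_sector(struct file *filp, char __user *buf, unsigned int lba)
{
    /* Buggy version: the product is computed in narrow integer
     * arithmetic and wraps before being widened, so I/O past a
     * certain LBA silently lands at the wrong offset:
     *
     *     loff_t pos = lba * SECTOR_SIZE;
     *
     * Fix: widen to the 64-bit loff_t BEFORE multiplying. */
    loff_t pos = (loff_t)lba * SECTOR_SIZE;

    return vfs_read(filp, buf, SECTOR_SIZE, &pos);
}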

Related

Stress-ng: RAM testing commands

Stress-ng: Can we test RAM using stress-ng? What are the commands used to test RAM on a MIPS 32 device?
There are many memory-based stressors in stress-ng:
stress-ng --class memory?
class 'memory' stressors: atomic bsearch context full heapsort hsearch
lockbus lsearch malloc matrix membarrier memcpy memfd memrate memthrash
mergesort mincore null numa oom-pipe pipe qsort radixsort remap
resources rmap stack stackmmap str stream tlb-shootdown tmpfs tsearch
vm vm-rw wcs zero zlib
Alternatively, one can use the VM-based stressors:
stress-ng --class vm?
class 'vm' stressors: bigheap brk madvise malloc mlock mmap mmapfork mmapmany
mremap msync shm shm-sysv stack stackmmap tmpfs userfaultfd vm vm-rw
vm-splice
I suggest looking at the vm stressor first, as it contains a large range of stress methods that exercise memory patterns and can possibly find broken memory:
-m N, --vm N
start N workers continuously calling mmap(2)/munmap(2) and writing
to the allocated memory. Note that this can cause systems to trip
the kernel OOM killer on Linux if not enough physical memory and
swap is available.
--vm-bytes N
mmap N bytes per vm worker, the default is 256MB. One can specify
the size as % of total available memory or in units of Bytes,
KBytes, MBytes and GBytes using the suffix b, k, m or g.
--vm-ops N
stop vm workers after N bogo operations.
--vm-hang N
sleep N seconds before unmapping memory, the default is zero
seconds. Specifying 0 will do an infinite wait.
--vm-keep
do not continually unmap and map memory, just keep on re-writing
to it.
--vm-locked
Lock the pages of the mapped region into memory using mmap
MAP_LOCKED (since Linux 2.5.37). This is similar to locking
memory as described in mlock(2).
--vm-madvise advice
Specify the madvise 'advice' option used on the memory mapped
regions used in the vm stressor. Non-Linux systems will only have
the 'normal' madvise advice; Linux systems support 'dontneed',
'hugepage', 'mergeable', 'nohugepage', 'normal', 'random',
'sequential', 'unmergeable' and 'willneed' advice. If this option
is not used then the default is to pick random madvise advice for
each mmap call. See madvise(2) for more details.
--vm-method m
specify a vm stress method. By default, all the stress methods
are exercised sequentially, however one can specify just one
method to be used if required. Each of the vm workers has 3
phases:
1. Initialised. The anonymously mapped memory region is set to a
known pattern.
2. Exercised. Memory is modified in a known predictable way.
Some vm workers alter memory sequentially, some use small or
large strides to step along memory.
3. Checked. The modified memory is checked to see if it matches
the expected result.
The vm methods containing 'prime' in their name have a stride of
the largest prime less than 2^64, allowing them to thoroughly
step through memory and touch all locations just once while also
avoiding touching adjacent memory cells. This strategy exercises
the cache and page non-locality.
Since the memory being exercised is virtually mapped, there is no
guarantee of touching page addresses in any particular physical
order. These workers should not be used to test that all the
system's memory is working correctly either; use tools such as
memtest86 instead.
The vm stress methods are intended to exercise memory in ways to
possibly find memory issues and to try to force thermal errors.
Available vm stress methods are described as follows:
Method Description
all iterate over all the vm stress methods
as listed below.
flip sequentially work through memory 8
times, each time just one bit in memory is
flipped (inverted). This will effectively
invert each byte in 8 passes.
galpat-0 galloping pattern zeros. This sets all
bits to 0 and flips just 1 in 4096 bits
to 1. It then checks to see if the 1s
are pulled down to 0 by their neighbours
or if the neighbours have been pulled up
to 1.
galpat-1 galloping pattern ones. This sets all
bits to 1 and flips just 1 in 4096 bits
to 0. It then checks to see if the 0s
are pulled up to 1 by their neighbours
or if the neighbours have been pulled
down to 0.
gray fill the memory with sequential gray
codes (these only change 1 bit at a time
between adjacent bytes) and then check
if they are set correctly.
incdec work sequentially through memory twice,
the first pass increments each byte by a
specific value and the second pass
decrements each byte back to the original
start value. The increment/decrement
value changes on each invocation of the
stressor.
inc-nybble initialise memory to a set value (that
changes on each invocation of the
stressor) and then sequentially work
through each byte incrementing the bottom
4 bits by 1 and the top 4 bits by 15.
rand-set sequentially work through memory in 64
bit chunks setting bytes in the chunk to
the same 8 bit random value. The random
value changes on each chunk. Check that
the values have not changed.
rand-sum sequentially set all memory to random
values and then summate the number of
bits that have changed from the original
set values.
read64 sequentially read memory using 32 x 64
bit reads per bogo loop. Each loop
equates to one bogo operation. This
exercises raw memory reads.
ror fill memory with a random pattern and
then sequentially rotate 64 bits of
memory right by one bit, then check the
final load/rotate/stored values.
swap fill memory in 64 byte chunks with
random patterns. Then swap each 64 byte
chunk with a randomly chosen chunk.
Finally, reverse the swap to put the
chunks back to their original place and
check if the data is correct. This
exercises adjacent and random memory
load/stores.
move-inv sequentially fill memory 64 bits at a
time with random values, and then check
if the memory is set correctly. Next,
sequentially invert each 64 bit pattern
and again check if the memory is set as
expected.
modulo-x fill memory over 23 iterations. Each
iteration starts one byte further along
from the start of the memory and steps
along in 23 byte strides. In each
stride, the first byte is set to a
random pattern and all other bytes are
set to the inverse. Then it checks to
see if the first byte contains the
expected random pattern. This exercises
cache store/reads as well as seeing if
neighbouring cells influence each other.
prime-0 iterate 8 times by stepping through
memory in very large prime strides,
clearing just one bit at a time in every
byte. Then check to see if all bits are
set to zero.
prime-1 iterate 8 times by stepping through
memory in very large prime strides,
setting just one bit at a time in every
byte. Then check to see if all bits are
set to one.
prime-gray-0 first step through memory in very large
prime strides, clearing just one bit
(based on a gray code) in every byte.
Next, repeat this but clear the other 7
bits. Then check to see if all bits are
set to zero.
prime-gray-1 first step through memory in very large
prime strides, setting just one bit
(based on a gray code) in every byte.
Next, repeat this but set the other 7
bits. Then check to see if all bits are
set to one.
rowhammer try to force memory corruption using the
rowhammer memory stressor. This fetches
two 32 bit integers from memory and
forces a cache flush on the two
addresses multiple times. This has been
known to force bit flipping on some
hardware, especially with lower
frequency memory refresh cycles.
walk-0d for each byte in memory, walk through
each data line setting them to low (and
the others are set high) and check that
the written value is as expected. This
checks if any data lines are stuck.
walk-1d for each byte in memory, walk through
each data line setting them to high (and
the others are set low) and check that
the written value is as expected. This
checks if any data lines are stuck.
walk-0a in the given memory mapping, work
through a range of specially chosen
addresses working through address lines
to see if any address lines are stuck
low. This works best with physical
memory addressing, however, exercising
these virtual addresses has some value
too.
walk-1a in the given memory mapping, work
through a range of specially chosen
addresses working through address lines
to see if any address lines are stuck
high. This works best with physical
memory addressing, however, exercising
these virtual addresses has some value
too.
write64 sequentially write memory using 32 x 64
bit writes per bogo loop. Each loop
equates to one bogo operation. This
exercises raw memory writes. Note that
memory writes are not checked at the end
of each test iteration.
zero-one set all memory bits to zero and then
check if any bits are not zero. Next,
set all the memory bits to one and check
if any bits are not one.
--vm-populate
populate (prefault) page tables for the memory mappings; this
can stress swapping. Only available on systems that support
MAP_POPULATE (since Linux 2.5.46).
So to run 1 vm stressor that uses 75% of memory, exercising all the vm stress methods with verification, for 10 minutes with verbose mode enabled, use:
stress-ng --vm 1 --vm-bytes 75% --vm-method all --verify -t 10m -v
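The Initialised/Exercised/Checked cycle described in the manual boils down to something like the following simplified C sketch of a flip-style test (illustrative only, not stress-ng's actual code):

#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>

static int flip_test(uint8_t *mem, size_t len)
{
    const uint8_t pattern = 0xA5;          /* 1. Initialise: known pattern */
    uint8_t expected = pattern;

    for (size_t i = 0; i < len; i++)
        mem[i] = pattern;

    for (int bit = 0; bit < 8; bit++) {
        uint8_t mask = (uint8_t)(1u << bit);
        for (size_t i = 0; i < len; i++)   /* 2. Exercise: invert one bit */
            mem[i] ^= mask;
        expected ^= mask;
        for (size_t i = 0; i < len; i++)   /* 3. Check against expected */
            if (mem[i] != expected)
                return -1;                 /* possible bad memory cell */
    }
    return 0;
}

int main(void)
{
    size_t len = 64u * 1024 * 1024;        /* 64 MB region, as an example */
    uint8_t *mem = malloc(len);
    if (!mem)
        return 1;
    printf("flip test: %s\n", flip_test(mem, len) == 0 ? "pass" : "FAIL");
    free(mem);
    return 0;
}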

Who could specifically explain the remaining 2 bytes in first sector of disk?

We know the size of a sector on disk is 512 bytes. We also know the first sector mainly holds the MBR bootstrap code and the partition table. The bootstrap code area is 446 bytes and the partition table is 64 bytes. But 446 + 64 = 510 bytes, which isn't equal to 512 bytes. What are the remaining 2 bytes used for?
The last two bytes are the boot signature. The BIOS is supposed to check that these bytes have the values 0x55 and then 0xAA before executing the bootstrap code loaded from sector 0, so this isn't really a Linux question.
https://support.microsoft.com/en-us/kb/149877
Wikipedia has tables for the common layouts, and the last two bytes are used for the "boot signature" (55 AA).
The BIOS is supposed to check this before trying to launch into the booting process.
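For reference, a minimal C sketch of that layout; the struct and field names are illustrative (and note that some modern variants reserve 440 bytes for code plus a 4-byte disk signature):

#include <stdint.h>
#include <stdio.h>

#pragma pack(push, 1)
struct mbr {
    uint8_t bootstrap[446];       /* boot code                      */
    uint8_t partitions[4][16];    /* 4 x 16-byte entries = 64 bytes */
    uint8_t sig[2];               /* boot signature: 0x55, 0xAA     */
};                                /* 446 + 64 + 2 == 512            */
#pragma pack(pop)

int main(void)
{
    struct mbr m;
    FILE *f = fopen("/dev/sda", "rb");    /* example device, needs root */
    if (!f || fread(&m, sizeof m, 1, f) != 1)
        return 1;
    printf("signature: %02X %02X (%s)\n", m.sig[0], m.sig[1],
           (m.sig[0] == 0x55 && m.sig[1] == 0xAA) ? "valid" : "invalid");
    fclose(f);
    return 0;
}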

Intel NVMe drive Performance degradation with xfs filesystem with sector size other than 4096

I am working with an NVMe card on Linux (Ubuntu 14.04).
I am seeing performance degradation on an Intel NVMe card when it is formatted with the xfs filesystem with its default sector size (512), or any other sector size less than 4096.
In the experiment I formatted the card with the xfs filesystem with default options, then ran fio with a 64k block size on an arm64 platform with a 64k page size.
This is the command used:
fio --rw=randread --bs=64k --ioengine=libaio --iodepth=8 --direct=1 --group_reporting --name=Write_64k_1 --numjobs=1 --runtime=120 --filename=new --size=20G
I could only get the values below:
Run status group 0 (all jobs):
READ: io=20480MB, aggrb=281670KB/s, minb=281670KB/s, maxb=281670KB/s, mint=74454msec, maxt=74454msec
Disk stats (read/write):
nvme0n1: ios=326821/8, merge=0/0, ticks=582640/0, in_queue=582370, util=99.93%
I tried formatting as follows:
mkfs.xfs -f -s size=4096 /dev/nvme0n1
then the values were:
Run status group 0 (all jobs):
READ: io=20480MB, aggrb=781149KB/s, minb=781149KB/s, maxb=781149KB/s, mint=26847msec, maxt=26847msec
Disk stats (read/write):
nvme0n1: ios=326748/7, merge=0/0, ticks=200270/0, in_queue=200350, util=99.51%
I find no performance degradation when used with:
- a 4k page size
- any fio block size less than 64k
- ext4 with default configs
What could be the issue? Is this an alignment issue? What am I missing here? Any help appreciated.
The issue is your SSD's native sector size is 4K. So your file system's block size should be set to match so that reads and writes are aligned on sector boundaries. Otherwise you'll have blocks that span 2 sectors, and therefore require 2 sector reads to return 1 block (instead of 1 read).
If you have an Intel SSD, the newer ones have a variable sector size you can set using their Intel Solid State Drive DataCenter Tool. But honestly 4096 is still probably the drive's true sector size anyway and you'll get the most consistent performance using it and setting your file system to match.
On ZFS on Linux the setting is ashift=12 for 4K blocks.
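To check what sector sizes a drive actually reports under Linux, here is a small sketch using the standard block-device ioctls; the device path is an example:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>      /* BLKSSZGET, BLKPBSZGET */

int main(void)
{
    int fd = open("/dev/nvme0n1", O_RDONLY);   /* example device */
    if (fd < 0) { perror("open"); return 1; }

    int logical = 0;            /* logical (addressable) sector size */
    unsigned int physical = 0;  /* physical sector size of the media */
    ioctl(fd, BLKSSZGET, &logical);
    ioctl(fd, BLKPBSZGET, &physical);

    printf("logical: %d bytes, physical: %u bytes\n", logical, physical);
    close(fd);
    return 0;
}

The same values are exposed in sysfs under /sys/block/nvme0n1/queue/logical_block_size and physical_block_size.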

Understanding Assembly Code

Could anyone provide some insight into the following assembly code?
More Information:
The bootloader is in fact a small 16-bit bootloader that deciphers, using XOR decryption, a bigger one: a Linux bootloader located in sectors 3 to 34 (1 sector is 512 bytes on that disk).
The whole thing is a protection system for an executable running on embedded Linux.
The version where the protection was removed has the Linux bootloader already deciphered (we were able to reverse it using IDA), so we assume that the XOR key must consist only of zeros in the version without the protection.
If we look at offsets 0x800 to 0x8FF in the version with the protection removed, that area is not filled with zeros, so this cannot be the key; otherwise this version couldn't be loaded, as it would XOR plain data and load nothing but garbage.
Sectors 3-34 are enciphered in the original version and in the clear in our version (protection removed), but the MBR code (the small pre-bootloader) is identical in both versions.
So could it be that there is a small detail in the assembly code of the MBR that slightly changes the location of the XOR key?
This is simply an exercise I am doing to understand assembly code loaders better, and I found this one quite a challenge. Thank you for your input so far!
Well, reading assembly code is a lot of work. I'll just stick to answering where the "key" comes from.
The BIOS loads the MBR at 0000:7C00 and sets DL to the drive from which it was loaded.
First, your MBR sets up a stack growing down from 0000:7C00.
Then, it copies itself to 0000:0600-0000:07ff and does the far jump which I assume is to the next instruction, however on the copied version, 0000:061D. (EDIT: this copy is a pretty standard thing for an MBR to do, typically to this address, it leaves the address space above free) (EDIT: it copies itself fully, I misread).
Then, it tries 5 times to read sector 2 into 0000:0700-0000:08ff, and halts with an error message if it fails (loc_78).
Then, it tries 5 times to read 32 sectors starting from sector 3 into 2000:0-2000:3FFF (absolute address 20000-23FFF, that's 16kB starting at 128kB).
If all goes well we are at loc_60 and we know what we have in memory.
The loop on loc_68 will use as destination the buffer holding those 32 sectors.
It will use as source the buffer starting at 0000:0800 (which is the second 256 bytes of the buffer we read into 0:0700-0:08ff, the second 256 bytes of sector 2 of the disk).
On each iteration, LODSW adds 2 to SI.
When SI reaches 840, it is set back to 800 by the AND. Therefore, the "key" buffer goes from 800 to 83F, which is to say the 64 bytes starting at byte 256 of sector 2 of the disk.
I have the impression that the XORing falls one byte short... that CX should have been set to 4000h, not 3FFF. I think this is a bug in the code.
After the 16kB buffer at physical address 20000 has been circularly XORed with that "key" buffer, we jump into it with a far jump to 2000:0.
Right? Does it fit? Is the key there?
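For clarity, the loop described above can be sketched in C as follows, under that reading (a 64-byte key taken from bytes 256-319 of sector 2, applied circularly over the 16kB image from sectors 3-34); the names are illustrative:

#include <stddef.h>
#include <stdint.h>

/* key:   64 bytes starting at byte 256 of sector 2
 *        (mapped at 0000:0800-0000:083F)
 * image: the 16kB read from sectors 3-34 (mapped at 2000:0000) */
static void decipher(uint8_t *image, size_t image_len, const uint8_t *key)
{
    for (size_t i = 0; i < image_len; i++)
        image[i] ^= key[i & 0x3F];  /* the AND wraps SI back to the key start */
}

The MBR code does this a word at a time with LODSW, so the full image would correspond to decipher(image, 0x4000, key); the suspected CX off-by-one mentioned above would leave the very end of the buffer un-XORed.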

Copying sectors?

Is there a script I can use to copy some particular sectors of my hard disk?
I actually have two partitions, say A and B, on my hard disk. Both are the same size. What I want is to run a program which copies data from the starting sector of A to the starting sector of B, until the end sector of A has been copied to the end sector of B.
Looking for possible solutions...
Thanks a lot
How about using dd? The following copies 1024 blocks (of 512 bytes each, which is usually the sector size), skipping the first 4096 blocks of the input, from the sda partition to the sdb partition:
dd if=/dev/sda1 of=/dev/sdb1 bs=512 count=1024 skip=4096
PS: I also suppose this should be a Super User or rather Server Fault question.
If you want to access the hard drive directly, not via partitions, then, well, just do that. Something like
dd if=/dev/sda of=/dev/sda bs=512 count=1024 skip=XX seek=YY
should copy 1024 sectors starting at sector XX to sectors YY->YY+1024. Of course, if the sector ranges overlap, results are probably not going to be pretty.
(Personally, I wouldn't attempt this without first taking a backup of the disk, but YMMV)
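If you'd rather have a small program than dd, here is a minimal C sketch using pread(2)/pwrite(2); the device paths, sector size and sector count are example values, and writing to raw partitions needs root:

#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>

#define SECTOR 512

int main(void)
{
    /* Example: copy the first 1024 sectors of partition A onto B. */
    int in  = open("/dev/sda1", O_RDONLY);
    int out = open("/dev/sdb1", O_WRONLY);
    if (in < 0 || out < 0) { perror("open"); return 1; }

    uint8_t buf[SECTOR];
    for (uint64_t lba = 0; lba < 1024; lba++) {
        off_t off = (off_t)lba * SECTOR;        /* widen before multiplying */
        if (pread(in, buf, SECTOR, off) != SECTOR ||
            pwrite(out, buf, SECTOR, off) != SECTOR) {
            perror("copy");
            return 1;
        }
    }
    close(in);
    close(out);
    return 0;
}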
I am not sure if what you are looking for is a partition copier.
If that is what you mean, try Clonezilla.
(It will show you the exact statement it uses, so it can also be used to find out how to do that in a script afterwards.)
