There are product_uuid and product_serial files in the directory /sys/class/dmi/id/.
How are they generated? What is the difference?
Can I change these files?
Do the values survive reinstalling the operating system?
How are they generated?
Those values are generated in kernel code. You can find them pretty easily using the git grep command (with the keywords you are interested in) in your kernel source directory:
$ git grep --all-match -n -e '\bdmi\b' -e product_uuid -e product_serial
So, product_uuid and product_serial sysfs nodes are created in drivers/firmware/dmi-id.c:
DEFINE_DMI_ATTR_WITH_SHOW(product_serial, 0400, DMI_PRODUCT_SERIAL);
DEFINE_DMI_ATTR_WITH_SHOW(product_uuid, 0400, DMI_PRODUCT_UUID);
From the DEFINE_DMI_ATTR_WITH_SHOW definition you can see that both attributes are accessed via the sys_dmi_field_show() function, which in turn calls dmi_get_system_info(), which just returns the corresponding element of the dmi_ident array. That table is populated in the dmi_decode() routine:
dmi_save_ident(dm, DMI_PRODUCT_SERIAL, 7);
dmi_save_uuid(dm, DMI_PRODUCT_UUID, 8);
So product_uuid is generated in the dmi_save_uuid() function; just read its code to understand how it's done.
product_serial is generated in the dmi_save_ident() function, which boils down to code like this:
(struct dmi_header *)(dmi_base)[7];
where dmi_base is the address (remapped to virtual memory, obviously) of the DMI table, and 7 corresponds to the DMI_PRODUCT_SERIAL constant.
To better understand this, please see the SMBIOS specification, specifically Table 9 – System Information (Type 1) Structure, which corresponds to this command:
# dmidecode --type 1
What is the difference?
As for product_uuid, look at the SMBIOS specification, section 7.2.1 "System - UUID". It has a description and also a table explaining each part of this number. Using that table you can decode your UUID and extract some information from it, such as a timestamp.
As for product_serial, I believe it's self-explanatory: it's just the serial number of your device. You can usually find it printed on a sticker on your computer; for my laptop, for example, it's on the bottom. It's the same string I see in /sys/class/dmi/id/product_serial.
Can I change these files?
Those files are not real files but just an interface to kernel functions; read about sysfs for details. So in order to "change" those files you would need to edit the kernel files mentioned above accordingly, then rebuild the whole kernel and boot it (instead of the one provided by your distribution).
Also, as @ChristopheVu-Brugier mentioned in a comment, you can change those values in the DMI table (in a somewhat tricky way, though), but I wouldn't recommend it. Those values definitely have some meaning and may be useful in some cases (if not for you, then for some software on your PC).
Do the values survive reinstalling the operating system?
Yes. Those values are obtained from the DMI table, which is stored along with the BIOS in permanent memory (the flash chip with the BIOS on your motherboard); reading those files just makes the kernel read the values out of this DMI table, so reinstalling the operating system does not affect them.
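For completeness, here is a tiny sketch (in Go, purely as an illustration; any language works) of reading those two attributes from userspace. It only reads the sysfs files discussed above, and since both are mode 0400 it needs to run as root.

package main

import (
    "fmt"
    "os"
    "strings"
)

func main() {
    // Both attributes are plain-text files exposed by the kernel's dmi-id driver.
    for _, attr := range []string{"product_uuid", "product_serial"} {
        b, err := os.ReadFile("/sys/class/dmi/id/" + attr)
        if err != nil {
            fmt.Fprintf(os.Stderr, "%s: %v\n", attr, err)
            continue
        }
        fmt.Printf("%s: %s\n", attr, strings.TrimSpace(string(b)))
    }
}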
Related
I have a very specific need:
to partially replace the content of a flash and to move MTD partition boundaries.
Current map is:
u-boot 0x000000 0x040000
u-boot-env 0x040000 0x010000
kernel 0x050000 0x230000
initrd 0x280000 0x170000
scripts 0x3f0000 0x010000
filesystem 0x400000 0xbf0000
firmware 0xff0000 0x010000
While desired output is:
u-boot 0x000000 0x040000
u-boot-env 0x040000 0x010000
kernel 0x050000 0x230000
filesystem 0x280000 0xd70000
firmware 0xff0000 0x010000
This means collapsing initrd, scripts and filesystem into a single area while leaving the others alone.
The problem is that this should be achieved from the running system (booted with the "old" configuration), and I have to rewrite the kernel and the "new" filesystem before rebooting.
The system is embedded, so I have little room for maneuver (I do have an SD card, though).
Of course the rewritten kernel will have the "new" configuration written in its DTB.
The problem is the transition.
Note: I have seen this question, but it is very old and has the drawback of needing kernel patches, which I would like to avoid.
NOTE 2: this question has been flagged for deletion as "not about programming". I beg to disagree: I need to perform said operation on ~14k devices, most of them already sold to customers, so any workable solution should involve, at the very least, scripting.
NOTE 3: if absolutely necessary I can even consider (small) kernel modifications (yes, I have the means to update the kernel remotely).
I will leave the Accepted answer as-is, but, for anyone who happens to come here to find a solution, I want to point out that:
Recent (less than 4 years old) mtd-utils, coupled with a 4.0+ kernel, support:
Definition of a "master" device (an MTD device representing the full, unpartitioned flash). This is a kernel option.
mtd-utils has a specific mtd-part utility that can add/delete MTD partitions dynamically. NOTE: this utility works if (and only if) the above master-device option is enabled in the kernel.
With the above utility it's possible to build multiple, possibly overlapping partitions; use with care!
I have three ideas/suggestions:
Instead of moving the partitions, can you just split the "new" filesystem image into chunks and write them to the corresponding "old" MTD partitions? This way you don't really need to change the MTD partition map. After booting into the new kernel, it will see the new contiguous root filesystem. For a JFFS2 filesystem, this should be fairly straightforward to do using split or dd, flash_erase and nandwrite. Something like:
# WARNING: this script assumes that it runs from tmpfs and the old root filesystem is already unmounted.
# Also, it assumes that your shell has arithmetic evaluation, which handles hex (my busybox 1.29 ash does this).
# assuming newrootfs.img is the image of new rootfs
new_rootfs_img="newrootfs.img"
mtd_initrd="/dev/mtd3"
mtd_initrd_size=0x170000
mtd_scripts="/dev/mtd4"
mtd_scripts_size=0x010000
mtd_filesystem="/dev/mtd5"
mtd_filesystem_size=0xbf0000
# prepare chunks of new filesystem image
bs="0x1000"
# note: using arithmetic evaluation $(()) to convert from hex and do the math.
# dd doesn't handle hex numbers ("dd: invalid number '0x1000'") -- $(()) works this around
dd if="${new_rootfs_img}" of="rootfs_initrd" bs=$(( bs )) count=$(( mtd_initrd_size / bs ))
dd if="${new_rootfs_img}" of="rootfs_scripts" bs=$(( bs )) count=$(( mtd_scripts_size / bs )) skip=$(( mtd_initrd_size / bs ))
dd if="${new_rootfs_img}" of="rootfs_filesystem" bs=$(( bs )) count=$(( mtd_filesystem_size / bs )) skip=$(( ( mtd_initrd_size + mtd_scripts_size ) / bs ))
# there's no going back after this point
flash_eraseall -j "${mtd_initrd}"
flash_eraseall -j "${mtd_scripts}"
flash_eraseall -j "${mtd_filesystem}"
nandwrite -p "${mtd_initrd}" rootfs_initrd
nandwrite -p "${mtd_scripts}" rootfs_scripts
nandwrite -p "${mtd_filesystem}" rootfs_filesystem
# don't forget to update the kernel too
There is kernel support for concatenating MTD devices (which is exactly what you're trying to do). I don't see an easy way to use it, but you could create a kernel module that concatenates the desired partitions for you into a contiguous MTD device.
In order to combine the 3 MTD partitions into one to write the new filesystem, you could create a dm-linear mapping over the 3 mtdblocks and then turn it back into an MTD device using block2mtd (i.e. mtdblock + device mapper linear + block2mtd). But it looks very awkward and I don't know if it will work well (for, say, OOB data).
EDIT 1: added a comment explaining the use of $(( bs )): it converts from hex, since dd doesn't handle hex numbers directly (neither coreutils dd nor busybox dd).
AFAIK, suggestion 1 in @andrey's answer is wrong.
An MTD partition is made of a sequence of blocks, any of which could be bad or go bad at any time. This is why the simple MTD char abstraction exists: an MTD char device (not the mtdblock one) is read sequentially and skips bad blocks. nandwrite also writes sequentially and skips bad blocks.
An MTD char device sort of acts like:
a single file into which you cannot seek randomly, from which you can only read sequentially from the beginning to the end (or to wherever you get bored);
a single file into which you cannot seek randomly, to which you can only write sequentially from the beginning (or from an erase block where you previously stopped reading) all the way to the end (that is, you can truncate and append, but you cannot write mid-file). To write, you first need to erase all erase blocks from where you start writing to the end of the partition.
This means that the partition size is the maximum theoretical capacity; typically the capacity will be less due to bad blocks, and it can be effectively reduced every time you rewrite the partition. You can never expect to write the full size of an MTD partition.
This is where @andrey's suggestion 1 is wrong: it breaks the file to be written into max-sized pieces before writing each piece, but you never know beforehand how much data will fit into an MTD partition without actually writing that data.
Instead, you typically need to write some data and pray there will be enough good blocks to fit it. If at some point there are not, the write fails and the device has reached end-of-life. Needless to say, the larger the fraction of a partition you need, the higher the likelihood that the write will fail (and when that happens, it typically means the device is toast).
To actually implement something akin to suggestion 1, you need to start writing into a partition (skipping bad blocks), and when you run out of erase blocks, you continue writing into the next partition, and so on. The point being: you cannot know where the data boundaries will lie until you actually write the data and fill each partition; there is no other way.
I want to copy files from one place to another, and the problem is that I deal with a lot of sparse files.
Is there any (easy) way of copying sparse files without them becoming huge at the destination?
My basic code:
out, err := os.Create(bricks[0] + "/" + fileName)
in, err := os.Open(event.Name)
io.Copy(out, in)
Some background theory
Note that io.Copy() pipes raw bytes – which is sort of understandable once you consider that it pipes data from an io.Reader to an io.Writer, which provide Read([]byte) and Write([]byte), respectively.
As such, io.Copy() is able to deal with absolutely any source providing
bytes and absolutely any sink consuming them.
On the other hand, the location of the holes in a file is "side-channel" information which "classic" syscalls such as read(2) hide from their users.
io.Copy() is not able to convey such side-channel information in any way.
In other words, file sparseness was originally just an idea for storing data efficiently behind the user's back.
So, no, there's no way io.Copy() could deal with sparse files in itself.
What to do about it
You'd need to go one level deeper and implement all this using the syscall package and some manual tinkering.
To work with holes, you should use the SEEK_HOLE and SEEK_DATA special values for the lseek(2) syscall, which, while formally non-standard, are supported by all major platforms.
Unfortunately, support for those "whence" positions is present neither in the stock syscall package (as of Go 1.8.1) nor in the golang.org/x/sys tree.
But fear not, there are two easy steps:
First, the stock syscall.Seek() is actually mapped to lseek(2)
on the relevant platforms.
Next, you'd need to figure out the correct values for SEEK_HOLE and
SEEK_DATA for the platforms you need to support.
Note that they are free to be different between different platforms!
Say, on my Linux system I can run a simple
$ grep -E 'SEEK_(HOLE|DATA)' </usr/include/unistd.h
# define SEEK_DATA 3 /* Seek to next data. */
# define SEEK_HOLE 4 /* Seek to next hole. */
…to figure out the values for these symbols.
Now, say, you create a Linux-specific file in your package
containing something like
// +build linux
const (
SEEK_DATA = 3
SEEK_HOLE = 4
)
and then use these values with syscall.Seek().
The file descriptor to pass to syscall.Seek() and friends
can be obtained from an opened file using the Fd() method
of os.File values.
The pattern to use when reading is to detect regions containing data, and read the data from them – see this for one example.
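As a minimal sketch of that pattern (Linux-only, relying on the SEEK_DATA/SEEK_HOLE values defined above; dataRegions is just an illustrative name), walking the data extents of a file might look like this:

package main

import (
    "fmt"
    "os"
    "syscall"
)

const (
    SEEK_DATA = 3 // Linux-specific, see the const block above
    SEEK_HOLE = 4
)

// dataRegions prints the [start, end) extents of f that actually contain data.
func dataRegions(f *os.File) error {
    fd := int(f.Fd())
    var offset int64
    for {
        // Find the start of the next data region at or after offset.
        start, err := syscall.Seek(fd, offset, SEEK_DATA)
        if err != nil {
            if err == syscall.ENXIO {
                return nil // no data past offset: we're done
            }
            return err
        }
        // Find the hole that terminates this data region.
        end, err := syscall.Seek(fd, start, SEEK_HOLE)
        if err != nil {
            return err
        }
        fmt.Printf("data: [%d, %d)\n", start, end)
        offset = end
    }
}

func main() {
    f, err := os.Open(os.Args[1])
    if err != nil {
        panic(err)
    }
    defer f.Close()
    if err := dataRegions(f); err != nil {
        panic(err)
    }
}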
Note that this deals with reading sparse files; but if you want to actually transfer them as sparse – that is, keeping this property of theirs – the situation is more complicated: it appears to be even less portable, so some research and experimentation is due.
On Linux, it appears you could use fallocate(2) with FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE to try to punch a hole at the end of the file you're writing to; if that legitimately fails (with syscall.EOPNOTSUPP), you just shovel as many zeroed blocks into the destination file as are covered by the hole you're reading – in the hope that the OS will do the right thing and convert them to a hole by itself.
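As far as I can tell, current revisions of golang.org/x/sys/unix do export Fallocate() and the FALLOC_FL_* constants (unlike the Go 1.8-era state described above), so a hedged sketch of the hole-punching part (punchHole is an illustrative helper name, not a real API) could be as small as:

// +build linux

package sparse

import "golang.org/x/sys/unix"

// punchHole deallocates length bytes at offset off in the file behind fd,
// keeping the file size unchanged. If the filesystem does not support it,
// the error (typically unix.EOPNOTSUPP) tells the caller to fall back to
// writing zeroed blocks instead.
func punchHole(fd int, off, length int64) error {
    return unix.Fallocate(fd, unix.FALLOC_FL_PUNCH_HOLE|unix.FALLOC_FL_KEEP_SIZE, off, length)
}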
Note that some filesystems do not support holes at all – as a concept.
One example is the filesystems in the FAT family.
What I'm leading you to is that the inability to create a sparse file might actually be a property of the target filesystem in your case.
You might find Go issue #13548 "archive/tar: add support for writing tar containing sparse files" to be of interest.
One more note: you might also consider checking whether the destination directory you copy a source file to resides on the same filesystem as the source file, and if it does, use syscall.Rename() (on POSIX systems) or os.Rename() to just move the file across directories without actually copying its data.
You don't need to resort to syscalls.
package main

import "os"

func main() {
    // Error handling omitted for brevity.
    f, _ := os.Create("/tmp/sparse.dat")
    f.Write([]byte("start"))
    // Seek ~10 MiB past the start and write there: the gap becomes a hole.
    f.Seek(1024*1024*10, 0)
    f.Write([]byte("end"))
}
Then you'll see:
$ ls -l /tmp/sparse.dat
-rw-rw-r-- 1 soren soren 10485763 Jun 25 14:29 /tmp/sparse.dat
$ du /tmp/sparse.dat
8 /tmp/sparse.dat
It's true you can't use io.Copy as is. Instead you need to implement an alternative to io.Copy which reads a chunk from the src, checks if it's all '\0'. If it is, just dst.Seek(len(chunk), os.SEEK_CUR) to skip past that part in dst. That particular implementation is left as an exercise to the reader :)
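For what it's worth, a rough sketch of that exercise might look like the following; sparseCopy is my own name for it, and it assumes dst is a freshly created, empty file positioned at offset 0:

package main

import (
    "bytes"
    "io"
    "os"
)

// sparseCopy copies src to dst, seeking over all-zero chunks instead of
// writing them, so the destination ends up sparse where the source was.
func sparseCopy(dst *os.File, src io.Reader) (int64, error) {
    const chunk = 64 * 1024
    buf := make([]byte, chunk)
    zero := make([]byte, chunk)
    var written int64
    for {
        n, rerr := src.Read(buf)
        if n > 0 {
            if bytes.Equal(buf[:n], zero[:n]) {
                // All zeros: leave a hole by just advancing the write offset.
                if _, err := dst.Seek(int64(n), io.SeekCurrent); err != nil {
                    return written, err
                }
            } else if _, err := dst.Write(buf[:n]); err != nil {
                return written, err
            }
            written += int64(n)
        }
        if rerr == io.EOF {
            break
        }
        if rerr != nil {
            return written, rerr
        }
    }
    // A trailing hole only extends the file once we set its size explicitly.
    return written, dst.Truncate(written)
}

func main() {
    in, err := os.Open(os.Args[1])
    if err != nil {
        panic(err)
    }
    defer in.Close()
    out, err := os.Create(os.Args[2])
    if err != nil {
        panic(err)
    }
    defer out.Close()
    if _, err := sparseCopy(out, in); err != nil {
        panic(err)
    }
}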
I'm analyzing the block layer's sysfs functions.
I added (attached) a file (a diagram) which I made to explain the function sequence flow of /usr/src/linux-source-4.8.0/linux-source/4.8.0/block/blk-mq-sysfs.c.
I understood these functions' relationships, but I couldn't find out how the kernel changes the values of the attribute files.
I heard that these files are created in the sysfs hierarchy by calling the sysfs_create_group() function.
When I issue some I/O requests, the system creates some files like the ones below
(I use an NVMe SSD 750 series).
root@leedoosol:/sys/devices/pci0000:00/0000:00:03.0/0000:03:00.0/nvme/nvme0/nvme0n1/mq/0/cpu0# ls
completed dispatched merged rq_list
The kernel presumably made these files to give us information about the number of completed requests, the number of dispatched requests, the number of merged requests, and the pending request list.
The kernel should be changing the values of these files while dealing with I/O requests, but I don't know when and how it changes them.
I want to know when and how the kernel changes the values of these attribute files, because I have to find out what these values mean exactly.
Here is my environment:
1.) 2 sockets with 10 cores each
2.) Kernel version: 4.8.17
3.) Intel SSD 750 series
Maybe I found the answer: the show/store functions are called when I read my attribute file.
The kernel doesn't keep fixed values in these attribute files; it doesn't need to.
When I use 'cat' on the attribute file (in my example, 'dispatched'), the file is opened and several structs concerned with that file are created in RAM (of course, in the case of sysfs, there is no backing store).
The read() function is called, and then the show() function is called.
I am writing a C program to calculate current bandwidth usage on a data link. I also need the bandwidth of the link.
For wireless links, iwconfig prints the wireless link characteristics, which are stored in /proc/net/wireless. But what about the data rate of the wireless link? Is it also stored somewhere in that (or another) file?
Also, for Ethernet links, are there similar files where all the link details are stored?
You should use libnl to query interface information. Don't rely on files under /proc or on scraping the output of iw or iwconfig, since their output format might change at any time.
If you are curious about the details, check out the source code of iw. It's easy to understand (I used it myself to understand how to query nl80211 for interface info).
So, I solved my problem by reading tx_bytes and rx_bytes from the statistics directory of each interface. My function is called every 10 seconds, so I save the current value of tx/rx_bytes in memory, and when my function is called the next time, I use the current and previous values to calculate the data rate.
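A minimal sketch of that approach (written in Go here purely as an illustration, since the question itself is about C; eth0 and the 10-second interval are example values):

package main

import (
    "fmt"
    "os"
    "strconv"
    "strings"
    "time"
)

// readCounter reads a single numeric sysfs counter such as rx_bytes.
func readCounter(path string) uint64 {
    b, err := os.ReadFile(path)
    if err != nil {
        return 0
    }
    n, _ := strconv.ParseUint(strings.TrimSpace(string(b)), 10, 64)
    return n
}

func main() {
    const iface = "eth0" // example interface name
    const interval = 10 * time.Second
    rxPath := "/sys/class/net/" + iface + "/statistics/rx_bytes"
    txPath := "/sys/class/net/" + iface + "/statistics/tx_bytes"

    prevRx, prevTx := readCounter(rxPath), readCounter(txPath)
    for {
        time.Sleep(interval)
        rx, tx := readCounter(rxPath), readCounter(txPath)
        // The delta of the byte counters divided by the elapsed time gives the rate.
        fmt.Printf("rx: %d B/s, tx: %d B/s\n",
            (rx-prevRx)/uint64(interval.Seconds()),
            (tx-prevTx)/uint64(interval.Seconds()))
        prevRx, prevTx = rx, tx
    }
}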
Is there any C# way to check whether an ISO file is valid, i.e. that it is in a valid ISO format, or any other check that can be done?
The scenario is this: a text file (or a file in any other format) is renamed to .iso and handed over for further processing. I want to check whether this ISO file is a valid ISO file or not. Is there any programmatic way to do that, e.g. by checking some property of the file, the file header, or anything else?
Thanks in advance for any reply.
To quote the wiki gods:
There is no standard definition for ISO image files. ISO disc images
are uncompressed and do not use a particular container format; they
are a sector-by-sector copy of the data on an optical disc, stored
inside a binary file. ISO images are expected to contain the binary
image of an optical media file system (usually ISO 9660 and its
extensions or UDF), including the data in its files in binary format,
copied exactly as they were stored on the disc. The data inside the
ISO image will be structured according to the file system that was
used on the optical disc from which it was created.
reference
So you basically want to detect whether a file is an ISO file or not, and not so much check the file to see whether it's valid (e.g. incomplete, corrupted, ...)?
There's no easy way to do that and there certainly is not a C# function (that I know of) that can do this.
The best way to approach this is to guess the number of bytes per block stored in the ISO.
Guess, or simply try all possible situations one by one, unless you have an associated CUE file that actually stores this information. PS. If the ISO is accompanied by a same-name .CUE file then you can be 99.99% sure that it's an ISO file anyway.
Sizes would be 2048 (user data) or 2352 (raw or audio) bytes per block. Other sizes are possible as well; I just mentioned the two most common ones. In the case of 2352 bytes per block, the user data starts at an offset within the block, usually 16 or 24 depending on the mode.
Next I would try to detect the CD/DVD file-systems. Assume that the image starts at sector 0 (although you could for safety implement a scan that assumes -150 to 16 for instance).
You'll need to look into the specifics of ISO9660 and UDF for that. Sectors 16, 256 etc. will be interesting sectors to check.
Bottom line: it's not an easy task, and you will need to familiarize yourself with optical disc layouts and optical disc file systems (ISO9660, UDF, but possibly also HFS and even FAT on BD).
If you're digging into this, I strongly suggest getting IsoBuster (www.isobuster.com) to help you see what the block size is, which file systems there are, and to inspect the different key blocks, etc.
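As a starting point, here is a hedged sketch (in Go, but the byte-level check translates directly to C#) of the simplest useful test described above: looking for the ISO9660 volume-descriptor signature "CD001" at sector 16, assuming 2048-byte sectors. Raw 2352-byte images and UDF-only discs would need the extra handling discussed above.

package main

import (
    "bytes"
    "fmt"
    "os"
)

// looksLikeISO9660 reports whether the file carries an ISO9660 volume
// descriptor at sector 16 (2048-byte sectors assumed).
func looksLikeISO9660(path string) (bool, error) {
    f, err := os.Open(path)
    if err != nil {
        return false, err
    }
    defer f.Close()

    // Volume descriptors start at sector 16; byte 0 is the descriptor type,
    // bytes 1-5 hold the standard identifier "CD001".
    buf := make([]byte, 6)
    if _, err := f.ReadAt(buf, 16*2048); err != nil {
        return false, nil // too short (or unreadable) to hold a descriptor there
    }
    return bytes.Equal(buf[1:6], []byte("CD001")), nil
}

func main() {
    ok, err := looksLikeISO9660(os.Args[1])
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Println("ISO9660 signature found:", ok)
}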
In addition to the answers above (and especially @peter's answer): I recently made a very simple Python tool for the detection of truncated/incomplete ISO images. It is definitely not validation (which, as @Jake1164 correctly points out, is impossible), but it may be useful for some scenarios nevertheless. It also supports ISO images that contain Apple (HFS) partitions. For more details see the following blog post:
Detecting broken ISO images: introducing Isolyzer
And the software's Github repo is here:
Isolyzer
You may run the md5sum command to check the integrity of an image.
For example, here's a list of ISO: http://mirrors.usc.edu/pub/linux/distributions/centos/5.4/isos/x86_64/
You may run:
md5sum CentOS-5.4-x86_64-LiveCD.iso
The output is supposed to be the same as 1805b320aba665db3e8b1fe5bd5a14cc, which you can find here:
http://mirrors.usc.edu/pub/linux/distributions/centos/5.4/isos/x86_64/md5sum.txt
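If you prefer to do the same check in code rather than with the md5sum binary, a small sketch (in Go, for consistency with the other examples on this page; the expected hash is the one from the md5sum.txt linked above) would be:

package main

import (
    "crypto/md5"
    "fmt"
    "io"
    "os"
)

func main() {
    // Expected checksum taken from the md5sum.txt file linked above.
    const expected = "1805b320aba665db3e8b1fe5bd5a14cc"

    f, err := os.Open("CentOS-5.4-x86_64-LiveCD.iso")
    if err != nil {
        panic(err)
    }
    defer f.Close()

    h := md5.New()
    if _, err := io.Copy(h, f); err != nil {
        panic(err)
    }
    sum := fmt.Sprintf("%x", h.Sum(nil))
    fmt.Println("checksum matches:", sum == expected)
}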