I'm working on an embedded platform (Broadcom's bcm5358u processor with a MIPS core), where I need extra partitions for a future upgrade procedure. The filesystem used is SquashFS, so I modified the 'struct mtd_partition' table that is passed to the MTD code accordingly, and I ended up with this:
#cat /proc/partitions
major minor #blocks name
31 0 128 mtdblock0
31 1 6016 mtdblock1
31 2 4573 mtdblock2
31 3 6016 mtdblock3
31 4 4445 mtdblock4
31 5 4160 mtdblock5
31 6 64 mtdblock6
Now I want to be able to mount /dev/mtdblock4 as temporary storage during a system upgrade, but I can't, because partition mtdblock4 doesn't have any filesystem on it. The kernel image and the root filesystem are integrated into one image, which is flashed onto /dev/mtdblock2 (the partition supplied to the kernel as root_fs).
I see only one solution: create an empty SquashFS image, write it to /dev/mtdblock4, and maybe it will work the way I want (?). Is there a way to format the partition on the fly whenever the kernel boots, or does that violate the MTD concepts?
Thanks.
You can mount a JFFS2 filesystem on an empty (erased) flash. It will automatically
"format" the flash partition at mount time. Squashfs is not a good candidate, because it is a read-only filesystem.
Is there a reason you can't create and mount a new FS on the fly?
You definitely do not want an empty SquashFS image. If you want temporary writeable storage, you can use something like a tmpfs volume. If you need the data to survive a system reboot, you can use JFFS2 on a raw flash device. You should be able to format/mount the MTD devices just like any other block device.
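For example (a sketch, not tested on this board: it assumes the spare partition is mtd4/mtdblock4, that mtd-utils' flash_erase is present on the target, and arbitrary mount points and sizes):

# RAM-backed scratch space; contents are lost at reboot
mkdir -p /tmp/upgrade
mount -t tmpfs -o size=8M tmpfs /tmp/upgrade

# persistent scratch space on the spare MTD partition: erase it, then let JFFS2
# "format" the empty flash at mount time
flash_erase /dev/mtd4 0 0                     # offset 0, block count 0 = erase whole partition
mkdir -p /mnt/upgrade
mount -t jffs2 /dev/mtdblock4 /mnt/upgrade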
Thanks for the responses.
Yes, SquashFS is read-only, but I'm nevertheless able to update my system via the Web interface provided by the platform vendor. The platform SDK provides an API to access MTD directly from user space.
I'm trying to write an Ansible playbook to automatically scan for new disks, put them into an existing VG and then extend it.
Unfortunately, I can't figure out how Linux knows the next device mapper (for example /dev/sdc), which I need in order to write a proper Ansible playbook to execute this task for me.
Scanning for a new disk online:
echo 0 0 0 | tee /sys/class/scsi_host/host*/scan
Does anyone have any idea about this?
Thanks.
You are using confusing terminology. Device mapper is a framework used by LVM; occasionally people use "device mapper" as a name for the devices created by applications that use the device mapper. Those devices can usually be found in /dev/mapper.
/dev/sdc (and all the other /dev/sd[a-z][a-z]?) are just block devices. They CAN be used by LVM to create a PV (physical volume), but they aren't "device mapper".
Now to the answer:
Linux uses the 'next available letter in the alphabet' for a new device. Unfortunately, 'next available' for the kernel and for the user may be different things. If a device has been unplugged (or died, or was rescanned after a reset) and the underlying device is still marked as in use, Linux will use the 'next letter', so a replugged /dev/sdc may appear as /dev/sdd, or, if /dev/sdd is busy, /dev/sde, and so on down to /dev/sdja (I'm not sure where it ends, but there is no such thing as /dev/sdzz AFAIK).
If you want to identify your devices reliably, you can use the symlinks provided by udev. They live in /dev/disk and reflect different ways to identify a device:
- by-id - the device ID (usually the vendor and device name)
- by-partuuid - by the UUID of an existing partition on the disk
- by-uuid - by the UUID of the filesystem on the device (generated when the filesystem is created)
- by-path - by its physical location (the controller and port the device is attached to).
I think the last one is the best: if you plug your device into the same slot, it will have the same name in /dev/disk/by-path regardless of vendor, ID, existing filesystems and the state of other block devices.
Here are a few examples of the names you may find there:
pci-0000:00:1f.2-ata-3 - ATA disk #3 attached to a specific PCI controller.
pci-0000:08:00.0-sas-0x50030480013afa6c-lun-0 - SAS drive with WWN 0x50030480013afa6c attached to a specific PCI controller.
pci-0000:01:00.0-scsi-0:2:1:0 - 'strange' scsi device #2 attached to a specific PCI controller. In my case it is a hardware RAID by LSI.
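For example, a by-path name can be resolved back to whatever sdX name the kernel picked on this boot (the path below is the ATA example from the list above):

ls -l /dev/disk/by-path/                                # stable names for the disks currently present
readlink -f /dev/disk/by-path/pci-0000:00:1f.2-ata-3    # prints e.g. /dev/sdc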
If you really want to handle new devices regardless of their names, have a look at udev rules, which let you react to new devices. Dealing with udev can be tricky; for an example of such scripts, see the Ceph project, which processes all disks with a specific partition type ID automatically via udev rules: https://github.com/ceph/ceph/tree/master/udev
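If you go the udev route, udevadm is handy for watching events and seeing which properties a rule could match on (the device name below is just an example):

udevadm monitor --udev --subsystem-match=block     # watch block-device add/remove events as udev sees them
udevadm info --query=property --name=/dev/sdc      # dump the properties (ID_PATH, ID_SERIAL, ...) available for matching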
What about this?
- name: Find /sys/class/scsi_host hostX softlinks
  find:
    paths: '/sys/class/scsi_host'
    file_type: link
    patterns: 'host*'
  register: _scsi_hosts

- name: Rescanning for new disks
  # the command module cannot do shell redirection, and find returns a list of dicts,
  # so use shell and reference item.path
  shell: 'echo "- - -" > {{ item.path }}/scan'
  changed_when: false
  with_items: '{{ _scsi_hosts.files }}'
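Once the rescan has run and the new disk has shown up (as /dev/sdc in the question's example), the rest is ordinary LVM; a sketch with placeholder VG/LV names:

pvcreate /dev/sdc
vgextend myvg /dev/sdc
lvextend -r -l +100%FREE /dev/myvg/mylv    # -r grows the filesystem along with the LV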
I have been playing around with BTRFS on a few drives I had lying around. At first I created BTRFS using the entire drive, but eventually I decided I wanted to use GPT partitions on the drives and recreated the filesystem I needed on the partitions that resulted. (This was so I could use a portion of each drive as Linux swap space, FYI.)
When I got this all done, BTRFS worked a treat. But I get annoying messages reporting some old filesystems from my previous experimentation that I have since nuked. I worry this means BTRFS is confused about what space on the drives is available, or that some sort of corruption might occur.
The messages look like this:
$ sudo btrfs file show
Label: 'x' uuid: 06fa59c9-f7f6-4b73-81a4-943329516aee
Total devices 3 FS bytes used 159.20GB
devid 3 size 931.00GB used 134.01GB path /dev/sde
*** Some devices missing
Label: 'root' uuid: 5f63d01d-3fde-455c-bc1c-1b9946e9aad0
Total devices 4 FS bytes used 1.13GB
devid 4 size 931.51GB used 1.03GB path /dev/sdd
devid 3 size 931.51GB used 2.00GB path /dev/sdc
devid 2 size 931.51GB used 1.03GB path /dev/sdb
*** Some devices missing
Label: 'root' uuid: e86ff074-d4ac-4508-b287-4099400d0fcf
Total devices 5 FS bytes used 740.93GB
devid 4 size 911.00GB used 293.03GB path /dev/sdd1
devid 5 size 931.51GB used 314.00GB path /dev/sde1
devid 3 size 911.00GB used 293.00GB path /dev/sdc1
devid 2 size 911.00GB used 293.03GB path /dev/sdb1
devid 1 size 911.00GB used 293.00GB path /dev/sda1
As you can see, I have an old filesystem labeled 'x' and an old one labeled 'root', and both of these have "Some devices missing". The real filesystem, the last one shown, is the one that I am now using.
So how do I clean up the old "Some devices missing" filesystems? I'm a little worried, but mostly just OCD and wanting to tidy up this messy output.
Thanks.
To wipe the stale superblocks from disks that are NOT part of the BTRFS filesystem you want to keep, I found this:
How to clean up old superblock ?
...
To actually remove the filesystem use:
wipefs -o 0x10040 /dev/sda
8 bytes [5f 42 48 52 66 53 5f 4d] erased at offset 0x10040 (btrfs)"
from: https://btrfs.wiki.kernel.org/index.php/Problem_FAQ#I_can.27t_mount_my_filesystem.2C_and_I_get_a_kernel_oops.21
I actually figured this out for myself. Maybe it will help someone else.
I poked around in the code to see what was going on. When the btrfs filesystem show command is used to show all filesystems on all devices, it scans every device and partition listed in /proc/partitions. Each device and each partition is examined to see whether there is a BTRFS "magic number" and an associated valid root data structure at offset 0x10040 from the beginning of the device or partition.
I then used hexedit on a disk that was showing up wrong in my situation, and sure enough there was a BTRFS magic number (the ASCII string _BHRfS_M) there, left over from my previous experiments.
I simply nailed that magic number by overwriting a couple of the characters of the string with "**", also using hexedit, and the erroneous entries magically disappeared!
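If you want to check a disk before (or instead of) editing it by hand, the same 8 bytes can be read with dd (the device name is an example):

# dump the 8 bytes at offset 0x10040 (65600 decimal); a stale superblock shows the magic _BHRfS_M here
dd if=/dev/sdb bs=1 skip=65600 count=8 2>/dev/null | hexdump -C

The wipefs -o 0x10040 command from the other answer does the same overwrite in one step.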
The question is how to recognize the file system type that resides on a device (LUN) when I can't mount the device but can access (read) any LBA on the device.
I'm looking for something like: NTFS keeps its file system type at LBA number X, ext3 keeps its file system type at LBA number Y.
The main filesystems I'm wondering about are NTFS, ext3, ext4 and VMFS.
The environment is a Linux box that can read blocks from the device using dd commands.
Thanks for the help.
I can't directly give you the info you need, but the file utility can:
e.g.:
$ file -s /dev/sda*
/dev/sda: x86 boot sector; partition 1: ID=0x83, s.......
/dev/sda1: Linux rev 1.0 ext4 filesystem data, UUID=3e.....
/dev/sda2: x86 boot sector; partition 1: ID=0x8e, ......
/dev/sda3: x86 boot sector, code offset 0x52, OEM-ID "NTFS .....
/dev/sda4: x86 boot sector, code offset 0x52, OEM-ID "NTFS ....
/dev/sda5: LVM2 PV (Linux Logical Volume Manager), .....
That means you might be able to find the clues in the source code of file/libmagic; or, from C/C++ code, you can use libmagic (part of the file package) to extract the same info.
This is a bit tricky since the volume on the device might not start at sector 0 (bytes 0 through 511). You first have to recognize the structure that describes the layout of the drive, such as a Master Boot Record (http://en.wikipedia.org/wiki/Master_boot_record) or a GUID Partition Table (http://en.wikipedia.org/wiki/GUID_Partition_Table). Some MBR structures hold a partition type identifier (http://en.wikipedia.org/wiki/Partition_type). GPT has a GUID that identifies the file system stored on a partition.
If the partition identifier is unavailable in such a structure, you have to look either for markers of a boot sector or somehow recognize the start of the volume. Typically the first sector of the volume contains the boot record structure. For example, NTFS has a field in its boot record called the OEM ID, at offset 0x03, which holds the characters "NTFS" as ASCII (http://www.ntfs.com/ntfs-partition-boot-sector.htm).
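As a concrete sketch of that kind of probing (the device name is an example, and it assumes you are reading from the start of the volume, i.e. the partition device rather than the whole disk):

# NTFS: the OEM ID "NTFS    " sits at offset 3 of the volume's first sector
dd if=/dev/sdb1 bs=1 skip=3 count=8 2>/dev/null
# ext2/3/4: the superblock starts at byte 1024 and its magic 0xEF53 sits at offset 0x38
# within it, so bytes 1080-1081 of the volume read "53 ef" (little-endian)
dd if=/dev/sdb1 bs=1 skip=1080 count=2 2>/dev/null | hexdump -C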
I'm trying to adapt an existing SD/MMC card driver to our SD controller hardware. I'm using Synopsys' dw_mmc code (in Linux 3.3) as a reference. I have a long way to go, but at least it compiles OK and the platform device and platform driver seem to have been registered. My question is how to make the /dev/mmcblk0 file appear in the system. I named our new device ald_sd and I can see ald_sd.0 under /sys/devices/platform. Under /dev, I tried 'mknod mmcblk0 b 179 0' and I see mmcblk0 under /dev. Then I tried 'mount /dev/mmcblk0 /mnt/sd' (after making /mnt/sd) and it gives me the message 'mount: mounting /dev/mmcblk0 on /mnt/sd failed: No such device or address'.
Please help. Thank you!
Chan
It's been several months since I solved this problem. Long story short: once the kernel can read the super block of the SD card, block access is OK. Usually we make /dev/sd0 using the mknod command (not mmcblock0; the mmcblock0 file is made somewhere different, maybe /sys... I don't remember). Also beware: if you mistype the mknod command (like mkdir or mkdev), you can get the 'No such device or address' message too. Just for your information.
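For what it's worth, a rough way to confirm that the MMC block layer has actually registered the card before reaching for mknod (these are the generic paths, not specific to this driver):

grep mmc /proc/devices          # the mmc block major (179) should be listed
ls /sys/block/                  # mmcblk0 appears here once the card is detected
cat /sys/block/mmcblk0/dev      # prints the major:minor to feed mknod, e.g. 179:0
mknod /dev/mmcblk0 b 179 0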
Trying to get access to a partially rooted Galaxy S2 external sd card.
The problem is that /dev/block/mmcblk1p1 does not exist on the phone. This is the device name that should allow me to put the "recovery" image onto the sdcard so that the unit will be a phone again.
Problem is, I don't know where to find the magic Major and Minor numbers for this device and I'm trying to figure out where in the kernel source I should be looking for them.
Could someone point me at the right kernel files to find this information?
Standard devices use predefined major numbers and minor numbers starting from 0 for the first instance and upward depending on how many instances there are going to be.
Look at the Linux documentation file (devices.txt) to see the full list, but the section of interest to you is:
179 block MMC block devices
0 = /dev/mmcblk0 First SD/MMC card
1 = /dev/mmcblk0p1 First partition on first MMC card
8 = /dev/mmcblk1 Second SD/MMC card
...
The start of next SD/MMC card can be configured with
CONFIG_MMC_BLOCK_MINORS, or overridden at boot/modprobe
time using the mmcblk.perdev_minors option. That would
bump the offset between each card to be the configured
value instead of the default 8.
So /dev/block/mmcblk1p1 would be major 179, minor 9.
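So if the node really is missing, it can be created by hand with those numbers (a sketch, assuming you have a root shell on the device):

mknod /dev/block/mmcblk1p1 b 179 9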
According to hotplug.txt
Entries for block devices are found at the following locations:
/sys/block/*/dev
/sys/block/*/*/dev
So try looking in /sys/block/mmcblk1p1/dev.
EDIT:
Looking at it again, I actually think it will be in /sys/block/mmcblk1/mmcblk1p1/dev
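A quick way to check (assuming the kernel has detected the card and the partition at all) is to read those files; each holds the major:minor pair for its device:

cat /sys/block/mmcblk1/dev               # e.g. 179:8
cat /sys/block/mmcblk1/mmcblk1p1/dev     # e.g. 179:9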