Create a filesystem on block device directly, without a partition? - linux

I was under the impression that a block device is listed under /dev (for example /dev/xvdf), that file systems live on partitions, which are listed with a number appended to the name of the block device they sit on (like /dev/xvdf1), and that all file systems must therefore live on a partition.
I am running CentOS, and as part of a course I have to create file systems and partitions and mount file systems. For this course, I created a file system on the device file /dev/xvdf and mounted it. In addition, I created a partition on /dev/xvdf with the device file /dev/xvdf1, created a file system on that partition as well, and mounted that too. This confuses me, and I have some questions:
Am I correct that you do not have to create a partition on a block device, but that you can create a file system on a block device directly without a partition?
If so, why would anyone want to do this?
After creating the file system on /dev/xvdf, I created the /dev/xvdf1 partition using fdisk and I allocated the max blocks to this new partition. However, the file system on /dev/xvdf was not removed and still had a file on it. How is this possible if all the blocks on /dev/xvdf have been allocated to the /dev/xvdf1 partition?

Question #1: you are correct. A file system only needs a contiguous span of blocks somewhere. You can even create a file system in memory (a virtual disk).
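For instance, a minimal sketch (the device name /dev/xvdf comes from the question; the mount points and image size are just examples, adjust for your system):
mkfs.ext4 /dev/xvdf                # file system directly on the whole device
mount /dev/xvdf /mnt/whole
# a file system "in memory": a file on tmpfs attached to a loop device
dd if=/dev/zero of=/dev/shm/disk.img bs=1M count=64
mkfs.ext4 -F /dev/shm/disk.img     # -F: allow mkfs on a regular file
mount -o loop /dev/shm/disk.img /mnt/ram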
Question #2: having a partition table is a good thing, but why use one if you don't need to break a disk (or other block device) into several pieces?
About question #3, I think you overlooked something: probably an error was raised somewhere and you didn't notice, or some error will be raised in the future. Note that fdisk only writes the partition table into the first sector of the disk; it does not erase the blocks behind it, which is why the file system you created on the whole device still appears intact. But even if it looks like it works, it cannot: the mounted file system thinks it owns all the space reserved to it, and fdisk likewise thinks the blocks it hands out are free, so the two will eventually overwrite each other. BTW, what is that /dev/xvdf? Is it a real device or something virtual?
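One way to see the two coexisting signatures, sketched with util-linux tools (read-only inspection; device names as above):
wipefs /dev/xvdf                # lists file-system and partition-table signatures
file -s /dev/xvdf /dev/xvdf1    # identifies what each device file contains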

Related

how to check whether two files in same physical harddisk in linux bash or python?

I'm optimizing an I/O-intensive Linux program, so is there any way to know whether two given file/folder paths are on the same hard disk?
Thanks.
If, by "same physical harddisk", you mean the same filesystem, then you can use the stat command to get the device ID:
$ stat -c '%D' filename
fd03
If the device IDs match, they're in the same filesystem.
To actually determine the physical disk the file is on, you'd have to know the filesystem in use (some filesystems can span multiple disks), and even the "device" itself may be mapped to more than one actual physical disk by a volume manager such as LVM or a RAID controller.
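If you do need to dig below the filesystem, a rough sketch assuming GNU coreutils and util-linux (the path is a placeholder):
dev=$(df --output=source /path/to/file | tail -n 1)
lsblk --inverse "$dev"    # walks LVM/RAID layers down to the physical disk(s)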

Linux filesystem nesting and syscall hooking

Using the 2.6.32 Linux kernel, I need to use a specific filesystem on a block device partition, and I want to hook the open/write/read/close (and a few other) syscalls so that what should be written to this partition is read and written in a different fashion than the specific filesystem would use.
It would be only for this partition; other partitions using this filesystem would act as usual.
FUSE would have been perfect for this, but I can't use it because of its memory consumption (too large for the targeted system).
How could I hook syscalls between the VFS and the mounted filesystem, e.g. to keep an intermediate index and buffer all the reads/writes?
I tried something like this:
mount -t ext3 /dev/sda1 /my/mount/data
dd if=/dev/zero of=/my/mount/data/big_file bs=1M count=512   # the backing file must exist first
mkfs.vfat /my/mount/data/big_file
mount -o loop -t vfat /my/mount/data/big_file /my/mount/custom_data
where vfat would be my custom filesystem; but debugging shows that vfat never references the lower filesystem's file operations when file operations are done inside the custom_data mount.
Any hints on how I should proceed?
I discovered stackable file systems.
Wrapfs is interesting and should fit my needs: http://wrapfs.filesystems.org/
It allows catching all the system calls in an intermediate layer between the VFS and the lower filesystem.
Solved.

adding a device to another device in a raid config

If I have a device mounted in RHEL, how do I add another device to it and make a RAID0 configuration?
I know how to set up two new devices in a RAID0 configuration, but how do I do it when one device is already in use and has data on it?
It depends on the details. Is it an LVM volume? Then add the new device to the volume group and extend the logical volume. Is it a filesystem like ZFS? Then add the device to the pool. Otherwise, you need to back up the mounted drive, unmount it, and create a RAID0 volume.
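For the LVM case, a sketch (the volume group vg0, logical volume data, new device /dev/sdb, and the ext4 filesystem are all assumptions):
pvcreate /dev/sdb                     # prepare the new device for LVM
vgextend vg0 /dev/sdb                 # add it to the existing volume group
lvextend -l +100%FREE /dev/vg0/data   # grow the logical volume
resize2fs /dev/vg0/data               # grow an ext4 filesystem online
Note this grows capacity linearly rather than striping the way true RAID0 does.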
Save yourself some headaches and clone the drive with disk-imaging software like Clonezilla (http://clonezilla.org/).
Once you have your drive cloned as a disk image, set up the RAID0 and recover your clone onto the newly created array, telling Clonezilla to expand to the total size of the disk.
This way, even if something goes wrong, you can always undo the whole process and just recover your clone into the original single disk as if nothing happened.
Hope it helps,
Regards
I don't think it's possible. You can't take a mounted drive and convert it into a RAID array along with another drive; you will surely have to unmount it first. More realistically, you'll probably have to reformat the drive before adding it to a RAID array. Understand that the RAID is managed at a lower level than the filesystem: the OS sees the RAID array as a single device and has no way to turn a plain in-use disk into a member of an existing array.
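For the back-up-and-rebuild route with Linux software RAID, a sketch (device names are assumptions, and mdadm --create destroys the existing contents):
umount /data
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0
mount /dev/md0 /data
# ...then restore the backup onto /data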

remount disk with running process

I have an embedded application that I am working on. To protect the data on this image, its partitions are mounted RO (this helps prevent flash corruption when power is lost unexpectedly; I cannot guarantee clean shutdowns, since someone could simply pull the plug).
The application I am working on, which needs this protection, resides on the RO partition, yet it also needs to be able to change configuration files on that same RO filesystem. I have code that remounts the partition RW as needed (e.g. for firmware updates), but this requires stopping all processes running from the read-only partition (i.e. killall my_application). Hence my application cannot remount the partition it needs to modify without first killing itself (I am not sure which one is the chicken and which one is the egg, but you get the gist).
Is there a way to start my application such that the entire binary is kept in RAM, with no link back to the partition it was run from, so that the partition is not reported as busy when remounting?
Or, alternatively, is there a way to safely remount this RO partition without first killing the processes running from it?
You can copy it to a tmpfs filesystem and execute it from there. A tmpfs filesystem stores all data in RAM, though it may spill to your swap space under memory pressure.
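A sketch of that approach (all paths are assumptions):
mkdir -p /run/myapp
mount -t tmpfs -o size=16M tmpfs /run/myapp
cp /ro/bin/my_application /run/myapp/
/run/myapp/my_application &    # the running copy no longer pins the RO partition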
Passing the -o remount flag to mount should also work.
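Something like this (mount point assumed); note that remounting back to RO will fail with EBUSY while any file on the partition is still open for writing:
mount -o remount,rw /config
# ...update the configuration files...
sync
mount -o remount,ro /config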

How to get real file offset in NAND by file name?

Using Linux, I can use raw access to the NAND or access files through the filesystem. So when I need to know where my file is really located in the NAND, what should I do? I cannot find any utilities providing this feature. Moreover, I cannot see any way to do this, besides hacking the kernel with tons of printk (not a nice way, I guess).
Can anybody enlighten me on this? (I'm using the YAFFS2 and JFFS2 filesystems.)
You can make a copy of any partition with nanddump. Transfer that partition dump to a PC. The nandsim utility can be used to mount the partitions on a PC.
modprobe nandsim first_id_byte=0x2c second_id_byte=0xda \
third_id_byte=0x90 fourth_id_byte=0x95 parts=2,64,64
flash_erase /dev/mtd3 0 0
ubiformat /dev/mtd3 -f rootfs.ubi
This command emulates a Micron 256MB NAND flash with four partitions. If you captured just a single partition and not the whole device, don't set parts. You can also run nanddump on each partition and then concatenate them all. The code above targeted mtd3 with a UBIFS partition; for JFFS2 or YAFFS2, try nandwrite or another appropriate flashing utility on the PC.
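On the target, the per-partition dump might look like this (the mtd number is an assumption; check /proc/mtd for your layout):
cat /proc/mtd                      # list the MTD partitions
nanddump -f rootfs.dump /dev/mtd2  # dump one partition to a file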
How to get real file offset in NAND by file name?
The file may span several NAND sectors, and they are almost never contiguous. There is not much advantage to keeping file data together, as there is no disk head that takes physical time to seek. Some flash has marginally better efficiency for sequential reads; other flash will give better performance for reads from a different erase block.
I would turn on debugging either at the MTD layer or in the filesystem. Also be aware that in a live system the position of the file may migrate over time on the flash, even if it is never written, because of active wear leveling.
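One way to get that debug output, sketched with the kernel's dynamic debug facility (assumes a kernel built with CONFIG_DYNAMIC_DEBUG and debugfs available):
mount -t debugfs none /sys/kernel/debug 2>/dev/null
echo 'module mtd +p' > /sys/kernel/debug/dynamic_debug/control
dmesg | tail                       # MTD debug messages land in the kernel log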
