What is the general sizing number for /dev partition on RHEL - linux

Please check the below description:
Red Hat Enterprise Linux uses a naming scheme that is file-based, with file names in the form of /dev/xxyN.
Where,
xx:
The first two letters of the partition name indicate the type of device on which the partition resides, usually sd.
y:
This letter indicates which device the partition is on. For example, /dev/sda for the first hard disk, /dev/sdb for the second, and so on.
N:
The final number denotes the partition. The first four (primary or extended) partitions are numbered 1 through 4. Logical partitions start at 5. So, for example, /dev/sda3 is the third primary or extended partition on the first hard disk, and /dev/sdb6 is the second logical partition on the second hard disk.
In Red Hat Enterprise Linux each partition is used to form part of the storage necessary to support a single set of files and directories. Mounting a partition makes its storage available starting at the specified directory (known as a mount point).
For example, if partition /dev/sda5 is mounted on /usr/, that would mean that all files and directories under /usr/ physically reside on /dev/sda5. So the file /usr/share/doc/FAQ/txt/Linux-FAQ would be stored on /dev/sda5, while the file /etc/gdm/custom.conf would not. It is also possible that one or more directories below /usr/ would be mount points for other partitions. For instance, a partition (say, /dev/sda7) could be mounted on /usr/local/, meaning that /usr/local/man/whatis would then reside on /dev/sda7 rather than /dev/sda5.
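For example, to check which mounted filesystem (and thus which partition) backs a given path, df can be used; the output below is illustrative only:

$ df /usr/share/doc
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/sda5       10190100 4525040   5140772  47% /usr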
Generally speaking, the disk space for the /dev partition would depend on the number and size of the partitions (both primary and logical) used by the operating system. However, there is no one right answer to this question; it depends on your needs and requirements.
My question is: is there any effect on the initial partition size (say, we gave 32 GB to the /dev partition while installing the RHEL OS) if we add more hard disk capacity (say, hundreds of GBs) to the /dev partition?

You don't create partitions for /dev. It lives in memory and is managed fully automatically by the kernel. /dev exists to expose kernel objects such as devices to userspace; it is transient and doesn't require backing storage on disk.

If you run ls -l /dev/sda1, you will see that the first letter in the permission block is b. b = block device. This is a special file that, if stored on disk, would only hold two special numbers (called major and minor, usually stored together with the file permissions). When you try to open this special file, the kernel sees that it is a "block" device and looks up the major and minor numbers to find the matching driver that actually handles the data. Your read/write/ioctl calls are then redirected to that driver.
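For example (the date and ownership details are illustrative and will differ on your system):

$ ls -l /dev/sda1
brw-rw---- 1 root disk 8, 1 Jan  1 12:00 /dev/sda1

The leading b marks a block device, and "8, 1" are the major and minor numbers: major 8 selects the sd driver, and minor 1 selects the first partition of the first disk.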

Related

Partitions in Linux

I've just learnt about backup and restore on Ubuntu. I have some questions below.
When we've set up Ubuntu successfully, how many partitions have been created? I checked in the terminal using parted -l and saw that there are 3 partitions. I then typed lsblk, and it seems there's a difference in the size of /dev/sda2 (the extended partition) between the two commands. Can someone explain this?
Does the mkfs command create a logical partition? I know that mkfs means "make a file system", but I thought a file system is created when a partition is mounted.
Here are some images.
The difference is because of the extended partition type of the /dev/sda2 device. More explanation is given here.
By definition, mkfs is used to build a Linux file system on a device, usually a hard disk partition. This means you have to use either fdisk or parted to partition the hard disk into primary, extended, or logical partitions, and then use mkfs to build an ext4, ext3, xfs, or other filesystem, depending on your need.
Some software calculates disk sizes in multiples of 1024 (binary units, GiB), while other software uses multiples of 1000 (decimal units, GB). In your case, parted is probably using 1000 while lsblk is using 1024. For example, a disk sold as 250 GB holds 250 × 10⁹ bytes, which is only about 232.8 GiB (250 × 10⁹ / 1024³).
You cannot use mkfs to partition a disk. You can make a filesystem with it once the disk is already partitioned.
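A minimal sketch of that order of operations (assuming a blank disk at the hypothetical device /dev/sdX; double-check the device name before running anything like this):

# fdisk /dev/sdX          (create a partition table and a partition, e.g. /dev/sdX1)
# mkfs.ext4 /dev/sdX1     (then build the filesystem on the new partition)
# mount /dev/sdX1 /mnt    (mounting only attaches the existing filesystem to a directory)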

How to find total and free size of mounted and unmounted partition at runtime

I am using tune2fs -l /dev/mmcblk0p1, but when I change the free space of the partition by copying some files into it, tune2fs does not give updated values.
tune2fs gives the correct value if I restart the system, but it does not update values at runtime.
Please suggest some other command that provides correct data at runtime for both mounted and unmounted partitions.
TIA.
You cannot get the exact value of the free space available on an arbitrary unmounted block device/partition, because the actual free space will depend on the actions the kernel performs when mounting the device. So the only proper way to get the amount of free space is to mount the partition first, then use the usual df utility, which will show up-to-date space utilization.
However, if the system crashes abruptly, the free space reported after remounting may actually differ from the value last reported by df prior to the crash.
For non-mounted devices/partitions, you can always obtain the total (but not free) space available using lsblk or similar utilities.
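A minimal sketch, using the questioner's device name (the mount point /mnt is just an example):

# mount /dev/mmcblk0p1 /mnt      (mount first, so the kernel has an up-to-date view)
# df -h /mnt                     (total, used, and free space of the mounted filesystem)
# umount /mnt
# lsblk -b /dev/mmcblk0p1        (total size in bytes works even while unmounted)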

Does disk IO correspond directly to its physical sector location?

I've been playing around with disk IO on flash drives, HDDs, and SSDs by opening /dev/sd* paths in Linux the way I would any other file.
I understand that the memory controller on the disk can hide true block order (via a mapping) from the OS.
This boils down to these questions:
Are the blocks in /dev/sd* in the order perceived by the OS, or in the order as perceived by the disk's memory controller?
Is the order of blocks in /dev/sd* subjective between POSIX OSes?
Can these properties change if done on an NT or Cygwin system?
Is this property different among Flash, HDD, and SSD?
Can a write occur to a specific index in an opened /dev/sd* path, or is this determined by the memory controller?
Thanks in advance!
If you use the device nodes for entire disks (/dev/sda, /dev/sdb, and so on), then the file offsets for the block device correspond to logical block addresses and will be portable across systems (assuming that the disk sector size is supported). This is independent of the storage technology.
However, the names of the device nodes are different from system to system.
If you use sub-devices (partitions), this is not necessarily the case because interpretation of and support for partition tables varies considerably.
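For instance, you can read a specific logical block address directly through the whole-disk node with dd (a sketch assuming 512-byte logical sectors; check yours first):

# blockdev --getss /dev/sda                                      (logical sector size)
# dd if=/dev/sda bs=512 skip=2048 count=1 2>/dev/null | hexdump -C | head   (read LBA 2048)

Writes work the same way via seek= instead of skip=, which is exactly why dd against a raw disk node is dangerous.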

Where are inodes stored at?

I recently started learning about the Linux kernel and I just learned about inodes, which are data-structures containing meta-data of a file.
Now, how does the OS find the inode associated with a file (say, given a path string)? Moreover, where are those inodes stored? I mean, obviously they are stored on the disk, but how is it all managed?
One naive solution (I can come up with) would be to allocate on the disk a region designated only for inodes - What's actually done?
It depends on the file system implementation. For example, ext2/ext3 store the inodes before the data blocks within each block group; see The Second Extended File system (EXT2).
Remember that inodes are spread across all block groups. For example, inodes 1 to 32768 are stored in block group 0, inodes 32769 to 65536 in block group 1, and so on.
So, the answer to your question is: Inodes are stored in inode tables, and there's an inode table in every block group in the partition.
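On an ext filesystem you can see this layout for yourself with dumpe2fs from e2fsprogs (a sketch; replace /dev/sda1 with an ext2/3/4 partition on your machine):

# ls -i /etc/hostname                                    (print the inode number of a file)
# dumpe2fs -h /dev/sda1 | grep -i inode                  (inode count and size from the superblock)
# dumpe2fs /dev/sda1 | grep 'Inode table at' | head -3   (where each group's inode table lives)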

debian wheezy - how to mount unused disks?

I'm mostly a database guy, but I have a Debian Wheezy server with 4 hard disks. It was set up using one disk a while back; that one was all that was needed. Now I need more space, and the thing that is throwing me off, I think, is the UUID disk stuff.
anyway:
/mnt# lsblk -io KNAME,TYPE,SIZE,MODEL
KNAME TYPE SIZE MODEL
sdb disk 232.9G Hitachi HDP72502
sdc disk 232.9G Hitachi HDP72502
sda disk 232.9G Hitachi HDP72502
sda1 part 223.4G
sda2 part 1K
sda5 part 9.5G
sdd disk 232.9G Hitachi HDP72502
sr0 rom 1024M DVD A DS8A1P
Root is mounted on sda; sdb, c, and d are unused, unformatted, etc. I just need some more space, so I have created /mnt/ext_b/ and so on for b, c, d.
mount shows:
/dev/disk/by-uuid/1b1e97e4-3c04-4e50-8e06-b16752778717 on / type ext4 (rw,relatime,errors=remount-ro,user_xattr,barrier=1,data=ordered)
which is correct. I want to mount the others just for space; how do I get their UUIDs?
/mnt# blkid
/dev/sda5: UUID="f70ad0b2-a9d0-430a-829c-d2e37245fd71" TYPE="swap"
/dev/sda1: UUID="1b1e97e4-3c04-4e50-8e06-b16752778717" TYPE="ext4"
How do I get the UUIDs so I can put formatted filesystems on the disks?
/mnt# mkfs.ext4 /dev/sdb1
mke2fs 1.42.5 (29-Jul-2012)
Could not stat /dev/sdb1 --- No such file or directory
thanks in advance.
matt
It seems that those additional disks haven't been touched since they were connected to the server and don't even have partitions yet. In general, adding extra disk space in Linux is done in the following steps:
Attach the new disk to the server
Create a partition table on it
Add one or more partitions to the disk
Format the partition with the FS of your choice
Mount the partition at the mount point of your choice
Make the mount persistent by adding an appropriate line to /etc/fstab
If you have multiple disks, you may consider creating a hardware RAID array (if you have a RAID controller) or a software RAID array using the mdadm tool. Either way you'll get a larger single disk (its size will depend on the RAID level you choose), for which you'll then follow step 2 onwards. It's worth mentioning that there is another way to get more usable space than a single disk provides, called the Logical Volume Manager, or LVM. It's more sophisticated than md RAID and lets you create FS snapshots and add extra disk space to a volume without needing to create additional mount points.
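For instance, a software RAID-5 array across the three unused disks could be created like this (a sketch only; this destroys any data on sdb, sdc, and sdd):

# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
# cat /proc/mdstat                  (watch the array build)

The resulting /dev/md0 is then partitioned and formatted just like a single large disk.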
Whatever you choose, you'll need to create a partition table on the new disk/LVM volume/md array. Here you need to make another choice: what type of partition table to use, MBR or GPT. Check the Partitioning HOWTO for more details, but in general I'd recommend GPT for large non-bootable disks.
The same HOWTO will tell you how to create partition(s) on the selected disk. At this point you'll get devices like /dev/sdb1, etc.
Then you can go to step 4, the one you already tried:
# mkfs.ext4 /dev/sdb1
That should succeed now, and you'll be able to get the UUID of the new FS with blkid. Add the obtained UUID to your /etc/fstab file and mount the newly created FS at its mount point.
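Putting it all together for the first unused disk, a minimal sketch (assuming GPT, one partition spanning the whole disk, and your /mnt/ext_b mount point; the UUID below is a placeholder for whatever blkid prints):

# parted /dev/sdb mklabel gpt
# parted -a optimal /dev/sdb mkpart primary ext4 0% 100%
# mkfs.ext4 /dev/sdb1
# blkid /dev/sdb1                      (note the UUID it reports)
# mkdir -p /mnt/ext_b
# mount /dev/sdb1 /mnt/ext_b

And the matching /etc/fstab line, using the UUID from blkid:

UUID=<uuid-from-blkid> /mnt/ext_b ext4 defaults 0 2

Repeat for sdc and sdd with their own mount points.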
It seems to me that you must create partitions on the disks.
Think about how you will store your data. You have identical disks. Should it be RAID? If so, what type of RAID should it be?
You can create partitions with fdisk or one of its alternatives (gparted, cfdisk, and so on).
There is a lot of information on the internet and in the manuals.
Maybe you need LVM? Some people say that it may slow down your database, but it gives you the ability to take snapshots.
After creating partitions you can create a filesystem and mount it.
People usually recommend XFS or ext4 for databases.
And don't forget to set the right mount flags for your filesystem.
noatime, nodiratime, and barrier=0 will improve performance, but with barrier=0 you can lose your data in some cases. In the case of ext4, look at the data= mount option (in your case you can probably set it to data=ordered).
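As an illustration only, a hypothetical /etc/fstab entry for a database volume with those flags (I've left barrier=0 out; weigh it against the data-loss risk mentioned above):

UUID=<your-uuid> /mnt/ext_b ext4 noatime,nodiratime,data=ordered 0 2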
UPD: maybe this question should be on Super User or in the Unix section?
You are supposed to create a partition with a utility like fdisk, cfdisk, gparted, or partitionmanager before you can format it.
