My configuration
I'm trying to deploy a virtual machine in Azure using an Oracle Linux image (version 8.4, generation 2). I change the mount point of the Azure temporary disk (ephemeral0) to /mnt/resource, and in addition I create a swap file on the temporary disk. I'm using the following custom cloud-init script during deployment:
#cloud-config
datasource_list: [ Azure ]
mounts:
  - [ ephemeral0, /mnt/resource, auto, "defaults,nofail,x-systemd.requires=cloud-init.service" ]
mount_default_fields: [ None, None, "auto", "defaults,nofail", "0", "2" ]
swap:
  filename: /mnt/resource/swap.img
  size: "auto" # or size in bytes
  max_size: 17179869184 # 16GB
On the first boot right after VM creation, everything (ephemeral0 and swap) works as expected.
Taking a look at /var/log/cloud-init.log, you can see the following entries:
[root@test01 ~]# cat /var/log/cloud-init.log | grep swap
2021-08-16 08:19:29,814 - cc_mounts.py[DEBUG]: Attempting to determine the real name of swap
2021-08-16 08:19:29,814 - cc_mounts.py[DEBUG]: changed default device swap => None
2021-08-16 08:19:29,814 - cc_mounts.py[DEBUG]: Ignoring nonexistent default named mount swap
2021-08-16 08:19:29,815 - cc_mounts.py[DEBUG]: suggest 4096.0 MB swap for 7672.03125 MB memory with '17018.25390625 MB' disk given max=None [max=4254.5634765625 MB]'
2021-08-16 08:19:29,815 - cc_mounts.py[DEBUG]: Creating swapfile in '/mnt/resource/swap.img' on fstype 'xfs' using 'fallocate'
2021-08-16 08:19:29,815 - subp.py[DEBUG]: Running command ['fallocate', '-l', '4096M', '/mnt/resource/swap.img'] with allowed return codes [0] (shell=False, capture=True)
2021-08-16 08:19:29,849 - subp.py[DEBUG]: Running command ['mkswap', '/mnt/resource/swap.img'] with allowed return codes [0] (shell=False, capture=True)
2021-08-16 08:19:29,887 - util.py[DEBUG]: Setting up swap file took 0.072 seconds
2021-08-16 08:19:29,893 - cc_mounts.py[DEBUG]: Changes to fstab: ['+ /dev/disk/cloud/azure_resource-part1 /mnt/resource auto defaults,nofail,x-systemd.requires=cloud-init.service,comment=cloudconfig 0 2', '+ /mnt/resource/swap.img none swap sw,comment=cloudconfig 0 0']
2021-08-16 08:19:29,893 - subp.py[DEBUG]: Running command ['swapon', '-a'] with allowed return codes [0] (shell=False, capture=True)
2021-08-16 08:19:29,929 - cc_mounts.py[DEBUG]: Activate mounts: PASS:swapon -a
As you can see, cloud-init suggests 4096 MB as the swap size.
In /etc/fstab the following entries are added:
/dev/disk/cloud/azure_resource-part1 /mnt/resource auto defaults,nofail,x-systemd.requires=cloud-init.service,comment=cloudconfig 0 2
/mnt/resource/swap.img none swap sw,comment=cloudconfig 0 0
Also, swapon -s shows that swap is configured correctly:
Filename Type Size Used Priority
/mnt/resource/swap.img file 4194300 0 -2
The Problem
Now, if I deallocate the virtual machine and start it again, the temporary disk is deleted and recreated as expected. It is mounted again under /mnt/resource, but the swap file is no longer created. /var/log/cloud-init.log states:
2021-08-16 08:29:33,331 - cc_mounts.py[DEBUG]: Attempting to determine the real name of swap
2021-08-16 08:29:33,331 - cc_mounts.py[DEBUG]: changed default device swap => None
2021-08-16 08:29:33,331 - cc_mounts.py[DEBUG]: Ignoring nonexistent default named mount swap
2021-08-16 08:29:33,331 - util.py[DEBUG]: Reading from /proc/swaps (quiet=False)
2021-08-16 08:29:33,331 - util.py[DEBUG]: Read 37 bytes from /proc/swaps
2021-08-16 08:29:33,331 - cc_mounts.py[DEBUG]: swap file /mnt/resource/swap.img exists, but not in /proc/swaps
2021-08-16 08:29:33,332 - cc_mounts.py[DEBUG]: suggest 0.0 MB swap for 7672.03125 MB memory with '12889.515625 MB' disk given max=None [max=3222.37890625 MB]'
2021-08-16 08:29:33,332 - cc_mounts.py[DEBUG]: Not creating swap: suggested size was 0
2021-08-16 08:29:33,337 - cc_mounts.py[DEBUG]: Changes to fstab: ['- /mnt/resource/swap.img none swap sw,comment=cloudconfig 0 0']
As far as I understand, the cc_mounts module of cloud-init suggests a swap size of 0 MB because it determines that the temporary disk has only about 12 GB of space left (in both log lines the reported max is about a quarter of the reported available space: 17018/4 ≈ 4254 MB and 12889/4 ≈ 3222 MB). This seems to be wrong since (a) the disk is empty due to the deallocation and (b) df -h shows it has about 15 GB available:
[root@test01 ~]# df -h /mnt/resource/
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 16G 45M 15G 1% /mnt/resource
Am I missing something here? Can anybody explain why cloud-init behaves like this and how to create a swapfile properly for every reboot?
• Your problem is caused by a misconfiguration that makes both the Azure Linux Agent (waagent) and cloud-init try to configure the swap file. When cloud-init is responsible for provisioning, the swap file must be configured by cloud-init, so that only one agent (either cloud-init or waagent) handles it. This issue can be intermittent because of the timing of when the waagent daemons start.
• You can fix this problem by disabling the disk-formatting and swap configuration in the waagent configuration file (/etc/waagent.conf) and ensuring that the Azure Linux Agent does not mount the ephemeral disk, as this should be handled by cloud-init. For this purpose, set the parameters as below:
#vi /etc/waagent.conf
#Mount point for the resource disk
ResourceDisk.MountPoint=/mnt
#Create and use swapfile on resource disk
ResourceDisk.EnableSwap=n
#Size of the swapfile
ResourceDisk.SwapSizeMB=0
• Restart the Azure Linux Agent and ensure that the VM is configured to create the swap file through cloud-init. Also, add the script below to /var/lib/cloud/scripts/per-boot and make it executable with chmod +x create_swapfile.sh:
#!/bin/sh
if [ ! -f '/mnt/swapfile' ]; then
    fallocate --length 2GiB /mnt/swapfile
    chmod 600 /mnt/swapfile
    mkswap /mnt/swapfile
    swapon /mnt/swapfile
    swapon -a
else
    swapon /mnt/swapfile
fi
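A minimal sketch of installing the hook and restarting the agent; the service name waagent is what Oracle Linux/RHEL images typically use (it is walinuxagent on some other distributions), and the script is assumed to be saved as create_swapfile.sh in the current directory:
install -m 0755 create_swapfile.sh /var/lib/cloud/scripts/per-boot/create_swapfile.sh
systemctl restart waagent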
• Once done, stop and start the VM and check that swap is enabled. Also, compare the logs in /var/log/waagent.log and /var/log/cloud-init.log around the reboot timeframe. To avoid this situation completely, deploy the VM with the swap configuration passed as custom data during provisioning, as sketched below.
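For example, the cloud-config shown at the top of this question can be passed as custom data at deployment time. A sketch using the Azure CLI; the resource group, VM name, and image reference are placeholders, not values from the original post:
az vm create \
  --resource-group myResourceGroup \
  --name test01 \
  --image <OracleLinux-8.4-Gen2-image-URN> \
  --custom-data cloud-init.yml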
Please see the documentation below for more information:
https://learn.microsoft.com/en-us/troubleshoot/azure/virtual-machines/swap-file-not-recreated-linux-vm-restart
https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/update-linux-agent
Thank you,
Related
I want to install Oracle Database 12cR1 Real Application Clusters on the Oracle Linux operating system.
I did all the configuration on node 1 and node 2, but during the installation of Grid Infrastructure I got the following errors:
checking DNS response from all servers in "/etc/resolv.conf"
checking response for name "kaash-his-2" from each of the name servers specified in "/etc/resolv.conf"
Node Name Source Comment Status
------------------------ ------------------------ ----------
checking response for name "kaash-his-1" from each of the name servers specified in "/etc/resolv.conf"
Node Name Source Comment Status
------------------ ------------------------ ----------
Check for integrity of file "/etc/resolv.conf" failed
and this is the content of the resolv.conf file on both nodes:
[root@KAASH-HIS-1 tmp]# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 10.93.200.222
nameserver 10.93.200.223
[root@KAASH-HIS-2 ~]# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 10.93.200.222
nameserver 10.93.200.223
The other error:
Starting check for /dev/shm mounted as temporary file system ...
ERROR:
PRVE-0426 : The size of in-memory file system mounted as /dev/shm is "33554432k" megabytes which is less than the required size of "2048" megabytes on node ""
PRVE-0426 : The size of in-memory file system mounted as /dev/shm is "33554432k" megabytes which is less than the required size of "2048" megabytes on node ""
Check for /dev/shm mounted as temporary file system failed
and I have already added the /dev/shm entry to /etc/fstab; these are the file's details:
[root@KAASH-HIS-1 tmp]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Dec 27 13:35:34 2022
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/ol_kaash--his--1-root / xfs defaults 0 0
UUID=8c639649-0a25-48d7-9fe5-9ed62090f457 /boot xfs defaults 0 0
/dev/mapper/ol_kaash--his--1-home /home xfs defaults 0 0
/dev/mapper/ol_kaash--his--1-swap swap swap defaults 0 0
tmpfs /dev/shm tmpfs size=32g 0 0
How can I solve these errors?
After days of searching I found the solution to these errors:
1. Edit /etc/resolv.conf and add search and domain entries on all nodes:
[root@KAASH-HIS-1 tmp]# vi /etc/resolv.conf
# Generated by NetworkManager
search "local-domain-name"
domain "local-domain-name"
nameserver 10.93.200.222
nameserver 10.93.200.223
2. About the /dev/shm size error:
you can change the size and remount it using the following command as the root user:
# mount -o remount,size=2G /dev/shm
then confirm the size by using
# df -h
and during the installation you can ignore the warning if the mounted size of /dev/shm is greater than 2048 MB.
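For reference, after the remount the df -h line for /dev/shm should look something like this (illustrative values):
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           2.0G     0  2.0G   0% /dev/shm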
This is all done as the root user.
The script for backups at /usr/share/perl5/PVE/VZDump/LXC.pm sets a default mount point
my $default_mount_point = "/mnt/vzsnap0";
But regardless of whether I use the GUI or the command line I get the following error:
ERROR: Backup of VM 103 failed - mkdir /mnt/vzsnap0:
Permission denied at /usr/share/perl5/PVE/VZDump/LXC.pm line 161.
And lines 160 - 161 in that script are:
my $rootdir = $default_mount_point;
mkpath $rootdir;
After the installation, before I created any images or did any backups, I set up two things.
(1) SSHFS mount for /mnt/backups
(2) Added all other drives as Linux LVM
What I did for the drive addition was as simple as:
pvcreate /dev/sdb1
pvcreate /dev/sdc1
pvcreate /dev/sdd1
pvcreate /dev/sde1
vgextend pve /dev/sdb1
vgextend pve /dev/sdc1
vgextend pve /dev/sdd1
vgextend pve /dev/sde1
lvextend pve/data /dev/sdb1
lvextend pve/data /dev/sdc1
lvextend pve/data /dev/sdd1
lvextend pve/data /dev/sde1
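To sanity-check the expansion, the standard LVM reporting commands should show the new physical volumes in the pve volume group and the grown data LV:
pvs
vgs pve
lvs pve/data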
For the SSHFS instructions see my blog post on it: https://6ftdan.com/allyourdev/2018/02/04/proxmox-a-vm-server-for-your-home/
Here are the files and details related to filesystem and directory permissions.
cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
df -h
Filesystem Size Used Avail Use% Mounted on
udev 7.8G 0 7.8G 0% /dev
tmpfs 1.6G 9.0M 1.6G 1% /run
/dev/mapper/pve-root 37G 8.0G 27G 24% /
tmpfs 7.9G 43M 7.8G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/fuse 30M 20K 30M 1% /etc/pve
sshfs#10.0.0.10:/mnt/raid/proxmox_backup 1.4T 725G 672G 52% /mnt/backups
tmpfs 1.6G 0 1.6G 0% /run/user/0
ls -dla /mnt
drwxr-xr-x 3 root root 0 Aug 12 20:10 /mnt
ls /mnt
backups
ls -dla /mnt/backups
drwxr-xr-x 1 1001 1002 80 Aug 12 20:40 /mnt/backups
The command that I desire to succeed is:
vzdump 103 --compress lzo --node ProxMox --storage backup --remove 0 --mode snapshot
For the record the container image is only 8GB in size.
Cloning containers does work and snapshots work.
Q & A
Q) How are you running the perl script?
A) Through the GUI you click on Backup now, then select your storage (I have backups and local, and both produce this error), then select the state of the container (Snapshot, Suspend, and Stop each produce the same error), then the compression type (none, LZO, and gzip each produce the same error). Once all that is set you click Backup and get the following output.
INFO: starting new backup job: vzdump 103 --node ProxMox --mode snapshot --compress lzo --storage backups --remove 0
INFO: Starting Backup of VM 103 (lxc)
INFO: Backup started at 2019-08-18 16:21:11
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: CT Name: Passport
ERROR: Backup of VM 103 failed - mkdir /mnt/vzsnap0: Permission denied at /usr/share/perl5/PVE/VZDump/LXC.pm line 161.
INFO: Failed at 2019-08-18 16:21:11
INFO: Backup job finished with errors
TASK ERROR: job errors
From this you can see that the command is vzdump 103 --node ProxMox --mode snapshot --compress lzo --storage backups --remove 0. I've also tried logging in with an SSH shell and running this command, and I get the same error.
Q) It could be that the directory's "immutable" attribute is set. Try lsattr / and see if /mnt has the lower-case "i" attribute set to it.
A) root@ProxMox:~# lsattr /
--------------e---- /tmp
--------------e---- /opt
--------------e---- /boot
lsattr: Inappropriate ioctl for device While reading flags on /sys
--------------e---- /lost+found
lsattr: Operation not supported While reading flags on /sbin
--------------e---- /media
--------------e---- /etc
--------------e---- /srv
--------------e---- /usr
lsattr: Operation not supported While reading flags on /libx32
lsattr: Operation not supported While reading flags on /bin
lsattr: Operation not supported While reading flags on /lib
lsattr: Inappropriate ioctl for device While reading flags on /proc
--------------e---- /root
--------------e---- /var
--------------e---- /home
lsattr: Inappropriate ioctl for device While reading flags on /dev
lsattr: Inappropriate ioctl for device While reading flags on /mnt
lsattr: Operation not supported While reading flags on /lib32
lsattr: Operation not supported While reading flags on /lib64
lsattr: Inappropriate ioctl for device While reading flags on /run
Q) Can you manually create /mnt/vzsnap0 without any issues?
A) root@ProxMox:~# mkdir /mnt/vzsnap0
mkdir: cannot create directory ‘/mnt/vzsnap0’: Permission denied
Q) Can you replicate it in a clean VM ?
A) I don't know. I don't have an extra system to try it on and I need the container's I have on it. Trying it within a VM in ProxMox… I'm not sure. I suppose I could try but I'd really rather not have to just yet. Maybe if all else fails.
Q) If you look at drwxr-xr-x 1 1001 1002 80 Aug 12 20:40 /mnt/backups, it looks like there is a user with id 1001 which has access to the backups, so not even root will be able to write. You need to check why it is 1001 and which group is represented by 1002. Then you can add your root user, as well as the user under which the GUI runs, to the group with id 1002.
A) I have no problem writing to the /mnt/backups directory. I just did cd /mnt/backups; mkdir test and it was successful.
From the message
mkdir /mnt/vzsnap0: Permission denied
it is obvious that the problem is the permissions of the /mnt directory.
It could be that the directory's "immutable" attribute is set.
Try lsattr / and see if /mnt has the lower-case "i" attribute set on it.
As a reference:
The lower-case i in lsattr output indicates that the file or directory is set as immutable: even root must clear this attribute first before making any changes to it. With root access, you should be able to remove this with chattr -i /mnt, but there is probably a reason why this was done in the first place; you should find out what the reason was and whether or not it's still applicable before removing it. There may be security implications.
So, if this is the case, try:
chattr -i /mnt
to remove it.
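A quicker check that inspects the /mnt directory inode itself, rather than its contents (assuming the underlying filesystem supports these attributes at all):
lsattr -d /mnt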
References
lsattr output
According to the inode flags (attributes) manual page:
FS_IMMUTABLE_FL 'i':
The file is immutable: no changes are permitted to the file contents or metadata (permissions, timestamps, ownership, link count and so on). (This restriction applies even to the superuser.) Only a privileged process (CAP_LINUX_IMMUTABLE) can set or clear this attribute.
As long as the bounty is still up I'll give it to a legitimate answer that fixes the problem described here.
What I'm writing here for you all is a workaround I've thought of, which works. Note, it is very slow.
Since I am able to write to the /mnt/backups directory, which exists on another system on the network, I went ahead and changed the Perl script to point to /mnt/backups/vzsnap0 instead of /mnt/vzsnap0.
The bounty remains for anyone who can get the /mnt directory to work as the mount path, so that vzsnap0 is successfully mounted for the backup script.
1)
Perhaps your /mnt/vzsnap0 is mounted as read-only?
A hint may be this line of yours:
/dev/pve/root / ext4 errors=remount-ro 0 1
'errors=remount-ro' means that, in case of an error, the partition is remounted read-only. Perhaps this setting applies to your mounted filesystem as well.
Can you try remounting the drive as in the following link? https://askubuntu.com/questions/175739/how-do-i-remount-a-filesystem-as-read-write
And if that succeeds, manually create the directory afterwards?
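A sketch of that remount-and-retry, assuming / is the affected filesystem:
mount -o remount,rw /
mkdir /mnt/vzsnap0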
2) If that didn't help:
https://www.linuxquestions.org/questions/linux-security-4/mkdir-throws-permission-denied-error-in-a-directoy-even-with-root-ownership-and-777-permission-4175424944/
There, someone remarked:
What is the filesystem for the partition that contains the directory.[?]
Double check the permissions of the directory, or whether it's a
symbolic link to another directory. If the directory is an NFS mount,
rootsquash can prevent writing by root.
Check for attributes (lsattr). Check for ACLs (getfacl). Check for
selinux restrictions. (ls -Z)
If the filesystem is corrupt, it might be initially mounted RW but
when you try to write to a bad area, change to RO.
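Those suggested checks, collected into one pass (getfacl comes from the acl package, and ls -Z is only meaningful when SELinux is enabled):
lsattr -d /mnt
getfacl /mnt
ls -dZ /mnt
mount | grep ' on / '   # check whether / is still mounted rw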
Great, it turns out this is a pretty long-standing issue with Ubuntu Make which many people have faced.
I saw a workaround mentioned by an Ubuntu developer in the above link.
Just follow the steps below:
sudo -s
unset SUDO_UID
unset SUDO_GID
Then run umake to install your application as normal.
You should now be able to install to any directory you want. It works flawlessly for me.
Try ls -laZ /mnt to review the security context, in case SELinux is enabled; relabeling might be required then. errors=remount-ro should also be investigated (however, it is rather unlikely that lsattr would fail unless the /mnt inode itself is corrupted). Creating a new directory inode for these mount points might be worth a try; if it works, one can swap them.
Just change /mnt/backups to /mnt/sshfs/backups, and the vzdump will work.
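A sketch of that change, reusing the sshfs source shown in the df -h output above; the Proxmox 'backups' storage entry then needs to point at the new path:
mkdir -p /mnt/sshfs/backups
umount /mnt/backups
sshfs 10.0.0.10:/mnt/raid/proxmox_backup /mnt/sshfs/backups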
I created a d2.xlarge EC2 instance on AWS which returns the following output:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
`-xvda1 202:1 0 8G 0 part /
xvdb 202:16 0 1.8T 0 disk
xvdc 202:32 0 1.8T 0 disk
xvdd 202:48 0 1.8T 0 disk
The default /etc/fstab looks like this
LABEL=cloudimg-rootfs / ext4 defaults,discard 0 0
/dev/xvdb /mnt auto defaults,nofail,x-systemd.requires=cloud-init.service,comment=cloudconfig 0 2
Now, I make an ext4 filesystem on xvdc:
$ sudo mkfs -t ext4 /dev/xvdc
mke2fs 1.42.13 (17-May-2015)
Creating filesystem with 488375808 4k blocks and 122101760 inodes
Filesystem UUID: 2391499d-c66a-442f-b9ff-a994be3111f8
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information:
done
blkid returns a UUID for the filesystem:
$ sudo blkid /dev/xvdc
/dev/xvdc: UUID="2391499d-c66a-442f-b9ff-a994be3111f8" TYPE="ext4"
Then, I mount it on /mnt5
$ sudo mkdir -p /mnt5
$ sudo mount /dev/xvdc /mnt5
It gets successfully mounted. Up to this point, everything works fine.
Now, I reboot the machine (first stop it and then start it) and then SSH into the machine.
I do
$ sudo blkid /dev/xvdc
It returns nothing. Where did the filesystem that I created before the reboot go? I assumed a filesystem, once created, would persist across the reboot cycle.
Am I missing something to mount a partition on an AWS EC2 instance?
I followed this http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html and it does not seem to work as described above
You need to read up on EC2 ephemeral instance store volumes. When you stop an instance with this type of volume, the data on the volume is lost. You can reboot by performing a reboot/restart operation, but if you do a stop followed later by a start, the data is lost. A stop followed by a start is not considered a "reboot" on EC2. When you stop an instance it is completely shut down, and when you start it back later it is basically recreated on different backing hardware.
In other words what you describe isn't an issue, it is expected behavior. You need to be very aware of how these volumes work before depending on them.
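From within the instance you can list which block devices are instance-store (ephemeral) volumes via the instance metadata service; a sketch using the IMDSv1-style endpoint:
curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/
# entries named ephemeral0, ephemeral1, ... are instance-store volumes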
1. At the beginning:
mount | grep home
/dev/sdb1 on /home type xfs (rw,relatime,attr2,inode64,noquota)
2. Try to modify:
mount -o remount,rw,relatime,attr2,inode64,prjquota /dev/sdb1 /home
3. Check it again:
mount | grep home
/dev/sdb1 on /home type xfs (rw,relatime,attr2,inode64,noquota)
It doesn't work.
cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Aug 9 15:24:43 2016
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=8f1038a3-6c31-4ce1-a9ef-3d7325e10bef / ext4 defaults 1 1
UUID=c687eab8-3ddd-4756-b91e-ad562b522f7c /boot ext4 defaults 1 2
UUID=7ae72a46-1407-49e6-8669-95bb9e592794 /home xfs rw,relatime,attr2,inode64,prjquota 0 0
UUID=3ccea12f-25d0-437b-9c4b-6ad6a9bd724c /tmp xfs defaults 0 0
UUID=b8ab4016-49bd-4f48-9620-5bda76f4d8b1 /var/log xfs defaults 0 0
UUID=8b9a7ada-3f02-4ee5-8010-ad32a5d7461e swap swap defaults 0 0
I can modify /etc/fstab and then restart the machine to make it work. But is there any way to change the quota configuration without a reboot?
Quotas
XFS quotas are not a remountable option. The -o quota option must be specified on the initial mount for quotas to be in effect.
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Storage_Administration_Guide/ch06s09.html
BTW, if you need to enable quotas on the root partition, /etc/fstab does not help; you need to tweak the kernel boot options instead.
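A sketch of both cases. For a non-root filesystem such as /home, the option can be applied without a reboot by doing a full unmount and a fresh mount instead of a remount (assuming nothing is holding /home open):
umount /home
mount -o prjquota /dev/sdb1 /home
For the root filesystem, the equivalent is a rootflags=pquota (or rootflags=uquota) entry on the kernel command line, added for example to GRUB_CMDLINE_LINUX in /etc/default/grub, followed by regenerating the GRUB configuration and rebooting.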
I'm trying to control I/O bandwidth by using cgroup blkio controller.
The cgroup has been set up and mounted successfully, i.e. calling grep cgroup /proc/mounts
returns:
....
cgroup /sys/fs/cgroup/blkio cgroup rw,relatime,blkio 0 0
...
I then make a new folder in the blkio folder and write to the file blkio.throttle.read_bps_device, as follows:
1. mkdir user1; cd user1
2. echo "8:5 10485760" > blkio.throttle.read_bps_device
----> echo: write error: Invalid argument
I verified that my device's major:minor numbers are correct using df -h and ls -l /dev/sda5 for the storage device.
And I can still write to files that require no device major:minor number, such as blkio.weight (but the same error is thrown for blkio.weight_device).
Any idea why I got that error?
Not sure which flavour/version of Linux you are using; on RHEL 6.x kernels this did not work for some reason, however it worked when I compiled a custom kernel on RHEL, and on other Fedora versions, without any issues.
To check if it is supported on your kernel, run lssubsys -am | grep blkio. Check under that path whether you can find the file blkio.throttle.read_bps_device.
However, here is an example of how you can do it persistently: set up a cgroup to limit the program to no more than 1 MiB/s:
Get the MAJOR:MINOR device number from /proc/partitions:
`cat /proc/partitions | grep vda`
major minor #blocks name
252 0 12582912 vda --> this is the primary disk (with MAJOR:MINOR -> 252:0)
Now, if you want to limit your program to 1 MiB/s, convert the value to bytes/s as follows: 1 MiB/s = 1024 KiB/s × 1024 B/KiB = 1048576 bytes/s.
Edit /etc/cgconfig.conf and add the following entry:
group ioload {
    blkio {
        blkio.throttle.read_bps_device = "252:0 1048576";
    }
}
Edit /etc/cgrules.conf and add a rule assigning processes to the group:
*    blkio    ioload
Enable and restart the required services:
`chkconfig cgconfig on; chkconfig cgred on`
`service cgconfig restart; service cgred restart`
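For a quick non-persistent test of the same limit, the values can be written directly into the cgroup filesystem. A sketch; note that blkio throttle limits are accepted for whole devices only (e.g. 252:0), not for partitions, which is a common cause of the 'Invalid argument' error seen in the question:
mkdir -p /sys/fs/cgroup/blkio/ioload
echo "252:0 1048576" > /sys/fs/cgroup/blkio/ioload/blkio.throttle.read_bps_device
echo $$ > /sys/fs/cgroup/blkio/ioload/tasks   # move the current shell into the cgroup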
Refer: blkio-controller.txt
hope this helps!