Create a udev rules file from two input files - linux

I am looking for a solution to create an Oracle ASM udev rules file for Linux. I have two input files: file1 has the ASM disk requirements and file2 has the disk information.
For example, line 2 of file1 shows that DATA12 needs 3 disks (DATA12_01, DATA12_02, DATA12_03) of 128 GB each. file2 lists every disk with its size. From these two input files I need to create the output file shown below.
cat file1
Count - size - name
3 - 128 GB DATA12
1 - 128 GB TEMP02
2 - 4 GB ARCH03
2 - 1 GB ARCH04
1 - 3 GB ORAC01
cat file2
UUID Size
360060e80166ef70000016ef700006700 128.00 GiB
360060e80166ef70000016ef700006701 128.00 GiB
360060e80166ef70000016ef700006702 128.00 GiB
360060e80166ef70000016ef700006703 128.00 GiB
360060e80166ef70000016ef700006730 4.00 GiB
360060e80166ef70000016ef700006731 4.00 GiB
360060e80166ef70000016ef700006733 1.00 GiB
360060e80166ef70000016ef700006734 1.00 GiB
360060e80166ef70000016ef700006735 3.00 GiB
Output File
ACTION=="add|change", ENV{DM_NAME}=="360060e80166ef70000016ef700006700", SYMLINK+="udevlinks/DATA12_01"
ACTION=="add|change", ENV{DM_NAME}=="360060e80166ef70000016ef700006701", SYMLINK+="udevlinks/DATA12_02"
ACTION=="add|change", ENV{DM_NAME}=="360060e80166ef70000016ef700006702", SYMLINK+="udevlinks/DATA12_03"
ACTION=="add|change", ENV{DM_NAME}=="360060e80166ef70000016ef700006703", SYMLINK+="udevlinks/TEMP02_01"
ACTION=="add|change", ENV{DM_NAME}=="360060e80166ef70000016ef700006730", SYMLINK+="udevlinks/ARCH03_01"
ACTION=="add|change", ENV{DM_NAME}=="360060e80166ef70000016ef700006731", SYMLINK+="udevlinks/ARCH03_02"
ACTION=="add|change", ENV{DM_NAME}=="360060e80166ef70000016ef700006733", SYMLINK+="udevlinks/ARCH04_01"
ACTION=="add|change", ENV{DM_NAME}=="360060e80166ef70000016ef700006734", SYMLINK+="udevlinks/ARCH04_02"
ACTION=="add|change", ENV{DM_NAME}=="360060e80166ef70000016ef700006735", SYMLINK+="udevlinks/ORAC01_01"

Here is one in AWK:
$ cat > test.awk
BEGIN {FS="([.]| +)"}   # field separator to deal with "." in file2 sizes like 128.00
FNR==1 {next}           # skip header
NR==FNR {               # read required disks into the pool from file1
    for(i=1; i<=$1; i++)
        a[$5"_"0i]=$3   # e.g. a["DATA12_01"]=128
    next}
{
    for(i in a) {       # look for a right-sized disk
        if(a[i]==$2) {  # when found, print...
            printf "ACTION==\"add|change\", ENV{DM_NAME}==\"%s\", SYMLINK+=\"udevlinks/%s\"\n", $1, i
            delete a[i] # ... and remove it from the pool
            break
        }
    }                   # if the pool did not shrink, no disk matched:
    old=len; len=length(a); if(old==len) {print "No device found for", $0}
}
$ awk -f test.awk file1 file2
ACTION=="add|change", ENV{DM_NAME}=="360060e80166ef70000016ef700006700", SYMLINK+="udevlinks/DATA12_01"
ACTION=="add|change", ENV{DM_NAME}=="360060e80166ef70000016ef700006701", SYMLINK+="udevlinks/DATA12_02"
...
No device found for THIS_IS_AN_EXAMPLE_OF_MISSING_DISK 666.00 GiB
Because the disk search uses for(i in a), the order in which disks are taken from the pool is not guaranteed.

Related

Replace the option noexec to exec in /etc/fstab file through a shell script

Only the noexec option on the /tmp entry should change; the noexec option on the /var/tmp entry shouldn't change.
contents of /etc/fstab
UUID=f229a689-a31e-4f1a-a823-9a69ee6ec558 / xfs defaults 0 0
UUID=eeb1df48-c9b0-408f-a693-38e2f7f80895 /boot xfs defaults 1 2
UUID=b41e6ef9-c638-4084-8a7e-26ecd2964893 swap swap defaults 0 0
UUID=79aa80a1-fa97-4fe1-a92d-eadf79721204 /var xfs defaults 1 2
UUID=644be3d0-433c-4ed5-bf12-7f61d5b99860 /tmp xfs defaults,nodev,nosuid,noexec 1 2
UUID=decda446-34ac-45b6-826c-ae3f090ed717 /var/log xfs defaults 1 2
UUID=a74170bc-0309-4b3b-862e-722fb7a6882d /var/tmp xfs defaults,nodev,nosuid,noexec 1 2
Using awk:
$ cat 1.awk
$2=="/tmp" {
    n=split($4,a,",")
    str=""
    for (i=1; i <= n; i++) {
        if (a[i] != "noexec") {
            if (length(str))
                str=str","
            str=str a[i]
        }
    }
    $4=str; print
}
$2 != "/tmp" { print }
$ awk -f 1.awk fstab
UUID=f229a689-a31e-4f1a-a823-9a69ee6ec558 / xfs defaults 0 0
UUID=eeb1df48-c9b0-408f-a693-38e2f7f80895 /boot xfs defaults 1 2
UUID=b41e6ef9-c638-4084-8a7e-26ecd2964893 swap swap defaults 0 0
UUID=79aa80a1-fa97-4fe1-a92d-eadf79721204 /var xfs defaults 1 2
UUID=644be3d0-433c-4ed5-bf12-7f61d5b99860 /tmp xfs defaults,nodev,nosuid 1 2
UUID=decda446-34ac-45b6-826c-ae3f090ed717 /var/log xfs defaults 1 2
UUID=a74170bc-0309-4b3b-862e-722fb7a6882d /var/tmp xfs defaults,nodev,nosuid,noexec 1 2
Note: the alignment of fields in the modified line can easily be improved using printf; I cannot tell whether you were using tabs or spaces between the various fields.
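The same result can be sketched more compactly with sub() instead of rebuilding the options field (an alternative, not the original answer's approach):

```shell
# Sample fstab lines from the question.
cat > fstab <<'EOF'
UUID=644be3d0-433c-4ed5-bf12-7f61d5b99860 /tmp xfs defaults,nodev,nosuid,noexec 1 2
UUID=a74170bc-0309-4b3b-862e-722fb7a6882d /var/tmp xfs defaults,nodev,nosuid,noexec 1 2
EOF
# Strip ",noexec" (or a leading "noexec,") from the options of the
# /tmp line only; the bare "1" pattern prints every line.
awk '$2=="/tmp" { sub(/,noexec/, "", $4); sub(/^noexec,/, "", $4) } 1' fstab
```

This assumes noexec is never a prefix of another option name, and, as above, assigning to $4 rebuilds the line with single spaces.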

Join of output from DF and LSBLK Linux commands via bash

I need to merge two outputs in Linux.
This:
lsblk -n -b --output KNAME,NAME,SIZE,MOUNTPOINT | grep -v "fd0" | grep -v "loop" | grep -v "sr0" | grep -v "hdc" | grep -v "cdrom"
In a result I have:
sda sda 53687091200
sda1 └─sda1 53684994048
dm-3 └─dockerVG-rootLV 53682896896 /
sdb sdb 2147483648000
sdb1 └─sdb1 2147482599424
dm-1 ├─hddVG-dockerLV 536866717696 /var/lib/docker
dm-2 └─hddVG-hddLV 1610612736000 /dockerhdd
sdc sdc 536870912000
sdc1 └─sdc1 536869863424
dm-0 └─ssdVG-ssdLV 536866717696 /dockerssd
And this:
df --exclude={tmpfs,devtmpfs,squashfs,overlay} | sed -e /^Filesystem/d | awk '{print $6 " " $1 " " $3 " " $4 " " $5}'
In a result I have:
/ /dev/mapper/dockerVG-rootLV 8110496 40591632 17%
/dockerssd /dev/mapper/ssdVG-ssdLV 214133656 274642488 44%
/dockerhdd /dev/mapper/hddVG-hddLV 83278236 1385191240 6%
/var/lib/docker /dev/mapper/hddVG-dockerLV 76046204 412729940 16%
So, I want to join on these mount points: /, /var/lib/docker, /dockerhdd, /dockerssd.
Important! I want to use this elsewhere, where the mount points will be different. Also, I have to preserve the structure of the first output without sorting.
In a result I have to receive something like this:
sda sda 53687091200
sda1 └─sda1 53684994048
dm-3 └─dockerVG-rootLV 53682896896 / /dev/mapper/dockerVG-rootLV 8110496 40591632 17%
sdb sdb 2147483648000
sdb1 └─sdb1 2147482599424
dm-1 ├─hddVG-dockerLV 536866717696 /var/lib/docker /dev/mapper/hddVG-dockerLV 76046204 412729940 16%
dm-2 └─hddVG-hddLV 1610612736000 /dockerhdd /dev/mapper/hddVG-hddLV 83278236 1385191240 6%
sdc sdc 536870912000
sdc1 └─sdc1 536869863424
dm-0 └─ssdVG-ssdLV 536866717696 /dockerssd /dev/mapper/ssdVG-ssdLV 214133656 274642488 44%
Of course a one-liner would be better, but if that is not possible we can send the outputs to separate files and join them. Could you please help me with this?
Using awk:
awk '/^Filesystem/ { mrk=1; next }
     mrk==1 && /^\// { arr[$6]=$1" "$3" "$4" "$5; next }
     !/^(fd0|loop|sr0|hdc|cdrom)/ { if ($4 in arr) print $0" "arr[$4]; else print }' \
    <<< "$(df --exclude={tmpfs,devtmpfs,squashfs,overlay}; lsblk -n -b --output KNAME,NAME,SIZE,MOUNTPOINT)"
The two commands are fed back into awk together, stripping out the grep and sed processing. The df output comes first: on the line beginning with "Filesystem" we set a marker (mrk) to 1 and move to the next line; for each following df line we create an entry in an array (arr), indexed by the mountpoint and containing the df information for it. We then move on to the lsblk lines, skipping the unwanted devices (fd0, loop, sr0, hdc, cdrom), and print each line with the arr value for its mount point ($4) appended, when there is one.
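A minimal sketch of this mountpoint-keyed join on the question's sample data, using the pre-formatted df output (mountpoint first) and two temp files rather than a herestring:

```shell
cat > df.txt <<'EOF'
/ /dev/mapper/dockerVG-rootLV 8110496 40591632 17%
/dockerssd /dev/mapper/ssdVG-ssdLV 214133656 274642488 44%
EOF
cat > lsblk.txt <<'EOF'
sda sda 53687091200
dm-3 └─dockerVG-rootLV 53682896896 /
dm-0 └─ssdVG-ssdLV 536866717696 /dockerssd
EOF
# Pass 1 (NR==FNR): key each df line by its mountpoint, storing the
# rest of the line. Pass 2: append the stored df fields to any lsblk
# line whose mountpoint ($4) matches, preserving lsblk's order.
awk 'NR==FNR { mp=$1; sub(/^[^ ]+ /, ""); arr[mp]=$0; next }
     { if ($4 in arr) print $0" "arr[$4]; else print }' df.txt lsblk.txt
```

Lines without a mountpoint (sda, sda1, ...) pass through untouched, so the lsblk structure is preserved.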

Linux: dd and pv -f, How to show only first line?

I use pv -f in a pipeline like command | pv -f | command2. pv shows a progress bar like this:
83.6MB 0:00:03 [27.9MB/s] [ <=> ]
When pv exits it shows more information. How can I show only the progress bar?
500+0 records in28.3MB/s] [ <=> ]
500+0 records out
524288000 bytes (524 MB) copied, 17.6912 s, 29.6 MB/s
500MB 0:00:17 [28.3MB/s] [ <=> ]
1024000+0 records in
1024000+0 records out
524288000 bytes (524 MB) copied, 17.6956 s, 29.6 MB/s
Edit:
My test case was
dd if=/dev/zero bs=100k count=80000 | pv -f | dd of=/dev/null
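The extra lines come from dd, not pv: dd writes its "records in/out" summary to stderr, which is also where pv draws its bar, so the two get interleaved. A sketch of silencing it with redirections alone:

```shell
# Silence both dd's stderr so only pv's progress bar remains, e.g.:
#   dd if=/dev/zero bs=100k count=80000 2>/dev/null | pv -f | dd of=/dev/null 2>/dev/null
# The principle, demonstrated without pv: the data still flows on
# stdout while the summary on stderr is discarded.
bytes=$(dd if=/dev/zero bs=1k count=4 2>/dev/null | wc -c)
echo "$bytes"
```

With GNU dd, status=none suppresses only the statistics, so genuine error messages still get through, unlike discarding stderr entirely.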

Perl - get free disk space usage on linux

I'm wondering how I can get the value of the second row, 4th column from df ("/"). Here's the output from df:
Filesystem Size Used Avail Use% Mounted on
rootfs 208G 120G 78G 61% /
fakefs 208G 120G 78G 61% /root
fakefs 1.8T 1.3T 552G 70% /home4/user
fakefs 4.0G 1.3G 2.8G 31% /ramdisk/bin
fakefs 4.0G 1.3G 2.8G 31% /ramdisk/etc
fakefs 4.0G 1.3G 2.8G 31% /ramdisk/php
fakefs 208G 120G 78G 61% /var/lib
fakefs 208G 120G 78G 61% /var/lib/mysql
fakefs 208G 120G 78G 61% /var/log
fakefs 208G 120G 78G 61% /var/spool
fakefs 208G 120G 78G 61% /var/run
fakefs 4.0G 361M 3.7G 9% /var/tmp
fakefs 208G 120G 78G 61% /var/cache/man
I'm trying to get the available free space (78GB) using perl which I'm fairly new to. I'm able to get the value using the following linux command but I've heard it's not necessary to use awk in perl at all because perl can do what awk can natively.
df -h | tail -n +2 | sed -n '2p' | awk '{ print $4 }'
I'm stumped. I tried using the Filesys::df module but when I'd print out the available usage percent, it'd give me a different value than what running df from command line does. Help is appreciated.
A little more succinctly:
df -h | perl -wlane 'print $F[3] if $. == 2;'
-w enable warnings
-l add newline to output (and chomp the newline from the input line)
-a autosplit the line on whitespace into the @F array, which you access using the syntax $F[n] (first column is at index position 0)
-n puts the code inside the following loop:
LINE:
while (<>) {
... # code goes here
}
# <> reads lines from STDIN if no filenames are given on the command line
-e execute the string
$. current line number in the file (For the first line, $. is 1)
If you wish to do this all in perl, then:
df -h | perl -e 'while (<STDIN>) { if ($. == 2) { @x = split; print $x[3] }}'
This uses perl alone to read the output of df -h and, for the second line ($. == 2), splits the record into fields on whitespace and outputs field 3 (counting from 0).
This seems to work ok too:
df -h | awk 'NR==2 {print $4}'
Get the second line and print the fourth field.
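One caveat for any fixed-line-number approach (a side note, not from the answers above): df may wrap a long device name onto its own line, shifting the data rows down. POSIX output mode avoids that:

```shell
# -P (POSIX format) guarantees one line per filesystem, so NR==2 is
# always the first data row even with long device names.
df -hP | awk 'NR==2 {print $4}'
```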

How to find user memory usage in linux

How can I see memory usage by user on Linux (CentOS 6)?
For example:
USER USAGE
root 40370
admin 247372
user2 30570
user3 967373
This one-liner worked for me on at least four different Linux systems with different distros and versions. It also worked on FreeBSD 10.
ps hax -o rss,user | awk '{a[$2]+=$1;}END{for(i in a)print i" "int(a[i]/1024+0.5);}' | sort -rnk2
About the implementation, there are no shell loop constructs here; this uses an associative array in awk to do the grouping & summation.
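The grouping-and-summation pattern can be sketched in isolation on fixed input (hypothetical names and KiB values):

```shell
# Sum column 1 per key in column 2, convert KiB to MiB with rounding,
# then sort by the total, descending - same shape as the ps one-liner.
printf '1024 alice\n3072 bob\n1024 alice\n' |
    awk '{a[$2]+=$1} END{for(i in a) print i" "int(a[i]/1024+0.5)}' |
    sort -rnk2
```

Here alice's two rows sum to 2048 KiB (2 MiB) and bob's single row to 3072 KiB (3 MiB), so bob sorts first.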
Here's sample output from one of my servers that is running a decent sized MySQL, Tomcat, and Apache. Figures are in MB.
mysql 1566
joshua 1186
tomcat 353
root 28
wwwrun 12
vbox 1
messagebus 1
avahi 1
statd 0
nagios 0
Caveat: like most similar solutions, this is only considering the resident set (RSS), so it doesn't count any shared memory segments.
EDIT: A more human-readable version.
echo "USER RSS PROCS" ; echo "-------------------- -------- -----" ; ps hax -o rss,user | awk '{rss[$2]+=$1;procs[$2]+=1;}END{for(user in rss) printf "%-20s %8.0f %5.0f\n", user, rss[user]/1024, procs[user];}' | sort -rnk2
And the output:
USER RSS PROCS
-------------------- -------- -----
mysql 1521 1
joshua 1120 28
tomcat 379 1
root 19 107
wwwrun 10 10
vbox 1 3
statd 1 1
nagios 1 1
messagebus 1 1
avahi 1 1
Per-user memory usage in percent using standard tools:
for _user in $(ps haux | awk '{print $1}' | sort -u)
do
    ps haux | awk -v user=${_user} '$1 == user { sum += $4 } END { print user, sum }'
done
or for more precision:
TOTAL=$(free | awk '/Mem:/ { print $2 }')
for _user in $(ps haux | awk '{print $1}' | sort -u)
do
    ps hux -U ${_user} | awk -v user=${_user} -v total=$TOTAL '{ sum += $6 } END { printf "%s %.2f\n", user, sum / total * 100 }'
done
The first version just sums up the memory percentage for each process as reported by ps. The second version sums up the resident memory in kilobytes instead and calculates the percentage of the total afterwards, thus leading to a higher precision.
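A fixed-number sketch of the second approach (hypothetical values):

```shell
# Two processes of 1536 KiB RSS each on a 1 GiB machine: sum the
# kilobytes first, then take the percentage of the total at the end.
TOTAL=1048576   # total memory in KiB, as the first field from `free`
printf '1536\n1536\n' |
    awk -v total=$TOTAL '{ sum += $1 } END { printf "%.2f\n", sum / total * 100 }'
```

Summing first keeps sub-percent usage visible (0.29% here), where adding up per-process percentages already rounded by ps would lose it.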
If your system supports it, try installing and using smem:
smem -u
User Count Swap USS PSS RSS
gdm 1 0 308 323 820
nobody 1 0 912 932 2240
root 76 0 969016 1010829 1347768
or
smem -u -t -k
User Count Swap USS PSS RSS
gdm 1 0 308.0K 323.0K 820.0K
nobody 1 0 892.0K 912.0K 2.2M
root 76 0 937.6M 978.5M 1.3G
ameskaas 46 0 1.2G 1.2G 1.5G
124 0 2.1G 2.2G 2.8G
In Ubuntu, smem can be installed by typing
sudo apt install smem
This will return the total RAM usage by user in GB, reverse sorted:
sudo ps --no-headers -eo user,rss | awk '{arr[$1]+=$2}; END {for (i in arr) {print i,arr[i]/1024/1024}}' | sort -nk2 -r
You can use the following Python script, which needs only the os module. Note that it reports the disk usage of each user's home directory via du, which approximates storage use rather than RAM usage.
import os
# Get the list of all users present in the system
allUsers = os.popen('cut -d: -f1 /etc/passwd').read().split('\n')[:-1]
for user in allUsers:
    home = '/home/' + user
    # Check if the home directory exists for the user
    if os.path.exists(home):
        # os.popen captures du's output; os.system would print it but
        # then also show the command's exit status (0)
        print(os.popen('du -sh ' + home).read().strip())
