sed append text from file onto line - linux

I have some bash that calculates 90% of the total system memory in KB and outputs it to a file:
cat /proc/meminfo | grep MemTotal | cut -d: -f2 | awk '{SUM += $1} END { printf "%d", SUM/100*90}' | awk '{print $1}' > mem.txt
I then want to copy the value into another file (/tmp/limitstest.conf), appending it to an existing line.
The command below searches for the string "soft memlock" and writes the contents of the mem.txt file created earlier into /tmp/limitstest.conf:
sed -i '/soft\smemlock/r mem.txt' /tmp/limitstest.conf
However, the value ends up on its own line:
oracle soft memlock
1695949
I want it to output like this:
oracle soft memlock 1695949
I have tried quite a few things but can't get this to output correctly.
Thanks
Edit: here is some of the text in the input file /proc/meminfo:
MemTotal: 18884388 kB
MemFree: 1601952 kB
MemAvailable: 1607620 kB

It's a bit of a guess since you didn't provide sample input/output but all you need is something like:
awk '
NR==FNR {
if (/MemTotal/) {
split($0,f,/:/)
$0 = f[2]
sum += $1
}
next
}
/soft[[:space:]]+memlock/ { $0 = $0 OFS int(sum/100*90) }
{ print }
' /proc/meminfo /tmp/limitstest.conf > tmp &&
mv tmp /tmp/limitstest.conf
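A quick way to sanity-check the two-file trick is to run it against throwaway sample files (the file names and contents below are made up for the test):

```shell
# Hypothetical stand-ins for /proc/meminfo and /tmp/limitstest.conf
printf 'MemTotal:       18884388 kB\nMemFree:        1601952 kB\n' > meminfo.sample
printf 'hello\noracle soft memlock\nbye\n' > limits.sample

# NR==FNR is only true while reading the first file: grab MemTotal there,
# then append 90% of it to any "soft memlock" line of the second file
awk '
NR==FNR { if (/MemTotal/) { split($0,f,/:/); $0 = f[2]; sum += $1 }; next }
/soft[[:space:]]+memlock/ { $0 = $0 OFS int(sum/100*90) }
{ print }
' meminfo.sample limits.sample
# prints:
#   hello
#   oracle soft memlock 16995949
#   bye
```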

I think your approach is overly complicated: there is no need to store the output in a file and then append it into another file.
What if you just store the value in a variable and then add it into your file?
var=$(command)
sed "/soft memlock/s/.*/& $var/" /tmp/limitstest.conf
Once you are confident with the output, add the -i in the sed operation.
Where, in fact, command can be something awk alone handles:
awk '/MemTotal/ {sum+=$2} END { printf "%d", sum/100*90 }' /proc/meminfo
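Using the MemTotal line from the question's /proc/meminfo excerpt as stand-in input, you can verify the computation without touching the real file:

```shell
# Pipe a sample MemTotal line (from the question) instead of reading /proc/meminfo
printf 'MemTotal:       18884388 kB\n' |
  awk '/MemTotal/ {sum+=$2} END { printf "%d\n", sum/100*90 }'
# prints 16995949, i.e. 90% of 18884388 truncated to an integer
```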
Here's a test of the sed part:
$ cat a
hello
oracle soft memlock
bye
$ var=2222
$ sed "/soft memlock/s/.*/& $var/" a
hello
oracle soft memlock 2222
bye

Related

How to get CPU, RAM and DISK using single command

There is a set of VMs in production and staging environments. I need to get the CPU, memory and disk capacity of all VMs. Is there any single command to get those at once?
You can use the command below to get the above details at once.
free -m | grep Mem | awk -F' ' '{printf "Mem : %s \nCPU: ", $2}' ; cat /proc/cpuinfo | grep processor | wc -l ;df -h |grep G | awk -F' ' '{print($2)}' | awk -F'G' '{for(i=1;i<=NF;i++){ sum += $i;} printf "DISK : %s \n", sum;}'| tail -1
Using awk as a "complete" solution without the need for numerous pipes to grep etc:
One liner:
awk '/^Mem:/ { mem=$2 } /^processor/ { pcnt++ } /^Filesystem/ { filesys=1;next } filesys==1 { filetot=filetot+$2;print filetot } END { printf "Memory: %s\nProcessor Total: %s\nDisk size total: %.2f\n",mem,pcnt,filetot }' <(free -m) /proc/cpuinfo <(df -h)
Explanation:
awk '/^Mem:/ {
mem=$2 # Search for a line beginning with "Mem:" in the output and set a mem variable
}
/^processor/ {
pcnt++ # For each line that starts "processor" increment a pcnt variable
}
/^Filesystem/ {
filesys=1; # Set a tracking filesys variable when a line is encountered beginning with "Filesystem"
next # Skip to the next line
}
filesys==1 {
filetot=filetot+$2; # When we are in the file system section, total the disk sizes (the second space delimited field)
}
END {
printf "Memory: %s\nProcessor Total: %s\nDisk size total: %.2f\n",mem,pcnt,filetot # Once we have processed all lines of the output, print the data we want.
}' <(free -m) /proc/cpuinfo <(df -h) # Redirect command back into awk to process the output

How to hash particular column in csv file | linux |

I have a scenario where I want to hash some columns of a csv file.
How can I do that with the data below?
ID|NAME|CITY|AGE
1|AB1|BBC|12
2|AB2|FGD|17
3|AB3|ASD|18
4|AB4|SDF|19
5|AB5|ASC|22
The NAME and AGE columns should get hashed, producing values like the output below:
ID|NAME|CITY|AGE
1|68b329da9111314099c7d8ad5cb9c940|BBC|77bAD9da9893er34099c7d8ad5cb9c940
2|69b32fga9893e34099c7d8ad5cb9c940|FGD|68bAD9da989yue34099c7d8ad5cb9c940
3|46b329da9893e3403453d8ad5cb9c940|ASD|60bfgD9da9893e34099c7d8ad5cb9c940
4|50Cd29da9893e34099c7d8ad5cb9c940|SDF|67bAD9da98973e34099c7d8ad5cb9c940
5|67bAD9da9893e34099c7d8ad5cb9c940|ASC|67bAD9da11893e34099c7d8ad5cb9c940
When I tested the code below, it gave me the same value for every row of the NAME column; it should give distinct hashed values:
awk '{
tmp="echo " $2 " | openssl md5 | cut -f2 -d\" \""
tmp | getline cksum
close(tmp)
$2=cksum
print
}' < sample.csv
output :
68b329da9893e34099c7d8ad5cb9c940
68b329da9893e34099c7d8ad5cb9c940
68b329da9893e34099c7d8ad5cb9c940
68b329da9893e34099c7d8ad5cb9c940
68b329da9893e34099c7d8ad5cb9c940
68b329da9893e34099c7d8ad5cb9c940
You may use it like this:
awk 'function hash(s, cmd, hex, line) {
cmd = "openssl md5 <<< \"" s "\""
if ( (cmd | getline line) > 0)
hex = line
close(cmd)
return hex
}
BEGIN {
FS = OFS = "|"
}
NR == 1 {
print
next
}
{
print $1, hash($2), $3, hash($4)
}' file
ID|NAME|CITY|AGE
1|d44aec35a11ff6fa8a800120dbef1cd7|BBC|2737b49252e2a4c0fe4c342e92b13285
2|157aa4a48373eaf0415ea4229b3d4421|FGD|4d095eeac8ed659b1ce69dcef32ed0dc
3|ba3c08d4a65f1baa1d7220a6802b5710|ASD|cf4278314ef8e4b996e1b798d8eb92cf
4|69be622e1c0d417ceb9b8fb0aa9dc574|SDF|3bb50ff8eeb7ad116724b56a820139fa
5|427872b1ac3a22dc154688ddc2050516|ASC|2fc57d6f63a9ee7e2f21a26fa522e3b6
You have to specify | as input and output field separators. Otherwise $2 is not what you expect, but an empty string.
awk -F '|' -v "OFS=|" 'FNR==1 { print; next } {
tmp="echo " $2 " | openssl md5 | cut -f2 -d\" \""
tmp | getline cksum
close(tmp)
$2=cksum
print
}' sample.csv
prints
ID|NAME|CITY|AGE
1|d44aec35a11ff6fa8a800120dbef1cd7|BBC|12
2|157aa4a48373eaf0415ea4229b3d4421|FGD|17
3|ba3c08d4a65f1baa1d7220a6802b5710|ASD|18
4|69be622e1c0d417ceb9b8fb0aa9dc574|SDF|19
5|427872b1ac3a22dc154688ddc2050516|ASC|22
Example using GNU datamash to do the hashing and some awk to rearrange the columns it outputs:
$ datamash -t'|' --header-in -f md5 2,4 < input.txt | awk 'BEGIN { FS=OFS="|"; print "ID|NAME|CITY|AGE" } { print $1, $5, $3, $6 }'
ID|NAME|CITY|AGE
1|1109867462b2f0f0470df8386036243c|BBC|c20ad4d76fe97759aa27a0c99bff6710
2|14da3a611e2f8953d76b6fb7866b01d1|FGD|70efdf2ec9b086079795c442636b55fb
3|710a24b9eac0692b1adaabd07726211a|ASD|6f4922f45568161a8cdf4ad2299f6d23
4|c4d15b255ef3c6a89d1fe2e6a26b8eda|SDF|1f0e3dad99908345f7439f8ffabdffc4
5|96b24a28173a75cc3c682e25d3a6bd49|ASC|b6d767d2f8ed5d21a44b0e5886680cb9
Note that the MD5 hashes in this answer are different from (at the time of writing) the ones in the other answers; that's because those approaches add a trailing newline to the strings being hashed, producing incorrect results if you want the exact hash:
$ echo AB1 | md5sum
d44aec35a11ff6fa8a800120dbef1cd7 -
$ echo -n AB1 | md5sum
1109867462b2f0f0470df8386036243c -
You might consider using a language that has md5 support included, or at least cache the md5 results (I assume that the name and age have a limited domain, which is smaller than the number of lines).
Perl has support for md5 out of the box:
perl -M'Digest::MD5 qw(md5_hex)' -F'\|' -le 'if (2..eof) {
$F[$_] = md5_hex($F[$_]) for (1,3);
print join "|",@F
} else { print }'
online demo: https://ideone.com/xg6cxZ (to my surprise ideone has perl available in bash)
Digest::MD5 is a core module, any perl installation should have it
-M'Digest::MD5 qw(md5_hex)' - this loads the md5_hex function
-l handle line endings
-F'\|' - autosplit fields on | (this implies -a and -n)
2..eof - range operator (or flip-flop as some want to call it) - true between line 2 and end of the file
$F[$_] = md5_hex($F[$_]) - replace field $_ with its md5 sum
for (1,3) - statement modifier runs the statement for 1 and 3 aliasing $_ to them
print join "|",@F - print the modified fields
else { print } - this handles the header
Note about speed: on my machine this processes ~100,000 lines in about 100 ms, compared with an awk variant of this answer that does 5,000 lines in ~1 minute 14 seconds (I wasn't patient enough to wait for 100,000 lines)
time perl -M'Digest::MD5 qw(md5_hex)' -F'\|' -le 'if (2..eof) { $F[$_] = md5_hex($F[$_]) for (1,3);print join "|",@F } else { print }' <sample2.txt > out4.txt
real 0m0.121s
user 0m0.118s
sys 0m0.003s
$ time awk -F'|' -v OFS='|' -i md5.awk '{ print $1,md5($2),$3,md5($4) }' <(head -5000 sample2.txt) >out2.txt
real 1m14.205s
user 0m50.405s
sys 0m35.340s
md5.awk defines the md5 function as such:
$ cat md5.awk
function md5(str, cmd, l, hex) {
cmd= "/bin/echo -n "str" | openssl md5 -r"
if ( ( cmd | getline l) > 0 )
hex = substr(l,1,32)
close(cmd)
return hex
}
I'm using /bin/echo because there are some variants of shell where echo doesn't have -n
I'm using -n mostly because I want to be able to compare the results with the perl results
substr(l,1,32) - on my machine openssl md5 doesn't return just the sum, it also has the file name - see: https://ideone.com/KGMWPe - substr gets only the relevant part (awk's substr is 1-indexed, so the start position must be 1, not 0)
I'm using a separate file because it seems much cleaner, and because I can switch between function implementations fairly easy
As I was saying in the beginning, if you really want to use awk, at least cache the result of the openssl tool.
$ cat md5memo.awk
function md5(str, cmd, l, hex) {
if (cache[str])
return cache[str]
cmd= "/bin/echo -n "str" | openssl md5 -r"
if ( ( cmd | getline l) > 0 )
hex = substr(l,1,32)
close(cmd)
cache[str] = hex
return hex
}
With the above caching, the results improve dramatically:
$ time awk -F'|' -v OFS='|' -i md5memo.awk '{ print $1,md5($2),$3,md5($4) }' <(head -5000 sample2.txt) >outmemo.txt
real 0m0.192s
user 0m0.141s
sys 0m0.085s
[savuso@localhost hash]$ time awk -F'|' -v OFS='|' -i md5memo.awk '{ print $1,md5($2),$3,md5($4) }' <sample2.txt >outmemof.txt
real 0m0.281s
user 0m0.222s
sys 0m0.088s
However, your mileage may vary: sample2.txt has 100,000 lines, with 5 different values for $2 and 40 different values for $4. Real-life data may vary!
Note: I just realized that my awk implementation doesn't handle headers, but you can get that from the other answers.

bump or increment the integer that is a result of grep

I'm grepping the file below with the output shown, but I want to increment the result by another number.
egrep -i --color=auto "[0-9]{10}" file
2017080802 ; Xen number
How can I make it 2017080803, at least?
Something like this?
awk '/[0-9]{10}/ { print 1+$1 }' file
awk '{$1=($1+1); print $0}' file will increment your first column in output.
Example:
a="2017080802 ; Xen number"; echo "$a" | awk '{$1=($1+1); print $0}'
2017080803 ; Xen number

script to return info from /proc/

I am trying to write a script that will return info from the /proc/cpuinfo, /proc/meminfo and /proc/version files.
From the cpuinfo file, I want to return the cpu Mhz and model name.
I can get these via these commands
more /proc/cpuinfo | grep "model name" | head -n 1
more /proc/cpuinfo | grep "cpu MHz"
for the meminfo file, I want to get total memory, memory free and total used. I can get the first 2 via these commands:
more /proc/meminfo | grep MemTotal
more /proc/meminfo | grep MemFree
and I can get the linux version # with this:
more /proc/version
I can then save this to a file by redirecting the first output into a file and then appending the next items using >> instead of >.
My problem is this - how do I write a script that will take the info from the above and place it into this format:
/proc/cpuinfo, Model name: (result of first command above)
/proc/cpuinfo, cpu Mhz: (result of 2nd)
/proc/meminfo, MemTotal: (result of 3rd)
/proc/meminfo, MemFree: (result of 4th)
/proc/meminfo, MemUsed: (calculate it based off memtotal and memfree)
/proc/version, Linux version #:
I know how to use cut, awk and more, etc but do not know how to set this up. I do not know how to force the calculation of the mem used either.
Any help you can give would be appreciated.
EDIT: I use more because I am not too familiar with Linux.
I am getting closer and closer to what I want to do with a combination of what is posted here and what I need to come up with.
MATH function -
I just want to take the memtotal and subtract memfree from it.
Could I just create a variable such as
memused=$(bc $memtotal - memfree)
and then echo it out?
With a simple shell function like:
filedata() {
grep -H "$@" | sed -e 's/:/, /'
}
You can get most of the data you need by calling
filedata 'model name' /proc/cpuinfo
filedata -E 'Mem(Total|Free)' /proc/meminfo
filedata . /proc/version
To get MemUsed you could use something like:
awk '/MemFree/ {free=$2} /MemTotal/ {total=$2} END {print FILENAME",","MemUsed:", total-free}' /proc/meminfo
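On the bc question from the edit: bc reads an expression on stdin, not as arguments, but for a plain integer subtraction the shell's own arithmetic is enough. A minimal sketch, using the sample values from this thread:

```shell
# Sample kB values taken from the /proc/meminfo excerpt in this thread
memtotal=18884388
memfree=1601952

# POSIX shell arithmetic handles the integer subtraction directly
memused=$((memtotal - memfree))
echo "/proc/meminfo, MemUsed: $memused kB"
# prints: /proc/meminfo, MemUsed: 17282436 kB
```

If you do want bc, `echo "$memtotal - $memfree" | bc` works, since bc takes the expression on stdin.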
Alternatively the following awk script will do it all for you (though not in exactly the order of your example output):
awk '/model name|cpu MHz|MemTotal|MemFree|^Linux/ {
print FILENAME",",$0
}
/MemTotal|MemFree/ {
v=$1
gsub(/^Mem/, "", v)
gsub(/:$/, "", v)
mem[v]=$2
}
END {
print "/proc/meminfo, MemUsed:", mem["Total"] - mem["Free"]
}' /proc/cpuinfo /proc/meminfo /proc/version
In a simplified way you can do the following:
EXAMPLE
#!/bin/sh
LOCATION=$1
if [ "$#" -ne "1" ]
then
echo "Usage: ./$0 <FILE>"
else
model=$(cat /proc/cpuinfo |grep -m 1 "model name"|cut -d' ' -f 4-);
mhz=$(cat /proc/cpuinfo |grep -m 1 "cpu MHz"|cut -d' ' -f 3-);
mem=$(cat /proc/meminfo | grep MemTotal|cut -d' ' -f 2-);
free=$(cat /proc/meminfo | grep MemFree|cut -d' ' -f 2-);
ver=$(cat /proc/version|cut -d' ' -f 3);
fi
echo -e \
"/proc/cpuinfo, Model Name: $model
/proc/cpuinfo, CPU MHz: $mhz
/proc/meminfo, MemTotal: $mem
/proc/meminfo, MemFree: $free
/proc/version, Linux Version #: $ver" > "$LOCATION"
That will place each result in a variable so you can echo it into a file that you declare when you call the script like sh test.sh mynewfile.txt.
As for "I do not know how to force the calculation of the mem used either.", please update your question to include how you expect those values to be presented (kB, MB, GB) and a sample of the output you are looking for.
Here is how to get value using awk only:
model=$(awk -F: '/model name/ {print $2;exit}' /proc/cpuinfo)
mhz=$(awk -F: '/cpu MHz/ {print $2;exit}' /proc/cpuinfo)
mem=$(awk -F"[: ]+" '/MemTotal/ {print $2;exit}' /proc/meminfo)
etc
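Putting those awk-extracted values together into the requested report could look like the sketch below (the .sample files stand in for the real /proc files, and their contents are invented):

```shell
# Invented stand-ins for /proc/cpuinfo and /proc/meminfo
printf 'model name\t: Intel(R) Sample CPU\ncpu MHz\t\t: 2400.000\n' > cpuinfo.sample
printf 'MemTotal:       18884388 kB\nMemFree:        1601952 kB\n' > meminfo.sample

model=$(awk -F: '/model name/ {print $2; exit}' cpuinfo.sample)
mhz=$(awk -F: '/cpu MHz/ {print $2; exit}' cpuinfo.sample)
mem=$(awk -F'[: ]+' '/MemTotal/ {print $2; exit}' meminfo.sample)
free=$(awk -F'[: ]+' '/MemFree/ {print $2; exit}' meminfo.sample)

printf '/proc/cpuinfo, Model name:%s\n' "$model"
printf '/proc/cpuinfo, cpu MHz:%s\n' "$mhz"
printf '/proc/meminfo, MemTotal: %s kB\n' "$mem"
printf '/proc/meminfo, MemFree: %s kB\n' "$free"
printf '/proc/meminfo, MemUsed: %s kB\n' "$((mem - free))"
```

With the real files, replace the .sample names with /proc/cpuinfo and /proc/meminfo.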

wput speed result as pass or fail

I'm using the following to output the result of an upload speed test
wput 10MB.zip ftp://user:pass@host 2>&1 | grep '\([0-9.]\+[KM]/s\)'
which returns
18:14:38 (10MB.zip) - '10.49M/s' [10485760]
Transfered 10,485,760 bytes in 1 file at 10.23M/s
I'd like to have the result 10.23M/s (i.e. the speed) echoed, and a comparison result:
if speed >= 5 M/s then echo "pass" else echo "fail"
So, the final output would be:
PASS 7 M/s
23/01/2013
Ideally I'd like it all done on a single line. So far I've got:
wput 100M.bin ftp://test:test@0.0.0.0 2>&1 | grep -o '\([0-9.]\+[KM]/s\)$' | awk ' { if (($1 > 5) && ($2 == "M/s")) { printf("FAST %s\n ", $0); }}'
however it doesn't output anything. If I remove
&& ($2 == "M/s"))
it works, but I obviously want it to output only above 5 M/s, and as it is it would still echo FAST even at just over 1 K/s. Can someone tell me what I've missed?
Using awk:
# Over 5M/s
$ cat pass
18:14:38 (10MB.zip) - '10.49M/s' [10485760]
Transfered 10,485,760 bytes in 1 file at 10.23M/s
$ awk 'END{f="FAIL "$NF;p="PASS "$NF;if($NF~/K\/s/){print f;exit};gsub(/M\/s/,"");print(int($NF)>5?p:f)}' pass
PASS 10.23M/s
# Under 5M/s
$ cat fail
18:14:38 (10MB.zip) - '3.49M/s' [10485760]
Transfered 10,485,760 bytes in 1 file at 3.23M/s
$ awk 'END{f="FAIL "$NF;p="PASS "$NF;if($NF~/K\/s/){print f;exit};gsub(/M\/s/,"");print(int($NF)>5?p:f)}' fail
FAIL 3.23M/s
# Also Handle K/s
$ cat slow
18:14:38 (10MB.zip) - '3.49M/s' [10485760]
Transfered 10,485,760 bytes in 1 file at 8.23K/s
$ awk 'END{f="FAIL "$NF;p="PASS "$NF;if($NF~/K\/s/){print f;exit};gsub(/M\/s/,"");print(int($NF)>5?p:f)}' slow
FAIL 8.23K/s
I'm not sure where you get 7 M/s from, though.
According to @Rubens, you can use grep -o with your regex to show the speed; just append $ for end of line:
wput 10MB.zip ftp://user:pass@host 2>&1 | grep -o '\([0-9.]\+[KM]/s\)$'
With perl you can easily do the remaining stuff
use strict;
use warnings;
while (<>) {
if (m!\s+((\d+\.\d+)([KM])/s)$!) {
if ($2 > 5 && $3 eq 'M') {
print "PASS $1\n";
} else {
print "FAIL $1\n";
}
}
}
and then call it:
wput 10MB.zip ftp://user:pass@host 2>&1 | perl script.pl
This is an answer to the question update.
With the awk program, you haven't split the speed into numeric and unit value. It is just one string.
Because a fast speed is greater than 5 M/s, you can ignore K/s and extract the numeric part by splitting at the character M. Then you have the speed in $1 and can compare it:
wput 100M.bin ftp://test:test@0.0.0.0 2>&1 | grep -o '[0-9.]\+M/s$' | awk -F 'M' '{ if ($1 > 5) { printf("FAST %s\n", $0); }}'
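To check the pipeline without an actual upload, you can feed it a fabricated wput summary line:

```shell
# Simulated last line of wput output (numbers made up)
echo 'Transfered 10,485,760 bytes in 1 file at 10.23M/s' |
  grep -o '[0-9.]\+M/s$' |
  awk -F 'M' '{ if ($1 > 5) printf("FAST %s\n", $0) }'
# prints: FAST 10.23M/s
```

A line ending in K/s never matches the grep, and an M/s value of 5 or below produces no output.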
