How to search for a string in Linux

How do I extract only the /app/xxxx field from the lines below, prepend the df -P -T command to the extracted strings, and save the output in the format shown?
edlp_nps_app:x:23449:5000:EDLP_NPS_APP (HP):/app/edlp_nps_app:/bin/bash
genxp_app:x:23414:15887:GENXP_APP (HP):/app/genxp_app:/bin/bash
icegnapp:x:21697:15954:ICEGNAPP (HP):/app/icegnapp:/bin/bash
icegnftp:x:21554:15416:ICEGNFTP
(HP):/app/icegnftp:/usr/libexec/openssh/sftp-server
df -P -T /app/XXXXX
df -P -T /app/edlp_nps_app
Output:
Filesystem Type 1024-blocks Used Available Capacity Mounted on
/dev/mapper/rootvg-rootvol ext4 144365708 27057836 110769428 20% /
The output I require is FS, type, Mounted on, and appname:
/dev/mapper/rootvg-rootvol ext4 / /app/edlp_nps_app
I tried an awk command but it didn't work.

Using awk, test if the 6th field starts with /app; if true, print the df -P -T command with that field:
awk -F':' '$6~/^\/app/{print "df -P -T "$6}' infile | bash
To save the result in a file:
awk -F':' '$6~/^\/app/{print "df -P -T "$6}' infile | bash > outfile
To avoid printing the header multiple times, you may pipe through another awk like below:
awk -F':' '$6~/^\/app/{print "df -P -T "$6}' infile | bash | awk 'FNR==1{print;next}/Filesystem/{next}1' >outfile
Test results:
Input:
$ cat file
edlp_nps_app:x:23449:5000:EDLP_NPS_APP (HP):/app/edlp_nps_app:/bin/bash
genxp_app:x:23414:15887:GENXP_APP (HP):/app/genxp_app:/bin/bash
icegnapp:x:21697:15954:ICEGNAPP (HP):/app/icegnapp:/bin/bash
icegnftp:x:21554:15416:ICEGNFTP
(HP):/app/icegnftp:/usr/libexec/openssh/sftp-server
Output:
$ awk -F: '$6~/^\/app/{print "df -P -T "$6}' file
df -P -T /app/edlp_nps_app
df -P -T /app/genxp_app
df -P -T /app/icegnapp
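If you also want to append the app name and reduce each result to the four required columns (FS, type, mounted on, appname), here is a minimal sketch along the same lines, assuming the same infile/outfile names as above; df -P -T prints the mount point as its last field, and NR==2 skips the header:
awk -F: '$6~/^\/app/{print $6}' infile |
while read -r app
do
    df -P -T "$app" | awk -v a="$app" 'NR==2{print $1, $2, $NF, a}'
done > outfile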

To extract /app/xxx with sed:
First we drop the lines that do not contain /app:
sed '/\/app/!d'
Then we extract the /app/... part:
's|.*\(/app/[^:]*\).*|\1|'
Combined:
sed '/\/app/!d;s|.*\(/app/[^:]*\).*|\1|'
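Run against the sample input above, that pipeline prints just the home directories, something like:
/app/edlp_nps_app
/app/genxp_app
/app/icegnapp
/app/icegnftp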
The output I require is FS, type, Mounted on, and appname:
/dev/mapper/rootvg-rootvol ext4 / /app/edlp_nps_app
Let's tweak df's output a little bit by having it supply only the relevant fields:
df /app/foo --output=source,fstype,target
I do not believe we can get df to drop the header, so we strip it
df /app/foo --output=source,fstype,target | sed '1d'
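For /app/edlp_nps_app, given the mount information shown earlier, that would print a single line like:
/dev/mapper/rootvg-rootvol ext4 /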
I will append the app name manually and put it all together in a script named "magic.sh":
#!/bin/sh
if [ ! -f "$1" ]
then
    echo "Input file does not exist!"
    exit 1
fi
if [ -f "$2" ]
then
    echo "Output file already exists!"
    exit 2
fi
sed '/\/app/!d;s|.*\(/app/[^:]*\).*|\1|' "$1" | while read -r app
do
    echo "$(df "$app" --output=source,fstype,target | sed '1d') $app" >> "$2"
done
Assuming the specified input is in a file named "input" and you want the result in a file named "output", this is how to call it:
sh magic.sh input output
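With the sample input, the output file would then contain one line per /app directory in the requested format, e.g.:
/dev/mapper/rootvg-rootvol ext4 / /app/edlp_nps_app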

Related

Linux: append all filenames in path to text file

I want to add the filenames of all files of a certain type (*.cub) in the path to a text file in the same path. This file will become the batch (.submit) file that I can run overnight. I also need to adapt the name a bit.
I do not really know how to describe it better, so I'll give an example:
Let's say I have three files: 001.cub, 002.cub & 003.cub
Then the final text file must be:
[program] -i 001.cub -o 001.vdb
[program] -i 002.cub -o 002.vdb
[program] -i 003.cub -o 003.vdb
It seems a fairly easy operation, but I simply can't get it right.
Also, it really has to become a .submit (or at least some text) file. I cannot run the program immediately.
I hope someone can help!
A simple for loop will do the job:
for i in *.cub
do
    b=$(basename "$i" .cub)
    echo "program -i \"$b.cub\" -o \"$b.vdb\""
done > output.txt
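With the three example files, output.txt would contain lines like:
program -i "001.cub" -o "001.vdb"
program -i "002.cub" -o "002.vdb"
program -i "003.cub" -o "003.vdb"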
Create an empty sh file
List the files *.cub and loop through them
Store the sequence by splitting on the dot [.]
Echo the required string and append it to the sh file from step 1
echo -n "" > 'Run.sh'
for filename in `ls *.cub`
do
sequence=`echo $filename | cut -d "." -f1`
echo "Program -i $filename -o $sequence.vdb" >> Run.sh
done
Directly put the stream into the file as below:
for filename in `ls *.cub`
do
sequence=`echo $filename | cut -d "." -f1`
echo "Program -i $filename -o $sequence.vdb"
done > Run.sh
To retain everything before the extension in the variable:
for filename in `ls *.cub`
do
sequence=`echo $filename | rev | cut -d "." -f2- | rev`
echo "Program -i $filename -o $sequence.vdb"
done > Run.sh
To extract only the numbers from the filename and use them accordingly:
for filename in `ls *.cub`
do
sequence=`echo $filename | sed 's/[^0-9]*//g'`
echo "Program -i $filename -o $sequence.vdb"
done > Run.sh
This oneliner will do what you want:
ls *.cub | sort | awk '{split($1,x,"."); print "[program] -i "$1" -o "x[1]".vdb "}' > something.sh
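With 001.cub, 002.cub and 003.cub present, something.sh would contain lines like:
[program] -i 001.cub -o 001.vdb
[program] -i 002.cub -o 002.vdb
[program] -i 003.cub -o 003.vdb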

Invalid option 3 for cat

When I try to run the script below it says invalid option 3 for cat. What's the problem?
I am trying to use an index file, which specifies which file is ham and which is spam, to read the files and train the spam filter.
#!bin/bash
DirBogoDict=$1
BogoFilter=/home/gunna/Downloads/bogofilter-1.2.4/src/bogofilter
x=0
for i in 'cat index | fgrep spam | head -300 | awk -F "/" '{print$2"/"$3}''
do
x=$((x+1)) ; echo $x
cat /home/gunna/Downloads/db-6.1.19.NC/build_unix/ceas08-1/$i| $BogoFilter -d $DirBogoDict -M -k 1024 -s
done
for i in 'cat index | fgrep ham | head -300 | awk -F "/" '{print$2"/"$3}''
do
x=$((x+1)) ; echo $x
cat /home/gunna/Downloads/db-6.1.19.NC/build_unix/ceas08-1/$i | $BogoFilter -d $DirBogoDict -M -k 1024 -n
done
This part
'cat index | fgrep spam | head -300 | awk -F "/" '{print$2"/"$3}''
needs to be in back-ticks, not single quotes
`cat index | fgrep spam | head -300 | awk -F "/" '{print$2"/"$3}'`
And you could probably simplify it a little with:
for i in `fgrep spam index | head -300 | awk -F "/" '{print$2"/"$3}'`
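As a side note not from the original answer, the same command substitution can be written with the more readable $( ) form:
for i in $(fgrep spam index | head -300 | awk -F "/" '{print $2"/"$3}')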
Kdopen has explained the error you got; here is improved code for a similar for-loop function.
DirBogoDict=$1
BogoFilter=/home/gunna/Downloads/bogofilter-1.2.4/src/bogofilter
awk '/spam/&&++myctr<=300{print $2 FS $3}' FS="/" index |while read i
do
cat /home/gunna/Downloads/db-6.1.19.NC/build_unix/ceas08-1/"$i"| $BogoFilter -d ${DirBogoDict} -M -k 1024 -s
done
awk '/ham/&&++myctr<=300{print $2 FS $3}' FS="/" index |while read i
do
cat /home/gunna/Downloads/db-6.1.19.NC/build_unix/ceas08-1/"$i" | $BogoFilter -d ${DirBogoDict} -M -k 1024 -n
done
Also look at your file names, since cat is giving an invalid option error. To demonstrate this, let's say you have a file named -3error
Executing the following command
cat -3error
will give
cat: invalid option -- '3'
cat therefore thinks the "-" is followed by one of its command-line options, and as a result you get an invalid option error.
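As an aside (not part of the original answer), you can stop cat from treating such a name as an option by ending option parsing with -- or by giving a path prefix:
cat -- -3error
cat ./-3error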

Dynamic Comment Changing In Linux Shell Script

I have two files. One is /etc/passwd, which contains three new users: methun, salam and kalam. The other is /methunfiles/mypractice/myfile/passwd, which contains two columns, like methun:xxx, salam:firstboy, kalam:secondboy; the first column contains methun, salam, kalam and the second column contains xxx, firstboy, secondboy. My job is to match the first column of /etc/passwd against the first column of /methunfiles/mypractice/myfile/passwd. If a match is found, set the comment field of that /etc/passwd entry to the second column of /methunfiles/mypractice/myfile/passwd for the matching name. I tried the following code but got no output. I want to use a loop here. Can anybody help? My output should look like methun:x:501:502:xxx:..., salam:x:439:439:firstboy, etc.
mainUser=cat /etc/passwd | awk -F ':' '{print $1}'
modifyUser=cat /methunfiles/mypractice/myfile/passwd | awk -F ':' '{print $1}'
modifyComment=cat /methunfiles/mypractice/myfile/passwd | awk -F ':' '{print $2}'
for muser in $mainUser
do
for moduser in $modifyUser
do
for mcomment in $modifyComment
do
if ["$muser" == "$moduser" ]
chmod -c "$mcomment" $muser
fi
done
done
done
The join command is what you need:
f1=/etc/passwd
f2=/methunfiles/mypractice/myfile/passwd
join -t: -j1 -o 2.1,2.2 <(sort -t: -k1,1 "$f1") <(sort -t: -k1,1 "$f2") |
while IFS=: read -r user new_comment; do
    if usermod -c "$new_comment" "$user"; then
        getent passwd "$user"
    else
        echo "could not modify comment field for $user"
    fi
done
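For the sample data in the question, the join would feed lines like these (in sorted order) to the loop, and usermod -c would then set each user's comment field accordingly:
kalam:secondboy
methun:xxx
salam:firstboy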

Piping a list of two delimiter-separated variables to a new command in BASH

I need to pipe a list of two delimiter-separated variables to a command in BASH. I accidentally deleted my girlfriend's files from her SD card. I cloned an image of it using dd and used Sleuth Kit to recover the inode numbers and names of the deleted files.
fls -d -r bckup_irmasSD1.img | awk 'gsub(/\t|.*\*/,"")' | less
This gives me an example output:
6689308:DCIM/Camera/2014-02-05 20.51.30.jpg
6689560:DCIM/Camera/2013-08-10 16.37.44.jpg
6689563:DCIM/Camera/2013-08-10 16.37.52.jpg
6689566:DCIM/Camera/2013-09-14 19.00.06.jpg
6689567:DCIM/Camera/_I966F~2.MP4
29211:Android/data/com.google.android.apps.maps/cache/_ACHE_~8.M
29298:Android/data/com.google.android.apps.maps/cache/_ACHE_~2.6
29301:Android/data/com.google.android.apps.maps/cache/cache_vts_GMM.7
29304:Android/data/com.google.android.apps.maps/cache/cache_vts_GMM.8
73224:bluetooth/DSC00360.jpg
73227:bluetooth/DSC00360_2.jpg
14728713:.downloadTemp/1616021_716182491801349_1111393555_n.mp4
14728718:.downloadTemp/1616117_10151911525912011_1690760246_n.mp4
18898441:download/1595926_47757
18898445:download/1614824_234800313358133_914357470_n.mp4
18898449:download/_24316~1.MP4
To recover a deleted file by inode number, you can use the command line tool icat:
icat -d /tmp/disk.img 18898449 > /recover/download/_24316~1.MP4
How can I pipe this cleanly to a command to recover all files?
fls -d -r bckup_irmasSD1.img |
awk 'gsub(/\t|.*\*/,"")' |
while IFS=: read -r inode filename; do
mkdir -p /recover/"${filename%/*}"
icat -d /tmp/disk.img $inode > /recover/"$filename"
done
You could use awk again to split your lines and then call your command:
fls -d -r bckup_irmasSD1.img | awk 'gsub(/\t|.*\*/,"")' > inodes.txt
awk -F: '{system("icat -d /tmp/disk.img " $1 " > \"" $2 "\"")}' inodes.txt
Make sure none of your filenames contain a : and buy your girlfriend some flowers!

Bash capturing output of awk into array

I am stuck on a little problem. I have a command which pipes output to awk, but I want to capture the output into an array, one element at a time.
My example:
myarr=$(ps -u kdride | awk '{ print $1 }')
But that capture all my output into one giant string separated by commas:
output: PID 3856 5339 6483 10448 15313 15314 15315 15316 22348 29589 29593 32657 1
I also tried the following:
IFS=","
myarr=$(ps -u kdride | awk '{ print $1"," }')
But the output is: PID, 3856, 5339, 6483, 10448, 15293, 15294, 15295, 15296, 22348, 29589, 29593, 32657, 1
I want to be able to capture each individual PID into its own array element. Setting IFS='\n' does not do anything and retains my original output. What change do I need to make for this to work?
Add additional parentheses, like this:
myarr=($(ps -u kdride | awk '{ print $1 }'))
# Now access elements of an array (change "1" to whatever you want)
echo ${myarr[1]}
# Or loop through every element in the array
for i in "${myarr[#]}"
do
:
echo $i
done
See also bash — Arrays.
Use Bash's builtin mapfile (or its synonym readarray)
mapfile -t -s 1 myarr < <(ps -u myusername | awk '{print $1}')
At least on GNU/Linux you can format the output of ps, so there is no need for awk or -s 1:
mapfile -t myarr < <(ps -u myusername -o pid=)
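Either way, you can verify that each PID landed in its own element by looping over the array:
for pid in "${myarr[@]}"
do
    echo "$pid"
done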
