Linux shell scripting: How to store output from terminal in integers (but only numbers)?

I'm new to shell scripting and here is my problem:
I want to store the PIDs from the output of airmon-ng check in variables (e.g. $1, $2, $3) so that I can then run kill $1 $2 $3.
Here is a sample output of airmon-ng check:
Found 3 processes that could cause trouble.
If airodump-ng, aireplay-ng or airtun-ng stops working after
a short period of time, you may want to kill (some of) them!
PID Name
707 NetworkManager
786 wpa_supplicant
820 dhclient
I want to grab the numbers 707, 786 and 820.
I tried using set `airmon-ng check` and then a for loop:
set `airmon-ng check`
n=$#
for (( i=0; i<=n; i++ ))
do
echo $i
done
it just outputs the loop indices (1, 2, 3, ... 36), not the actual words or numbers, so I couldn't figure out how I should do it.

airmon-ng check | egrep -o '\b[0-9]+\b' | xargs kill
egrep is grep with extended regular expressions (like grep -E); -o says to extract only the matching parts; \b matches word boundaries, so you don't pick up numbers accidentally occurring in process names or the like; [0-9]+ matches one or more decimal digits; and xargs kill passes all the matches as arguments to the kill command.
Note that parsing output intended to be read by humans might not always be a good idea. Also, just killing all those processes doesn't sound too smart either, but proper usage of aircrack-ng is beyond this question.

You can get a list of the PIDs separated by spaces, e.g. like this (everything from the 1st column after the "PID" header line):
l=`airmon-ng check | awk 'BEGIN { p=0 } { if (p) { print $1" "; } if ($1=="PID") { p=1 } }' | tr '\n' ' '`
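The variable can then be expanded unquoted, so the shell splits the PIDs into separate arguments:
kill $l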

Why not use grep?
myvar=$(airmon-ng check | grep -o '[0-9]\{3,6\}')
This assumes a PID of 3 to 6 digits; -o makes grep print only the matched numbers rather than whole lines. It will also grab any other digit run of a similar length from the airmon-ng output, so it may not work as well if the output includes other such strings.

I would use awk for this and store the output in an array
pids=( $(airmon-ng check | awk '/^[[:blank:]]+[[:digit:]]+[[:blank:]]+/{print $1}') )
# 'pids' is an array
kill "${pids[@]}" # killing all the processes thus found

Related

Pass command-line arguments to grep as search patterns and print lines which match them all

I'm learning about grep commands.
I want to make a program that, when the user enters more than one word, outputs the lines of a data file that contain all of those words.
So I joined the words the user typed with '|' and put the result into a grep command, which gave me the program I first intended.
But that is an OR operation, and I want an AND operation.
So I learned how to do an AND operation with grep commands, as follows:
cat <file> | grep 'pattern1' | grep 'pattern2' | grep 'pattern3'
But I don't know how to put the user's input into the 'pattern1', 'pattern2', 'pattern3' positions, because the number of words the user inputs is not fixed.
As the user input grows, grep must be run through more and more pipes, and I don't know how to build that part.
The user input is as follows:
$ [the name of my program] 'pattern1' 'pattern2' 'pattern3' ...
I'd really appreciate your help.
With grep -f you can grep for multiple patterns, each given on its own line in a file.
With <(command) you can let Bash treat the output of command as a file.
With printf "%s\n" and a list of arguments, each argument is printed on a new line.
Together:
grep -f <(printf "%s\n" "$@") datafile
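For instance, the printf part alone, applied to three arguments:
$ printf "%s\n" 'pattern1' 'pattern2' 'pattern3'
pattern1
pattern2
pattern3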
Suggesting to use awk pattern logic:
awk '/RegExp-pattern-1/ && /RegExp-pattern-2/ && /RegExp-pattern-3/' input.txt
The advantages: you can combine RegExp patterns with the logic operators && and ||, and you scan the whole file only once.
The disadvantages: you must provide the files list (awk can't traverse subdirectories itself), and the RegExp syntax is limited compared to grep -E or grep -P.
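Since the number of patterns varies, that condition can be built from the script's arguments. A rough sketch, assuming the patterns contain no / or single quotes:
cond=""
for p in "$@"; do
  cond="${cond:+$cond && }/$p/"   # builds /p1/ && /p2/ && ...
done
awk "$cond" input.txt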
In principle, what you are asking could be done with a loop with output to a temporary file.
file=inputfile
temp=$(mktemp -d -t multigrep.XXXXXXXXX) || exit
trap 'rm -rf "$temp"' ERR EXIT
for regex in "$@"; do
grep "$regex" "$file" >"$temp"/output
mv "$temp"/output "$temp"/input
file="$temp"/input
done
cat "$temp"/input
However, a better solution is probably to arrange for Awk to check for all the patterns in one go, and avoid reading the same lines over and over again.
Passing the arguments to Awk with quoting intact is not entirely trivial. Here, we simply pass them as command-line arguments and process those into an array within the Awk script itself.
awk 'BEGIN { for(i=1; i<ARGC; ++i) a[i]=ARGV[i];
ARGV[1]="-"; ARGC=2 }
{ for(n=1; n<i; ++n) if ($0 !~ a[n]) next; }1' "$@" <file
In brief, in the BEGIN block, we copy the command-line arguments from ARGV to a, then replace ARGV and ARGC to pass Awk a new array of (apparent) command-line arguments which consists of just - which means to read standard input. Then, we simply iterate over a and skip to the next line if the current input line from standard input does not match. Any remaining lines have matched all the patterns we passed in, and are thus printed.
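Saved as a script, say matchall.sh (a hypothetical name), with the awk command's "$@" receiving the script's arguments, usage would mirror the question's:
./matchall.sh 'pattern1' 'pattern2' 'pattern3' <datafile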

Linux Scripting with Spaces in Filenames

I am currently working with vendor-provided software that sends attachment files to another script, which text-extracts from the listed file. The process fails when we receive files from an outside source that contain spaces, because the vendor-supplied software does not surround the filename in quotes; when the text-extraction script is run, it receives a filename that splits apart on the spaces and causes an error in the extractor script. The vendor-provided software is not editable by us.
This whole process is designed to be an automated transfer, so having this wrench that could be randomly thrown into the gears is an issue.
What we're trying to do is handle the spaced name in our text-extractor script, since that is the piece we have some control over. After a quick Google, it seemed like changing the IFS value for the script would be the quick solution, but unfortunately the arguments have already been split apart by the time our script sees them.
The script I'm using takes in a -e value, a -i value, and a -o value. These values are sent from the vendor supplied script, which I have no editing control over.
#!/bin/bash
usage() { echo "Usage: $0 -i input -o output -e encoding" 1>&2; exit 1; }
while getopts ":o:i:e:" o; do
case "${o}" in
i)
inputfile=${OPTARG}
;;
o)
outputfile=${OPTARG}
;;
e)
encoding=${OPTARG}
;;
*)
usage
;;
esac
done
shift $((OPTIND-1))
...
...
<Uses the inputfile, outputfile, and encoding variables>
I admit there may be pieces of this I don't fully understand, and it could be a simple fix, but my end goal is to extract the -o, -i, and -e values so that each holds its entire argument, regardless of any spaces within it. I can handle quoting the rest of the script once I can extract the filename value.
The script fragment that you have posted does not have any issues with spaces in the arguments.
The following, for example, does not need quoting (since it's an assignment):
inputfile=${OPTARG}
All other uses of $inputfile in the script should be double quoted.
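For example (textextract is a hypothetical stand-in for whatever the script actually runs):
textextract -e "$encoding" -i "$inputfile" -o "$outputfile" # quoted, so a name containing spaces stays one argument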
What matters is how this script is called.
This would fail and would assign only hello to the variable inputfile:
$ ./script.sh -i hello world.txt
The string world.txt would prompt the getopts function to stop processing the command line and the script would continue with the shift (world.txt would be left in $1 afterwards).
The following would correctly assign the string hello world.txt to inputfile:
$ ./script.sh -i "hello world.txt"
as would
$ ./script.sh -i hello\ world.txt
The following script uses awk to split the arguments while keeping the spaces in the file names. The arguments can be in any order. It does not handle multiple consecutive spaces in an argument; it collapses them to one.
#!/bin/bash
IFS=' '
str=$(printf "%s" "$*")
istr=$(echo "${str}" | awk 'BEGIN {FS="-i"} {print $2}' | awk 'BEGIN {FS="-o"} {print $1}' | awk 'BEGIN {FS="-e"} {print $1}')
estr=$(echo "${str}" | awk 'BEGIN {FS="-e"} {print $2}' | awk 'BEGIN {FS="-o"} {print $1}' | awk 'BEGIN {FS="-i"} {print $1}')
ostr=$(echo "${str}" | awk 'BEGIN {FS="-o"} {print $2}' | awk 'BEGIN {FS="-e"} {print $1}' | awk 'BEGIN {FS="-i"} {print $1}')
inputfile=""${istr}""
outputfile=""${ostr}""
encoding=""${estr}""
# call the jar
There was an issue when calling the jar where Java threw a MalformedUrlException on a filename with a space.
So after reading through the commentary, we decided that although it may not be the right answer for every scenario, the right answer for this specific scenario was to extract the pieces manually.
Because we are building this around a pre-built script that passes arguments to it, and we aren't updating that script any time soon, we can accept with certainty that this script will always receive the -i, -o, and -e flags, with spaces between the pieces, which causes everything passed in to land as separate words in $*.
And we can assume that the text after a flag belongs to that flag until another flag is encountered. This leaves us 3 scenarios:
The variable contains one of the flags
The variable contains the first piece of a parameter immediately after the flag
The variable contains part 2+ of a parameter, and the space in the name was interpreted as a split, and needs to be reinserted.
One of the other issues I kept running into was getting string literals to compare equal to variables in my if statements. To resolve that, I pre-stored all the relevant data in array variables, so I could test $variable == $otherVariable.
Although I don't expect it to change, we also handled the case of the three flags appearing in a different order than we anticipate (our assumption is that they arrive as i, o, e, but we can't see exactly what is passed). The parameters are dumped into an array in the order they were read, and a parallel array tracks whether the items in slots 0, 1, 2 relate to i, o, e.
The final result still has one flaw: if there is more than one consecutive space in the filename, the whitespace is trimmed before processing, and I can only account for one space. But seeing as we processed over 4000 files before encountering one with a space, I find it unlikely, given the naming conventions, that we would encounter something with more than one.
At that point, we would have to be stepping in for a rare intervention anyways.
Final code change is as follows:
#!/bin/bash
IFS='|'
position=-1
ioeArray=("" "" "")
previous=""
flagArr=("-i" "-o" "-e" " ")
ioePattern=(0 1 2)
#echo "for loop:"
for i in $*; do
#printf "%s\n" "$i"
if [ "$i" == "${flagArr[0]}" ] || [ "$i" == "${flagArr[1]}" ] || [ "$i" == "${flagArr[2]}" ]; then
((position += 1));
previous=$i;
case "$i" in
"${flagArr[0]}")
ioePattern[$position]=0
;;
"${flagArr[1]}")
ioePattern[$position]=1
;;
"${flagArr[2]}")
ioePattern[$position]=2
;;
esac
continue;
fi
if [[ $previous == "-"* ]]; then
ioeArray[$position]=${ioeArray[$position]}$i;
else
ioeArray[$position]=${ioeArray[$position]}" "$i;
fi
previous=$i;
done
echo "extracting (${ioeArray[${ioePattern[0]}]}) to (${ioeArray[${ioePattern[1]}]}) with (${ioeArray[${ioePattern[2]}]}) encoding."
inputfile=""${ioeArray[${ioePattern[0]}]}"";
outputfile=""${ioeArray[${ioePattern[1]}]}"";
encoding=""${ioeArray[${ioePattern[2]}]}"";

filtering output of who with grep and cut

I have this exercise:
Create a bash script that checks whether the user passed as a parameter is
connected, and if he is, displays when he connected. Hints: use the who command, the grep filter and the
cut command.
But I'm having some trouble solving it.
#!/bin/bash
who > who.txt
if grep $1 who.txt
then
for a in who.txt
do
echo "$a"
done
else
echo "$1 isnt connected"
fi
So first of all I want to keep only the line where the user appears in the .txt, and then I want to cut each part of the who output in a loop to keep only the date; the problem is that I don't know how to cut here, because the fields are separated by multiple spaces.
So I'm really stuck, and I don't see how to go about this. I'm a beginner with bash.
If I understand correctly, you simply want to check whether a user is logged in; that is what the users command is for. If you want to wrap it in a short script, you could do something like the following:
#!/bin/bash
[ -z "$1" ] && { ## validate 1 argument given on command line
printf "error: insufficient input, usage: %s username.\n" "${0##*/}" >&2
exit 1
}
## check if that argument is among the logged in users
if users | grep -q "$1" ; then
printf " user: %s is logged in.\n" "$1"
else
printf " user: %s is NOT logged in.\n" "$1"
fi
Example/Use
$ bash chkuser.sh dog
user: dog is NOT logged in.
$ bash chkuser.sh david
user: david is logged in.
cut is a rather awkward tool for parsing who's output, unless you use fixed column positions. In delimiter mode, with -d ' ', every single space starts a new field, so a run of spaces produces a run of empty fields; it's not like awk, where fields are separated by runs of whitespace.
who(1) output looks like this (and GNU who has no option to cut it down to just the username/time):
$ who
peter tty1 2015-11-13 18:53
john pts/13 2015-11-12 08:44 (10.0.0.1)
john pts/14 2015-11-12 08:44 (10.0.0.1)
john pts/15 2015-11-12 08:44 (10.0.0.1)
john pts/16 2015-11-12 08:44 (10.0.0.1)
peter pts/9 2015-11-14 16:09 (:0)
I didn't check what happens with very long usernames, whether they're truncated or whether they shift the rest of the line over. Parsing with awk '{print $3, $4}' would feel much safer, since it would still work regardless of exact column positions.
But since you need to use cut, let's assume that those exact column positions (the time occupying columns 23 through 38) are constant across all the systems where this script needs to work, and across all terminal widths. (who doesn't appear to vary its output with $COLUMNS or with the tty driver's column width, the columns field in stty -a output.)
Putting all that together:
#!/bin/sh
who | grep "^$1 " | cut -c 23-38
The regex on the grep command line matches only at the beginning of the line, and requires a space right after the username (to avoid substring matches). The lines that match are then filtered through cut to extract only the columns containing the timestamp.
With an empty command-line argument, this prints the login time for every logged-in user. If the pattern doesn't match anything, the output is empty. To detect that explicitly and print something else, capture the pipeline output with var=$(pipeline) and test whether it is the empty string.
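A minimal sketch of that check, using the same pipeline:
login_time=$(who | grep "^$1 " | cut -c 23-38)
if [ -n "$login_time" ]; then
    echo "$1 connected at: $login_time"
else
    echo "$1 is not connected"
fi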
This will print a time for every separate login by the same user. You could use grep's -m count-limit option (see the man page) to stop after one match, but it might not be the most recent time. You might use sort -n | head -1 or something.
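For example, keeping only the first timestamp in sort order (a sketch, assuming the fixed columns above):
who | grep "^$1 " | cut -c 23-38 | sort -n | head -n 1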
If you don't have to write a loop in the shell, don't. It's much better to write a pipeline that makes one pass over the data. The shell itself is slow, but as long as it doesn't have to parse every line of what you're dealing with, that doesn't matter.
Also note how I quoted the expansion of $1 with double quotes, to avoid the shell applying word splitting and glob expansion to it.
For more shell stuff, see the Wooledge Bash FAQ and guide. That's a good place to get started learning idioms that don't suck (i.e. don't break when you have filenames and directories with spaces in them, or filenames containing a ?, or lines with trailing spaces that you want to not munge...).

How to pipe all the output of "ps" into a shell script for further processing?

When I run this command:
ps aux|awk {'print $1,$2,$3,$11'}
I get a listing of the user, PID, CPU% and the actual command.
I want to pipe all those listings into a shell script that calculates the CPU% and, if it is greater than, say, 5, kills the process via the PID.
I tried piping it to a simple shell script, i.e.
ps aux|awk {'print $1,$2,$3,$11'} | ./myscript
where the content of my script is:
#!/bin/bash
# testing using positional parameters
echo "$1 $2 $3 $4"
But I get a blank output. Any idea how to do this?
Many thanks!
If you use awk, you don't need an additional bash script. Also, it is a good idea to reduce the output of the ps command so you don't have to deal with extra information:
ps acxho user,pid,%cpu,cmd | awk '$3 > 5 {system("echo kill " $2)}'
Explanation
The extra ps flags I use:
c: command only, no extra arguments
h: no header, good for scripting
o: output format. In this case, only output the user, PID, %CPU, and command
The awk command compares the %CPU, which is the third column, against a threshold (5). If it is over the threshold, it issues the system command to kill that process.
Note the echo in the command. Once you are certain the script works the way you like, remove the word echo from the command to execute the kill for real.
Your script needs to read its input
#!/bin/bash
while read a b c d; do
echo $a $b
done
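A sketch of how such a read loop could implement the actual goal (a dry run: drop the echo to really kill; the field order matches the question's awk output):
#!/bin/bash
while read user pid cpu cmd; do
    # %CPU is a float, so compare it with awk rather than bash integer arithmetic
    if awk -v c="$cpu" 'BEGIN { exit !(c > 5) }'; then
        echo kill "$pid"
    fi
done
It would then be used as: ps aux | awk '{print $1,$2,$3,$11}' | ./myscript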
I think you can get it using the xargs command to pass the awk output to your script as arguments:
ps aux|awk {'print $1,$2,$3,$11'} | xargs ./myscript
Some extra info about xargs: http://en.wikipedia.org/wiki/Xargs
When piping input from one process to another in Linux (or POSIX-compliant systems) the output is not given as arguments to the receiving process. Instead, the standard output of the first process is piped into the standard input of the other process.
Because of this, your script cannot work. $1...$n access variables that have been passed as arguments to it; as there are none, it won't display anything. Instead, you have to read the standard input into variables with the read command (as pointed out by William).
The pipe '|' redirects the standard output of the left to the standard input of the right. In this case, the output of the ps goes to the input of awk, then the output of awk goes to the stdin of the script.
Therefore your script needs to read its stdin.
#!/bin/bash
read var1 var2 var3 ...
Then you can do whatever you want with those variables.
More info, type in bash: help read
If I understood your problem correctly, you want to kill every process that exceeds X% of the CPU (using ps aux).
Here is the solution using AWK:
ps aux | grep -v "%CPU" | awk '{if ($3 > XXX) { print "Killing process with PID "$2", called "$11", consuming "$3"% and launched by "$1; system( "kill -9 " $2 );}}' -
Where XXX is your threshold (% of CPU).
It also prints some info about each killed process; if that is not desired, just remove the print statement.
You can also add filters, for example: do not kill root's processes...
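For instance, a sketch of such a filter (in ps aux output, $1 is the user):
ps aux | grep -v "%CPU" | awk '$1 != "root" && $3 > XXX { system("kill -9 " $2) }'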
Try putting myscript in front like this:
./myscript `ps aux|awk {'print $1,$2,$3,$11'}`

Count the number of occurrences in a string. Linux

Okay, so what I am trying to figure out is how to count the number of periods in a string and then cut everything up to that count minus 2. Meaning like this:
string="aaa.bbb.ccc.ddd.google.com"
number_of_periods="5"
number_of_periods=`expr $number_of_periods - 2`
string=`echo $string | cut -d"." -f$number_of_periods`
echo $string
result: "aaa.bbb.ccc.ddd"
The way I was thinking of doing it was sending the string to a text file and then just grepping for the number of occurrences, like this:
grep -c "." infile
The reason I don't want to do that is that I do not have permission to create another text file. It would also be simpler for the code I am trying to build right now.
EDIT
I don't think I made it clear, but I want the period count to be found dynamically, because the address I will be looking at changes as the script moves forward.
If you don't need to count the dots, but just want to remove the penultimate dot and everything after it, you can use Bash's built-in string manipulation.
${string%substring}
Deletes shortest match of $substring from back of $string.
Example:
$ string="aaa.bbb.ccc.ddd.google.com"
$ echo ${string%.*.*}
aaa.bbb.ccc.ddd
Nice and simple and no need for sed, awk or cut!
What about this:
echo "aaa.bbb.ccc.ddd.google.com"|awk 'BEGIN{FS=OFS="."}{NF=NF-2}1'
(further shortened thanks to a helpful comment from @steve)
gives:
aaa.bbb.ccc.ddd
The awk command:
awk 'BEGIN{FS=OFS="."}{NF=NF-2}1'
works by separating the input line into fields (FS) by ., then joining them as output (OFS) with ., but the number of fields (NF) has been reduced by 2. The final 1 in the command is responsible for the print.
This will reduce a given input line by eliminating the last two period separated items.
This approach is "shell-agnostic" :)
Perhaps this will help:
#!/bin/sh
input="aaa.bbb.ccc.ddd.google.com"
number_of_fields=$(echo $input | tr "." "\n" | wc -l)
interesting_fields=$(($number_of_fields-2))
echo $input | cut -d. -f-${interesting_fields}
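Against the sample input this prints aaa.bbb.ccc.ddd: the string has 6 period-separated fields, so cut -f-4 keeps the first 6-2 = 4 of them.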
grep -o "\." <<<"aaa.bbb.ccc.ddd.google.com" | wc -l
5
