filtering output of who with grep and cut - linux

I have this exercise:
Create a bash script that checks whether the user passed as a parameter is
connected, and if so, displays when they connected. Hints: use the who command, the grep filter and the
cut command.
But I'm having some trouble solving it.
#!/bin/bash
who>who.txt;
then
grep $1 who.txt
for a in who.txt
do
echo "$a"
done
else
echo "$1 isnt connected"
fi
So first of all I want to keep only the line where the user appears, write it to a .txt, and then loop over the who output to cut out just the date. The problem is that I don't know how to cut here, because the fields are separated by multiple spaces.
So I'm really stuck and I don't see how to approach this. I'm a beginner with bash.

If I understand correctly, you simply want to check whether a user is logged in; that is what the users command is for. If you want to wrap it in a short script, you could do something like the following:
#!/bin/bash
[ -z "$1" ] && {  ## validate 1 argument given on command line
    printf "error: insufficient input, usage: %s username.\n" "${0##*/}" >&2
    exit 1
}
## check if that argument is among the logged-in users
## (grep -q already suppresses output, so no redirection or $() wrapper is needed)
if users | grep -q "$1"; then
    printf " user: %s is logged in.\n" "$1"
else
    printf " user: %s is NOT logged in.\n" "$1"
fi
Example/Use
$ bash chkuser.sh dog
user: dog is NOT logged in.
$ bash chkuser.sh david
user: david is logged in.
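One caveat: grep -q "$1" will also match substrings, so searching for dav would report david as logged in. A minimal sketch using grep's -w (match whole words) option avoids that, assuming the username contains only ordinary word characters:
## sketch: require the username to match as a whole word in the users output
if users | grep -qw "$1"; then
    printf " user: %s is logged in.\n" "$1"
else
    printf " user: %s is NOT logged in.\n" "$1"
fi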

cut is a rather awkward tool for parsing who's output, unless you use fixed column positions. In delimiter mode, with -d ' ', each space makes a separate empty field. It's not like awk where fields are separated by a run of spaces.
who(1) output looks like this (and GNU who has no option to cut it down to just the username/time):
$ who
peter tty1 2015-11-13 18:53
john pts/13 2015-11-12 08:44 (10.0.0.1)
john pts/14 2015-11-12 08:44 (10.0.0.1)
john pts/15 2015-11-12 08:44 (10.0.0.1)
john pts/16 2015-11-12 08:44 (10.0.0.1)
peter pts/9 2015-11-14 16:09 (:0)
I didn't check what happens with very long usernames, whether they're truncated or whether they shift the rest of the line over. Parsing it with awk '{print $3, $4}' would feel much safer, since it would still work regardless of exact column position.
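For comparison, a minimal awk sketch (shown only to illustrate the point; the exercise itself asks for cut):
who | awk -v user="$1" '$1 == user { print $3, $4 }'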
But since you need to use cut, let's assume that those exact column positions (time starting from 23 and running until 38) are constant across all systems where we want this script to work, and all terminal widths. (who doesn't appear to vary its output for $COLUMNS or the tty-driver column width (the columns field in stty -a output)).
Putting all that together:
#!/bin/sh
who | grep "^$1 " | cut -c 23-38
The regex on the grep command line will only match at the beginning of the line, and has to match a space following the username (to avoid substring matches). Then those lines that match are filtered through cut, to extract only the columns containing the timestamp.
With an empty command-line arg, it will print the login time for every logged-in user. If the pattern doesn't match anything, the output will be empty. To explicitly detect this and print something else, capture the pipeline output with var=$(pipeline), and test whether it's the empty string or not.
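For example, a sketch of that empty-string check, reusing the same pipeline (and covering the original exercise's "isn't connected" case):
login_time=$(who | grep "^$1 " | cut -c 23-38)
if [ -n "$login_time" ]; then
    printf '%s\n' "$login_time"
else
    printf '%s is not connected\n' "$1"
fi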
This will print a time for every separate login from the same user. You could use grep's count-limit option, -m 1 (see the man page), to stop after one match, but that might not be the most recent time. Sorting the output and taking one end of it is more predictable.
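Since the timestamps are in YYYY-MM-DD HH:MM form, a plain lexical sort already orders them chronologically; a sketch that keeps only the most recent login:
who | grep "^$1 " | cut -c 23-38 | sort | tail -n 1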
If you don't have to write a loop in the shell, don't. It's much better to write a pipeline that makes one pass over the data. The shell itself is slow, but as long as it doesn't have to parse every line of what you're dealing with, that doesn't matter.
Also note how I quoted the expansion of $1 with double quotes, to avoid the shell applying word splitting and glob expansion to it.
For more shell stuff, see the Wooledge Bash FAQ and guide. That's a good place to get started learning idioms that don't suck (i.e. don't break when you have filenames and directories with spaces in them, or filenames containing a ?, or lines with trailing spaces that you want to not munge...).

Related

Linux shell scripting: How to store output from terminal in integers (but only numbers)?

I'm new to shell scripting and here is my problem:
I want to store the PIDs from the output of airmon-ng check in some variables (for example $1, $2, $3) so that I can execute kill $1 $2 $3.
Here is sample output of airmon-ng check:
Found 3 processes that could cause trouble.
If airodump-ng, aireplay-ng or airtun-ng stops working after
a short period of time, you may want to kill (some of) them!
PID Name
707 NetworkManager
786 wpa_supplicant
820 dhclient
I want to grab numbers 707, 786, 820.
I tried using set `airmon-ng check` and then a for loop:
set `airmon-ng check`
n=$#
for (( i=0; i<=n; i++ ))
do
echo $i
done
It outputs 1, 2, 3, ... 36, not the words or numbers,
so I couldn't figure out how I should do it.
airmon-ng check | egrep -o '\b[0-9]+\b' | xargs kill
egrep is grep with extended regular expressions (like grep -E), -o says to extract only the matching parts, \b matches word boundaries so you don't get any numbers accidentally occurring in process names or something, [0-9]+ matches one or more decimal digits, and xargs kill passes all the matches as arguments to the kill command.
Note that parsing output intended to be read by humans might not always be a good idea. Also, just killing all those processes doesn't sound too smart either, but proper usage of aircrack-ng is beyond this question.
You can get a list of the PIDs separated by spaces, e.g. like this (everything from the 1st column after the "PID" header):
l=`airmon-ng check | awk 'BEGIN { p=0 } { if (p) { print $1" "; } if ($1=="PID") { p=1 } }' | tr '\n' ' '`
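With the sample output above, l would then contain the three PIDs separated by spaces, so you can hand it straight to kill; a sketch (l is deliberately left unquoted so each PID becomes a separate argument):
echo "$l"   # 707 786 820
kill $l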
Why not use grep?
myvar=$(airmon-ng check | grep '[0-9]\{3,6\}')
This assumes a PID of 3 to 6 digits, and will grab any run of 3 to 6 digits from the airmon-ng output, so it may not work well if the output includes other numbers of similar length.
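The matches end up newline-separated inside the variable; a sketch of killing them in one go (again leaving the expansion unquoted so each PID is passed as its own argument):
kill $myvar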
I would use awk for this and store the output in an array
pids=( $(airmon-ng check | awk '/^[[:blank:]]+[[:digit:]]+[[:blank:]]+/{print $1}') )
#'pids' is an array
kill "${pids[#]}" #killing all the processes thus found.

Grep words containing 'n' number of letters given user input

I am trying to create a script (bash) that will take input (an integer) from a user and grep all words containing that number of letters. I am okay with how grep basically works, but I am unsure how to use the input from the user to determine the output.
Here is what I started:
#!/bin/sh
echo " Content type: text/html"
echo
x=`expr $1`
I'm pretty sure the grep command would be as simple as grep ^...(integer from user)...$. I just don't know how to use the user input. Thanks!
EDIT: I should have mentioned that "user input" would be entered as an argument (./script 6)
Run this script as ./script 6 and it will select all 6-letter words from the file text and display them:
#!/bin/sh
grep -Eo "\<[[:alpha:]]{$1}\>" text
Key parts of the regex:
\< signifies the start of a word.
[[:alpha:]]{$1} signifies $1 alphabetical characters. If you want an apostrophe, such as in don't, to be considered a valid word character, then add it inside the outer square brackets like this: [[:alpha:]']{$1}
\> signifies the end of a word.
There are some limitations to grep's ability to understand human language. For example, in the string don't, it considers the apostrophe to be a word boundary.
Example
I ran this script against the text of the question:
$ ./script.sh 9
basically
determine
mentioned
$ ./script.sh 10
containing
You could use read to accept input from the user, but since the input here is given as arguments, positional parameters work too:
#!/bin/sh
echo "$1" | grep ".\{$2\}"
Now if you call the script as ./script hello 5,
the positional parameter $1 will be hello and $2 will be 5.
Here .\{m\} matches any character repeated exactly m times, so the line matches when it is at least m characters long.
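If you did want to prompt interactively instead (the read approach mentioned above), a minimal sketch along the same lines:
#!/bin/sh
printf "String: "
read -r s
printf "Minimum length: "
read -r m
echo "$s" | grep ".\{$m\}"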

Line from bash command output stored in variable as string

I'm trying to find a solution to a problem analogous to this one:
#command_A
A_output_Line_1
A_output_Line_2
A_output_Line_3
#command_B
B_output_Line_1
B_output_Line_2
Now I need to compare A_output_Line_2 and B_output_Line_1 and echo "Correct" if they are equal and "Not Correct" otherwise.
I guess the easiest way to do this is to copy a line of output into some variable and then, after executing the two commands, simply compare the variables and echo something.
I need to implement this in a bash script, and any information on how to get a certain line of output stored in a variable would help me put the pieces together.
Also, it would be cool if anyone could tell me not only how to copy/store a line, but also just a word or a sequence like: line 1, bytes 4-12, stored as a string in a variable.
I am not a complete beginner, but nowhere near an advanced Linux bash user. Thanks in advance for any help, and sorry for my bad English!
An easier way might be to use diff, no?
Something like:
command_A > command_A.output
command_B > command_B.output
diff command_A.output command_B.output
This will work for comparing multiple strings.
But, since you want to know about single lines (and words in the lines) here are some pointers:
# first line of output of command_A
command_A | head -n 1
The -n 1 option says only to use the first line (default is 10 I think)
# second line of output of command_A
command_A | head -n 2 | tail -n 1
that will take the first two lines of the output of command_A and then the last of those two lines. Happy times :)
You can now store this information in a variable:
export output_A=`command_A | head -n 2 | tail -n 1`
export output_B=`command_B | head -n 1`
And then compare it:
if [ "$output_A" == "$output_B" ]; then echo 'Correct'; else echo 'Not Correct'; fi
To just get parts of a string, try looking into cut or (for more powerful stuff) sed and awk.
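For instance, a sketch of the "line 1, bytes 4-12" case mentioned in the question (command_A stands in for whatever command you're running):
snippet=$(command_A | head -n 1 | cut -c 4-12)
echo "$snippet"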
Also, just learning a good general-purpose scripting language like Python or Ruby (even Perl) can go a long way with this kind of problem.
Use the IFS (internal field separator) to separate on newlines and store the outputs in an array.
#!/bin/bash
IFS=$'\n'  # split command output on newlines only
array_a=( $(./a.sh) )
array_b=( $(./b.sh) )
if [ "${array_a[1]}" = "${array_b[0]}" ]; then
echo "CORRECT"
else
echo "INCORRECT"
fi

Grep filtering output from a process after it has already started?

Normally when one wants to look at specific output lines from running something, one can do something like:
./a.out | grep IHaveThisString
But what if IHaveThisString is something that changes every time, so you first need to run it, watch the output to catch what IHaveThisString is on that particular run, and then grep for it? I could just dump to a file and grep later, but is it possible to do something like backgrounding it and then bringing it back to the foreground, now piped to some grep? Something akin to:
./a.out
Ctrl-Z
fg | grep NowIKnowThisString
just wondering..
No, it is only in your screen buffer if you didn't save it in some other way.
Short form: You can do this, but you need to know that you need to do it ahead-of-time; it's not something that can be put into place interactively after-the-fact.
Write your script to determine what the string is. We'd need a more detailed example of the output format to give a better example of usage, but here's one for the trivial case where the entire first line is the filter target:
run_my_command | { read string_to_filter_for; fgrep -e "$string_to_filter_for"; }
Replace the read string_to_filter_for with as many commands as necessary to read enough input to determine what the target string is; this could be a loop if necessary.
For instance, let's say that the output contains the following:
Session id: foobar
and thereafter, you want to grep for lines containing foobar.
...then you can pipe through the following script:
re='Session id: (.*)'
while read; do
  if [[ $REPLY =~ $re ]] ; then
    target=${BASH_REMATCH[1]}
    break
  else
    # if you want to print the preamble; leave this out otherwise
    printf '%s\n' "$REPLY"
  fi
done
[[ $target ]] && grep -F -e "$target"
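Usage would look something like this (a sketch, assuming the loop above is saved as filter.sh):
./a.out | bash filter.sh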
If you want to manually specify the filter target, this can be done by having the loop check for a file being created with filter contents, and using that when starting up grep afterwards.
What you need is a little strange, but you can do it this way:
go into a script session first;
then use the shell as usual;
then start and interrupt your program;
then run grep over the typescript file.
Example:
$ script
$ ./a.out
Ctrl-Z
$ fg
$ grep NowIKnowThisString typescript
You could use a stream editor such as sed instead of grep. Here's an example of what I mean:
$ cat list
Name to look for: Mike
Dora 1
John 2
Mike 3
Helen 4
Here we find the name to look for in the first line and want to grep for it. Now piping the command to sed:
$ cat list | sed -ne '1{s/Name to look for: //;h}' \
> -e ':r;n;G;/^.*\(.\+\).*\n\1$/P;s/\n.*//;br'
Mike 3
Note: sed itself can take a file as a parameter, but since in your case the data comes from a running command rather than a text file, this is how you'd use it (reading from a pipe).
Of course, you'd need to modify the command for your case.

using grep in an if statement to get all items, ignoring spaces

This is part of a homework problem in a beginning bash class.
I need to bring in the passwd file, which I have done with my passfile variable; then I need to be able to extract certain pieces of it and display the different fields. When I manually grep from the CLI using the statement below, it works fine. I want all the fields, and I get them all.
grep 1000 passfile | cut -c1-
However, when I do this from the script it stops, breaks, or starts over at the first blank space in the user's full name. John D. Doe will return 3 lines when I only want one. I see this by echoing the value of i and the following.
for i in `grep 1000 ${passfile} | cut -c1-`
user=`echo $1 | cut -d : -f1`
userID=`echo $1 | cut -d : -f3`
For example, if the line reads
jdoe:x:123:1000:John D Doe:/home/jdoe:/bin/bash
I get the following:
i = jdoe:x:123:1000:John
which gives me:
User is jdoe, UID is 509
but then in the next line i starts at R.
i = R. so User is R., UID is R.
next line
i = Johnson:/home/jjohnson:/bin/bash
which returns User is Johnson, UID is /bin/bash
The passwd file holds many users, so I need to use the for loop to process them all. I think if I can get it to ignore the space I can get it. But not knowing a whole lot about Linux, I'm not sure if I'm even going down the right path. Thanks in advance for guidance/help.
The real problem is that the for loop splits the grep output on whitespace, which is why the full name breaks across iterations. Also note that cut's default field delimiter is a tab, not a colon, so if you continue to use it, specify the separator with -d.
You probably want to use IFS=: and a read statement in a while loop to get the values in:
while IFS=: read -r user password uid gid comment home shell
do
    ...whatever...
done < /etc/passwd
Or you can pipe the output of grep into the while loop.
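For example (a sketch matching the 1000 filter from the question; grep ':1000:' keeps the colons so the match stays on a field boundary rather than any substring):
grep ':1000:' /etc/passwd | while IFS=: read -r user password uid gid comment home shell
do
    echo "User is $user, UID is $uid"
done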
Are you allowed to use any external program? If so, I'd recommend awk
UID=1000
awkcmd="\$4==\"$UID\" {print \"user:\",\$1}"
cat $PASSWORDFILE | awk -F ":" "$awkcmd"
When parsing structured files with specific field delimiters, such as the passwd file, the appropriate tool for the job is awk.
UID=1000
awk -vuid="$UID" '$4==uid{print "user: "$1}' /etc/passwd
You do not have to use grep or cut or anything else. (Of course, you can also use pure bash while read loops, as demonstrated above.)
