Creating a tshark bash script to export objects - linux

Ok, so I've just recently started studying network security, and I had no knowledge of Linux before doing so. I'm trying to write a script that basically does what the Wireshark GUI does when you follow TCP streams and then export the objects. I have pretty much no coding background whatsoever, and I was wondering about the best way to structure this. Everything worked perfectly until I decided to add a function that tests the output against the original with md5sum. I can't get it to work.
function testScript {
    if [[ $test == "yes" ]]; then
        echo "Type original file path: "
        read ogfpath
        # note: this writes the checksums to a file literally named "print"
        md5sum "$fpath" "$ogfpath" > print
    else
        echo "Goodbye"
    fi
}

echo -n 'Type stream number and press ENTER: '
read stream
echo -n 'Type pcap path and press ENTER: '
read pcap
echo -n 'Type magic number and press ENTER: '
read mnum
echo -n 'Type new file path and press ENTER: '
read fpath
# follow the TCP stream as raw hex, strip newlines and spaces, keep everything
# after the magic number, re-prepend the magic number, then turn the hex back
# into binary
tshark -2 -q -z "follow,tcp,raw,$stream" -r "$pcap" | tr -d '\n ' | grep -oP "(?<=$mnum).+" | sed "s/^/$mnum/" | xxd -r -p > "$fpath"
echo -n 'Do you want to test the program (y/n)? :'
read test
testScript

Strictly speaking, $test is a global variable in bash, so it is visible inside the function; the fragile part is that the function silently depends on read test having run before testScript is called, and an unset or misspelled variable simply compares as an empty string.
One easy way to make that dependency explicit is to pass parameters to the function, which is very easy in bash. For example:
function test {
    if [ "$1" == "yes" ]; then
        echo "True!"
    else
        echo "False!"
    fi
}
test "yes"
In this example, the parameter passed to the function "test" is "yes", which is accessed inside the function through the variable $1. More parameters can be passed to the function and accessed sequentially as $2, $3, and so on.
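For instance, a hypothetical two-parameter function (the name greet and its strings are made up purely for illustration):

greet() {
    echo "$1, $2!"
}
greet "Hello" "world"    # prints: Hello, world!

In your case, your function would have to be called like this: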
testScript "$test"
(Quoting "$test" matters here: if the variable were empty and unquoted, no argument would be passed at all and $1 would be unset.)
And the if statement inside the function would have to look like this:
if [[ $1 == "yes" ]]; then
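Putting it all together, the end of your script would look something like this (a sketch: note that your prompt offers y/n while the comparison expects the literal string "yes", so it is worth accepting both, and here the checksums are printed to the terminal rather than to a file named print):

function testScript {
    # $1 is the answer passed in by the caller
    if [[ $1 == "yes" || $1 == "y" ]]; then
        echo "Type original file path: "
        read ogfpath
        md5sum "$fpath" "$ogfpath"
    else
        echo "Goodbye"
    fi
}

echo -n 'Do you want to test the program (y/n)? :'
read test
testScript "$test"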

Related

How to use error validation in Bash for checking an entry in a file

#!/bin/bash
echo 'Please enter the name of the species you are looking for: '
read speciesName
grep "$speciesName" speciesDetails.txt | awk '{print $0}'
echo
echo 'Would you like to search for another species? Press y to search or n to go back
to the main menu: '
read answer
case $answer in
    [yY] | [yY][eE][sS] )
        ./searchSpecies.sh;;
    [nN] | [nN][oO] )
        ./speciesMenu.sh;;
    *) echo exit;;
esac
If there is no entry of that species name in the file how do I give the user an error to say not found?
The answer to your immediate question is to examine the exit code from grep; but you should probably also refactor the loop:
#!/bin/bash
while true; do
    read -p 'Please enter the name of the species you are looking for: ' -r speciesName
    grep -e "$speciesName" speciesDetails.txt || echo "$speciesName: not found" >&2
    read -p 'Would you like to search for another species? Press n to quit: ' -r answer
    case $answer in
        [nN] | [nN][oO] )
            break;;
    esac
done
A better design altogether is probably to make the search term a command-line argument. This makes the script easier to use from other scripts, and the user can use the shell's facilities for history, completion, etc to run it as many times as they like, and easily fix e.g. typos by recalling the previous invocation and editing it.
#!/bin/bash
grep -e "$1" speciesDetails.txt || echo "$1: not found" >&2
The short-circuit one || two corresponds to the longhand
if one; then
    : nothing
else
    two
fi
If you want to search for static strings, not regular expressions, maybe add -F to the grep options.
If you just need to check existence, run grep directly in a conditional:
if grep speciesName4 speciesDetails.txt; then
    echo "exists"
else
    echo "does not exist"
fi
Use $? to check the exit code of the command if you also need to capture its output; set -o pipefail makes a failure anywhere in a pipeline show up in that exit code:
$ set -o pipefail
$ echo "speciesName1" > speciesDetails.txt
$ echo "speciesName2" >> speciesDetails.txt
$ echo "speciesName3" >> speciesDetails.txt
$ l_result=$(grep speciesName3 speciesDetails.txt); l_exit_code=$?
$ echo $l_exit_code
0
$ l_result=$(grep speciesName4 speciesDetails.txt); l_exit_code=$?
$ echo $l_exit_code
1
Update:
Capturing the output into a variable is not an antipattern if you need to use the command's output later.

How to write a function that checks the result of a bash command

I'm trying to write a simple function to make debugging my script easier and my code simpler (still stuck after three hours).
I want to pass three arguments to this function:
A command
A success string
And an error string
The function is supposed to execute the command and print the proper string depending on whether it succeeded; by successful I mean the command prints something to its output.
Here is what I've tried (on CentOS 7):
#!/bin/bash
CMD=$(yum list installed | egrep "yum.utils.\w+" | cut -d " " -f1)
SUCCESS="YES"
ERROR="NO"
foo() {
    if ["$1" != ""]; then
        echo -e "$2"
    else
        echo -e "$3"
    fi
}
foo $CMD $SUCCESS $ERROR
Unfortunately, I'm encountering two problems:
Firstly, when $CMD is empty, the first parameter received by the function is $SUCCESS instead of an empty string (the behaviour I want).
Secondly, I want to suppress the console output (> /dev/null 2>&1 ???).
Do you think it's possible? Do you have any idea how to do it?
Otherwise, is there an easier way with the eval command?
Thanks for reading and have a nice day,
Valentin M.
------------------ Correction ------------------
#!/bin/bash
CMD=$(yum list installed | grep -E "yum.utils.\w+" | cut -d " " -f1)
SUCCESS="YES"
ERROR="NO"
foo() {
    if [ "$1" != "" ]; then
        echo -e "$2"
    else
        echo -e "$3"
    fi
}
foo "$CMD" "$SUCCESS" "$ERROR"
I found a similar topic here: Stack Overflow: How to write a Bash function that can generically test the output of executed commands?
Unfortunately, I'm encountering two problems:
Firstly, when $CMD is empty, the first parameter received by the function is $SUCCESS instead of an empty string (the behaviour I want).
If you follow the suggestion in William Pursell's comment above, this problem is solved, since an empty first parameter is then passed.
Secondly, I want to remove the console output (> /dev/null 2>&1 ???).
I assume that by console output you mean the output to STDERR, since STDOUT is captured into CMD. Your > /dev/null 2>&1 is unsuitable, as it would redirect STDOUT to /dev/null as well; redirect just STDERR:
CMD=$(yum list installed 2>/dev/null | egrep "yum.utils.\w+" | cut -d " " -f1)
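For the more general helper the question actually asks for (run a command, print one string if it produced output and another if not), one option is to pass the command as a single string and eval it inside the function. A sketch (the name check_output is made up here, and the usual caveat applies that eval must only be fed trusted input):

check_output() {
    local output
    # run the command, discard its STDERR, capture its STDOUT
    output=$(eval "$1" 2>/dev/null)
    if [ -n "$output" ]; then
        echo "$2"
    else
        echo "$3"
    fi
}
check_output 'yum list installed | grep -E "yum.utils.\w+" | cut -d " " -f1' "YES" "NO"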

Calling a function that decodes in base64 in bash

#!/bin/bash
# if there are no args supplied exit with 1
if [ "$#" -eq 0 ]; then
    echo "Unfortunately you have not passed any parameter"
    exit 1
fi
# loop over each argument
for arg in "$@"
do
    if [ -f arg ]; then
        echo "$arg is a file."
        # iterates over the files stated in arguments and reads them
        cat $arg | while read line;
        do
            # should access only first line of the file
            if [ head -n 1 "$arg" ]; then
                process line
                echo "Script has ran successfully!"
                exit 0
            # should access only last line of the file
            elif [ tail -n 1 "$arg" ]; then
                process line
                echo "Script has ran successfully!"
                exit 0
            # if it accesses any other line of the file
            else
                echo "We only process the first and the last line of the file."
            fi
        done
    else
        exit 2
    fi
done

# function to process the passed string and decode it in base64
process() {
    string_to_decode = "$1"
    echo "$string_to_decode = " | base64 --decode
}
Basically, what I want this script to do is loop over the arguments passed to the script and, if an argument is a file, call the function that decodes base64, but only on the first and the last line of that file. Unfortunately, when I run it, even with a valid file, it does nothing. I think it might be hitting problems with the if [ head -n 1 "$arg" ]; then part of the code. Any ideas?
EDIT: I realised I was actually just extracting the first line over and over again without really comparing it to anything. So I tried changing the if conditional to this:
first_line = $(head -n 1 "$arg")
last_line = $(tail -n 1 "$arg")
if [ first_line == line ]; then
    process line
    echo "Script has ran successfully!"
    exit 0
# should access only last line of the file
elif [ last_line == line ]; then
    process line
    echo "Script has ran successfully!"
    exit 0
My goal is to iterate through files; for example, one looks like this:
MTAxLmdvdi51awo=
MTBkb3duaW5nc3RyZWV0Lmdvdi51awo=
MXZhbGUuZ292LnVrCg==
And to decode the first and the last line of each file.
To decode the first and last line of each file given to your script, use this:
#! /bin/bash
for file in "$@"; do
    [ -f "$file" ] || exit 2
    head -n1 "$file" | base64 --decode
    tail -n1 "$file" | base64 --decode
done
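Assuming the script above is saved as decode.sh and the sample file as urls.txt (both names hypothetical), the output is the decoded first and last lines:

$ ./decode.sh urls.txt
101.gov.uk
1vale.gov.uk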
Yeah, as the others have already said, the true goal of the script isn't really clear. That said, I imagine every variation of what you may have wanted to do would be covered by something like:
#!/bin/bash

process() {
    encoded="$1";
    decoded="$( echo "${encoded}" | base64 --decode )";
    echo " Value ${encoded} was decoded into ${decoded}";
}

(( $# )) || {
    echo "Unfortunately you have not passed any parameter";
    exit 1;
};

while (( $# )) ; do
    arg="$1"; shift;
    if [[ -f "${arg}" ]] ; then
        echo "${arg} is a file.";
    else
        exit 2;
    fi;

    content_of_first_line="$( head -n 1 "${arg}" )";
    echo "Content of first line: ${content_of_first_line}";
    process "${content_of_first_line}";

    content_of_last_line="$( tail -n 1 "${arg}" )";
    echo "Content of last line: ${content_of_last_line}";
    process "${content_of_last_line}";

    line=""; linenumber=0;
    while IFS="" read -r line; do
        (( linenumber++ ));
        echo "Iterating over all lines. Line ${linenumber}: ${line}";
        process "${line}";
    done < "${arg}";
done;
Some additions you may find useful:
If the script is invoked with multiple filenames, let's say four different filenames, and the second file does not exist (but the others do), do you really want the script to process the first file, notice that the second file doesn't exist, and exit at that point, without processing the (potentially valid) third and fourth files?
replacing the line:
exit 2;
with
continue;
would make it skip any invalid filenames, and still process valid ones that come after.
Also, within your process function, directly after the line:
decoded="$( echo "${encoded}" | base64 --decode )";
you could check whether the decoding was successful before echoing whatever garbage may result if the line wasn't valid base64:
if [[ "$?" -eq 0 ]] ; then
echo " Value ${encoded} was decoded into ${decoded}";
else
echo " Garbage.";
fi;
--
To answer your followup question about the IFS/read-construct, it is a mixture of a few components:
read -r line
reads a single line from the input (-r tells it not to do any funky backslash escaping magic).
while ... ; do ... done ;
This while loop surrounds the read statement, so that we keep repeating the process of reading one line, until we run out.
< "${arg}";
This feeds the content of filename $arg into the entire block of code as input (so this becomes the source that the read statement reads from)
IFS=""
This tells the read statement to use an empty value instead of the real built-in IFS value (the internal field separator). It's generally a good idea to do this for every read statement, unless you have a use case that requires splitting the line into multiple fields.
If instead of
IFS="" read -r line
you were to use
IFS=":" read -r username _ uid gid _ homedir shell
and read from /etc/passwd which has lines such as:
root:x:0:0:root:/root:/bin/bash
apache:x:48:48:Apache:/usr/share/httpd:/sbin/nologin
then that IFS value would allow it to load those values into the right variables (in other words, it would split on ":")
The default value for IFS is inherited from your shell, and it usually contains the space, the TAB character, and the newline. When you read into one single variable ($line, in your case), word splitting doesn't really come into play (though leading and trailing IFS whitespace is still stripped); but if you ever change the read statement and add another variable, word splitting starts taking effect, and the lack of a local IFS= value will make the exact same script behave very differently in different situations. As such, it tends to be a good habit to control it at all times.
The same goes for quoting your variables, like "$arg" or "${arg}", instead of $arg. It doesn't matter while the value is something like hello; but once the value starts containing spaces, suddenly all sorts of things can act differently; surprises are never a good thing.
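A quick illustration (the filename is hypothetical):

arg="a b c.txt"
ls $arg       # word splitting: ls receives three arguments: a, b, c.txt
ls "$arg"     # ls receives the single argument: a b c.txt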

convert comma separated command line arguments to json in shell script

I'm using the script below to generate JSON data from comma-separated values to feed Zabbix, but I'm getting one extra comma at the end of the list. Please help me get rid of the trailing comma.
#/bin/bash
IFS=':, ' read -r -a array <<< "$1"
idx=0
echo {\"data\":[
while [ -n "${array[$idx]}" ]; do
    echo -n \{\"{#R_IP}\":\""${array[$idx]}"\"}
    let idx=$idx+1
    [ -n "$array[idx]}" ] && echo "," || echo
done
echo ]}
exit
input
./test.sh embimsrv.exe,emcms.exe,emcmsg.exe,emforecastsrv.exe,emgtw.exe,emguisrv.exe,emmaintag.exe,emselfservicesrv.exe,Naming_Service.exe,p_ctmce.exe,p_ctmcs.exe,p_ctmrt.exe,p_ctmtr.exe,p_ctmwd.exe
output
{"data":[
{"{#R_IP}":"embimsrv.exe"},
{"{#R_IP}":"emcms.exe"},
{"{#R_IP}":"emcmsg.exe"},
{"{#R_IP}":"emforecastsrv.exe"},
{"{#R_IP}":"emgtw.exe"},
{"{#R_IP}":"emguisrv.exe"},
{"{#R_IP}":"emmaintag.exe"},
{"{#R_IP}":"emselfservicesrv.exe"},
{"{#R_IP}":"Naming_Service.exe"},
{"{#R_IP}":"p_ctmce.exe"},
{"{#R_IP}":"p_ctmcs.exe"},
{"{#R_IP}":"p_ctmrt.exe"},
{"{#R_IP}":"p_ctmtr.exe"},
{"{#R_IP}":"p_ctmwd.exe"},
]}
Use a proper tool, like jq, to generate your JSON.
printf '%s' "$1" | jq -R 'split(",") | map({"{#R_IP}": .}) | {data: .}'
Manually piecing together JSON like this is pretty brittle. But here goes. A very common trick is to prefix each string except the first with a comma.
#!/bin/bash
IFS=':, ' read -r -a array <<< "$1"
prefix=''
printf '%s' '{"data":['
for item in "${array[@]}"; do
    printf '%s%s' "$prefix" "{\"{#R_IP}\":\"$item\"}"
    prefix=','
done
printf '%s\n' ']}'
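With this version, the same kind of input now produces valid JSON with no trailing comma, all on one line (printf emits no newline until the final one):

$ ./test.sh embimsrv.exe,emcms.exe
{"data":[{"{#R_IP}":"embimsrv.exe"},{"{#R_IP}":"emcms.exe"}]}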
Notice also how no explicit exit is required at the end of a script. The shell stops executing the script and terminates when it reaches the end of the script.
Also, the shebang needs to start with exactly the two single-byte characters #!.
Finally, a much better overall design is probably to not require the arguments to be comma-separated; but I won't try to fix that here.

Get the executed command, quoted params, after executing `"${argv[@]}"`

This function works:
source foo.bash && foo -n "a b c.txt"
The problem is, no matter what I've tried, I couldn't get the last line echo "$CMD" (or echo $CMD) to generate exactly this output:
cat -n "a b c.txt"
How to achieve that?
# foo.bash
function foo() {
    local argv=("$@");
    local OUT=`cat "${argv[@]}"`
    local CMD=`echo cat "${argv[@]}"`
    echo "--------------------------"
    echo "$OUT"
    echo "--------------------------"
    echo "$CMD"
}
The output is instead:
cat -n a b c.txt
With the command foo -n \"a b c.txt\", the display of the command does work, but then the execution via the backticks gives errors.
The file "a b c.txt" is a valid, small, text file.
You need to escape the quotes inside the assignment:
local CMD="cat \"${argv[@]}\""
Also, echo is not needed to concatenate strings.
There you go: with the help of "number of tokens in bash variable" I've come up with the right solution.
I'd almost forgotten WHY we actually need quoting for an argument: it's because it contains multiple words!
function foo() {
    local argv=( "$@" );
    local OUT=`cat "${argv[@]}"`
    echo "--------------------------"
    echo "$OUT"
    echo "--------------------------"
    local CMD="cat"
    for word in "${argv[@]}"; do
        # strip every non-space character; what remains is one space per
        # space in the word, plus the one appended here
        words="${word//[^\ ]} "
        if [[ ${#words} > 1 ]]; then
            # the word contains spaces, so re-quote it
            local CMD="$CMD \"${word}\""
        else
            local CMD="$CMD $word"
        fi
    done
    echo "$CMD"
}
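For example, with a hypothetical three-line file named "a b c.txt", the output would be:

$ source foo.bash && foo -n "a b c.txt"
--------------------------
     1  first line
     2  second line
     3  third line
--------------------------
cat -n "a b c.txt"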
Hope it helps someone.
