I would like to create a Perl or bash script that reads keyboard input into a variable, performs a fixed-string grep for it recursively within the current directory (which is full of Snort logs), then automatically runs tcpdump on each matched file, greps its output, and prints the matching lines to the terminal. Does anyone have a good idea of how this should work?
Here is an example of the methodology I want from the script:
step 1: Read keyboard input and assign it to variable named string.
step 2 command: grep -Fr "$string"
step 2 output: snort.log.1470609906 matches
step 3 command: tcpdump -r snort.log.1470609906 | grep -F "$string" -C10
step 3 output:
Snort log
Here's some bash code that does that:
s="google.com"
grep -Frl "$s" . |
while IFS= read -r x; do
    tcpdump -r "$x" | grep -F "$s" -C10
done
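If you also want step 1 from the question, reading the string from the keyboard instead of hard-coding it, a minimal sketch using bash's built-in read:
read -rp 'Search string: ' s
-r keeps backslashes literal and -p prints the prompt; the rest of the loop above stays the same.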
I don't know about Perl, but you can do it easily enough in plain shell:
str="google.com"
find . -type f -name 'snort.log.*' -exec grep -FlZ "$str" {} + |
    xargs -0 -n1 sh -c 'tcpdump -r "$2" | grep -F "$1" -C10' sh "$str"
Passing the filename and the search string to sh -c as positional parameters, instead of splicing them into the script text, keeps filenames with quotes or spaces from breaking the command.
I am trying to make a script so that I can run a command with a variable number of arguments, e.g. myfind one two three, and it finds all files in the folder, then applies grep -i one, then grep -i two, then grep -i three, and so on.
I tried following code:
#!/bin/bash
FULLARG="find . | "
for arg in "$@"
do
    FULLARG=$FULLARG" grep -i "$arg" | "
done
echo $FULLARG
$FULLARG
However, although the command string is built correctly (as the echo shows), running it fails with the following error:
$ ./myfind one two three
find . | grep -i one | grep -i two | grep -i three |
find: unknown predicate `-i'
Where is the problem and how can it be solved?
The problem is that when the shell expands $FULLARG, it only does word splitting; the | characters are not reinterpreted as pipes, so find receives |, grep, -i and the rest as literal arguments (hence "unknown predicate `-i'"). You could instead store the result of find . and keep filtering it through grep for each command-line argument:
#!/bin/bash
result="$(find .)"
for arg in "$@"
do
    result=$(echo "$result" | grep -i "${arg}")
done
echo "$result"
I'm restoring a number of archives with dates within their names, something along the lines of:
user-2018.12.20.tar.xz
user-2019.01.10.tar.xz
user-2019.02.25.tar.xz
user-2019.04.19.tar.xz
...
I want to set each file's modification date to match the date in their filename by piping the filenames to touch via xargs and using replace-str to set the dates.
touch -m -t will take a datetime in the format [CCYYMMDDhhmm], but I'm having trouble substituting inline:
find . -name "*.xz" | xargs -I {} touch -m -t $(sed -e 's/\.tar\.xz//g; s/user-//g; s/\.//g; s/\///g; s/$/0000/g' {}) {}
Returns touch: invalid date format ‘./user-2018.03.22.tar.xz’, even though this:
find . -name "*.xz" | sed -e 's/\.tar\.xz//g; s/user-//g; s/\.//g; s/\///g; s/$/0000/g'
Returns properly-formatted dates, for example 201812200000. Am I misusing command substitution in my replace string somehow?
EDIT: Yes, a simple script could do this no problem. But the question remains...
You don't need find, sed, xargs, or any other external tools; just use the shell's built-in regex support to get the timestamp from the filename:
for file in *.tar.xz; do
    [ -f "$file" ] || continue
    if [[ $file =~ ^user-([[:digit:]]+)\.([[:digit:]]+)\.([[:digit:]]+)\.tar\.xz$ ]]; then
        dateStr="${BASH_REMATCH[1]}${BASH_REMATCH[2]}${BASH_REMATCH[3]}0000"
        touch -m -t "$dateStr" "$file"
    fi
done
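For user-2018.12.20.tar.xz, for example, dateStr comes out as 201812200000, exactly the [CC]YYMMDDhhmm format that touch -m -t expects.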
The problem is that the command substitution is evaluated once, by the shell, before xargs even runs, not once per argument. You would need to spawn a shell for that:
find . -name "*.xz" \
| xargs -I {} bash -c 'touch -m --date "$(sed -e "s/\.tar\.xz//;s/user-//g; s/\.//g; s/\///g;" <<< "$1")" "$1"' -- {}
Note: xargs is not needed because you can use the -exec option of find:
find . -name "*.xz" -exec bash -c 'touch -m --date "$(sed -e "s/\.tar\.xz//;s/user-//g; s/\.//g; s/\///g;" <<< "$1")" "$1"' -- {} \;
PS: A small for loop would be more readable:
for file in user-*.tar.xz ; do
# remove prefix and suffix
date=${file#user-}
date=${date%.tar.xz}
# replace dots by /
date=${date//./\/}
touch -m --date "${date}" "${file}"
done
This might work for you (GNU parallel):
parallel --dryrun touch -m --date '{= s/[^0-9]//g =}' {} ::: *.xz
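For user-2018.12.20.tar.xz, for example, the dry run should print something like touch -m --date 20181220 user-2018.12.20.tar.xz, since the {= =} Perl expression strips every non-digit from the filename. Note that --date, unlike -t, is happy with a plain YYYYMMDD date.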
When happy that the commands are correct, then remove the --dryrun option.
Alternative:
parallel touch -m --date '{= s/user-//;s/\.tar\.xz//;s/\.//g =}' {} ::: *.xz
I have a bunch of files under a directory. How can I check all of them and determine whether each one is a Perl script or not? (They don't have .pl in the filename.)
If you cannot rely on there being a valid shebang either, you might pass them to perl -c.
for f in *; do
    perl -c "$f" 2>/dev/null && echo "$f is Perl"
done
If you want properly machine-readable output, maybe switch the echo to printf '%s\0' "$f" so you can pass it to xargs -0 and friends.
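As a sketch, that null-delimited variant might look like this (echo is just a stand-in for whatever you want to run per script):
for f in *; do
    perl -c "$f" 2>/dev/null && printf '%s\0' "$f"
done | xargs -0 echo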
The obvious flaw with this is that a Perl script with an error in it will be reported as not being (valid) Perl.
Check the shebang
head -n 1 script | grep perl
Normally, most command-line scripts contain a shebang, i.e. something like
#!/usr/bin/perl
They're not required if you are calling the script like this
perl script
but if you want to call them as a system command, they help.
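Applied to a whole directory, a minimal sketch of that check:
for f in *; do
    head -n 1 "$f" | grep -q perl && echo "$f is probably Perl"
done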
find ./ -type f -exec egrep -I -l '^use strict;|^use warnings;|^sub |my \$|my \%|my \@|\->{' {} + 2>&1 \
| egrep -v 'README|\.git|\.zsh$|\.sh$' \
| xargs file | grep 'ASCII' \
| awk '{print $1}' \
| sed 's/:$//'
Not perfect, but this will find most files containing relatively modern Perl 5 code.
Since they do not have the extension, try this:
find /path/to/directory/ -type f | while IFS= read -r line; do
    if file -b "$line" | grep -qi perl; then
        echo "$line is a perl file"
    fi
done
I'm building a little bash script to run another bash script that's found in multiple directories. Here's the code:
cd /home/mainuser/CaseStudies/
grep -R -o --include="Auto.sh" [\w] | wc -l
When I execute just that part, it finds the same file 5 times in each folder. So instead of getting 49 results, I get 245. I've written a recursive bash script before and I used it as a template for this problem:
grep -R -o --include=*.class [\w] | wc -l
This code has always worked perfectly, without any duplication. I've tried running the first command with and without the quotes, and I've tried -r as well. I've read through the bash documentation and I can't seem to find a way to prevent this duplication, or even an explanation of why I'm getting it. Any thoughts on how to get around this?
As a separate but related question: I'd like to launch Auto.sh inside each directory so that its output is dumped into that directory, without having to place a copy of Auto.sh in each folder. That would probably be much more efficient than what I'm currently doing, and it would also probably fix my current duplication problem.
This is the code for Auto.sh:
#!/bin/bash
index=1
cd /home/mainuser/CaseStudies/
grep -R -o --include=*.class [\w] | wc -l
grep -R -o --include=*.class [\w] | awk '{print $3}' > out.txt
while read LINE; do
    echo 'Path '$LINE > 'Outputs/ClassOut'$index'.txt'
    javap -c $LINE >> 'Outputs/ClassOut'$index'.txt'
    index=$((index+1))
done <out.txt
Preferably, I would like to make it dump only the javap outputs for the application it's currently looking at. Since those .class files could be in any number of sub-directories, I'm not sure how to make them all dump into the top folder without executing a modified Auto.sh in the top directory of each application.
OK, so to fix the multiple matches:
grep -R -o --include="Auto.sh" [\w] | wc -l
Should be:
grep -R -l --include=Auto.sh '\w' | wc -l
The reason this was happening is that with -o, grep prints every match of the letter w in Auto.sh, which occurs 5 times in the file; with -l it prints each matching filename only once.
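A quick illustration of the difference (counts are the ones from this case):
grep -o 'w' Auto.sh | wc -l    # one line per match: 5
grep -l 'w' Auto.sh | wc -l    # one line per matching file: 1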
However, the overall fix that doesn't require having to place Auto.sh in every directory, is something like this:
MAIN_DIR=/home/mainuser/CaseStudies/
cd "$MAIN_DIR"
ls -d */ > DirectoryList.txt
while read LINE; do
    cd "$LINE"
    mkdir -p ProjectOutputs
    bash /home/mainuser/Auto.sh
    cd "$MAIN_DIR"
done <DirectoryList.txt
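As a sketch, the same loop works without the temporary DirectoryList.txt, using a glob and a subshell so every iteration starts back outside the project directory:
for d in /home/mainuser/CaseStudies/*/; do
    ( cd "$d" && mkdir -p ProjectOutputs && bash /home/mainuser/Auto.sh )
done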
That calls this Auto.sh code:
index=1
grep -R -o --include=*.class '\w' | wc -l
grep -R -o --include=*.class '\w' | awk '{print $3}' > ProjectOutputs.txt
while read LINE; do
    echo "Path $LINE" > "ProjectOutputs/ClassOut$index.txt"
    javap -c "$LINE" >> "ProjectOutputs/ClassOut$index.txt"
    index=$((index+1))
done <ProjectOutputs.txt
Thanks again for everyone's help!
I have a command that should be executed by a shell script.
Actually, the command itself does not matter; the only important things are the subsequent command execution and the right escaping of the critical parts.
The command that is usually executed normally in PuTTY is something like this (maybe with some additional flags for ls):
rm -r `ls /test/parse_first/ | awk '{print $2}' | grep trash`
but now I have a batch of such commands, so I would like to execute them in a loop like this:
for i in {0..100}
do
    str=str$i
    ${!str}
done
where each str variable is something like:
str0="rm -r `ls /test/parse_first/ | awk '{print $2}' | grep trash`"
str1="rm -r `ls /test/parse_second/ | awk '{print $2}' | grep trash`"
and that gives me a lot of headache, because the execution via ${!str} breaks the quoting and the inline shell between the `...` backticks.
my_rm() { rm -r `ls /test/$1 | awk ... | grep ... `; }

for i in `whatever`; do
    my_rm "$i"
done
Getting this right is surprisingly tricky, but it can be done. The outer eval turns \$$str into $str0 and runs eval "$str0"; the inner eval then parses the contents of str0 as a full command, so its pipes and backticks are honored:
for i in $(seq 0 100)
do
    str=str$i
    eval "eval \"\$$str\""
done
You can also do:
for i in {0..10}
do
    <whatevercommand>
done
It's actually simpler to place them in arrays and use glob patterns:
#!/bin/bash
shopt -s nullglob
DIRS=("/test/parse_first/" "/test/parse_second/")
for D in "${DIRS[@]}"; do
    for T in "$D"/*trash*; do
        rm -r -- "$T"
    done
done
And since rm accepts multiple arguments, you don't even need the inner loop:
for D in "${DIRS[#]}"; do
rm -r -- "$D"/*trash*
done
UPDATE:
#!/bin/bash
readarray -t COMMANDS <<'EOF'
rm -r `ls /test/parse_first/ | awk '{print $2}' | grep trash`
rm -r `ls /test/parse_second/ | awk '{print $2}' | grep trash`
EOF
for C in "${COMMANDS[@]}"; do
    eval "$C"
done
Or you could just read commands from another file:
readarray -t COMMANDS < somefile.txt
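where somefile.txt simply contains one command per line, e.g. the same two commands as in the here-document above:
rm -r `ls /test/parse_first/ | awk '{print $2}' | grep trash`
rm -r `ls /test/parse_second/ | awk '{print $2}' | grep trash`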