I want to show only the directories where the binaries are installed. Like
/bin
for
/bin/ls
This is what I've done so far:
ps aux | awk '{print $11}' | grep -x -e "/.*"
But it's displaying the filename too, and I don't want that. An example of the output:
/usr/lib/firefox/firefox
But I'd like it like this:
/usr/lib/firefox
Thank you!
The command to extract the directory name from a path is dirname "path/to/file". As you can see, it requires an argument (it does not read from stdin). You can, however, use xargs to fix this:
xargs dirname
Now you simply need to add this at the end of your pipeline:
ps aux | awk '{print $11}' | grep -x -e "/.*" | xargs dirname
Demo
Ran this on my Linux machine:
$ ps aux | awk '{print $11}' | grep -x -e "/.*" | xargs dirname | head
/sbin
/lib/systemd
/lib/systemd
/sbin
/usr/sbin
/usr/sbin
/usr/sbin
/usr/sbin
/usr/sbin
/usr/bin
In order to make your command space-safe (a remark by @hek2mgl), you can use:
ps aux | awk '{print $11}' | grep -x -e "/.*" | xargs -I file dirname "file"
Mind that this has a performance impact: xargs dirname without any flags hands many paths to a single dirname invocation, which loops over its arguments in a tight loop, whereas the -I variant spawns a separate dirname process for each line.
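You can see the difference on a toy input (the paths below are made up for illustration):
printf '%s\n' /usr/bin/a /usr/bin/b /usr/bin/c | xargs dirname          # one dirname call with three arguments
printf '%s\n' /usr/bin/a /usr/bin/b /usr/bin/c | xargs -I f dirname "f" # one dirname call per input line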
More elegant way
Your pipeline relies on a lot of text processing, which can be tricky, error-prone, and sensitive to changes in the output format of ps, among other things. A less error-prone way is:
ps -A -o pid | xargs -I pid readlink "/proc/pid/exe" | xargs -I file dirname "file"
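As a hedged refinement of my own (not part of the original answer): readlink fails for kernel threads and for processes you don't own, so you may want to silence those errors and deduplicate the result:
ps -A -o pid= | xargs -I pid readlink "/proc/pid/exe" 2>/dev/null | xargs -I file dirname "file" | sort -u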
Related
I have a program that, in distributed mode, creates a folder and spawns a bunch of subprocesses. Is there any way to find all PIDs that were executed from this folder? Sort of the opposite of
$ pwdx pid
where you give a path name and get back a bunch of PIDs.
Thanks!
Reporting all processes whose absolute path is inside /usr/bin/ may be done like this:
ls -l /proc/*/exe 2>/dev/null | grep /usr/bin/ | sed 's#.*/proc/##;s#/exe.*##;' | grep -v "self"
Reporting all processes whose working directory (the working directory can be changed by a simple cd) is inside /tmp/a could be done like this:
ps axo pid | xargs -n1 pwdx 2>/dev/null | grep ': /tmp/a' | sed 's/:.*//'
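To get the "inverse pwdx" the question asks for, you could wrap that last command in a small helper; pids_in_dir is a hypothetical name of my own:
pids_in_dir() {
    # $1 is the directory prefix to match against each process's working directory
    ps axo pid= | xargs -n1 pwdx 2>/dev/null | grep ": $1" | sed 's/:.*//'
}
pids_in_dir /tmp/a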
I would like to create a Perl or bash script that reads keyboard input into a variable, performs a fixed-string grep for it recursively within the current directory (which is full of Snort logs), then automatically runs tcpdump on the matched files, greps its output, and prints the specified lines to the terminal. Does anyone have a good idea of how this should work?
Here is an example of the methodology I want from the script:
step 1: Read keyboard input and assign it to a variable named string.
step 2 command: grep -Fr "$string"
step 2 output: snort.log.1470609906 matches
step 3 command: tcpdump -r snort.log.1470609906 | grep -F "$string" -C 10
step 3 output:
Snort log
Here's some bash code that does that:
s="google.com"
grep -Frl "$s" | \
while IFS= read -r x; do
tcpdump -r "$x" | grep -F "$s" -C10
done
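To cover step 1, reading the string from the keyboard instead of hardcoding it, you could replace the first line with a prompt:
read -r -p "Search string: " s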
I don't know about Perl, but you can do it easily enough just in shell:
str="google.com"
find . -type f -name 'snort.log.*' -exec grep -FlZ "$str" {} + |
xargs -0 -I {} sh -c 'tcpdump -r "{}" | grep -F '"$str"' -C10'
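One caveat of my own, not from the original answer: splicing {} and $str directly into the sh -c string breaks on filenames or search strings that contain quotes. A safer sketch passes both as positional arguments:
find . -type f -name 'snort.log.*' -exec grep -FlZ "$str" {} + |
xargs -0 -I {} sh -c 'tcpdump -r "$1" | grep -F "$2" -C10' _ {} "$str"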
I have a bunch of files under a directory. How can I check all of them and determine whether each one is a Perl script or not? (They don't have .pl in the filename.)
If you cannot rely on there being a valid shebang either, you might pass them to perl -c.
for f in *; do
perl -c "$f" 2>/dev/null && echo "$f is Perl"
done
If you want properly machine-readable output, maybe switch the echo to printf '%s\0' "$f" so you can pass it to xargs -0 and friends.
The obvious flaw with this is that a Perl script with an error in it will be reported as not being (valid) Perl.
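A minimal sketch of the machine-readable variant mentioned above, assuming you want to feed the matches to an xargs -0 pipeline:
for f in *; do
    perl -c "$f" 2>/dev/null && printf '%s\0' "$f"
done | xargs -0 ls -l    # any downstream command that accepts NUL-delimited input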
Check the shebang
head -n 1 script | grep perl
Normally, most command-line scripts contain a shebang, i.e. something like
#!/usr/bin/perl
They're not required if you are calling the script like this
perl script
but if you want to call them as a system command, they help.
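To apply the shebang check to every file in a directory, a minimal sketch (the pattern '^#!.*perl' is my assumption about what a Perl shebang looks like):
for f in *; do
    head -n 1 "$f" | grep -q '^#!.*perl' && echo "$f looks like Perl"
done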
find ./ -type f -exec egrep -I -l '^use strict;|^use warnings;|^sub |my \$|my \%|my \#|\->{' {} + 2>&1 \
| egrep -v 'README|\.git|\.zsh$|\.sh$' \
| xargs file | grep 'ASCII' \
| awk '{print $1}' \
| sed 's/:$//'
Not perfect, but this will find most files with relatively modern Perl 5 code in them.
Since they do not have the extension, try this:
find /path/to/directory/ -type f | while IFS= read -r line; do
    if file -b "$line" | grep -qi perl; then
        echo "$line is a perl file"
    fi
done
I'm building a little bash script to run another bash script that's found in multiple directories. Here's the code:
cd /home/mainuser/CaseStudies/
grep -R -o --include="Auto.sh" [\w] | wc -l
When I execute just that part, it finds the same file 5 times in each folder. So instead of getting 49 results, I get 245. I've written a recursive bash script before and I used it as a template for this problem:
grep -R -o --include=*.class [\w] | wc -l
This code has always worked perfectly, without any duplication. I've tried running the first command with and without the quotes, and I've tried -r as well. I've read through the bash documentation and I can't figure out how to prevent this duplication, or even why it's happening. Any thoughts on how to get around this?
As a separate but related question: is there a way to launch Auto.sh inside each directory so that its output is dumped into that directory, without having to place a copy of Auto.sh in each folder? That would probably be much more efficient than what I'm currently doing, and it would also probably fix my duplication problem.
This is the code for Auto.sh:
#!/bin/bash
index=1
cd /home/mainuser/CaseStudies/
grep -R -o --include=*.class [\w] | wc -l
grep -R -o --include=*.class [\w] |awk '{print $3}' > out.txt
while read LINE; do
echo 'Path '$LINE > 'Outputs/ClassOut'$index'.txt'
javap -c $LINE >> 'Outputs/ClassOut'$index'.txt'
index=$((index+1))
done <out.txt
Preferably I would like it to dump only the javap outputs for the application it's currently looking at. Since those .class files could be in any number of sub-directories, I'm not sure how to make them all dump into the top folder without executing a modified Auto.sh in the top directory of each application.
OK, so to fix the duplicate matches:
grep -R -o --include="Auto.sh" [\w] | wc -l
Should be:
grep -R -l --include=Auto.sh '\w' | wc -l
The reason this was happening is that with -o, grep prints one line per match, and it was matching instances of the letter w, which occurred 5 times in Auto.sh. With -l, grep prints each matching file only once.
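A quick demo of the difference (demo.sh is a throwaway file of my own):
printf 'w w w w w\n' > demo.sh
grep -o 'w' demo.sh | wc -l   # 5: one output line per match
grep -l 'w' demo.sh | wc -l   # 1: one output line per matching file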
However, the overall fix that doesn't require having to place Auto.sh in every directory, is something like this:
MAIN_DIR=/home/mainuser/CaseStudies/
cd "$MAIN_DIR"
ls -d */ > DirectoryList.txt
while IFS= read -r LINE; do
    cd "$LINE"
    mkdir -p ProjectOutputs
    bash /home/mainuser/Auto.sh
    cd "$MAIN_DIR"
done < DirectoryList.txt
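An alternative sketch of my own (not part of the original answer) that avoids the temporary DirectoryList.txt by letting find enumerate the top-level directories:
find /home/mainuser/CaseStudies/ -mindepth 1 -maxdepth 1 -type d -exec sh -c '
    cd "$1" && mkdir -p ProjectOutputs && bash /home/mainuser/Auto.sh
' _ {} \;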
Both variants call this Auto.sh code:
index=1
grep -R -o --include='*.class' '\w' | wc -l
grep -R -o --include='*.class' '\w' | awk '{print $3}' > ProjectOutputs.txt
while IFS= read -r LINE; do
    echo "Path $LINE" > "ProjectOutputs/ClassOut$index.txt"
    javap -c "$LINE" >> "ProjectOutputs/ClassOut$index.txt"
    index=$((index+1))
done < ProjectOutputs.txt
Thanks again for everyone's help!
For instance, if I'd like to reference the output of the previous command once, I can use the command below:
ls *.txt | xargs -I % ls -l %
But how can I reference the output twice? Like, how can I implement something like:
ls *.txt | xargs -I % 'some command' % > %
PS: I know how to do it in a shell script, but I just want a simpler way to do it.
You can pass the xargs replacement as an argument to bash -c:
ls *.txt | xargs -I % bash -c 'ls -l "$1" > "out.$1"' - %
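The lone - after the quoted script fills $0 inside bash -c, so the filename substituted by xargs lands in $1; quoting "$1" is what keeps this safe for filenames with spaces.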
You can look up 'tpipe' on Stack Overflow; it will also lead you to 'pee' (which is not a good search term elsewhere on the internet). Basically, they're variants of the tee command that write to multiple processes instead of writing to files as tee does.
However, with Bash, you can use Process Substitution:
ls *.txt | tee >(cmd1) >(cmd2)
This will write the input to tee to each of the commands cmd1 and cmd2.
You can arrange to lose standard output in at least two different ways:
ls *.txt | tee >(cmd1) >(cmd2) >/dev/null
ls *.txt | tee >(cmd1) | cmd2
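As a concrete sketch of the first form (count.txt and sorted.txt are hypothetical output names of my own): feed the listing to two consumers and discard the original stream:
ls *.txt | tee >(wc -l > count.txt) >(sort > sorted.txt) > /dev/null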