I'm trying to use grep on a machine but it just isn't working. When I use it after a pipeline (for example, ps -aux | grep grep) it works (in the example, showing the grep processes).
But when I try to search for a word in files, the process just sits in the S+ state, no matter whether I run grep anything, grep -Ril anything, etc.
From the man page:
grep searches the named input FILEs (or standard input if no files are named ...)
So when you run grep anything, grep is waiting for standard input. You need to give it either filenames on the command line, or input via stdin.
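A quick way to see the difference (the directory here is just a placeholder):
# Files named on the command line: grep searches them and exits.
grep -Ril anything /some/directory
# No files and no pipe: grep sits reading your terminal (the S+ state you saw);
# type lines and press Ctrl-D to end the input, or Ctrl-C to abort.
grep anything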
I'm trying to tweak a bash script to pull back the PIDs of the individual application accounts when there are multiple applications running under a master ID. This used to run under individual user accounts, but recent changes have forced the applications to all run under a combined "masterId", while still maintaining a unique application ID that I can grep against.
Normally
pgrep -u "appId"
would give me a single PID. Now I have to run:
pgrep -u "masterId"
and it returns all of the PIDs (each one is its own application):
1234
2345
3456
I'm trying to come up with a command to bring me back just the PID of the appAccount(n) so I can pipe it into other useful commands. I can do a double grep (which is closer to what I want):
ps aux | grep -i "masterId" | grep -i "appAccount(n)"
and that will get me the entire single process information, but I just want the PID to do something like:
ps aux | grep -i "masterId" | grep -i "appAccount(n)" | xargs sudo -u appAccount(n) kill -9
How do I modify the above command to get just the PID? Is there a better way to do this?
pgrep --euid "masterId" --list-full | awk '/appAccount(n)/ {print $1}'
Output the full process command line, then select the line with the desired account and print the first field (the PID).
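To feed that straight into kill, as the question wants, one option is command substitution (a sketch; masterId and appAccount(n) are the question's placeholders):
# Send SIGKILL to the single matching PID; the quotes keep the
# placeholders from being interpreted by the shell.
sudo -u "appAccount(n)" kill -9 "$(pgrep --euid "masterId" --list-full | awk '/appAccount(n)/ {print $1}')"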
I am writing a bash script and I am trying to show the processes of users who are not logged in, which are typically daemon processes. For this, the exercise recommends:
To process the command line, we will use the cut command, which allows
selecting the different columns of the list through a filter.
I used:
ps -A | grep -v w
ps -A | grep -v who
ps -A | grep -v $USER
but with all of these options, the processes of all users are printed to the output file, and I only want the processes of users who are not logged in.
I appreciate your help. Thank you.
grep -v w will remove lines matching the regular expression w (which is simply any line that contains the letter w). To run the command w you have to say so; but, as hinted in the instructions, you will also need to use cut to post-process its output.
So as not to give the answer away completely, here's rough pseudocode.
w | cut something >tempfile
ps -A | grep -Fvf tempfile
It would be nice if you could pass the post-processed results of w in a pipe, but grep's standard input is already tied to the output of ps -A. If you have a shell which supports process substitution, you can use that.
ps -A | grep -Fvf <(w | cut something)
Unfortunately, the output from w is not properly machine-readable -- you will probably want to cut away the header line(s), too. (On my machine, there are two header lines. Yours might differ.) You'll probably learn a bit of Awk later on in the course, but until then, maybe something like
ps -A | grep -Fvf <(w | tail -n +3 | cut something)
This still doesn't completely handle all possible situations. What if someone's account name is grep?
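Once you do know a bit of Awk, one way to close that hole is to compare whole user names instead of grepping entire lines. This is only a sketch, and it assumes a ps that supports -eo, a who whose first column is the login name, bash's process substitution, and at least one logged-in user:
# List processes whose owner is not among the logged-in users.
# awk reads <(who) first (NR==FNR) to collect login names, then
# filters the ps output arriving on stdin ("-").
ps -eo user=,pid=,comm= | awk 'NR==FNR { logged[$1]; next } !($1 in logged)' <(who) -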
What does the -s mean there, and what is the pipe into wc for? I know it eventually counts the number of times abc appears in the file filename, but I'm not sure what the -s option is for or what the pipe to wc means.
grep -is "abc" filename | wc -l
Output:
47
-s means "suppress error messages about unreadable files" and the pipe to wc means "take the output and send it to the wc -l command" which effectively counts the number of lines matched. You can accomplish the same with the -c option to grep: grep -isc "abc" filename
Consider:
command_1 | command_2
The role of the pipe is to take the output of the command written before it (command_1 here) and supply that output to the command written after it (command_2 here).
The man page has everything you would want to know about the options for grep:
-s, --no-messages
Suppress error messages about nonexistent or unreadable files.
Portability note: unlike GNU grep, traditional grep did not conform to POSIX.2, because traditional grep lacked a -q option and its -s option behaved like GNU grep's -q option. Shell scripts intended to be portable to traditional grep should avoid both -q and -s and should redirect output to /dev/null instead.
The pipe to wc -l is what gives you the count of how many lines the string "abc" appeared on. It isn't necessarily the number of times the string appears in the file, since a line with multiple occurrences is counted only once.
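If you do want per-occurrence counts rather than per-line counts, GNU grep's -o option prints each match on its own line, so wc -l then counts every occurrence (this assumes GNU grep):
# -o prints each match on a line of its own, so wc -l counts matches, not lines.
grep -io "abc" filename | wc -l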
grep man page says:
-s, --no-messages suppress error messages
grep returns the lines that have abc (case insensitive) in them. You pipe them to wc to get a count of the number of lines.
From man grep:
-s, --no-messages
Suppress error messages about nonexistent or unreadable files.
The wc command counts lines, words, and characters. With -l it returns the number of lines.
I have multiple piped commands, like this:
find [options] | grep [options] | xargs grep [options]
Each one of them can potentially produce errors (permission errors, spaces-in-filenames errors, etc.) that I am not interested in, so I want to redirect all errors to /dev/null. I know I can do this with 2>/dev/null for each command. Can I set IO redirection persistently? Ideally, I would just set it once, at the beginning or end of the command, and it would affect all of the piped commands. Also, can IO redirection be set permanently, so that it continues to affect all commands until it is reset?
I'm using bash (I checked the man page for bash builtins and did not see the > and < characters at the top, so I assumed it was a Linux thing... sorry).
I'm going to assume that you're using bash, or at least some sort of Bourne-like shell.
I'm also assuming that what you want to avoid is the following:
find ... 2>/dev/null | grep ... 2>/dev/null | xargs ... 2>/dev/null
i.e. repeating the 2>/dev/null part for each segment of the pipeline.
You can do that with:
( find ... | grep ... | xargs ... ) 2>/dev/null
You can also set the redirection permanently as follows:
exec 2>/dev/null
and (assuming that STDOUT and STDERR were both pointing to the same place before), you can undo that with:
exec 2>&1
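If stdout and stderr might point to different places, a more careful variant (a sketch; fd 3 is just an arbitrary spare descriptor) saves the original stderr first and restores it afterwards:
exec 3>&2           # save the current stderr on fd 3
exec 2>/dev/null    # silence stderr from here on
find ... | grep ... | xargs ...
exec 2>&3 3>&-      # restore stderr and close the spare descriptor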
Start a new shell with a redirected stderr, and run your pipeline in there?
Something like
$ bash 2>/dev/null -c "find [options] | grep [options] | xargs grep [options]"
It seems to work (i.e. no visible output) when I do
$ bash 2>/dev/null -c "echo nope 1>&2"
Of course, a side effect is that it is going to be tricky to understand what happened in the event of anything unexpected going on, but that's your business.
How do I pipe the output of grep as the search pattern for another grep?
As an example:
grep <Search_term> <file1> | xargs grep <file2>
I want the output of the first grep as the search term for the second grep. The above command is treating the output of the first grep as the file name for the second grep. I tried using the -e option for the second grep, but it does not work either.
You need to use xargs's -i switch:
grep ... | xargs -ifoo grep foo file_in_which_to_search
This takes the string given after -i (foo in this case) and replaces every occurrence of it in the command with each line of output from the first grep.
This is the same as:
grep `grep ...` file_in_which_to_search
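If your xargs rejects -i (it is a deprecated GNU spelling), the portable form is a capital -I with an explicit replacement string:
grep ... | xargs -I{} grep {} file_in_which_to_search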
Try
grep ... | fgrep -f - file1 file2 ...
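The -f - part tells fgrep to read its (fixed-string) patterns from standard input, one per line. For instance (the file names here are hypothetical):
# Every line of patterns.txt containing "error" becomes a pattern
# to search for in data.txt.
grep error patterns.txt | fgrep -f - data.txt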
If using Bash then you can use backticks:
> grep -e "`grep ... ...`" files
The -e flag and the double quotes are there to ensure that any output from the initial grep that starts with a hyphen isn't then interpreted as an option to the second grep.
Note that the double quoting trick (which also ensures that the output from grep is treated as a single parameter) only works with Bash. It doesn't appear to work with (t)csh.
Note also that backticks are the standard way to get the output from one program into the parameter list of another. Not all programs have a convenient way to read parameters from stdin the way that (f)grep does.
I wanted to search for text in files (using grep) that had a certain pattern in their file names (found using find) in the current directory. I used the following command:
grep -i "pattern1" $(find . -name "pattern2")
Here pattern2 is the pattern in the file names and pattern1 is the pattern searched for
within files matching pattern2.
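If the matching file names can contain spaces, the command substitution will split them apart; a safer variant (same idea, sketched with find's -exec) lets find hand the names to grep directly:
# No word splitting: find passes the file names straight to grep.
# The "+" batches many files into each grep invocation.
find . -name "pattern2" -exec grep -i "pattern1" {} +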
edit: Not strictly piping but still related and quite useful...
This is what I use to search for a file from a listing:
ls -la | grep 'file-in-which-to-search'
Okay, breaking the rules here, as this isn't an answer -- just a note that I can't get any of these solutions to work.
% fgrep -f test file
works fine.
% cat test | fgrep -f - file
fgrep: -: No such file or directory
fails.
% cat test | xargs -ifoo grep foo file
xargs: illegal option -- i
usage: xargs [-0opt] [-E eofstr] [-I replstr [-R replacements]] [-J replstr]
[-L number] [-n number [-x]] [-P maxprocs] [-s size]
[utility [argument ...]]
fails. Note that a capital I is necessary. If I use that, all is good.
% grep "`cat test`" file
kinda works, in that it returns a line for the terms that match, but it also returns a line grep: line 3 in test: No such file or directory for each file that doesn't find a match.
Am I missing something, or is this just a difference in my Darwin distribution or bash shell?
I tried it this way, and it works great.
[opuser#vjmachine abc]$ cat a
not problem
all
problem
first
not to get
read problem
read not problem
[opuser#vjmachine abc]$ cat b
not problem xxy
problem abcd
read problem werwer
read not problem 98989
123 not problem 345
345 problem tyu
[opuser#vjmachine abc]$ grep -e "`grep problem a`" b --col
not problem xxy
problem abcd
read problem werwer
read not problem 98989
123 not problem 345
345 problem tyu
[opuser#vjmachine abc]$
You should run grep in such a way that it extracts file names only; see the -l parameter (the lowercase L):
grep -l someSearch * | xargs grep otherSearch
With a plain grep, the output is much more than just the file names. For instance, when you do
grep someSearch *
you will pipe info like this to xargs:
filename1: blablabla someSearch blablabla something else
filename2: bla someSearch bla otherSearch
...
It makes no sense to pass any of the above lines to xargs.
But when you do grep -l someSearch *, your output will look like this:
filename1
filename2
Output like that can now be passed to xargs.
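If the file names might contain spaces or newlines, GNU grep and xargs can pass them NUL-separated instead (assuming GNU tools):
# -Z ends each file name with a NUL byte; xargs -0 reads them back safely.
grep -lZ someSearch * | xargs -0 grep otherSearch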
I have found the following command to work, using $() with my first command inside the parentheses so that the shell executes it first.
grep "$(dig +short hostname)" file
I use this to look through files for an IP address when I am given a host name.
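If dig returns several records (a CNAME chain, or multiple A records), the -f - trick from earlier in this thread handles all of them at once (example.com is a placeholder):
# Each line of dig's output becomes a fixed-string pattern for grep.
dig +short example.com | grep -Ff - file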