I am writing a bash script and I am trying to show the processes of users who are not logged in, which are typically daemon processes. For this, the exercise recommends:
To process the command line, we will use the cut command, which allows
selecting the different columns of the list through a filter.
I used:
ps -A | grep -v w
ps -A | grep -v who
ps -A | grep -v $USER
but with all of these options, the processes of every user are printed to the output file, and I only want the processes of users who are not logged in.
I appreciate your help
Thank you.
grep -v w will remove lines matching the regular expression w (which is simply any line that contains the letter w). To run the command w you have to say so explicitly; and as hinted in the instructions, you will also need to use cut to post-process its output.
So as not to give the answer away completely, here's rough pseudocode.
w | cut something >tempfile
ps -A | grep -Fvf tempfile
It would be nice if you could pass the post-processed results of w in a pipe, but grep's standard input is already tied to the output of ps -A. If you have a shell which supports process substitution, you can use that.
ps -A | grep -Fvf <(w | cut something)
Unfortunately, the output from w is not properly machine-readable -- you will probably want to cut out the header line(s), too. (On my machine, there are two header lines. Yours might differ.) You'll probably learn a bit of Awk later on in the course, but until then, maybe something like
ps -A | grep -Fvf <(w | tail -n +3 | cut something)
This still doesn't completely handle all possible situations. What if someone's account name is grep?
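For reference, here is one way the cut portion can be filled in on a typical GNU/Linux system; treat it as a sketch rather than the official answer. The user name is the first space-separated field of w's output, and you will also want a ps format that actually shows a user column (ps -ef, for example) so the filter has something to match against:
ps -ef | grep -Fvf <(w | tail -n +3 | cut -d ' ' -f1 | sort -u)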
I'm trying to tweak a bash script to pull back the PIDs of the individual application accounts when there are multiple applications running as a masterId. These used to run under individual user accounts, but recent changes have forced the applications to all run under a combined "masterId" while still maintaining a unique application ID that I can grep against.
Normally
pgrep -u "appId"
would give me a single PID. Now I have to run:
pgrep -u "masterId"
it returns all of the PIDs (each one is its own application):
1234
2345
3456
I'm trying to come up with a command to bring me back just the PID of the appAccount(n) so I can pipe it into other useful commands. I can do a double grep (which is closer to what I want):
ps aux | grep -i "masterId" | grep -i "appAccount(n)"
and that will get me the entire line of process information, but I just want the PID so I can do something like:
ps aux | grep -i "masterId" | grep -i "appAccount(n)" | xargs sudo -u appAccount(n) kill -9
How do I modify the initial above command to get just the PID? Is there a better way to do this?
pgrep --euid "masterId" --list-full | awk '/appAccount(n)/ {print $1}'
Output the full process command line, then select the one with the desired account and print the first field (pid).
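To feed that PID into another command, as in the question's kill example, it can be captured in a variable first (appAccount1 below is just a stand-in for the real application account; this is a sketch of the idea, not a tested recipe):
app_pid=$(pgrep --euid "masterId" --list-full | awk '/appAccount1/ {print $1}')
sudo -u appAccount1 kill "$app_pid"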
I'm trying to get the process IDs of multiple processes run from multiple tmux windows. Each process is started with its own unique command line. I think only the command line is unique, as there can be multiple processes with the same name. What I currently do is:
process_pid=$(ps --no-headers aux | grep "${process_cmd_line}" | grep -v grep | awk '{print $2}' | tr '\n' ' ')
This works for me, but I want to know if this is the correct approach. I know there are output format specifiers. Is there an example doing the same with a format specifier, or an improvement over the code above?
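As a sketch of one possible improvement (not the only correct approach): the format specifiers pid= and args= limit ps output to just those two columns and suppress the header, and pgrep -f can replace the whole grep chain because it matches against the full command line and never reports itself.
# with output format specifiers: pid= and args= print only those columns, no header,
# so the PID is reliably the first field
process_pid=$(ps -eo pid=,args= | grep -F "${process_cmd_line}" | grep -v grep | awk '{print $1}')
# or let pgrep match the full command line; no "grep -v grep" is needed, but the
# pattern is treated as an extended regex, so escape any metacharacters in it
process_pid=$(pgrep -f "${process_cmd_line}")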
Is there a way of counting the number of processes being run by a user in the unix/linux/os x terminal?
For instance, top -u taha lists my processes. I want to be able to count these.
This will show all of the users with their counts (I believe this will be close enough for you :)
ps -u "$(echo $(w -h | cut -d ' ' -f1 | sort -u))" o user= | sort | uniq -c | sort -rn
You can use ps to list the processes and count them with wc:
ps -u user | sed 1d | wc -l
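A variant that avoids the sed step is to request an output format with an empty header, so no header line is printed at all (taha is the user from the question):
ps -u taha -o pid= | wc -l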
You can also dump top output and grep it, something like:
top -u user -n1 | grep user | wc -l
I'm somewhat new to *nix, so perhaps I did not fully understand the context of your question, but here is a possible solution:
jobs | wc -l
The output of the above command is a count of all the processes reported by the jobs command. You can manipulate the parameters of the jobs command to change which processes get reported.
EDIT: Just FYI, this only works if you are interested in commands originating from a particular shell. If you want more control in looking at system-wide processes, you probably want to use ps as others have suggested. However, if you use wc to do your counting, make sure you take into account any extraneous whitespace that jobs, ps or top may have generated, as that will affect the output of wc.
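For example, counting only the PID lines that bash's jobs -p prints (one per job) sidesteps most of the whitespace concerns; a small sketch:
jobs -p | wc -l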
I have a Linux driver running in the background that is able to return the current system data/stats. I view the data by running a console utility (let's call it dump-data) in a console. All data is dumped every time I run dump-data. The output of the utility is like below
Output:
- A=reading1
- B=reading2
- C=reading3
- D=reading4
- E=reading5
...
- variableX=readingX
...
The list of readings returned by the utility can be really long. Depending on the scenario, certain readings would be useful while everything else would be useless.
I need a way to grep only the useful readings, whose names might have nothing in common (via a bash script). I.e. sometimes I'll need to collect A, D, E; and other times I'll need C, D, E.
I'm attempting to graph the readings over time to look for trends, so I can't run something like this:
# forgive my pseudocode
while true; do
    dump-data | grep A
    dump-data | grep D
    dump-data | grep E
done
to collect A, D, E, because that would actually give me readings from three separate calls of dump-data, which would not be accurate.
If you want to save all the results of grep in the same file, you can just join all the expressions into one:
grep -E 'expr1|expr2|expr3'
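Applied to the dump-data output shown in the question, that could look like the line below (A, D and E are the reading names from the question; anchoring on the leading "- " and the trailing "=" keeps A from also matching other readings whose names merely contain an A):
dump-data | grep -E '^- (A|D|E)='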
But if you want to have the results (for expr1, expr2 and expr3) in separate files, things get more interesting.
You can do this using tee >(command).
For example, here I process the same pipe with three different commands:
$ echo abc | tee >(sed s/a/_a_/ > file1) | tee >(sed s/b/_b_/ > file2) | sed s/c/_c_/ > file3
$ grep "" file[123]
file1:_a_bc
file2:a_b_c
file3:ab_c_
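Applied to dump-data, the same pattern might look like the sketch below (the reading names and log-file names are assumptions); a single tee can feed several process substitutions at once:
dump-data | tee >(grep '^- A=' > A.log) >(grep '^- D=' > D.log) | grep '^- E=' > E.log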
But the command seems to be too complex.
I would rather save the dump-data results to a file and then grep it:
TEMP=$(mktemp /tmp/dump-data-XXXXXXXX)
dump-data > ${TEMP}
grep A ${TEMP}
grep B ${TEMP}
grep C ${TEMP}
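Wrapped in the collect-over-time loop from the question, that could become something like the following sketch (the reading names, log files and sleep interval are assumptions):
while true; do
    TEMP=$(mktemp /tmp/dump-data-XXXXXXXX)
    dump-data > "${TEMP}"
    # every reading below comes from the same single dump, so they stay consistent
    grep '^- A=' "${TEMP}" >> A.log
    grep '^- D=' "${TEMP}" >> D.log
    grep '^- E=' "${TEMP}" >> E.log
    rm -f "${TEMP}"
    sleep 60
done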
You can use dump-data | grep -E "A|D|E". Note the -E option of grep. Alternatively you could use egrep without the -E option.
You can simply use:
dump-data | grep -E 'A|D|E'
awk '/MY PATTERN/{print > "matches-"FILENAME;}' myfile{1,3}
thx Guru at Stack Exchange
There is a similar question in Preserve ls colouring after grep’ing, but it annoys me that if you pipe colored grep output into another grep, the coloring is not preserved.
As an example, grep --color WORD * | grep -v AVOID does not keep the color of the first output. But for me ls | grep FILE does keep the color; why the difference?
grep sometimes disables the color output, for example when writing to a pipe. You can override this behavior with grep --color=always
The correct command line would be
grep --color=always WORD * | grep -v AVOID
This is pretty verbose; alternatively, you can just add the line
alias cgrep="grep --color=always"
to your .bashrc, for example, and use cgrep as the colored grep. If you redefine grep itself, you might run into trouble with scripts which rely on the specific output of grep and don't like ASCII escape codes.
A word of advice:
When using grep --color=always, the actual strings being passed on to the next pipe will be changed. This can lead to the following situation:
$ grep --color=always -e '1' * | grep -ve '12'
11
12
13
Even though the option -ve '12' should exclude the middle line, it does not, because there are color escape codes between the 1 and the 2.
The existing answers only address the case when the FIRST command is grep (as asked by the OP, but this problem arises in other situations too).
More general answer
The basic problem is that the command BEFORE | grep tries to be "smart" by disabling color when it realizes the output is going to a pipe. This is usually what you want, so that ANSI escape codes don't interfere with your downstream program.
But if you want colorized output emanating from earlier commands, you need to force color codes to be produced regardless of the output sink. The forcing mechanism is program-specific.
Git: use -c color.status=always
git -c color.status=always status | grep -v .DS_Store
Note: the -c option must come BEFORE the subcommand status.
Others
(this is a community wiki post so feel free to add yours)
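One more entry in that spirit, for GNU coreutils ls (an addition, following the wiki invitation above):
ls: use --color=always
ls --color=always | grep FILE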
Simply repeat the same grep command at the end of your pipe.
grep WORD * | grep -v AVOID | grep -v AVOID2 | grep WORD