I'm new to bash scripting and I've been learning as I go with a small project I'm taking on. However, I've run into a problem that I cannot seem to get past.
I have a variable that I need to include in a command. When run directly in the shell (with the value typed in manually), the command returns the expected result. However, I can't get it to work when using a variable.
So, if I manually run this, it correctly returns 0 or 1, depending if it is running or not.
ps -ef | grep -v grep | grep -c ProcessName
However, when I try to embed that into this while clause, it always evaluates to 0 because it's not searching for the correct text.
while [ `ps -ef | grep -v grep | grep -c {$1}` -ne 0 ]
do
sleep 5
done
Is there a way I can accomplish this? I've tried a myriad of different things to no avail. I also tried using the $() syntax for command substitution, but I had no luck with that either.
Thanks!
I think that instead of {$1} you mean "$1". Also, you can just do pgrep -c "$1" instead of the two pipes.
In addition, there's also no need to compare the output of grep -c with 0, since you can just see if the command failed or not. So, a much simplified version might be:
while pgrep "$1" > /dev/null
do
sleep 4
done
You should really use -C with ps rather than the messy pipes if you're using the full process name. If you're interested in substring matching, then your way is the only thing I can think of.
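For the full-name case, a rough sketch of what that could look like (assuming $1 holds the exact command name; -o pid= suppresses the header, so the output is empty when nothing matches):

# Wait while any process whose command name is exactly $1 is running.
while [ -n "$(ps -C "$1" -o pid=)" ]
do
    sleep 5
done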
Related
When I do
ps -ax|grep myApp
I get the one line with the PID and other details of my app.
Now, I'd like to process the whole result of ps -ax (without grep, so the full output):
Either store it in a variable and grep from it later
Or go through the results in a for loop, e.g. like this:
for a in $(ps -ax)
do
echo $a
done
Unfortunately, this splits on every space, not on newlines the way |grep does.
Any ideas, how I can accomplish one or the other (grep from variable or for loop)?
Important: No bash please, only POSIX, so #!/bin/sh
Thanks in advance
As stated above, a while loop can be helpful here.
One more useful thing is the --no-headers argument, which makes ps skip the header line.
Or, even better, specify the exact columns you need to process, e.g. ps --no-headers -o pid,command ax
The overall code would look like this:
processes=`ps --no-headers -o pid,command ax`
echo "$processes" | while read -r pid command; do
    echo "we have process with pid $pid and command line $command"
done
The only downside to this approach is that the commands inside the while loop run in a subshell (because of the pipe), so if you need to pass some variable back to the parent process you'll have to do it with some form of inter-process communication.
I usually dump the results into a temp file created before the while loop and read them after the loop has finished.
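For example, a sketch of that idea (assuming mktemp is available): feed the loop from a temp file instead of a pipe, so the loop no longer runs in a subshell and its variables survive:

# capture the process list once, then read it without a pipe
tmp=$(mktemp)
ps --no-headers -o pid,command ax > "$tmp"
count=0
while read -r pid command; do
    count=$((count + 1))
done < "$tmp"
rm -f "$tmp"
echo "counted $count processes"   # count is still visible here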
I found a solution by replacing the spaces while executing the command:
result=$(ps -aux|sed 's/ /_/g')
You can also make it more filter-friendly by collapsing runs of spaces first:
result=$(ps -aux| tr -s ' '|sed 's/ /_/g')
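To illustrate why that helps (just a sketch): once the spaces are gone, the unquoted expansion splits on newlines only, so each loop iteration sees one full ps row, and you can still grep the stored variable later:

result=$(ps -aux | tr -s ' ' | sed 's/ /_/g')

# one full ps row per iteration, since the lines contain no spaces
for line in $result
do
    echo "$line"
done

# or grep from the stored variable afterwards
echo "$result" | grep myApp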
I need to highlight certain keywords like "fail, failed, error, fatal, missing" over my terminal.
I need this with the output of ALL the commands, not any specific command. I assume I need to tweak my bashrc file for this.
To color I can use:
<input coming to terminal>|grep -P --color=auto 'fail|failed|error|fatal|missing|$'
I tried the following command but not helped:
tail -f $(tty) |grep -P --color=auto 'fail|failed|error|fatal|missing|$' &
[1]+ Stopped(SIGTTIN) tail -f $(tty) | grep -P --color=auto 'fail|failed|error|fatal|missing|$'
Searched SO for answers but could not find any question which provides the desired answer.
I don't think there's really an elegant way to do this using the shell. Ideally, you'd get a terminal emulator with this kind of keyword highlighting built in. You can get part of the way there by piping the output of bash through a filter that adds ANSI colour escapes. Here is a sed script that replaces "fail" with (red)fail(normal):
# Wrap "fail" in red: ESC[31m turns on red, ESC[0m resets the colour.
s/fail/\x1B[31m&\x1B[0m/
# Branch to the end after a substitution (a hook for adding more rules).
t done
:done
Run bash with its output piped through sed like this:
$ bash | sed -f color.sed
This mechanism is not without problems, but it works in some cases. Usually it's better just to collect up the output you want, and then pipe it through sed, rather than working directly with the bash output.
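To cover all of the keywords from the question in one go, something like this might work (an untested sketch; the \| alternation and the \x1B escape are GNU sed extensions, and "failed" is listed before "fail" so the longer word wins):

# color.sed - wrap any of the keywords in red
s/failed\|fail\|error\|fatal\|missing/\x1B[31m&\x1B[0m/g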
With this grep it shows a command I used:
echo `history | grep "ssh root" | head -1| cut -c6-`
with this output:
ssh root#107.170.70.100
I want the output to be executed directly as a command instead of being printed.
How can I do it?
In principle, this can be done by using the $() format, so
$(history | grep "ssh root" | head -1| cut -c6-)
should do what you ask for. However, I don't think that it is advisable to do so, as this will automatically execute the command that results from your grep, so if you make a mistake, a lot of bad things can happen. Instead I suggest reviewing your result before re-executing. Bash history has a lot of nice shortcuts for this kind of thing. As an example, imagine:
> history | grep "ssh root"
756 ssh root#107.170.70.100
You can re-run the command from history line 756 simply by typing
!756
It's definitely much safer. Hope this helps.
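If you want to double-check such a line before it runs, the :p history modifier prints the expanded command without executing it (and you can then run it with !!), for example:

$ !756:p
ssh root#107.170.70.100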
Ideally you'd be using the $(cmd) syntax rather than the `cmd` syntax. This makes it easier to nest subshells as well as keep track of what's going on.
That aside, if you remove the echo statement it will run the command:
# Prints out ls
echo $( echo ls )
# Runs the ls command
$( echo ls )
Use eval.
$ eval `history | grep "ssh root" | head -1| cut -c6-`
From eval command in Bash and its typical uses:
eval takes a string as its argument, and evaluates it as if you'd typed that string on a command line.
And from the Bash Manual (https://www.gnu.org/software/bash/manual/html_node/Bourne-Shell-Builtins.html#Bourne-Shell-Builtins):
eval
eval [arguments]
The arguments are concatenated together into a single command, which is then read and executed, and its exit status returned as the exit status of eval. If there are no arguments or only empty arguments, the return status is zero.
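A minimal illustration of that behaviour (the command string here is just an example):

cmd='ls -l /tmp'
eval "$cmd"    # runs ls -l /tmp and returns its exit status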
I have a script that we use on Ubuntu (Linux) and I'd like to convert it to be used on both Ubuntu (Linux) and MacOS X. grep on Linux is different from grep on FreeBSD (i.e. MacOS X); grep on MacOS X doesn't support the -P option. Unfortunately, using the -E option on both platforms doesn't give the same results. Consider the following code, which works on Linux:
wip_scenarios=$(grep -oP "^\d+ scenarios?" log/report.log | grep -oP "\d+")
echo "\n"
echo $wip_scenarios
This returns a 0 on Linux. Replacing all the -P with -E makes this work on MacOS X, but on Linux, this just returns a null which doesn't help the rest of my script when I use conditionals like this:
if [ $wip_scenarios != 0 ];then
One solution is to put a flag at the front and use the appropriate option set depending on the platform, but I was hoping for a cross-platform solution. Is there a way to do this?
For the regex you gave here, this is simple: Change \d to [[:digit:]].
Thus:
wip_scenarios=$(grep -Eo '^[[:digit:]]+ scenarios[?]' <report.log | grep -Eo '[[:digit:]]+')
If your script starts with #!/bin/bash (and thus will only ever be run with bash), I'd also consider skipping the dependency on the non-standard extension grep -o, and instead depending on bash itself to separate out the numbers you care about:
# This works with any POSIX-compliant grep, but requires that the shell be bash
wip_scenarios_re='([[:digit:]]+) scenarios[?]'
wip_scenarios_line=$(grep -E '^[[:digit:]]+ scenarios[?]' <report.log)
[[ $wip_scenarios_line =~ $wip_scenarios_re ]] && {
    wip_scenarios=${BASH_REMATCH[1]}
}
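Either way, it may help to quote the expansion and give it a default in the later test, so an empty result doesn't break the conditional; a sketch based on the question's if:

# ${wip_scenarios:-0} falls back to 0 when grep matched nothing
if [ "${wip_scenarios:-0}" != 0 ]; then
    echo "found $wip_scenarios work-in-progress scenario(s)"
fi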
Is it possible to grep the result of a command spawned by xargs?
As an example I am trying the following command
findbranch prj-xyz -latest|sed 's/^\(.*\/.*\)##.*$/\1/'|xargs -I {} cleartool lsh {}|grep -m 1 'user'
but it seems like grep is executing on the entire result set returned by findbranch, rather than on the individual results of lsh.
As an example of what I want from the above: for every file returned by the findbranch and sed combination, I would like to find the version that was last modified by a certain user.
Note: in case it is of any concern, findbranch is an internal utility.
How about this approach?
.... | xargs -I {} bash -c "cleartool lsh {}|grep -m 1 'user'"
I guess this answer is self-explanatory for you...
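If the paths coming out of sed can contain spaces or shell metacharacters, substituting {} straight into the bash -c string is fragile; a slightly safer variant (same idea, just passing the path as a positional parameter) would be:

.... | xargs -I {} bash -c 'cleartool lsh "$1" | grep -m 1 user' _ {}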
Why not use a two-phase command? Something like
findbranch prj-xyz -latest|sed 's/^\(.*\/.*\)##.*$/\1/' > /tmp/x ; for x in `cat /tmp/x`; do echo $x; done
Once you see that $x is the input you need for xargs, you can manipulate it further.
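For instance (a sketch, assuming one path per line in /tmp/x), reading the file line by line avoids the word-splitting of the for loop and lets you run the per-file grep from the question:

findbranch prj-xyz -latest | sed 's/^\(.*\/.*\)##.*$/\1/' > /tmp/x
while read -r x; do
    cleartool lsh "$x" | grep -m 1 'user'
done < /tmp/x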
If you have GNU Parallel this ought to work:
findbranch prj-xyz -latest|sed 's/^\(.*\/.*\)##.*$/\1/'|parallel cleartool lsh {}'|'grep -m 1 'user'
It will still spawn multiple shells, but at least you can use more CPUs to process it.