"watch" and tell how long it has been watching - linux

I need to repeatedly execute a command (e.g. ls) on a Linux OS, and I would also like to know how long it has been running since I started watching it.
I know I can use watch to execute the command. My question is: how can I show when the watch was started, or how long it has been watching?

It is not supported out of the box, but you can hack around it by supplying a more complex command.
watch bash -c '"echo $(($(date +%s) - '$(date +%s)'))s; echo ---; ls"'
Instead of directly watching the command you want (ls in this toy example), watch a bash instance, because bash can parse a command line and do more. We tell this bash to execute three commands. The first calculates the difference between the seconds since epoch at watch invocation and the seconds since epoch at bash invocation. The second prints a separator line for prettiness, and the third executes the desired command.
Showing when the watch started follows the same idea, but is simpler:
watch bash -c "'echo $(date); echo ---; ls'"
Note that the order of the quotes is not arbitrary. The last command could also have been written similarly to the first one:
watch bash -c '"echo '$(date)'; echo ---; ls"'
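The same elapsed-time idea can be written a little more readably by capturing the start time in a variable first. A minimal sketch, assuming GNU date with %s support; the variable names are illustrative:

```shell
# Capture the start time once, before watch is invoked.
START=$(date +%s)
# Each refresh spawns a fresh shell that recomputes "now" and subtracts the
# frozen START value, which was expanded once, here, at definition time.
CMD='echo "watching for $(( $(date +%s) - '"$START"' ))s"; echo ---; ls'
sh -c "$CMD"          # one iteration shown; run watch -n 1 "$CMD" interactively
```

Passing the whole string as a single argument to watch matters: watch joins its arguments with spaces and hands the result to sh -c, so quoting the string keeps the semicolons inside one command line.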


Run "dummy" background command with specific text

I'm looking for a bash command I can run in the background that will sleep for a bit (60 seconds), and the command will contain a specific text string I can grep out of a ps command.
I can't release a "dummy" script, I'm afraid, so it needs to be a one-line command.
I tried
echo "textneeded">/dev/null && sleep 60 &
But of course the only text I can grep for is the sleep, as the echo is over in a flash.
(The reasoning for this: it's for putting another script in "test" mode so it doesn't create child processes, while other functionality that checks whether any of these processes are running will still find something and therefore wait. That other functionality isn't in a bash script.)
I had to do this to test a process-killing script. You can use perl to set the process name.
perl -e '$0="textneeded"; sleep 60' &
Original props go to this guy
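If you'd rather avoid perl, bash's non-POSIX `exec -a` builtin can do the same thing, since it replaces argv[0] of the command it executes. A sketch, assuming bash and a Linux /proc filesystem:

```shell
# Run sleep under a fake name: exec -a replaces argv[0] of the new process.
bash -c 'exec -a textneeded sleep 60' &
# ps -ef | grep textneeded (or grep of /proc/<pid>/cmdline) now finds it.
```

One caveat: the kernel's short process name (the `comm` field, what `pgrep` matches by default) still says "sleep", because it comes from the executable's filename; only the argument vector changes, so grep the full command line (`pgrep -f textneeded`).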

What does it mean if a process runs like this?

Hi, I have a question about a process. This is what I get when I run the line:
ls -l /proc/3502/exe
If I run this line:
echo "$(xargs -0 < /proc/${pids[0]}/cmdline)"
I get an output like: /bin/bash ./sleeper.sh 10
Does that mean the process that was actually run is /bin/bash and everything after that was only passed to it as arguments (process: /bin/bash, args it got: ./sleeper.sh 10)? Because I know the last part is the name of a script and an argument passed to it.
Does that mean the process that was actually run is /bin/bash and everything after that was only passed to it as arguments (process: /bin/bash, args it got: ./sleeper.sh 10)?
That's exactly the case. When you run a script like this: ./script, the program loader parses the script, looking for a shebang that tells it how to run the script. Shebangs are needed to differentiate, say, a Python script from a Bash script or a Perl script. Such scripts are actually executed by their respective interpreters, usually /bin/python, /bin/bash, /bin/perl, and that's why you see it listed as /bin/bash ./sleeper.sh 10 rather than ./sleeper.sh 10.
For example, say ./script looks like this:
#!/bin/sh
echo yo
Running it with ./script will cause the system to spawn /bin/sh ./script. (Anecdote: because of how this works, some shebangs also include command-line parameters of their own, such as #!/usr/bin/perl -T, which causes the system to spawn your script as /usr/bin/perl -T ./script.)
As for why the script sees ./sleeper.sh 10 rather than /bin/bash ./sleeper.sh 10 - it's just how Bash (and other interpreters) work. The shebang expansion is visible to everyone, including Bash; hence ps ux shows /bin/bash ./sleeper.sh 10. However, it would make little sense for your script to know the exact combination of Bash path and Bash flags it was invoked with, so Bash strips these away and passes on only the relevant command line. Bash keeps that command line consistent with the usual conventions: the first argument is usually the path to your script (with caveats), and the rest are the arguments passed to your script.
Seeing all of this in action
./test:
#!/bin/bash -i
echo $BASH_ARGV
Running it as ./test prints nothing. The process is spawned as /bin/bash -i ./test.
Running it as ./test x y prints x y. The process is spawned as /bin/bash -i ./test x y.
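You can reproduce the question's observation end to end. A sketch using a throwaway script (the /tmp path and 30-second sleep are arbitrary):

```shell
# A script whose shebang points at bash.
cat > /tmp/sleeper.sh <<'EOF'
#!/bin/bash
sleep 30
EOF
chmod +x /tmp/sleeper.sh

/tmp/sleeper.sh &
pid=$!
sleep 0.3                               # let the exec happen
# The NUL-separated argv as the kernel stored it: /bin/bash /tmp/sleeper.sh
tr '\0' ' ' < /proc/$pid/cmdline; echo
kill "$pid"
```

The cmdline read-out shows the interpreter first, exactly as ps does, because the kernel prepended it while performing the shebang expansion.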
Suggestion
It's widely recommended to omit the .sh, .py, .pl, etc. extensions from executable files.

bash does not execute the script with 'at -f foo.sh', even with a #!/bin/bash shebang

I'm using the 'at' command in order to create 3 directories, with just a simple bash script:
#!/bin/bash
for i in {1..3}
do
mkdir dir$i
done
Everything is ok if I execute that script directly on terminal, but when I use 'at' command as follows:
at -f g.sh 18:06
It only creates one directory named dir{1..3}, treating {1..3} not as a range but as a single-element list. Based on this, I think my mistake is relying on bash, because at executes commands using /bin/sh, but I'm not sure.
Please tell me if I'm right, and I would appreciate an alternative to my code; even though it is useless, I'm curious to know what's wrong between at and bash.
The #! line only affects what happens when you run a script as a program (e.g. using it as a command in the shell). When you use at, it's not being run as a program, it's simply used as the standard input to /bin/sh, so the shebang has no effect.
You could do:
echo './g.sh' | at 18:06
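The diagnosis is easy to check directly, since brace expansion is a bash feature that POSIX sh need not implement. A sketch, assuming /bin/sh is dash (as on Debian/Ubuntu; if sh is a bash symlink, bash still performs brace expansion even in POSIX mode):

```shell
bash -c 'echo dir{1..3}'   # dir1 dir2 dir3
sh -c 'echo dir{1..3}'     # dir{1..3} when sh is dash: the braces stay literal
```

That literal dir{1..3} is exactly the single directory name the at job created.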

How to use a Linux bash function to "trigger two processes in parallel"

Please kindly consider the following sample code snippet:
function cogstart
{
nohup /home/michael/..../cogconfig.sh
nohup /home/michael/..../freshness_watch.sh
watch -n 15 -d 'tail -n 1 /home/michael/nohup.out'
}
Basically, freshness_watch.sh and the final watch command are supposed to be executed in parallel, i.e., the watch command shouldn't have to wait for its predecessor to finish. I have been trying to work out a way, perhaps using xterm, but since freshness_watch.sh is a script that lasts 15 minutes at most (due to my clumsy way of writing a file-monitoring script in Linux), I definitely want to trigger the final watch command while that script is still executing...
Any thoughts? Maybe in two separate/independent terminals?
Many thanks in advance for any hint/help.
As schemathings indirectly indicates, you probably want to append the '&' character (without the single quotes) to the end of the line with freshness_watch.sh. I don't see any reason to use '&' on your final watch command, unless you add more commands after it.
'&' at the end of a Unix command line means 'run in the background'.
You might want to insert a sleep ${someNumOfSecs} after the call to freshness_watch.sh, to give it some time to have the CPU to itself.
Seeing as you mention xterm: do you know about the crontab facility, which lets you schedule a job to run whenever you want, without the user having to log in? (Maybe this will help with your issue.) I like setting jobs to run in crontab, because then you can capture any trace information you care to, plus anything written to stderr, in a log/trace file.
( nohup wc -l * || nohup ls -l * ) &
( nohup wc -l * ; nohup ls -l * ) &
I'm not clear on what you're attempting to do - the question seems self contradictory.
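The backgrounding advice above can be sketched without the original scripts; here a hypothetical long_task function stands in for freshness_watch.sh:

```shell
long_task() { sleep 1; echo "task done"; }  # stand-in for the long-running script

long_task &                  # '&' backgrounds it; the shell moves on immediately
bgpid=$!
echo "continuing immediately"   # this runs while long_task is still sleeping
wait "$bgpid"                   # optional: block later until the job finishes
```

In the real function, backgrounding the freshness_watch.sh line this way lets the final watch start right away.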

at command strange behaviour

This is the first time I am playing with the at command in Linux, and I noticed something strange. Say I create this test file:
#!/bin/bash
count=1
echo "count is $count"
then I issue
at -f /full/path/to/myscript.sh -v 13:00 -m
and wait for it to run. Then in my mail, the value of the count variable is empty. What could be wrong?
To: root@localhost.localdomain
Status: R
count is
Are you sure your commands are being run by bash, and not some other interpreter like csh? I don't think the shebang line has any effect in an at job -- the commands are simply piped into whichever shell is specified via the SHELL environment variable.
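You can see why the shebang is irrelevant here by imitating what at does: feed the file to a shell on standard input. A sketch (the /tmp path is arbitrary):

```shell
cat > /tmp/job.sh <<'EOF'
#!/bin/bash
count=1
echo "count is $count"
EOF

# What `at` effectively runs: $SHELL < job.sh.  Read on stdin, the shebang
# is just a comment; only direct execution (./job.sh) makes the kernel use it.
sh < /tmp/job.sh        # prints the count: sh understands count=1
# csh < /tmp/job.sh     # csh would choke on count=1 and print nothing useful
```

So if SHELL points at csh when the job is queued, the Bourne-style assignment fails and the variable comes out empty, which matches the mail you received.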
