I'm looking for a bash command I can run in the background that will sleep for a bit (60 seconds), and whose command line contains a specific text string I can grep out of ps output.
I can't release a "dummy" script, I'm afraid, so it needs to be a one-line command.
I tried
echo "textneeded">/dev/null && sleep 60 &
But of course the only text I can grep for is the sleep, as the echo is over in a flash.
(The reasoning: this puts another script into "test" mode so it doesn't create its real child processes, but the other functionality that checks none of these processes are running will still find something and therefore wait. That other functionality isn't in a bash script.)
I had to do this to test a process-killing script. You can use Perl to set the process name.
perl -e '$0="textneeded"; sleep 60' &
Original props go to this guy.
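For example, a minimal sketch of how this might be used and verified (the string textneeded comes from the question; the ps/grep invocation is just one way to check):
# Start the decoy in the background; $0 becomes the process name shown by ps.
perl -e '$0="textneeded"; sleep 60' &
# From another terminal (or the non-bash tooling), this should now find it.
# The [t] trick keeps grep from matching its own command line.
ps -ef | grep '[t]extneeded'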
I have a shell script which configures several things and then calls two other shell scripts. I want these two scripts to run in parallel, and I want to be able to see their live output.
Here is my first script which calls the other two
#!/bin/bash
#CONFIGURE SOME STUFF
$path/instance2_commands.sh
$path/instance1_commands.sh
These two processes deploy two different applications, and each takes around 5 minutes, so I want to run them in parallel and also see their live output so I know where they are in their deployment tasks. Is this possible?
Running both scripts in parallel can look like this:
#!/bin/bash
#CONFIGURE SOME STUFF
$path/instance2_commands.sh >instance2.out 2>&1 &
$path/instance1_commands.sh >instance1.out 2>&1 &
wait
Notes:
wait pauses until the children, instance1 and instance2, finish
2>&1 on each line redirects error messages to the relevant output file
& at the end of a line causes the main script to continue running after forking, thereby producing a child that is executing that line of the script concurrently with the rest of the main script
each script should send its output to a separate file. Sending both to the same file will be visually messy and impossible to sort out when the instances generate similar output messages.
you may attempt to read the output files while the scripts are running with any reader, e.g. less instance1.out; however, the output may be stuck in a buffer and not up to date. To fix that, the programs would have to open stdout in line-buffered or unbuffered mode. It is also up to you to refresh the display, e.g. with less +F or tail -f.
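A rough sketch of forcing line-buffered output with stdbuf (from coreutils), assuming the child commands use the default C stdio buffering; stdbuf does not help programs that set their own buffering:
stdbuf -oL $path/instance2_commands.sh >instance2.out 2>&1 &
stdbuf -oL $path/instance1_commands.sh >instance1.out 2>&1 &
wait
Then, from another terminal, tail -f instance1.out (or less +F instance1.out) should stay reasonably current.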
Example D from an article on Apache Spark and parallel processing on my blog provides a similar shell script for calculating sums of a series for Pi on all cores, given a C program for calculating the sum on one core. This is a bit beyond the scope of the question, but I mention it in case you'd like to see a deeper example.
It is very possible. Change your script to look like this:
#!/bin/bash
#CONFIGURE SOME STUFF
$path/instance2_commands.sh >> script.log &
$path/instance1_commands.sh >> script.log &
They will both output to the same file and you can watch that file by running:
tail -f script.log
If you wish, you can output to two different files instead. Just change each line to redirect (>>) to its own file name.
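A rough sketch of that two-file variant (log names are only examples), keeping both scripts running in parallel and following both logs at once:
$path/instance1_commands.sh >> instance1.log 2>&1 &
$path/instance2_commands.sh >> instance2.log 2>&1 &
tail -f instance1.log instance2.log   # tail prints a ==> file <== header before each file's output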
This is how I ended up writing it, using Paul's instructions.
source $path/instance2_commands.sh >instance2.out 2>&1 &
source $path/instance1_commands.sh >instance1.out 2>&1 &
tail -q -f instance1.out -f instance2.out --pid $!   # $! is the PID of the last background job; tail exits once that process ends
wait
sudo rm instance1.out
sudo rm instance2.out
The logs from my two processes were different, so I didn't care that they weren't kept apart; that is why I put them all into one combined view.
How it is now
I currently have a script running under Windows that frequently retrieves recursive file listings from a list of servers.
I use an AutoIt (job manager) script to execute 30 parallel instances of lftp (still on Windows), doing this:
lftp -e "find .; exit" <serveraddr>
The file used as input for the job manager is a plain text file and each line is formatted like this:
<serveraddr>|...
where "..." is unimportant data. I need to run multiple instances of lftp in order to achieve maximum performance, because single instance performance is determined by the response time of the server.
Each lftp.exe instance pipes its output to a file named
<serveraddr>.txt
How it needs to be
Now I need to port this whole thing over to a dedicated Linux server (Ubuntu, with lftp installed). From my previous, very(!) limited experience with Linux, I guess this will be quite simple.
What do I need to write, and with what? For example, do I still need a job manager script, or can this be done in a single script? How do I read from the file (I guess this will be the easy part), and how do I keep a maximum of 30 instances running (maybe even with a timeout, because extremely unresponsive servers can clog the queue)?
Thanks!
Parallel processing
I'd use GNU/parallel. It isn't distributed by default, but can be installed for most Linux distributions from default package repositories. It works like this:
parallel echo ::: arg1 arg2
will execute echo arg1 and echo arg2 in parallel.
So the easiest approach is to create a script that handles a single server - in bash/perl/python, whatever suits your fancy - and execute it like this:
parallel ./script ::: server1 server2
The script could look like this:
#!/bin/sh
#$0 holds program name, $1 holds first argument.
#$1 will get passed from GNU/parallel. we save it to a variable.
server="$1"
lftp -e "find .; exit" "$server" >"$server-files.txt"
lftp seems to be available for Linux as well, so you don't need to change the FTP client.
To run at most 30 instances at a time, pass -j30, like this: parallel -j30 echo ::: 1 2 3
Reading the file list
Now how do you transform the specification file containing <server>|... entries into GNU/parallel arguments? Easy - first, filter the file so it contains just host names:
sed 's/|.*$//' server-list.txt
sed is used to replace things using regular expressions, and more. This will strip the first | and everything after it (.*) up to the line end ($). (While | normally means the alternation operator in regular expressions, in sed it needs to be escaped to work like that; otherwise it means just a plain |.)
So now you have a list of servers. How do you pass them to your script? With xargs! xargs will pass each line as if it were an additional argument to your executable. For example
echo -e "1\n2"|xargs echo fixed_argument
will run
echo fixed_argument 1 2
So in your case you should do
sed 's/|.*$//' server-list.txt | xargs parallel -j30 ./script :::
Caveats
Be sure not to save the results to the same file in each parallel task, otherwise the file will get corrupted - the coreutils are simple and don't implement any locking mechanism, so you would have to implement one yourself. That's why I redirected the output to "$server-files.txt" rather than to a single files.txt.
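If you really do need one combined file, a possible sketch (the lock-file and output names here are made up) is to collect per-server output first and serialize only the final append, e.g. with flock from util-linux:
# Per-server listing, safe because each task writes its own file.
lftp -e "find .; exit" "$server" >"$server-files.txt"
# Append to the shared file under an exclusive lock so parallel tasks don't interleave.
flock combined.lock -c "cat \"$server-files.txt\" >> all-files.txt"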
Please kindly consider the following sample code snippet:
function cogstart
{
nohup /home/michael/..../cogconfig.sh
nohup /home/michael/..../freshness_watch.sh
watch -n 15 -d 'tail -n 1 /home/michael/nohup.out'
}
Basically, freshness_watch.sh and the final watch command are supposed to execute in parallel, i.e., the watch command shouldn't have to wait for its predecessor to finish. I am trying to work out a way to do this, perhaps using xterm, but since freshness_watch.sh is a script that lasts 15 minutes at most (due to my poor way of writing a file-monitoring script in Linux), I definitely want to trigger the final watch command while that script is still executing...
Any thoughts? Maybe in two separate/independent terminals?
Many thanks in advance for any hint/help.
As schemathings indicates indirectly, you probably want to append the '&' character (without the single quotes) to the end of the line with freshness_watch.sh. I don't see any reason to use '&' on your final watch command, unless you add more commands after it.
'&' at the end of a Unix command line means 'run in the background'.
You might want to insert a sleep ${someNumOfSecs} after your call to freshness_watch.sh, to give it some time to have the CPU to itself.
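A rough sketch of what that might look like (paths abbreviated as in the question; the sleep duration is arbitrary):
function cogstart
{
    nohup /home/michael/..../cogconfig.sh
    nohup /home/michael/..../freshness_watch.sh &    # backgrounded so the next command starts immediately
    sleep 5                                          # optional: give the watcher a moment to start
    watch -n 15 -d 'tail -n 1 /home/michael/nohup.out'
}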
Seeing as you mention xterm, do you know about the crontab facility, which lets you schedule a job to run whenever you want, without the user having to be logged in? (Maybe this will help with your issue.) I like setting jobs up in crontab, because then you can capture any trace information you care about AND any output to stderr in a log/trace file.
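For instance, a crontab entry along these lines (the schedule and log path are only placeholders) would run the watcher every 15 minutes and capture both stdout and stderr:
# edit with: crontab -e
*/15 * * * * /home/michael/..../freshness_watch.sh >>/home/michael/freshness_watch.log 2>&1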
( nohup wc -l * || nohup ls -l * ) &
( nohup wc -l * ; nohup ls -l * ) &
I'm not clear on what you're attempting to do - the question seems self-contradictory.
Can you edit a shell script while it's running and have the changes affect the running script?
I'm curious about the specific case of a csh script I have that batch runs a bunch of different build flavors and runs all night. If something occurs to me mid operation, I'd like to go in and add additional commands, or comment out un-executed ones.
If not possible, is there any shell or batch-mechanism that would allow me to do this?
Of course I've tried it, but it will be hours before I see if it worked or not, and I'm curious about what's happening or not happening behind the scenes.
It does have an effect, at least with bash in my environment, but in a very unpleasant way. See these examples. First, a.sh:
#!/bin/sh
echo "First echo"
read y
echo "$y"
echo "That's all."
b.sh:
#!/bin/sh
echo "First echo"
read y
echo "Inserted"
echo "$y"
# echo "That's all."
Do
$ cp a.sh run.sh
$ ./run.sh
$ # open another terminal
$ cp b.sh run.sh # while 'read' is in effect
$ # Then type "hello."
In my case, the output is always:
hello
hello
That's all.
That's all.
(Of course it's far better to automate it, but the above example is readable.)
[edit] This is unpredictable and thus dangerous. The best workaround is, as described here, to put everything inside braces and to put "exit" before the closing brace. Read the linked answer well to avoid pitfalls.
[added] The exact behavior can depend on even one extra newline, and perhaps also on your Unix flavor, filesystem, etc. If you simply want to see some of the effects, add "echo foo/bar" to b.sh before and/or after the "read" line.
Try this... create a file called bash-is-odd.sh:
#!/bin/bash
echo "echo yes i do odd things" >> bash-is-odd.sh
That demonstrates that bash is, indeed, interpreting the script "as you go". Editing a long-running script has unpredictable results, inserting random characters, etc. Why? Because bash resumes reading the file from the byte position where it left off, so editing shifts which character gets read next.
Bash is, in a word, very, very unsafe because of this "feature". svn and rsync when used with bash scripts are particularly troubling, because by default they "merge" the results... editing in place. rsync has a mode that fixes this. svn and git do not.
I present a solution. Create a file called /bin/bashx:
#!/bin/bash
source "$1"
Now use #!/bin/bashx on your scripts and always run them with bashx instead of bash. This fixes the issue - you can safely rsync your scripts.
Alternative (in-line) solution proposed/tested by #AF7:
{
# your script
exit $?
}
Curly braces protect against edits, and exit protects against appends. Of course, we'd all be much better off if bash came with an option, like -w (whole file), or something that did this.
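A minimal, runnable sketch of that in-line pattern (the body here is only an example):
#!/bin/bash
{
    echo "long-running work starts"
    sleep 60        # edit or append to this file now; the running copy is unaffected
    echo "long-running work ends"
    exit $?         # exiting inside the braces keeps any appended lines from ever running
}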
Break your script into functions, and each time a function is called you source it from a separate file. Then you could edit the files at any time and your running script will pick up the changes next time it gets sourced.
foo() {
source foo.sh
}
foo
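A rough sketch of that idea, with the file and function names invented for illustration: the loop re-sources work.sh on every pass, so edits to work.sh take effect the next time it is sourced.
#!/bin/bash
build_one() {
    source ./work.sh      # work.sh (re)defines do_build; it is re-read on every call
    do_build "$1"
}
for flavor in debug release; do
    build_one "$flavor"
done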
Good question!
Hope this simple script helps
#!/bin/sh
echo "Waiting..."
echo "echo \"Success! Edits to a .sh while it executes do affect the executing script! I added this line to myself during execution\" " >> ${0}
sleep 5
echo "When I was run, this was the last line"
It does seem, under Linux, that changes made to an executing .sh are enacted by the executing script, if you can type fast enough!
An interesting side note - if you are running a Python script, it does not change. (This is probably blatantly obvious to anyone who understands how the shell runs Python scripts, but I thought it might be a useful reminder for someone looking for this functionality.)
I created:
#!/usr/bin/env python3
import time
print('Starts')
time.sleep(10)
print('Finishes unchanged')
Then, in another shell, while this is sleeping, edit the last line. When it completes, it displays the unaltered line, presumably because Python reads and compiles the whole file before it starts executing it. The same happens on Ubuntu and macOS.
I don't have csh installed, but
#!/bin/sh
echo Waiting...
sleep 60
echo Change didn't happen
Run that, quickly edit the last line to read
echo Change happened
Output is
Waiting...
/home/dave/tmp/change.sh: 4: Syntax error: Unterminated quoted string
Hrmph.
I guess edits to the shell scripts don't take effect until they're rerun.
If this is all in a single script, then no, it will not work. However, if you set it up as a driver script calling sub-scripts, then you might be able to change a sub-script before it's called, or before it's called again if you're looping; in that case I believe those changes would be reflected in the execution.
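For example, a driver along these lines (the script names are hypothetical) reads each sub-script from disk only when its line is reached, so a sub-script edited before its turn runs with the changes:
#!/bin/sh
# driver.sh: each build flavor lives in its own sub-script
./build_flavor_a.sh
./build_flavor_b.sh
./build_flavor_c.sh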
I'm hearing no... but what about with some indirection:
BatchRunner.sh:
Command1.sh
Command2.sh

Command1.sh:
runSomething

Command2.sh:
runSomethingElse
Then you should be able to edit the contents of each command file before BatchRunner gets to it right?
OR
A cleaner version would have BatchRunner look at a single file and consecutively run it one line at a time. Then you should be able to edit this second file while the first is running, right?
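A hedged sketch of that cleaner version (file names invented): BatchRunner.sh reads commands.txt one line at a time, so lines further down can still be edited while earlier ones are running.
#!/bin/bash
# BatchRunner.sh: run each line of commands.txt as a command, in order
while IFS= read -r cmd; do
    [ -z "$cmd" ] && continue        # skip blank lines
    eval "$cmd" </dev/null           # /dev/null keeps commands from consuming the list itself
done < commands.txt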
Use Zsh instead for your scripting.
AFAICT, Zsh does not exhibit this frustrating behavior.
Usually, it is uncommon to edit your script while it's running. All you have to do is put in control checks for your operations. Use if/else statements to check for conditions. If something fails, then do this, else do that. That's the way to go.
Scripts don't work that way; the executing copy is independent from the source file that you are editing. Next time the script is run, it will be based on the most recently saved version of the source file.
It might be wise to break this script out into multiple files and run them individually. This will reduce the execution time to failure (i.e., split the batch into one script per build flavor, running each one individually to see which one is causing the trouble).