How can I combine two (or more) calls by the watch command?
Such that they are run together (serially) and watch shows their combined output?
I.e. watch command1 command2
So to show the contents of two different directories:
watch $(ls dir1) $(ls dir2)
(The subshell parens were just added for clarity.)
Of course I could create a script that runs both commands and writes the results to a temp file, then cat that file periodically through watch, but I'd rather avoid that if this is natively possible somehow. :)
Subshells, grouping, and process substitution did not help me, so I am out of luck and have no idea where to look next.
Is this possible at all?
UPDATE:
watch cat <(ls dir1) <(ls dir2)
shows on the first iteration exactly what I'd love to see refreshed periodically, but it is not repeated on later iterations. :(
watch runs the passed command in a shell by default, so you can pass it anything that is valid shell:
watch 'ls dir1; ls dir2'
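If you want the two listings visually separated, or a different refresh interval, the same quoting approach applies; a small sketch (assuming GNU watch, where -n sets the interval in seconds, and the separator lines are just illustrative):
watch -n 2 'echo "== dir1 =="; ls dir1; echo; echo "== dir2 =="; ls dir2'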
I'm using rsync to transfer files from one server to another (both owned by me). My only problem is that these files are over 50 GB each and I have a ton of them to transfer (over 200).
Now I could just open multiple tabs and run rsync or add the "&" at the end of the script to execute it in the background.
So my question is: how can I execute this command in the background so that, when it's done transferring, a message is shown in the terminal window that executed the script?
(rsync -av --progress [FOLDER_NAME] [DESTINATION]:[PATH] &) && echo 'Finished'
I know that's completely wrong, but I want to use & to run it in the background and && to run echo after rsync has finished.
Besides the screen-based solution, you could also use the xargs tool.
echo '/srcpath1 host1 /dstpath1
/srcpath2 host2 /dstpath2
/srcpath3 host3 /dstpath3' | \
xargs -P 5 --max-lines 1 bash -c 'rsync -av --progress "$1" "$2":"$3"' _
xargs reads its input from stdin and executes a command for each word or each line. In this case, one command per line.
What makes it very good: it can run its child processes in parallel! In this configuration, xargs uses up to 5 parallel child processes. This number can be 1 or practically unlimited.
xargs exits once all of its children are done, and it handles Ctrl-C, child management, and error cases gracefully.
Instead of the echo, the input for xargs can come from a file, from a previous command in the pipe, or from a for or while loop.
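For example, if the source/host/destination triples were kept in a file (a hypothetical transfers.txt, one triple per line), the same invocation could read it directly:
xargs -P 5 --max-lines 1 bash -c 'rsync -av --progress "$1" "$2":"$3"' _ < transfers.txt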
You could use GNU screen for that; screen can monitor a window for silence or for activity. An additional benefit is that you can close the terminal and reattach to the screen session later. It is even better if you run screen on the server itself: then you can shut down or reboot your own machine and the processes inside screen will keep running.
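A minimal sketch (the session name rsyncjob is arbitrary): start the transfer in a detached session, then reattach later to check on it.
screen -dmS rsyncjob rsync -av --progress [FOLDER_NAME] [DESTINATION]:[PATH]
screen -r rsyncjob    # reattach; detach again with Ctrl-a d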
Well, to answer your specific question, your invocation:
(rsync ... &) && echo 'Finished'
creates a subshell - the ( ... ) bit - in which rsync is run in the background, which means the subshell will exit as soon as it has started rsync, not after rsync finishes. The && echo ... part then notices that the subshell has exited successfully and does its thing, which is not what you want, because rsync is most likely still running.
To accomplish what you want, you need to do this:
(rsync ... && echo 'Finished') &
That will put the subshell itself in the background, and the subshell will run rsync and then echo. If you need to wait for that subshell to finish at some point later in your script, simply insert a wait at the appropriate point.
You could also structure it this way:
rsync ... &
# other stuff to do while rsync runs
wait
echo 'Finished'
Which is "better" is really just a matter of preference. There's one minor difference in that the && will run echo only if rsync doesn't report an error exit code - but replacing && with ; would make the two patterns more equivalent. The second method makes the echo synchronous with other output from your script, so it doesn't show up in the middle of other output, so it might be slightly preferable from that respect, but capturing the exit condition of rsync would be more complicated if it was necessary...
Is there any tool/command in Linux that I can use to run a command in more than one tab simultaneously? I want to run the same command, ./myprog argument1 argument2, simultaneously in more than one shell (I'll increase the count later to put my code under stress) to check whether the mutexes in a threaded program are working fine.
I am kind of looking for something like what wall does. I can think of using ttys, but that just seems like a lot of pain if I have to scale this to many more shells.
Why not do something like
for i in {1..100}
do
    ./myprog argument1 argument2 &
done
This assumes the shell is bash; other shells have their own looping constructs you can use instead.
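If you want the shell to block until every copy has exited, add a wait after the loop (a small sketch; the message is only illustrative):
wait    # returns once every background ./myprog has finished
echo 'all copies finished'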
Running the jobs asynchronously with redirected output is the simplest solution, but if you really want each process to run in its own terminal, a good option is to use a terminal multiplexer like screen or tmux. For example:
yes | sed 5q | while read k; do    # iterate 5 times
    tmux new-session -d 'sh -c "/path/to/myprog arg1 arg2; sh"' &
done
The trailing sh causes the session to remain alive after myprog terminates, and the initial yes pipeline is used instead of seq since it works where seq is not available (there are lots of ways to iterate!).
Once the sessions are running, you can attach to them individually to view the output. (See tmux documentation for details.)
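For instance (the session names or numbers depend on what tmux assigned):
tmux ls            # list the sessions started above
tmux attach -t 0   # attach to one of them; detach again with Ctrl-b d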
If you are using GNOME Terminal, try this one:
gnome-terminal --maximize --tab-with-profile=default -e '/bin/sh -c "./myprog argument1 argument2"' --tab-with-profile=default -e '/bin/sh -c "./myprog argument1 argument2"' --tab-with-profile=default -e '/bin/sh -c "./myprog argument1 argument2"'
Check the gnome-terminal man page for more options.
Use GNU Parallel's --tmux --fg:
parallel -j20 --tmux --fg echo doit {}\; sleep {} ::: {1..100}
Use ctrl-b n and ctrl-b p to cycle between the running terminals.
That way you can keep everything in a single window, but without the output from the jobs mixing.
Install the moreutils package (with non-GNU parallel) or, preferably, GNU parallel (often packaged separately; SUSE calls it gnu_parallel, Ubuntu and RedHat call it parallel) so you can use parallel. Then figure out how to specify the terminal your command runs in. For example:
parallel yourscript -- tty1 tty2 pts6
Then you just have to figure out how to write "yourscript" to take the tty to run on as an argument, which is a separate question. :)
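A rough sketch of such a wrapper, assuming each argument names a terminal device under /dev (for a pty you would pass something like pts/6) and that those devices are writable by your user:
#!/bin/sh
# hypothetical yourscript: run myprog with all output sent to the terminal named in $1
tty_dev="/dev/$1"
./myprog argument1 argument2 > "$tty_dev" 2>&1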
I had the same situation when I was in college, and I did something like this:
for i in $(seq 1 10)
do
    xterm -e "./myprog argument1 argument2" &
done
This spawns 10 xterms each running your program in the background.
From comments it seems you don't need stdin, do need stdout, don't want ptys, don't want screen or tmux.
Easy-peasy:
for seq in `seq 5`; do
    ./myprog argument1 argument2 >output$seq &
done
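To follow the output of all copies while they run, something like this works (tail -f keeps reading the named files as they grow):
tail -f output*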
Maybe you could (ab)use clusterssh for that?
Open a bunch of terminals, get everything set up, then hit ENTER really quickly in each terminal.
On one of my linux servers I have a script that performs some controls.
Is there a way of finding out where this script is called from? That could be
another script, a COBOL program, a crontab entry, ...
Opening every one of them will take a very long time.
If you can modify the script, put in a ps line to get the parent PID, then run ps again and grep for that parent PID to get the command, and log the result to a file.
Come back in a week or so and you should have the command that is triggering your script. In case it's something nested, you may want to recurse or similar.
To do this without modifying the script, you'll need a watcher script/program that checks for access to the script file or calls ps every so often. However, if you have that kind of access, just modifying the script is probably easier.
Edit: Apparently the commands to get the parent pid and command for it, without repeatedly calling ps, look something like:
ps -p $$ -o ppid=
cat /proc/<pid>/cmdline
(from jweyrich's answer here)
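Putting those two together, a small logging snippet to drop near the top of the script might look like this (a sketch, assuming bash and a Linux /proc filesystem; the log path is arbitrary):
# log who called this script
ppid=$(ps -o ppid= -p $$)
ppid=${ppid// /}                          # strip padding spaces from ps output
{
    printf '%s called by PID %s: ' "$(date)" "$ppid"
    tr '\0' ' ' < "/proc/$ppid/cmdline"   # cmdline fields are NUL-separated
    echo
} >> /tmp/who-calls-me.log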
Grep for it:
grep -lr yourscript /etc /opt/anotherlikelydir
Failing that, search the whole system: grep -lr yourscript /
Edit:
Failing that, search in binaries too: grep -lar yourscript /
Failing that, the script is either executed by a logged-in user or by a scripted remote login... if that's the case, try peachykeen's approach and edit the script... and why not dump a ps axf to a log too.
I'm looking for a way to limit the amount of output produced by all command line programs in Linux, and preferably tell me when it is limited.
I'm working over a server which has a lag on the display. Occasionally I will accidentally run a command which outputs a large amount of text to the terminal, such as cat on a large file or ls on a directory with many files. I then have to wait a while for all the output to be printed to the terminal.
So is there a way to automatically pipe all output into a command like head or wc to prevent too much output having to be printed to terminal?
I don't know about the general case, but for each well-known command (cat, ls, find?)
you could do the following:
hardlink a copy to the existing utility
write a tiny bash function that calls the utility and pipes to head (or wc, or whatever)
alias the name of the utility to call your function.
So along these lines (utterly untested):
$ ln `which cat` ~/bin/old_cat
function trunc_cat () {
    old_cat "$@" | head -n 100
}
alias cat=trunc_cat
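With that in your ~/.bashrc (aliases only apply to interactive shells), you can still get the untruncated output on demand by bypassing the alias:
\cat /var/log/messages          # leading backslash skips the alias
command cat /var/log/messages   # same effect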
Making aliases (or small shell functions) for your common commands would be a good start. Something like
alias lm="ls -al | more"
cam() { cat "$@" | more; }    # a function, since aliases can't take arguments
Perhaps using screen could help?
This makes me think of bash-completion.
Since bash lets you specify a handler that runs when a program is not found,
what about writing your own handler and clearing $PATH, so that every command gets executed with its output redirected to a filtering pipe?
(I did not try it myself.)
Assuming you're working over a network connection such as ssh into a remote server, try piping the output of the command to less. That way you can manage and navigate the program's output on the server side. Use j and k to move up and down one line and Ctrl-U and Ctrl-D to move half a page up and down. When you do this, only the relevant text (i.e. what fits on the screen) is transmitted over the network.
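For example (the paths are just illustrative):
ls -al /some/big/dir | less
less /var/log/messages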
Can I execute command within another command in UNIX shells?
If not, can I use the output of one command as the input of the next, as in:
command x, then command y,
where command y uses the output of command x?
You can use the backquotes for this.
For example, this will cat the file file.txt:
cat `echo file.txt`
And this will print the date:
echo the date is `date`
The code between backquotes is executed and replaced by its output.
You can do something like:
x=$(grep $(dirname "$path") file)
Here dirname "$path" runs first, its result is substituted in, and then grep runs, searching the file for the result of dirname.
What exactly are you trying to do? It's not clear from the commands you are executing. Perhaps if you describe what you're looking for we can point you in the right direction. If you want to execute a command over a range of file (or directory) names returned by the find command, Colin is correct: you need to look at the -exec option of find. If you're looking to execute a command over a bunch of arguments listed in a file or coming from stdin, you need to check out the xargs command. If you want to put the output of a single command onto the command line of another command, then $(command) (or `command` with backquotes) will do the job. There are a lot of ways to do this, but without knowing what you're trying to do it's hard to be more helpful.
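To make the find/-exec case concrete, a quick sketch (reusing the srvm.jar pattern from the example below):
find . -name "srvm.jar" -exec ls -ltr {} +    # one ls invocation for many matches
find . -name "srvm.jar" -exec ls -ltr {} \;   # one ls invocation per match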
Here is an example where I have used nested commands: I ran ls -ltr on top of a find command, so ls runs on the output of find.
ls -ltr $(find . -name "srvm.jar")