CasperJS running unix shell commands - node.js

I am trying to find a solution to run unix shell commands in CasperJS in synchronous mode.
I have seen sync-exec for node.js (http://davidwalsh.name/sync-exec), but I could never make it work with Casper.
I intend to run some unix utilities through casperjs:
sed -e "1,1000d" file1 > file2 -> To copy the first 1000 lines from file1 to file2
wc -l filename -> To calculate the lines
Maybe someone has experience with this.

I have resolved the issue in the following way, just in case someone requires it:
I run the unix commands as in this example:
https://github.com/ariya/phantomjs/blob/master/examples/child_process-examples.js
As far as synchronization is concerned, I have wrapped the command execution under:
casper.then(function() {
});
and achieved sequential (synchronous) execution this way.

Related

linux use watch command with multiple calls

How can I combine two (or more) calls by the watch command?
Such that they are run together (serially) and watch shows their combined output?
I.e. watch command1 command2
So to show the contents of two different directories:
watch $(ls dir1) $(ls dir2)
(The subshell parens were just added for clarity.)
Of course I could create a script to run both commands, pipe the results into a tempfile and cat it for its contents periodically through watch, but I'd refrain from it if this is natively possible somehow. :)
Subshells, grouping, process substitution did not help me, so I am out of luck and have no idea where to look now.
Is this possible at all?
UPDATE:
watch cat <(ls dir1) <(ls dir2)
gives me on the first iteration what I'd love to see refreshed periodically, but not repeatedly. :(
watch runs the passed command through the shell by default, so you can pass it any command line that is valid for the shell:
watch 'ls dir1; ls dir2'
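If you specifically want the process-substitution form from the update to refresh, one option is to hand the command line to bash yourself. A sketch, assuming bash is installed and your watch supports the -x/--exec flag:
watch -x bash -c 'cat <(ls dir1) <(ls dir2)'
With -x, watch exec()s bash directly instead of going through sh -c, so the process substitutions are re-evaluated by bash on every refresh.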

How do I run the same linux command in more than one tab/shell simultaneously?

Is there any tool/command in Linux that I can use to run a command in more than one tab simultaneously? I want to run the same command, ./myprog argument1 argument2, simultaneously in more than one shell (I want to scale this up later to put my code under stress) to check if the mutexes are working fine in a threaded program.
I am kind of looking for something like what wall does. I could think of using ttys, but that just seems like a lot of pain if I have to scale this to many more shells.
Why not do something like
for i in {1..100}
do
    ./myprog argument1 argument2 &
done
This assumes the shell is bash; look into other looping constructs for other shells.
Running the jobs asynchronously with redirected output is the simplest solution, but if you really want each process to run in its own terminal, a good option is to use a terminal emulator like screen or tmux. For example:
yes | sed 5q | while read k; do    # iterate 5 times
    tmux new-session -d 'sh -c "/path/to/myprog arg1 arg2; sh"' &
done
The trailing sh causes the session to remain alive after myprog terminates, and the initial yes pipeline is used instead of seq since it works where seq is not available (there are lots of ways to iterate!).
Once the sessions are running, you can attach to them individually to view the output. (See tmux documentation for details.)
If you are using GNOME Terminal, try this one:
gnome-terminal --maximize --tab-with-profile=default -e '/bin/sh -c "./myprog argument1 argument2"' --tab-with-profile=default -e '/bin/sh -c "./myprog argument1 argument2"' --tab-with-profile=default -e '/bin/sh -c "./myprog argument1 argument2"'
Please check the gnome-terminal man page for more options.
Use GNU Parallel's --tmux --fg:
parallel -j20 --tmux --fg echo doit {}\; sleep {} ::: {1..100}
Use ctrl-b n and ctrl-b p to cycle between the running terminals.
That way you can keep everything in a single window, but without the output from the jobs mixing.
Install the moreutils package (which provides a non-GNU parallel) or, preferably, GNU parallel (often packaged separately; SUSE calls it gnu_parallel, Ubuntu and Red Hat call it parallel) so you can use parallel. Then figure out how to specify the terminal your command runs in. For example
parallel yourscript -- tty1 tty2 pts6
Then you just have to figure out how to write "yourscript" to take the tty to run on as an argument, which is a separate question. :)
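A minimal sketch of such a wrapper, assuming you own the target terminals and pass them as device names like tty2 or pts/6 (the wrapper itself is hypothetical, not part of the original answer):
#!/bin/sh
# yourscript: run myprog with all of its output sent to the terminal named in $1
tty=$1
exec >"/dev/$tty" 2>&1    # redirect stdout and stderr to that terminal
./myprog argument1 argument2
moreutils parallel then calls the wrapper once per terminal name, as in the example above.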
I had the same situation when I was at college and I did something like this
for i in $(seq 1 10)
do
    xterm -e "./myprog argument1 argument2" &
done
This spawns 10 xterms in the background, each running your program.
From comments it seems you don't need stdin, do need stdout, don't want ptys, don't want screen or tmux.
Easy-peasy:
for seq in `seq 5`; do
    ./myprog argument1 argument2 >output$seq &
done
Maybe you could (ab)use clusterssh for that?
Open a bunch of terminals, get everything set up, then hit ENTER really quickly in each terminal.

Limit output of all Linux commands

I'm looking for a way to limit the amount of output produced by all command line programs in Linux, and preferably tell me when it is limited.
I'm working over a server which has a lag on the display. Occasionally I will accidentally run a command which outputs a large amount of text to the terminal, such as cat on a large file or ls on a directory with many files. I then have to wait a while for all the output to be printed to the terminal.
So is there a way to automatically pipe all output into a command like head or wc, to prevent too much output having to be printed to the terminal?
I don't know about the general case, but for each well-known command (cat, ls, find?)
you could do the following:
hardlink a copy to the existing utility
write a tiny bash function that calls the utility and pipes to head (or wc, or whatever)
alias the name of the utility to call your function.
So along these lines (utterly untested):
$ ln `which cat` ~/bin/old_cat
function trunc_cat () {
    old_cat "$@" | head -n 100
}
alias cat=trunc_cat
Making aliases of all your commands would be a good start. Something like
alias lm="ls -al | more"
cam() { cat "$@" | more; }    # a function rather than an alias, since the file names have to go before the pipe
Perhaps using screen could help?
This makes me think of bash's command_not_found_handle.
Since bash lets you define a handler function that is called when a program is not found,
what about writing your own handler and clearing $PATH, so that every command is executed with its output redirected through a filtering pipe?
(I did not try it myself.)
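An untested sketch of that idea, assuming bash 4+ (which provides the command_not_found_handle hook) and that the real binaries live under /usr/bin and /bin:
command_not_found_handle() {
    local PATH=/usr/bin:/bin        # restore a sane PATH inside the handler
    local cmd=$1; shift
    if command -v "$cmd" >/dev/null 2>&1; then
        "$cmd" "$@" | head -n 100   # run the real command, but truncate its output
    else
        echo "bash: $cmd: command not found" >&2
        return 127
    fi
}
PATH=""    # every lookup now falls through to the handler; try this in a throwaway shell
hash -r    # forget any previously hashed command paths
Anything invoked with an explicit path still bypasses the handler and runs unfiltered, so treat this purely as an experiment.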
Assuming you're working over a network connection, like ssh, into a remote server, try piping the output of the command to less. That way you can manage and navigate the output from the program on the server better. Use 'j' and 'k' to move up and down a line at a time, and 'ctrl-u' and 'ctrl-d' to move half a page up and down. When you do this only the relevant text (i.e. what fits on the screen) will be transmitted over the network.
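For example, instead of letting a huge recursive listing scroll past over the laggy link:
ls -lR /var/log | less    # only what fits on the screen is sent over the connection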

How to retrieve all tools used in shell script

I've got a bunch of shell scripts that use various commands and other tools.
So is there a way I can list all the programs that the shell scripts are using?
Some way to retrieve the dependencies from the source code.
This uses sed to translate pipes and $( into newlines, then awk to print the first word of a line if it might be a command, and finally pipes into which to find the potential command words in the PATH:
sed 's/|\|\$(/\n/g' FILENAME |
awk '$1~/^#/ {next} $1~/=/ {next} /^[[:space:]]*$/ {next} {print $1}' |
sort -u |
xargs which 2>/dev/null
One way you can do it is at run time: run the bash script in debug mode with the -x option and then parse its trace output. All executed commands plus their arguments are printed to standard error.
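A rough sketch of that approach, assuming the script is safe to execute once and its normal stdout can be thrown away (the script name is made up):
# the -x trace goes to stderr; keep it and discard the script's own stdout
bash -x ./myscript.sh 2>&1 >/dev/null |
  grep '^+' |               # keep only the trace lines
  sed 's/^+\{1,\} //' |     # strip the "+ " / "++ " prefixes
  awk '{print $1}' |        # first word = the command that was run
  sort -u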
While I have no general solution, you could try two approaches:
You might use strace to see which programs were executed by your script; see the sketch below.
You might run your program in a pbuilder environment and see which packages are missing.
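A rough sketch of the strace approach, assuming the script is safe to run once (the file names are made up):
# -f follows child processes, -e trace=execve logs only exec calls,
# -o writes the trace to a file instead of stderr
strace -f -o /tmp/execve.log -e trace=execve ./myscript.sh >/dev/null 2>&1
# pull the executed program paths out of the execve("...") lines
grep -o 'execve("[^"]*"' /tmp/execve.log | cut -d'"' -f2 | sort -u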
Because of the dynamic nature of the shell, you cannot do this without running the script.
For example:
TASK="cc foo.c"
time $TASK
Even in a trivial example like the one above, it would be really hard to determine that cc was called without running the script.
At runtime, you can inspect the debug output of sh -x myscript, as pointed out by thiton (+1) and ks1322 (+1). You can also use a tool like strace to catch all exec() syscalls.

How to get bash built in commands using Perl

I was wondering if there is a way to run Linux commands from a Perl script. I am talking about commands such as cd, ls, ll, clear, cp.
You can execute system commands in a variety of ways, some better than others.
Using system();, which prints the output of the command, but does not return the output to the Perl script.
Using backticks (``), which don't print anything, but return the output to the Perl script. An alternative to actual backticks is the qx() operator, which is easier to read and accomplishes the same thing.
Using exec();, which does the same thing as system();, but does not return to the Perl script at all, unless the command doesn't exist or fails.
Using open();, which allows you to either pipe input from your script to the command, or read the output of the command into your script.
It's important to mention that the system commands you listed, like cp and ls, are much better done using built-in functions in Perl itself. Any system call is a slow process, so use native functions when the desired result is something simple, like copying a file.
Some examples:
# Prints the output. Don't do this.
system("ls");
# Saves the output to a variable. Don't do this.
$lsResults = `ls`;
# Something like this is more useful.
system("imgcvt", "-f", "sgi", "-t", "tiff", "Image.sgi", "NewImage.tiff");
This page explains in a bit more detail the different ways that you can make system calls.
You can, as voithos says, use either system() or backticks. However, take into account that this is not recommended and that, for instance, cd won't work (it won't actually change the directory). Those commands are executed in a new shell and won't affect the running Perl script.
I would not rely on those commands; try to implement what you need in Perl itself (if you've decided to use Perl, anyway). In fact, Perl was originally designed to be a powerful substitute for sh and other UNIX shells for sysadmins.
You can surround the command in backticks:
`command`
The problem is that Perl tries to execute the bash builtins (i.e. source, ...) as if they were real files, but it can't find them because they don't exist as executables. The answer is to tell Perl explicitly what to execute. In the case of bash builtins like source, do the following and it works just fine.
my $XYZZY=`bash -c "source SOME-FILE; DO_SOMETHING_ELSE; ..."`;
or, for the case of cd, do something like the following:
my $LOCATION=`bash -c "cd /etc/init.d; pwd"`;
