I've got a bunch of shell scripts that use various commands and other tools.
So is there a way I can list all programs that the shell scripts are using?
Kind of a way to retrieve dependencies from the source code.
This uses sed to translate pipes and $( to newlines, then uses awk to output the first word of a line if it might be a command, and finally pipes into which to find the potential command words in the PATH:
sed 's/|\|\$(/\n/g' FILENAME |
awk '$1~/^#/ {next} $1~/=/ {next} /^[[:space:]]*$/ {next} {print $1}' |
sort -u |
xargs which 2>/dev/null
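For example, run against a small throwaway script (the /tmp path and its contents are made up for illustration), the pipeline resolves the commands it can find on the PATH:

```shell
#!/bin/sh
# A throwaway script to analyze (hypothetical contents):
cat > /tmp/deps_demo.sh <<'EOF'
#!/bin/sh
VAR=1
grep foo /etc/hosts | sort
EOF

# The pipeline from above, with FILENAME filled in; on a typical
# system this prints the full paths of grep and sort:
sed 's/|\|\$(/\n/g' /tmp/deps_demo.sh |
  awk '$1~/^#/ {next} $1~/=/ {next} /^[[:space:]]*$/ {next} {print $1}' |
  sort -u |
  xargs which 2>/dev/null
```

The comment line and the VAR=1 assignment are filtered out by awk; only grep and sort survive to the which lookup.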
One way you can do it is at run time. You can run the bash script in debug mode with the -x option and then parse its output. All executed commands plus their arguments will be printed to standard error.
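A minimal sketch of that idea (the demo script under /tmp is made up): the trace goes to standard error with a leading +, so redirect it into the pipe and keep the first word of each traced command:

```shell
#!/bin/sh
# A tiny script to trace (hypothetical contents):
cat > /tmp/trace_demo.sh <<'EOF'
date > /dev/null
uname > /dev/null
EOF

# -x prints each command to stderr prefixed with "+"; strip the
# prefix and keep the command names:
sh -x /tmp/trace_demo.sh 2>&1 >/dev/null |
  sed 's/^[+ ]*//' |
  awk '{print $1}' |
  sort -u
```

This prints date and uname, one per line. It only sees the code paths that actually execute, which is the fundamental limitation of the runtime approach.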
While I have no general solution, you could try two approaches:
You might use strace to see which programs were executed by your script.
You might run your program in a pbuilder environment and see which packages are missing.
Because of the dynamic nature of the shell, you cannot do this without running the script.
For example:
TASK="cc foo.c"
time $TASK
Even in a trivial example like the one above, it is really hard to determine without running the script that cc was called.
At runtime, you can inspect the debug output of sh -x myscript, as pointed out by thiton (+1) and ks1322 (+1). You can also use a tool like strace to catch all exec() syscalls.
I have a perl script with shebang as
#!/usr/bin/env perl
I want this script to print each line as it is executed. So I installed Devel::Trace and changed the script's shebang to
#!/usr/bin/env perl -d:Trace
But this gives error as it is not a valid syntax.
What should I do to use both env functionality and tracing functionality?
This is one of those things that Just Doesn't Work™ on some systems, notably those with a GNU env.
Here's a sneaky workaround mentioned in perlrun that I've (ab)used in the past:
#!/bin/sh
#! -*-perl-*-
eval 'exec perl -x -wS $0 ${1+"$@"}'
if 0;
print "Hello, world!\n";
This will find perl on your PATH and you can add whatever other switches you'd like to the command line. You can even set environment variables, etc. before perl is invoked. The general idea is that sh runs the eval, but perl doesn't, and the extra gnarly bits ensure that Perl finds your program correctly and passes along all the arguments.
#!/bin/sh
FOO=bar; export FOO
#! -*-perl-*-
eval 'exec perl -d:Trace -x -wS $0 ${1+"$@"}'
if 0;
$Devel::Trace::TRACE = 1;
print "Hello, $ENV{FOO}!\n";
If you save the file with a .pl extension, your editor should detect the correct file syntax, but the initial shebang might throw it off. The other caveat is that if the Perl part of the script throws an error, the line number(s) might be off.
The neat thing about this trick is that it works for Ruby too (and possibly some other languages like Python, with additional modifications):
#!/bin/sh
#! -*-ruby-*-
eval 'exec ruby -x -wS $0 ${1+"$@"}' \
if false
puts "Hello, world!"
Hope that helps!
As @hek2mgl comments above, a flexible way of doing that is using a shell wrapper, since the shebang admits a single argument (which is going to be perl). A simple wrapper would be this one:
#!/bin/bash
env perl -d:Trace "$@"
which you can then use like this
#!./perltrace
or you can create similar scripts, and put them wherever perl resides.
First, the shebang line is handled differently depending on the OS. I'm talking about GNU/Linux here, the leading operating system. ;)
The shebang line will be split into only two parts: the interpreter (/usr/bin/perl) and a single optional argument, which is placed before the filename argument that is appended automatically when executing the shebanged file. Some interpreters need that, like #!/usr/bin/awk -f for example: -f is needed in front of the filename argument.
Perl doesn't need a switch like -f to be handed the script file name, meaning it works like
perl file.pl
instead of
perl -f file.pl
That gives you basically room for one argument switch that you can choose, like
#!/usr/bin/perl -w
to enable warnings. Furthermore, since perl uses getopt()-style parsing of command line arguments, which does not require switches to be separated by spaces, you can even pass multiple switches as long as you don't separate them, like this:
#!/usr/bin/perl -Xw
Well, as soon as an option takes a value, like -a foo, that doesn't work any more, and such options can't be passed at all. No chance.
A more flexible way is to use a shell wrapper like this:
#!/bin/bash
exec perl -a -b=123 ... filename.pl
PS: Looking at your question again, you have been asking how to use perl switches together with /usr/bin/env perl. No chance. If you pass an option to Perl, like /usr/bin/env perl -w, Linux would try to execute an interpreter literally named 'perl -w'. No further splitting happens.
You can use the -S option of env to pass arguments. For example:
#!/usr/bin/env -S perl -w
works as expected.
I am trying to find a solution to run unix shell commands in CasperJS in synchronous mode.
I have seen exec-sync for Node.js, but could never make it work with Casper:
Sync-exec: http://davidwalsh.name/sync-exec
I intend to run some unix utilities through casperjs:
sed -n "1,1000p" file1 > file2 -> To copy the first 1000 lines from file1 to file2
wc -l filename -> To count the lines
Maybe someone has experience with this.
I have resolved the issue in the following way, just in case someone requires it:
Running unix commands as per example:
https://github.com/ariya/phantomjs/blob/master/examples/child_process-examples.js
As far as synchronization is concerned, I have wrapped the command execution under:
casper.then(function() {
});
and achieved synchronized execution this way.
I was wondering if there is a way to run Linux commands from a perl script. I am talking about commands such as cd, ls, ll, clear, and cp.
You can execute system commands in a variety of ways, some better than others.
Using system();, which prints the output of the command, but does not return the output to the Perl script.
Using backticks (``), which don't print anything, but return the output to the Perl script. An alternative to using actual backticks is to use the qx(); function, which is easier to read and accomplishes the same thing.
Using exec();, which does the same thing as system();, but does not return to the Perl script at all, unless the command doesn't exist or fails.
Using open();, which allows you to either pipe input from your script to the command, or read the output of the command into your script.
It's important to mention that the system commands that you listed, like cp and ls are much better done using built-in functions in Perl itself. Any system call is a slow process, so use native functions when the desired result is something simple, like copying a file.
Some examples:
# Prints the output. Don't do this.
system("ls");
# Saves the output to a variable. Don't do this.
$lsResults = `ls`;
# Something like this is more useful.
system("imgcvt", "-f", "sgi", "-t", "tiff", "Image.sgi", "NewImage.tiff");
This page explains in a bit more detail the different ways that you can make system calls.
You can, as voithos says, use either system() or backticks. However, take into account that this is not recommended and that, for instance, cd won't work (it won't actually change the directory). Note that those commands are executed in a new shell, and won't affect the running perl script.
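The same isolation can be seen in the shell itself: a command run in a child process cannot change its parent's working directory, which is exactly why a cd through system() has no lasting effect (a minimal sketch):

```shell
#!/bin/sh
cd /tmp
sh -c 'cd /; pwd'   # the child changes to / and prints /
pwd                 # the parent is still in /tmp
```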
I would not rely on those commands and try to implement your script in Perl (if you're decided to use Perl, anyway). In fact, Perl was designed at first to be a powerful substitute for sh and other UNIX shells for sysadmins.
You can surround the command in backticks:
`command`
The problem is that perl is trying to execute the bash builtins (i.e. source, ...) as if they were real files, but it can't find them as they don't exist. The answer is to tell perl what to execute explicitly. In the case of bash builtins like source, do the following and it works just fine:
my $XYZZY=`bash -c "source SOME-FILE; DO_SOMETHING_ELSE; ..."`;
or for the case of cd, do something like the following:
my $LOCATION=`bash -c "cd /etc/init.d; pwd"`;
Can I execute a command within another command in UNIX shells?
If not, can I use the output of the previous command as the input of the next, as in:
command x, then command y,
where in command y I want to use the output of command x?
You can use the backquotes for this.
For example, this will cat file.txt:
cat `echo file.txt`
And this will print the date
echo the date is `date`
The code between backquotes will be executed and replaced by its result.
You can do something like:
x=$(grep $(dirname "$path") file)
Here dirname "$path" will run first and its result will be substituted; then grep will run, searching for the result of dirname in the file.
What exactly are you trying to do? It's not clear from the commands you are executing. Perhaps if you describe what you're looking for we can point you in the right direction. If you want to execute a command over a range of file (or directory) names returned by the "find" command, Colin is correct: you need to look at the "-exec" option of "find". If you're looking to execute a command over a bunch of arguments listed in a file or coming from stdin, you need to check out the "xargs" command. If you want to put the output of a single command on to the command line of another command, then using "$(command)" (or `command` with backquotes) will do the job. There are a lot of ways to do this, but without knowing what it is you're trying to do, it's hard to be more helpful.
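Hedged illustrations of the three approaches mentioned, using throwaway files under /tmp/cmddemo (the directory and file names are made up):

```shell
#!/bin/sh
# Set up two small files to operate on:
mkdir -p /tmp/cmddemo && cd /tmp/cmddemo
printf 'one\n'      > a.log
printf 'one\ntwo\n' > b.log

find . -name '*.log' -exec wc -l {} \;   # run a command per found file
printf 'a.log\nb.log\n' | xargs cat      # build a command line from stdin
echo "a.log has $(wc -l < a.log) line"   # substitute a command's output
```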
Here is an example where I have used nested system commands: I ran "ls -ltr" on top of a find command, and it executes serially on the find output.
ls -ltr $(find . -name "srvm.jar")
I want to write a very simple script which takes a process name and returns the tail of the last file whose name contains the process name.
I wrote something like this:
#!/bin/sh
tail $(ls -t *"$1"*| head -1) -f
My question:
Do I need the first line?
Why isn't ls -t *"$1"*| head -1 | tail -f working?
Is there a better way to do it?
1: The first line is a so-called she-bang; read the description here:

In computing, a shebang (also called a hashbang, hashpling, pound bang, or crunchbang) refers to the characters "#!" when they are the first two characters in an interpreter directive as the first line of a text file. In a Unix-like operating system, the program loader takes the presence of these two characters as an indication that the file is a script, and tries to execute that script using the interpreter specified by the rest of the first line in the file.
2: tail can't take the filename from stdin: it can either take text on stdin or a file as a parameter. See the man page for this.
3: No better solution comes to my mind. Pay attention to filenames containing spaces: these do not work with your current solution; you need to add quotes around the $() block.
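A sketch of the quoted form, using made-up filenames containing spaces under /tmp/tailq (with -n 1 instead of -f so the example terminates):

```shell
#!/bin/sh
mkdir -p /tmp/tailq && cd /tmp/tailq
printf 'old\n' > 'a myproc 1.log'
printf 'new\n' > 'b myproc 2.log'
touch -t 202001010000 'a myproc 1.log'   # force the first file to be older

# Quoting the substitution keeps "b myproc 2.log" as one argument
# instead of letting it split into three words; prints "new":
tail -n 1 "$(ls -t *myproc* | head -1)"
```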
$1 contains the first argument; the process name is actually in $0. This, however, can contain the path, so you should use:
#!/bin/sh
tail $(ls -rt *"`basename $0`"*| head -1) -f
You also have to use ls -rt to get the oldest file first.
You can omit the shebang if you run the script from a shell; in that case the contents will be executed by your current shell instance. In many cases this will cause no problems, but it is still a bad practice.
Following on from @theomega's answer and @Idan's question in the comments, the she-bang is needed, among other things, because some UNIX / Linux systems have more than one command shell.
Each command shell has a different syntax, so the she-bang provides a way to specify which shell should be used to execute the script, even if you don't specify it in your run command by typing (for example)
./myscript.sh
instead of
/bin/sh ./myscript.sh
Note that the she-bang can also be used in scripts written in non-shell languages such as Perl; in that case you'd put
#!/usr/bin/perl
at the top of your script.