I have a perl script with shebang as
#!/usr/bin/env perl
I want this script to print each line as it is executed. So I installed Devel::Trace and changed script shebang to
#!/usr/bin/env perl -d:Trace
But this gives an error, as it is not valid syntax.
What should I do to use both env functionality and tracing functionality?
This is one of those things that Just Doesn't Work™ on some systems, notably those with a GNU env.
Here's a sneaky workaround mentioned in perlrun that I've (ab)used in the past:
#!/bin/sh
#! -*-perl-*-
eval 'exec perl -x -wS $0 ${1+"$@"}'
if 0;
print "Hello, world!\n";
This will find perl on your PATH and you can add whatever other switches you'd like to the command line. You can even set environment variables, etc. before perl is invoked. The general idea is that sh runs the eval, but perl doesn't, and the extra gnarly bits ensure that Perl finds your program correctly and passes along all the arguments.
#!/bin/sh
FOO=bar; export FOO
#! -*-perl-*-
eval 'exec perl -d:Trace -x -wS $0 ${1+"$@"}'
if 0;
$Devel::Trace::TRACE = 1;
print "Hello, $ENV{FOO}!\n";
If you save the file with a .pl extension, your editor should detect the correct file syntax, but the initial shebang might throw it off. The other caveat is that if the Perl part of the script throws an error, the line number(s) might be off.
The neat thing about this trick is that it works for Ruby too (and possibly some other languages like Python, with additional modifications):
#!/bin/sh
#! -*-ruby-*-
eval 'exec ruby -x -wS $0 ${1+"$@"}' \
if false
puts "Hello, world!"
Hope that helps!
As @hek2mgl comments above, a flexible way of doing that is to use a shell wrapper, since the shebang accepts only a single argument (which here is going to be perl). A simple wrapper would be this one:
#!/bin/bash
env perl -d:Trace "$@"
Which you can then use like this:
#!./perltrace
or you can create similar scripts, and put them wherever perl resides.
First, the shebang line is handled differently depending on the OS. I'm talking about GNU/Linux here, the leading operating system. ;)
The shebang line will be split into only two parts: the interpreter (/usr/bin/perl) and one optional argument, which is placed in front of the filename argument that is appended automatically when executing the shebanged file. Some interpreters need that. Like #!/usr/bin/awk -f for example: -f is needed in front of the filename argument.
Perl doesn't need the -f to pass the perl file name, meaning it works like
perl file.pl
instead of
perl -f file.pl
That gives you basically room for one argument switch that you can choose, like
#!/usr/bin/perl -w
to enable warnings. Furthermore, since perl uses getopt() to parse its command line arguments, and getopt() does not require switches to be separated by spaces, you can even pass multiple switches as long as you don't separate them, like this:
#!/usr/bin/perl -Xw
Well, as soon as an option takes a value, like -a foo, that doesn't work any more, and such options can't be passed at all. No chance.
A more flexible way is to use a shell wrapper like this:
#!/bin/bash
exec perl -a -b=123 ... filename.pl
PS: Looking at your question again, you have been asking how to use perl switches together with /usr/bin/env perl. No chance. If you pass an option to Perl, like /usr/bin/env perl -w, Linux would try to open the interpreter 'perl -w'. No further splitting.
You can use the -S option of env to pass arguments. For example:
#!/usr/bin/env -S perl -w
works as expected
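For the record, -S comes from FreeBSD's env and has been in GNU coreutils since 8.30; it splits the remainder of the shebang line into separate arguments. You can check the splitting from an interactive shell too (a throwaway demo, not tied to Perl):

```shell
# env -S splits its single string argument on unquoted spaces before
# executing, which is exactly what the kernel refuses to do for shebangs.
# Double quotes inside the string group words into one argument:
env -S 'sh -c "echo split ok"'
# prints: split ok
```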
I am using bash and this works on Linux:
read -r -d '' VAR<<-EOF
Hello\nWorld
EOF
echo $VAR > trail
i.e. the contents of the file on Linux are
Hello\nWorld
When I run it on Solaris, the trial file has
Hello
World
The literal \n is being replaced with an actual newline. How can I avoid it?
Is it a problem with the heredoc or with the echo command?
[UPDATE]
Based on the explanation provided here:
echo -E $VAR > trail
worked fine on Solaris.
The problem is with echo. Its behavior is defined in POSIX, where interpreting \n is part of XSI but not of base POSIX itself.
You can avoid this on all platforms by using printf (which is good practice anyway):
printf "%s\n" "$VAR"
This is not a problem for bash, by the way. If you had used #!/usr/bin/env bash as the shebang (and also not run the script with sh script), the behavior would have been consistent.
If you use #!/bin/sh, you'll get whichever shell the system uses as a default, with varying behaviors like this.
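To see the difference concretely (a small demo; printf '%s' never touches the data):

```shell
VAR='Hello\nWorld'
# %s copies the argument verbatim in every POSIX shell; escape
# processing happens only in the format string, which you control.
printf '%s\n' "$VAR"       # one line, backslash-n kept literal
printf 'Hello\nWorld\n'    # two lines: the \n in the format expands
```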
To complement @that other guy's helpful answer:
Even when it is bash executing your script, there are several ways in which the observed behavior - echo by default interpreting escape sequences such as \n - can come about:
shopt -s xpg_echo could be in effect, which makes the echo builtin interpret \ escape sequences by default.
enable -n echo could be in effect, which disables the echo builtin and runs the external executable by default - and that executable's behavior is platform-dependent.
These options are normally NOT inherited when you run a script, but there are still ways in which they could take effect:
If your interactive initialization files (e.g., ~/.bashrc) contain commands such as the above and you source (.) your script from an interactive shell.
When not sourcing your script: If your environment contains a BASH_ENV variable that points to a script, that script is sourced before your script runs; thus, if that script contains commands such as the above, they will affect your script.
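A quick way to see the BASH_ENV effect (the file names here are made up for the demo):

```shell
# The env file flips on xpg_echo, making the echo builtin expand \n:
cat > /tmp/env-demo.sh <<'EOF'
shopt -s xpg_echo
EOF
cat > /tmp/script-demo.sh <<'EOF'
echo 'a\nb'
EOF
# Without BASH_ENV this prints the literal a\nb; with it, two lines:
BASH_ENV=/tmp/env-demo.sh bash /tmp/script-demo.sh
```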
I'm currently working on pm2, a process manager for NodeJS.
As it's targeted at Javascript, a new standard is coming, ES6.
To enable it on NodeJS I have to add the option --harmony.
Now for the bash part: I have to let the user pass this option to the interpreter that executes the file. While crawling the web (including Stack Overflow) I found this:
#!/bin/sh
':' //; exec "`command -v nodejs || command -v node`" $PM2_NODE_OPTIONS "$0" "$@"
Looks like a nice hack, but is it portable enough? On CentOS, FreeBSD...
It's kind of critical so I want to be sure.
Thank you
Let's break down the line of interest.
: is a no-op (do-nothing) command in shells.
; is a command separator.
exec will replace the current process with the process of the command that it is executing.
Notice that in the exec command it passes "$0" and "$#" as parameter to the command?
This allows the new process to read the script denoted by $0, use it as its script input, and read the original parameters as well via $@.
The new process will read the input script from the beginning, ignoring comments like #!/bin/sh, and will also ignore :.
Here's the trick: most interpreters, including perl, use syntax that is ignored by the shell, or vice versa, so that on re-reading the input file the interpreter will not exec itself again.
In this case, the new process ignores the whole line starting from :. Why is the rest of the line ignored? In some C-like interpreters, // is a comment.
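You can watch sh's side of the trick in isolation by substituting a harmless printf for node (nothing here is node-specific; the file name is made up):

```shell
# sh runs ':' (a no-op, with // as its ignored argument), then exec
# replaces the shell, so the JavaScript below is never parsed by sh.
cat > /tmp/poly-demo <<'EOF'
#!/bin/sh
':' //; exec printf 'sh ran and exec-ed with %s\n' "$0"
console.log("only node would reach this line");
EOF
sh /tmp/poly-demo
```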
I forgot to answer your question: yes, it seems portable. There may be corner cases, but I can't think of any right now.
To enable it on NodeJS I have to add the option --harmony.
Not necessarily. You can use the normal #!/usr/bin/env node shebang but set the harmony flags at runtime using the setflags module.
I'm not sure it's better solution, but it's worth mentioning.
Is it possible to add an option to an existing Bash command?
For example I would like to run a shell script when I pass -foo to a specific command (cp, mkdir, rm...).
You can make an alias for e.g. cp which calls a special script that checks for your special arguments and in turn calls the real command (the arguments you type after the alias are appended to its expansion):
$ alias cp="my-command-script cp"
And the script can look like
#!/bin/sh
# Get the actual command to be called
command="$1"
shift
# To save the real arguments
arguments=""
# Check for "-foo"
for arg in $*
do
case $arg in
-foo)
# TODO: Call your "foo" script"
;;
*)
arguments="$arguments $arg"
;;
esac
done
# Now call the actual command
$command $arguments
Some programmer dude's code may look cool and attractive... but you should use it very carefully for most commands: https://unix.stackexchange.com/questions/41571/what-is-the-difference-between-and
About usage of $* and $#:
You shouldn't use either of these, because they can break unexpectedly
as soon as you have arguments containing spaces or wildcards.
I was using this myself for months until I realized it was the reason why my bash code sometimes didn't work.
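A variant of the wrapper loop that survives spaces (my sketch, not the original answer: a bash array plus "$@" keeps each argument intact):

```shell
#!/bin/bash
# Collect pass-through arguments in an array instead of a string.
args=()
for arg in "$@"; do            # "$@" preserves embedded spaces
    case $arg in
        -foo) echo "foo seen" ;;   # hook your "foo" script call in here
        *)    args+=("$arg") ;;
    esac
done
printf '%s\n' "${args[@]}"     # each remaining argument, one per line
```

Called as ./wrapper -foo "two words", this prints foo seen and then two words as a single intact argument.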
Consider a much more reliable, but less easy and less portable, option. As pointed out in the comments, recompile the original command with your changes, that is:
Download c/c++ source code from some respected developers repositories:
https://github.com/torvalds/linux
http://git.savannah.gnu.org/cgit/coreutils.git/tree/src
https://github.com/coreutils/coreutils/tree/master/src
https://github.com/bluerise/openbsd-src/tree/master/bin
https://git.busybox.net/busybox/tree/coreutils
Add some code in c/c++, compile with gcc/g++.
Also, I guess, you can edit bash itself so that it checks whether a string passed to it as a command matches some pattern and, if so, doesn't execute it but runs some different command or a bash script instead:
https://tiswww.case.edu/php/chet/bash/bashtop.html#Availability
If you really are into this idea of customizing and adding functionality to your shell, maybe check out some other cool fashionable shells like zsh, fish, probably they have something, I don't know.
I've got a bunch of shell scripts that use various commands and other tools.
So is there a way I can list all the programs that the shell scripts are using?
A kind of way to retrieve dependencies from the source code.
Use sed to translate pipes and $( into newlines, then use awk to output the first word of a line if it might be a command. Then pipe into sort -u and xargs which to find which of the potential command words exist in the PATH:
sed 's/|\|\$(/\n/g' FILENAME |
awk '$1~/^#/ {next} $1~/=/ {next} /^[[:space:]]*$/ {next} {print $1}' |
sort -u |
xargs which 2>/dev/null
One way you can do it is at run time. You can run the bash script in debug mode with the -x option and then parse its output: all executed commands plus their arguments will be printed to standard error.
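For example, a rough sketch of that approach (the awk is a heuristic, not a real parser, and the file names are made up):

```shell
cat > /tmp/deps-demo.sh <<'EOF'
date > /dev/null
tr a b < /dev/null
EOF
# bash -x traces each command to stderr as "+ cmd args...";
# keep only the command word and de-duplicate:
bash -x /tmp/deps-demo.sh 2>&1 >/dev/null | awk '{print $2}' | sort -u
```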
While I have no general solution, you could try two approaches:
You might use strace to see which programs were executed by your script.
You might run your program in a pbuilder environment and see which packages are missing.
Because of the dynamic nature of the shell, you cannot do this without running the script.
For example:
TASK="cc foo.c"
time $TASK
Without running the script, it would be really hard to determine that cc was called, even in a trivial example like the one above.
At runtime, you can inspect the debug output of sh -x myscript, as pointed out by thiton (+1) and ks1322 (+1). You can also use a tool like strace to catch all exec() syscalls.
I want to write a very simple script which takes a process name and returns the tail of the last file whose name contains the process name.
I wrote something like this:
#!/bin/sh
tail $(ls -t *"$1"*| head -1) -f
My question:
Do I need the first line?
Why isn't ls -t *"$1"*| head -1 | tail -f working?
Is there a better way to do it?
1: The first line is a so-called she-bang; read the description here:
In computing, a shebang (also called a hashbang, hashpling, pound bang, or crunchbang) refers to the characters "#!" when they are the first two characters in an interpreter directive as the first line of a text file. In a Unix-like operating system, the program loader takes the presence of these two characters as an indication that the file is a script, and tries to execute that script using the interpreter specified by the rest of the first line in the file.
2: tail can't take the filename from stdin: it can either take text on stdin or a file as a parameter. See the man page for this.
3: No better solution comes to mind. Pay attention to filenames containing spaces: your current solution does not handle them; you need to add quotes around the $() block.
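For reference, the quoted version (only the quotes around $(...) are new; head -n 1 is the POSIX spelling of head -1):

```shell
#!/bin/sh
# Without the quotes, a returned filename like "my proc.log" would be
# split into two separate arguments to tail.
tail -f "$(ls -t *"$1"* | head -n 1)"
```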
$1 contains the first argument; the process name is actually in $0. This, however, can contain the path, so you should use:
#!/bin/sh
tail $(ls -rt *"`basename $0`"*| head -1) -f
You also have to use ls -rt to get the oldest file first.
You can omit the shebang if you run the script from a shell; in that case the contents will be executed by your current shell instance. In many cases this causes no problems, but it is still bad practice.
Following on from @theomega's answer and @Idan's question in the comments, the she-bang is needed, among other things, because some UNIX / Linux systems have more than one command shell.
Each command shell has a different syntax, so the she-bang provides a way to specify which shell should be used to execute the script, even if you don't specify it in your run command by typing (for example)
./myscript.sh
instead of
/bin/sh ./myscript.sh
Note that the she-bang can also be used in scripts written in non-shell languages such as Perl; in that case you'd put
#!/usr/bin/perl
at the top of your script.