I want to write a very simple script that takes a process name and tails the most recent file whose name contains that process name.
I wrote something like that :
#!/bin/sh
tail $(ls -t *"$1"*| head -1) -f
My questions:
Do I need the first line?
Why isn't ls -t *"$1"*| head -1 | tail -f working?
Is there a better way to do it?
1: The first line is a so-called shebang; read the description here:
In computing, a shebang (also called a hashbang, hashpling, pound bang, or crunchbang) refers to the characters "#!" when they are the first two characters in an interpreter directive as the first line of a text file. In a Unix-like operating system, the program loader takes the presence of these two characters as an indication that the file is a script, and tries to execute that script using the interpreter specified by the rest of the first line in the file.
2: tail can't take the filename from stdin: it can either read the text itself from stdin or take a file name as a parameter. See the man page for this.
3: No better solution comes to mind, but pay attention to filenames containing spaces: they break your current solution, so you need to add quotes around the $() block.
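For example, a quoted variant of the original command (a minimal sketch, still taking the process name in $1) would be:
#!/bin/sh
tail -f "$(ls -t *"$1"* | head -1)"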
$1 contains the first argument; the process name is actually in $0. This, however, can contain the path, so you should use:
#!/bin/sh
tail $(ls -rt *"`basename $0`"*| head -1) -f
You also have to use ls -rt to get the oldest file first.
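For reference (using the question's *"$1"* pattern as a placeholder), -r simply reverses the time sort, so the two variants pick opposite ends of the listing:
ls -t  *"$1"* | head -1    # newest matching file
ls -rt *"$1"* | head -1    # oldest matching file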
You can omit the shebang if you run the script from a shell; in that case the contents will be executed by your current shell instance. In many cases this causes no problems, but it is still bad practice.
Following on from #theomega's answer and #Idan's question in the comments, the shebang is needed, among other things, because some UNIX / Linux systems have more than one command shell.
Each command shell has a different syntax, so the shebang provides a way to specify which shell should be used to execute the script, even if you don't specify it in your run command by typing (for example)
./myscript.sh
instead of
/bin/sh ./myscript.sh
Note that the shebang can also be used in scripts written in non-shell languages such as Perl; in that case you'd put
#!/usr/bin/perl
at the top of your script.
Related
I have the following script created by some self-proclaimed bash expert:
SCRIPT_LOCATION="$(readlink -f $0)"
SCRIPT_DIRECTORY="$(dirname ${SCRIPT_LOCATION})"
export PYTHONPATH="${PYTHONPATH}:${SCRIPT_DIRECTORY}/util"
That runs nicely on my local Ubuntu 16.04. Now I wanted to use it on our RH 7.2 servers, and there I got an error message from readlink about being called with bad parameters.
Then I figured: on Ubuntu, $0 gives "bash"; whereas on RH, it gives "-bash".
EDIT: script is invoked as . ourscript.sh
Questions:
Any idea why that is?
When I change my script to use a hardcoded readlink -f bash the whole thing works. Are there "better" ways of fixing this?
Feel free to also explain what readlink -f bash is actually doing ;-)
As the script is sourced, the readlink -f $0 is pointless: it will just show you the command used to run the shell you are currently using.
To explain the difference in the command name (bash vs. -bash), let's look at the bash man page:
A login shell is one whose first character of argument zero is a -, or one started with the --login option.
When bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable. The --noprofile option may be used when the shell is started to inhibit this behavior.
So presumably Ubuntu starts bash as a non-login shell, while RH starts it as a login shell (hence the leading -).
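If you want to verify that on a given machine, bash itself can tell you whether it is a login shell; a quick, bash-specific check:
echo $0                                                # "-bash" for a login shell, "bash" otherwise
shopt -q login_shell && echo "login shell" || echo "non-login shell"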
As for readlink, we can again look at the man page
-f, --canonicalize
canonicalize by following every symlink in every component of the given name recursively; all but the last component must exist
Therefore it follows symlinks to the base.
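A small illustration of that symlink-following (the paths are made up):
touch /tmp/realfile              # an ordinary file
ln -s /tmp/realfile /tmp/mylink  # a symlink pointing at it
readlink -f /tmp/mylink          # prints /tmp/realfile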
Using readlink -f with any non-qualified path will result in it just appending the last argument to your current working directory, which will not actually show where the script is run from.
Try putting any random string instead of bash after it and you will see the script is unaffected.
e.g.
readlink -f dafsfdsf
Returns
/home/me/testscript/dafsfdsf
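If the script has to be sourced, a common workaround (assuming bash, since BASH_SOURCE is bash-specific) is to use ${BASH_SOURCE[0]} instead of $0; it still points at the script file even when the script is sourced. A sketch of the original snippet adjusted this way:
#!/bin/bash
# works whether the script is executed or sourced (bash only)
SCRIPT_LOCATION="$(readlink -f "${BASH_SOURCE[0]}")"
SCRIPT_DIRECTORY="$(dirname "${SCRIPT_LOCATION}")"
export PYTHONPATH="${PYTHONPATH}:${SCRIPT_DIRECTORY}/util"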
I found the snippet below in a .sh file of my project, used to define some path:
PGMPATH=`pwd|sed -e "s#/survey1##" `
What does the above line mean?
PGMPATH is then referenced as below:
LIBS="${LIBS}:${PGMPATH}/edmz-par-api_1.4.jar"
LIBS="${LIBS}:${PGMPATH}/commons-logging.jar"
If it gives the path where the jar files are located, please explain how it works.
So first you should know that this is two commands - pwd and sed -e "s#/survey1##" - and these two commands are being run together in a pipeline. That is, the output of the first command is being sent to the second command as input.
That is, in general, what | means in unix shell scripts.
So then, what do each of these commands do? pwd stands for "print working directory" and prints the current directory (where you ran the script from, unless the script itself had any cd commands in it).
sed is a command that's really a whole separate programming language, commonly used for simple text-processing tasks. The simple sed program you have here - s#/survey1## - strips the string /survey1 out of its input and prints the result.
So the end result is that the variable PGMPATH becomes the current directory with /survey1 stripped out of it.
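As a concrete, made-up example of what that pipeline produces:
pwd                            # /home/user/app/survey1  (say the script is run from here)
pwd | sed -e "s#/survey1##"    # /home/user/app
So the jar files referenced via ${PGMPATH} are expected to sit in the directory you get after stripping /survey1, typically the parent of the survey1 directory.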
I am wondering how I can use the file command to determine whether a file is a script or not. For example, in /usr/bin I want to know which files are scripts and which are not. I don't want to write a script for this; I just need a command to determine that.
You can certainly trust file to find any script in the directory you specify:
file /usr/bin/* | grep script
Or, if you prefer to do it yourself and you are using bash you can do:
for f in /usr/bin/*; do r=$(head -1 "$f" | grep '^#! */') && echo "$f: $r"; done
which uses the shebang line to determine the interpreter and thus identify the file as a script.
This should work (assuming that you're using BASH):
for f in *; do file "$f" | grep "executable"; done
Update: I just validated that this works for C shell scripts, BASH, Perl, and Ruby. It also ignores file permissions (meaning that even if a file doesn't have the executable bit set, it still works). This seems to be due to the file command looking for a command interpreter (bash, perl, etc…)
file can't guarantee to tell you anything about a text file, if it doesn't know how to interpret it.
You may need to do a combination of things. jschorr's answer should probably work for the stuff in /bin, but another way to test a file might be to check whether a text file is executable.
stat -c "%A" myfilename | grep x
If that returns anything, then your file has execute permissions on it. So if file gets you a description that tells you it's plain text (like "ASCII text"), and there are execute permissions on the file, then it's a pretty good bet that it's a script file.
Not perfect, but I don't think anything will be.
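A sketch that combines the two checks, treating executable plain-text files as likely scripts (file -b just suppresses the leading filename in file's output):
for f in /usr/bin/*; do
    # executable bit set AND file reports some kind of text -> probably a script
    if [ -x "$f" ] && file -b "$f" | grep -q "text"; then
        echo "$f looks like a script"
    fi
done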
Can I execute a command within another command in UNIX shells?
If not, can I use the output of one command as the input of the next, as in:
command x, then command y,
where in command y I want to use the output of command x?
You can use backquotes for this.
For example, this will cat the file file.txt:
cat `echo file.txt`
And this will print the date:
echo the date is `date`
The code between backquotes is executed and replaced by its result.
You can do something like:
x=$(grep $(dirname "$path") file)
Here dirname "$path" will run first and its result will be substituted; then grep will run, searching for the result of dirname in the file.
What exactly are you trying to do? It's not clear from the commands you are executing. Perhaps if you describe what you're looking for, we can point you in the right direction.
If you want to execute a command over a range of file (or directory) names returned by the find command, Colin is correct: you need to look at the -exec option of find. If you're looking to execute a command over a bunch of arguments listed in a file or coming from stdin, you need to check out the xargs command. If you want to put the output of a single command onto the command line of another command, then using $(command) (or `command` with backquotes) will do the job.
There are a lot of ways to do this, but without knowing what it is you're trying to do, it's hard to be more helpful.
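To illustrate those two alternatives (the file names and the pattern below are just made-up examples):
find . -name "*.log" -exec ls -l {} \;   # run ls -l on every file that find returns
xargs rm -f < files-to-delete.txt        # feed names listed in a file to rm via xargs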
Here is an example where I have used nested commands: I ran ls -ltr on top of a find command, and ls -ltr executes on the files that find returns.
ls -ltr $(find . -name "srvm.jar")
What exactly are the uses of '-' in bash? I know it can be used for
cd - # to take you to the old 'present working directory'
some stream generating command | vim - # somehow vim gets the text.
My question is what exactly is - in bash? In what other contexts can I use it?
Regards
Arun
That depends on the application.
cd -
returns to the last directory you were in.
Often - stands for stdin or stdout. For example:
xmllint -
does not check an XML file but checks the XML on stdin. Sample:
xmllint - <<EOF
<root/>
EOF
The same is true for cat:
cat -
reads from stdin. A last sample where - stands for stdout:
wget -O- http://google.com
will fetch google.com over HTTP and write it to stdout.
By the way: that has nothing to do with your shell (e.g. bash). It's just the semantics of the called application.
- in bash has no meaning as a standalone argument (I would not go as far as to say it has no meaning in the shell at all; it is, for example, used in expansion, e.g. ls [0-9]* lists all files starting with a digit).
As far as being a standalone parameter value, bash will do absolutely nothing special with it and will pass it to the command as-is.
What the command does with it is up to each individual program - can be pretty much anything.
There's a commonly used convention that a - argument indicates to a program that its input should be read from STDIN instead of a file. Again, this is merely how many programs are coded and technically has nothing to do with bash.
From tldp:
This can be done for instance using a hyphen (-) to indicate that a program should read from a pipe
This explains how your vim example gets its data.
There is no universal rule here; the meaning changes with the context.
It is pretty useful with cd when you have something to do repeatedly in two directories; refer to tip #4 here: http://www.thegeekstuff.com/2008/10/6-awesome-linux-cd-command-hacks-productivity-tip3-for-geeks/ and the sketch below.
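For example (directories made up), cd - lets you hop back and forth between the two most recent directories:
cd /var/log
cd /etc
cd -        # back in /var/log
cd -        # back in /etc again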
In many places it means STDIN.