Executable Unix script cannot be found by which command - linux

I have a Unix bash script written by a former teammate that must be in my PATH, though I can't find it by manual inspection (see below). I can however execute it anywhere by just typing
$ my_script
I would like to view and edit this script. However, when I try to find it via the which command, I get a blank response. Return code indicates an error:
$ which my_script
$ echo $?
1
And yet, I can run the script. I manually combed my PATH and could not find it. Honestly, I have not encountered anything like this in 20+ years. Is there any other command besides which, and/or ways such a script could be hidden?

To determine the type of any command callable from the shell, use the type builtin.
In your case, since the presumptive script turned out to be a shell function, you would have seen (assuming bash):
$ type my_script # Bash: option -t would output just 'function'
my_script is a function
my_script ()
{
    ... # The function's definition
}
From there - as you did - you can examine the profile (e.g., ~/.bash_profile) and/or initialization scripts (e.g., ~/.bashrc) to determine where the function was sourced.
Caveat: The function signature output by type is normalized to the <name> () form, even if you defined the function as function <name>.
In bash you can even find out directly where a given function was defined (tip of the hat to this superuser.com answer and Charles Duffy for inspiring me to find it and coming up with the most succinct form):
$ (shopt -s extdebug; declare -F my_func)
my_func 112 /Users/jdoe/.bashrc # sample output
112 is the line number within the indicated file.
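Once the file and line are known, you can also print the function's body directly; declare -f dumps the definition exactly as the running shell has it (my_func and the .bashrc path are just the sample values from above):
$ declare -f my_func
$ sed -n '105,125p' ~/.bashrc # view the defining file around the reported line 112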

Found it. I am leaving the question and answer in case someone else runs into a similar situation. I am marking a different answer as the final answer as it contains some more useful tools for finding what I found (see below).
Summary
The "script" was actually a bash function that had been written into a sourced . file.
Details
The account's .bashrc file was sourcing another . file:
. ~/.other-dot-file
In that other . file was the bash function called my_script.
function my_script {
....
}
Since this was sourced, the function could be executed from the command line, mimicking the behavior of a full-fledged script. There is no other code that calls this function - it is purely meant to be called from the command line, as if it were a script.
This willful obscurity is part of the reason the author is no longer with us.
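If someone else needs to hunt down where such a function hides, a plain grep across the startup files and anything they source is usually the quickest route; a rough sketch using the file names from this case:
$ grep -n 'my_script' ~/.bash_profile ~/.bashrc ~/.other-dot-file 2>/dev/null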

Try
1. type my_script
or
2. find / -name my_script -print
Note that find will throw a lot of permission errors if you run this as a normal user. You can run it as root to avoid that, or redirect STDERR to /dev/null.
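For example, as a normal user:
$ find / -name my_script -print 2>/dev/null # discard the permission-denied noise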

Related

Find command in Command substitution bash script

Hope you are doing well. I am kind of new to bash scripting, so I wanted to ask you a question about something I stumbled upon recently while playing around with the find command. I noticed that when I search for a script name with find via command substitution in a bash script, and then call the variable that holds the result, it finds the script's full path and also executes it right after. Can you please let me know why and how this works?
E.g.
SEARCH=$(find / -type f -iname "script.name" 2> /dev/null)
$SEARCH
Regards
Bash performs expansions before it builds and executes the command. If you execute cd "$HOME" for example, you probably want to go to the directory stored in the variable HOME and not to a directory literally named $HOME. This allows you to put a command name inside a variable and run it by expanding the variable as the first word, although this isn't really recommended because it becomes very complex to manage for anything more than the simplest commands. The code in your question would probably not work as intended if find returned multiple results for example.
See BashParser on the BashFAQ.
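For example, if find returned two paths, the unquoted $SEARCH would split into two words and the shell would try to run the first path with the second as its argument. A slightly more robust sketch, if you really do want to execute whatever is found (assumes bash 4 or later for mapfile), collects the matches into an array and runs only the first one:
# one array element per line of find output
mapfile -t SEARCH_RESULTS < <(find / -type f -iname "script.name" 2> /dev/null)
if [ "${#SEARCH_RESULTS[@]}" -gt 0 ]; then
    "${SEARCH_RESULTS[0]}" # quoted, so spaces in the path survive
fi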

$0 gives different results on Redhat versus Ubuntu?

I have the following script created by some self-claimed bash expert:
SCRIPT_LOCATION="$(readlink -f $0)"
SCRIPT_DIRECTORY="$(dirname ${SCRIPT_LOCATION})"
export PYTHONPATH="${PYTHONPATH}:${SCRIPT_DIRECTORY}/util"
That runs nicely on my local Ubuntu 16.04. Now I wanted to use it on our RH 7.2 servers, and there I got an error message from readlink about being called with bad parameters.
Then I figured: on Ubuntu, $0 gives "bash"; whereas on RH, it gives "-bash".
EDIT: script is invoked as . ourscript.sh
Questions:
Any idea why that is?
When I change my script to use a hardcoded readlink -f bash the whole things works. Are there "better" ways for fixing this?
Feel free to also explain what readlink -f bash is actually doing ;-)
As the script is sourced, readlink -f $0 is pointless, as it will just show you the command used to run the shell you are currently using.
To explain the difference, let's look at the bash man page:
A login shell is one whose first character of argument zero is a -, or one started with the --login option.
When bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable. The --noprofile option may be used when the shell is started to inhibit this behavior.
So presumably your Ubuntu shell is started as a plain (non-login) interactive shell, whereas the RH shell is a login shell, which explains the leading -.
As for readlink, we can again look at the man page
-f, --canonicalize
canonicalize by following every symlink in every component of the given name recursively; all but the last component must exist
Therefore it follows symlinks to the base.
Using readlink -f with any unqualified path will just append the last argument to your current working directory, which will not actually show where the script is run.
Try putting any random string instead of bash after it and you will see the script is unaffected.
e.g.
readlink -f dafsfdsf
Returns
/home/me/testscript/dafsfdsf
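If the file is meant to be sourced, a common alternative is to use BASH_SOURCE instead of $0, since ${BASH_SOURCE[0]} refers to the sourced file itself rather than to the shell that sourced it. A minimal sketch, assuming bash is the only shell that needs to be supported:
SCRIPT_LOCATION="$(readlink -f "${BASH_SOURCE[0]}")"
SCRIPT_DIRECTORY="$(dirname "${SCRIPT_LOCATION}")"
export PYTHONPATH="${PYTHONPATH}:${SCRIPT_DIRECTORY}/util"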

File execution with dot space versus dot slash

I am attempting to work with an existing library of code but have encountered an issue. In short, I execute a shell script (let's call this one A) whose first act is to call another script (B). Script B is in my current directory (a requirement of the program I'm using). The software's manual makes reference to bash; however, comments in A suggest it was developed in ksh. I've been operating in bash so far.
Inside A, the line to execute B is simply:
. B
It uses the "dot space" syntax to call the program. It doesn't do anything unusual like sudo.
When I call A without dot space syntax, i.e.:
./A
it always errors saying it cannot find the file B. I added pwd, ls, whoami, echo $SHELL, and echo $PATH lines to A to debug and confirmed that B is in fact right there, the script is running with the same $SHELL as I am at the command prompt, the script is the same user as I am, and the script has the same search path $PATH as I do. I also verified if I do:
. B
at the command line, it works just fine. But, if I change the syntax inside A to:
./B
instead, then A executes successfully.
Similarly, if I execute A with dot space syntax, then both . B and ./B work.
Summarizing:
./A only works if A contains ./B syntax.
. A works for A with either ./B or . B syntax.
I understand that using dot space (i.e. . A) syntax executes without forking to a subshell, but I don't see how this could result in the behavior I'm observing given that the file is clearly right there. Is there something I'm missing about the nuances of syntax or parent/child process workspaces? Magic?
UPDATE1: Added info indicating that the script may have been developed in ksh, while I'm using bash.
UPDATE2: Added checking to verify $PATH is the same.
UPDATE3: The script says it was written for ksh, but it is running in bash. In response to Kenster's answer, I found that running bash --posix and then . B fails at the command line. That indicates that the difference between the command-line and script environments is that the latter is running bash in POSIX-compliant mode, whereas the command line is not. Looking a little closer, I see this in the bash man page:
When invoked as sh, bash enters posix mode after the startup files are read.
The shebang for A is indeed #!/bin/sh.
In summary, when I run A without dot space syntax, it forks into its own subshell, which is in POSIX-compliant mode because the shebang is #!/bin/sh (instead of, e.g., #!/bin/bash). This is the critical difference between the command line and script runtime environments that leads to A being unable to find B.
Let's start with how the command path works and when it's used. When you run a command like:
ls /tmp
The ls here doesn't contain a / character, so the shell searches the directories in your command path (the value of the PATH environment variable) for a file named ls. If it finds one, it executes that file. In the case of ls, it's usually in /bin or /usr/bin, and both of those directories are typically in your path.
When you issue a command with a / in the command word:
/bin/ls /tmp
The shell doesn't search the command path. It looks specifically for the file /bin/ls and executes that.
Running ./A is an example of running a command with a / in its name. The shell doesn't search the command path; it looks specifically for the file named ./A and executes that. "." is shorthand for your current working directory, so ./A refers to a file that ought to be in your current working directory. If the file exists, it's run like any other command. For example:
cd /bin
./ls
would work to run /bin/ls.
Running . A is an example of sourcing a file. The file being sourced must be a text file containing shell commands. It is executed by the current shell, without starting a new process. The file to be sourced is found in the same way that commands are found. If the name of the file contains a /, then the shell reads the specific file that you named. If the name of the file doesn't contain a /, then the shell looks for it in the command path.
. A # Looks for A using the command path, so might source /bin/A for example
. ./A # Specifically sources ./A
So, your script tries to execute . B and fails claiming that B doesn't exist, even though there's a file named B right there in your current directory. As discussed above, the shell would have searched your command path for B because B didn't contain any / characters. When searching for a command, the shell doesn't automatically search the current directory. It only searches the current directory if that directory is part of the command path.
In short, . B is probably failing because you don't have "." (current directory) in your command path, and the script which is trying to source B is assuming that "." is part of your path. In my opinion, this is a bug in the script. Lots of people run without "." in their path, and the script shouldn't depend on that.
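Under that assumption, a minimal fix inside A would be to source B by an explicit path, so that no PATH lookup is involved at all:
. ./B # source B from the current directory, regardless of PATH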
Edit:
You say the script uses ksh, while you are using bash. Ksh follows the POSIX standard (in fact, ksh was the basis for the POSIX standard) and always searches the command path as I described. Bash has a flag called "POSIX mode" which controls how strictly it follows the POSIX standard. When not in POSIX mode, which is how people generally use it, bash will check the current directory for the file to be sourced if it doesn't find the file in the command path.
If you were to run bash --posix and run . B within that bash instance, you should find that it won't work.
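You can reproduce the difference directly; a hypothetical demo, assuming a file named B sits in the current directory and "." is not in your PATH:
$ echo 'echo "B was sourced"' > B
$ bash -c '. B' # default mode falls back to the current directory
B was sourced
$ bash --posix -c '. B' # POSIX mode searches only the PATH, so this fails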

Commands work from Shell script but not from command line?

I quickly searched for this before posting, but could not find any similar posts. Let me know if they exist.
The commands being executed seem very simple. A directory listing is used as the input for a function.
The directory contains a bunch of files named "epi1_mcf_0###.nii.gz"
Command-line version (bash is running when this is executed):
fslmerge -t output_file `ls epi1_mcf_0*.nii.gz`
Shell script version:
#!/bin/bash
fslmerge -t output_file `ls epi1_mcf_0*.nii.gz`
The command-line version fails, but the shell script one works perfectly.
The error message is specific to the function, but it's included anyway.
** ERROR (nifti_image_read): failed to find header file for 'epi1_mcf_0000.nii.gz'
** ERROR: nifti_image_open(epi1_mcf_0000.nii.gz): bad header info
Error: failed to open file epi1_mcf_0000.nii.gz
Cannot open volume epi1_mcf_0000.nii.gz for reading!
I have been very frustrated with this problem (less so after I figured out that there was a way to get the command to work).
Any help would be appreciated.
(Or is the general consensus that the problem should be looked for in the "fslmerge" function?)
`ls epi1_mcf_0*.nii.gz` is better written as simply epi1_mcf_0*.nii.gz. As in:
fslmerge -t output_file epi1_mcf_0*.nii.gz
The `ls` doesn't add anything.
Note: Posted as an answer instead of comment. The Markdown-lite comment parser choked on my `` `ls epi1_mcf_0*.nii.gz` `` markup.
(I mentioned this in a comment first, but I'll make an answer since it helped!)
Do you have any shell aliases defined? (Type alias) Those will affect commands typed at the command line, but not scripts.
Linux often has ls defined as ls --color. This may affect the output since the colour codes are sent as escape codes through the regular output stream. If you use ls --color=auto it will auto-detect whether its output is a terminal or not. From man ls:
By default, color is not used to distinguish types of files. That is equivalent to using --color=none. Using the --color option without the optional WHEN argument is equivalent to using --color=always. With --color=auto, color codes are output only if standard output is connected to a terminal (tty).
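You can check for an alias and bypass it for a single invocation; for instance:
$ alias # list all defined aliases
$ type ls # e.g. "ls is aliased to `ls --color=auto'"
$ \ls epi1_mcf_0*.nii.gz # a leading backslash suppresses alias expansion for this one call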

How can I define a bash alias as a sequence of multiple commands? [duplicate]

This question already has answers here:
Multiple commands in an alias for bash
(10 answers)
Closed 4 years ago.
I know how to configure aliases in bash, but is there a way to configure an alias for a sequence of commands?
I.e say I want one command to change to a particular directory, then run another command.
In addition, is there a way to setup a command that runs "sudo mycommand", then enters the password? In the MS-DOS days I'd be looking for a .bat file but I'm unsure of the linux (or in this case Mac OSX) equivalent.
For chaining a sequence of commands, try this:
alias x='command1;command2;command3;'
Or you can do this:
alias x='command1 && command2 && command3'
The && makes it execute subsequent commands only if the previous one returns successfully.
Also for entering passwords interactively, or interfacing with other programs like that, check out expect. (http://expect.nist.gov/)
You mention BAT files so perhaps what you want is to write a shell script. If so then just enter the commands you want line-by-line into a file like so:
command1
command2
and ask bash to execute the file:
bash myscript.sh
If you want to be able to invoke the script directly without typing "bash" then add the following line as the first line of the file:
#! /bin/bash
command1
command2
Then mark the file as executable:
chmod 755 myscript.sh
Now you can run it just like any other executable:
./myscript.sh
Note that unix doesn't really care about file extensions. You can simply name the file "myscript" without the ".sh" extension if you like. It's that special first line that is important. For example, if you want to write your script in the Perl programming language instead of bash, the first line would be:
#! /usr/bin/perl
That first line tells your shell what interpreter to invoke to execute your script.
Also, if you now copy your script into one of the directories listed in the $PATH environment variable then you can call it from anywhere by simply typing its file name:
myscript.sh
Even tab-completion works. Which is why I usually include a ~/bin directory in my $PATH so that I can easily install personal scripts. And best of all, once you have a bunch of personal scripts that you are used to having you can easily port them to any new unix machine by copying your personal ~/bin directory.
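A minimal sketch of that setup (the exact startup file varies by system):
mkdir -p ~/bin
echo 'export PATH="$HOME/bin:$PATH"' >> ~/.bashrc # picked up by every new shell
cp myscript.sh ~/bin/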
It's probably easier to define functions for these types of things than aliases; it keeps things more readable if you want to do more than a command or two:
In your .bashrc
perform_my_command() {
    pushd /some_dir    # remember the current directory and switch to the target
    my_command "$@"    # forward whatever arguments the function received
    popd               # return to the original directory
}
Then on the command line you can simply do:
perform_my_command my_parameter my_other_parameter "my quoted parameter"
You could do anything you like in a function, call other functions, etc.
You may want to have a look at the Advanced Bash Scripting Guide for in depth knowledge.
For the alias you can use this:
alias sequence='command1 -args; command2 -args;'
or if the second command must be executed only if the first one succeeds use:
alias sequence='command1 -args && command2 -args'
Your best bet is probably a shell function instead of an alias if the logic becomes more complex or if you need to add parameters (note that bash aliases do not take parameters, but functions do).
This function can be defined in your .profile or .bashrc. The subshell is to avoid changing your working directory.
function myfunc {
( cd /tmp; command )
}
then from your command prompt
$ myfunc
For your second question you can just add your command to /etc/sudoers (if you are completely sure of what you are doing)
myuser ALL = NOPASSWD: \
/bin/mycommand
Apropos multiple commands in a single alias, you can use one of the logical operators to combine them. Here's one to switch to a directory and do an ls on it
alias x="cd /tmp && ls -al"
Another option is to use a shell function. These are sh/zsh/bash commands. I don't know enough of other shells to be sure if they work.
As for the sudo thing, if you want that (although I don't think it's a good idea), the right way to go is to alter the /etc/sudoers file to get what you want.
You can embed the function declaration followed by the function in the alias itself, like so:
alias my_alias='f() { do_stuff_with "$@" ...; }; f'
The benefit of this approach over just declaring the function by itself is that you can have peace of mind that your function is not going to be overridden by some other script you're sourcing (or using .), which might use its own helper under the same name.
E.g., suppose you have a script init-my-workspace.sh that you're calling like . init-my-workspace.sh or source init-my-workspace.sh whose purpose is to set or export a bunch of environment variables (e.g., JAVA_HOME, PYTHON_PATH etc.). If you happen to have a function my_alias inside there as well, then you're out of luck, as the latest function declaration within the same shell instance wins.
Conversely, aliases have separate namespace and even in case of name clash, they are looked up first. Therefore, for customization relevant to interactive usage, you should only ever use aliases.
Finally, note that the practice of putting all the aliases in the same place (e.g., ~/.bash_aliases) enables you to easily spot any name clashes.
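To spot a clash for a particular name, type -a lists every alias, function, and file that would match; for example:
$ type -a ls
ls is aliased to `ls --color=auto'
ls is /bin/ls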
You can also write a shell function; for example, one that combines cd and ls, as sketched below.
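A minimal sketch of such a combo (the function name cl is arbitrary):
cl() {
    cd "$1" && ls -al
}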
