bash prefix output with current directory - linux

After a couple of hours of flailing, I give up.
I have a long-running build script in a bash function, let's call it "Build":
function Build() {
    cd /plugins
    make
    cd /acc
    make
}
... and I would like the output from the script to be prefixed on each line with the current directory, something like:
/plugins: configure: checking for gcc
/plugins/inner: making inner modules
...
/acc: configure: checking for speed of strstr.
/acc/misc: making misc pieces
.... and so on. There are dozens of "cd"s in the scripts, so I don't want to change each and every one. I've tried various combinations of piping to awk and trap DEBUG, but no joy.
Any ideas?

You can try overriding the built-in cd with a function like:
cd() { builtin cd "$@" && pwd; }
Or, if you really want to instrument every command, set a DEBUG trap (the single quotes matter, so that $PWD is expanded when the trap fires rather than when it is set):
trap 'echo -n "$PWD: "' DEBUG
Or to trace really every chdir syscall even inside the child processes of the actual bash script (e.g. make), you can use strace:
strace -f -e trace=chdir bash build.sh
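If labeling each build step's output is enough, here is a minimal sketch combining the ideas above, using the directories from the Build function in the question. Note it only tags lines with the top-level directory; it cannot see directories that make itself descends into:
Build() {
    for dir in /plugins /acc; do
        # run each step in its own subshell and prefix all of its output
        ( cd "$dir" && make 2>&1 | sed "s|^|$dir: |" )
    done
}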

Related

How can I run executable in a different folder with a bash script?

I have a program, a.out, that will be set up in a sequence of folders; each folder gets an a.out and produces some results in that folder. I am trying to execute these programs in parallel. If I am in the folder, I just do ./a.out and it runs. I must execute a.out in its folder because a.out looks for a file inside the current directory; if I am not in its folder, it won't find that file.
Running these programs is part of another job that is based in the rootDir. I am using MATLAB, so I am trying to avoid using cd inside MATLAB, since that would recompile the MATLAB code every time I use cd and greatly slow down the code.
I use the MATLAB code to write a CallParallel.sh; in it I have this line:
for i in "${JobsOnThisNode[@]}"; do echo "$i"; done | xargs -n1 -P ${SLURM_NTASKS_PER_NODE} sh -c '"$1"' sh;
For each batch of parallel jobs, $1 basically gets a command like the following, with $jname and $cname incremented:
cd /mnt/home/thrust2/zf1005/Matlab/GAFit/RunningFolder/$jname/$cname/ && ./a.out
I have tested this code from the rootDir and it successfully runs this program in the other folder. However, when I execute it in the bash script, I get the following errors:
sh: /mnt/home/thrust2/zf1005/Matlab/GAFit/RunningFolder/1/1/: Is a directory
sh: &&: command not found
sh: ./a.out: No such file or directory
If I am understanding it correctly, && is somehow not recognized, and cd somehow only checks whether the path is a directory instead of actually changing to it; as a result, there is no a.out to be found in the rootDir.
When I try this:
sh '"cd /mnt/home/thrust2/zf1005/Matlab/GAFit/RunningFolder/1/1"'
I get:
sh: "cd /mnt/home/thrust2/zf1005/Matlab/GAFit/RunningFolder/1/1": No such file or directory
sh '"cd /mnt/home/thrust2/zf1005/Matlab/GAFit/RunningFolder/1/1"'
means, interpret "cd /mnt/home/thrust2/zf1005/Matlab/GAFit/RunningFolder/1/1" using sh, which does not exist.
to simplify things you can create a runner script that takes the dir as an argument
#! /bin/bash
cd "$1" && /path/to/a.out
and then invoke runner from xargs.
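For instance, a sketch adapted from your pipeline, assuming the script above is saved as runner.sh and JobsOnThisNode now holds just the directory paths:
for i in "${JobsOnThisNode[@]}"; do echo "$i"; done | xargs -n1 -P "$SLURM_NTASKS_PER_NODE" ./runner.sh
Each path piped in becomes the $1 that runner.sh changes into before launching a.out.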
BTW, you need only one a.out, not one per directory.

Shell script hangs when I switch to bash - Linux [duplicate]

This question already has answers here: Pass commands as input to another command (su, ssh, sh, etc) (3 answers). Closed 6 years ago.
I'm very, very new to Linux (coming from Windows) and trying to write a script that I can hopefully execute over multiple systems. I tried to use Python for this but found it hard too. Here is what I have so far:
cd /bin
bash
source compilervars.sh intel64
cd ~
exit #exit bash
file= "~/a.out"
if[! -f "$file"]
then
icc code.c
fi
#run some commands here...
The script hangs at the second line (bash). I'm not sure how to fix that or if I'm doing it wrong. Please advise.
Also, any tips on how to run this script on multiple systems on the same network?
Thanks a lot.
What I believe you'd want to do:
#!/bin/bash
source /bin/compilervars.sh intel64
file="$HOME/a.out"
if [ ! -f "$file" ]; then
icc code.c
fi
You would put this in a file and make it executable with chmod +x myscript. Then you would run it with ./myscript. Alternatively, you could just run it with bash myscript.
Your script makes little sense. The second line will open a new bash session, but it will just sit there until you exit it. Also, changing directories back and forth is very seldom required. To execute a single command in another directory, one usually does
( cd /other/place && mycommand )
The ( ... ) tells the shell that you'd like to do this in a sub-shell. The cd happens within that sub-shell and you don't have to cd back after it's done. If the cd fails, the command will not be run.
For example: You might want to make sure you're in $HOME when you compile the code:
if [ ! -f "$file" ]; then
( cd "$HOME" && icc code.c )
fi
... or even pick out the directory name from the variable file and use that:
if [ ! -f "$file" ]; then
( cd "$(dirname "$file")" && icc code.c )
fi
Assigning to a variable needs to happen as I wrote it, without spaces around the =.
Likewise, there needs to be spaces after if and inside [ ... ] as I wrote it above.
I also tend to use $HOME rather than ~ in scripts as it's more descriptive.
A shell script isn't a record of key strokes which are typed into a terminal. If you write a script like this:
command1
bash
command2
it does not mean that the script will switch to bash and then execute command2 in the different shell. It means that bash will be run. If there is a controlling terminal, that bash will show you a prompt and wait for a command to be typed in. You will have to type exit to quit that bash. Only then will the original script continue with command2.
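If the intent is to run a few commands under a different shell from within a script, the usual approach is to pass them as input instead, for example via a here-document. A sketch using the paths from the question (note that any environment changes vanish when the inner bash exits):
bash <<'EOF'
source /bin/compilervars.sh intel64
icc code.c
EOF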
There is no way to switch a script to a different shell halfway through. There are ways to simulate this. A script can re-execute itself using a different shell. In order to do that, the script has to contain logic to detect that it is being re-executed, so that it can prevent re-executing itself again, and to skip some code that shouldn't be run twice.
In this script, I implemented such a re-execution hack. It consists of these lines:
#
# The #!/bin/sh might be some legacy piece of crap,
# not even up to 1990 POSIX.2 spec. So the first step
# is to look for a better shell in some known places
# and re-execute ourselves with that interpreter.
#
if test x$txr_shell = x ; then
for shell in /bin/bash /usr/bin/bash /usr/xpg4/bin/sh ; do
if test -x $shell ; then
txr_shell=$shell
break
fi
done
if test x$txr_shell = x ; then
echo "No known POSIX shell found: falling back on /bin/sh, which may not work"
txr_shell=/bin/sh
fi
export txr_shell
exec $txr_shell $0 ${@+"$@"}
fi
The txr_shell variable (not a standard variable, my invention) is how this logic detects that it's been re-executed. If the variable doesn't exist then this is the original execution. When we re-execute we export txr_shell so the re-executed instance will then have this environment variable.
The variable also holds the path to the shell; that is used later in the script; it is passed through to a Makefile as the SHELL variable, so that make build recipes use that same shell. In the above logic, the contents of txr_shell don't matter; it's used as Boolean: either it exists or it doesn't.
The programming style in the above code snippet is deliberately coded to work on very old shells. That is why test x$txr_shell = x is used instead of the modern syntax [ -z "$txr_shell" ], and why ${@+"$@"} is used instead of just "$@".
This style is no longer used after this point in the script, because the rest of the script runs in some good, reasonably modern shell thanks to the re-execution trick.

cd && ls | grep: How to execute a command in the current shell and pass the output

I created an alias in order not to write ls every time I move into a new directory:
alias cl='cd_(){ cd "$@" && ls; }; cd_'
Let us say I have a folder named "Downloads" (which of course I happen to have) so I just type the following in the terminal:
cl Downloads
Now I will find myself in the "Downloads" folder and receive a list of the stuff I have in the folder, like say: example.txt, hack.hs, picture.jpg,...
If I want to move to a directory and look if there is, say, hack.hs I could try something like this:
cl Downloads | grep hack
What I get is just the output:
hack.hs
But I will remain in the folder I was (which means I am not in Downloads).
I understand this happens because each part of a pipeline is executed in its own subshell: cd Downloads && ls runs in a subshell of its own, and its output (the list of files) is redirected via the pipe to grep. This is why I then am not in the new folder.
My question is the following:
How do I do it in order to be able to write something like "cl Downloads | grep hack" and get the "hack"-greped list of stuff AND be in the Downloads folder?
Thank you very much,
Pol
For anyone ever googling this:
A quick fix was proposed by @gniourf_gniourf:
cl Downloads > >(grep hack)
Some marked this question as a possible duplicate of Make a bash alias that takes a parameter, but the fact that my bash alias already takes arguments shows that this is not the case. The problem at hand was how to execute a command in the current shell while at the same time redirecting the output to another command.
As you're aware (and as is covered in BashFAQ #24), the reason
{ cd "$@" && ls; } | grep ...
...prevents the results of cd being visible in the outer shell is that no component of a pipeline is guaranteed by POSIX to be run in the outer shell. (Some shells, including ksh [out-of-the-box] and very modern bash with the non-default lastpipe option enabled, will occasionally or optionally run the last piece of a pipeline in the parent shell, but this can't portably be relied on.)
A way to avoid this, that's applicable to all POSIX shells, is to direct output to a named pipe, thus avoiding setting up a pipeline:
mkfifo mypipe
grep ... <mypipe &
{ cd "$@" && ls; } >mypipe
In modern ksh and bash, there's a shorter syntax that will do this for you -- using /dev/fd entries instead of setting up a named pipe if the operating system provides that facility:
{ cd "$@" && ls; } > >(grep ...)
In this case, >(grep ...) is replaced with a filename that points to either a FIFO or a /dev/fd entry that, when written to by the process in question, redirects output to grep -- but without a pipeline.
By the way -- I really do hope your use of ls in this manner is only an example. The output of ls is not well-specified for the full range of possible filenames, so grepping it is innately unreliable. Consider using printf '%s\0' * to emit a NUL-delimited list of non-hidden names in a directory if you really do want a streamed result, or using glob expressions to check for files matching a specific pattern (BashFAQ #4 covers a similar scenario); extglobs are available if you need something closer to full regex matching than POSIX patterns provide.
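For example, a glob-based check that avoids parsing ls entirely might look like this (a sketch; hack* is a placeholder pattern):
cl Downloads
for f in hack*; do
    # when nothing matches, the unexpanded pattern fails the -e test
    [ -e "$f" ] && printf '%s\n' "$f"
done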

Bash: Only allow script to run by being called from another script

We have two bash scripts to start up an application. The first one (Start-App.sh) sets up the environment and the second (startup.sh) is from a third party that we are trying not to edit heavily. If someone runs the second script before the first, the application does not come up correctly.
Is there a way to ensure that the startup.sh can only be called from the Start-App.sh script?
They are both in the same directory and run via bash on Red Hat Linux.
Is there a way to ensure that the startup.sh can only be called from the Start-App.sh script?
Ensure? No. And even less so without editing startup.sh at all. But you can get fairly close.
Below are three suggestions; you can use one of them, or any combination of them.
The simplest, and probably the best, way is to add a single line at the top of startup.sh:
[ -z "$CALLED_FROM_START_APP" ] && { echo "Not called from Start-App.sh"; exit 42; }
And then call it from Start-App.sh like so:
export CALLED_FROM_START_APP=yes
sh startup.sh
Of course, you can set this environment variable from the shell yourself, so it won't actually ensure anything, but I hope your engineering staff is mature enough not to do this.
You can also remove the execute permissions from startup.sh:
$ chmod a-x startup.sh
This will not prevent people from using sh startup.sh, so there is only a very small guarantee here; but it might prevent auto-completion oopsies, and it will mark the file as "not intended to be executed": if I see a directory with only one executable .sh file, I'll try to run that one, and not one of the others.
Lastly, you could perhaps rename the startup.sh script; for example, you could rename it to do_not_run, or "hide" it by renaming it to .startup. This probably won't interfere with the operation of this script (although I can't check this).
TL;DR:
[ "$(basename "$0")" = "Start-App.sh" ] || exit
Explanation
As with all the other solutions presented, it's not 100% bulletproof, but this covers the most common instances I've come across of accidentally running a script directly as opposed to calling it from another script.
Unlike other approaches presented, this approach:
doesn't rely on manually set file names for each included/sourced script (i.e. is resilient to file name changes)
behaves consistently across all major *nix distros that ship with bash
introduces no unnecessary environment variables
isn't tied to a single parent script
prevents running the script through calling bash explicitly (e.g. bash myscript.sh)
The basic idea is having something like this at the top of your script:
[ "$(basename "$0")" = "$(basename "$BASH_SOURCE")" ] && exit
$0 returns the name of the script at the beginning of the execution chain
$BASH_SOURCE will always point to the file the currently executing code resides in (or empty if no file e.g. piping text directly to bash)
basename returns only the main file name without any directory information (e.g. basename "/user/foo/example.sh" will return example.sh). This is important so you don't get false negatives from comparing example.sh and ./example.sh for example.
To adapt this to only allow running when sourced from one specific file as in your question and provide a helpful error message to the end user, you could use:
[ "$(basename "$0")" = "Start-App.sh" ] || { echo "[ERROR] To start MyApplication please run ./Start-App.sh"; exit 1; }
As mentioned from the start of the answer, this is not intended as a serious security measure of any kind, but I'm guessing that's not what you're looking for anyway.
You can make startup.sh non-executable by typing chmod -x startup.sh. That way the user would not be able to run it simply by typing ./startup.sh.
Then from Start-App.sh, call your script by explicitly invoking the shell:
sh ./startup.sh arg1 arg2 ...
or
bash ./startup.sh arg1 arg2 ...
You can check which shell it's supposed to run in by inspecting the first line of startup.sh, it should look like:
#!/bin/bash
You can set an environment variable in your first script, and then check in the second script that the variable is set properly before proceeding.
Another alternative is checking the parent process to find the calling script. This also needs some code added to the second script.
For example, the called script can run the following and terminate if it exits non-zero (replace parent with the calling script's name):
ps $PPID | tail -1 | awk '$NF!~/parent/{exit 1}'
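A more explicit variant of the same idea (a sketch; it assumes the parent's command line contains Start-App.sh and that nothing re-parents the process in between):
# in startup.sh: refuse to run unless launched by Start-App.sh
if ! ps -o args= $PPID | grep -q "Start-App.sh"; then
    echo "startup.sh must be called from Start-App.sh" >&2
    exit 1
fi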
As others have pointed out, the short answer is "no"; you can play with permissions all day, but that is still not bulletproof. Since you said you don't mind editing (just not heavily editing) the second script, the best way to accomplish this would be something along the lines of:
1) In the parent/first script, export an environment variable with its PID. This becomes the parent PID. For example,
# bash store parent pid
export FIRST_SCRIPT_PID=$$
2) Then, very briefly, in the second script, check whether the calling PID matches the known acceptable parent PID. For example,
# confirm calling pid
if [ "$PPID" != "$FIRST_SCRIPT_PID" ]; then
exit 1
fi
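Putting the two pieces together (a sketch; file names are taken from the question):
# Start-App.sh
export FIRST_SCRIPT_PID=$$
bash ./startup.sh

# at the top of startup.sh
if [ "$PPID" != "$FIRST_SCRIPT_PID" ]; then
    echo "Run Start-App.sh instead." >&2
    exit 1
fi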
To recap: the most direct way to do this is adding at least a minimal line or two to the second script, which hopefully doesn't count as "heavily editing".
You can create a script, let's call it check-if-my-env-set containing
#! /bin/bash
source Start-App.sh
exec /bin/bash "$@"
and replace the shebang on startup.sh with the path to that script, so that its first line
#! /bin/bash
becomes
#! /abs/path/to/check-if-my-env-set
Then, every time you run startup.sh, it will ensure the environment is set correctly first.
To the best of my knowledge, there is no way to do this in a way that it would be impossible to get around it.
However, you could stop most attempts by using permissions.
Change the owner of the startup.sh file:
sudo chown app_specific_user startup.sh
Make startup.sh executable by the owner only:
chmod u+x,go-x startup.sh
Run startup.sh as the app_specific_user from Start-App.sh:
sudo -u app_specific_user ./startup.sh

How do I run a program with a different working directory from current, from Linux shell?

Using a Linux shell, how do I start a program with a different working directory from the current working directory?
For example, I have a binary file helloworld that creates the file hello-world.txt in the current directory.
This file is inside of directory /a.
Currently, I am in the directory /b. I want to start my program by running ../a/helloworld and have the hello-world.txt end up in a third directory, /c.
Call the program like this:
(cd /c; /a/helloworld)
The parentheses cause a sub-shell to be spawned. This sub-shell then changes its working directory to /c, then executes helloworld from /a. After the program exits, the sub-shell terminates, returning you to your prompt of the parent shell, in the directory you started from.
Error handling: To avoid running the program without having changed the directory, e.g. when having misspelled /c, make the execution of helloworld conditional:
(cd /c && /a/helloworld)
Reducing memory usage: To avoid having the subshell waste memory while helloworld executes, call helloworld via exec:
(cd /c && exec /a/helloworld)
[Thanks to Josh and Juliano for giving tips on improving this answer!]
Similar to David Schmitt's answer, plus Josh's suggestion, but doesn't leave a shell process running:
(cd /c && exec /a/helloworld)
This way is more similar to how you usually run commands on the shell. To see the practical difference, you have to run ps ef from another shell with each solution.
An option which doesn't require a subshell and is built into bash:
pushd SOME_PATH && run_stuff; popd
Demo:
$ pwd
/home/abhijit
$ pushd /tmp # directory changed
$ pwd
/tmp
$ popd
$ pwd
/home/abhijit
sh -c 'cd /c && ../a/helloworld'
Just change the last "&&" into ";" and it will cd back no matter if the command fails or succeeds:
cd SOME_PATH && run_some_command ; cd -
I always think UNIX tools should be written as filters, reading input from stdin and writing output to stdout. If possible, you could change your helloworld binary to write the contents of the text file to stdout rather than to a specific file. That way you can use the shell to write your file anywhere.
$ cd ~/b
$ ~/a/helloworld > ~/c/helloworld.txt
Why not keep it simple:
cd SOME_PATH && run_some_command && cd -
The last cd command will take you back to the previous working directory. This should work on all *nix systems.
One way to do that is to create a wrapper shell script.
The wrapper would change the current directory to /c, then run /a/helloworld. Once the wrapper exits, you are still in /b, because the wrapper's cd only affected its own process.
Here's a bash shell script example:
#!/bin/bash
cd /c
/a/helloworld
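A slightly more robust variant of the same wrapper (a sketch combining the exec and error-handling tips from the earlier answers; it also forwards any arguments to the program):
#!/bin/bash
# abort if /c is missing, then replace the wrapper process with the program
cd /c || exit 1
exec /a/helloworld "$@"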
If you always want it to go to /c, use an absolute path when you write the file.
If you want to perform this inside your program then I would do something like:
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    if (chdir("/c") < 0)
    {
        perror("chdir");
        return -1;
    }
    /* rest of your program... */
    return 0;
}
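Hypothetical usage, assuming the file above is saved as helloworld.c:
cc -o helloworld helloworld.c
./helloworld    # hello-world.txt is created in /c, regardless of where you run this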
From the current directory, provide the full path to the script to execute it:
/root/server/user/home/bin/script.sh
