When I write the script like this:
#!/bin/sh
pidof MX-CM48 | xargs kill
if [ -f /home/root/MX-CM48NEW ]; then
mv /home/root/MX-CM48NEW /home/root/MX-CM48
chmod 777 /home/root/MX-CM48
fi
cd /home/root
./MX-CM48 &
the script works.
But when I try to write it like this:
#!/bin/sh
NEW_FILE="/home/root/MX-CM48NEW"
OLD_FILE="/home/root/MX-CM48"
PATH="/home/root"
APP_NAME="MX-CM48"
pidof $APP_NAME | xargs kill
if [ -f $NEW_FILE ]; then
mv $NEW_FILE $OLD_FILE
chmod 777 $OLD_FILE
fi
cd $PATH
./$APP_NAME &
the pidof and the if commands do not work.
The PATH environment variable defines where the system looks for the executables corresponding to commands. It's a colon-delimited list of directories that contain executable files, something like /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin. So when you use a command like mv, the system looks for a file named "mv" in /usr/local/bin, then in /usr/bin, then in /bin etc.
When you set PATH to "/home/root" in the script, that then becomes the only place the system will look for command executables. I presume that's not what you wanted at all.
The solution is simple: use a different name for your variable. In fact, it's best to use lower- or mixed-case names for your own variables, because there are a lot of all-caps variable names with some sort of special meaning, and it's hard to remember and avoid all of them.
BTW, you should (almost) always double-quote variable references, e.g. pidof "$APP_NAME" to avoid unexpected weirdness from the way the shell parses unquoted variables. Finally, when you use cd in a script, you should always check for an error; otherwise if the cd command fails, the rest of the script will blindly continue executing in the wrong place.
cd "$path" || {
echo "Error moving to $path directory; giving up" >&2
exit 1
}
"./$app_name" &
Unless you copied all commands to /home/root, it is pretty clear what is wrong.
PATH="/home/root"
should (probably) be:
PATH="$PATH:/home/root"
or use full path-names, like in:
/usr/bin/pidof $APP_NAME | /usr/bin/xargs /bin/kill
I've been trying to put together a function that combines mkdir and cd. This is what I've been using:
#!/bin/bash
mk(){
mkdir "$1" && cd "$1"
}
mk $1
However, when I run the script using ./mker.sh test, it'll create the directory but won't change into it. I'm brand new to Bash, so I'm really at a loss as to why that part doesn't work. It doesn't return an error to the command line either.
What's the issue here? Thanks!
When working in / with bash, there is usually no need to cd (it's actually considered "poor form"). cd is "meant" for command-line usage -- though it will work in some cases programmatically, it's easiest and best practice to use the full path to the directory when working with it rather than trying to change directory into it.
Simply "use" the full directory to do whatever you intend on "doing" with it .. IE
mkdir "$1" && echo "test" > $1/test.txt
NOTE
In case I read your question wrong and you actually want the calling shell to change directory: that is a little trickier. The sub-shell (or bash script) has its own notion of the current directory, so no matter "where" you tell it to cd, the change only applies to that sub-shell and not to the main shell (or command line). One way to get around this is to use an alias:
alias dir="cd $M1"
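A more usual way around this (a sketch, not part of the answer above) is to define the function in the interactive shell itself, for example in ~/.bashrc, or to source the script instead of executing it, so that the cd runs in your current shell:
# In ~/.bashrc (or any file you source), define the function once:
mk() {
    mkdir "$1" && cd "$1"
}
# Then, used interactively, the cd affects your current shell:
mk test

# Alternatively, source the existing script instead of executing it:
. ./mker.sh test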
I made a simple script in Linux bash, just like below:
#!/bin/bash
PATH=/tmp_with_zip_files
FILETYPE=zip
i=1
for filename in $PATH/*.$FILETYPE;
do
echo "rm $filename";
if [ -f $filename ];
then rm $filename;
fi
i=$((i+1))
done
echo "$i files removed"
But when I run the script I get an error; the script doesn't work correctly. From the console I get this message:
zip_delete.sh: line 11: rm: command not found
Why does the bash script not recognize the Linux command rm?
I think it's because you're overwriting the default $PATH variable (which is the variable that tells bash where to look for executables). During execution, it can't find the rm program because PATH points only to /tmp_with_zip_files.
Use a different variable name for your purposes, like chicken_nuggets.
WARNING: don't do the following: PATH=$PATH:/tmp_with_zip_files -- you could end up deleting a bunch of things from PATH, and that would be a real pain.
The PATH variable holds the search path for OS commands (like rm); don't use it as your own variable. Name it something else, like path_to_files.
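For reference, a corrected version of the script might look like this (a sketch; path_to_files and filetype are just example names):
#!/bin/bash
path_to_files=/tmp_with_zip_files
filetype=zip
i=0
for filename in "$path_to_files"/*."$filetype"; do
    echo "rm $filename"
    if [ -f "$filename" ]; then
        rm "$filename"
        i=$((i+1))
    fi
done
echo "$i files removed"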
I'm very, very new to Linux (coming from Windows) and am trying to write a script that I can hopefully execute over multiple systems. I tried to use Python for this but found it hard too. Here is what I have so far:
cd /bin
bash
source compilervars.sh intel64
cd ~
exit #exit bash
file= "~/a.out"
if[! -f "$file"]
then
icc code.c
fi
#run some commands here...
The script hangs on the second line (bash). I'm not sure how to fix that or if I'm doing it wrong. Please advise.
Also, any tips on how to run this script over multiple systems on the same network?
Thanks a lot.
What I believe you'd want to do:
#!/bin/bash
source /bin/compilervars.sh intel64
file="$HOME/a.out"
if [ ! -f "$file" ]; then
icc code.c
fi
You would put this in a file and make it executable with chmod +x myscript. Then you would run it with ./myscript. Alternatively, you could just run it with bash myscript.
Your script makes little sense. The second line will open a new bash session, but it will just sit there until you exit it. Also, changing directories back and forth is very seldom required. To execute a single command in another directory, one usually does
( cd /other/place && mycommand )
The ( ... ) tells the shell that you'd like to do this in a sub-shell. The cd happens within that sub-shell and you don't have to cd back after it's done. If the cd fails, the command will not be run.
For example: You might want to make sure you're in $HOME when you compile the code:
if [ ! -f "$file" ]; then
( cd $HOME && icc code.c )
fi
... or even pick out the directory name from the variable file and use that:
if [ -f "$file" ]; then
( cd $(dirname "$file") && icc code.c )
fi
Assigning to a variable needs to happen as I wrote it, without spaces around the =.
Likewise, there need to be spaces after if and inside [ ... ], as I wrote above.
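To make both fixes concrete, here is a small before/after sketch of the two offending lines:
file= "~/a.out"             # wrong: the space makes the shell treat "~/a.out" as a command to run
file="$HOME/a.out"          # right: no spaces around =

if[! -f "$file"]            # wrong: "if[" is read as a single word, so the if keyword is not recognized
if [ ! -f "$file" ]; then   # right: spaces after "if" and around the brackets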
I also tend to use $HOME rather than ~ in scripts as it's more descriptive.
A shell script isn't a record of key strokes which are typed into a terminal. If you write a script like this:
command1
bash
command2
it does not mean that the script will switch to bash, and then execute command2 in the different shell. It means that bash will be run. If there is a controlling terminal, that bash will show you a prompt and wait for a command to be typed in. You will have to type exit to quit that bash. Only then will the original script continue with command2.
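If the goal was just to run a couple of commands under a different shell, one option (a sketch, not taken from the question's script) is to feed those commands to that shell on its standard input with a here-document:
#!/bin/sh
# Run these two commands inside a separate bash process.
bash <<'EOF'
source /bin/compilervars.sh intel64
icc code.c
EOF
# Execution continues here, back in the original /bin/sh script.
Note that this does not switch the script itself to bash; it only starts a child bash, hands it that input, and carries on when the child exits.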
There is no way to switch a script to a different shell halfway through. There are ways to simulate this. A script can re-execute itself using a different shell. In order to do that, the script has to contain logic to detect that it is being re-executed, so that it can prevent re-executing itself again, and to skip some code that shouldn't be run twice.
In this script, I implemented such a re-execution hack. It consists of these lines:
#
# The #!/bin/sh might be some legacy piece of crap,
# not even up to 1990 POSIX.2 spec. So the first step
# is to look for a better shell in some known places
# and re-execute ourselves with that interpreter.
#
if test x$txr_shell = x ; then
for shell in /bin/bash /usr/bin/bash /usr/xpg4/bin/sh ; do
if test -x $shell ; then
txr_shell=$shell
break
fi
done
if test x$txr_shell = x ; then
echo "No known POSIX shell found: falling back on /bin/sh, which may not work"
txr_shell=/bin/sh
fi
export txr_shell
exec $txr_shell $0 ${@+"$@"}
fi
The txr_shell variable (not a standard variable, my invention) is how this logic detects that it's been re-executed. If the variable doesn't exist then this is the original execution. When we re-execute we export txr_shell so the re-executed instance will then have this environment variable.
The variable also holds the path to the shell; that path is used later in the script, where it is passed through to a Makefile as the SHELL variable, so that make build recipes use that same shell. In the above logic, the contents of txr_shell don't matter; it's used as a Boolean: either it exists or it doesn't.
The code in the above snippet is deliberately written to work on very old shells. That is why test x$txr_shell = x is used instead of the modern syntax [ -z "$txr_shell" ], and why ${@+"$@"} is used instead of just "$@".
This style is no longer used after this point in the script, because the rest of the script runs in some good, reasonably modern shell thanks to the re-execution trick.
We have two bash scripts to start up an application. The first one (Start-App.sh) sets up the environment, and the second (startup.sh) is from a third party that we are trying not to edit heavily. If someone runs the second script before the first, the application does not come up correctly.
Is there a way to ensure that the startup.sh can only be called from the Start-App.sh script?
They are both in the same directory and run via bash on Red Hat Linux.
Is there a way to ensure that the startup.sh can only be called from the Start-App.sh script?
Ensure? No. And even less so without editing startup.sh at all. But you can get fairly close.
Below are three suggestions − you can either use one of them, or any combination of them.
The simplest, and probably the best, way is to add a single line at the top of startup.sh:
[ -z "$CALLED_FROM_START_APP" ] && { echo "Not called from Start-App.sh"; exit 42; }
And then call it from Start-App.sh like so:
export CALLED_FROM_START_APP=yes
sh startup.sh
Of course, you can set this environment variable from the shell yourself, so it won't actually ensure anything, but I hope your engineering staff is mature enough not to do this.
You can also remove the execute permissions from startup.sh:
$ chmod a-x startup.sh
This will not prevent people from using sh startup.sh, so there is a very small guarantee here; but it might prevent auto-completion oopsies, and it will mark the file as "not intended to be executed" − if I see a directory with only one executable .sh file, I'll try and run that one, and not one of the others.
Lastly, you could perhaps rename the startup.sh script; for example, you could rename it to do_not_run, or "hide" it by renaming it to .startup. This probably won't interfere with the operation of this script (although I can't check this).
TL;DR:
[ $(basename "$0") = "Start-App.sh" ] || exit
Explanation
As with all the other solutions presented, it's not 100% bulletproof, but it covers the most common cases I've come across for preventing a script from accidentally being run directly, as opposed to being called from another script.
Unlike other approaches presented, this approach:
doesn't rely on manually set file names for each included/sourced script (i.e. is resilient to file name changes)
behaves consistently across all major *nix distros that ship with bash
introduces no unnecessary environment variables
isn't tied to a single parent script
prevents running the script through calling bash explicitly (e.g. bash myscript.sh)
The basic idea is having something like this at the top of your script:
[ $(basename "$0") = $(basename "$BASH_SOURCE") ] && exit
$0 returns the name of the script at the beginning of the execution chain
$BASH_SOURCE will always point to the file the currently executing code resides in (or is empty if there is no file, e.g. when piping text directly to bash)
basename returns only the main file name without any directory information (e.g. basename "/user/foo/example.sh" will return example.sh). This is important so you don't get false negatives from comparing example.sh and ./example.sh for example.
To adapt this to only allow running when sourced from one specific file as in your question and provide a helpful error message to the end user, you could use:
[ $(basename "$0") = "Start-App.sh" ] || echo "[ERROR] To start MyApplication please run ./Start-App.sh" && exit
As mentioned from the start of the answer, this is not intended as a serious security measure of any kind, but I'm guessing that's not what you're looking for anyway.
You can make startup.sh non-executable by typing chmod -x startup.sh. That way the user would not be able to run it simply by typing ./startup.sh.
Then from Start-App.sh, call your script by explicitly invoking the shell:
sh ./startup.sh arg1 arg2 ...
or
bash ./startup.sh arg1 arg2 ...
You can check which shell it's supposed to run in by inspecting the first line of startup.sh; it should look like:
#!/bin/bash
You can set an environment variable in your first script, and then check in the second script that the variable is set properly before continuing.
Another alternative is to check the parent process to find the calling script. This also requires adding some code to the second script.
For example, in the called script you can run the following and terminate if its exit status indicates the wrong parent:
ps $PPID | tail -1 | awk '$NF!~/parent/{exit 1}'
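For example (a sketch, with Start-App.sh substituted for the placeholder), you could put this near the top of startup.sh:
# awk exits 1 when the last field of the parent's ps entry does not mention Start-App.sh;
# the leading ! turns that into "complain and bail out".
if ! ps $PPID | tail -1 | awk '$NF!~/Start-App.sh/{exit 1}'; then
    echo "startup.sh must be started from Start-App.sh" >&2
    exit 1
fi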
As others have pointed out, the short answer is "no"; you can play with permissions all day, but this is still not bulletproof. Since you said you don't mind editing (just not heavily editing) the second script, the best way to accomplish this would be something along the lines of:
1) in the parent/first script, export an environment variable with its PID. This becomes the parent PID. For example,
# bash store parent pid
export FIRST_SCRIPT_PID=$$
2) then very briefly, in the second script, check to see if the calling PID matches the known acceptable parent PID. For example,
# confirm calling pid
if [ "$PPID" != "$FIRST_SCRIPT_PID" ]; then
exit 0
fi
Check out these links here and here for reference.
To recap: the most direct way to do this is adding at least a minimal line or two to the second script, which hopefully doesn't count as "heavily editing".
You can create a script, let's call it check-if-my-env-set containing
#! /bin/bash
source Start-App.sh
exec /bin/bash "$@"
and replace the shebang (see this) on startup.sh with that script:
#! /abs/path/to/check-if-my-env-set
#! /bin/bash
...
Then, every time you run startup.sh, it will ensure the environment is set correctly.
To the best of my knowledge, there is no way to do this that would be impossible to get around.
However, you could stop most attempts by using permissions.
Change the owner of the startup.sh file:
sudo chown app_specific_user startup.sh
Make startup.sh only executable by the owner:
chmod u+x,go-x startup.sh
Run startup.sh as the app_specific_user from Start-App.sh:
sudo -u app_specific_user ./startup.sh
I'm unable to use ls, bash, or any of the other common commands that are critical, ever since changing the path.
I'm unsure what it was set to before (because I can't run the vi command either).
I ran the first command below, then realized it had a typo: not PATH, but PATh.
So I immediately ran the next one:
export PATH="/usr/local/bin:$PATh"
export PATH="/usr/local/bin:$PATH"
Then the vi, ls, and bash commands stopped working.
I did echo $PATH to see the PATH.
usr/local/bin:/usr/local/bin:/usr/local/bin:/usr/local/bin:
This is what I got. Any help is appreciated.
You should be able to source /etc/profile to reset your PATH variable, though it may step on a few other variables you've configured along the way. You could also just grep the appropriate PATH-setting line out of that file and re-run it in your current environment.
Also, you can always specify the full path to an executable you need in the interim. For example, if you wanted to use grep with the PATH in its current state, you could use /bin/grep (or perhaps /usr/bin/grep, depending on your system).
1 > Try to load the default .profile script:
$ source ./.profile
2 > Or just log out of the current session and log back in.
It appears you have "broken" the PATH setting in your bash shell's ~/.bash_profile script because of the PATh typo. (The question explicitly states bash, so I'm answering in that context.)
Since the PATH is "broken", you will need to access your editor by using its fully qualified path.
/usr/bin/vi ~/.bash_profile
That should allow you to fix the PATh to be PATH.
If you find that you need to edit your PATH environment variable a lot, this little bash function can be helpful.
vipath() {
local tmpfile=$(mktemp /tmp/vipath-edit.XXXXXX)
echo "$PATH" | tr ':' '\n' > "$tmpfile"
vi "$tmpfile" && PATH=$(paste -s -d ':' "$tmpfile")
rm "$tmpfile"
}
Note: there are better ways to ensure the $tmpfile gets deleted that I did not use in this snippet. And on a multiuser system, there are better ways to make sure the temporary file is not located in a shared location.
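For instance, one possible variant (a sketch, assuming bash and a mktemp that supports -d) keeps the file in a private temporary directory and removes it with a RETURN trap:
vipath() {
    # mktemp -d creates a directory only we can access (mode 0700),
    # so the scratch file is not exposed in a shared location.
    local tmpdir
    tmpdir=$(mktemp -d "${TMPDIR:-/tmp}/vipath.XXXXXX") || return
    # Remove the directory when the function returns, even if vi fails.
    trap "rm -rf '$tmpdir'" RETURN
    echo "$PATH" | tr ':' '\n' > "$tmpdir/path"
    vi "$tmpdir/path" && PATH=$(paste -s -d ':' "$tmpdir/path")
}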
If you want to add a directory location to your PATH, without adding duplicate locations, this little bash function can be helpful to prepend a directory location.
pathadd() {
if [ -d "$1" ] && [[ ! ":$PATH:" =~ ":$1:" ]]
then
PATH="$1:$PATH"
fi
}
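Typical usage from ~/.bashrc might look like this (a small usage sketch):
pathadd "$HOME/.local/bin"   # added only if it exists and is not already in PATH
pathadd "$HOME/bin"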
I had the same thing in RHEL 8: I did an export PATH for a certain directory and then no command worked anymore. I probably performed a faulty export PATH command.
I got messages like this:
>$ yum
bash: yum: command not found...
Install package 'yum' to provide command 'yum'? [N/y] n
Luckily I had some other similar systems where I could get the path from, so I did:
export PATH=/home/USER/.local/bin:/home/USER/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin
Change USER to your own username.
To make it permanent, add this to $HOME/.bashrc:
export PATH=$PATH:<YOUR PATH HERE>
When you do export PATH="/usr/local/bin:$PATH", you are saying: set PATH to /usr/local/bin: plus whatever was in the PATH variable before. This essentially concatenates the string "/usr/local/bin:" with the old path. That is why your PATH repeats so much; you must have run that command a few times.
Try running this to restore a sane value: export PATH="/usr/local/bin:/usr/bin:/bin".