Wrong order during eval of array content with ZSH - linux

Considering the following example script:
#!/bin/zsh
typeset -A cmd
cmd[0]="mkdir"
cmd[1]="-p"
cmd[2]="to to/tata"
cmd[3]="anotherfolder"
I'm trying to eval the content of cmd (and keep the parameter separation, so I want to create "to to/tata" and "anotherfolder") but when I do:
${cmd}
It evals the content, but in the wrong order. For some reason it evals:
anotherfolder mkdir -p "to to/tata"
Do you have any idea why, and how to make it follow the natural order?
If you want to know what I'm really trying to do (because there might be a simpler way to do what I want): I created a small shell script that takes a command as argument and executes it in specific folders. For example:
myscript mkdir -p "to to/tata"
And in my script, I simply have:
"$@"
That executes the mkdir command. But now I'm trying to pass multiple commands using a specific separator ("--" in my case) so that
myscript mkdir -p "to to/tata" -- touch "to to/tata/myfile"
would execute both commands. To do so, I thought it would be easiest to first parse "$@" and create arrays containing each command's arguments up to "--". But now I'm stuck on how to execute the arrays...

The issue was that I used an associative array (typeset -A), whose elements have no guaranteed order, instead of a standard indexed array.
The proper solution would be:
cmd=()
# indexes start at 1
cmd[1]="mkdir"
cmd[2]="-p"

Related

Half of a mkdir function works but cd part doesn't [duplicate]

This question already has answers here: Why can't I change directories using "cd" in a script?
I've been trying to put together a function that combines mkdir and cd. This is what I've been using:
#!/bin/bash
mk(){
mkdir "$1" && cd "$1"
}
mk "$1"
However, when I run the script using ./mker.sh test, it'll create the directory but won't change into it. I'm brand new to Bash, so I'm really at a loss as to why that part doesn't work. It doesn't return an error to the command line either.
What's the issue here? Thanks!
When working in/with bash, there is usually no need to cd (in a script it's actually considered "poor form"). cd is meant for command-line usage; though it will work programmatically in some cases, it's easiest and best practice to use the directory directly rather than trying to change into it.
Simply "use" the full directory path to do whatever you intend on "doing" with it, i.e.
mkdir "$1" && echo "test" > "$1"/test.txt
NOTE
In case I read your question wrong and you want your shell to change directory, that is a little trickier. The sub-shell (or bash script) has its own notion of current directory, so no matter "where" you tell it to cd, the change only applies to that sub-shell and not to the main shell (your command line). Since a bash alias can't take arguments, the usual way around this is to define the function in the interactive shell itself.
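As a sketch of that workaround, put the function in ~/.bashrc (or source mker.sh into your session) so the cd happens in your interactive shell itself:
# in ~/.bashrc
mk() {
  mkdir -p "$1" && cd "$1"
}
After source ~/.bashrc, running mk test creates test and leaves your shell inside it.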

How to share an argument with system calls on bash?

Actually I don't know much about bash programming. I've read that pipes allow us to use the output of one program as the input of another, so I expected an expression like the one below to work:
echo "newdirectory" | (mkdir && cd)
Here mkdir would receive the string output by echo as its first argument, and afterwards cd would too. (The other point: don't pipes execute synchronously from the left process to the right?)
Is there a way to reuse an argument across commands in bash?
Especially in this case of creating a new directory and changing into it.
You can use variables for this, and pass command-line arguments to the two commands mkdir and cd, instead of trying to pipe data to them.
MYDIR="newdirectory"
mkdir "$MYDIR" && cd "$MYDIR"
With this,
echo "newdirectory" | (mkdir && cd)
you connect the standard input of both mkdir and cd to the pipe. A program/command needs to know
whether it should read data from stdin, and what to do with it. Neither mkdir nor cd does this; they expect you to give them command-line arguments.
Even if the commands could read data from standard input, mkdir would consume the input here and leave nothing for cd. In general, when you connect the same pipe to several commands/processes, you cannot determine which of them will read the data.
Moreover, the parentheses in (mkdir && cd) mean that the commands run in a sub-shell; cd there affects only that sub-shell, so you would not be able to observe any effect of the cd command.
mkdir `echo NewDirectoryName`
also uses the output of a program as an argument to another program.
Another way to accomplish this is with the xargs command.
echo NewDirectoryName | xargs mkdir
@nos's answer is the most correct for your situation, though.
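For this specific mkdir-then-cd case there is one more trick: in bash, the special parameter $_ expands to the last argument of the previous command, so the name can be reused without declaring a variable:
mkdir newdirectory && cd "$_"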

Bash: Only allow script to run by being called from another script

We have two bash scripts to start up an application. The first one (Start-App.sh) sets up the environment, and the second (startup.sh) is from a 3rd party that we are trying not to heavily edit. If someone runs the second script before the first, the application does not come up correctly.
Is there a way to ensure that the startup.sh can only be called from the Start-App.sh script?
They are both in the same directory and run via bash on Red Hat Linux.
Is there a way to ensure that the startup.sh can only be called from the Start-App.sh script?
Ensure? No. And even less so without editing startup.sh at all. But you can get fairly close.
Below are three suggestions; you can use one of them, or any combination of them.
The simplest, and probably the best, way is to add a single line at the top of startup.sh:
[ -z "$CALLED_FROM_START_APP" ] && { echo "Not called from Start-App.sh"; exit 42; }
And then call it from Start-App.sh like so:
export CALLED_FROM_START_APP=yes
sh startup.sh
Of course, you can set this environment variable from the shell yourself, so it won't actually ensure anything, but I hope your engineering staff is mature enough not to do this.
You can also remove the execute permissions from startup.sh:
$ chmod a-x startup.sh
This will not prevent people from using sh startup.sh, so the guarantee here is very small; but it might prevent auto-completion oopsies, and it marks the file as "not intended to be executed". If I see a directory with only one executable .sh file, I'll try to run that one, and not one of the others.
Lastly, you could perhaps rename the startup.sh script; for example, you could rename it to do_not_run, or "hide" it by renaming it to .startup. This probably won't interfere with the operation of this script (although I can't check this).
TL;DR:
[ "$(basename "$0")" = "Start-App.sh" ] || exit
Explanation
As with all the other solutions presented, it's not 100% bulletproof, but it covers the most common cases I've come across of accidentally running a script directly instead of calling it from another script.
Unlike other approaches presented, this approach:
doesn't rely on manually set file names for each included/sourced script (i.e. is resilient to file name changes)
behaves consistently across all major *nix distros that ship with bash
introduces no unnecessary environment variables
isn't tied to a single parent script
prevents running the script through calling bash explicitly (e.g. bash myscript.sh)
The basic idea is having something like this at the top of your script:
[ "$(basename "$0")" = "$(basename "$BASH_SOURCE")" ] && exit
$0 returns the name of the script at the beginning of the execution chain
$BASH_SOURCE will always point to the file the currently executing code resides in (or it is empty if there is no file, e.g. when piping text directly to bash)
basename returns only the main file name without any directory information (e.g. basename "/user/foo/example.sh" will return example.sh). This is important so you don't get false negatives from comparing example.sh and ./example.sh for example.
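A quick way to see the difference (test.sh here is a hypothetical demo file):
# test.sh
echo "\$0           = $0"
echo "\$BASH_SOURCE = $BASH_SOURCE"
bash test.sh prints the same name for both, while source test.sh from an interactive shell prints the shell's own name (e.g. bash) for $0 and test.sh for $BASH_SOURCE.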
To adapt this to only allow running when sourced from one specific file as in your question and provide a helpful error message to the end user, you could use:
[ "$(basename "$0")" = "Start-App.sh" ] || { echo "[ERROR] To start MyApplication please run ./Start-App.sh"; exit 1; }
As mentioned from the start of the answer, this is not intended as a serious security measure of any kind, but I'm guessing that's not what you're looking for anyway.
You can make startup.sh non-executable by typing chmod -x startup.sh. That way the user would not be able to run it simply by typing ./startup.sh.
Then from Start-App.sh, call your script by explicitly invoking the shell:
sh ./startup.sh arg1 arg2 ...
or
bash ./startup.sh arg1 arg2 ...
You can check which shell it's supposed to run in by inspecting the first line of startup.sh; it should look like:
#!/bin/bash
You can set an environment variable in your first script, and at the start of the second script check that the variable is set properly.
Another alternative is checking the parent process to find the calling script. This also requires adding some code to the second script.
For example, the called script can run the check below and terminate based on its exit status:
ps $PPID | tail -1 | awk '$NF!~/parent/{exit 1}'
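A sketch of the same idea that matches on the parent's full command line rather than the last column (ps -o args= is standard; checking for Start-App.sh is the assumption here):
parent=$(ps -o args= -p "$PPID")
case $parent in
  *Start-App.sh*) ;;   # called from the wrapper, carry on
  *) echo "Please run Start-App.sh instead" >&2; exit 1 ;;
esac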
As others have pointed out, the short answer is "no"; you can play with permissions all day, but this is still not bulletproof. Since you said you don't mind editing (just not heavily editing) the second script, the best way to accomplish this would be something along the lines of:
1) in the parent/first script, export an environment variable with its PID. This becomes the parent PID. For example,
# store this script's pid; it becomes the second script's $PPID
export FIRST_SCRIPT_PID=$$
2) then very briefly, in the second script, check to see if the calling PID matches the known acceptable parent PID. For example,
# confirm the calling pid
if [ "$PPID" != "$FIRST_SCRIPT_PID" ]; then
  exit 0   # silently refuse when not called from the first script
fi
To recap: the most direct way to do this is adding at least a minimal line or two to the second script, which hopefully doesn't count as "heavily editing".
You can create a script, let's call it check-if-my-env-set, containing:
#! /bin/bash
source Start-App.sh
exec /bin/bash "$@"
and replace the shebang on startup.sh with that script:
#! /abs/path/to/check-if-my-env-set
#! /bin/bash
...
Then, every time you run startup.sh, it will ensure the environment is set correctly.
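One practical detail (an assumption about your setup rather than something from the question): the wrapper itself must be executable, and the shebang must reference it by absolute path:
chmod +x /abs/path/to/check-if-my-env-set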
To the best of my knowledge, there is no way to do this in a way that it would be impossible to get around it.
However, you could stop most attempts by using permissions.
Change the owner of the startup.sh file:
sudo chown app_specific_user startup.sh
Make startup.sh executable by its owner only:
chmod 700 startup.sh
Run startup.sh as the app_specific_user from Start-App.sh:
sudo -u app_specific_user ./startup.sh
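For that last sudo call to work non-interactively, the account running Start-App.sh needs a sudoers rule along these lines (a sketch; the user name and path are assumptions):
deploy ALL=(app_specific_user) NOPASSWD: /path/to/startup.sh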

Displaying all the variables declared in a sh file

I have a test.sh file which contains something like the following
source lib.sh
input=$0
A=aa/bb/cc
B=$A/dd
C=$A/ee
doSomething $B $C
Is it possible to display the values of all the variables, so that when I read the code I don't have to trace their final values line by line?
In my case, I mean to display
input=$0
A=aa/bb/cc
B=aa/bb/cc/dd
C=aa/bb/cc/ee
I know
bash -x test.sh
can achieve this goal, but I don't want test.sh to be executed. (Executing the functions in test.sh would delete some important files on my disk.) I only want to know the final values of all the variables.
A true 'dry run' is actually not possible in bash. What would the output be, e.g., in cases like the following?
if some_command; then
variable=output_1
else
variable=output_2
fi
In order to determine the script flow, you have to actually execute some_command and observe its result, which (depending on the command) may modify your system. Thus, there is no 'safe' way to do what you need without executing the bash script.
In order to test your script, you will have to simulate the commands that could modify your system (e.g. by prefixing them with echo). This way you can run the script and see the values of your variables.
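A common way to do that simulation without touching every line is a tiny wrapper that only prints in dry-run mode (a sketch; run and DRY_RUN are illustrative names):
run() {
  if [ -n "$DRY_RUN" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}
run doSomething "$B" "$C"
Invoking the script as DRY_RUN=1 bash test.sh then prints the fully expanded command, including the final values of B and C, without executing anything.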

Running a script inside a loop - echo file names

I have a basic script that runs inside another script. I call mass_split.sh, which then invokes split_file.sh. The split_file.sh script takes two arguments: -s file_name.txt and -c 1 (how many slices to cut the file into). However, I'm trying to run a loop that finds all text file names in the directory ./ and then feeds the results to split_file.sh. I am getting no results back, and the text files are not being split.
mass_split.sh
#!/bin/bash
for f in ./*.txt
do
sudo bash split_file.sh -s echo "file '$f'"; -c 10
done
Maybe this has something to do with that errant semicolon after the string literal, which is almost certainly not doing what you want (unless you have another executable called -c that you're intentionally running).
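A likely fix, assuming split_file.sh expects the file name directly after -s: drop the echo fragment and the semicolon and pass the loop variable itself:
#!/bin/bash
for f in ./*.txt
do
  sudo bash split_file.sh -s "$f" -c 10
done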
