How to read the current folder from a script when it's called with source (same shell) or bash (subshell)? - linux

This is a hard one, as I have researched for a few hours and could not find a solution that works, so I combined a few of the solutions that I found and this is the result:
"$( cd "$( dirname "${BASH_SOURCE[0]}" )" && dirname -- "$(readlink -f -- "$0")" )"
If anyone has a simpler one, please share; otherwise, enjoy!

I don't think that there is a solution for bash which works in every circumstance (for instance when sourcing a file via a link), but this approach might work most of the time:
${BASH_SOURCE[0]} contains the name of the script, including the path component, in the form it was invoked with. If it was invoked via a $PATH search, it contains that respective path. Hence, dirname "${BASH_SOURCE[0]}" is the directory where the script is located (as a relative or absolute path). Consequently, readlink -f -- "$(dirname "${BASH_SOURCE[0]}")" outputs the absolute path to that directory. Sourcing other_script from the same directory therefore becomes:
source "$(readlink -f -- "$(dirname "${BASH_SOURCE[0]}")")/other_script" # bash
You tagged your question also for zsh. In Zsh, things are a bit simpler. You find your script (plus directory part) in $0. The :A modifier gives its absolute path (resolving symlinks) and :h strips off the file name, so the script's directory is ${0:A:h}, giving you
source ${0:A:h}/other_script # zsh
Of course, if you need this information only for sourcing the other script, you don't need to get the absolute path to other_script. The relative path would do as well.
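Putting the bash part together, a minimal sketch (the file name myscript.sh is just illustrative, other_script is the placeholder used above, and GNU readlink with -f is assumed):
#!/bin/bash
# myscript.sh - works both when run as "bash myscript.sh" and when sourced with ". myscript.sh"
script_dir=$(readlink -f -- "$(dirname -- "${BASH_SOURCE[0]}")")
source "$script_dir/other_script"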

Related

How do I navigate fast through directories with the command line?

I spent some time finding a solution for my problem but Google couldn't provide a sufficient answer... I work a lot with the command line in Linux and I simply need a way to navigate quickly through my file system. I don't want to type cd [relative or absolute path] all the time. I know there are pushd and popd, but that still seems too complicated for a simple problem like this.
When I'm in ~/Desktop/sampleFile I simply want to use sampleCommand fileToGo to get to ~/Desktop/anotherFile/anotherFile/fileToGo, no matter where the file is located. Is there an easy command for this?
Thanks in advance!
This can be done with native Bash features without involving a sub-shell fork:
You can insert this into your "$HOME/.bashrc":
cdf(){
    # Query globstar state
    shopt -q globstar
    # and save it in the gs variable (gs=0 if set, 1 if not)
    local gs=$?
    # Need globstar to glob files in sub-directories
    shopt -s globstar
    # Find the file in sub-directories
    # and store the result in the matches array
    matches=(**/"$1")
    # globstar no longer needed, so restore its previous state
    [ $gs -gt 0 ] && shopt -u globstar
    # Change to the directory containing the first matched file
    cd "${matches[0]%/*}" # cd EXIT status is preserved
}
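A quick usage sketch, with the hypothetical layout from the question above:
cd ~/Desktop
cdf fileToGo    # now in ~/Desktop/anotherFile/anotherFile, the directory containing fileToGo
pwd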
Hmm, you could do something like this:
cd "$(dirname "$(find . -name name-of-your-file | head -n 1)")"
That will search the current directory (use / instead of . to search all directories) for a file called name-of-your-file and cd into the parent directory of the first file with that name that it finds.
If you're in a large directory, typing the path and using cd will probably be faster than this, but it works alright for small directories.
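For example, with the same hypothetical layout from the question, running this from ~/Desktop:
cd "$(dirname "$(find . -name fileToGo | head -n 1)")"   # lands in ./anotherFile/anotherFile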

Docker bash'ing with find

I am having a hell of a time attempting to get a bash script to work as expected (as it does in a normal bash session) on a Docker run.
The goal is to replace all of the symlinked files within the official java container with their actual file within the JAVA_HOME directory, so everything is contained within the java directory and not outside of it,
e.g.
$JAVA_HOME/jre/lib/security/java.policy <--- is symlinked to ---> /etc/java-7-openjdk/security/java.policy
The end result should be the file located at: $JAVA_HOME/jre/lib/security/java.policy
The setup:
docker run java:7u91 /bin/bash -cxe "find /usr/lib/jvm/**/jre -type l | while read f; do echo $f; cp --remove-destination $(readlink $f) $f; done;"
I had attempted several different methods of effectively this, with xargs and exec all to no avail.
Any suggestions at this point would be appreciated.
It looks like this is what is happening: $(readlink $f) is not returning anything on files that are not symbolic links (readlink only works on symbolic links), so that expression expands to nothing/empty.
So only the $f returns a value. When the expression is evaluated it effectively becomes cp --remove-destination VALUE_OF_$F;, so $f looks like the first parameter of the cp command, with no second parameter present. That is why the 'destination' is missing.
Also, consider that putting your command inside double quotes like that presents a problem: the variables are expanded on the host rather than in the Docker container. Replace the double quotes with single quotes to prevent that from happening.
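Following that advice, a sketch of the invocation with single quotes (the quoting of $f and $(readlink $f) is added here for paths with spaces; the image tag and path are the ones from the question):
docker run java:7u91 /bin/bash -cxe 'find /usr/lib/jvm/**/jre -type l | while read f; do echo "$f"; cp --remove-destination "$(readlink "$f")" "$f"; done'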

output to a file in script directory

This is probably quite basic, but I have spent the whole day looking for an answer without much success.
I have an executable script that resides in ~/Desktop/shell/myScript.sh
I want a single-line command to run this script from my terminal and have it output to a new directory in the same directory where the script is located, no matter what my present working directory is.
I was using:
mkdir -p tmp &&
./Desktop/shell/myScript.sh|grep '18x18'|cut -d":" -f1 > tmp/myList.txt
But it creates the new directory in my present working directory and not in the target location.
Any help would be appreciated.
Thanks!
You could solve it in one line if you pre-define a variable:
export LOC=$HOME/Desktop/shell
Then you can say
mkdir -p $LOC/tmp && $LOC/myScript.sh | grep '18x18' | cut -d":" -f1 > $LOC/tmp/myList.txt
But if you're doing this repeatedly it might be better long-term to wrap myScript.sh so that it creates the directory, and redirects the output, for you. The grep and cut parameters, as well as the output file name, would be passed as command-line arguments and options to the wrapper.
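A sketch of such a wrapper, with hypothetical names (runFiltered.sh is invented here; the pattern, field and output file name are passed as arguments):
#!/bin/bash
# Usage: runFiltered.sh PATTERN FIELD OUTFILE   (hypothetical wrapper around myScript.sh)
LOC=$HOME/Desktop/shell
mkdir -p "$LOC/tmp"
"$LOC/myScript.sh" | grep "$1" | cut -d":" -f"$2" > "$LOC/tmp/$3"
Called as runFiltered.sh '18x18' 1 myList.txt it reproduces the one-liner above.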
How about this:
SCRIPTDIR="./Desktop/shell/" ; mkdir "$SCRIPTDIR/tmp" ; "$SCRIPTDIR/myScript.sh" | grep '18x18' | cut -d ":" -f 1 > "$SCRIPTDIR/tmp/myList.txt"
In your case you have to give the path to the script anyway. If you put the script in a directory that is automatically searched, e.g. $HOME/bin, so that you can just type myScript.sh without the directory prefix, you can use SCRIPTDIR=$( dirname $( which myScript.sh ) ).
Mixing binaries and data files in the same directory is usually a bad idea. For temporary files, /tmp is the place to go. Consider that your script might become famous, get installed by the administrator in /usr/bin and be run by several people at the same time. For this reason, think of mktemp.
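A minimal sketch of the mktemp approach (the pipeline is the one from the question; the output directory name is whatever mktemp generates):
outdir=$(mktemp -d)   # e.g. /tmp/tmp.Xq3vT9 -- unique per run, safe for concurrent users
~/Desktop/shell/myScript.sh | grep '18x18' | cut -d":" -f1 > "$outdir/myList.txt"
echo "Results written to $outdir/myList.txt"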
Your script can do this for you with a few lines of code
Instead of doing this manually from the command line (and regardless of where you later move the script), add the following to the script:
[1] Find your script directory location using dirname
script_directory=$(dirname "$0")
The above code will find your script directory and save it in a variable.
[2] Create your "tmp" folder in your script directory
mkdir "$script_directory/tmp 2> /dev/null"
The above code will make a directory called "tmp" in your script directory. If the directory already exists, mkdir will not overwrite it; it will just report an error, which is hidden by 2> /dev/null.
[3] Read your script with cat, filter it with grep and cut, and then redirect the output to a new file
cat "$0" | grep '18x18' | cut -d":" -f1 > "$script_directory"/tmp/myList.txt

How to recursively get all files filtered by multiple extensions within a folder including working folder without using find in Bash script

I have this question after quite a day of searching the net; perhaps I'm doing something wrong. Here is my script:
#!/bin/bash
shopt -s extglob
FILE_EXTENSIONS=properties\|xml\|sh\|sql\|ksh
SOURCE_FOLDER=$1
if [ -z "$SOURCE_FOLDER" ]; then
SOURCE_FOLDER=$(pwd)
fi # Set directory to current working folder if no input parameter.
for file in $SOURCE_FOLDER/**/*.*($FILE_EXTENSIONS)
do
echo Working with file: $file
done
Basically, I want to recursively get all the files, filtered by a list of extensions, within a directory that is passed as an argument, including files located directly in that directory and not only in its sub-folders.
I would like to know if there is a way of doing this and how without the use of the find command.
Imagine I have this file tree:
bin/props.properties
bin/xmls.xml
bin/source/sources.sh
bin/config/props.properties
bin/config/folders/moreProps.xml
My script, as it is right now and running from /bin, would echo:
bin/source/sources.sh
bin/config/props.properties
bin/config/folders/moreProps.xml
That is, it leaves out the ones directly in the working folder.
P.S. I know this can be done with find but I really want to know if there's another way for the sake of learning.
Thanks!
You can use find with grep, just like this:
#!/bin/bash
SOURCE_FOLDER=$1
EXTENSIONS="properties|xml|sh|sql|ksh"
find "$SOURCE_FOLDER" | grep -E "\.(${EXTENSIONS})$"
# or even better
find "$SOURCE_FOLDER" -regextype posix-egrep -regex ".*\.(${EXTENSIONS})$"
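For the sake of the original question (no find), here is a pure-Bash sketch that only adds globstar and nullglob to the question's own script; it assumes bash 4 or later, where **/ also matches zero directories, so files directly under the source folder are picked up too:
#!/bin/bash
shopt -s extglob globstar nullglob
FILE_EXTENSIONS='properties|xml|sh|sql|ksh'
SOURCE_FOLDER=${1:-$(pwd)}   # default to the current working folder
for file in "$SOURCE_FOLDER"/**/*.@($FILE_EXTENSIONS)
do
    echo "Working with file: $file"
done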

Create symbolic link with dependency to other files

I know my topic is a little confusing, but here is what I want to do.
I have a file which I would like to create a link to in my home directory ~/bin, but when I execute the file that is symbolically linked, the file requires another file in its directory. Therefore, it fails to run because it cannot find the other file. What can I do?
Well, you have two simple solutions.
edit the shell script to point to the absolute path of the file, not just the basename.
./path/to/file.sh
VS
file.sh
so something like this should do what you're after: sed -i 's|file.sh|./path/to/file.sh|g' ~/bin/script.sh. It searches your symlinked file (script.sh in this case) and replaces the call to file.sh with ./path/to/file.sh. Note that you often see sed used with /'s as the delimiter, but it can use just about anything as a delimiter; if you wish to use /'s here you would need to escape them (\/). You may want to consider escaping the . (period) as well, but in this case it's not necessary. If you are new to sed, realize that the -i flag means it will edit the file in place. Lastly, realize it's a simple search and replace operation and you may choose to do it by hand.
The second way is to create a symlink to the dependency as well (with ln -s), just as you did for the script, so that symbolic links to both files exist:
ln -s /far/off/script.sh ~/bin/script.sh
and
ln -s /far/off/file.sh ~/bin/file.sh
I would rather create a script file in ~/bin that calls your executable from the appropriate directory.
Here is an example using /sbin/ifconfig:
$ cat > ~/bin/file
#!/bin/bash
file=/sbin/ifconfig
cd `dirname $file`
`basename $file`
(Ctrl+D)
$ chmod +x ~/bin/file
$ file
Here you should see the output of ifconfig, but the point is: it gets executed from the /sbin directory. So if ifconfig had dependencies, it would work properly. Just replace /sbin/ifconfig with your absolute path.
Alternatively, you can modify your script like this:
pushd ~/bin
##### your script here
popd
A combination of readlink and dirname will get the actual directory of the script:
my_dir=$(dirname "$(readlink -f "$0")")
source "$my_dir/other_file"
