change shell directory from a script? - linux

I want to make a script ("to") that makes it easier for me to enter folders.
So, e.g., if I type "to apache" I want it to change the current directory to /etc/apache2.
However, when I use the "cd" command inside the script, it seems like it changes the path WITHIN the script, so the path in the shell has not changed.
How could I make this work?

Use an alias or function, or source the script instead of executing it.
BASH FAQ entry #60.
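For the alias route mentioned above, a minimal sketch (the shortcut name is just an example):
alias to-apache='cd /etc/apache2'    # put this in ~/.bashrc and it works in every new shell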

use a function
to_apache() {
    cd /etc/apache2
}
put it in a file, e.g. mylibrary.sh, and source that file in the shell where you want to use it (for example from your ~/.bashrc), then call the function directly:
source /path/mylibrary.sh
to_apache

As Ignacio said, make it into a function (or perhaps an alias).
The way I tend to do it is to have a shell script that creates the function - and the script and the function have the same name. Then, at some point, I source the script ('. funcname'), and thereafter I can simply use the function.
I tend to prefer functions to aliases; it is easier to manage arguments etc.
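As a sketch of the argument-handling point, the asker's "to" command could be a single function (the shortcut names below are made up):
to() {
    case "$1" in
        apache) cd /etc/apache2 ;;
        logs)   cd /var/log ;;
        *)      echo "to: unknown shortcut '$1'" >&2; return 1 ;;
    esac
}
Once the file defining it has been sourced, 'to apache' changes the directory of the current shell.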
Also, for the specific case of changing directories, I use CDPATH. The trick with using CDPATH is to have the empty entry at the start:
export CDPATH=:/work4/jleffler:/u/jleffler:/work4/jleffler/src:\
/work4/jleffler/src/perl:/work4/jleffler/src/sqltools:/work4/jleffler/lib:\
/work4/jleffler/doc:/u/jleffler/mail:/work4/jleffler/work:/work4/jleffler/ids
On this machine, my main home directory is /work4/jleffler. I can get to most of the relevant sub-directories in one go with 'cd whatever'.
If you don't put the empty entry (or an explicit '.') first, then you can't 'cd' into a sub-directory of the current directory, which is disconcerting at least.
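To illustrate the effect with a shorter, made-up CDPATH:
export CDPATH=:/etc:/work4/jleffler
cd apache2     # no ./apache2 here, so this lands in /etc/apache2 (cd prints the resolved path)
cd src         # likewise lands in /work4/jleffler/src if there is no ./src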

Ignacio Vazquez-Abrams gave a link to what probably answers the question, although I didn't really follow it. The short answer is to use either "source" or a single dot before the command, e.g.:
. to apache
But I found there are downsides to this if you have a more complicated script. It seems that the original script filename variable ($0) is lost. I see "-bash" instead, so your script can't echo error text that would include the full filename.
Also, you can't use the "exit" command, or your shell will exit (especially disconcerting from ssh).
Also, "basename $0" gives an error if you use it, since $0 is now "-bash".
So, it seems to me that a function might be the only way to get around some of these problems, like if you are passing parameters.
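One concrete difference: inside a function (or anything sourced), use return rather than exit so the interactive shell survives. A minimal sketch, with a made-up target:
to() {
    if [ -z "$1" ]; then
        echo "usage: to <name>" >&2
        return 1    # 'exit 1' here would close your login shell
    fi
    cd "/etc/$1"
}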


How does ${path} work, in this tutorial

I'm sure this is one of the dumbest problems asked on this site, but I am very new to Linux and a little out of my depth. I'm working off of this tutorial here and am stuck on the "add the path" and verify steps.
For this one the tutorial told me to use this:
export PATH=${PATH}:${DTITK_ROOT}/bin:${DTITK_ROOT}/utilities:${DTITK_ROOT}/scripts
I have already defined DTITK_ROOT, and have a few questions about the above instructions.
Should the ${} be left around the DTITK_ROOT?
My DTITK_ROOT is the full path (I think that's the right term) to the file I extracted the program to, should I change that?
What do I write for ${PATH} in that case? I understand that I'm supposed to replace it with something, but I don't know what. Everything I've tried doesn't pass the verify step.
I'm sorry if it seems like a dumb or really simple question, but I don't even know any keywords to google in order to find how to get the answer.
Yes. This is how you access the path stored in DTITK_ROOT. This is called parameter expansion. You can read more about it here.
No, don't change anything. Also, a more commonly used term is absolute path, in comparison to relative path. The absolute path is a path from the root directory, /. Relative path is a path from your current working directory. You can read more about paths in general and the difference between absolute and relative paths here.
You don't replace it with anything. Once again, parameter expansion comes into play and this will be replaced with what is already stored in your path variable. So really all this command is doing is taking your path variable, adding some more paths to it, and then storing it back into your path variable. If you didn't know, the path variable contains paths to all executable files that you would like to execute without typing the full path. Here is a good discussion on path variables, along with other environment variables.
The 1st command takes care of the path:
export DTITK_ROOT=mypathonSystem/dtitk
The 2nd command:
export PATH=${PATH}:${DTITK_ROOT}/bin:${DTITK_ROOT}/utilities:${DTITK_ROOT}/scripts
I am not too sure, but I think the second command should run as-is, since you defined DTITK_ROOT in the first command.
${PATH} is letting the system know where the resources can be found.
Have you tried running the first command, then running the second command unmodified?
Should the ${} be left around the DTITK_ROOT?
Yes. In the case of the shell, it is not essential here because the / that follows the $DTITK_ROOT is enough to signal that we have reached the end of the variable name, but doing ${DTITK_ROOT} explicitly says that the variable name is DTITK_ROOT and not that plus whatever characters might be on the end of it. Other programs (such as make) which allow you to write shell commands to execute might not be so accommodating - make would think that $DTITK_ROOT would be the value of $D followed by the literal characters TITK_ROOT. So, it is a good practice to just get used to putting {} around shell variable names that are longer than a single character.
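A quick illustration of the difference, using throwaway variables:
D=/tmp
DTITK_ROOT=/opt/dtitk               # example value
echo "$DTITK_ROOT/bin"              # the shell prints /opt/dtitk/bin; the / ends the variable name
echo "${D}TITK_ROOT/bin"            # prints /tmpTITK_ROOT/bin -- roughly how make would read $DTITK_ROOT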
My DTITK_ROOT is the full path to the file I extracted the program to, should I change that?
If you mean the full path to the directory that you extracted the program to, then that is what you should use. I am assuming that you have something like "export DTITK_ROOT=/Users/huiz/unix/dtitk" (per the example).
One thing you can do is verify that the value of DTITK_ROOT is available by executing "echo ${DTITK_ROOT}" and checking that it has the proper value.
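For example, a quick sanity check (assuming DTITK_ROOT was exported as in the tutorial):
echo "${DTITK_ROOT}"      # should print the directory you extracted dtitk into
echo "${PATH}"            # should now contain the three dtitk directories at the end
ls "${DTITK_ROOT}/bin"    # the tools listed here should now run by name alone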

Change root path for bash auto-completion (TAB) feature

Can I force BASH to see certain folder (let's call it main_folder) as the root of my file-system? I need BASH to behave this way at least during auto-completion of parameter names and command names, while inside the folder.
Let's say that I have directory tree that looks like this:
/z/y/main_folder/a.txt
/z/y/main_folder/bin/b.txt
/z/y/main_folder/bin/c.txt
/z/y/main_folder/bin/d.sh
Now, when I call this custom version of bash, I could simply type:
/> /bi(TAB)/(TAB) /a(TAB)
Which would expand to:
/> /bin/d.sh /a.txt
Where d.sh is the command to be run and a.txt is its first parameter. If I were cd'ed into /bin/ I could do:
/bin/> ./(TAB) (TAB)(TAB)
Which would expand the command d.sh, and would give three options for the first parameter (namely: b.txt, c.txt, d.sh).
A few brief additional points:
I do not care if the original root of the file-system is inaccessible or is accessible via hard/soft link.
I do not care if I am able to run any commands that are out of scope for main_folder (I will change the $PATH variable anyway) or any shell builtins.
I do not care what the $PS#, $PWD, etc. variables actually hold.
I do not want to make my own version of BASH (changing the source code). My application should (probably) be started via some script (sh) or program (C/C++/C#) that sets up the environment and either continues in interactive mode or runs an interactive shell on one of its last lines.
I want to run this as an unprivileged user. I do not want to allow the user to chroot.
I am not concerned with security, and I am not intending to jail anyone. I simply need BASH to auto-complete.
I would not mind 'trapping' BASH during directory lookups.
I have a feeling that the set, compgen, complete and compopt builtins are what I need to utilize, but I do not know how. The examples I have found for these commands do not seem to show all the features, and the man pages are quite chaotic.
Thanks, Kupto :)
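For what it's worth, a rough sketch of how complete and compgen fit together for something like this (paths and the command name are taken from the example above; this is not a full solution):
_mainfolder_complete() {
    local base=/z/y/main_folder
    local cur=${COMP_WORDS[COMP_CWORD]}
    # offer names relative to $base instead of the real filesystem root
    COMPREPLY=( $(cd "$base" && compgen -f -- "${cur#/}") )
}
complete -F _mainfolder_complete d.sh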

File name multiple extensions order

I want to create some bash scripts. They're actually going to be build scripts for Scala, so I'm going to identify them with my own .bld extension. They will be a sort of subtype of a shell script. Hence I want them to be easily recognised as shell scripts. Should I call them
ProjectA.bld.sh //or
ProjectA.sh.bld
Edit: My natural inclination would be to go for the former but .tar.gz files seem to follow the latter naming convention.
A shell script doesn't mind what you call it.
It just needs to:
be executable (chmod +x)
be in your path
contain a "shebang" as its first line: #!/bin/sh
The shebang determines which program is used to execute your script.
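A minimal example that ticks all three boxes (the file name is just an example):
cat > buildProjectA.sh <<'EOF'
#!/bin/sh
echo "building ProjectA"
EOF
chmod +x buildProjectA.sh
mv buildProjectA.sh ~/bin/    # assuming ~/bin is in your PATH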
Call it ProjectA.bld.sh (or preferably buildProjectA.sh).
The .sh extension (although not necessary for the script to run) will allow you and everyone else to easily recognise it as a shell script.
While for the most part, naming conventions like this don't really matter at all to Unix/Linux, the usual convention is for the "extensions" to be in the order of the steps used to create the file. So, for example, a file named foo.tar.bz2.gpg.part01 would indicate a sequence of operations like the following:
Use tar to create foo.tar, which contains some other files
Use bzip2 to compress foo.tar into foo.tar.bz2
Use gnupg to encrypt foo.tar.bz2 into foo.tar.bz2.gpg
Use split or something similar to break the file into chunks for transmission/storage, resulting in one or more foo.tar.bz2.gpg.part* files.
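As a rough sketch, the commands behind that sequence would look something like this (file names taken from the example above):
tar -cf foo.tar file1 file2                              # 1. archive
bzip2 foo.tar                                            # 2. compress -> foo.tar.bz2
gpg -c foo.tar.bz2                                       # 3. encrypt  -> foo.tar.bz2.gpg (symmetric, as one option)
split -b 100M -d foo.tar.bz2.gpg foo.tar.bz2.gpg.part    # 4. split    -> ...part00, ...part01, ...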
The naming conventions are mostly just for human semantic meaning, though, and there's nothing stopping you from doing exactly the opposite, or even something completely random, except your own ability to remember exactly what you did...

Bash script ignores arguments when in /bin

I have this bash script that I can pass up to three arguments to. It works like a charm when I call it from its own directory with ./script -h, but when I copy the same file to /bin and call it from anywhere with script -h, it seems to ignore the arguments passed.
Why? or maybe more importantly:
What can I do do change that?
script is a very useful standard utility program which records your current session (look for a file called typescript). It starts another shell, so you probably didn't notice it was running.
When you write a new program, use a naming convention, like script.sh.
Edit:
If you don't like using a file suffix (because it looks too much like Windows) then fine, but use some other naming convention which will ensure your script names do not clash with existing commands. test is another favorite, for example. You can use type to check a command, but that only checks your current environment, you might still have a name collision when running from a different username, for example.
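For example, to see what actually runs when you type a given name:
type -a script       # lists every alias, function, builtin and file named 'script', in lookup order
command -v script    # shows only the first match, i.e. what would actually run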

Redirect program output without changing directory

Problem
I'm writing a set of scripts to help with automated batch job execution on a cluster.
The specific thing I have is a $OUTPUT_DIR, and an arbitrary $COMMAND.
I would like to execute the $COMMAND such that its output ends up in $OUTPUT_DIR.
For example, if COMMAND='cp ./foo ./bar; mv ./bar ./baz', I would like to run it such that the end result is equivalent to cp ./foo ./$OUTPUT_DIR/baz.
Ideally, the solution would look something like eval PWD="./$OUTPUT_DIR" $COMMAND, but that doesn't work.
Known solutions
[And their problems]
Editing $COMMAND: In most cases the command will be a script, or a compiled C or FORTRAN executable. Changing the internals of these isn't an option.
unionfs, aufs, etc.: While this is basically perfect, users running this won't have root, and causing thousands+ of arbitrary mounts seems like a questionable choice.
copying/ hard/soft links: This might be the solution I will have to use: some variety of actually duplicating the entire content of ./ into ./$OUTPUT_DIR
cd $OUTPUT_DIR; ../$COMMAND : Fails if $COMMAND ever reads files
pipes : only works if $COMMAND doesn't directly work with files; which it usually does
Is there another solution that I'm missing, or is this request actually impossible?
[EDIT:] Chosen Solution
I'm going to go with something where each object in the directory is symbolic-linked into the output directory, and the command is then run from there.
This has the downside of creating a lot of symbolic links, but it shouldn't be too bad.
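A rough sketch of that approach, assuming $OUTPUT_DIR lives outside (or is excluded from) the job's working directory:
mkdir -p "$OUTPUT_DIR"
ln -s "$PWD"/* "$OUTPUT_DIR"/              # link every top-level entry into the output directory
( cd "$OUTPUT_DIR" && eval "$COMMAND" )    # run the command there; new files land in $OUTPUT_DIR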
You can't solve this without making some assumptions about the interface of $COMMAND. There is no single definition of what "output ends up in $OUTPUT_DIR" means. For one program this may be some files, but another program might just print something to stdout and yet another might try sending some data over the internet using some protocol or display something in a GUI and there isn't an obvious way of mapping all of these to "output goes to $OUTPUT_DIR".
So, you need to invent some assumptions and require any $COMMAND implementation to follow them. Then, it may get as simple as requesting that the command accept a parameter such as --target=<DIR>. If your command were some simple existing command, you would have to create a wrapper script around it to translate that parameter into what the app accepts. cp, mv and a few other utils already accept -t/--target-directory, so that may be a good starting point.
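A minimal sketch of such a wrapper (real_tool is a hypothetical executable):
#!/bin/bash
# peel off --target=<DIR>, pass everything else through, and run the tool there
target=.
args=()
for a in "$@"; do
    case $a in
        --target=*) target=${a#--target=} ;;
        *)          args+=("$a") ;;
    esac
done
cd "$target" && exec real_tool "${args[@]}"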
You cannot set the output directory, you can only set the working directory.
The problem is, once you set the working directory, other references are going to be invalid. For example, in your command, ./foo:
cp ./foo ./bar
If you have a specific command, there are workarounds (creating a script that alters arguments, prepending the directory to specific arguments), but in general this is not possible.
