Bash script ignores arguments when in /bin - linux

I have a bash script that I can pass up to three arguments to. It works like a charm when I call it from its own directory with ./script -h, but when I copy the same file to /bin and call it from anywhere with script -h, it seems to ignore the arguments passed.
Why? Or, maybe more importantly:
What can I do to change that?

script is a very useful standard utility program which makes a recorded copy of your current session (look for a file called typescript). It spawns another shell, so you probably didn't notice it was running.
When you write a new program, use a naming convention, like script.sh.
Edit:
If you don't like using a file suffix (because it looks too much like Windows), then fine, but use some other naming convention which ensures your script names do not clash with existing commands. test is another favorite, for example. You can use type to check a command name, but that only checks your current environment; you might still have a name collision when running as a different user, for example.
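For example, a quick check before naming a new script (the output shown is typical, not guaranteed):

$ type script
script is /usr/bin/script

$ type myscript.sh
bash: type: myscript.sh: not found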

Related

Change root-path for bash auto-completion (TAB) feature

Can I force BASH to see a certain folder (let's call it main_folder) as the root of my file-system? I need BASH to behave this way at least during auto-completion of parameter names and command names while inside the folder.
Let's say that I have a directory tree that looks like this:
/z/y/main_folder/a.txt
/z/y/main_folder/bin/b.txt
/z/y/main_folder/bin/c.txt
/z/y/main_folder/bin/d.sh
Now when I call this custom version of bash, I could simply type:
/> /bi(TAB)/(TAB) /a(TAB)
which would expand to:
/> /bin/d.sh /a.txt
where d.sh is the command to be run and a.txt is its first parameter. If I were cd'ed into /bin/ I could do:
/bin/> ./(TAB) (TAB)(TAB)
which would expand the command d.sh, and would give three options for the first parameter (namely: b.txt, c.txt, d.sh).
A few brief additional points:
I do not care if the original root of the file-system is inaccessible or is accessible via hard/soft link.
I do not care if I am able to run any commands that are out of scope for main_folder (I will change the $PATH variable anyway) or any shell builtins.
I do not care what the $PS#, $PWD, etc. variables actually hold.
I do not want to make my own version of BASH (changing the source code). My application should (probably) be started via some script (sh) or program (C/C++/C#) that sets up the environment and either continues in interactive mode or runs an interactive shell as one of its last steps.
I want to run this as an unprivileged user. I do not want to allow the user to chroot.
I am not concerned with security, and I am not intending to jail anyone. I simply need BASH to auto-complete.
I would not mind 'trapping' BASH during directory lookups.
I have a feeling that the set, compgen, complete and compopt builtins are what I need to utilize, but I do not know how. The examples I have found for these commands do not seem to show all their features, and the man pages are quite chaotic.
Thanks, Kupto :)
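(For reference, a completion built from those builtins might look roughly like this sketch; the function name _rooted is made up, and it handles only argument completion for a single command, not command-name lookup.)

MAIN_FOLDER=/z/y/main_folder

_rooted() {
    local cur=${COMP_WORDS[COMP_CWORD]}
    # Generate filename matches inside $MAIN_FOLDER...
    local matches=( $(compgen -f -- "$MAIN_FOLDER$cur") )
    # ...then strip the prefix so they display as if rooted at '/'.
    COMPREPLY=( "${matches[@]#"$MAIN_FOLDER"}" )
}

complete -F _rooted d.sh    # wiring it to one command, as an example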

File name multiple extensions order

I want to create some bash scripts. They're actually going to be build scripts for Scala, so I'm going to identify them with my own .bld extension. They will be a sort of subtype of a shell script, hence I want them to be easily recognised as shell scripts. Should I call them
ProjectA.bld.sh //or
ProjectA.sh.bld
Edit: My natural inclination would be to go for the former, but .tar.gz files seem to follow the latter naming convention.
A shell script doesn't mind what you call it.
It just needs to:
be executable (chmod +x)
be in your path
contain a "shebang" as its first line: #!/bin/sh
The shebang determines which program is used to execute your script.
Call it ProjectA.bld.sh (or preferably buildProjectA.sh).
The .sh extension (although not necessary for the script to run) will allow you and everyone else to easily recognise it as a shell script.
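A minimal illustration of all three requirements together (the file contents are just an example):

$ cat ProjectA.bld.sh
#!/bin/sh
# The shebang above selects the interpreter; the rest is the build logic.
echo "building ProjectA..."

$ chmod +x ProjectA.bld.sh
$ ./ProjectA.bld.sh
building ProjectA...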
While for the most part, naming conventions like this don't really matter at all to Unix/Linux, the usual convention is for the "extensions" to be in the order of the steps used to create the file. So, for example, a file named foo.tar.bz2.gpg.part01 would indicate a sequence of operations like the following:
Use tar to create foo.tar, which contains some other files
Use bzip2 to compress foo.tar into foo.tar.bz2
Use gnupg to encrypt foo.tar.bz2 into foo.tar.bz2.gpg
Use split or something similar to break the file into chunks for transmission/storage, resulting in one or more foo.tar.bz2.gpg.part* files.
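In command form, that sequence might look something like this (the chunk size and options are illustrative):

tar -cf foo.tar somefiles/       # 1. archive
bzip2 foo.tar                    # 2. compress -> foo.tar.bz2
gpg -c foo.tar.bz2               # 3. encrypt  -> foo.tar.bz2.gpg
split -b 10M -d foo.tar.bz2.gpg foo.tar.bz2.gpg.part   # 4. -> ...part00, ...part01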
The naming conventions are mostly just for human semantic meaning, though, and there's nothing stopping you from doing exactly the opposite, or even something completely random, except your own ability to remember exactly what you did...

How to track file creation and modification

We have put together a perl script that essentially looks at the argument being passed to it, checks whether it is creating or modifying a file, and then saves that in a MySQL database so that it is easily accessible later. Here is the interesting part: how do I make this perl script run before every command typed in the terminal? I need to make this dummy-proof so people don't forget to run it.
Sorry, I didn't formulate this question properly. What I want to do is prepend the script to each command, so that each command runs like "./run.pl ls", for example. That way I can track file changes if the command is mv, or if it creates an out file, for example. The script pretty much takes care of that; I just don't know how to run it seamlessly for the user.
I am running ubuntu server with the bash terminal.
Thanks
If I understood correctly, you need to execute a function before running every command, something similar to preexec and precmd in zsh.
Unfortunately bash doesn't have native support for this, but you can do it using the DEBUG trap.
Here is some sample code applying this method.
This page also provides some useful information.
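A minimal sketch of the DEBUG-trap approach (the path to run.pl is a placeholder):

preexec() {
    # $BASH_COMMAND holds the command line about to be executed.
    /path/to/run.pl "$BASH_COMMAND"
}

trap 'preexec' DEBUG

Note that the DEBUG trap fires before every simple command, so a pipeline will trigger it more than once.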
You can modify the ~/.bashrc file and launch your script there. Do note that each user would (and should) still have the privilege to modify this file, potentially removing the script invocation.
The /etc/bash.bashrc file is system-wide and only changeable by root.
These .bashrcs are executed when a new instance of bash is created (e.g. new terminal).
It is not the same as sh, the system shell, which is dash on Ubuntu systems.
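So, for a system-wide hook, a line like the following at the end of /etc/bash.bashrc (added as root) would arm the trap for every bash user; the script path is again a placeholder:

trap '/path/to/run.pl "$BASH_COMMAND"' DEBUG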

Redirect program output without changing directory

Problem
I'm writing a set of scripts to help with automated batch job execution on a cluster.
The specific thing I have is a $OUTPUT_DIR, and an arbitrary $COMMAND.
I would like to execute the $COMMAND such that its output ends up in $OUTPUT_DIR.
For example, if COMMAND='cp ./foo ./bar; mv ./bar ./baz', I would like to run it such that the end result is equivalent to cp ./foo ./$OUTPUT_DIR/baz.
Ideally, the solution would look something like eval PWD="./$OUTPUT_DIR" $COMMAND, but that doesn't work.
Known solutions
[And their problems]
Editing $COMMAND: In most cases the command will be a script, or a compiled C or FORTRAN executable. Changing the internals of these isn't an option.
unionfs, aufs, etc.: While this is basically perfect, users running this won't have root, and creating thousands of arbitrary mounts seems like a questionable choice.
copying / hard or soft links: This might be the solution I will have to use: some variety of actually duplicating the entire content of ./ into ./$OUTPUT_DIR.
cd $OUTPUT_DIR; ../$COMMAND: Fails if $COMMAND ever reads files by relative path.
pipes: Only works if $COMMAND doesn't directly work with files, which it usually does.
Is there another solution that I'm missing, or is this request actually impossible?
[EDIT:] Chosen Solution
I'm going to go with something where each object in the directory is symbolic-linked into the output directory, and the command is then run from there.
This has the downside of creating a lot of symbolic links, but it shouldn't be too bad.
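Roughly, a sketch of that plan (assuming $OUTPUT_DIR is a plain subdirectory name inside the working directory):

mkdir -p "$OUTPUT_DIR"
for f in ./*; do
    # Skip the output directory itself to avoid a self-referential link.
    [ "$f" = "./$OUTPUT_DIR" ] && continue
    ln -s "$PWD/${f#./}" "$OUTPUT_DIR/"
done
( cd "$OUTPUT_DIR" && eval "$COMMAND" )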
You can't solve this without making some assumptions about the interface of $COMMAND. There is no single definition of what "output ends up in $OUTPUT_DIR" means. For one program this may be some files, but another program might just print something to stdout and yet another might try sending some data over the internet using some protocol or display something in a GUI and there isn't an obvious way of mapping all of these to "output goes to $OUTPUT_DIR".
So, you need to invent some assumptions and require any $COMMAND implementation to follow them. Then, it may get as simple as requesting that the command accept a parameter such as --target=<DIR>. If your command was some simple command, you would have to create a wrapper script around it to translate that parameter into what the app accepts. cp, mv and a few more utils already accept the parameter --target, so that may be a good starting point.
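Such a wrapper can be small; a hypothetical sketch, where runjob-wrapper and /path/to/real_command are placeholder names:

#!/bin/sh
# runjob-wrapper: accept --target=DIR and run the real command there.
case $1 in
    --target=*) dir=${1#--target=}; shift ;;
    *)          dir=. ;;
esac
cd "$dir" && exec /path/to/real_command "$@"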
You cannot set the output directory; you can only set the working directory.
The problem is, once you set the working directory, other references are going to be invalid. For example, in your command:
cp ./foo ./bar
If you have a specific command, there are workarounds (creating a script that alters arguments, prepending the directory to specific arguments), but in general this is not possible.

change shell directory from a script?

I want to make a script (to) that makes it easier for me to enter folders.
So, e.g., if I type "to apache", I want it to change the current directory to /etc/apache2.
However, when I use the "cd" command inside the script, it seems like it changes the path WITHIN the script, so the path in the shell has not changed.
How could I make this work?
Use an alias or function, or source the script instead of executing it.
BASH FAQ entry #60.
Use a function:
to_apache() {
    cd /etc/apache2
}
Put it in a file, e.g. mylibrary.sh, and whenever you want to use it, source the file, e.g.:
#!/bin/bash
source /path/mylibrary.sh
to_apache
As Ignacio said, make it into a function (or perhaps an alias).
The way I tend to do it is to have a shell script that creates the function, where the script and the function have the same name. Then at some point I source the script ('. funcname') and thereafter I can simply use the function.
I tend to prefer functions to aliases; it is easier to manage arguments etc.
Also, for the specific case of changing directories, I use CDPATH. The trick with using CDPATH is to have an empty entry at the start:
export CDPATH=:/work4/jleffler:/u/jleffler:/work4/jleffler/src:\
/work4/jleffler/src/perl:/work4/jleffler/src/sqltools:/work4/jleffler/lib:\
/work4/jleffler/doc:/u/jleffler/mail:/work4/jleffler/work:/work4/jleffler/ids
On this machine, my main home directory is /work4/jleffler. I can get to most of the relevant sub-directories in one go with 'cd whatever'.
If you don't put the empty entry (or an explicit '.') first, then you can't 'cd' into a sub-directory of the current directory, which is disconcerting at least.
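A minimal illustration of the effect (the paths are examples):

export CDPATH=:/etc    # leading ':' is the empty entry; plain 'cd subdir' still works
cd apache2             # from anywhere, lands in /etc/apache2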
Ignacio Vazquez-Abrams gave a link to what probably answers the question, although I didn't really follow it. The short answer is to use either "source" or a single dot before the command, e.g.:
. to apache
But I found there are downsides to this if you have a more complicated script. It seems that the original script filename variable ($0) is lost; I see "-bash" instead, so your script can't echo error text that would include the full filename.
Also, you can't use the "exit" command, or your shell will exit (especially disconcerting over ssh).
Also, the "basename" command gives an error if you use it on $0.
So, it seems to me that a function might be the only way to get around some of these problems, such as when you are passing parameters; see the sketch below.
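A hypothetical function version of the "to" script, taking a parameter (the destinations are examples):

to() {
    case $1 in
        apache) cd /etc/apache2 ;;
        logs)   cd /var/log ;;
        *)      echo "to: unknown destination: $1" >&2; return 1 ;;
    esac
}

Defined in your ~/.bashrc (or sourced from a library file), "to apache" then changes the calling shell's directory, and argument handling, $0 and exit behaviour all work as expected.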
