Check from within bash script that autocompletion is initialized - linux

I'm looking for a way to check, from within my shell script, that the script-specific completion has been initialized by the user with complete -F ...
I want this check to print advice on how to initialize the completion, like:
Warning: Auto completion is not initialized. Please run: source ....; complete -F ...
The problem is that the script, being run in a sub-shell, has no information about the "complete" environment of the parent shell where the user is working.
So complete -p | grep my-script-name never returns any result.
The user is expected to run the "source" and "complete" commands or add them to his .bashrc manually, because we're working on a server where we have no access to the system bash completion directory.
Alternatively, if you know a method of initializing (and not only checking) the auto-completion from within the script, I would happily accept it.

The only way your script can have access to such information from the parent shell is if it is sourced instead of executed in a sub-shell. Rather than instructing your users to add some configuration, you can design your script so that it works whether it is run or sourced.
Then you can simply inform your users that if they want completion enabled, they need to source your script rather than run it as an executable script (with instructions to use source or . as you wish).
However, in this case I would be inclined either to add this information to the documentation, or to put it in a banner that is always displayed (but can be disabled with an option switch like -q), rather than support two modes of running the script (since the gain is so small).
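To make that more concrete, here is a minimal sketch of such a dual-mode script; the script name my-script, the function _my_script_complete and the completion word list are all made-up placeholders:

#!/bin/bash
# my-script: works both when executed and when sourced

_my_script_complete() {
    # offer a fixed set of sub-commands for the word being completed
    local cur=${COMP_WORDS[COMP_CWORD]}
    COMPREPLY=( $(compgen -W "start stop status" -- "$cur") )
}

if [ "${BASH_SOURCE[0]}" != "$0" ]; then
    # the file is being sourced: register completion in the user's shell
    complete -F _my_script_complete my-script
    return 0
fi

# the file is being executed normally: do the real work here
echo "running my-script ..."

A user who wants completion would then put ". /path/to/my-script" in his .bashrc instead of (or in addition to) calling the script directly.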

Either you want every user to use bash completion (which I don't think is a good idea; I, for example, prefer to have it turned off), or you let users decide for themselves, but having this message printed out to them on every Tab Tab is a nuisance, don't you think?
I'd put whatever you think every user should run into /etc/bash.bashrc.
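For example, the lines added to /etc/bash.bashrc could look roughly like this (the path and the function name are placeholders for whatever your script actually ships):

# enable completion for my-script for every user
source /path/to/my-script-completion.sh
complete -F _my_script_complete my-script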

Related

Is there a means to keep track of commands executed on command line?

I wanted to know if there is some means of keeping track of every command executed on the command line, in chronological order, along with the working directory where it was executed and the output of the command. Something like putting all of this information for a session into a log file that you can go through later.
The acct program allows system administrators to know exactly what their users have been doing on the command line.
This site: http://www.cyberciti.biz/tips/howto-log-user-activity-using-process-accounting.html has a detailed description of how to enable it.
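As a rough illustration, with the acct tools (sometimes packaged as psacct) installed, enabling and querying process accounting looks roughly like this; package names and the accounting file location vary by distribution:

# Debian/Ubuntu: install the accounting tools
sudo apt-get install acct
# start writing accounting records to the accounting file
sudo accton /var/log/account/pacct
# list commands previously run by a given user
lastcomm username
# print a summary of the recorded commands
sudo sa

Note that process accounting records which commands were run and by whom, not their working directory or output.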

Change root path for bash auto-completion (TAB) feature

Can I force BASH to treat a certain folder (let's call it main_folder) as the root of my file system? I need BASH to behave this way at least during auto-completion of parameter names and command names while inside that folder.
Let's say that I have a directory tree that looks like this:
/z/y/main_folder/a.txt
/z/y/main_folder/bin/b.txt
/z/y/main_folder/bin/c.txt
/z/y/main_folder/bin/d.sh
Now, when I call this custom version of bash, I could simply type:
/> /bi(TAB)/(TAB) /a(TAB)
Which would expand to:
/> /bin/d.sh /a.txt
Where d.sh is the command to be run and a.txt is its first parameter. If I were cd'ed into /bin/, I could do:
/bin/> ./(TAB) (TAB)(TAB)
Which would expand the command to d.sh, and would give three options for the first parameter (namely: b.txt, c.txt and d.sh).
A few brief additional points:
I do not care if the original root of the file-system is inaccessible or is accessible via hard/soft link.
I do not care if I am able to run any commands that are out of scope for main_folder (I will change the $PATH variable anyway) or any shell builtins.
I do not care what the $PS#, $PWD, etc. variables actually hold.
I do not want to make my own version of BASH (by changing its source code). My application should (probably) be started via some script (sh) or program (C/C++/C#) that sets up the environment and either continues in interactive mode or runs an interactive shell on one of its last lines.
I want to run this as an unprivileged user. I do not want to allow the user to chroot.
I am not concerned with security, and I am not intending to jail anyone. I simply need BASH to auto-complete.
I would not mind to 'trap' BASH during directory lookups.
I have a feeling that the set, compgen, complete and compopt builtins are what I need to use, but I do not know how. The examples I have found for these commands do not seem to show all of their features, and the man pages are quite chaotic.
Thanks, Kupto :)
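Not a full answer, but to illustrate how the complete -F and compgen builtins mentioned above fit together, a completion function that only offers entries from under a fixed directory could look roughly like this (the main_folder path and the d.sh command are taken from the example above; this does not emulate a full fake root, it only restricts where matches come from):

# offer only entries found under /z/y/main_folder when completing
# arguments of d.sh; matches are generated relative to that folder
_main_folder_complete() {
    local root=/z/y/main_folder
    local cur=${COMP_WORDS[COMP_CWORD]}
    COMPREPLY=( $(cd "$root" && compgen -f -- "$cur") )
}
complete -F _main_folder_complete d.sh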

How to track file creation and modification

We have put together a Perl script that essentially looks at the argument passed to it, checks whether it is creating or modifying a file, and then saves that in a MySQL database so that it is easily accessible later. Here is the interesting part: how do I make this Perl script run before every command typed in the terminal? I need to make this foolproof so people don't forget to run it.
Sorry, I didn't formulate this question properly. What I want to do is prepend the script to each command, so that each command runs like "./run.pl ls", for example. That way I can track file changes if the command is mv, or if it creates an output file, for example. The script pretty much takes care of that; I just don't know how to run it seamlessly for the user.
I am running Ubuntu Server with the bash shell.
Thanks
If I understood correctly, you need to execute a function before running every command, something similar to preexec and precmd in zsh.
Unfortunately bash doesn't have native support for this, but you can do it using the DEBUG trap.
Here is some sample code applying this method.
This page also provides some useful information.
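In that spirit, a minimal sketch of the DEBUG-trap approach for this case could look like the following; the path to run.pl is a placeholder, and BASH_COMMAND is the bash variable holding the command that is about to be executed:

# run the tracking script before every command typed at the prompt
preexec_log() {
    /path/to/run.pl "$BASH_COMMAND"
}
trap 'preexec_log' DEBUG

Putting these lines in each user's ~/.bashrc (or in /etc/bash.bashrc) makes the hook apply to every interactive bash session.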
You can modify the ~/.bashrc file and launch your script there. Do note that each user would (and should) still have the privilege to modify this file, potentially removing the script invocation.
The /etc/bash.bashrc file is system-wide and only changeable by root.
These .bashrcs are executed when a new instance of bash is created (e.g. new terminal).
It is not the same as sh, the system shell, which is dash on Ubuntu systems.

Why is exported variable blank after script is over?

I have a simple command in a Linux shell script (say foo.sh). In it I do this:
export INSTALL_DIR=/mnt/share/TEST_Linux
I run the script with:
> sh foo.sh
When it finishes I try to get the variable but the value is blank.
> echo $INSTALL_DIR
If I type the command directly, the exported variable becomes global to the open terminal window. I'm using Ubuntu.
Setting environment variables is local to the child bash process running your script. To achieve what you want, you need to source the script, like this: source foo.sh. That means it is run by your main bash process, so the variable will still be set after the script finishes.
The variable is exported only in the new shell you are starting. You probably want to execute your script with source.
source foo.sh
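To make the difference concrete, assuming foo.sh contains only the export line from the question:

> sh foo.sh            # runs in a child process; INSTALL_DIR dies with it
> echo $INSTALL_DIR

> source foo.sh        # runs in the current shell
> echo $INSTALL_DIR
/mnt/share/TEST_Linux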
I don't know the answer, but I know how to overcome it.
# source ./foo.sh
# echo $INSTALL_DIR
And it's like magic.
I think it's because the script gets executed in its own "shell". Not sure.
Because the process you are running (the shell running your script) can do whatever it wants, but its actions won't affect the parent process (your current shell).
A somewhat weird analogy would be: I can take 5 tequila shots and my environment will become blurry and gravity laws would be affected according to my perception. But to my father, his environment is the same, he doesn't get drunk because of my actions.
If you want variables created or altered in your script to affect your current shell, you should source the script, as other answers have pointed out. Do note that doing this may also change the working directory of your shell if the script does cd /whatever/path, and that any functions set, altered or removed will likewise affect your shell.
A really weird and not very good analogy would be if I take 5 tequila shots and then my father kills me and drinks my blood.
Am I disturbed or what? ;-)

When executing a shell script in Ubuntu (or any Unix-type environment), how can I persist exports outside of the script?

I'm new to Linux and especially to Ubuntu 11 which I'm just trying today for the first time. I need Linux for some development which requires a Linux-based emulator, so I'm trying to write a shell script that sets up my dev environment.
Now I've created a .scripts folder in my home dir and added it to my path by exporting it in .bashrc so every time I start a new terminal instance, I can execute any custom scripts I drop in there.
Now one (three, actually) of those scripts sets up all my dev-related paths and exports, and also runs a cd command that switches to the appropriate folder for this dev work. However (again, forgive me if you already know this), the script runs in its own 'session' (for lack of a better word), so although the environment variables and such are all set up and exported (as was proven by embedding echo calls throughout), when the script finishes and I'm returned to the terminal where I executed it, that other session no longer exists, and apart from clearing the screen and echoing output, there's nothing showing the script ever ran.
Now I'm not sure whether it's even possible to extend exported variables outside of that script back to the calling 'instance', or whether there's some kind of flag I can set to execute the script in the existing session, so I'm stumped.
If that is not possible, is it at least possible to write a script or set up an icon that launches a new terminal window, executes the script, and leaves the window open and initialized?
Thanks!
Mark
Put the script in a function definition in ~/.bashrc. For example:
enter_dev_env() {
    # switch to the development tree
    cd /home/foo/src
    # set up whatever variables the environment needs
    export foo="bar"
}
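After opening a new terminal (or re-reading ~/.bashrc), the function is then available by name:

> source ~/.bashrc
> enter_dev_env
> pwd
/home/foo/src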
Run the command with source.
source foo.sh
