Why does Bash have such strange behavior regarding startup files? - linux

I found how bash reads startup files:
When Bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable. The --noprofile option may be used when the shell is started to inhibit this behavior.
http://www.gnu.org/software/bash/manual/bashref.html#Bash-Startup-Files
Why is that? I mean this chain of "~/.bash_profile, ~/.bash_login, and ~/.profile", and this logic of "if one of these files exists, the others are not read at all".
I really don't understand the point of it; why do we need so much mess? Why doesn't Bash just read one "global" and one "user-specific" startup file?

The reason for this is that there are different ways to use a shell, there are different shells and you may want to share / re-use some options (or not!).
For example, all shells derived from Bourne Shell read ~/.profile. So if you want to share options between /bin/sh, /bin/ksh and /bin/bash, put them there.
But then, you may want different options for Bash and ksh. In that case, use .bash_profile and .kshrc respectively and have them source the common ~/.profile.
Using the rules above, you can fine-tune your shell's setup. It will first load the config file that is most suitable for its purpose. In that config file, you can then choose to load others to inherit whatever you want. If you only use .profile, that makes it easy to switch between different shells.
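For example, a minimal ~/.bash_profile following that pattern might look like this (a sketch of the idea, not a prescribed layout):
# ~/.bash_profile
# Bash-specific login settings go here ...
# ... then pull in the options shared with sh and ksh:
if [ -f "$HOME/.profile" ]; then
    . "$HOME/.profile"
fi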
I'm not sure about the difference between .bash_profile and .bash_login; maybe this is a leftover from a bug or a change in the design.
Login scripts are executed only for login shells (i.e. the first shell the system creates when a user logs in; all other shells and processes will be children of it). The login shell sets up things like global variables which you want everywhere. A common example is the ID of the SSH agent, so you can load keys in any shell and they will work for every process of the same user. It doesn't make sense to do that for every shell that you start.
On the other hand, it doesn't make sense to define a prompt for non-interactive shells, so this goes into a different config script.
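As a sketch of that split (the file contents here are illustrative assumptions, not canonical):
# ~/.bash_profile -- read once by the login shell
if [ -f "$HOME/.ssh/agent-env" ]; then
    . "$HOME/.ssh/agent-env"       # exports SSH_AUTH_SOCK for all child processes
fi
[ -f "$HOME/.bashrc" ] && . "$HOME/.bashrc"

# ~/.bashrc -- read by every interactive shell
PS1='\u@\h:\w\$ '                  # the prompt only matters interactively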

Bash has a number of different ways of being started, and each of these allows for different configuration: interactive, non-interactive, login, non-login, sh, and any combination of these.
You are possibly confusing what would be easier for you with what would be easier for someone else with different requirements. This is pretty much the Linux/Unix way.
EDIT:
The reason for the loading order of the files is that .bash_login and .profile are treated as synonyms for .bash_profile. They come from the C shell's .login file and the Bourne and Korn shells' .profile. As I understand it, this ordering allows for backward compatibility (unsuccessful in the case of the C shell) with these other shells.

Related

How can I get the name of the sourced script in tcsh?

I'm looking for a way to get the name of a script that's being sourced from another script that's being executed in tcsh.
If I need to get the name of a script being executed (not sourced), it's $0. If I need to get the name of a script that's being sourced from the command line, I can get it from $_. But when an executed script sources another script, I get an empty value for $_ in the sourced script, so I can't get the script name or pathname from that.
I'm looking for a non-manual method for getting that information.
There isn't really anything for this; source is mostly just a way to read the file and run it in the current scope.
However, it does accept arguments; from the tcsh manpage:
source [-h] name [args ...]
The shell reads and executes commands from name. The commands
are not placed on the history list. If any args are given,
they are placed in argv. (+) source commands may be nested; if
they are nested too deeply the shell may run out of file
descriptors. An error in a source at any level terminates all
nested source commands. With -h, commands are placed on the
history list instead of being executed, much like `history -L'.
So, for example, source file.csh file.csh will have argv[1] set to file.csh.
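For instance (the file names here are made up for the example; note that sourcing with arguments overwrites the calling script's own argv):
# caller.csh
source file.csh file.csh

# file.csh
echo "sourced as: $1"    # prints "file.csh"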
Another option is to simply set a variable before the source command:
set src = "file.csh" # Will be available in file.csh
source file.csh
If you can't or don't want to modify the source call then you're out of luck as far as I know. (t)csh is an old crusty shell with many awkward things, large and small, and I would generally discourage using it for scripting unless you really don't have any option available.
$_ simply gets the last commandline from history; maybe, very maybe it's possible to come up with a super-hacky solution to (ab)use the history for this in some way, but it seems to me that just typing the filename twice is a lot easier.

How to accept the 'Did you mean?' terminal/git suggestion

This is a simple question.
Sometimes in a terminal, when you make a small mistake, the console asks "Did you mean ...?". Is there a way to quickly accept the suggestion?
For example:
$ git add . -all
error: did you mean `--all` (with two dashes ?)
Is there a command that repeats the last line, but with the two dashes?
If you forget to write sudo, you can just do sudo !! and it will solve your problem. I want to know if there is something similar but for the error: did you mean case.
In the case of...
$ git add . -all
error: did you mean `--all` (with two dashes ?)
...the message is written by git directly to the terminal. This means that bash has no way of knowing what message was written; it would be literally impossible to implement anything in the shell that could automate putting that correction in place without making programs run under the shell considerably less efficient (by routing their output through the shell rather than directly to the terminal) and without changing their behavior (if they ever call isatty() on their stdout or stderr).
That said, you can certainly run
^-all^--all
...if you haven't turned history expansion off, as with set +H (if it is off, it can be re-enabled with set -H). I typically turn this functionality off myself; it's often more trouble than it's worth, making commands which would work perfectly well in scripts break in interactive shells when they use characters that history expansion is sensitive to, particularly !.
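With history expansion on, the exchange would look something like this (quick substitution echoes the corrected command before running it):
$ git add . -all
error: did you mean `--all` (with two dashes ?)
$ ^-all^--all
git add . --all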

How to set TERM environment variable for linux shell

I've got a very odd problem when I set export TERM=xterm-256color in ~/.bash_profile. When I try to run nano or emacs, I get the following errors.
nano:
.rror opening terminal: xterm-256color
emacs:
is not defined.type xterm-256color
If that is not the actual type of terminal you have,
use the Bourne shell command `TERM=... export TERM' (C-shell:
`setenv TERM ...') to specify the correct type. It may be necessary
to do `unset TERMINFO' (C-shell: `unsetenv TERMINFO') as well.
If I manually enter the following into the shell, it works:
export TERM=xterm-256color
I'm stumped.
Looks like you have DOS line endings in your .bash_profile. Don't edit files on Windows, and/or use a proper tool to copy them to your Linux system.
Better yet, get rid of Windows.
In more detail, you probably can't see it, but the erroneous line actually reads
export TERM=xterm-256color^M
where ^M is a literal DOS carriage return.
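To confirm and fix this (a minimal sketch; assumes GNU grep and sed, and that ~/.bash_profile is the offending file):
grep -n $'\r' ~/.bash_profile        # show lines containing carriage returns
sed -i 's/\r$//' ~/.bash_profile     # strip the trailing CR from every line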
Like @EtanReisner mentions in a comment, you should not be hard-coding this value in your login files anyway. Linux tries very hard to set it to a sane value depending on things like which terminal you are actually using and how you are connected. At most, you might want to override a particular value which the login process often chooses but which is not to your liking. Let's say you want to change to xterm-256color iff the value is xterm:
case $TERM in xterm) TERM=xterm-256color;; esac
This is not a programming question and yet an extremely common question on StackOverflow. Please google before asking.

Bash (or other shell): wrap all commands with function/script

Edit: This question was originally bash specific. I'd still rather have a bash solution, but if there's a good way to do this in another shell then that would be useful to know as well!
Okay, top-level description of the problem. I would like to be able to add a hook to bash such that, when a user enters, for example, $ cat foo | sort -n | less, this is intercepted and translated into wrapper 'cat foo | sort -n | less'. I've seen ways to run commands before and after each command (using DEBUG traps or PROMPT_COMMAND or similar), but nothing about how to intercept each command and hand it to another process. Is there a way to do this?
For an explanation of why I'd like to do this, in case people have other suggestions of ways to approach it:
Tools like script let you log everything you do in a terminal to a log (as, to an extent, does bash history). However, they don't do it very well - script mixes input with output into one big string and gets confused with applications such as vi which take over the screen, history only gives you the raw commands being typed in, and neither of them work well if you have commands being entered into multiple terminals at the same time. What I would like to do is capture much richer information - as an example, the command, the time it executed, the time it completed, the exit status, the first few lines of stdin and stdout. I'd also prefer to send this to a listening daemon somewhere which could happily multiplex multiple terminals. The easy way to do this is to pass the command to another program which can exec a shell to handle the command as a subprocess whilst getting handles to stdin, stdout, exit status etc. One could write a shell to do this, but you'd lose much of the functionality already in bash, which would be annoying.
The motivation for this comes from trying to make sense of exploratory data analysis like procedures after the fact. With richer information like this, it would be possible to generate decent reporting on what happened, squashing multiple invocations of one command into one where the first few gave non-zero exits, asking where files came from by searching for everything that touched the file, etc etc.
Run this bash script:
#!/bin/bash
# Read each command line (with readline editing) and hand it to a wrapper
while read -e line
do
    wrapper "$line"
done
In its simplest form, wrapper could consist of eval "$line". You mentioned wanting to have timings, so maybe instead have time eval "$line". You wanted to capture the exit status, so this should be followed by the line save=$?. And, you wanted to capture the first few lines of stdout, so some redirecting is in order. And so on.
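Pulling those pieces together, a minimal wrapper sketch could look like this (the log path is an arbitrary choice, not part of the original answer):
#!/bin/bash
# wrapper: run the given command line, record its exit status, duration,
# and the first few lines of output.
line="$1"
start=$(date +%s)
output=$(eval "$line" 2>&1)
status=$?
end=$(date +%s)
printf '%s\n' "$output"                          # show the output to the user
{
    printf 'cmd: %s\nexit: %d\nduration: %ds\n' "$line" "$status" "$((end - start))"
    printf '%s\n' "$output" | head -n 5          # first few lines only
} >> "$HOME/command-log.txt"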
MORE: Jo So suggests that handling for multi-line bash commands be included. In its simplest form, if eval returns with "syntax error: unexpected end of file", then you want to prompt for another line of input before proceeding. Better yet, to check for proper bash syntax, run bash -n <<<"$line" before you do the eval. If bash -n reports the end-of-file error, then prompt for more input to append to "$line". And so on.
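A sketch of that multi-line handling, reusing the wrapper above (assumptions as before):
buffer=""
while IFS= read -e -r chunk
do
    buffer+="$chunk"$'\n'
    if bash -n <<<"$buffer" 2>&1 | grep -q 'unexpected end of file'
    then
        continue                 # command is incomplete; read another line
    fi
    wrapper "$buffer"
    buffer=""
done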
Binfmt_misc comes to mind. The Linux kernel has a capability to allow arbitrary executable file formats to be recognized and passed to a user application.
You could use this capability to register your wrapper, but instead of handling one particular executable format, it would handle all executables.

Securely using password as bash argument

I'm extracting a part of a web application that handles the signup; the other part will be rewritten.
The idea is that the signup part can exist as a separate application and interface with the rest of the application for creating and setting up the account. Obviously there are a ton of ways to do this, most of them network-based solutions like SOAP, but I'd like to use a simpler solution: a setup script.
The concern is that certain sensitive data, specifically the admin password of the new account, would be passed through bash.
I was thinking of sharing a small class between the applications so that the password can be passed already hashed, but I would also have to pass the salt, so it still seems like a (small) security risk. One of my concerns is bash history logging (can I disable that for a single command?), but I'm sure there are other concerns as well.
This would be on the same private server, so the risk seems minimal, but I don't want to take any chances whatsoever.
Thanks.
Use the $HISTFILE environment variable; unset it (this is for all users):
echo "unset HISTFILE" >> /etc/profile
Then set it back again.
More info on $HISTFILE variable here: http://linux.about.com/cs/linux101/g/histfileenviron.htm
Hope this helps!
From the man page of bash:
HISTIGNORE
    A colon-separated list of patterns used to decide which command
    lines should be saved on the history list. Each pattern is
    anchored at the beginning of the line and must match the complete
    line (no implicit '*' is appended). Each pattern is tested against
    the line after the checks specified by HISTCONTROL are applied.
    In addition to the normal shell pattern matching characters, '&'
    matches the previous history line. '&' may be escaped using a
    backslash; the backslash is removed before attempting a match.
    The second and subsequent lines of a multi-line compound command
    are not tested, and are added to the history regardless of the
    value of HISTIGNORE.
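As a sketch, you could use that to keep a particular sensitive invocation out of your history (the script name here is just an example):
HISTIGNORE="${HISTIGNORE:+$HISTIGNORE:}./setup.sh *"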
Or, based on your comment, you could store the password in a protected file, then read from it.
Passing the salt in the clear is no problem (the salt is usually stored in the clear anyway); the purpose of the salt is to prevent the same password from always hashing to the same value (without a salt, users with the same password would have the same hash, and rainbow tables would only need a single hash for each possible password).
What is more problematic is passing sensitive data through command line arguments: an eavesdropper on the same box can see the arguments to any command (on Linux they appear in /proc/<pid>/cmdline, and on most Unixes they can be seen using ps; some systems restrict permissions on /proc/<pid>/ to only the owner of the process for security).
What you could do is pass the sensitive information through a file; don't forget to set the umask to a very restrictive setting before creating the file.
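A minimal sketch of that approach (the script name and its option are assumptions for illustration):
umask 077                                      # files created below are private (0600)
pwfile=$(mktemp)                               # temporary file for the secret
printf '%s' "$ADMIN_PASSWORD" > "$pwfile"
./setup-account.sh --password-file "$pwfile"   # the setup script reads the file
rm -f "$pwfile"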
Bash doesn't normally log commands executed in scripts, but only in interactive sessions (depending on appropriate settings). To show this, use the following script:
#!/bin/bash
# Show history-related shell options and variables in the current context
echo "-- shopt --"
shopt | grep -i hist
echo "-- set --"
set -o | grep -i hist
echo "-- vars --"
for v in ${!HIST*}
do
    echo "$v=${!v}"
done
Run it like this:
$ ./histshow
and compare the output to that from sourcing it like this:
$ . ./histshow
In the first case, note that HISTFILE is not set and that the history option (from set -o) is off. In the second case, sourcing the script runs it in your interactive session and shows what your settings are for it.
I was only able to make a script keep an in-memory history by doing set -o history within the script and to log its history to a file by also setting HISTFILE and then doing an explicit history -w.
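For example, a script that keeps and saves its own history along those lines might look like this (the history file path is arbitrary):
#!/bin/bash
set -o history                  # keep an in-memory history inside the script
HISTFILE=$HOME/script-history   # where history -w will write
date                            # commands executed from here on are recorded...
uname -a
history -w                      # ...and flushed to $HISTFILE at the end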
