I'm looking for a way to get the name of a script that's being sourced from another script that's being executed in tcsh.
If I need to get the name of a script being executed (not sourced), it's $0. If I need to get the name of a script that's being sourced from the command line, I can get it from $_. But when an executed script sources a script, I get an empty value for $_ in the sourced script, so I can't get the script name or pathname from that.
I'm looking for a non-manual method for getting that information.
There isn't really anything for this; source is mostly just a way to read the file and run it in the current scope.
However, it does accept arguments; from the tcsh manpage:
source [-h] name [args ...]
The shell reads and executes commands from name. The commands are not placed on the history list. If any args are given, they are placed in argv. (+) source commands may be nested; if they are nested too deeply the shell may run out of file descriptors. An error in a source at any level terminates all nested source commands. With -h, commands are placed on the history list instead of being executed, much like `history -L'.
So for example source file.csh file.csh will have argv[1] set to file.csh.
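For instance, a small sketch (caller.csh and file.csh are hypothetical names; inside the sourced file, $argv[1] holds whatever was passed after the filename):

# caller.csh
source file.csh file.csh

# file.csh -- when sourced as above
set sourced_name = "$argv[1]"
echo "sourced as: $sourced_name"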
Another option is to simply set a variable before the source command:
set src = "file.csh" # Will be available in file.csh
source file.csh
If you can't or don't want to modify the source call, then you're out of luck as far as I know. (t)csh is an old, crusty shell with many awkward quirks, large and small, and I would generally discourage using it for scripting unless you really have no other option.
$_ simply gets the last command line from history; maybe, just maybe, it's possible to come up with a super-hacky solution that (ab)uses the history for this in some way, but it seems to me that just typing the filename twice is a lot easier.
Related
Hi, I'd like to get some help with my Linux bash homework.
I have to make a script that gets a directory and returns the depth of the deepest subdirectory (+1 for each directory).
I must do it recursively.
I must use 'list_dirs.sh', which takes the variable dir and echoes its subdirectories.
That's what I've got so far:
dir=$1
sub=`source list_dirs.sh`
((depth++))
for i in $sub
do
    if [ -n "$sub" ] ; then
        ./depthScript $dir/$i
    fi
done
if ((depth > max)) ; then
    max=$depth
    echo $max
fi
After testing with a dir that's supposed to return 3, I got this instead:
1
1
1
1
It seems like my depth counter forgets previous values and I get output for each directory. I need some help!
You can use bash functions to create recursive function calls.
Your function would ideally echo 0 in the base case where it is called on a directory with no subdirectories, and echo $((1 + $(getDepth "$subdir"))) in the case where some subdirectory $subdir exists. See this question on recursive functions in bash for a framework.
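A minimal sketch of such a function (getDepth is a hypothetical name, and it globs for subdirectories directly instead of going through list_dirs.sh):

getDepth() {
    local dir="$1"
    local max=0 sub depth
    for sub in "$dir"/*/ ; do
        [ -d "$sub" ] || continue                 # no subdirectories: base case, max stays 0
        depth=$(( $(getDepth "$sub") + 1 ))
        (( depth > max )) && max=$depth
    done
    echo "$max"
}

getDepth /some/dir    # prints 0 for a directory with no subdirectories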
When you run a script normally (i.e. it's in your PATH and you just enter its name, or you enter an explicit path to it like ./depthScript), it runs as a subprocess of the current shell. This is important because each process has its own variables. Variables also come in two kinds: shell variables (which are only available in that one process) and environment variables (the values of which get exported to subprocesses but not back up from them). And depending on where you want a variable's value to be available, there are three different ways to define them:
# By default, a variable is a shell variable that's only defined in this process:
shellvar=something
# `export` puts a variable into the environment, so it'll be exported to subprocesses.
# You can export a variable either while setting it, or as a separate operation:
export envvar=something
export anotherenvvar
anotherenvvar=something
# You can also prefix a command with a variable assignment. This makes an
# environment variable in the command process's environment, but not the current
# shell process's environment:
prefixvar=something ./depthScript $dir/$i
Given the above assignments:
shellvar is defined in the current shell process, but not in any other process (including the subprocess created to run depthScript).
envvar and anotherenvvar will be inherited by the subprocess (and its subprocesses, and all subprocesses for later commands), but any changes made to them in those subprocesses have no effect at all in the current process.
prefixvar is available only in the subprocess created to run depthScript (and its subprocesses), but not in the current shell process or any other of its subprocesses.
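A quick hypothetical demo of the difference, run from an interactive bash session:

shellvar=local
export envvar=exported
bash -c 'echo "shellvar=[$shellvar] envvar=[$envvar]"'   # the child shell expands these
# prints: shellvar=[] envvar=[exported]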
Short summary: it's a mess because of the process structure, and as a result it's best to just avoid even trying to pass values around between scripts (or different invocations of the same script) in variables. Use environment variables for settings and such that you want to be generally available (but don't need to be changed much). Use shell variables for things local to a particular invocation of a script.
So, how should you pass the depth values around? Well, the standardish way is for each script (or command) to print its output to "standard output", and then whatever's using the script can capture its output to either a file (command >outfile) or a variable (var=$(command)). I'd recommend the latter in this case:
depth=$(./depthScript "$dir/$i")
if ((depth > max)) ; then
max=$depth
fi
Some other recommendations:
Think your control and data flow through. The current script loops through all subdirectories, then at the end runs a single check for the deepest subdir. But you need to check each subdirectory individually to see if it's deeper than the current max, and at the end report the deepest of them.
Double-quote your variable references (as I did with "$dir/$i" above). Unquoted variable references are subject to word splitting and wildcard expansion, which is the source of much grief. It looks like you'll need to leave $sub unquoted because you need it to be split into words, but this will make the script unable to cope with directory names with spaces. See BashFAQ #20: "How can I find and safely handle file names containing newlines, spaces or both?"
The if [ -n "$sub" ] ; then test is irrelevant. If $sub is empty, the loop will never run.
In a shell script, relative paths (like ./depthScript) are relative to the working directory of the parent process, not to the location of the script. If someone runs your script from another directory, ./depthScript will not work. Use "$BASH_SOURCE" instead (see the sketch after this list). See BashFAQ #28: "How do I determine the location of my script? I want to read some config files from the same place."
When trying to troubleshoot a script, it can help to put set -x before the troublesome section. This makes the shell print each command as it runs, so you can see what's going on.
Run your scripts through shellcheck.net -- it'll point out a lot of common mistakes.
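If it helps, the "$BASH_SOURCE" fix from the list above might look roughly like this (a sketch; it assumes depthScript lives in the same directory as the calling script):

script_dir=$(dirname "$BASH_SOURCE")          # directory containing this script
depth=$("$script_dir/depthScript" "$dir/$i")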
I found how Bash reads startup files:
When Bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable. The --noprofile option may be used when the shell is started to inhibit this behavior.
http://www.gnu.org/software/bash/manual/bashref.html#Bash-Startup-Files
Why is that - I mean this queue of "~/.bash_profile, ~/.bash_login, and ~/.profile"? (And this logic of "if one of these files exists, the other ones are not read at all".)
I really don't understand the point of that, or why we need that much mess. Why doesn't Bash just read one "global" and one "user-specific" startup file?
The reason for this is that there are different ways to use a shell, there are different shells and you may want to share / re-use some options (or not!).
For example, all shells derived from Bourne Shell read ~/.profile. So if you want to share options between /bin/sh, /bin/ksh and /bin/bash, put them there.
But then, you may want different options for BASH and KSH. In that case, use .bash_profile and .kshrc respectively and have them source the common ~/.profile.
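For example, a minimal ~/.bash_profile along those lines might look like this (a sketch; the shopt line is just a placeholder for bash-specific settings):

# ~/.bash_profile -- read by bash login shells only
if [ -r ~/.profile ]; then
    . ~/.profile            # shared, Bourne-compatible settings
fi
# bash-only settings go below
shopt -s histappend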
Using the rules above, you can fine-tune your shell's setup. It will first load the config file which is most suitable for its purpose. In said config file, you can then choose to load others to inherit whatever you want. If you only use .profile, then that makes it easy to switch between different shells.
I'm not sure about the difference between .bash_profile and .bash_login; maybe this is a leftover from a bug or a change in the design.
Login scripts are executed only for login shells (i.e. the first shell a system creates when a user logs in; all other shells and processes will be children of it). The login shell contains things like global variables which you want everywhere. A common example is the ID of the SSH agent so you can load keys in any shell and they will work for every process of the same user. It doesn't make sense to do that for every shell that you start.
On the other hand, it doesn't make sense to define a prompt for non-interactive shells, so this goes into a different config script.
Bash has a number of different ways of being started, and each of these allows for different configuration. These include interactive, non-interactive, login, non-login, sh, and any combination of these.
You are possibly confusing what would be easier for you and what would be easier for someone else with different requirements. This is pretty much the linux / unix way.
EDIT:
The reason for the loading order of the files is that .bash_login and .profile are synonyms for .bash_profile. These come from the C shell's .login file and the Bourne and Korn shells' .profile. As I understand it, this ordering allows for backward compatibility (unsuccessful in the case of C shell) with these other shells.
Edit: This question was originally bash specific. I'd still rather have a bash solution, but if there's a good way to do this in another shell then that would be useful to know as well!
Okay, top level description of the problem. I would like to be able to add a hook to bash such that, when a user enters, for example, $ cat foo | sort -n | less, this is intercepted and translated into wrapper 'cat foo | sort -n | less'. I've seen ways to run commands before and after each command (using DEBUG traps or PROMPT_COMMAND or similar), but nothing about how to intercept each command and allow it to be handled by another process. Is there a way to do this?
For an explanation of why I'd like to do this, in case people have other suggestions of ways to approach it:
Tools like script let you log everything you do in a terminal to a log (as, to an extent, does bash history). However, they don't do it very well - script mixes input with output into one big string and gets confused with applications such as vi which take over the screen, history only gives you the raw commands being typed in, and neither of them works well if you have commands being entered into multiple terminals at the same time. What I would like to do is capture much richer information - as an example, the command, the time it executed, the time it completed, the exit status, the first few lines of stdin and stdout. I'd also prefer to send this to a listening daemon somewhere which could happily multiplex multiple terminals. The easy way to do this is to pass the command to another program which can exec a shell to handle the command as a subprocess whilst getting handles to stdin, stdout, exit status etc. One could write a shell to do this, but you'd lose much of the functionality already in bash, which would be annoying.
The motivation for this comes from trying to make sense of exploratory data analysis like procedures after the fact. With richer information like this, it would be possible to generate decent reporting on what happened, squashing multiple invocations of one command into one where the first few gave non-zero exits, asking where files came from by searching for everything that touched the file, etc etc.
Run this bash script:
#!/bin/bash
# Read one command line at a time (with readline editing) and hand it to the wrapper
while read -e line
do
    wrapper "$line"
done
In its simplest form, wrapper could consist of eval "$line". You mentioned wanting to have timings, so maybe instead have time eval "$line". You wanted to capture exit status, so this should be followed by the line save=$?. And, you wanted to capture the first few lines of stdout, so some redirecting is in order. And so on.
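Putting those pieces together, one possible wrapper might look roughly like this (a sketch; the log file path and the five-line limit are arbitrary choices, and capturing stdout this way won't play well with screen-oriented programs like vi):

wrapper() {
    local line="$1" start end status output
    start=$(date +%s)
    output=$(eval "$line")            # run the command, capturing its stdout
    status=$?                         # exit status of the command just run
    end=$(date +%s)
    printf 'cmd=%s start=%s end=%s status=%s\n' "$line" "$start" "$end" "$status" >> ~/command.log
    printf '%s\n' "$output" | head -n 5 >> ~/command.log    # first few lines of stdout
    printf '%s\n' "$output"           # still show the full output to the user
}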
MORE: Jo So suggests that handling for multiple-line bash commands be included. In its simplest form, if eval returns with "syntax error: unexpected end of file", then you want to prompt for another line of input before proceeding. Better yet, to check for proper bash commands, run bash -n <<<"$line" before you do the eval. If bash -n reports the end-of-line error, then prompt for more input to add to `$line'. And so on.
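A rough sketch of that multi-line handling (it assumes the "unexpected end of file" wording, which can vary between bash versions and locales):

line=""
while IFS= read -e -p '> ' next; do
    line+="$next"$'\n'
    if err=$(bash -n <<<"$line" 2>&1); then
        wrapper "$line"                           # syntactically complete: run it
        line=""
    elif [[ $err != *"unexpected end of file"* ]]; then
        printf '%s\n' "$err" >&2                  # a real syntax error, not an incomplete command
        line=""
    fi
    # otherwise the command is incomplete; keep reading more lines
done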
Binfmt_misc comes to mind. The Linux kernel has a capability that allows arbitrary executable file formats to be recognized and passed to a user application.
You could use this capability to register your wrapper, but instead of handling one particular executable format, it would handle all executables.
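Registering such a handler looks roughly like this (a heavily hedged sketch based on the kernel's binfmt_misc interface and requiring root; matching the ELF magic means it would also intercept the wrapper itself and essentially every binary on the system, so treat this as a thought experiment rather than a practical recipe):

# mount the binfmt_misc filesystem if it isn't mounted already
mount binfmt_misc -t binfmt_misc /proc/sys/fs/binfmt_misc
# register a handler named "wrap" that matches ELF binaries and hands them to /usr/local/bin/wrapper
echo ':wrap:M::\x7fELF::/usr/local/bin/wrapper:' > /proc/sys/fs/binfmt_misc/register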
I know the concept sounds a little abusive (?), but still - how can I create a pipe in bash which:
has no capacity
and therefore requires no memory copy, and
requires the write to be blocking
I am guessing a lot here. But possibly you are thinking about coprocesses and do not know what that term means.
bash supports coprocesses:
http://www.gnu.org/software/bash/manual/html_node/Coprocesses.html
The format for a coprocess is:
coproc [NAME] command [redirections]
This creates a coprocess named NAME.
If NAME is not supplied, the default name is COPROC.
NAME must not be supplied if command is a simple command (see Simple Commands);
otherwise, it is interpreted as the first word of the simple command.
When the coproc is executed, the shell creates an array variable (see Arrays) named NAME in the context of the executing shell. The standard output of command is connected via a pipe to a file descriptor in the executing shell, and that file descriptor is assigned to NAME[0].
The standard input of command is connected via a pipe to a file descriptor in the executing shell, and that file descriptor is assigned to NAME[1].
This pipe is established before any redirections specified by the command (see Redirections).
The file descriptors can be utilized as arguments to shell commands and redirections using standard word expansions.
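For completeness, a tiny coprocess example (MYCAT is an arbitrary name, and cat is used only because it echoes input back without buffering surprises):

coproc MYCAT { cat; }
echo "hello coproc" >&"${MYCAT[1]}"     # write to the coprocess's stdin
IFS= read -r reply <&"${MYCAT[0]}"      # read one line from its stdout
echo "$reply"                           # prints: hello coproc
eval "exec ${MYCAT[1]}>&-"              # close the write end so cat sees EOF and exits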
I am trying to add some security to a GET query that gets passed to the exec function.
If I remove the escapeshellarg() function, it works fine. How do I fix this issue?
ajax_command.php
<?php
$command = escapeshellarg($_GET['command']);
exec("/usr/bin/php-cli " . $command);
?>
Assume the $_GET['command'] value is run.php -n 3.
What other security checks can I add?
You want escapeshellcmd (escape a whole command, or in your case, sequence of arguments) instead of escapeshellarg (escape just a single argument).
Notice that although you have taken special precautions, this code allows anyone to execute arbitrary commands on your server anyway, by specifying the whole php script in a -r option. Note that php.ini cannot be used to restrict this, since its location can be overridden with -c. In short (and with a very small error margin): this code creates a severe security vulnerability.
escapeshellarg returns a quoted value, so if it contains multiple arguments, it won't work, instead looking like a single string-like argument. You should probably look at splitting the command up into several different parameters, then each can be escaped individually.
It will fail unless there's a file called run.php -n 3. You don't want to escape a single argument, you want to escape a filename and arguments.
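To make the difference concrete, these are roughly the commands the shell ends up running in the two cases (illustrative, not literal PHP output):

# with escapeshellarg(): the whole GET value becomes one quoted argument
/usr/bin/php-cli 'run.php -n 3'        # tries to run a file literally named "run.php -n 3"
# with escapeshellcmd(): metacharacters are escaped, but the arguments stay separate
/usr/bin/php-cli run.php -n 3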
This is not the proper way to do this. Have a single PHP script run all your commands for you, everything specified in command line arguments. Escape the arguments and worry about security inside that PHP file.
Or better yet, communicate through a pipe.