How to create a zero-copy, no-capacity, blocking pipe in bash?

I know the concept sounds a little abusive, but still: how can I create a pipe in bash which:
has no capacity,
and therefore requires no memory copy, and
requires every write to block until the data is read?

I am guessing a lot here, but possibly you are thinking of coprocesses without knowing the term.
bash supports coprocesses:
http://www.gnu.org/software/bash/manual/html_node/Coprocesses.html
The format for a coprocess is:
coproc [NAME] command [redirections]
This creates a coprocess named NAME.
If NAME is not supplied, the default name is COPROC.
NAME must not be supplied if command is a simple command (see Simple Commands);
otherwise, it is interpreted as the first word of the simple command.
When the coproc is executed, the shell creates an array variable (see Arrays) named NAME in the context of the executing shell. The standard output of command is connected via a pipe to a file descriptor in the executing shell, and that file descriptor is assigned to NAME[0].
The standard input of command is connected via a pipe to a file descriptor in the executing shell, and that file descriptor is assigned to NAME[1].
This pipe is established before any redirections specified by the command (see Redirections).
The file descriptors can be utilized as arguments to shell commands and redirections using standard word expansions.
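For example, a minimal sketch of talking to a coprocess through those descriptors (the name mycat and the use of cat are purely illustrative):
coproc mycat { cat; }             # start cat as a coprocess named mycat
echo hello >&"${mycat[1]}"        # write to the coprocess's standard input
read -r reply <&"${mycat[0]}"     # read a line back from its standard output
printf '%s\n' "$reply"            # prints: hello
One caveat for the original question: the pipes behind a coprocess are ordinary kernel pipes with buffering (typically 64 KiB on Linux), so this is not literally a zero-capacity channel; a write only blocks once that buffer is full.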

Related

How can I get the name of the sourced script in tcsh?

I'm looking for a way to get the name of a script that's being sourced from another script that's being executed in tcsh.
If I need to get the name of a script being executed (not sourced), it's $0. If I need to get the name of a script that's being sourced from the command line, I can get it from $_. But when an executed script sources a script, I get an empty value for $_ in the sourced script, so I can't get the script name or pathname from that.
I'm looking for a non-manual method for getting that information.
There isn't really anything for this; source is mostly just a way to read the file and run it in the current scope.
However, it does accept arguments; from the tcsh manpage:
source [-h] name [args ...]
The shell reads and executes commands from name. The commands
are not placed on the history list. If any args are given,
they are placed in argv. (+) source commands may be nested; if
they are nested too deeply the shell may run out of file
descriptors. An error in a source at any level terminates all
nested source commands. With -h, commands are placed on the
history list instead of being executed, much like `history -L'.
So for example source file.csh file.csh will have argv[1] set to file.csh.
Another option is to simply set a variable before the source command:
set src = "file.csh" # Will be available in file.csh
source file.csh
If you can't or don't want to modify the source call, then you're out of luck as far as I know. (t)csh is an old, crusty shell with many awkward quirks, large and small, and I would generally discourage using it for scripting unless you really have no other option.
$_ simply holds the last command line from history; maybe, just maybe, it's possible to come up with a super-hacky solution that (ab)uses the history for this in some way, but it seems to me that just typing the filename twice is a lot easier.
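As a sketch, the sourced file could check both mechanisms; the variable name src follows the example above, and everything else here is illustrative:
# inside file.csh, sourced as: source file.csh file.csh
if ( $#argv > 0 ) then
    echo "sourced as: $argv[1]"    # name passed as a source argument
else if ( $?src ) then
    echo "sourced as: $src"        # variable set by the caller before sourcing
endif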

Is it possible to detect how long a user hasn't typed a command?

I'm working in a Linux shell and I'm trying to write a script that works like a trigger, as below:
if user hasn't typed any command for 2 min
execute some executable file
fi
If I can have such a script, I could make it a daemon process. For example, I could make my system do this:
if user stops typing command for 2 min
shutdown -h now
fi
You'll need to be careful: for example, what if the command itself takes longer than 2 minutes to execute? Shutting down the system is probably unwanted in that case.
The only way to get this behavior exactly is from within the shell application itself (not as a separate entity/daemon), where you know the current state,
e.g. GETTING_COMMAND or EXECUTING_COMMAND.
In bash you can use the TMOUT variable:
TMOUT
If set to a value greater than zero, TMOUT is treated as the default timeout for the read builtin (see Bash Builtins). The select command (see Conditional Constructs) terminates if input does not arrive after TMOUT seconds when input is coming from a terminal.
In an interactive shell, the value is interpreted as the number of seconds to wait for a line of input after issuing the primary prompt. Bash terminates after waiting for that number of seconds if a complete line of input does not arrive.
Be aware that this will also affect any calls to read or select that the script makes.
Something like this would do what you're after:
export TMOUT=120   # child shell exits after 120 s without a complete line of input
bash               # run an interactive child shell; it ends on timeout (or when the user exits)
shutdown -h now    # reached once the child shell terminates
If you're developing your own shell, then of course you could use something like select(2) while taking the user's input.
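For completeness, a rough bash sketch of the same idea as a standalone loop using read -t instead of TMOUT (the timeout, prompt, and use of eval are all illustrative; a real command loop needs much more care):
#!/bin/bash
# Exit, and run the action, after 120 seconds without a complete input line.
while true; do
    if ! read -r -t 120 -p '$ ' line; then
        echo 'idle for 2 minutes'
        shutdown -h now
        break
    fi
    eval "$line"
done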

bash: Creating many descriptors in a loop

I am trying to create multiple descriptors to files named 1, 2, 3, etc. in bash.
For example, exec 9>abc/1 works just fine, but when I try to create descriptors in a for loop, like this: exec $[$i+8]>abc/$i, it doesn't work. I tried many different ways, but it seems that exec just does not accept variables. Is there any way to do what I want to?
EDIT: If not, maybe there is a way to use flock without descriptors?
Correct: the file descriptor number in a redirection must be a literal at parse time, so exec won't accept a variable there. As pointed out in the comments, you can use
eval "exec $((i + 8))>"'"abc/$i"'
which, if $i is 1, is equivalent to
exec 9>"abc/$i"
The layered quotes ensure that the eval-ed (and then exec-ed) command stays safe even if the file name is changed to something other than abc/1: $i is only expanded when the redirection runs, inside double quotes.
But there is a warning:
Redirections using file descriptors greater than 9 should be used with care, as they may conflict with file descriptors the shell uses internally.
So if your task doesn't require consecutive file descriptor numbers, you can use automatically allocated descriptors:
Each redirection that may be preceded by a file descriptor number may instead be preceded by a word of the form {varname}. In this case, for each redirection operator except >&- and <&-, the shell will allocate a file descriptor greater than 10 and assign it to varname.
So,
exec {fd}>"abc/$i"
echo "$fd"
will open file descriptor 10 (or greater) for writing to abc/1 and print that file descriptor number (e.g. 10).
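Putting that together for the loop in the question, a sketch with auto-allocated descriptors (the file names follow the question; the flock call is just one way to use the stored numbers, since flock(1) accepts a bare descriptor number):
#!/bin/bash
declare -a fds
for i in 1 2 3; do
    exec {fd}>"abc/$i"    # the shell assigns a free descriptor >= 10 to $fd
    fds[i]=$fd            # remember which descriptor belongs to which file
done
flock -n "${fds[1]}" || echo 'abc/1 is already locked'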

Unix shell descriptor redirection

How do I translate this:
echo "test" | tee -a test1 test2
into a pure UNIX descriptor-redirection solution (ideally a one-liner, and with no pipes)?
Is it possible?
If you want a byte written to one file descriptor (pipe, socket, etc.) to show up as readable data on more than one file descriptor that are not dup()s of each other (e.g. they correspond to two different regular files), then it's not possible on a generic Unix system. Even if the two file descriptors are dup()s, reading the byte from one of them makes it disappear from the other, so it can't be read twice.
If you want to do it in Bash without using a |, then it's not possible.
If you want to do it in Zsh without using a |, then just follow chepner's comment: enable multios and use two redirections:
setopt multios
echo test >>test1 >>test2
In the background, Zsh will create a helper process to do the copying, equivalent to what tee -a does.

Bash (or other shell): wrap all commands with function/script

Edit: This question was originally bash specific. I'd still rather have a bash solution, but if there's a good way to do this in another shell then that would be useful to know as well!
Okay, top-level description of the problem. I would like to be able to add a hook to bash such that, when a user enters, for example, $ cat foo | sort -n | less, this is intercepted and translated into wrapper 'cat foo | sort -n | less'. I've seen ways to run commands before and after each command (using DEBUG traps or PROMPT_COMMAND or similar), but nothing about how to intercept each command and allow it to be handled by another process. Is there a way to do this?
For an explanation of why I'd like to do this, in case people have other suggestions of ways to approach it:
Tools like script let you log everything you do in a terminal to a log (as, to an extent, does bash history). However, they don't do it very well - script mixes input with output into one big string and gets confused with applications such as vi which take over the screen, history only gives you the raw commands being typed in, and neither of them work well if you have commands being entered into multiple terminals at the same time. What I would like to do is capture much richer information - as an example, the command, the time it executed, the time it completed, the exit status, the first few lines of stdin and stdout. I'd also prefer to send this to a listening daemon somewhere which could happily multiplex multiple terminals. The easy way to do this is to pass the command to another program which can exec a shell to handle the command as a subprocess whilst getting handles to stdin, stdout, exit status etc. One could write a shell to do this, but you'd lose much of the functionality already in bash, which would be annoying.
The motivation for this comes from trying to make sense of exploratory data analysis like procedures after the fact. With richer information like this, it would be possible to generate decent reporting on what happened, squashing multiple invocations of one command into one where the first few gave non-zero exits, asking where files came from by searching for everything that touched the file, etc etc.
Run this bash script:
#!/bin/bash
# Read one command line at a time (with readline editing) and hand it off.
while read -e -r line
do
    wrapper "$line"
done
In its simplest form, wrapper could consist of eval "$line". You mentioned wanting to have timings, so maybe instead have time eval "$line". You wanted to capture the exit status, so this should be followed by the line save=$?. And you wanted to capture the first few lines of stdout, so some redirecting is in order. And so on.
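A rough sketch of such a wrapper function, assuming a simple tab-separated log file (the log path and format are invented for illustration; a real version would send this to your daemon instead):
wrapper() {
    local line=$1 start end status
    start=$(date +%s)     # when the command started
    eval "$line"          # run it in the current shell
    status=$?
    end=$(date +%s)       # when it completed
    printf '%s\t%s\t%s\t%s\n' "$start" "$end" "$status" "$line" >> ~/.cmdlog
    return "$status"
}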
MORE: Jo So suggests that handling for multi-line bash commands be included. In its simplest form, if eval returns with "syntax error: unexpected end of file", then you want to prompt for another line of input before proceeding. Better yet, to check for proper bash syntax, run bash -n <<<"$line" before you do the eval. If bash -n reports the end-of-file error, then prompt for more input to append to $line. And so on.
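As a sketch, that check could look like the following (matching on the error message is fragile and assumes bash's English diagnostics):
# Keep collecting input until the accumulated text parses as complete bash.
until err=$(bash -n <<<"$line" 2>&1); do
    case $err in
        *'unexpected end of file'*)
            read -e -r -p '> ' more || break
            line+=$'\n'$more ;;          # append the continuation line
        *)  echo "$err" >&2; break ;;    # a genuine syntax error: give up
    esac
done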
Binfmt_misc comes to mind. The Linux kernel has a capability to recognize arbitrary executable file formats and pass them to a user application.
You could use this capability to register your wrapper, but instead of handling one particular format, it would have to handle all executables.
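For reference, a handler is registered by writing a :name:type:offset:magic:mask:interpreter:flags record to the register file. The sketch below matches the ELF magic bytes; /usr/local/bin/wrapper is hypothetical, and intercepting every ELF binary this way is easy to get wrong, so treat it as a starting point only:
# as root; type 'M' matches the magic bytes \x7fELF at offset 0
echo ':elfwrap:M::\x7fELF::/usr/local/bin/wrapper:' \
    > /proc/sys/fs/binfmt_misc/register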
