Linux Named Pipes - MKFIFO query

I am fairly new to Linux, Bash, named pipes, and so on.
I am following an example from this article:
https://www.linuxjournal.com/content/using-named-pipes-fifos-bash
All works well and as expected. However this is only the beginning.
I would like to be able to call the writer script from the reader, so the two scripts can pass information through the pipe without my having to create a cron job for the writer.
The idea is that someone triggers the reader script without elevated permissions.
The reader calls the writer, which has some hard-coded sudo user (for testing purposes), evaluates the data, and returns the result to the reader.
Any advice is appreciated.

As I understand it, you require the following:
1. A writer which listens for requests to write data to the named pipe.
2. A reader which sends requests for data to the writer and reads the data from the named pipe.
3. The writer process should run as a privileged user, while the reader should run as an unprivileged user.
1 and 2 are possible with the scripts below, where:
The writer is run in the background and listens for requests: sh writer.sh &
When the reader is run, it sends a signal to the writer to trigger the writing of data to the named pipe
The reader then subsequently reads from the pipe and outputs the data.
3 is not possible because:
A process with lower privileges cannot send signals to a process with higher privileges.
Similarly, a script run by a user with lower privileges cannot launch another script with higher privileges (i.e. the reader cannot launch a writer with higher privileges).
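For illustration (1234 here is a made-up PID for a writer started by root), the reader's signal attempt would fail with something like:
$ kill -s SIGINT 1234
bash: kill: (1234) - Operation not permitted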
writer.sh
#!/bin/bash
# Store the PID of the writer process
echo $$ > /tmp/pid

# Specify location of named pipe
pipe=/tmp/datapipe

# Create data pipe if it doesn't exist
if [[ ! -p $pipe ]]; then
    echo "Pipe does not exist. Creating..."
    mkfifo "$pipe"
fi

# Send initial data to the pipe (in the background, so the script reaches
# the trap setup below without waiting for a reader to open the pipe)
echo "Hello" >"$pipe" &

# Send data to the pipe based on a trigger
function write_data {
    echo "Writing data"
    echo "Here is some data" >"$pipe" &
}

# Exit based on a trigger (SIGKILL cannot be trapped, so SIGTERM is used;
# the function is also not named "kill", to avoid shadowing the builtin)
function quit_writer {
    echo "Exiting"
    exit
}

# Listen for signals
trap write_data SIGINT
trap quit_writer SIGTERM

# listen
while true; do
    sleep 1
done
reader.sh
#!/bin/bash
pipe=/tmp/datapipe

# Read the writer's PID
pid=$(cat /tmp/pid)

# Trigger the writer to create data
kill -s SIGINT "$pid"

# Read data from the named pipe
if read -r line <"$pipe"; then
    echo "$line"
fi
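To try the pair out (both scripts run as the same user here), start the writer in the background and then run the reader:
bash writer.sh &     # writer records its PID in /tmp/pid and waits for signals
bash reader.sh       # signals the writer, then reads one line from the pipe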

Related

How can I send a timeout signal to a wrapped command in sbatch?

I have a program that, when it receives a SIGUSR1, writes some output and quits. I'm trying to get sbatch to notify this program before timing out.
I enqueue the program using:
sbatch -t 06:00:00 --signal=USR1 ... --wrap my_program
but my_program never receives the signal. I've tried sending signals while the program is running, with: scancel -s USR1 <JOBID>, but without any success. I also tried scancel --full, but it kills the wrapper and my_program is not notified.
One option is to write a bash file that wraps my_program and traps the signal, forwarding it to my_program (similar to this example), but I don't need this cumbersome bash file for anything else. Also, the sbatch --signal documentation clearly says that you need to specify signal=B: when you want to notify the enveloping bash script, so I believe the bash wrapper should not really be necessary.
So, is there a way to send a SIGUSR1 signal to a program enqueued using sbatch --wrap?
Your command is sending the USR1 to the shell created by the --wrap. However, if you want the signal to be caught and processed, you're going to need to write the shell functions to handle the signal and that's probably too much for a --wrap command.
These folks are doing it but you can't see into their setup.sh script to see what they are defining. https://docs.nersc.gov/jobs/examples/#annotated-example-automated-variable-time-jobs
Note that they use "." to run the code in setup.sh in the same process instead of spawning a sub-shell. You need that.
These folks describe a nice method of creating the functions you need: Is it possible to detect *which* trap signal in bash?
The only thing they don't show there is the function that actually takes action on receiving the signal. Here is what I wrote to do that: put it in a file that can be included from any user's sbatch submit script, and show them how to use it together with the --signal option:
trap_with_arg() {
    func="$1" ; shift
    for sig ; do
        echo "setting trap for $sig"
        trap "$func $sig" "$sig"
    done
}

func_trap () {
    echo "called with sig $1"
    case $1 in
        USR1)
            echo "caught SIGUSR1, making ABORT file"
            date
            cd "$WORKDIR"
            touch ABORT
            ls -l ABORT
            ;;
        *) echo "something else" ;;
    esac
}

trap_with_arg func_trap USR1 USR2
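As a rough sketch of how a user's submit script could pull this together (the trap_helpers.sh file name, the 300-second lead time, and the use of $SLURM_SUBMIT_DIR for WORKDIR are assumptions, not part of the original setup):
#!/bin/bash
#SBATCH -t 06:00:00
#SBATCH --signal=B:USR1@300   # send SIGUSR1 to the batch shell 300s before the time limit

export WORKDIR="$SLURM_SUBMIT_DIR"   # directory where func_trap will create the ABORT file

# Pull in the helper file with "." so it runs in this shell (no sub-shell);
# the file above already ends by calling trap_with_arg func_trap USR1 USR2
. ./trap_helpers.sh

# Run the program in the background and wait for it, so the batch shell
# can execute the trap as soon as the signal arrives
my_program &
wait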

Delaying not preventing Bash function from simultaneous execution

I need to block the simultaneous calling of highCpuFunction function. I have tried to create a blocking mechanism, but it is not working. How can I do this?
nameOftheScript="$(basename $0)"
pidOftheScript="$$"

highCpuFunction()
{
    # Function with code causing high CPU usage. Like tar, zip, etc.
    while [ -f /tmp/"$nameOftheScript"* ];
    do
        sleep 5;
    done
    touch /tmp/"$nameOftheScript"_"$pidOftheScript"
    echo "$(date +%s) I am a bad function; you do not want to call me simultaneously..."
    # Real high CPU usage code for reaching the database and
    # parsing logs. It takes the heck out of the CPU.
    rm -rf /tmp/"$nameOftheScript"_"$pidOftheScript" 2>/dev/null
}

while true
do
    sleep 2
    highCpuFunction
done

# The rest of the code...
In short, I want runs of highCpuFunction to be at least 5 seconds apart, regardless of the instance/user/terminal. Other users must be able to run this function, but in proper sequence and with a gap of at least 5 seconds.
Use the flock tool. Consider this code (let's call it 'onlyoneofme.sh'):
#!/bin/sh
exec 9>/var/lock/myexclusivelock
flock 9
echo start
sleep 10
echo stop
It will open the file /var/lock/myexclusivelock as descriptor 9 and then try to lock it exclusively. Only one instance of the script is allowed past the flock 9 command; the rest wait for the running script to finish (so the descriptor is closed and the lock freed). Then the next script acquires the lock and executes, and so on.
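Applied to the question, a sketch that serializes highCpuFunction across terminals and users and keeps the requested 5-second gap might look like this (the lock path is an assumption; it must be writable by all users involved):
#!/bin/bash
lock=/var/lock/highcpufunction.lock   # assumed shared lock file

highCpuFunction()
{
    (
        flock 9        # wait here until no other instance holds the lock
        echo "$(date +%s) running the expensive part exclusively"
        # ... real high CPU work (tar, zip, database, log parsing) ...
        sleep 5        # hold the lock 5 more seconds to enforce the gap
    ) 9>"$lock"
}

while true; do
    highCpuFunction
done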
In the following solution, the # rest of the script part can be executed by only one process at a time. The test-and-set is atomic, so there is no race condition, whereas with a separate test -f file followed by touch file, two processes could both touch the file.
try_acquire_lock() {
    local lock_file=$1
    # noclobber makes the redirection fail if the file already exists;
    # done in a sub-shell to avoid modifying the current shell's options
    ( set -o noclobber; : >"$lock_file")
}

lock_file=/tmp/"$nameOftheScript"_"$pidOftheScript"

while ! try_acquire_lock "$lock_file";
do
    echo "failed to acquire lock, sleeping 5sec.."
    sleep 5;
done

# Remove the file when the process exits (set only once the lock is held,
# so a waiting process that gets killed doesn't remove someone else's lock)
trap 'rm "$lock_file"' EXIT

# The rest of the script
It's not optimal, because the wait is done in a loop with sleep. To improve on it, one can use signals or inter-process communication through a FIFO; both are sketched below.
# Block current shell process
kill -STOP $BASHPID
# Unblock blocked shell process (where <pid> is the id of the blocked process)
kill -CONT <pid>
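And a sketch of the FIFO idea (the FIFO name and wake-up message are assumptions): waiting processes block on a read from the FIFO instead of polling, and the lock holder writes to it when releasing the lock.
release_fifo=/tmp/"$nameOftheScript".release
[ -p "$release_fifo" ] || mkfifo "$release_fifo"

while ! try_acquire_lock "$lock_file"; do
    # block here until the current holder announces its release (no sleep loop);
    # a spurious wake-up just means we re-check the lock and wait again
    read -r _ < "$release_fifo"
done

# The rest of the script

rm "$lock_file"
# wake up a waiting process; run in the background so this does not
# block when nobody is currently reading the FIFO
echo released > "$release_fifo" &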

defer pipe process to background after text match

So I have a bash command that starts a server, and it outputs some lines before getting to the point where it prints something like "Server started, Press Control+C to exit". How do I pipe this output so that, when this line occurs, I put the server process in the background and continue with another script/function (i.e. to do stuff that needs to wait until the server starts, such as running tests)?
I want to end up with 3 functions
start_server
run_tests
stop_server
I've got something along the lines of:
function read_server_output {
    while read -r data; do
        printf '%s\n' "$data"
        if [[ $data == "Server started, Press Control+C to exit" ]]; then
            # do something here to put the server process in the background
            # so I can run another function
            :
        fi
    done
}

function start_server {
    # start the server and pipe its output to another function to check it's running
    start-server-command | read_server_output
}

function run_tests {
    # do some stuff
    :
}

function stop_server {
    # stop the server
    :
}

# run the bash script code
start_server
run_tests
stop_server
A possibly related question: SH/BASH - Scan a log file until some text occurs, then exit. How?
Thanks in advance; I'm pretty new to this.
First, a note on terminology...
"Background" and "foreground" are controlling-terminal concepts, i.e., they have to do with what happens when you type ctrl+C, ctrl+Z, etc. (which process gets the signal), whether a process can read from the terminal device (a "background" process gets a SIGTTIN that by default causes it to stop), and so on.
It seems clear that this has little to do with what you want to achieve. Instead, you have an ill-behaved program (or suite of programs) that needs some special coddling: when the server is first started, it needs some hand-holding up to some point, after which it's OK. The hand-holding can stop once it outputs some text string (see your related question for that, or the technique below).
There's a big potential problem here: a lot of programs, when their output is redirected to a pipe or file, produce no output until they have printed a "block" worth of output, or are exiting. If this is the case, a simple:
start-server-command | cat
won't print the line you're looking for (so that's a quick way to tell whether you will have to work around this issue as well). If so, you'll need something like expect, which is an entirely different way to achieve what you want.
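If buffering does turn out to be the problem, the expect package ships an unbuffer wrapper that makes the program believe it is writing to a terminal, so a quick check (assuming unbuffer is installed) would be:
unbuffer start-server-command | cat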
Assuming that's not a problem, though, let's try an entirely-in-shell approach.
What you need is to run the start-server-command and save the process-ID so that you can (eventually) send it a SIGINT signal (as ctrl+C would if the process were "in the foreground", but you're doing this from a script, not from a controlling terminal, so there's no key the script can press). Fortunately sh has a syntax just for this.
First let's make a temporary file:
#! /bin/sh
# myscript - script to run server, check for startup, then run tests
TMPFILE=$(mktemp -t myscript) || exit 1 # create /tmp/myscript.<unique>
trap "rm -f $TMPFILE" 0 1 2 3 15 # arrange to clean up when done
Now start the server and save its PID:
start-server-command > $TMPFILE & # start server, save output in file
SERVER_PID=$! # and save its PID so we can end it
trap "kill -INT $SERVER_PID; rm -f $TMPFILE" 0 1 2 3 15 # adjust cleanup
Now you'll want to scan through $TMPFILE until the desired output appears, as in the other question. Because this requires a certain amount of polling you should insert a delay. It's also probably wise to check whether the server has failed and terminated without ever getting to the "started" point.
while ! grep '^Server started, Press Control+C to exit$' $TMPFILE >/dev/null; do
    # message has not yet appeared, is server still starting?
    if kill -0 $SERVER_PID 2>/dev/null; then
        # server is running; let's wait a bit and try grepping again
        sleep 1 # or other delay interval
    else
        echo "ERROR: server terminated without starting properly" 1>&2
        exit 1
    fi
done
(Here kill -0 is used to test whether the process still exists; if not, it has exited. The "cleanup" kill -INT will produce an error message, but that's probably OK. If not, either redirect that kill command's error-output, or adjust the cleanup or do it manually, as seen below.)
At this point, the server is running and you can do your tests. When you want it to exit as if the user hit ctrl+C, send it a SIGINT with kill -INT.
Since there's a kill -INT in the trap set for when the script exits (0) as well as when it's terminated by SIGHUP (1), SIGINT (2), SIGQUIT (3), and SIGTERM (15)—that's the:
trap "do some stuff" 0 1 2 3 15
part—you can simply let your script exit at this point, unless you want to specifically wait for the server to exit too. If you want that, perhaps:
kill -INT $SERVER_PID; rm -f $TMPFILE # do the pre-arranged cleanup now
trap - 0 1 2 3 15 # don't need it arranged anymore
wait $SERVER_PID # wait for server to finish exit
would be appropriate.
(Obviously none of the above is tested, but that's the general framework.)
Probably the easiest thing to do is to start it in the background and block on reading its output. Something like:
{ start-server-command & } | {
    while read -r line; do
        echo "$line"
        echo "$line" | grep -q 'Server started' && break
    done
    cat &
}
echo script continues here after server outputs 'Server started' message
But this is a pretty ugly hack. It would be better if the server could be modified to perform a more specific action which the script could wait for.
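For instance, if the server could be taught to create a marker file once it is ready, the script could simply wait for that file instead of scraping output (a sketch; the marker path is an assumption):
ready_file=/tmp/server.ready          # assumed path written by the modified server
rm -f "$ready_file"

start-server-command &
SERVER_PID=$!

# Poll for the marker, bailing out if the server dies before becoming ready
until [ -e "$ready_file" ]; do
    if ! kill -0 $SERVER_PID 2>/dev/null; then
        echo "ERROR: server terminated before becoming ready" 1>&2
        exit 1
    fi
    sleep 1
done

run_tests
kill -INT $SERVER_PID
wait $SERVER_PID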

Setting variables in a KSH spawned process

I have a lengthy menu script that relies on a few command outputs for its variables. Each of these commands takes several seconds to run, and I would like to spawn new processes to set the variables. It would look something like this:
VAR1=`somecommand` &
VAR2=`somecommand` &
...
wait
echo $VAR1 $VAR2
The problem is that the spawned processes die along with the variables they set. I realize I could write the outputs to a file and then read it back, but I would like to do this without a temp file. Any ideas?
You can get the whole process' output using command substitution, like:
VAR1=$(somecommand &)
VAR2=$(somecommand &)
...
wait
echo $VAR1 $VAR2
This is rather clunky, but works for me. I have three scripts.
cmd.sh is your "somecommand", it is a test script only:
#!/bin/ksh
sleep 10
echo "End of job $1"
Below is wrapper.sh, which runs a single command, captures the output, signals the parent when done, then writes the result to stdout:
#!/bin/ksh
sig=$1
shift
# run the remaining arguments as the command and capture its output
var=$("$@")
kill -$sig $PPID
echo $var
and here is the parent script:
#!/bin/ksh
trap "read -u3 out1" SIGUSR1
trap "read -p out2" SIGUSR2
./wrapper.sh SIGUSR1 ./cmd.sh one |&
exec 3<&p
exec 4>&p
./wrapper.sh SIGUSR2 ./cmd.sh two |&
wait
wait
echo "out1: $out1, out2: $out2"
echo "Ended"
There are two wait commands because the first one will be interrupted by the first signal.
In the parent script I am running the wrapper twice, once for each job, passing in the command to be run and any arguments. The |& means "pipe to background" - run as a co-process.
The two exec commands copy the co-process pipe file descriptors to FDs 3 and 4. When a job finishes, its wrapper signals the main process to read the pipe. The signals are caught by the traps, which read the pipe belonging to the appropriate child process and gather the resulting data.
Rather convoluted and clunky, but it appears to work.
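If the parent script is saved as, say, parent.sh alongside wrapper.sh and cmd.sh above, a run should end with output roughly like:
$ ./parent.sh
out1: End of job one, out2: End of job two
Ended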

Example of using named pipes in Linux shell (Bash)

Can someone post a simple example of using named pipes in Bash on Linux?
One of the best examples of a practical use of a named pipe...
From http://en.wikipedia.org/wiki/Netcat:
Another useful behavior is using netcat as a proxy. Both ports and hosts can be redirected. Look at this example:
nc -l 12345 | nc www.google.com 80
This starts an nc server on port 12345 (the port that receives the request), and all connections get redirected to google.com:80. If a web browser makes a request to nc, the request will be sent to Google, but the response will not be sent back to the web browser, because pipes are unidirectional. This can be worked around with a named pipe that redirects the input and output.
mkfifo backpipe
nc -l 12345 0<backpipe | nc www.google.com 80 1>backpipe
Here are the commands:
mkfifo named_pipe
echo "Hi" > named_pipe &
cat named_pipe
The first command creates the pipe.
The second command writes to the pipe (blocking). The & puts this into the background so you can continue to type commands in the same shell. It will exit when the FIFO is emptied by the next command.
The last command reads from the pipe.
Open two different shells, and leave them side by side. In both, go to the /tmp/ directory:
cd /tmp/
In the first one type:
mkfifo myPipe
echo "IPC_example_between_two_shells">myPipe
In the second one, type:
while read line; do echo "What has been passed through the pipe is ${line}"; done<myPipe
The first shell won't give you a prompt back until you execute the second part of the code in the second shell, because FIFO reads and writes block.
You can also run ls -al myPipe to see the details of this special file type.
The next step would be to embed the code in a script!
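For example, the two-terminal demo above folded into a single script (writer in the background, reader in the foreground) could look like:
#!/bin/bash
pipe=/tmp/myPipe
[ -p "$pipe" ] || mkfifo "$pipe"

# writer: runs in the background and blocks until the reader below opens the pipe
echo "IPC_example_between_two_shells" > "$pipe" &

# reader: consumes everything written to the pipe
while read -r line; do
    echo "What has been passed through the pipe is ${line}"
done < "$pipe"

wait
rm "$pipe"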
Creating a named pipe
$ mkfifo pipe_name
On Unix-like systems, a named pipe (FIFO) is a special type of file with no content. The mkfifo command creates the pipe on a file system (assigns a name to it), but doesn't open it. You need to open and close it separately, like any other file.
Using a named pipe
Named pipes are useful when you need to pipe from/to multiple processes or if you can't connect two processes with an anonymous pipe. They can be used in multiple ways:
In parallel with another process:
$ echo 'Hello pipe!' > pipe_name & # runs writer in a background
$ cat pipe_name
Hello pipe!
Here the writer runs alongside the reader, allowing real-time communication between processes.
Sequentially with file descriptors:
$ # open the pipe on auxiliary FD #5 in both ways (otherwise it will block),
$ # then open descriptors for writing and reading and close the auxiliary FD
$ exec 5<>pipe_name 3>pipe_name 4<pipe_name 5>&-
$
$ echo 'Hello pipe!' >&3 # write into the pipe through FD #3
...
$ exec 3>&- # close the FD when you're done
$ # (otherwise reading will block)
$ cat <&4
Hello pipe!
...
$ exec 4<&-
In fact, communication through a pipe can be sequential, but it's limited to 64 KB (buffer size).
It's preferable to use descriptors to transfer multiple blocks of data in order to reduce overhead.
Conditionally with signals:
$ handler() {
> cat <&3
>
> exec 3<&-
> trap - USR1 # unregister signal handler (see below)
> unset -f handler writer # undefine the functions
> }
$
$ exec 4<>pipe_name 3<pipe_name 4>&-
$ trap handler USR1 # register handler for signal USR1
$
$ writer() {
> if <condition>; then
> kill -USR1 $PPID # send the signal USR1 to a specified process
> echo 'Hello pipe!' > pipe_name
> fi
> }
$ export -f writer # pass the function to child shells
$
$ bash -c writer & # can actually be run sequentially as well
$
Hello pipe!
The auxiliary FD allows the data transfer to start before the shell is ready to receive it; this is required when the pipe is used sequentially.
The signal should be sent before the data, to prevent a deadlock if the pipe buffer fills up.
Destroying a named pipe
The pipe itself (and its content) gets destroyed when all descriptors to it are closed. What's left is just a name.
To make the pipe anonymous and unavailable under the given name (can be done while the pipe is still open) you could use the rm console command (it's the opposite of the mkfifo command):
$ rm pipe_name
Terminal 1:
$ mknod new_named_pipe p
$ echo 123 > new_named_pipe
Terminal 1 creates a named pipe and writes data into it using echo.
The echo blocks, because there is no receiving end yet (pipes, named or unnamed, need both a reading end and a writing end).
Terminal 2:
$ cat new_named_pipe
123
$
From Terminal 2, a receiving end is added by reading the pipe with cat.
Since the pipe now has both a reading end and a writing end, the data is displayed and the blocking in Terminal 1 stops.
Named pipes are used in many places on Linux. Note that the character and block special files shown by ls -l under /dev are device files, not pipes; a FIFO shows up in ls -l with its own file type, p.
Named pipes can be used in blocking or non-blocking mode, and their main advantage is that they provide one of the simplest forms of IPC.
