Redirecting stderr, stdout and stdin to pipes blocks the process from running - linux

I have written a simple application which has two threads. The first thread prints to stdout and the second thread reads from stdin.
I have redirected the stdin, stdout and stderr of the process to 3 different pipes as below.
mkfifo pipe_in && mkfifo pipe_out && mkfifo pipe_err
./a.out < pipe_in 2> pipe_err 1> pipe_out &
The problem is that this application (./a.out) is blocked from running until I do the following:
cat < pipe_out &
cat < pipe_err &
cat > pipe_in
Why is this application blocked? Is it because nobody on the other side has opened the pipe?
What is a workaround so that I can run my application without it being blocked completely? I want only the thread that is waiting for user input to block, and the other thread to continue execution.
This application is started at bootup, so it should run without blocking on user input. The user can run "cat > pipe_in" at any time to start providing input and get some details about this application.

Redirection is done by the shell, before starting the application program. Thus a.out does not start, and cannot create any threads that do anything, until the opens of all three pipes complete. Opening a FIFO normally blocks until the other end is opened as well: the opens of the write sides (for 1 and 2) do not complete until the read sides are opened, and the open of the read side (for 0) does not complete until a writer opens it. That is why all three cat commands are needed before a.out runs.
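A workaround sketch for the boot-time case, assuming Linux semantics: opening a FIFO read-write never blocks (POSIX leaves O_RDWR on a FIFO undefined, but Linux permits it), so the launching script can hold both ends open itself and start a.out immediately. The reader thread then blocks in read() until someone writes to pipe_in, while the other thread keeps running:
exec 3<>pipe_in 4<>pipe_out 5<>pipe_err   # read-write opens; these do not block on Linux
./a.out <&3 >&4 2>&5 &
exec 3>&- 4>&- 5>&-                       # launcher drops its copies; a.out keeps its own
One caveat: pipe_out and pipe_err have finite kernel buffers (64 KiB by default on modern Linux), so if nothing ever drains them, the printing thread will eventually block on write().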

Related

How to log the live output of a running process

I want to run a game server inside my Ubuntu machine. I want to run it in the background and write the live output of that process inside a log file. I tried using nohup and running the game server using "&" at the end but I couldn't make it work the way I wanted.
Then I started reading about named pipes and actually gave it a go. I made a simple script that in theory should work. But, of course I am missing something.
First, I made a pipe using the mkfifo command.
mkfifo testpipe
Then I created a small script:
#!/bin/bash
./mta-server64 > testpipe &    # write to the pipe that was actually created
pid=$!
echo $pid    # so I know the PID of the process
cat < testpipe > log.txt &
(Note: I wrote this code from memory.)
The code works only when there is an error and the process stops. It actually records the game console error. But when the game server is running I get no output in the log file.
I want to read the output (stdout and stderr, if I am not mistaken) of a process running in the background and record it in a log file.
I also thought about using screen as it logs everything inside a file but I would prefer not using it if there is a better solution.
EDIT:
First of all: thank you for the interest you had in helping me. In the same way, I have to apologize for only giving scarce details about what I intend to do with this small project and for my limited understanding of stdout and stderr.
Let's go to the first base.
I want to run a game server named Multi Theft Auto (https://multitheftauto.com/). This is GTA San Andreas but multiplayer.
I can easily run this game server on my Ubuntu server by calling the executable ./mta-server64. After calling it, the game server console appears:
[|] MTA: San Andreas :: 0/32 players :: 196 resources :: 125 fps (25)
MTA:BLUE Server for MTA:SA
==================================================================
= Multi Theft Auto: San Andreas v1.5.6 [64 bit]
==================================================================
= Server name : Default MTA Server
= Server IP address: auto
= Server port : 22884
=
= Log file : /root/mta/mods/deathmatch/logs/server.log
= Maximum players : 32
= HTTP port : 22564
= Voice Chat : Disabled
= Bandwidth saving : Medium
==================================================================
[09:49:07] Resource 'mapmanager' requests some acl rights. Use the command 'aclrequest list mapmanager'
[09:49:07] Resources: 196 loaded, 0 failed
[09:49:07] Starting resources...
[09:49:07] Server minclientversion is now 1.5.6-9.16588.0
[09:49:07] INFO: MAPMANAGER: Some important ACL permissions are missing. To ensure the correct functioning of Mapmanager, please write: aclrequest allow mapmanager all
[09:49:07] Gamemode 'play' started.
[09:49:07] Authorized serial account protection is enabled for the ACL group(s): `Admin` See http://mtasa.com/authserial
[09:49:07] WARNING: <owner_email_address> not set
[09:49:07] Server started and is ready to accept connections!
[09:49:07] To stop the server, type 'shutdown' or press Ctrl-C
[09:49:07] Type 'help' for a list of commands.
[09:49:07] Querying MTA master server... success! (Auto detected IP:xxx.xxx.xxx.xxx)
I am using the following script to run the process in the background and (try to) get the live output from:
#!/bin/bash
newport=$(shuf -i 22003-22900 -n 1)
newip=$(shuf -i 22003-22900 -n 1)    # despite the name, this picks a port (used for <serverport> below)
rm -rf ~/server/*
cp -r /home/user*/ftp/server/mtaserver/serverfiles/* ~/server
sed -i "s/<httpport>[0-9][0-9][0-9][0-9][0-9]<\/httpport>/<httpport>$newport<\/httpport>/g" ~/server/mods/deathmatch/mtaserver.conf
sed -i "s/<serverport>[0-9][0-9][0-9][0-9][0-9]<\/serverport>/<serverport>$newip<\/serverport>/g" ~/server/mods/deathmatch/mtaserver.conf
~/server/mta-server64 2>&1 | tee -a outfile &
mta_pid=$!    # note: after a pipeline, $! is the PID of the last command in it (tee)
echo $mta_pid
sleep 6
kill $mta_pid    # kill takes a PID; pkill (as originally written) matches process names instead
(Note: Because of some technical problems I had to add the first few lines of script which automatically replace the game files with new ones and also replace the existing ports with random ones.)
This script starts the server and tries to log the output of the process. The process is automatically killed after a few seconds so there is only one instance of the game server at any given time.
THE ISSUE:
This script only logs the output if there is an error. I still cannot get the live output of the process while it is running. Maybe this is an issue with the game server, but I truly believe there should be a way to make it work the way I intend.
I believe you want to use the tee command to split the pipe output to a log file.
I suggest you read this article and these answers 1 2.
Usually this is enough:
nohup somecommand > somecommand.log 2>&1 &
Then follow the logs with:
tail -F somecommand.log
After 2 days I finally figured out a way to make it work (the way I intended to work, without taking in consideration any major security/performance risks).
Reading the comments made me realize I was attacking the wrong point. The stdout of the game server is buffered, thus making it impossible to log it into a log file using the methods I tried when I posted my question (at least this is what I came to understand).
I did some research on how to run the application without having the stdout buffered: https://serverfault.com/questions/294218/is-there-a-way-to-redirect-output-to-a-file-without-buffering-on-unix-linux
My code now:
mkfifo pipe    # create the named pipe first
stdbuf -o0 ~/server/mta-server64 >> pipe &
cat < pipe | tee -a outfile &
After creating the named pipe, it runs the game server with stdout redirected into the pipe, and then appends everything read from the pipe to the log file.
The stdbuf -o0 command disables the stdout buffering (as noted in the link above).
This works for me and I cannot guarantee it will work for anybody else. I am still not aware if disabling the buffering is a safe approach to my issue but for now it is what I need.
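If stdbuf is not available, one commonly suggested alternative is to run the program under script(1), which gives it a pseudo-terminal so its stdout becomes line-buffered; a hedged equivalent of the two lines above:
script -q -c "$HOME/server/mta-server64" /dev/null | tee -a outfile &
Here /dev/null discards the typescript file that script would otherwise write, and tee still appends the live output to outfile.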

Is it possible to pass input to a running service or daemon?

I want to create a Java console application that runs as a daemon on Linux. I have created the application and the script to run it as a background daemon. The application runs and waits for command line input.
My question:
Is it possible to pass command line input to a running daemon?
On Linux, all running processes have a special directory under /proc containing information and hooks into the process. Each subdirectory of /proc is the PID of a running process. So if you know the PID of a particular process you can get information about it. E.g.:
$ sleep 100 & ls /proc/$!
...
cmdline
...
cwd
environ
exe
fd
fdinfo
...
status
...
Of note is the fd directory, which contains all the file descriptors associated with the process. 0, 1, and 2 exist for (almost?) all processes, and 0 is the default stdin. So writing to /proc/$PID/fd/0 will write to that process' stdin.
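For example (the PID and the text are hypothetical; note that this only reaches the process when its fd 0 is a pipe or FIFO, since writing to a terminal device just echoes on that terminal):
echo "some command" > /proc/1234/fd/0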
A more robust alternative is to set up a named pipe connected to your process' stdin; then you can write to that pipe and the process will read it without needing to rely on the /proc file system.
See also Writing to stdin of background process on ServerFault.
The accepted answer above didn't quite work for me, so here's my implementation.
For context I'm running a Minecraft server on a Linux daemon managed with systemctl. I wanted to be able to send commands to stdin (StandardInput).
First, use mkfifo /home/user/server_input to create a FIFO file somewhere (also known as the 'named pipe' solution mentioned above).
Then, in your daemon *.service file, execute the bash script that runs your server or background program and set the StandardInput directive to the FIFO file we just created:
[Service]
ExecStart=/usr/local/bin/minecraft.sh
StandardInput=file:/home/user/server_input
In minecraft.sh, the following is the key command that runs the server and gets input piped into the console of the running service.
tail -f /home/user/server_input | java -Xms1024M -Xmx4096M -jar /path/to/server.jar nogui
Finally, run systemctl start your_daemon_service and to pass input commands simply use:
echo "command" > /home/user/server_input
Credit to the answers given on ServerFault.

How can I launch a new process that is NOT a child of the original process?

(OSX 10.7) An application we use lets us assign scripts to be called when certain activities occur within the application. I have assigned a bash script and it is being called; the problem is that I need to execute a few commands, wait 30 seconds, and then execute some more commands. If I have my bash script do a "sleep 30", the entire application freezes for those 30 seconds while waiting for my script to finish.
I tried putting the 30 second wait (and the second set of commands) into a separate script and calling "./secondScript &" but the application still sits there for 30 seconds doing nothing. I assume the application is waiting for the script and all child processes to terminate.
I've tried these variations for calling the second script from within the main script, they all have the same problem:
nohup ./secondScript &
( ( ./secondScript & ) & )
( ./secondScript & )
nohup script -q /dev/null secondScript &
I do not have the ability to change the application and tell it to launch my script and not wait for it to complete.
How can I launch a process (I would prefer the process to be in a scripting language) such that the new process is not a child of the current process?
Thanks,
Chris
p.s. I tried the "disown" command and it didn't help either. My main script looks like this:
[initial commands]
echo Launching second script
./secondScript &
echo Looking for jobs
jobs
echo Sleeping for 1 second
sleep 1
echo Calling disown
disown
echo Looking again for jobs
jobs
echo Main script complete
and what I get for output is this:
Launching second script
Looking for jobs
[1]+ Running ./secondScript &
Sleeping for 1 second
Calling disown
Looking again for jobs
Main script complete
and at this point the calling application sits there for 45 seconds, waiting for secondScript to finish.
p.p.s
If, at the top of the main script, I execute "ps" the only thing it returns is the process ID of the interactive bash session I have open in a separate terminal window.
The value of $SHELL is /bin/bash
If I execute "ps -p $$" it correctly tells me
PID TTY TIME CMD
26884 ?? 0:00.00 mainScript
If I execute "lsof -p $$" it gives me all kinds of results (I didn't paste all the columns here assuming they aren't relevant):
FD TYPE NAME
cwd DIR /private/tmp/blahblahblah
txt REG /bin/bash
txt REG /usr/lib/dyld
txt REG /private/var/db/dyld/dyld_shared_cache_x86_64
0 PIPE
1 PIPE -> 0xffff8041ea2d10
2 PIPE -> 0xffff8017d21cb
3r DIR /private/tmp/blahblah
4r REG /Volumes/DATA/blahblah
255r REG /Volumes/DATA/blahblah
The typical way of doing this in Unix is to double fork. In bash, you can do this with
( sleep 30 & )
(..) creates a child process, and & creates a grandchild process. When the child process dies, the grandchild process is inherited by init.
If this doesn't work, then your application is not waiting for child processes.
Other things it may be waiting for include the session and open lock files:
To create a new session, Linux has setsid. On OS X, you might be able to do it through script, which incidentally also creates a new session:
# Linux:
setsid sleep 30
# OS X:
nohup script -q -c 'sleep 30' /dev/null &
To find a list of inherited file descriptors, you can use lsof -p yourpid, which will output something like:
sleep 22479 user 0u CHR 136,32 0t0 35 /dev/pts/32
sleep 22479 user 1u CHR 136,32 0t0 35 /dev/pts/32
sleep 22479 user 2u CHR 136,32 0t0 35 /dev/pts/32
sleep 22479 user 5w REG 252,0 0 1048806 /tmp/lockfile
In this case, in addition to the standard FDs 0, 1 and 2, you also have fd 5 open on a lock file that the parent may be waiting on.
To close fd 5, you can use exec 5>&-. If you think the lock file might be stdin/stdout/stderr themselves, you can use nohup to redirect them to something else.
Another way is to abandon the child
#!/bin/bash
yourprocess &
disown
As far as I understand, the application does not behave like a normal bash shell here: it is still waiting for the process to finish even though init should have taken care of this orphaned child process.
It could be that the "application" intercepts the orphan handling which is normally done by init.
In that case, only a parallel process with some IPC can offer a solution (see my other answer).
I think it depends on how your parent process tries to detect whether your child process has finished.
In my case (my parent process was GNU make), I succeeded by closing stdout and stderr (loosely based on another answer here) like this:
sleep 30 >&- 2>&- &
You might also close stdin
sleep 30 <&- >&- 2>&- &
or additionally disown your child process (not for Mac)
sleep 30 <&- >&- 2>&- & disown
Currently tested only in bash on Kubuntu 14.04 and Mac OS X.
If all else fails:
Create a named pipe.
Start the "slow" script independently of the application, and make sure it executes its task in an endless loop that begins by reading from the pipe. It will block on that read until data arrives; see the sketch below.
From the application, start your other script. When it needs to invoke the "slow" script, it just writes some data to the pipe. The "slow" script then proceeds independently, so your script doesn't wait for it to finish.
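A minimal sketch, with hypothetical paths and placeholder commands:
#!/bin/bash
# worker.sh -- started at boot, NOT by the application
mkfifo -m 600 /tmp/trigger 2>/dev/null   # create the pipe if it doesn't exist yet
while true; do
    read line < /tmp/trigger    # blocks here until the other script writes
    sleep 30
    # ...second set of commands go here...
done
And in the script the application calls:
echo go > /tmp/trigger          # returns immediately; the worker does the waiting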
So, to answer the question:
bash - how can I launch a new process that is NOT a child of the original process?
Simple: don't launch it yourself; let an independent entity launch it, either during boot (like init) or on the fly with the at or batch commands.
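For example, with at (assuming the at daemon is available; on OS X, atrun is disabled by default):
echo "./secondScript" | at now
The job is executed by atd, so secondScript becomes a child of atd rather than of your script or the calling application.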
Here I have a shell
└─bash(13882)
Where I start a process like this:
$ (urxvt -e ssh somehost &)
I get a process tree (this output snipped from pstree -p):
├─urxvt(14181)───ssh(14182)
where the process is parented beneath pid 1 (systemd in my case).
However, had I instead done this (note where the & is):
$ (urxvt -e ssh somehost)&
then the process would be a child of the shell:
└─bash(13882)───urxvt(14181)───ssh(14182)
In both cases the shell prompt is immediately returned and I can exit
without terminating the process tree that I started above.
For the latter case the process tree is reparented beneath pid 1 when
the shell exits, so it ends up the same as the first example.
├─urxvt(14181)───ssh(14182)
Either way, the result is a process tree that outlives the shell. The
only difference is the initial parenting of that process tree.
For reference, you can also use
nohup urxvt -e ssh somehost &
urxvt -e ssh somehost & disown $!
Both give the same process tree as the second example above.
└─bash(13882)───urxvt(14181)───ssh(14182)
When the shell is terminated the process tree is, like before, reparented
to pid 1.
nohup additionally redirects the process' standard output to a file
nohup.out so, if that is a useful trait, it may be a more useful choice.
Otherwise, with the first form above, you immediately have a completely
detached process tree.

How to start a stopped process in Linux

I have a stopped process in Linux at a given terminal. Now I am at another terminal. How do I start that process? What kill signal would I send? I own that process.
You can issue a kill -CONT pid, which will do what you want as long as the other terminal session is still around. If the other session is dead it might not have anywhere to put the output.
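For example, to find and resume it from the other terminal (the PID is hypothetical):
ps -u $USER -o pid,stat,comm | awk '$2 ~ /^T/'   # stopped processes have state "T"
kill -CONT 12345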
In addition to #Dave's answer, there is an advanced method to redirect input and output file descriptors of a running program using GDB.
A FreeBSD example for an arbitrary shell script with PID 4711:
> gdb /bin/sh 4711
...
Attaching to program: /bin/sh, process 4711
...
(gdb) p close(1)
$1 = 0
(gdb) p creat("/tmp/testout.txt",0644)
$2 = 1
(gdb) p close(2)
$3 = 0
(gdb) p dup2(1,2)
$4 = 2
EDIT - explanation: this closes filehandle 1, then opens a file, which reuses 1. Then it closes filehandle 2 and duplicates filehandle 1 to 2.
Now this process' stdout and stderr go to the indicated file and are readable from there. If stdin is required, you need to p close(0) and then attach some input file, a pipe, or something similar.
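A sketch of that stdin side in the same gdb session, assuming a hypothetical input file (0 is the numeric value of O_RDONLY, and open() reuses the lowest free descriptor, which after the close is 0):
(gdb) p close(0)
(gdb) p open("/tmp/testin.txt", 0)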
For the time being, I could not find a method to remotely disown this process from the controlling terminal, which means that when the terminal exits, this process receives a SIGHUP signal.
Note: If you do have/gain access to the other terminal, you can disown -a so that this process will continue to run after the terminal closes.

How to know from a bash script if the user abruptly closes ssh session

I have a bash script that acts as the default shell for a user logging in through ssh.
It provides a menu with several options, one of which is sending a file using netcat.
The netcat of the embedded linux I'm using lacks the -w option, so if the user closes the ssh connection without ever sending the file, the netcat command waits forever.
I need to know if the user abruptly closes the connection so the script can kill the netcat command and exit gracefully.
Things I've tried so far:
Trapping the SIGHUP: it is not issued. The only signal I could find being issued is SIGCONT, but I don't think it's reliable or portable.
Playing with the -t option of the read command to detect a closed stdin: this would work if not for a silly bug in the embedded read command (it only times out on the first invocation).
Edit:
I'll try to answer the questions in the comments and explain the situation further.
The code I have is:
nc -l -p 7576 > /dev/null 2>> $LOGFILE < $TMP_DIR/$BACKUP_FILE &
wait
I'm ignoring SIGINT and SIGTSTP, but I've tried to trap all the signals and the only one received is SIGCONT.
Reading the bash man page I've found out that the SIGHUP should be sent to both script and netcat and that the SIGCONT is sent to stopped jobs to ensure they receive the SIGHUP.
I guess the wait makes the script count as stopped and so it receives the SIGCONT but at the same time the wait somehow eats up the SIGHUP.
So I've tried changing the wait for a sleep and then both SIGHUP and SIGCONT are received.
The question is: why is the wait blocking the SIGHUP?
Edit 2: Solved
I solved it polling for a closed stdin with the read builtin using the -t option. To work around the bug in the read builtin I spawn it in a new bash (bash -c "read -t 3 dummy").
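A sketch of that polling loop, reusing the variables from the question:
nc -l -p 7576 > /dev/null 2>> $LOGFILE < $TMP_DIR/$BACKUP_FILE &
nc_pid=$!
while kill -0 $nc_pid 2> /dev/null; do
    bash -c "read -t 3 dummy"    # re-spawned bash works around the buggy embedded builtin
    status=$?
    if [ $status -ne 0 ] && [ $status -le 128 ]; then
        kill $nc_pid             # read hit EOF: stdin closed, the ssh session is gone
        break
    fi
done
read returns a status greater than 128 on timeout, so a small nonzero status means stdin reached EOF.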
Does the Parent PID change? If so, you could look up the parent in the process list and make sure the process name is correct.
I have written similar applications. It would be helpful to see more of your shell code. I think there may be a way of writing your overall program differently which would address this issue.
