Linux I/O to a running daemon / process

Is it possible to do I/O with a running process?
I have multiple game servers running like this:
cd /path/to/game/server/binary
./binary arg1 arg2 ... argn &
Is it possible to write a message to a server if I know the process id?
Something like this would be handy:
echo "quit" > process1234
Where process1234 is the process (with pid 1234).
The game server is not a binary written by me; it is a Call of Duty binary, so I can't change anything in the code.

Yes, you can start up the process with a pipe as its stdin and then write to the pipe. You can use a named or an anonymous pipe.
Normally a parent process is needed to do this: it creates an anonymous pipe and supplies that to the child process as its stdin - popen() does this, and many libraries also implement it (see Perl's IPC::Open2, for example).
Another way would be to run it under a pseudo-tty, which is what "screen" does. Screen itself may also have a mechanism for doing this.
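For illustration, here is a minimal C sketch of the anonymous-pipe approach. It is untested against the actual Call of Duty binary; the path and arguments are just the placeholders from the question, and error handling is omitted:
#include <unistd.h>

int main(void)
{
    int fds[2];
    pipe(fds);                          /* fds[0] = read end, fds[1] = write end */

    if (fork() == 0) {                  /* child: becomes the game server */
        dup2(fds[0], 0);                /* the pipe's read end becomes its stdin */
        close(fds[1]);
        chdir("/path/to/game/server/binary");
        execl("./binary", "./binary", "arg1", "arg2", (char *)NULL);
        _exit(127);                     /* only reached if exec fails */
    }

    close(fds[0]);                      /* parent keeps only the write end */
    /* ... later, when the server should shut down: */
    write(fds[1], "quit\n", 5);
    close(fds[1]);
    return 0;
}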

Only if the process is listening for some message somewhere. For instance, your game server can be waiting for input on a file, over a network connection, or from standard input.
If your process is not actively listening for something, the only things you can really do are halt or kill it.
Now if your process is waiting on standard input, and you ran it like so:
$ myprocess &
Then (in Linux) you should be able to try the following:
$ jobs
[1]+ Running myprocess &
$ fg 1
And at this point you are typing standard input into your process.

You can only do that if the process is explicitly designed for that.
But since your example is requesting that the process quit, I'd recommend trying signals. First try sending the TERM (i.e. terminate) signal, which is the default:
kill <pid>
If that doesn't work, you can try other signals such as QUIT:
kill -QUIT <pid>
If all else fails, you can use the KILL signal. This is guaranteed (*) to stop the process, but the process will have no chance to clean up:
kill -KILL <pid>
* - in the past, kill -KILL would not work if the process was hung on a flaky network file server. I don't know if they ever fixed this.

I'm pretty sure this would work, since the server has a console on stdin:
echo "quit" > /proc/<server pid>/fd/0
You mention in a comment below that your process does not appear to read from the console on fd 0. But it must be reading on some fd. Run ls -l /proc/<server pid>/fd/ and look for one that's pointing at /dev/pts/ if the process is running in a gnome-terminal or xterm or something.

If you want to do a few simple operations on your server, use signals as mentioned elsewhere. Set up signal handlers in the server and have each signal perform a different action (see the sketch after this list), e.g.:
SIGINT: Reread config file
SIGHUP: quit
...
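A minimal sketch of that idea, assuming you control the server's source (the handler and flag names here are made up for illustration):
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t reload_requested = 0;
static volatile sig_atomic_t quit_requested = 0;

static void on_sigint(int sig) { (void)sig; reload_requested = 1; }  /* SIGINT: reread config file */
static void on_sighup(int sig) { (void)sig; quit_requested = 1; }    /* SIGHUP: quit */

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigint;
    sigaction(SIGINT, &sa, NULL);
    sa.sa_handler = on_sighup;
    sigaction(SIGHUP, &sa, NULL);

    while (!quit_requested) {
        if (reload_requested) {
            reload_requested = 0;
            puts("rereading config file");
        }
        pause();   /* wait for the next signal; a real server would use
                      sigsuspend() or a self-pipe to avoid the race here */
    }
    puts("quitting");
    return 0;
}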

Highly hackish, don't do this if you have a saner alternative, but you can redirect a process's file descriptors on the fly if you have ptrace permissions.
$ echo quit > /tmp/quitfile
$ gdb binary 1234
(gdb) call dup2(open("/tmp/quitfile", 0), 0)
(gdb) continue
open("/tmp/quitfile", O_RDONLY) returns a file descriptor to /tmp/quitfile. dup2(..., STDIN_FILENO) replaces the existing standard input by the new file descriptor.
We inject this code into the application using gdb (but with numeric constants, as #define constants may not be available), and taadaah.

Simply run it under screen and don't background it. Then you can either connect to it with screen interactively and tell it to quit, or (with a bit of expect hackery) write a script that will connect to screen, send the quit message, and disconnect.

Related

Linux cpu-to-cpu inter-process communications

I have written a Linux C program which runs on an embedded processor and behaves like a shell: either interactively, giving a prompt, parsing user commands, and executing them in an indefinite loop, or non-interactively, reading and parsing a single command from its invocation arguments. I want to run the same program concurrently on another embedded processor, which is reachable over Ethernet (e.g. via ssh), and use it for some commands because the second processor has access to some hardware that the first processor does not. I usually need to capture and process the output from that remote command. Currently, I invoke the program on the second processor for each command -- e.g.
system ("ssh other-cpu my_program "do this command > /vtmp/out_capt");
system ("scp other-cpu:/vtmp/out_capt .")
This works, but it is slow. Is there a better way, perhaps using pipes? If someone could point me to their pick of a best way to do this kind of IPC I would appreciate it.
You could get rid of scp and just save the output from ssh on the local machine. Something like this:
ssh other-cpu '( my_program command )' > file.log
Or if you want to run multiple commands:
ssh other-cpu > file.log << EOF
my_program command
my_program other_command
EOF
There are a few ways to do this, with varying speed and complexity. [Naturally :-)], the fastest requires the most setup.
(1) You can replace your two command sequence with an output pipe.
You create a single pipe via pipe(2) and do a fork(2). The child attaches the write end of the pipe to its stdout and then execs the remote command, e.g. execlp("ssh", "ssh", "remote", "my_program", "blah", (char *)NULL). The parent reads the results from the read end of the pipe. No need for a temp file.
This is similar to what you're currently doing in that you do an ssh for each remote command you want to execute, but eliminating the temp file and the scp.
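As a rough sketch, popen(3) does exactly this pipe/fork/exec dance for you; the host name and command below are just the ones from the question:
#include <stdio.h>

int main(void)
{
    FILE *fp = popen("ssh other-cpu \"my_program 'do this command'\"", "r");
    if (fp == NULL)
        return 1;

    char buf[4096];
    while (fgets(buf, sizeof buf, fp))   /* process the remote output here */
        fputs(buf, stdout);

    return pclose(fp);
}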
(2) You can modify my_program to accept various commands from stdin. I believe you already do this in the program's "shell" mode.
You create two pipes via pipe(2) and again fork a child. As before, attach the write end of the "from_remote" pipe to the child's stdout, but now also attach the read end of the "to_remote" pipe to its stdin.
In the parent, write a command line to the write end of the "to_remote" pipe. The remote reads this line (via its stdin, the read end of the "to_remote" pipe), parses it the way a shell would, and fork/execs the resulting command.
After the child program on the remote has terminated, my_program can output a separator line.
The parent reads the data as before until it sees this separator line.
Now, any time the local side wants to do something on the remote, the pipes are already set up. It can just write subsequent commands to the write end of the "to_remote" pipe and repeat the process.
Thus, no teardown and recreation required. Only one ssh needs to be set up. This is similar to setting up a server with a socket, but we're using ssh and pipes.
If the local side wishes to close the connection, it can close the pipe on its end, or send a command such as "!!!stop".
If your remote target commands are text based, the separator is relatively easy to choose (i.e. some string that none of your programs would ever output, like _jsdfl2_werou_tW__987_).
If you've got raw binary data, my_program may have to filter/encapsulate the data in some way (e.g. similar to what the PPP protocol does with its flag character).
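Here is a hedged sketch of case (2): a persistent ssh "coprocess" with a pipe on each of its stdin and stdout. It assumes my_program (in shell mode) reads command lines from stdin and has been changed to print the separator line after each command; error handling and child reaping are omitted:
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int to_remote[2], from_remote[2];
    pipe(to_remote);                      /* parent writes, child reads */
    pipe(from_remote);                    /* child writes, parent reads */

    if (fork() == 0) {
        dup2(to_remote[0], 0);            /* child's stdin  <- to_remote */
        dup2(from_remote[1], 1);          /* child's stdout -> from_remote */
        close(to_remote[1]);
        close(from_remote[0]);
        execlp("ssh", "ssh", "other-cpu", "my_program", (char *)NULL);
        _exit(127);
    }
    close(to_remote[0]);
    close(from_remote[1]);

    FILE *out = fdopen(to_remote[1], "w");
    FILE *in  = fdopen(from_remote[0], "r");

    /* Send one command, then read lines until the agreed separator appears.
       Repeat as often as needed; the single ssh stays up the whole time. */
    fprintf(out, "do this command\n");
    fflush(out);

    char line[4096];
    while (fgets(line, sizeof line, in) &&
           strcmp(line, "_jsdfl2_werou_tW__987_\n") != 0)
        fputs(line, stdout);

    return 0;
}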
(3) You can create a version of my_program (e.g. my_program -server) that acts like a server that listens on a socket [in shell mode].
The "protocol" is similar to case (2) above, but may be a bit easier to set up because a network socket is inherently bidirectional (vs. the need for two pipe(2) calls above).
One advantage here is that you're communicating directly over a TCP socket, bypassing the overhead of the encryption layer.
You can either start the remote server at boot time or you can use a [one-time] ssh invocation to kick it off into the background.
There is one additional advantage. Instead of the "separater line" above, the local could make a separate socket connection to the server for each command. This is still slower than above, but faster than creating the ssh on each invocation.
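A sketch of the case (3) client side, under the assumption that "my_program -server" listens on TCP port 5000 (an arbitrary choice here) and answers one newline-terminated command per connection:
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo("other-cpu", "5000", &hints, &res) != 0)
        return 1;

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (connect(fd, res->ai_addr, res->ai_addrlen) == -1)
        return 1;
    freeaddrinfo(res);

    dprintf(fd, "do this command\n");    /* send the command line */
    shutdown(fd, SHUT_WR);               /* tell the server we're done sending */

    char buf[4096];                      /* relay the reply to stdout */
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    close(fd);
    return 0;
}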

Does running a process in the background reduce its permissions?

I am using an embedded system which runs Linux. When I run a compiled C program in the foreground, it works correctly. However, when I add the '&' after the program call, to make it run as a job in the background, certain features stop working correctly. The main feature which stops working is the use of the 'read' function (unistd.h), used to read from a socket.
Does running a process in the background reduce its permissions?
What else could cause this behaviour?
Edit:
The function uses the 'select' and 'read' functions to read from a socket used for the reception of CANbus message frames. When data is received, we analyse it and 'echo' a string into a .txt file, to act as a datalogger. When run in the foreground, the file is created and appended to successfully, but when run in the background, the file is not created/appended to.
The only difference between running a process in the foreground or the background is the interaction with your terminal.
Typically when you background a process its stdin gets disconnected (it no longer reads input from your keyboard) and you can no longer send keyboard-shortcut signals like Ctrl-C/Ctrl-D to the process.
Other than that nothing changes: no permissions or priorities are changed.
No, a process doesn't have its permissions changed when going into the background.
What can happen is that the standard file descriptors 0, 1, 2 (stdin, stdout, stderr) end up pointing somewhere other than the usual files before the process's code starts executing - for example, a shell without job control (common in non-interactive or embedded setups) gives a background job its stdin from /dev/null.
Similarly, if you use >/file/path, the stdout descriptor will point to that particular file.
You can verify this with
ls -l /proc/process_number/fd

Linux process in background - "Stopped" in jobs?

I'm currently running a process with the & sign.
$ example &
However (please note I'm a newbie to Linux), I realised that pretty much a second after running such a command I get a note that my process received a stop signal. If I do
$ jobs
I'll get the list with my example process and a little note "Stopped". Is it really stopped and not working at all in the background? How exactly does it work? I'm getting mixed info from the Internet.
In Linux and other Unix systems, a job that is running in the background, but still has its stdin (or std::cin) associated with its controlling terminal (a.k.a. the window it was run in) will be sent a SIGTTIN signal, which by default causes the program to be completely stopped, pending the user bringing it to the foreground (fg %job or similar) to allow input to actually be given to the program. To avoid the program being paused in this way, you can either:
Make sure the program's stdin channel is no longer associated with the terminal, by either redirecting it to a file with appropriate contents for the program to read, or to /dev/null if it really doesn't need input - e.g. myprogram < /dev/null &.
Exit the terminal after starting the program, which will cause the association with the program's stdin to go away. But this will cause a SIGHUP to be delivered to the program (meaning the input/output channel experienced a "hangup") - this normally causes a program to be terminated, but this can be avoided by using nohup - e.g. nohup myprogram &.
If you are at all interested in capturing the output of the program, this is probably the best option, as it prevents both of the above signals (as well as a couple of others), and saves the output for you to look at to determine whether there are any issues with the program's execution:
nohup myprogram < /dev/null > ${HOME}/myprogram.log 2>&1 &
Yes it really is stopped and no longer working in the background. To bring it back to life type fg job_number
From what I can gather.
Background jobs are blocked from reading the user's terminal. When one tries to do so it will be suspended until the user brings it to the foreground and provides some input. "reading from the user's terminal" can mean either directly trying to read from the terminal or changing terminal settings.
Normally that is what you want, but sometimes programs read from the terminal and/or change terminal settings not because they need user input to continue but because they want to check if the user is trying to provide input.
http://curiousthing.org/sigttin-sigttou-deep-dive-linux has the gory technical details.
Just enter fg which will resolve the error when you then try to exit.

view output of already running processes in Linux

I have a process that is running in the background (a sh script) and I wonder if it is possible to view the output of this process without having to interrupt it.
The process was started by some application, otherwise I would have attached it to a screen session for later viewing. It might take an hour to finish, and I want to make sure it's running normally with no errors.
There is already a program that uses ptrace(2) on Linux to do this, retty:
http://pasky.or.cz/dev/retty/
It works if your running program is already attached to a tty; I do not know whether it will work if you run your program in the background.
At least it may give some good hints. :)
You can probably retrieve the exit code from the program using ptrace(2); otherwise, just attach to the process using gdb -p <pid>, and it will be printed when the program dies.
You can also manipulate file descriptors using gdb:
(gdb) p close(1)
$1 = 0
(gdb) p creat("/tmp/stdout", 0600)
$2 = 1
http://etbe.coker.com.au/2008/02/27/redirecting-output-from-a-running-process/
You could try to hook into the /proc/[pid]/fd/[012] triple, but likely that won't work.
Next idea that pops to my mind is strace -p [pid], but you'll get "prettified" output. A possible solution is to strace yourself: write a tiny program using ptrace(2) to hook into write(2) and write the data somewhere. It will work, but it is not done in just a few seconds, especially if you're not used to C programming.
Unfortunately I can't think of a program that does precisely what you want, which is why I give you a hint of how to write it yourself (a rough sketch follows). Good luck!
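For what it's worth, here is a very rough x86_64-only sketch of that idea: attach with ptrace(2), stop at every syscall, and dump the buffers passed to write(2) on fd 1 or 2. Signal forwarding and most error handling are omitted, so treat it as a starting point rather than a finished tool:
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ptrace.h>
#include <sys/syscall.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc != 2) { fprintf(stderr, "usage: %s <pid>\n", argv[0]); return 1; }
    pid_t pid = (pid_t)atoi(argv[1]);

    if (ptrace(PTRACE_ATTACH, pid, 0, 0) == -1) { perror("PTRACE_ATTACH"); return 1; }
    waitpid(pid, NULL, 0);

    for (;;) {
        /* run the tracee until the next syscall entry or exit */
        if (ptrace(PTRACE_SYSCALL, pid, 0, 0) == -1) break;
        int status;
        if (waitpid(pid, &status, 0) == -1 || WIFEXITED(status)) break;

        struct user_regs_struct regs;
        if (ptrace(PTRACE_GETREGS, pid, 0, &regs) == -1) break;

        /* On syscall entry the kernel sets rax to -ENOSYS; use that to pick
           out entries to write(2) on stdout/stderr (fd is in rdi). */
        if (regs.orig_rax == SYS_write && (long long)regs.rax == -ENOSYS &&
            (regs.rdi == 1 || regs.rdi == 2)) {
            unsigned long long len = regs.rdx, addr = regs.rsi;
            for (unsigned long long i = 0; i < len; i += sizeof(long)) {
                errno = 0;
                long word = ptrace(PTRACE_PEEKDATA, pid, addr + i, 0);
                if (errno) break;
                size_t n = len - i < sizeof(long) ? (size_t)(len - i) : sizeof(long);
                fwrite(&word, 1, n, stdout);     /* copy the write() buffer out */
            }
            fflush(stdout);
        }
    }
    ptrace(PTRACE_DETACH, pid, 0, 0);
    return 0;
}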

In Linux, I'm looking for a way for one process to signal another, with blocking

I'm looking for a simple event notification system:
Process A blocks until it gets notified by...
Process B, which triggers Process A.
If I was doing this in Win32 I'd likely use event objects ('A' blocks, when 'B' does a SetEvent).
I need something pretty quick and dirty (prefer script rather than C code).
What sort of things would you suggest? I'm wondering about file advisory locks but it seems messy. One of the processes has to actively open the file in order to hold a lock.
Quick and dirty?
Then use a FIFO - a named pipe. Process A reads from the FIFO's file descriptor in blocking mode; process B writes to it when needed.
Simple, indeed.
And here is the bash scripting implementation:
Program A:
#!/bin/bash
mkfifo /tmp/event
while read -n 1 </tmp/event; do
echo "got message";
done
Program B:
#!/bin/bash
echo -n "G" >>/tmp/event
First start script A, then in another shell window repeatedly start script B.
Other than a FIFO, you can use signals and kill to essentially do interrupts: have one process sleep until it receives a signal like SIGUSR1, which then unblocks it (pause(2) or sigsuspend(2) let you wait for the signal without polling).
Slow and clean?
Then use (named) semaphores: either POSIX or SysV (not recommended, but possibly slightly more portable). Process A does a sem_wait (or sem_timedwait) and Process B calls sem_post.
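A rough sketch of the POSIX named-semaphore variant ("/myevent" is an arbitrary name chosen here; link with -pthread, or -lrt on older glibc). Run it with argument "a" for the waiting side and "b" for the triggering side:
#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    sem_t *sem = sem_open("/myevent", O_CREAT, 0600, 0);
    if (sem == SEM_FAILED)
        return 1;

    if (argc > 1 && strcmp(argv[1], "b") == 0) {
        sem_post(sem);                  /* process B: trigger the event */
    } else {
        sem_wait(sem);                  /* process A: block until posted */
        puts("got event");
    }
    sem_close(sem);
    return 0;
}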
