Linux cpu-to-cpu inter-process communications

I have written a Linux C program, which runs on an embedded processor, and which behaves like a shell -- either interactively, giving a prompt, parsing user commands, and executing them in an indefinite loop, or non-interactively, reading and parsing a command from its invocation command line. I want to run the same program concurrently on another embedded processor, which is reachable over Ethernet (e.g. via ssh), and use it for some commands because the second processor has access to some hardware that the first processor does not. I usually need to capture and process the output of that remote command. Currently, I invoke the program on the second processor for each command -- e.g.
system ("ssh other-cpu my_program "do this command > /vtmp/out_capt");
system ("scp other-cpu:/vtmp/out_capt .")
This works, but it is slow. Is there a better way, perhaps using pipes? If someone could point me to their preferred way of doing this kind of IPC, I would appreciate it.

You could get rid of scp and just save the output from ssh on the local machine. Something like this:
ssh other-cpu '( my_program command )' > file.log
Or if you want to run multiple commands:
ssh other-cpu > file.log << EOF
my_program command
my_program other_command
EOF

There are a few ways to do this, with varying speed and complexity. [Naturally :-)], the fastest requires the most setup.
(1) You can replace your two-command sequence with an output pipe.
You create a single pipe via pipe(2). You do a fork(2). The child attaches the write end of the pipe to its stdout. The child then execs ssh (e.g. execlp("ssh", "ssh", "other-cpu", "my_program blah", NULL)). The parent reads the results from the read end of the pipe. No need for a temp file.
This is similar to what you're currently doing, in that you do an ssh for each remote command you want to execute, but it eliminates the temp file and the scp.
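Here is a minimal sketch of option (1), assuming the remote host is other-cpu and the remote command is my_program do_this (both taken from the question; adjust to taste):

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int pfd[2];
    if (pipe(pfd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == -1) { perror("fork"); return 1; }

    if (pid == 0) {                          /* child */
        dup2(pfd[1], STDOUT_FILENO);         /* write end of the pipe becomes stdout */
        close(pfd[0]);
        close(pfd[1]);
        execlp("ssh", "ssh", "other-cpu", "my_program do_this", (char *)NULL);
        perror("execlp");                    /* only reached if exec fails */
        _exit(127);
    }

    close(pfd[1]);                           /* parent keeps only the read end */
    char buf[4096];
    ssize_t n;
    while ((n = read(pfd[0], buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);   /* capture/process the remote output here */
    close(pfd[0]);
    waitpid(pid, NULL, 0);
    return 0;
}

If you don't need control over the fork, popen("ssh other-cpu 'my_program do_this'", "r") gives the same effect in one call.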
(2) You can modify my_program to accept various commands from stdin. I believe you already do this in the program's "shell" mode.
You create two pipes via pipe(2). Again, fork a child. Attach the write end of the "from_remote" pipe to the child's stdout as before. But now also attach the read end of the "to_remote" pipe to the child's stdin.
In the parent, send a command line down the write end of the "to_remote" pipe. The remote reads this line (via the read end of the "to_remote" pipe), parses it the way a shell would, and fork/execs the resulting command.
After the child program on the remote has terminated, my_program can output a separator line.
The parent reads the data as before until it sees this separator line.
Now, any time the local side wants to do something on the remote, the pipes are already set up. It can just write subsequent commands to the write end of the "to_remote" pipe and repeat the process.
Thus, no teardown and recreation is required. Only one ssh connection needs to be set up. This is similar to setting up a server with a socket, but we're using ssh and pipes.
If the local side wishes to close the connection, it can close the pipe on its end (or send a special command, e.g. "!!!stop").
If your remote target commands are text based, the separator is relatively easy: some string that none of your programs would ever output (e.g. _jsdfl2_werou_tW__987_).
If you've got raw binary data, my_program may have to filter/encapsulate the data in some way (similar to what the PPP protocol does with its flag character).
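A minimal sketch of option (2), assuming my_program's shell mode reads one command per line on stdin and, per the scheme above, prints the separator line _jsdfl2_werou_tW__987_ after each command's output; the host name and the example commands are placeholders:

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int to_remote[2], from_remote[2];
    if (pipe(to_remote) == -1 || pipe(from_remote) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == -1) { perror("fork"); return 1; }

    if (pid == 0) {                              /* child: the single, long-lived ssh */
        dup2(to_remote[0], STDIN_FILENO);        /* read end of "to_remote" is its stdin */
        dup2(from_remote[1], STDOUT_FILENO);     /* write end of "from_remote" is its stdout */
        close(to_remote[0]);   close(to_remote[1]);
        close(from_remote[0]); close(from_remote[1]);
        execlp("ssh", "ssh", "other-cpu", "my_program", (char *)NULL);
        perror("execlp");
        _exit(127);
    }

    close(to_remote[0]);                         /* parent keeps the other two ends */
    close(from_remote[1]);
    FILE *to   = fdopen(to_remote[1], "w");
    FILE *from = fdopen(from_remote[0], "r");

    const char *commands[] = { "do this command", "do that command" };
    char line[4096];
    for (int i = 0; i < 2; i++) {
        fprintf(to, "%s\n", commands[i]);        /* send one command line */
        fflush(to);
        while (fgets(line, sizeof line, from)) { /* read until the separator line */
            if (strcmp(line, "_jsdfl2_werou_tW__987_\n") == 0)
                break;
            fputs(line, stdout);                 /* process this command's output */
        }
    }

    fclose(to);                                  /* EOF tells the remote we're done */
    fclose(from);
    waitpid(pid, NULL, 0);
    return 0;
}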
(3) You can create a version of my_program (e.g. my_program -server) that acts like a server that listens on a socket [in shell mode].
The "protocol" is similar to case (2) above, but may be a bit easier to set up because a network socket is inherently bidirectional (vs. the need for two pipe(2) calls above).
One advantage here is that you're communicating directly over a TCP socket, bypassing the overhead of the encryption layer.
You can either start the remote server at boot time or you can use a [one-time] ssh invocation to kick it off into the background.
There is one additional advantage. Instead of the "separator line" above, the local side could make a separate socket connection to the server for each command. This is still slower than keeping one connection open, but faster than creating the ssh on each invocation.
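A minimal sketch of the server side of option (3), assuming a hypothetical my_program -server mode; the port number (5000) is an arbitrary choice, and one command is accepted per connection as described above:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    if (lfd == -1) { perror("socket"); return 1; }
    int one = 1;
    setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);                  /* arbitrary example port */
    if (bind(lfd, (struct sockaddr *)&addr, sizeof addr) == -1) { perror("bind"); return 1; }
    listen(lfd, 5);

    for (;;) {
        int cfd = accept(lfd, NULL, NULL);        /* one connection per command */
        if (cfd == -1)
            continue;
        char cmd[1024];
        ssize_t n = read(cfd, cmd, sizeof cmd - 1);
        if (n > 0) {
            cmd[n] = '\0';
            /* parse cmd and run it the way shell mode does; here we just acknowledge it */
            dprintf(cfd, "ran: %s", cmd);
        }
        close(cfd);    /* closing the socket marks the end of this command's output */
    }
}

The local side then connects, writes the command, and reads until EOF, much as it would read from the pipe in option (2).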

Related

How to create a virtual command-backed file in Linux?

What is the most straightforward way to create a "virtual" file in Linux that would allow read operations on it, always returning the output of some particular command (run every time the file is read)? So every read operation would cause an execution of the command, capturing its output and passing it on as the "content" of the file.
There is no way to create such a "virtual file" directly. On the other hand, you can achieve this behaviour by implementing a simple synthetic filesystem in userspace via FUSE. Moreover, you don't have to use C; there are bindings even for scripting languages such as Python.
Edit: And chances are that something like this already exists: see for example scriptfs.
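As an illustration, here is a minimal sketch of such a synthetic filesystem using the libfuse 2 high-level C API. It exposes a single read-only file, /output, whose contents are regenerated on every read by running a command (date is used as a placeholder); the file name, the 4096-byte size limit, and the command are all arbitrary choices for the example:

#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

static const char *vf_path = "/output";

static int vf_getattr(const char *path, struct stat *st)
{
    memset(st, 0, sizeof *st);
    if (strcmp(path, "/") == 0) { st->st_mode = S_IFDIR | 0755; st->st_nlink = 2; return 0; }
    if (strcmp(path, vf_path) == 0) { st->st_mode = S_IFREG | 0444; st->st_nlink = 1; st->st_size = 4096; return 0; }
    return -ENOENT;
}

static int vf_open(const char *path, struct fuse_file_info *fi)
{
    if (strcmp(path, vf_path) != 0) return -ENOENT;
    if ((fi->flags & O_ACCMODE) != O_RDONLY) return -EACCES;
    return 0;
}

static int vf_read(const char *path, char *buf, size_t size, off_t offset,
                   struct fuse_file_info *fi)
{
    (void)path; (void)fi;
    FILE *p = popen("date", "r");          /* run the command on every read */
    if (!p) return -EIO;
    char tmp[4096];
    size_t n = fread(tmp, 1, sizeof tmp, p);
    pclose(p);
    if ((size_t)offset >= n) return 0;     /* past the end of this run's output */
    if (offset + size > n) size = n - offset;
    memcpy(buf, tmp + offset, size);
    return (int)size;
}

static struct fuse_operations vf_ops = {
    .getattr = vf_getattr,
    .open    = vf_open,
    .read    = vf_read,
};

int main(int argc, char *argv[])
{
    /* typically: gcc vfile.c $(pkg-config fuse --cflags --libs) -o vfile
       then: ./vfile /some/mountpoint && cat /some/mountpoint/output */
    return fuse_main(argc, argv, &vf_ops, NULL);
}

Because getattr reports a fixed size, readers see at most 4096 bytes per run; a real implementation would compute the true size or enable direct_io. A readdir handler is omitted for brevity, so the file is readable only by its exact path.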
This is a great answer I copied below.
Basically, named pipes let you do this in scripting, and FUSE lets you do it easily in Python.
You may be looking for a named pipe.
mkfifo f
{
echo 'V cebqhpr bhgchg.'
sleep 2
echo 'Urer vf zber bhgchg.'
} >f
rot13 < f
Writing to the pipe doesn't start the listening program. If you want to process input in a loop, you need to keep a listening program running.
while true; do rot13 <f >decoded-output-$(date +%s.%N); done
Note that all data written to the pipe is merged, even if there are multiple processes writing. If multiple processes are reading, only one gets the data. So a pipe may not be suitable for concurrent situations.
A named socket can handle concurrent connections, but this is beyond the capabilities of basic shell scripts.
At the most complex end of the scale are custom filesystems, which let you design and mount a filesystem where each open, write, etc., triggers a function in a program. The minimum investment is tens of lines of nontrivial coding, for example in Python. If you only want to execute commands when reading files, you can use scriptfs or fuseflt.
No one mentioned this, but if you can choose the path to the file you can use the standard input, /dev/stdin.
Every time the cat program runs, it ends up reading the output of the program writing to the pipe, which is simply echo my input here:
for i in 1 2 3; do
echo my input | cat /dev/stdin
done
outputs:
my input
my input
my input
I'm afraid this is not easily possible. When a process reads from a file, it uses system calls like open, fstat, read. You would need to intercept these calls and output something different from what they would return. This would require writing some sort of kernel module, and even then it may turn out to be impossible.
However, if you simply need to trigger something whenever a certain file is accessed, you could play with inotifywait:
#!/bin/bash
while inotifywait -qq -e access /path/to/file; do
echo "$(date +%s)" >> /tmp/access.txt
done
Run this as a background process, and you will get an entry in /tmp/access.txt each time your file is being read.

How does Linux Expect script work?

I once tried to supply the password by I/O redirection, like echo <password> | ssh <user>@<host>, but of course it didn't work. Then I learned that ssh actually reads the password directly from /dev/tty instead of stdin, so I/O redirection doesn't work for it.
As far as I know, an Expect script is the standard way to do this kind of job. I'm curious how Expect works. I guess it runs the target program in a child process and changes the /dev/tty of the child process to refer to another place, but I don't know how.
It uses something called a pseudo-TTY, which looks to the called program like a TTY but allows for programmed control. See e.g. Don Libes' Exploring Expect, p. 498ff.
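Roughly, what Expect does under the hood looks like the following C sketch: allocate a pseudo-terminal with posix_openpt(3), run the target program (ssh user@host is a placeholder here) on the slave side so that it believes it is talking to a real terminal, and read/write the master side programmatically:

#define _XOPEN_SOURCE 600            /* for posix_openpt(), grantpt(), ptsname() */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    int master = posix_openpt(O_RDWR | O_NOCTTY);
    if (master == -1 || grantpt(master) == -1 || unlockpt(master) == -1) {
        perror("pty setup");
        return 1;
    }
    const char *slave_name = ptsname(master);

    pid_t pid = fork();
    if (pid == 0) {                              /* child: attach to the slave side */
        setsid();                                /* new session, so the pty becomes  */
        int slave = open(slave_name, O_RDWR);    /* its controlling terminal         */
        dup2(slave, STDIN_FILENO);
        dup2(slave, STDOUT_FILENO);
        dup2(slave, STDERR_FILENO);
        close(slave);
        close(master);
        execlp("ssh", "ssh", "user@host", (char *)NULL);
        _exit(127);
    }

    /* parent: this is where Expect's expect/send logic lives */
    char buf[4096];
    ssize_t n = read(master, buf, sizeof buf);   /* e.g. wait for the "password:" prompt */
    if (n > 0)
        fwrite(buf, 1, (size_t)n, stdout);
    /* write(master, "secret\n", 7);                what Expect's send command would do */
    close(master);
    return 0;
}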

how to log just the output of commands with expect

I'm using expect to execute a bunch of commands on a remote machine. Then I'm calling the expect script from a shell script.
I don't want the expect script to log the sent commands to stdout, but I do want it to log the output of the commands, so that my shell script can do other things depending on those results.
log_user 0
hides both the commands and the results, so it doesn't fit my needs. How can I tell expect to log just the results?
Hmm... I'm not sure you can do that, since the reason you see the commands you send is that the remote device echoes them back to you. This is standard procedure, done so that a user sees what he or she types when interacting with the device.
What I'm trying to say is that both the device's output to the issued commands and the echoed-back commands are part of the spawned process's stdout, so I don't believe you can separate one from the other.
Now that I think of it, you can configure a terminal not to display echoed commands... but I'm not sure how you would go about doing that with a spawned process that is not using an interactive terminal.
Let us know if you find a way; I'd be interested to know if there is one.

Controlling multiple background process from a shell on an embedded Linux

Currently I am working with an embedded system that runs Linux. I need to run multiple applications at the same time, and I would like them to be able to run through one script. A colleague has already implemented this using a wrapper script and return codes.
wrapperScript.sh $command > output_log.txt &
wrapperScript.sh $command2 > output_log2.txt &
But the problem arises when exiting the applications. Normally, all the applications on the embedded system require the user to press q to exit. But the wrapper script, rather than doing that when it gets a kill or user signal, just kills the process. This is dangerous because it assumes the application has the proper facilities to deal with the kill signal (which is not always the case, and leads to memory leaks and unwanted socket connections). I have looked into automation programs such as expect, but since I am using an embedded board, I am unable to get expect for it. Is there a way, in the bash shell or in embedded C, to deal with multiple processes and have one single program automatically send the q keystroke to them?
I would also like to be able to keep logs of the programs' output.
EDIT:
Solution:
Okay, I found the answer to the problem: Expect is the way to go about it in this situation. There is a limitation in that it might be slower, but the trade-off is not bad here. I decided to use the Expect scripting language to implement the solution. There are certain trade-offs.
Pros:
* Precise control over the embedded application
* Can make the process interactive to the user
* Can deal with multiple processes
Cons:
* Performance is slow
Use a pipe
Make the command read input from a named pipe. You'll then be able to send it commands from anywhere.
mkfifo command1.ctrl
{ "$command1" <command1.ctrl >command1.log 2>&1;
rm command1.ctrl; } &
Use screen
Run your applications inside the Screen program. You can run all your commands in separate windows in a single instance of screen (you'll save a little memory that way). You can specify the commands to run from a Screen configuration file:
sessionname mycommands
screen -t command1 command1
screen -t command2 command2
To terminate a program, use
screen -S mycommands -p 1 -X stuff 'q
'
where 1 is the number of the window to send the input to (each screen clause in the configuration file starts a window). The text after stuff is the input to send to the program; note the presence of a newline after the q (some applications may require a carriage return instead; you can get one with stuff "q$(printf \\015)" if your shell isn't too feature-starved). If your command expects a q with no newline at all, just use stuff q.
For logging, you can use Screen's logging feature, or redirect the output to a file as before.

Linux i/o to running daemon / process

Is it possible to i/o to a running process?
I have multiple game servers running like this:
cd /path/to/game/server/binary
./binary arg1 arg2 ... argn &
Is it possible to write a message to a server if I know the process id?
Something like this would be handy:
echo "quit" > process1234
Where process1234 is the process (with pid 1234).
The game server is not a binary written by me; it is a Call of Duty binary, so I can't change anything in the code.
Yes, you can start up the process with a pipe as its stdin and then write to the pipe. You can use a named or an anonymous pipe.
Normally a parent process would be needed to do this, which would create an anonymous pipe and supply that to the child process as its stdin -- popen() does this, and many libraries also implement it (see Perl's IPC::Open2 for example).
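For instance, a minimal sketch of the anonymous-pipe approach via popen(3), assuming the server reads commands such as quit from its stdin (the binary name and arguments are the ones from the question):

#include <stdio.h>

int main(void)
{
    /* the child's stdin is the write side of an anonymous pipe */
    FILE *srv = popen("./binary arg1 arg2", "w");
    if (!srv)
        return 1;
    /* ... later, when you want the server to exit ... */
    fprintf(srv, "quit\n");
    fflush(srv);
    return pclose(srv);    /* closes the pipe (EOF on the child's stdin) and waits */
}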
Another way would be to run it under a pseudo tty, which is what "screen" does. Screen itself may also have a mechanism for doing this.
Only if the process is listening for some message somewhere. For instance, your game server can be waiting for input on a file, over a network connection, or from standard input.
If your process is not actively listening for something, the only things you can really do is halt or kill it.
Now if your process is waiting on standard input, and you ran it like so:
$ myprocess &
Then (in linux) you should be able to try the following:
$ jobs
[1]+ Running myprocess &
$ fg 1
And at this point you are typing standard input into your process.
You can only do that if the process is explicitly designed for that.
But since your example is requesting the process to quit, I'd recommend trying signals. First try sending the TERM (i.e. terminate) signal, which is the default:
kill <pid>
If that doesn't work, you can try other signals such as QUIT:
kill -QUIT <pid>
If all else fails, you can use the KILL signal. This is guaranteed (*) to stop the process, but the process will have no chance to clean up:
kill -KILL <pid>
* - in the past, kill -KILL would not work if the process was hung on a flaky network file server. I don't know if they ever fixed this.
I'm pretty sure this would work, since the server has a console on stdin:
echo "quit" > /proc/<server pid>/fd/0
You mention in a comment below that your process does not appear to read from the console on fd 0. But it must on some fd. Run ls -l /proc/<server pid>/fd/ and look for one that's pointing at /dev/pts/ if the process is running in a gnome-terminal or xterm or something.
If you want to do a few simple operations on your server, use signals as mentioned elsewhere. Set up signal handlers in the server and have each signal perform a different action, e.g. (a minimal sketch follows the list):
SIGINT: Reread config file
SIGHUP: quit
...
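For example, a minimal C sketch of such handlers, assuming you can modify the server and using the mapping above (SIGINT re-reads the config, SIGHUP quits):

#include <signal.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t reread_config = 0;
static volatile sig_atomic_t quit = 0;

static void on_sigint(int sig) { (void)sig; reread_config = 1; }
static void on_sighup(int sig) { (void)sig; quit = 1; }

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigint;
    sigaction(SIGINT, &sa, NULL);       /* SIGINT: re-read the config file */
    sa.sa_handler = on_sighup;
    sigaction(SIGHUP, &sa, NULL);       /* SIGHUP: quit cleanly */

    while (!quit) {
        if (reread_config) {
            reread_config = 0;
            /* re-read the configuration file here */
        }
        pause();                        /* sleep until the next signal arrives */
    }
    /* close sockets, flush logs, save state, then exit cleanly */
    return 0;
}

Then kill -INT <pid> re-reads the config and kill -HUP <pid> shuts the server down cleanly.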
Highly hackish, so don't do this if you have a saner alternative, but you can redirect a process's file descriptors on the fly if you have ptrace permissions.
$ echo quit > /tmp/quitfile
$ gdb binary 1234
(gdb) call dup2(open("/tmp/quitfile", 0), 0)
(gdb) continue
open("/tmp/quitfile", O_RDONLY) returns a file descriptor to /tmp/quitfile. dup2(..., STDIN_FILENO) replaces the existing standard input by the new file descriptor.
We inject this code into the application using gdb (but with numeric constants, as #define constants may not be available), and ta-da.
Simply run it under screen and don't background it. Then you can either connect to it with screen interactively and tell it to quit, or (with a bit of expect hackery) write a script that will connect to screen, send the quit message, and disconnect.
