Supervisord - Redirect process stdout to console - linux

I am planning to run multiple processes using supervisor; my supervisord.conf file is below:
[supervisord]
[program:bash]
command=xyz
stdout_logfile=/tmp/bash.log
redirect_stderr=true
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock
[unix_http_server]
file=/tmp/supervisor.sock ; path to your socket file
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
I wish to redirect the stdout of the process named bash to the supervisor console, so that when I start supervisor using the
/usr/bin/supervisord
command, I can see the child process logs. How can I do this? I tried setting syslog as the stdout_logfile attribute, but it did not work.

You can redirect the program's stdout to supervisor's stdout using the following configuration options:
stdout_logfile=/dev/fd/1
stdout_logfile_maxbytes=0
Explanation:
When a process opens /dev/fd/1 (which is the same as /proc/self/fd/1), the system actually clones file descriptor #1 (stdout) of that process. Using this as stdout_logfile therefore causes supervisord to redirect the program's stdout to its own stdout.
stdout_logfile_maxbytes=0 disables log file rotation, which is obviously not meaningful for stdout. Not specifying this option will result in an error, because the default value is 50MB and supervisor is not smart enough to detect that the specified log file is not a regular file.
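Applied to the configuration from the question, the program section would look something like this (xyz is still a placeholder for the real command):
[program:bash]
command=xyz
stdout_logfile=/dev/fd/1
stdout_logfile_maxbytes=0
redirect_stderr=true
Depending on how you start supervisord, you may also need nodaemon=true in the [supervisord] section so that it stays attached to the console instead of daemonizing.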
For more information:
http://veithen.github.io/2015/01/08/supervisord-redirecting-stdout.html

Related

stdout and stderr are not visible in syslog

We have a node.js application running as a daemon on a Linux (Yocto) gateway, but I see no trace from the application in the /var/log/syslog file. What would I have to do to get all console.log (stdout) messages into the syslog file?
I suspect this is not a development question and would be better suited to Super User or another site.
But anyway:
You can pipe the output of the program through a program called logger, which will copy all of its input to the syslog socket.
Or you could use a version of Linux that uses systemd and journald. The systemd log system will copy all stdout and stderr into its journal log.
Or you can use your own log file (not /var/log/syslog) and redirect the daemon's output into that file.
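For the first option, a minimal sketch (assuming the daemon is started from a shell script; the -t tag is an arbitrary name):
node /path/to/app.js 2>&1 | logger -t my-node-app
Everything the application writes to stdout or stderr then ends up in syslog tagged my-node-app.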

OpenMPI: have each process write to stdout

Child processes started by mpirun redirect their output to the mpirun process, so all output ends up on one node.
Instead, I'd like each of the processes spawned by MPI to write to STDOUT on their own nodes, or to a file or named pipe.
I read the faq and tried out some things:
mpirun -host host1,host2 my_script >&1
This just redirects stdout from all hosts to stdout on the invoking node (the default behavior). Doing
mpirun -host host1,host2 my_script
where my_script itself redirects its output to >&1, just captures output from processes on the invoking node.
Is there a way I can get each node to write to their local filesystems (for example) without redirecting to the invoking node's mpirun process?
Thanks.
Open MPI has the --output-file option; it is pretty close to, but not exactly, what you are asking for.
I do not think there is a native way to achieve what you expect.
That being said, it can easily be achieved via a wrapper.
For example, via the command line:
mpirun --host host1,host2 sh -c 'my_script > /tmp/log.$OMPI_COMM_WORLD_RANK'
Each MPI task will redirect its stdout to /tmp/log.<id>.
Another method is to use the fork_agent:
mpirun --host host1,host2 --mca orte_fork_agent /.../wrapper my_script
Basically, instead of exec'ing my_script, Open MPI will exec /.../wrapper my_script, and with a bit of creativity, the wrapper you have to write can do whatever you need.
Within this wrapper, you will likely want to check the following environment variables (see the sketch after this list):
OMPI_COMM_WORLD_SIZE
OMPI_COMM_WORLD_RANK
OMPI_COMM_WORLD_LOCAL_SIZE
OMPI_COMM_WORLD_LOCAL_RANK
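As a rough sketch of such a wrapper (the log path and file naming are just assumptions), something along these lines would send each rank's output to a file on its local node:
#!/bin/sh
# hypothetical fork_agent wrapper: redirect this rank's stdout/stderr
# to a per-rank file on the local filesystem, then exec the real command
exec "$@" > /tmp/log.${OMPI_COMM_WORLD_RANK} 2>&1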

Is it possible to pass input to a running service or daemon?

I want to create a Java console application that runs as a daemon on Linux. I have created the application and the script to run it as a background daemon. The application runs and waits for command-line input.
My question:
Is it possible to pass command line input to a running daemon?
On Linux, all running processes have a special directory under /proc containing information and hooks into the process. Each numeric subdirectory of /proc is the PID of a running process. So if you know the PID of a particular process, you can get information about it. E.g.:
$ sleep 100 & ls /proc/$!
...
cmdline
...
cwd
environ
exe
fd
fdinfo
...
status
...
Of note is the fd directory, which contains all the file descriptors associated with the process. 0, 1, and 2 exist for (almost?) all processes, and 0 is the default stdin. So writing to /proc/$PID/fd/0 will write to that process' stdin.
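For example, assuming the daemon's PID is 1234 (a placeholder), you could try:
echo "some command" > /proc/1234/fd/0
Whether that text actually reaches the process as input depends on what its stdin is attached to, which is why the alternative below tends to be more reliable.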
A more robust alternative is to set up a named pipe connected to your process' stdin; then you can write to that pipe and the process will read it without needing to rely on the /proc file system.
See also Writing to stdin of background process on ServerFault.
The accepted answer above didn't quite work for me, so here's my implementation.
For context, I'm running a Minecraft server as a Linux daemon managed with systemctl. I wanted to be able to send commands to stdin (StandardInput).
First, use mkfifo /home/user/server_input to create a FIFO file somewhere (also known as the 'named pipe' solution mentioned above).
Then, in your daemon's *.service file, execute the bash script that runs your server or background program and set the StandardInput directive to the FIFO file we just created:
[Service]
ExecStart=/usr/local/bin/minecraft.sh
StandardInput=file:/home/user/server_input
In minecraft.sh, the following is the key command: it runs the server and pipes input from the FIFO into the console of the running service.
tail -f /home/user/server_input | java -Xms1024M -Xmx4096M -jar /path/to/server.jar nogui
Finally, run systemctl start your_daemon_service; to pass input commands, simply use:
echo "command" > /home/user/server_input
Credit to the answers given on ServerFault.

Monitoring all running processes using strace in a shell script

I want to monitor all running processes using strace, and when a process ends the strace output should be sent to a file.
How do I find the PID of every running process? I also want to include the process name in the output file.
$ sudo strace -p 1725 -o firefox_trace.txt
$ tail -f firefox_trace.txt
1725 would be the PID of the process you want to monitor (you can find the PID with "ps -C firefox-bin" for Firefox in this example),
and firefox_trace.txt would be the output file.
The way to go would be to find the PID of every running process and use this command to write each one's trace to its own output file.
Considering the doc,
-p pid
Attach to the process with the process ID pid and begin tracing. The trace may be terminated at any time by a keyboard interrupt signal (CTRL-C). strace will respond by detaching itself from the traced process(es), leaving it (them) to continue running. Multiple -p options can be used to attach to up to 32 processes in addition to command (which is optional if at least one -p option is given).
Use -o to store the output in a file, or 2>&1 to redirect standard error to standard output so you can filter it (grep) or redirect it into a file (> file).
To monitor a process by name rather than by PID, you can use the pgrep command, e.g.
strace -p $(pgrep command) -o file.out
where command is the name of your process (e.g. php, Chrome, etc.).
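As a rough sketch of the "monitor every running process" part of the question (you would normally run this as root, the output directory is a placeholder, and in practice you would want to exclude the script's own ps and strace processes):
#!/bin/sh
# attach one strace to every currently running process;
# each output file is named after the process name and PID
for pid in $(ps -eo pid=); do
    name=$(ps -p "$pid" -o comm=)
    strace -p "$pid" -o "/tmp/strace.${name}.${pid}" &
done
wait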
To learn more about parameters, check man strace.

Can we save the execution log when we run a command using PuTTY/Plink

I am using Plink to run a command on a remote machine. In order to fully automate the process, I
need to save the execution log somewhere. I am using a bat file:
C:\Ptty\plink.exe root@<IP> -pw <password> -m C:\Ptty\LaunchFile.txt
The C:\Ptty\LaunchFile.txt file contains the command that I want to run:
./Launch.sh jobName=<job name> restart.mode=false
Is there a way to save the execution log so that I can monitor it later?
Plink is a console application; actually, that's probably its only purpose. As such, its output can be redirected to a file as with any other command-line tool.
The following example redirects both standard output and error output to a file output.log:
plink.exe -m script.txt username@example.com > output.log 2>&1
See also Redirect Windows cmd stdout and stderr to a single file.
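Applied to the batch file from the question, that could look something like this (same placeholders as above; -batch simply stops plink from waiting for interactive input):
C:\Ptty\plink.exe -batch root@<IP> -pw <password> -m C:\Ptty\LaunchFile.txt > C:\Ptty\output.log 2>&1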
This is one of the ways I log everything when I use putty.exe on Windows.
