Is it possible to pass input to a running service or daemon? - linux

I want to create a Java console application that runs as a daemon on Linux. I have created the application and the script to run it as a background daemon. The application runs and waits for command line input.
My question:
Is it possible to pass command line input to a running daemon?

On Linux, all running processes have a special directory under /proc containing information and hooks into the process. Each numeric subdirectory of /proc corresponds to the PID of a running process. So if you know the PID of a particular process you can get information about it. E.g.:
$ sleep 100 & ls /proc/$!
...
cmdline
...
cwd
environ
exe
fd
fdinfo
...
status
...
Of note is the fd directory, which contains all the file descriptors associated with the process. 0, 1, and 2 exist for (almost?) all processes, and 0 is the default stdin. So writing to /proc/$PID/fd/0 will write to that process' stdin.
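For example, assuming the daemon's PID is in $PID (with the caveat that if fd 0 is a terminal, writing there will appear on the screen rather than being read as input by the process):
echo "some command" > /proc/$PID/fd/0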
A more robust alternative is to set up a named pipe connected to your process' stdin; then you can write to that pipe and the process will read it without needing to rely on the /proc file system.
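A minimal sketch of that approach (paths are illustrative; the java invocation stands in for your own application):
mkfifo /tmp/myapp.in                            # create the named pipe
java -jar /path/to/myapp.jar < /tmp/myapp.in &  # start the daemon reading from it
exec 3> /tmp/myapp.in                           # hold a writer open so the app never sees EOF
echo "some command" >&3                         # send a line of input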
See also Writing to stdin of background process on ServerFault.

The accepted answer above didn't quite work for me, so here's my implementation.
For context I'm running a Minecraft server on a Linux daemon managed with systemctl. I wanted to be able to send commands to stdin (StandardInput).
First, use mkfifo /home/user/server_input to create a FIFO file somewhere (this is the 'named pipe' solution mentioned above).
Then, in your daemon's *.service file, execute the bash script that runs your server or background program, and set the StandardInput directive to the FIFO file we just created:
[Service]
ExecStart=/usr/local/bin/minecraft.sh
StandardInput=file:/home/user/server_input
In minecraft.sh, the following is the key command: it runs the server and pipes input into the console of the running service (tail -f keeps the FIFO open for reading, so writers can come and go without the server ever seeing EOF):
tail -f /home/user/server_input | java -Xms1024M -Xmx4096M -jar /path/to/server.jar nogui
Finally, run systemctl start your_daemon_service. To pass input commands, simply use:
echo "command" > /home/user/server_input
Credit to the answers given on ServerFault.

Related

rclone mount volume automatically via bashrc on startup

I am using rclone to mount a folder from my cloud storage on my local computers. However, on one machine I only connect via terminal, and I want to mount the volume on startup.
So I set up a small shell script with the following contents:
rclone mount remoterep:/examplefolder ~/Documents/examplefolder
and I call it in .bashrc with exec ~/mount_examplefolder.
When I SSH into said computer, it appears to work; I get no errors, but the shell refuses to take any further commands because the mount command is executing.
If I add another SSH login, I get an error prompt, because it can't overwrite the mount folder from the other session.
So how do I fix this so that rclone executes in the background and gives me my shell back?
Or am I restricted to mounting it manually and then using another SSH session to perform the desired actions?
There are a couple of things here causing problems.
First, when you use exec to spawn a process in the shell, you're asking to replace the existing shell process with the program you've mentioned. When you do that in an SSH session, you replace the shell process that the SSH daemon started (and that you were intending to log in with). SSH will then wait for that process to exit (which it won't until the volume is unmounted), which is why you see the hang. You'll want to skip the exec in your shell configuration, which will spawn the process without replacing your shell.
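That is, instead of:
exec ~/mount_examplefolder
invoke the script without exec (or backgrounded, as described below):
~/mount_examplefolder &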
Second, the reason you see the error is that the mount process is designed to be run once, as you've noticed. If you want to skip mounting the folder if it's already mounted, you can use something like the following as your shell script:
#!/bin/sh
# mount only if the target path isn't already listed in /proc/mounts
if ! grep -q " $HOME/Documents/examplefolder " /proc/mounts
then
    rclone mount remoterep:/examplefolder ~/Documents/examplefolder
fi
Note the spaces inside the quotes that ensure that you haven't matched something else by accident. This will ensure that your script doesn't try to mount multiple times.
Third, you'll probably want to run this command in the background, detached from the shell, so that the shell's exit doesn't cause it to receive SIGHUP and exit (or restart, depending on how it's configured). You can do this by writing the invocation in your shell configuration as nohup ~/mount_examplefolder >/dev/null 2>&1 &. nohup prevents the program from receiving SIGHUP, and redirecting its output prevents it from printing messages or creating nohup.out files all over the place.
Finally, you may (or may not) want to run this only when you're using an interactive shell; that is, when you're logging in to start a shell for interactive use rather than for scripting use. If so, you can make the nohup invocation conditional on PS1 being set, like so:
if [ -n "$PS1" ]
then
nohup ~/mount_examplefolder >/dev/null 2>&1 &
fi

OpenMPI: have each process write to stdout

Child processes started by mpirun redirect their output to the mpirun process, so all output ends up on one node.
Instead, I'd like each of the processes spawned by MPI to write to STDOUT on their own nodes, or to a file or named pipe.
I read the faq and tried out some things:
mpirun -host host1,host2 my_script >&1
This just redirects stdout from all hosts to stdout on the invoking node (the default behavior). Doing
mpirun -host host1,host2 my_script
Where my_script redirects output to >&1 just captures output from processes on the invoking node.
Is there a way I can get each node to write to their local filesystems (for example) without redirecting to the invoking node's mpirun process?
Thanks.
Open MPI has the --output-file option; it is pretty close to, but not exactly, what you are asking for.
I do not think there is a native way to achieve what you expect.
That being said, it can easily be achieved via a wrapper.
For example, via the command line
mpirun --host host1,host2 sh -c 'my_script > /tmp/log.$OMPI_COMM_WORLD_RANK'
Each MPI task will redirect its stdout to /tmp/log.<id>.
Another method is to use the fork_agent:
mpirun --host host1,host2 --mca orte_fork_agent /.../wrapper my_script
Basically, instead of exec'ing my_script, Open MPI will exec /.../wrapper my_script, and with a bit of creativity, the wrapper you write can do whatever you need.
Within this wrapper, you will likely want to check the following environment variables
OMPI_COMM_WORLD_SIZE
OMPI_COMM_WORLD_RANK
OMPI_COMM_WORLD_LOCAL_SIZE
OMPI_COMM_WORLD_LOCAL_RANK
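For illustration, a minimal wrapper along these lines might look like this (the log path is arbitrary):
#!/bin/sh
# run the real command with this rank's output going to a per-rank file
# on the local node
exec "$@" > /tmp/log.${OMPI_COMM_WORLD_RANK} 2>&1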

mount fails from spawned process

I want to start a process that uses a USB hard drive once it gets inserted.
Since the udev documentation specifically warns against running long-lived processes from a RUN command, I send a FIFO message to my service, which then starts the relevant process.
So the flow goes like this:
UDEV > runs action process > sends FIFO message to service > service gets message > runs the process that works with the HDD (a.k.a. HDD-PROCESS).
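As a sketch of this pattern (the FIFO path /run/hdd_events is illustrative):
#!/bin/sh
# invoked from the udev RUN key: notify the long-running service, then exit
echo "add $1" > /run/hdd_events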
If I run my service from shell-1 and run the action process (the one that udev runs) from shell-2, everything works (including when triggering it via udev).
But in deployment the service is spawned from init, and when it is, the mount command fails saying "No such device".
I then detached "HDD-PROCESS" with fork and setsid, but that didn't help either.
from inittab:
::respawn:/opt/spwn_frm_init
ps relevant output:
PID PPID PGID SID COMM ARGS
31112 1 31112 31112 spwn_frm_init /bin/sh /opt/spwn_frm_init
31113 31112 31112 31112 runSvc /bin/sh /app/sys/runSvc
31114 31113 31112 31112 python python /app/sys/mainSvc.py
24064 1 24064 24064 python /usr/bin/python /app/sys/hdd_proc.py sdb1
Everything runs as root (ps shows that too; I omitted it to save screen space).
So in short: when I run /opt/spwn_frm_init from a shell, everything works. When I kill it and let it respawn from inittab, it doesn't, and mount fails with the error above.
UPDATE:
There is no problem when mounting an ext3 drive; the failure occurs only with the NTFS one (using ntfs-3g).
Found it!
One of the differences between a process spawned from init and one run from a shell is the environment variables, which usually shouldn't matter when all I want to do is call mount.
But when I noticed the problem happens only with the NTFS drive, it suddenly occurred to me that mount might need to call ntfs-3g, so it was worth checking whether the latter is accessible via the PATH variable.
which ntfs-3g led to /usr/local/bin/ntfs-3g, which was on the default shell's PATH but not on the PATH of the process spawned from init.
To solve it, I added /usr/local/bin to PATH in the "HDD-PROCESS", and mount began to work :)
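In shell terms, the fix amounts to something like this (device and mount point are illustrative):
export PATH="/usr/local/bin:$PATH"   # let mount find the ntfs-3g helper
mount -t ntfs-3g /dev/sdb1 /mnt/usb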
A better error message in mount could have saved a lot of time here...

Script not starting on boot with start-stop-daemon

My script (located in /etc/init.d) is creating a pid file ($PIDFILE), but there is no process running. My daemon script includes:
start-stop-daemon --start --quiet --pidfile $PIDFILE -m -b --startas $DAEMON --test > /dev/null || return 1
The script works fine when executed manually.
You need to create startup links.
sudo update-rc.d SCRIPT_NAME defaults
then reboot. SCRIPT_NAME is the name of the script in /etc/init.d (without the path).
I was able to get it working, but I tried so many things that I don't know exactly what fixed it (probably an error in the script or config). However, I learned a lot and wanted to share, since I can't find much of the same in the internet abyss.
It seems Ubuntu (and many other distros based on Ubuntu, including Mint) has migrated to Upstart for job and service management. Upstart includes SysVinit compatibility (using /etc/init.d daemons), so you can still use update-rc.d to manage daemons (if you are familiar with that usage, you can keep using it). The Upstart method is to use a single .conf file in the /etc/init folder. My SCRIPT.conf file is very simple (I'm using a Python script):
start on filesystem or runlevel [2345]
stop on runlevel [016]
exec python /usr/share/python-support/SCRIPT/SCRIPT.py
This simple file completely replaces the standard /etc/init.d script, whose case statement provides the [start|stop|restart|reload] functions and the pointer to /usr/bin/SCRIPT. You can see that it includes the runlevel control that would normally be found in the /etc/rc*.d files (thus eliminating several files).
I tried update-rc.d to create the necessary /etc/rc*.d/ files for my daemon. My daemon bash script is located in /etc/init.d and includes the start-stop-daemon command as in my original question. (That command also works fine from terminal.)
During boot I had the /etc/rc*.d/ files, the bash script in /etc/init.d, and the /etc/init/SCRIPT.conf file, and it seems Upstart looks to the .conf file first for its direction: the SysVinit command service SCRIPT [start|stop|restart|reload] returns Unknown Instance, yet you can confirm the process is running with ps -elf | grep SCRIPT_FILE.
One interesting thing to note is the forking of your daemon when using a .conf file. The script as written above only spawns one instance of the daemon. However, total independence from the original script is possible by using expect fork or expect daemon together with respawn (see the Upstart Cookbook for reference); a variant is sketched below. Using these will ensure that your daemon is restarted if it dies (even if it is killed with the kill command).
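For example, a respawning variant of the .conf above might look like this (a sketch; whether expect fork or expect daemon applies depends on how your program daemonizes):
start on filesystem or runlevel [2345]
stop on runlevel [016]
respawn
# expect fork    (uncomment only if your program forks once into the background)
exec python /usr/share/python-support/SCRIPT/SCRIPT.py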
I continued to test both my daemon and the boot process using the sudo initctl reload-configuration command. This reloads the .conf files, after which you can exercise your daemon with the sudo [start|stop|restart] SCRIPT commands. The result of the start command is:
$ sudo start SCRIPT
SCRIPT start/running, process xxxx
$ sudo restart SCRIPT
SCRIPT start/running, process xxxx
$ sudo stop SCRIPT
SCRIPT stop/waiting
Also, there is a nice log in /var/log/upstart/SCRIPT.log that gives you useful information about your daemon during boot. Mine still has a very annoying bug that prevents root from displaying OSD messages with notify-send from my daemon. My log file includes a GTK warning (I will open another question to solicit help).
Hope this helps others in developing their daemons.

linux: suspend process at startup

I would like to spawn a process suspended, possibly in the context of another user (e.g. via sudo -u ...), set up some iptables rules for the spawned process, let the process continue running, and remove the iptables rules when the process exits.
Is there any standard means (bash, coreutils, etc.) that allows me to achieve the above? In particular, how can I spawn a process in a suspended state and get its PID?
Write a wrapper script start-stopped.sh like this:
#!/bin/sh
kill -STOP $$ # suspend myself
# ... until I receive SIGCONT
exec "$@" # exec the argument list
And then call it like:
sudo -u $SOME_USER start-stopped.sh mycommand & # start mycommand in stopped state
MYCOMMAND_PID=$!
setup_iptables $MYCOMMAND_PID # use its PID to setup iptables
sudo -u $SOME_USER kill -CONT $MYCOMMAND_PID # make mycommand continue
wait $MYCOMMAND_PID # wait for its termination
MYCOMMAND_EXIT_STATUS=$?
teardown_iptables # remove iptables rules
report $MYCOMMAND_EXIT_STATUS # report errors, if necessary
All this is overkill, however. You don't need to spawn your process in a suspended state to get the job done. Just make a wrapper script setup_iptables_and_start:
#!/bin/sh
setup_iptables $$ # use my own PID to setup iptables
exec sudo -u $SOME_USER "$@" # exec'ed command will have the same PID
And then call it like
setup_iptables_and_start mycommand || report errors
teardown_iptables
You can write a C wrapper for your program that will do something like this:
Fork and print the child PID.
In the child, wait for the user to press Enter. This keeps the child sleeping, and you can add the rules using its PID.
Once the rules are added, the user presses Enter. The child then runs your original program, using either exec or system.
Will this work?
Edit:
Actually, you can do the above procedure with a shell script. Try the following bash script:
#!/bin/bash
echo "Pid is $$"
echo -n "Press Enter.."
read
exec "$@"
You can run this as /bin/bash ./run.sh <your command>
One way to do it is to enlist gdb to pause the program at the start of its main function (using the command "break main"). This will guarantee that the process is suspended fast enough (although some initialisation routines can run before main, they probably won't do anything relevant). However, for this you will need debugging information for the program you want to start suspended.
I suggest you try this manually first, see how it works, and then work out how to script what you've done.
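Manually, a session might look something like this (the program name is illustrative):
gdb -ex 'break main' -ex run ./yourprogram
# once gdb stops at main, find the PID with pgrep (or gdb's 'info inferiors'),
# add the iptables rules from another shell, then type 'continue' in gdb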
Alternatively, it may be possible to constrain the process (if indeed that is what you're trying to do!) without using iptables, using SELinux or a ptrace-based tool like sydbox instead.
I suppose you could write a small utility yourself that forks, and in which the child of the fork suspends itself just before doing an exec. Otherwise, consider using an LD_PRELOAD library to do your 'custom' business.
If you care about making this secure, you should probably look at bigger guns (chroot, perhaps paravirtualization, User Mode Linux, etc.);
Last tip: if you don't mind doing some more coding, the ptrace interface should allow you to do what you describe (it is what debuggers are built on).
You probably need the PID of a program you're starting before that program actually starts running. You could do it like this:
Start a plain script.
Force the script to wait; you can probably use suspend, which is a bash builtin, but in the worst case you can make it stop itself with a signal.
Use the PID of the bash process in every way you want.
Restart the stopped bash process (SIGCONT) and do an exec - another builtin - to start your real process (it will inherit the PID).
