I want to execute another process and normally want to wait until it has finished. Let's say we spawn and wait for the process in thread T1:
let mut child = Command::new("rustc").spawn().unwrap();
child.wait();
Now, if a special event occurs (which thread T0 is waiting for) I want to kill the spawned process:
if let Ok(event) = special_event_notifier.recv() {
child.kill();
}
But I don't see a way to do it: both kill and wait take a mutable reference to Child and are therefore mutually exclusive. After calling wait no one can have any reference to child anymore.
I've found the wait-timeout crate, but I want to know if there's another way.
If the child process does not close stdout before finishing, it's possible to wait by reading stdout. Here is an example:
use std::io::Read;
use std::process::*;
use std::thread;
use std::time::Duration;
fn wait_on_output(mut out: ChildStdout) {
while out.read_exact(&mut [0; 1024]).is_ok() { }
}
fn wait_or_kill(cmd: &mut Command, max: Duration) {
let mut child = cmd.stdout(Stdio::piped())
.spawn()
.expect("Cannot spawn child");
let out = child.stdout.take().expect("No stdout on child");
let h = thread::spawn(move || {
thread::sleep(max);
child.kill().expect("Cannot kill child");
println!("{:?}", child.wait());
});
wait_on_output(out);
h.join().expect("join fail");
}
fn main() {
wait_or_kill(Command::new("sleep").arg("1"), Duration::new(2, 0));
wait_or_kill(Command::new("sleep").arg("3"), Duration::new(2, 0));
}
The output of this program on my system is
Ok(ExitStatus(ExitStatus(0)))
Ok(ExitStatus(ExitStatus(9)))
Although not in the docs, killing a finished child returns Ok.
This works because killing a process closes the files associated with it. However, if the child spawns new processes, killing the child may not kill those other processes, and they may keep stdout open.
Obviously, you can just kill the process yourself. The Child::id method gives you the "OS-assigned process identifier" that should be sufficient for that.
The only problem is that killing a process is a platform-dependent action. On UNIX killing a process is handled with the kill function:
extern crate libc;
use std::env::args;
use std::process::Command;
use std::thread::{spawn, sleep};
use std::time::Duration;
use libc::{kill, SIGTERM};
fn main() {
let mut child = Command::new("/bin/sh").arg("-c").arg("sleep 1; echo foo").spawn().unwrap();
let child_id = child.id();
if args().any(|arg| arg == "--kill") {
spawn(move || {
sleep(Duration::from_millis(100));
unsafe {
kill(child_id as i32, SIGTERM);
}
});
}
child.wait().unwrap();
}
On Windows you might try the OpenProcess and TerminateProcess functions (available with the kernel32-sys crate).
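For example, here is a rough sketch of that approach with the kernel32-sys crate. Treat it as illustrative rather than a tested implementation; the PROCESS_TERMINATE constant is hard-coded so the sketch does not depend on a particular winapi version.
extern crate kernel32;

use std::process::Command;

// The PROCESS_TERMINATE access right (0x0001), hard-coded to avoid pulling in
// a specific winapi version just for the constant.
const PROCESS_TERMINATE: u32 = 0x0001;

fn main() {
    let mut child = Command::new("notepad.exe").spawn().unwrap();
    let pid = child.id();

    unsafe {
        // Open a handle with just enough rights to terminate the process.
        let handle = kernel32::OpenProcess(PROCESS_TERMINATE, 0, pid);
        if !handle.is_null() {
            kernel32::TerminateProcess(handle, 1);
            kernel32::CloseHandle(handle);
        }
    }

    // Reap the child's exit status as usual.
    println!("{:?}", child.wait());
}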
Related
Is there any way to avoid killing the parent process when using the fork library, as in this code, so that the child is spawned but the parent keeps doing its thing until it ends?
use fork::{daemon, Fork};
use std::process::Command;
fn main() {
if let Ok(Fork::Child) = daemon(false, false) {
Command::new("/usr/bin/firefox")
.output()
.expect("failed to execute process");
}
}
I have a UDP socket that is receiving data
pub async fn start() -> Result<(), std::io::Error> {
loop {
let mut data = vec![0; 1024];
socket.recv_from(&mut data).await?;
}
}
This code is currently blocked on the .await when there is no data coming in. I want to gracefully shut down my server from my main thread, so how do I send a signal to this .await that it should stop sleeping and shut down instead?
Note: The Tokio website has a page on graceful shutdown.
If you have more than one task to kill, you should use a broadcast channel to send shutdown messages. You can use it together with tokio::select!.
use tokio::sync::broadcast::Receiver;
// You may want to log errors rather than return them in this function.
pub async fn start(mut kill: Receiver<()>) -> Result<(), std::io::Error> {
tokio::select! {
output = real_start() => output,
_ = kill.recv() => Err(...),
}
}
pub async fn real_start() -> Result<(), std::io::Error> {
loop {
let mut data = vec![0; 1024];
socket.recv_from(&mut data).await?;
}
}
Then to kill all the tasks, send a message on the channel.
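For example, here is a minimal sketch of the sending side (assuming the full tokio feature set). The worker function is a stand-in for start above, and the channel capacity and task count are arbitrary.
use std::time::Duration;
use tokio::sync::broadcast::{self, Receiver};

// Stand-in for the start() function above: does some work until a shutdown
// message arrives.
async fn worker(id: usize, mut kill: Receiver<()>) {
    tokio::select! {
        // Placeholder for the real work, e.g. socket.recv_from(...).await
        _ = tokio::time::sleep(Duration::from_secs(3600)) => {},
        _ = kill.recv() => println!("task {} shutting down", id),
    }
}

#[tokio::main]
async fn main() {
    // Capacity 1 is enough for a single shutdown message.
    let (kill_tx, _) = broadcast::channel(1);

    // Each task gets its own Receiver via subscribe().
    let tasks: Vec<_> = (0..4)
        .map(|id| tokio::spawn(worker(id, kill_tx.subscribe())))
        .collect();

    // ... later, when it is time to shut down:
    kill_tx.send(()).unwrap();

    for task in tasks {
        let _ = task.await;
    }
}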
To kill only a single task, you can use the JoinHandle::abort method, which will kill the task as soon as possible. Note that this method is available only in Tokio 1.x and 0.3.x, and to abort a task using Tokio 0.2.x, see the next section below.
let task = tokio::spawn(start());
...
task.abort();
As an alternative to JoinHandle::abort, you can use abortable from the futures crate. When you spawn the task, you do the following:
let (task, handle) = abortable(start());
tokio::spawn(task);
Then later you can kill the task by calling the abort method.
handle.abort();
Of course, a channel with select! can also be used to kill a single task, perhaps combined with a oneshot channel rather than a broadcast channel.
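A minimal sketch of that variant, reusing real_start from above (the name start_single is made up here):
use tokio::sync::oneshot;

// Same shape as start() above, but for one task: a oneshot::Receiver is itself
// a future, so it can be polled directly inside select!.
pub async fn start_single(kill: oneshot::Receiver<()>) -> Result<(), std::io::Error> {
    tokio::select! {
        output = real_start() => output,
        _ = kill => Ok(()),
    }
}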
All of these methods guarantee that the real_start method is killed at an .await. It is not possible to kill the task while it is running code between two .awaits. You can read more about why this is here.
The mini-redis project contains an accessible real-world example of graceful shutdown of a server. Additionally, the Tokio tutorial has chapters on both select and channels.
I'm trying to get a parent process and a child process to communicate with each other using a tokio::net::UnixStream. For some reason the child is unable to read whatever the parent writes to the socket, and presumably the other way around.
The function I have is similar to the following:
pub async fn run() -> Result<(), Error> {
let mut socks = UnixStream::pair()?;
match fork() {
Ok(ForkResult::Parent { .. }) => {
socks.0.write_u32(31337).await?;
Ok(())
}
Ok(ForkResult::Child) => {
eprintln!("Reading from master");
let msg = socks.1.read_u32().await?;
eprintln!("Read from master {}", msg);
Ok(())
}
Err(_) => Err(Error),
}
}
The socket doesn't get closed, otherwise I'd get an immediate error trying to read from socks.1. If I move the read into the parent process it works as expected. The first line "Reading from master" gets printed, but the second line never does.
I cannot change the communication paradigm, since I'll be using execve to start another binary that expects to be talking to a socketpair.
Any idea what I'm doing wrong here? Is it something to do with the async/await?
When you call the fork() system call:
The child process is created with a single thread—the one that called fork().
The default executor in tokio is a thread pool executor. The child process will only get one of the threads in the pool, so it won't work properly.
I found I was able to make your program work by setting the thread pool to contain only a single thread, like this:
use tokio::prelude::*;
use tokio::net::UnixStream;
use nix::unistd::{fork, ForkResult};
use nix::sys::wait;
use std::io::Error;
use std::io::ErrorKind;
use wait::wait;
// Limit to 1 thread
#[tokio::main(core_threads = 1)]
async fn main() -> Result<(), Error> {
let mut socks = UnixStream::pair()?;
match fork() {
Ok(ForkResult::Parent { .. }) => {
eprintln!("Writing!");
socks.0.write_u32(31337).await?;
eprintln!("Written!");
wait().unwrap();
Ok(())
}
Ok(ForkResult::Child) => {
eprintln!("Reading from master");
let msg = socks.1.read_u32().await?;
eprintln!("Read from master {}", msg);
Ok(())
}
Err(_) => Err(Error::new(ErrorKind::Other, "oh no!")),
}
}
Another change I had to make was to force the parent to wait for the child to complete by calling wait(), which is also something you probably do not want to do in a real async program.
Most of the advice I have read says that if you need to fork from a threaded program, you should either do it before creating any threads, or call execve() in the child immediately after forking (which is what you plan to do anyway).
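For the second option, here is a minimal sketch of fork-then-exec, written against the same nix version as above (where fork() is a safe call; newer nix releases mark it unsafe). The command being executed is just an example.
use nix::sys::wait::wait;
use nix::unistd::{fork, ForkResult};
use std::os::unix::process::CommandExt;
use std::process::Command;

fn main() {
    match fork() {
        Ok(ForkResult::Child) => {
            // Replace the child's process image immediately, before touching
            // any runtime or thread state inherited from the parent.
            let err = Command::new("/bin/echo").arg("hello from the child").exec();
            // exec() only returns if it failed.
            eprintln!("exec failed: {}", err);
        }
        Ok(ForkResult::Parent { child }) => {
            println!("spawned child {:?}", child);
            wait().unwrap();
        }
        Err(e) => eprintln!("fork failed: {}", e),
    }
}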
I am trying to write a program that spawns a bunch of threads and then joins the threads at the end. I want it to be interruptible, because my plan is to make this a constantly running program in a UNIX service.
The idea is that worker_pool will contain all the threads that have been spawned, so terminate can be called at any time to collect them.
I can't seem to find a way to use the chan_select crate to do this, because it requires me to spawn a thread first in order to spawn my child threads, and once I do that I can no longer use the worker_pool variable to join the threads on interrupt, because it had to be moved into the closure for the main loop. If you comment out the line in the interrupt that terminates the workers, it compiles.
I'm a little frustrated, because this would be really easy to do in C. I could set up a static pointer, but when I try to do that in Rust I get an error, because I am using a vector for my threads and I can't initialize a static to an empty vector. I know it is safe to join the workers in the interrupt code, because execution stops there waiting for the signal.
Perhaps there is a better way to do the signal handling, or maybe I'm missing something that I can do.
The error and code follow:
MacBook8088:video_ingest pjohnson$ cargo run
Compiling video_ingest v0.1.0 (file:///Users/pjohnson/projects/video_ingest)
error[E0382]: use of moved value: `worker_pool`
--> src/main.rs:30:13
|
24 | thread::spawn(move || run(sdone, &mut worker_pool));
| ------- value moved (into closure) here
...
30 | worker_pool.terminate();
| ^^^^^^^^^^^ value used here after move
<chan macros>:42:47: 43:23 note: in this expansion of chan_select! (defined in <chan macros>)
src/main.rs:27:5: 35:6 note: in this expansion of chan_select! (defined in <chan macros>)
|
= note: move occurs because `worker_pool` has type `video_ingest::WorkerPool`, which does not implement the `Copy` trait
main.rs
#[macro_use]
extern crate chan;
extern crate chan_signal;
extern crate video_ingest;
use chan_signal::Signal;
use video_ingest::WorkerPool;
use std::thread;
use std::ptr;
///
/// Starts processing
///
fn main() {
let mut worker_pool = WorkerPool { join_handles: vec![] };
// Signal gets a value when the OS sent a INT or TERM signal.
let signal = chan_signal::notify(&[Signal::INT, Signal::TERM]);
// When our work is complete, send a sentinel value on `sdone`.
let (sdone, rdone) = chan::sync(0);
// Run work.
thread::spawn(move || run(sdone, &mut worker_pool));
// Wait for a signal or for work to be done.
chan_select! {
signal.recv() -> signal => {
println!("received signal: {:?}", signal);
worker_pool.terminate(); // <-- Comment out to compile
},
rdone.recv() => {
println!("Program completed normally.");
}
}
}
fn run(sdone: chan::Sender<()>, worker_pool: &mut WorkerPool) {
loop {
worker_pool.ingest();
worker_pool.terminate();
}
}
lib.rs
extern crate libc;
use std::thread;
use std::thread::JoinHandle;
use std::os::unix::thread::JoinHandleExt;
use libc::pthread_join;
use libc::c_void;
use std::ptr;
use std::time::Duration;
pub struct WorkerPool {
pub join_handles: Vec<JoinHandle<()>>
}
impl WorkerPool {
///
/// Does the actual ingestion
///
pub fn ingest(&mut self) {
// Use 9 threads for an example.
for i in 0..10 {
self.join_handles.push(
thread::spawn(move || {
// Get the videos
println!("Getting videos for thread {}", i);
thread::sleep(Duration::new(5, 0));
})
);
}
}
///
/// Joins all threads
///
pub fn terminate(&mut self) {
println!("Total handles: {}", self.join_handles.len());
for handle in &self.join_handles {
println!("Joining thread...");
unsafe {
let mut state_ptr: *mut *mut c_void = 0 as *mut *mut c_void;
pthread_join(handle.as_pthread_t(), state_ptr);
}
}
self.join_handles = vec![];
}
}
terminate can be called at any time to collect them.
I don't want to stop the threads; I want to collect them with join. I agree stopping them would not be a good idea.
These two statements don't make sense to me. You can only join a thread when it's complete. The words "interruptible" and "at any time" suggest that you could attempt to stop a thread while it is still doing some processing. Which behavior do you want?
If you want to be able to stop a thread that has partially completed, you have to enhance your code to check if it should exit early. This is usually complicated by the fact that you are doing some big computation that you don't have control over. Ideally, you break that up into chunks and check your exit flag frequently. For example, with video work, you could check every frame. Then the response delay is roughly the time to process a frame.
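For example, here is a sketch of that pattern, where the frame type and per-frame work are placeholders:
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// Placeholder for a real frame.
type Frame = Vec<u8>;

fn process_frame(_frame: &Frame) {
    // the actual per-frame work goes here
}

// Check the flag once per frame so the thread can be joined promptly after an
// interrupt, instead of only after the whole job has finished.
fn process_video(frames: &[Frame], stop: &Arc<AtomicBool>) {
    for frame in frames {
        if stop.load(Ordering::SeqCst) {
            return; // asked to stop; abandon the remaining frames
        }
        process_frame(frame);
    }
}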
this would be really easy to do in C.
This would be really easy to do incorrectly. For example, the code currently presented attempts to perform mutation to the pool from two different threads without any kind of synchronization. That's a sure-fire recipe to make broken, hard-to-debug code.
// Use 9 threads for an example.
0..10 creates 10 threads.
Anyway, it seems like the missing piece of knowledge is Arc and Mutex. Arc allows sharing ownership of a single item between threads, and Mutex allows for run-time mutable borrowing between threads.
#[macro_use]
extern crate chan;
extern crate chan_signal;
use chan_signal::Signal;
use std::thread::{self, JoinHandle};
use std::sync::{Arc, Mutex};
fn main() {
let worker_pool = Arc::new(Mutex::new(WorkerPool::new()));
let signal = chan_signal::notify(&[Signal::INT, Signal::TERM]);
let (work_done_tx, work_done_rx) = chan::sync(0);
let worker_pool_clone = worker_pool.clone();
thread::spawn(move || run(work_done_tx, worker_pool_clone));
// Wait for a signal or for work to be done.
chan_select! {
signal.recv() -> signal => {
println!("received signal: {:?}", signal);
let mut pool = worker_pool.lock().expect("Unable to lock the pool");
pool.terminate();
},
work_done_rx.recv() => {
println!("Program completed normally.");
}
}
}
fn run(_work_done_tx: chan::Sender<()>, worker_pool: Arc<Mutex<WorkerPool>>) {
loop {
let mut worker_pool = worker_pool.lock().expect("Unable to lock the pool");
worker_pool.ingest();
worker_pool.terminate();
}
}
pub struct WorkerPool {
join_handles: Vec<JoinHandle<()>>,
}
impl WorkerPool {
pub fn new() -> Self {
WorkerPool {
join_handles: vec![],
}
}
pub fn ingest(&mut self) {
self.join_handles.extend(
(0..10).map(|i| {
thread::spawn(move || {
println!("Getting videos for thread {}", i);
})
})
)
}
pub fn terminate(&mut self) {
for handle in self.join_handles.drain(..) {
handle.join().expect("Unable to join thread")
}
}
}
Beware that the program logic itself is still poor; even though an interrupt is sent, the loop in run continues to execute. The main thread will lock the mutex, join all the current threads1, unlock the mutex and exit the program. However, the loop can lock the mutex before the main thread has exited and start processing some new data! And then the program exits right in the middle of processing. It's almost the same as if you didn't handle the interrupt at all.
1: Haha, tricked you! There are no running threads at that point. Since the mutex is locked for the entire loop, the only time another lock can be taken is when the loop is resetting. However, since the last instruction in the loop is to join all the threads, there won't be any more running.
I don't want to let the program terminate before all threads have completed.
Perhaps it's an artifact of the reduced problem, but I don't see how the infinite loop can ever exit, so the "I'm done" channel seems superfluous.
I'd probably just add a flag that says "please stop" when an interrupt is received. Then I'd check that instead of the infinite loop and wait for the running thread to finish before exiting the program.
use std::sync::atomic::{AtomicBool, Ordering};
fn main() {
let worker_pool = WorkerPool::new();
let signal = chan_signal::notify(&[Signal::INT, Signal::TERM]);
let please_stop = Arc::new(AtomicBool::new(false));
let threads_please_stop = please_stop.clone();
let runner = thread::spawn(move || run(threads_please_stop, worker_pool));
// Wait for a signal
chan_select! {
signal.recv() -> signal => {
println!("received signal: {:?}", signal);
please_stop.store(true, Ordering::SeqCst);
},
}
runner.join().expect("Unable to join runner thread");
}
fn run(please_stop: Arc<AtomicBool>, mut worker_pool: WorkerPool) {
while !please_stop.load(Ordering::SeqCst) {
worker_pool.ingest();
worker_pool.terminate();
}
}
I believe I understand, in general, one way of doing this:
Create a Command
Use Stdio::piped() to create a new pair of output streams
Configure command.stdout(), and command.stderr()
Spawn the process
Create a new thread and pass the stderr and stdout to it <-- ???
In the remote thread, continually poll for input and write it to the output stream.
In the main thread, wait for the process to finish.
Does that sound right?
My two actual questions:
Is there an easier way that doesn't involve a 'read thread' per process?
If there isn't an easier way, Read::read() requires &mut self; how do you pass that into a remote thread?
Please provide specific examples of how to actually stream the output, not just generic advice about how to do it...
To be more specific, here's the default example of using spawn:
use std::process::Command;
let mut child = Command::new("/bin/cat")
.arg("file.txt")
.spawn()
.expect("failed to execute child");
let ecode = child.wait()
.expect("failed to wait on child");
assert!(ecode.success());
How can the above example be changed to stream the output of child to the console, rather than just waiting for an exit code?
Although the accepted answer is correct, it doesn't cover the non-trivial case.
To stream output and handle it manually, use Stdio::piped() and manually handle the .stdout property on the child returned from calling spawn, like this:
use std::process::{Command, Stdio};
use std::path::Path;
use std::io::{BufReader, BufRead};
pub fn exec_stream<P: AsRef<Path>>(binary: P, args: Vec<&'static str>) {
let mut cmd = Command::new(binary.as_ref())
.args(&args)
.stdout(Stdio::piped())
.spawn()
.unwrap();
{
let stdout = cmd.stdout.as_mut().unwrap();
let stdout_reader = BufReader::new(stdout);
let stdout_lines = stdout_reader.lines();
for line in stdout_lines {
println!("Read: {:?}", line);
}
}
cmd.wait().unwrap();
}
#[test]
fn test_long_running_process() {
exec_stream("findstr", vec!("/s", "sql", "C:\\tmp\\*"));
}
See also Merge child process stdout and stderr regarding catching the output from stderr and stdout simultaneously.
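As a rough sketch of that combination (not taken from the linked answer): pipe both streams, drain stderr on a helper thread, and read stdout on the current thread.
use std::io::{BufRead, BufReader};
use std::process::{Command, Stdio};
use std::thread;

fn stream_both(cmd: &mut Command) {
    let mut child = cmd
        .stdout(Stdio::piped())
        .stderr(Stdio::piped())
        .spawn()
        .expect("failed to spawn");

    // Read stderr on a separate thread so neither pipe can fill up and block
    // the child.
    let stderr = child.stderr.take().expect("no stderr");
    let err_thread = thread::spawn(move || {
        for line in BufReader::new(stderr).lines() {
            eprintln!("stderr: {:?}", line);
        }
    });

    let stdout = child.stdout.take().expect("no stdout");
    for line in BufReader::new(stdout).lines() {
        println!("stdout: {:?}", line);
    }

    err_thread.join().expect("stderr thread panicked");
    child.wait().expect("failed to wait on child");
}

fn main() {
    // Example invocation; any command that writes to both streams will do.
    stream_both(Command::new("sh").arg("-c").arg("echo to stdout; echo to stderr >&2"));
}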
I'll happily accept any example of spawning a long running process and streaming output to the console, by whatever means.
It sounds like you want Stdio::inherit:
use std::process::{Command, Stdio};
fn main() {
let mut cmd =
Command::new("cat")
.args(&["/usr/share/dict/web2"])
.stdout(Stdio::inherit())
.stderr(Stdio::inherit())
.spawn()
.unwrap();
// It's streaming here
let status = cmd.wait();
println!("Exited with status {:?}", status);
}