Hello everyone,
I am using Rust (with the notify crate) to write a program that watches a directory:
1. Start a watcher.
2. Loop, match on the event I care about, and then do some work.
Because I match on Ok(DebouncedEvent::Create(p)), I first remove the directory that is being watched and then recreate it, but after that the watcher fails to keep watching.
I thought the filesystem operations might not be atomic, so I sleep for 3 seconds, but it still fails.
I then tried deleting only the files inside the directory instead of the directory itself, but that fails as well.
Starting the watcher:
// Create a channel to receive the events.
let (tx, rx) = channel();
// Create a watcher
let mut watcher: RecommendedWatcher = try!(Watcher::new(tx.clone(), Duration::from_secs(policy_interval as u64)));
// Path to be monitored
try!(watcher.watch(metrics_repo_path.as_path(), RecursiveMode::Recursive));
My loop:
loop {
    // Step 2: Start monitoring the metrics repository
    match rx.recv() {
        Ok(DebouncedEvent::Create(p)) => {
            eprintln!("OK OK, loop start");
            if let Some(ext) = p.extension() {
                if num_log_files == num_instances {
                    // We have all logs for this epoch
                    remove_dir_contents(&metrics_repo_path).unwrap();
                    thread::sleep(Duration::from_millis(3000));
                    // // Remove old rates files
                    // // TODO: remove only files in the repo
                    // let _ = Command::new("rm")
                    //     .arg("-r")
                    //     .arg(metrics_repo_path.to_str().unwrap())
                    //     .output()
                    //     .expect("Failed to remove log files.");
                    // // Create a new rates folder
                    // let _ = Command::new("mkdir")
                    //     .arg(metrics_repo_path.to_str().unwrap())
                    //     .output()
                    //     .expect("Failed to create new rates folder.");
                } else {
                    // No re-configuration was issued
                    epochs_since_reconfiguration += 1;
                }
                // Clear epoch information
                epoch_files.remove(epoch);
            }
        },
        Err(e) => panic!("Monitoring error: {:?}", e),
        _ => {}
    }
}
The helper function:
fn remove_dir_contents<P: AsRef<Path>>(path: P) -> io::Result<()> {
for entry in fs::read_dir(path)? {
fs::remove_file(entry?.path())?;
}
Ok(())
}
I also tried just restarting the watch, but that fails too. Like this:
remove_dir_contents(&metrics_repo_path).unwrap();
thread::sleep(Duration::from_millis(3000));
try!(watcher.watch(metrics_repo_path.as_path(), RecursiveMode::Recursive));
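For reference, this is the unwatch / recreate / re-watch sequence I would expect to work. It is only a sketch on my side (assuming the notify 4.x API, where Watcher has an unwatch method), so please tell me if this is the wrong approach:
use std::fs;
use std::path::Path;
use notify::{RecommendedWatcher, RecursiveMode, Watcher};

// Sketch: drop the watch before touching the directory, reset it, then watch again.
fn reset_watched_dir(watcher: &mut RecommendedWatcher, dir: &Path) -> notify::Result<()> {
    watcher.unwatch(dir)?;                           // stop watching before the inode goes away
    fs::remove_dir_all(dir).ok();                    // remove the directory and its contents
    fs::create_dir_all(dir).expect("recreate dir");  // recreate it empty
    watcher.watch(dir, RecursiveMode::Recursive)     // register the watch on the new inode
}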
In my opinion my code should reach the
Ok(DebouncedEvent::Create(p))
arm, but it never gets there, even though a new file was created in the watched directory.
Any help is greatly appreciated.
Related
I have a very simple GStreamer pipeline that looks like this:
uridecodebin -> compositor -> videoconvert -> autovideosink
I want to be able, at any time, to add a new uridecodebin to the compositor.
If I add the second source before the pipeline is running, it works fine, but if I delay the addition of the second source, the pipeline gets stuck and I get tons of QoS events telling me frames are being dropped.
This issue does not occur if I only read non-live sources, but it happens with my RTMP streams, or if I mix live and non-live sources.
When sync=false is set on the sink, RTMP streams are played, but it does not work with non-live sources.
My assumption is that I am missing a step with time/clock/latency, but I don't know what.
Here is the code (in Rust) used to add a new source:
fn connect_pad_added(src_pad: &gst::Pad, src: &gst::Element, compositor: &gst::Element) {
println!("Received new pad {} from {}", src_pad.name(), src.name());
let new_pad_caps = src_pad
.current_caps()
.expect("Failed to get caps of new pad.");
let new_pad_struct = new_pad_caps
.structure(0)
.expect("Failed to get first structure of caps.");
let new_pad_type = new_pad_struct.name();
let is_video = new_pad_type.starts_with("video/x-raw");
if !is_video {
println!(
"It has type {} which is not raw video. Ignoring.",
new_pad_type
);
return;
}
println!("Created template");
let sink_pad = compositor
.request_pad_simple("sink_%u")
.expect("Could not get sink pad from compositor");
println!("Got pad");
if sink_pad.is_linked() {
println!("We are already linked. Ignoring.");
return;
}
if sink_pad.name() == "sink_0" {
sink_pad.set_property("width", 1920i32);
sink_pad.set_property("height", 1080i32);
} else {
sink_pad.set_property("alpha", 0.8f64);
}
let res = src_pad.link(&sink_pad);
if res.is_err() {
println!("Type is {} but link failed.", new_pad_type);
} else {
println!("Link succeeded (type {}).", new_pad_type);
}
}
fn add_new_element(pipeline: &gst::Pipeline, uri: &str) {
println!("Adding new element");
let source = gst::ElementFactory::make("uridecodebin")
.property("uri", uri)
.build()
.unwrap();
let compositor = pipeline.by_name("compositor").unwrap();
pipeline.add(&source).unwrap();
source.connect_pad_added(move |src, src_pad| {
println!("Received new pad {} from {}", src_pad.name(), src.name());
connect_pad_added(src_pad, src, &compositor);
});
source
.set_state(gst::State::Paused)
.expect("Unable to set the uridecodebin to the `Paused` state");
println!("Added new element");
}
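The only time/clock-related steps I can think of are syncing the new element's state with the pipeline instead of hard-coding Paused, and offsetting the new compositor pad by the pipeline's current running time. This is an untested sketch of that idea, not something I have confirmed fixes the problem:
use gstreamer as gst;
use gst::prelude::*;

// Untested sketch: after adding and linking the new uridecodebin, align it
// with the already-running pipeline instead of forcing the Paused state.
fn align_new_source(pipeline: &gst::Pipeline, source: &gst::Element, sink_pad: &gst::Pad) {
    // Shift the new branch so its buffers start at the pipeline's current
    // running time rather than at zero.
    if let Some(running_time) = pipeline.current_running_time() {
        sink_pad.set_offset(running_time.nseconds() as i64);
    }
    // Let the new element follow whatever state the pipeline is currently in.
    source
        .sync_state_with_parent()
        .expect("Unable to sync uridecodebin with the pipeline state");
}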
I am trying to run a streamlit server behind the scenes of a Tauri application.
The streamlit server is packaged with PyInstaller into a single binary file and works as expected when standalone.
I have a Rust main.rs file that needs to run the streamlit binary using a Command (in this case it's a tauri::api::process::Command using new_sidecar).
It spawns the server, but when the app closes, my streamlit server is not disposed of.
On window exit, I want to send a kill command to kill the server instance in the child.
Here is an example of my code:
#![cfg_attr(
all(not(debug_assertions), target_os = "windows"),
windows_subsystem = "windows"
)]
use async_std::task;
use std::sync::mpsc::sync_channel;
use std::thread;
use std::time::Duration;
use tauri::api::process::{Command, CommandEvent};
use tauri::{Manager, WindowEvent};
fn main() {
let (tx_kill, rx_kill) = sync_channel(1);
tauri::Builder::default()
.setup(|app| {
println!("App Setup Start");
let t = Command::new_sidecar("streamlit").expect("failed to create sidecar");
let (mut rx, child) = t.spawn().expect("Failed to spawn server");
let splashscreen_window = app.get_window("splashscreen").unwrap();
let main_window = app.get_window("main").unwrap();
// Listen for server port then refresh main window
tauri::async_runtime::spawn(async move {
while let Some(event) = rx.recv().await {
if let CommandEvent::Stdout(output) = event {
if output.contains("Network URL:") {
let tokens: Vec<&str> = output.split(":").collect();
let port = tokens.last().unwrap();
println!("Connect to port {}", port);
main_window.eval(&format!(
"window.location.replace('http://localhost:{}')",
port
));
task::sleep(Duration::from_secs(2)).await;
splashscreen_window.close().unwrap();
main_window.show().unwrap();
}
}
}
});
// Listen for kill command
thread::spawn(move || loop {
let event = rx_kill.recv();
if event.unwrap() == -1 {
child.kill().expect("Failed to close API");
}
});
Ok(())
})
.on_window_event(move |event| match event.event() {
WindowEvent::Destroyed => {
println!("Window destroyed");
tx_kill.send(-1).expect("Failed to send close signal");
}
_ => {}
})
.run(tauri::generate_context!())
.expect("error while running application");
}
But I am getting an error when I try to kill the child instance:
thread::spawn(move || loop {
let event = rx_kill.recv();
if event.unwrap() == -1 {
child.kill().expect("Failed to close API");
}
});
use of moved value: `child`
move occurs because `child` has type `CommandChild`, which does not implement the `Copy` trait
Any ideas?
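The direction I have been experimenting with is to put the child behind shared ownership (Arc<Mutex<Option<...>>>) so the kill thread can take it out when it needs it, rather than moving it into the closure. Here is a minimal, stand-alone sketch of that ownership pattern; it uses std::process::Child and a Unix sleep command in place of the Tauri sidecar, and assumes the same pattern carries over to CommandChild (whose kill, as far as I can tell, takes the child by value, so take() followed by kill() should line up):
use std::process::{Child, Command};
use std::sync::mpsc::sync_channel;
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let (tx_kill, rx_kill) = sync_channel::<i32>(1);

    // Stand-in for the sidecar: any long-running process will do here.
    let child: Child = Command::new("sleep")
        .arg("30")
        .spawn()
        .expect("failed to spawn child");

    // Shared, take-able ownership: the kill thread pulls the child out of the
    // Option exactly once, so nothing is used after being moved.
    let child = Arc::new(Mutex::new(Some(child)));

    let child_for_kill = Arc::clone(&child);
    let killer = thread::spawn(move || {
        if rx_kill.recv() == Ok(-1) {
            if let Some(mut c) = child_for_kill.lock().unwrap().take() {
                c.kill().expect("Failed to close child");
            }
        }
    });

    // Elsewhere (e.g. the WindowEvent::Destroyed handler) just send the signal.
    tx_kill.send(-1).expect("Failed to send close signal");
    killer.join().unwrap();
}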
I referred to this and also tried the tungstenite library, but I was able to run only one server at a time; it captured the whole thread.
I tried running multiple servers on different threads, but they never listen for anything and the program just exits.
Is there any way I can run multiple WebSocket servers on different ports, and create and destroy a server at runtime?
Edit: If I run one server on the main thread and another one on a different thread, it works; it looks like I'd have to keep the main thread busy somehow, but is there a better way?
Here's some example code.
It uses:
use std::net::TcpListener;
use std::thread::spawn;
use tungstenite::accept;
This is the normal code that blocks the main thread:
let server = TcpListener::bind("127.0.0.1:9002").expect("err: ");
for stream in server.incoming() {
spawn(move || {
let mut websocket = accept(stream.unwrap()).unwrap();
loop {
let msg = websocket.read_message().unwrap();
println!("{}", msg);
// We do not want to send back ping/pong messages.
if msg.is_binary() || msg.is_text() {
websocket.write_message(msg).unwrap();
}
}
});
}
Here's the code with a thread:
spawn(|| {
let server = TcpListener::bind("127.0.0.1:9001").expect("err: ");
for stream in server.incoming() {
spawn(move || {
let mut websocket = accept(stream.unwrap()).unwrap();
loop {
let msg = websocket.read_message().unwrap();
println!("{}", msg);
// We do not want to send back ping/pong messages.
if msg.is_binary() || msg.is_text() {
websocket.write_message(msg).unwrap();
}
}
});
}
});
But the above code needs the main thread to keep running somehow; I'm indeed able to run multiple servers on different threads, but I need something to occupy the main thread.
Rust programs terminate when the end of main() is reached. What you need to do is wait until your secondary threads have finished.
std::thread::spawn returns a JoinHandle, which has a join method which does exactly that - it waits (blocks) until the thread that the handle refers to finishes, and returns an error if the thread panicked.
So, to keep your program alive as long as any threads are running, you need to collect all of these handles, and join() them one by one. Unlike a busy-loop, this will not waste CPU resources unnecessarily.
use std::net::TcpListener;
use std::thread::spawn;
use tungstenite::accept;
fn main() {
let mut handles = vec![];
// Spawn 3 identical servers on ports 9001, 9002, 9003
for i in 1..=3 {
let handle = spawn(move || {
let server = TcpListener::bind(("127.0.0.1", 9000 + i)).expect("err: ");
for stream in server.incoming() {
spawn(move || {
let mut websocket = accept(stream.unwrap()).unwrap();
loop {
let msg = websocket.read_message().unwrap();
println!("{}", msg);
// We do not want to send back ping/pong messages.
if msg.is_binary() || msg.is_text() {
websocket.write_message(msg).unwrap();
}
}
});
}
});
handles.push(handle);
}
// Wait for each thread to finish before exiting
for handle in handles {
if let Err(e) = handle.join() {
eprintln!("{:?}", e)
}
}
}
When you do all the work in a thread (or threads) and the main thread has nothing to do, it is usually set to wait for (join) that thread.
This has the additional advantage that if your secondary thread finishes or panics, then your program will also finish. Or you can wrap the whole create-thread/join-thread in a loop and make it more resilient:
fn main() {
loop {
let th = std::thread::spawn(|| {
// Do the real work here
std::thread::sleep(std::time::Duration::from_secs(1));
panic!("oh!");
});
if let Err(e) = th.join() {
eprintln!("Thread panic: {:?}", e)
}
}
}
Link to playground; I've changed the loop into a for _ in ..3 because the playground does not like infinite loops.
I am trying to read a serial data stream coming from a Bluetooth Low Energy dev board. The firmware registers a UART emulation service (custom UUID) and sends data via Receive_Characteristic (custom UUID). The serial data being sent is just an incrementing number.
Using rumble, I am able to form a connection to the device, and read something, but not the stream. What follows is a minimal working code example:
let manager = Manager::new().unwrap();
let mut adapter = manager
.adapters()
.expect("could not list adapters")
.into_iter()
.find(|a| a.name == self.adapter_name)
.expect("could not find adapter by name");
println!("power cycle adapter");
adapter = manager.down(&adapter).unwrap();
adapter = manager.up(&adapter).unwrap();
println!("connect adapter");
let central = adapter.connect().unwrap();
central.start_scan().unwrap();
println!(
"find desired {:?} peripheral...",
&self.device_name
);
// let the scan run for a moment before reading the results
std::thread::sleep(std::time::Duration::from_secs(1));
central.stop_scan().unwrap();
let peripherals = central.peripherals();
let mdevice = central
.peripherals()
.into_iter()
.find(|perf| {
perf.properties()
.local_name
.iter()
.any(|name| name.contains(&self.device_name))
})
.expect("could not find peripheral by name");
std::thread::sleep(std::time::Duration::from_secs(1));
match mdevice.connect() {
Ok(d) => {
println!("mdevice connected");
d
}
Err(err) => {
eprintln!("error connecting to mdevice: {:?}", err);
panic!()
}
};
std::thread::sleep(std::time::Duration::from_secs(1));
println!("discovering characteristics");
for ch in mdevice.discover_characteristics().unwrap().into_iter() {
println!("found characteristic: {:?}", ch);
}
std::thread::sleep(std::time::Duration::from_secs(1));
println!("get desired characteristic");
let receive_characteristic = mdevice
.discover_characteristics()
.unwrap()
.into_iter()
.find(|c| {
RECEIVE_CHARACTERISTIC == c.uuid
})
.expect("could not find given characteristic");
// this is some testing code to print out received data
let (tx, rx) = std::sync::mpsc::channel();
std::thread::spawn(move || loop {
let data = match mdevice.read(&receive_characteristic) {
Ok(d) => d,
Err(err) => { println!("received an error {:?}", err);
Vec::new()}
};
println!("send : {:02?}", data);
match tx.send(data) {
Ok(d) => d,
Err(e) => println!("error {:?}", e)
};
});
loop {
let dd = rx.recv();
println!("received : {:02?}", dd.unwrap());
}
Ok(())
Using rumble, I am able to connect to the device, but getting a stream is weird: I keep getting the same number in a Vec, and only sometimes a number further along in the incrementing sequence. Am I reading the serial stream correctly?
EDIT: I am currently using the nRF52840-DK development board. The firmware sends out incrementing numbers from 0 to 255, and then repeats the sequence.
Solved it.
The main problem was that I didn't fully understand the GATT profile and thus the Bluetooth LE protocol. This resource gives a good introduction to the topic.
The solution is to subscribe to data (event) updates after the device has been connected, and to register an event handler that reacts to incoming data. It was that simple.
// ... same code as before, but only the relevant pieces are shown.
mdevice.connect().expect("Could not connect to device");
std::thread::sleep(std::time::Duration::from_secs(1));
let chars = mdevice.discover_characteristics()
.expect("Discovering characteristics failed");
std::thread::sleep(std::time::Duration::from_secs(1));
let receive_characteristic = chars.clone().into_iter()
.find(|c|
{
// The constant is just a fixed array
RECEIVE_CHARACTERISTIC == c.uuid
}).expect("Could not find given characteristic");
// subscribe to notifications for the characteristic
mdevice.subscribe(&receive_characteristic)
    .expect("Subscribing to characteristic failed");
mdevice.on_notification(Box::from(move |v : rumble::api::ValueNotification|
{
// do something with the received data
}));
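For completeness, the handler body can be as simple as forwarding each notification to the rest of the program over a channel; a small assumed sketch (the value field name is assumed from the rumble API, so double-check it):
// Assumed sketch of a handler body: forward each notification's payload
// (the `value` field, name assumed) over an mpsc channel to another thread.
let (tx, rx) = std::sync::mpsc::channel();
mdevice.on_notification(Box::from(move |v: rumble::api::ValueNotification| {
    tx.send(v.value).expect("failed to forward notification payload");
}));
// Elsewhere: for bytes in rx.iter() { println!("received: {:02?}", bytes); }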
I am replacing synchronous socket code written in Rust with the asynchronous equivalent using Tokio. Tokio uses futures for asynchronous activity so tasks are chained together and queued onto an executor to be executed by a thread pool.
The basic pseudocode for what I want to do is like this:
let listener = tokio::net::TcpListener::bind(&sock_addr).unwrap();
let server_task = listener.incoming().for_each(move |socket| {
let in_buf = vec![0u8; 8192];
// TODO this should happen continuously until an error happens
let read_task = tokio::io::read(socket, in_buf).and_then(move |(socket, in_buf, bytes_read)| {
/* ... Logic I want to happen repeatedly as bytes are read ... */
Ok(())
});
tokio::spawn(read_task);
Ok(())
}).map_err(|err| {
error!("Accept error = {:?}", err);
});
tokio::run(server_task);
This pseudocode would only execute my task once. How do I run it continuously? I want it to execute again and again, and only stop if it panics or returns an error. What's the simplest way of doing that?
Using loop_fn should work:
use futures::future::{loop_fn, Loop};

let read_task = loop_fn((socket, in_buf, 0), |(socket, in_buf, bytes_read)| {
    if bytes_read > 0 { /* handle bytes */ }
    tokio::io::read(socket, in_buf).map(|(socket, in_buf, n)| match n {
        0 => Loop::Break(()),                     // EOF: stop looping
        n => Loop::Continue((socket, in_buf, n)), // keep reading
    })
});
A clean way to accomplish this without fighting the type system is to use the tokio-codec crate; if you want to interact with the reader as a stream of bytes instead of defining a codec, you can use tokio_codec::BytesCodec.
use tokio::codec::Decoder;
use futures::Stream;
...
let listener = tokio::net::TcpListener::bind(&sock_addr).unwrap();
let server_task = listener.incoming().for_each(move |socket| {
let (_writer, reader) = tokio_codec::BytesCodec::new().framed(socket).split();
let read_task = reader
    .for_each(|bytes| {
        /* ... Logic I want to happen repeatedly as bytes are read ... */
        Ok(())
    })
    .map_err(|err| error!("Read error = {:?}", err));
tokio::spawn(read_task);
Ok(())
}).map_err(|err| {
error!("Accept error = {:?}", err);
});
tokio::run(server_task);