I have a very simple GStreamer pipeline looking like this:
uridecodebin -> compositor -> videoconvert -> autovideosink
I want to be able to add a new uridecodebin to the compositor at any time.
If I add the second source before the pipeline is running, it works fine, but if I delay the addition of the second source, the pipeline gets stuck and I get tons of QoS events telling me frames are being dropped.
This issue does not occur if I only read non-live sources, but it happens with my RTMP streams, or if I mix live and non-live sources.
When sync=false is set on the sink, RTMP streams are played, but then it does not work with non-live sources.
My assumption is that I am missing a step with time/clock/latency, but I don't know what.
Here is the code (in Rust) used to add a new source:
fn connect_pad_added(src_pad: &gst::Pad, src: &gst::Element, compositor: &gst::Element) {
    println!("Received new pad {} from {}", src_pad.name(), src.name());

    let new_pad_caps = src_pad
        .current_caps()
        .expect("Failed to get caps of new pad.");
    let new_pad_struct = new_pad_caps
        .structure(0)
        .expect("Failed to get first structure of caps.");
    let new_pad_type = new_pad_struct.name();

    let is_video = new_pad_type.starts_with("video/x-raw");
    if !is_video {
        println!(
            "It has type {} which is not raw video. Ignoring.",
            new_pad_type
        );
        return;
    }
    println!("Created template");

    let sink_pad = compositor
        .request_pad_simple("sink_%u")
        .expect("Could not get sink pad from compositor");
    println!("Got pad");

    if sink_pad.is_linked() {
        println!("We are already linked. Ignoring.");
        return;
    }

    if sink_pad.name() == "sink_0" {
        sink_pad.set_property("width", 1920i32);
        sink_pad.set_property("height", 1080i32);
    } else {
        sink_pad.set_property("alpha", 0.8f64);
    }

    let res = src_pad.link(&sink_pad);
    if res.is_err() {
        println!("Type is {} but link failed.", new_pad_type);
    } else {
        println!("Link succeeded (type {}).", new_pad_type);
    }
}
fn add_new_element(pipeline: &gst::Pipeline, uri: &str) {
    println!("Adding new element");

    let source = gst::ElementFactory::make("uridecodebin")
        .property("uri", uri)
        .build()
        .unwrap();
    let compositor = pipeline.by_name("compositor").unwrap();

    pipeline.add(&source).unwrap();
    source.connect_pad_added(move |src, src_pad| {
        println!("Received new pad {} from {}", src_pad.name(), src.name());
        connect_pad_added(src_pad, src, &compositor);
    });

    source
        .set_state(gst::State::Paused)
        .expect("Unable to set the uridecodebin to the `Paused` state");
    println!("Added new element");
}
Related
My requirement is very simple, and a very reasonable one in many programs: send a specified message to my Channel after a specified time.
I've checked tokio for topics related to delay, interval, or timeout, but none of them seem that straightforward to implement.
What I've come up with so far is to spawn an asynchronous task, then wait or sleep for a certain amount of time, and finally send the message.
But, obviously, spawning an asynchronous task is a relatively heavy operation. Is there a better solution?
async fn my_handler(sender: mpsc::Sender<i32>, dur: Duration) {
    tokio::spawn(async move {
        time::sleep(dur).await;
        sender.send(0).await;
    });
}
You could try adding a second channel and a continuously running task that buffers messages until the time they are to be delivered. Implementing this is more involved than it sounds; I hope I'm handling cancellations right here:
fn make_timed_channel<T: Ord + Send + Sync + 'static>() -> (Sender<(Instant, T)>, Receiver<T>) {
    // Ord is an unnecessary requirement arising from me stuffing both the
    // Instant and the T into the BinaryHeap. You could drop this requirement
    // by using the priority_queue crate instead.
    let (sender1, receiver1) = mpsc::channel::<(Instant, T)>(42);
    let (sender2, receiver2) = mpsc::channel::<T>(42);
    let mut receiver1 = Some(receiver1);
    tokio::spawn(async move {
        let mut buf = std::collections::BinaryHeap::<Reverse<(Instant, T)>>::new();
        loop {
            // Pretend we're a bounded channel, or exit if the upstream closed
            if buf.len() >= 42 || receiver1.is_none() {
                match buf.pop() {
                    Some(Reverse((time, element))) => {
                        sleep_until(time).await;
                        if sender2.send(element).await.is_err() {
                            break;
                        }
                    }
                    None => break,
                }
            }
            // We have some deadline to send a message at
            else if let Some(Reverse((then, _))) = buf.peek() {
                if let Ok(recv) = timeout_at(*then, receiver1.as_mut().unwrap().recv()).await {
                    match recv {
                        Some(recv) => buf.push(Reverse(recv)),
                        None => receiver1 = None,
                    }
                } else {
                    if sender2.send(buf.pop().unwrap().0 .1).await.is_err() {
                        break;
                    }
                }
            }
            // We're empty, wait around
            else {
                match receiver1.as_mut().unwrap().recv().await {
                    Some(recv) => buf.push(Reverse(recv)),
                    None => receiver1 = None,
                }
            }
        }
    });
    (sender1, receiver2)
}
Whether this is more efficient than spawning tasks, you'd have to benchmark. (I doubt it; IIRC, Tokio has a much fancier mechanism than a BinaryHeap for waking up at the next timeout.)
One optimization you could make, if you don't need a Receiver<T> but just something that .poll().await can be called on: drop the second channel and maintain the BinaryHeap inside a custom receiver.
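The Reverse trick that makes the BinaryHeap act as a min-heap (so the earliest deadline is always on top) can be seen in isolation in a std-only sketch, with plain integers standing in for the (Instant, T) pairs:

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

fn main() {
    // BinaryHeap is a max-heap; wrapping each entry in Reverse inverts the
    // ordering, so pop() yields the smallest value (earliest deadline) first.
    let mut heap = BinaryHeap::new();
    for deadline in [30, 10, 20] {
        heap.push(Reverse(deadline));
    }
    assert_eq!(heap.pop(), Some(Reverse(10)));
    assert_eq!(heap.pop(), Some(Reverse(20)));
    assert_eq!(heap.pop(), Some(Reverse(30)));
    println!("Reverse turns the max-heap into a min-heap");
}
```

With tuples like (Instant, T), Reverse compares the Instant first, which is why the element type needs Ord in the code above.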
Hi everyone,
I am using Rust to write a program that watches a directory:
1. Start a watcher.
2. Loop, match the event case, and then do something.
Because I match on Ok(DebouncedEvent::Create(p)), I first remove the directory (which is being watched) and then recreate it, but the watcher fails to keep watching.
I thought the filesystem operations might not be atomic, so I slept for 3 s, but it failed again.
I then tried to delete only the files, not the directory itself, but it failed again.
Start a watcher:
// Create a channel to receive the events.
let (tx, rx) = channel();
// Create a watcher
let mut watcher: RecommendedWatcher =
    try!(Watcher::new(tx.clone(), Duration::from_secs(policy_interval as u64)));
// Path to be monitored
try!(watcher.watch(metrics_repo_path.as_path(), RecursiveMode::Recursive));
My loop:
loop {
    // Step 2: Start monitoring metrics repository
    match rx.recv() {
        Ok(DebouncedEvent::Create(p)) => {
            eprintln!("OK OK, loop start");
            if let Some(ext) = p.extension() {
                if num_log_files == num_instances {
                    // We have all logs for this epoch
                    remove_dir_contents(metrics_repo_path.to_str()).unwrap();
                    thread::sleep(Duration::from_millis(3000));
                    // // Remove old rates files
                    // // TODO: remove only files in the repo
                    // let _ = Command::new("rm")
                    //     .arg("-r")
                    //     .arg(metrics_repo_path.to_str().unwrap())
                    //     .output()
                    //     .expect("Failed to remove log files.");
                    // // Create a new rates folder
                    // let _ = Command::new("mkdir")
                    //     .arg(metrics_repo_path.to_str().unwrap())
                    //     .output()
                    //     .expect("Failed to create new rates folder.");
                } else {
                    // No re-configuration was issued
                    epochs_since_reconfiguration += 1;
                }
                // Clear epoch information
                epoch_files.remove(epoch);
            }
        }
        Err(e) => panic!("Monitoring error: {:?}", e),
        _ => {}
    }
}
The function:
fn remove_dir_contents<P: AsRef<Path>>(path: P) -> io::Result<()> {
    for entry in fs::read_dir(path)? {
        fs::remove_file(entry?.path())?;
    }
    Ok(())
}
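For what it's worth, the usual cause of this symptom is that deleting the watched directory itself invalidates the watch, while deleting only its contents keeps the watch target alive. A std-only sketch (the watch_demo path is made up for the demonstration) of what a helper like remove_dir_contents leaves behind:

```rust
use std::fs;
use std::io;
use std::path::Path;

// Delete every entry inside `path` without deleting `path` itself.
// Note: remove_file fails on subdirectories, so this only handles flat dirs.
fn remove_dir_contents<P: AsRef<Path>>(path: P) -> io::Result<()> {
    for entry in fs::read_dir(path.as_ref())? {
        fs::remove_file(entry?.path())?;
    }
    Ok(())
}

fn main() -> io::Result<()> {
    // Hypothetical scratch directory standing in for the watched repo.
    let dir = std::env::temp_dir().join("watch_demo");
    fs::create_dir_all(&dir)?;
    fs::write(dir.join("a.log"), b"1")?;
    fs::write(dir.join("b.log"), b"2")?;

    remove_dir_contents(&dir)?;

    // The directory itself still exists, so an inotify-style watch on it
    // would remain valid; only its contents are gone.
    assert!(dir.exists());
    assert_eq!(fs::read_dir(&dir)?.count(), 0);
    println!("directory kept, contents removed");
    Ok(())
}
```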
I also tried just restarting the watcher, but that fails too. Like this:
remove_dir_contents(metrics_repo_path.to_str()).unwrap();
thread::sleep(Duration::from_millis(3000));
try!(watcher.watch(metrics_repo_path.as_path(), RecursiveMode::Recursive));
In my opinion, my code should reach the
Ok(DebouncedEvent::Create(p))
arm, but it never gets there, even though a new file was created in the watched directory.
Any help is greatly appreciated.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 1 year ago.
I'm reading this Rust code and I barely have the mental capacity to understand what's going on with all the mutexes and handles. It feels like overhead to make the Rust gods happy, and it makes it hard to focus on what's actually going on. Take a look:
#[tauri::command]
fn spawn(param: String, window: Window<Wry>) {
    let window_arc = Arc::new(Mutex::new(window));

    // Spawn bin
    let (mut rx, child) = tauri::api::process::Command::new_sidecar("bin")
        .expect("failed to create binary command")
        .args([param])
        .spawn()
        .expect("Failed to spawn sidecar");
    let child_arc = Arc::new(Mutex::new(child));

    // Handle data from bin
    let window = window_arc.clone();
    let (handle, mut handle_rx) = broadcast::channel(1);
    let handle_arc = Arc::new(Mutex::new(handle));
    tauri::async_runtime::spawn(async move {
        loop {
            tokio::select! {
                _ = handle_rx.recv() => {
                    return;
                }
                Some(event) = rx.recv() => {
                    if let CommandEvent::Stdout(line) = &event {
                        let data = decode_and_xor(line.clone());
                        println!("Data from bin: {}", data);
                        window.lock().unwrap().emit("from_bin", data).expect("failed to emit message");
                    }
                    if let CommandEvent::Stderr(line) = &event {
                        println!("Fatal error bin: {}", &line);
                        window.lock().unwrap().emit("bin_fatal_error", line).expect("failed to emit message");
                    }
                }
            }
        }
    });

    let window = window_arc.clone();
    let window_cc = window.clone();
    window_cc.lock().unwrap().listen("kill_bin", move |event| {
        let handle = handle_arc.clone();
        handle.lock().unwrap().send(true).unwrap();
        window.lock().unwrap().unlisten(event.id());
    });

    // Handle data to bin
    let window = window_arc.clone();
    tauri::async_runtime::spawn(async move {
        let child_clone = child_arc.clone();
        let (handle, handle_rx) = broadcast::channel(1);
        let handle_rx_arc = Arc::new(Mutex::new(handle_rx));
        let handle_arc = Arc::new(Mutex::new(handle));

        let window_c = window.clone();
        window.lock().unwrap().listen("to_bin", move |event| {
            let handle_rx = handle_rx_arc.clone();
            if handle_rx.lock().unwrap().try_recv().is_ok() {
                window_c.lock().unwrap().unlisten(event.id());
                return;
            }
            // Send data to bin
            let payload = String::from(event.payload().unwrap());
            let encrypted = xor_and_encode(payload) + "\n";
            println!("Data to send: {}", event.payload().unwrap());
            child_clone.clone().lock().unwrap().write(encrypted.as_bytes()).expect("could not write to child");
        });

        let window_c = window.clone();
        window.lock().unwrap().listen("kill_bin", move |event| {
            let handle = handle_arc.clone();
            handle.lock().unwrap().send(true).unwrap();
            window_c.lock().unwrap().unlisten(event.id());
        });
    });
}
Are all these Arcs, Mutexes and clones necessary? How would I go about cleaning this up in a Rust idiomatic way, making it easier to see what's going on?
Are all these Arcs, Mutexes and clones necessary?
Probably not; you seem to be way over-cloning -- and re-wrapping concurrent structures -- but you'll have to look at the specific APIs.
For example, assuming broadcast::channel is Tokio's, it's designed for concurrent usage (that's kind of the point), so senders are designed to be clonable (for multiple producers), and you can create as many receivers as you need from the senders.
There's no need to wrap them in an Arc, and there's especially no need whatsoever to protect them behind locks; they're designed to work as-is.
Furthermore, in this case it's even less necessary, because you have just one sender task and one receiver task, and neither is shared. Nor do you need to clone them when you use them. So e.g.
let handle_arc = Arc::new(Mutex::new(handle));
[...]
window_cc.lock().unwrap().listen("kill_bin", move |event| {
    let handle = handle_arc.clone();
    handle.lock().unwrap().send(true).unwrap();
    window.lock().unwrap().unlisten(event.id());
});
I'm pretty sure this can just be:
window_cc.lock().unwrap().listen("kill_bin", move |event| {
    handle.send(true).unwrap();
    window.lock().unwrap().unlisten(event.id());
});
That'll move the handle into the closure, then send on that. Sender is internally mutable, so it needs no locking to send an event (locking would rather defeat the point).
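The same move-into-closure pattern can be shown with std's mpsc, whose Sender is likewise designed to be moved or cloned freely; here a plain closure and thread stand in for the hypothetical listen callback:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel::<bool>();

    // The sender is simply moved into the closure; no Arc<Mutex<_>> needed,
    // because channel senders are built for exactly this kind of handoff.
    let on_kill = move || {
        tx.send(true).unwrap();
    };

    // Run the "event handler" on another thread, as a listen callback would be.
    thread::spawn(on_kill).join().unwrap();

    assert_eq!(rx.recv().unwrap(), true);
    println!("sender moved into closure, no locking required");
}
```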
I'm trying to delete a file tree in parallel with Rust. I'm using jwalk for parallel traversal and deletion of files. The code shown below deletes all the files. In general it's working as expected, but the performance is terrible.
Compared to a Python version I've implemented, it's 5 times slower on Windows! What am I doing wrong that makes the Rust version so slow?
What I've found out so far is that std::fs::remove_file is the reason for the bad performance. I'm wondering whether the implementation of this function has a performance issue, at least on Windows.
I'm using Rust version 1.42.0 with toolchain stable-x86_64-pc-windows-msvc.
let (tx, rx) = mpsc::channel();
WalkBuilder::new(tmpr)
    .hidden(false)
    .standard_filters(false)
    .threads(cmp::min(30, num_cpus::get()))
    .build_parallel()
    .run(move || {
        let tx = tx.clone();
        Box::new(move |dir_entry_result| {
            if let Ok(dir_entry) = dir_entry_result {
                if dir_entry.file_type().unwrap().is_dir() {
                    tx.send(dir_entry.path().to_owned()).unwrap();
                } else if let Err(_) = std::fs::remove_file(&dir_entry.path()) {
                    match fix_permissions(&dir_entry.path()) {
                        Ok(_) => {
                            if let Err(_) = std::fs::remove_file(&dir_entry.path()) {
                                tx.send(dir_entry.path().to_owned()).unwrap();
                            }
                        }
                        Err(_) => {
                            tx.send(dir_entry.path().to_owned()).unwrap();
                        }
                    }
                }
            }
            ignore::WalkState::Continue
        })
    });
let paths: Vec<_> = rx.into_iter().collect();
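The directories collected in paths still need a second pass, since remove_dir only succeeds on empty directories. A std-only sketch of the usual deepest-first cleanup (the rmtree_demo paths are made up for the demonstration):

```rust
use std::cmp::Reverse;
use std::fs;

fn main() -> std::io::Result<()> {
    // Hypothetical leftover directories, as the parallel walk would collect.
    let root = std::env::temp_dir().join("rmtree_demo");
    let nested = root.join("a").join("b");
    fs::create_dir_all(&nested)?;

    let mut paths = vec![root.clone(), root.join("a"), nested];

    // Sort by path depth, descending, so children are removed before parents.
    paths.sort_by_key(|p| Reverse(p.components().count()));
    for p in &paths {
        fs::remove_dir(p)?;
    }

    assert!(!root.exists());
    println!("directories removed deepest-first");
    Ok(())
}
```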
I've found the reason for the slowdown: it was the virus scanner. The Python version was faster because python.exe is ignored by the virus scanner, while the Rust executable is not. After I disabled the virus scanner, the Rust version was lightning fast.
I am trying to read a serial data stream coming from a Bluetooth Low Energy dev board. The firmware registers as a UART emulation service (custom UUID) and sends data via Receive_Characteristic (custom UUID). The serial data being sent is just an incrementing number.
Using rumble, I am able to form a connection to the device and read something, but not the stream. What follows is a minimal working code example:
let manager = Manager::new().unwrap();
let mut adapter = manager
    .adapters()
    .expect("could not list adapters")
    .into_iter()
    .find(|a| a.name == self.adapter_name)
    .expect("could not find adapter by name");

println!("power cycle adapter");
adapter = manager.down(&adapter).unwrap();
adapter = manager.up(&adapter).unwrap();

println!("connect adapter");
let central = adapter.connect().unwrap();
central.start_scan().unwrap();

println!("find desired {:?} peripheral...", &self.device_name);
// keep scanning for 10 s
std::thread::sleep(std::time::Duration::from_secs(1));
central.stop_scan().unwrap();

let peripherals = central.peripherals();
let mdevice = central
    .peripherals()
    .into_iter()
    .find(|perf| {
        perf.properties()
            .local_name
            .iter()
            .any(|name| name.contains(&self.device_name))
    })
    .expect("could not find peripheral by name");

std::thread::sleep(std::time::Duration::from_secs(1));
match mdevice.connect() {
    Ok(d) => {
        println!("mdevice connected");
        d
    }
    Err(err) => {
        eprintln!("error connecting to mdevice: {:?}", err);
        panic!()
    }
};

std::thread::sleep(std::time::Duration::from_secs(1));
println!("discovering characteristics");
for ch in mdevice.discover_characteristics().unwrap().into_iter() {
    println!("found characteristic: {:?}", ch);
}

std::thread::sleep(std::time::Duration::from_secs(1));
println!("get desired characteristic");
let receive_characteristic = mdevice
    .discover_characteristics()
    .unwrap()
    .into_iter()
    .find(|c| RECEIVE_CHARACTERISTIC == c.uuid)
    .expect("could not find given characteristic");

// this is some testing code to print out received data
let (tx, rx) = std::sync::mpsc::channel();
std::thread::spawn(move || loop {
    let data = match mdevice.read(&receive_characteristic) {
        Ok(d) => d,
        Err(err) => {
            println!("received an error {:?}", err);
            Vec::new()
        }
    };
    println!("send : {:02?}", data);
    match tx.send(data) {
        Ok(d) => d,
        Err(e) => println!("error {:?}", e),
    };
});

loop {
    let dd = rx.recv();
    println!("received : {:02?}", dd.unwrap());
}

Ok(())
Using rumble, I am able to connect to the device, but reading the stream behaves oddly. I keep getting the same number in a Vec, and only sometimes a number that is in range of the increment. Am I reading the serial stream correctly?
EDIT: I am currently using the nRF52840-DK development board. The firmware sends out incrementing numbers from 0 to 255, and then repeats the sequence.
Solved it.
The main problem was that I didn't fully understand the GATT profile, and thus the Bluetooth LE protocol. This resource gives a good introduction to the topic.
The solution is to subscribe to data (event) updates after the device has been connected, and to register an event handler that reacts to incoming data. It was that simple.
// ... same code as before, but only the relevant pieces are shown.
mdevice.connect().expect("Could not connect to device");
std::thread::sleep(std::time::Duration::from_secs(1));

let chars = mdevice.discover_characteristics()
    .expect("Discovering characteristics failed");
std::thread::sleep(std::time::Duration::from_secs(1));

let receive_characteristic = chars.clone().into_iter()
    .find(|c| {
        // The constant is just a fixed array
        RECEIVE_CHARACTERISTIC == c.uuid
    })
    .expect("Could not find given characteristic");

// subscribe to the event
mdevice.subscribe(&receive_characteristic)
    .expect("Could not subscribe to characteristic");
mdevice.on_notification(Box::from(move |v: rumble::api::ValueNotification| {
    // do something with the received data
}));
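The shape of this subscribe-and-callback pattern can be mimicked in plain std Rust to show how push-style notifications differ from polling with read(); everything below (Device, notify) is a stand-in for illustration, not rumble's actual API:

```rust
use std::sync::mpsc;

// A minimal stand-in for a notification-based API: the "device" pushes
// values to a registered handler instead of being polled.
struct Device {
    handler: Option<Box<dyn Fn(Vec<u8>) + Send>>,
}

impl Device {
    fn new() -> Self {
        Device { handler: None }
    }

    // Analogous to on_notification(): store the callback.
    fn on_notification(&mut self, f: Box<dyn Fn(Vec<u8>) + Send>) {
        self.handler = Some(f);
    }

    // The transport layer would call this whenever a notification arrives.
    fn notify(&self, value: Vec<u8>) {
        if let Some(h) = &self.handler {
            h(value);
        }
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let mut dev = Device::new();
    dev.on_notification(Box::new(move |v| {
        tx.send(v).unwrap();
    }));

    // Simulate the firmware's incrementing counter arriving as notifications.
    for i in 0u8..3 {
        dev.notify(vec![i]);
    }

    assert_eq!(rx.recv().unwrap(), vec![0]);
    assert_eq!(rx.recv().unwrap(), vec![1]);
    assert_eq!(rx.recv().unwrap(), vec![2]);
    println!("notifications delivered in order");
}
```

Unlike the earlier read() loop, every value is delivered exactly once, which is why the polling version saw repeated numbers.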