I am trying to read a serial data stream coming from a Bluetooth Low Energy devboard. The firmware registers a UART emulation service (custom UUID) and sends data via Receive_Characteristic (custom UUID). The serial data being sent is just an incrementing number.
Using rumble, I am able to form a connection to the device and read something, but not the stream. What follows is a minimal working code example:
let manager = Manager::new().unwrap();
let mut adapter = manager
.adapters()
.expect("could not list adapters")
.into_iter()
.find(|a| a.name == self.adapter_name)
.expect("could not find adapter by name");
println!("power cycle adapter");
adapter = manager.down(&adapter).unwrap();
adapter = manager.up(&adapter).unwrap();
println!("connect adapter");
let central = adapter.connect().unwrap();
central.start_scan().unwrap();
println!(
"find desired {:?} peripheral...",
&self.device_name
);
// give the scan a moment to find devices
std::thread::sleep(std::time::Duration::from_secs(1));
central.stop_scan().unwrap();
let peripherals = central.peripherals();
let mdevice = peripherals
.into_iter()
.find(|perf| {
perf.properties()
.local_name
.iter()
.any(|name| name.contains(&self.device_name))
})
.expect("could not find peripheral by name");
std::thread::sleep(std::time::Duration::from_secs(1));
match mdevice.connect() {
Ok(d) => {
println!("mdevice connected");
d
}
Err(err) => {
eprintln!("error connecting to mdevice: {:?}", err);
panic!()
}
};
std::thread::sleep(std::time::Duration::from_secs(1));
println!("discovering characteristics");
for ch in mdevice.discover_characteristics().unwrap().into_iter() {
println!("found characteristic: {:?}", ch);
}
std::thread::sleep(std::time::Duration::from_secs(1));
println!("get desired characteristic");
let receive_characteristic = mdevice
.discover_characteristics()
.unwrap()
.into_iter()
.find(|c| {
RECEIVE_CHARACTERISTIC == c.uuid
})
.expect("could not find given characteristic");
// this is some testing code to print out received data
let (tx, rx) = std::sync::mpsc::channel();
std::thread::spawn(move || loop {
let data = match mdevice.read(&receive_characteristic) {
    Ok(d) => d,
    Err(err) => {
        println!("received an error {:?}", err);
        Vec::new()
    }
};
println!("send : {:02?}", data);
match tx.send(data) {
Ok(d) => d,
Err(e) => println!("error {:?}", e)
};
});
loop {
let dd = rx.recv();
println!("received : {:02?}", dd.unwrap());
}
Ok(())
Using rumble, I am able to connect to the device, but reading the stream behaves oddly. I keep getting the same number in a Vec, and only sometimes a value that has advanced within the incrementing sequence. Am I reading the serial stream correctly?
EDIT: I am currently using the nRF52840-DK development board. The firmware sends out incrementing numbers from 0 to 255, and then repeats the sequence.
Solved it.
The main problem was that I didn't fully understand the GATT profile, and thus the Bluetooth LE protocol. This resource gives a good introduction to the topic.
The solution is to subscribe to notifications after the device has been connected and to register an event handler that reacts to incoming data. It was that simple.
// ... same code as before, but only the relevant pieces are shown.
mdevice.connect().expect("Could not connect to device");
std::thread::sleep(std::time::Duration::from_secs(1));
let chars = mdevice.discover_characteristics()
.expect("Discovering characteristics failed");
std::thread::sleep(std::time::Duration::from_secs(1));
let receive_characteristic = chars.clone().into_iter()
    .find(|c| {
        // The constant is just a fixed array
        RECEIVE_CHARACTERISTIC == c.uuid
    })
    .expect("Could not find given characteristic");
// subscribe to the event
mdevice.subscribe(&receive_characteristic).expect("Could not subscribe to characteristic");
mdevice.on_notification(Box::new(move |v: rumble::api::ValueNotification| {
    // do something with the received data, e.g. print the payload bytes
    println!("received: {:02?}", v.value);
}));
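One detail worth noting (my own addition, assuming rumble delivers notifications on a background thread): the registered handler only fires while the program is still running, so the main thread has to be kept alive, for example:
// Keep the process alive so notification callbacks keep arriving.
loop {
    std::thread::sleep(std::time::Duration::from_secs(1));
}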
I have a very simple GStreamer pipeline that looks like this:
uridecodebin -> compositor -> videoconvert -> autovideosink
I want to be able, at any time, to add a new uridecodebin to the compositor.
If I add the second source before the pipeline is running, it works fine, but if I delay the addition of the second source, the pipeline gets stuck and I get tons of QoS events telling me frames are being dropped.
This issue does not occur if I only read non-live sources, but it happens with my RTMP streams, or if I mix live and non-live sources.
When sync=false is set on the sink, RTMP streams are played, but it does not work with non-live sources.
My assumption is that I am missing a step with time/clock/latency, but I don't know what.
Here is the code (in Rust) used to add a new source:
fn connect_pad_added(src_pad: &gst::Pad, src: &gst::Element, compositor: &gst::Element) {
println!("Received new pad {} from {}", src_pad.name(), src.name());
let new_pad_caps = src_pad
.current_caps()
.expect("Failed to get caps of new pad.");
let new_pad_struct = new_pad_caps
.structure(0)
.expect("Failed to get first structure of caps.");
let new_pad_type = new_pad_struct.name();
let is_video = new_pad_type.starts_with("video/x-raw");
if !is_video {
println!(
"It has type {} which is not raw video. Ignoring.",
new_pad_type
);
return;
}
println!("Created template");
let sink_pad = compositor
.request_pad_simple("sink_%u")
.expect("Could not get sink pad from compositor");
println!("Got pad");
if sink_pad.is_linked() {
println!("We are already linked. Ignoring.");
return;
}
if sink_pad.name() == "sink_0" {
sink_pad.set_property("width", 1920i32);
sink_pad.set_property("height", 1080i32);
} else {
sink_pad.set_property("alpha", 0.8f64);
}
let res = src_pad.link(&sink_pad);
if res.is_err() {
println!("Type is {} but link failed.", new_pad_type);
} else {
println!("Link succeeded (type {}).", new_pad_type);
}
}
fn add_new_element(pipeline: &gst::Pipeline, uri: &str) {
println!("Adding new element");
let source = gst::ElementFactory::make("uridecodebin")
.property("uri", uri)
.build()
.unwrap();
let compositor = pipeline.by_name("compositor").unwrap();
pipeline.add(&source).unwrap();
source.connect_pad_added(move |src, src_pad| {
println!("Received new pad {} from {}", src_pad.name(), src.name());
connect_pad_added(src_pad, src, &compositor);
});
source
.set_state(gst::State::Paused)
.expect("Unable to set the uridecodebin to the `Paused` state");
println!("Added new element");
}
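For comparison, here is a hedged sketch (not a confirmed fix) of a commonly tried variation: instead of forcing the new uridecodebin to Paused, let it adopt the running pipeline's state, and with it the clock and base time, via sync_state_with_parent().
// Sketch only: same as add_new_element, but the new source follows the
// pipeline's current state instead of being pinned to Paused.
fn add_new_element_synced(pipeline: &gst::Pipeline, uri: &str) {
    let source = gst::ElementFactory::make("uridecodebin")
        .property("uri", uri)
        .build()
        .unwrap();
    let compositor = pipeline.by_name("compositor").unwrap();
    pipeline.add(&source).unwrap();
    source.connect_pad_added(move |src, src_pad| {
        connect_pad_added(src_pad, src, &compositor);
    });
    // Adopt the parent pipeline's state (Playing if it is already live).
    source
        .sync_state_with_parent()
        .expect("Unable to sync the uridecodebin state with the pipeline");
}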
I'm trying to receive a message (server side) on the network using TcpListener in Rust.
Here's the server code:
// Instantiate TcpListener
let server = TcpListener::bind("0.0.0.0:52001").expect("Could not bind");
match server.accept() { // Waiting new connection from a client
Ok((stream, addr)) => {
println!("New connection from {}", addr);
// Wrapping the stream in a BufReader
let mut reader = BufReader::new(&stream);
let mut message = String::new();
loop { // Entering the loop when a client is connected
reader.read_line(&mut message).expect("Cannot read new line");
println!("message received: {}", message);
message.clear();
}
}
Err(e) => {
println!("Fail: {:?}", e)
}
}
Here's my Kotlin client:
Socket("192.168.134.138", 52001).use { client ->
client.getOutputStream().use { out ->
out.write("test\n".toByteArray())
}
}
while(true) {
Thread.sleep(15_000)
}
The client sends the following line: test\n; it ends with a line break for the server to read.
The intended behaviour is that the server prints message received: test and then waits at the read_line() call for the next line.
It partly works, because I do receive the test, but read_line() does not seem to block or wait for another message, so it spins in an infinite loop. In the terminal I'm getting:
New connection from 192.168.134.123:7869
message received: test
message received:
message received:
message received:
message received:
Process finished with exit code 130 (interrupted by signal 2: SIGINT)
And I have to stop the program forcefully.
Any ideas?
To detect the end of the stream, you need to check if read_line() returned Ok(0):
From the docs:
If this function returns Ok(0), the stream has reached EOF.
loop { // Entering the loop when a client is connected
let mut message = String::new();
if reader.read_line(&mut message).expect("Cannot read new line") == 0 {
break;
}
println!("message received: {}", message);
}
Another option is to use the BufReader::lines() iterator:
for line in reader.lines() {
let message = line.expect("Cannot read new line");
println!("message received: {}", message);
}
This approach is a bit inefficient, as it allocates a new String on every iteration. For best performance, you should allocate a single String and reuse it, as @BlackBeans pointed out in a comment:
let mut message = String::new();
loop { // Entering the loop when a client is connected
message.clear();
if reader.read_line(&mut message).expect("Cannot read new line") == 0 {
break;
}
println!("message received: {}", message);
}
Hello everyone,
I am using Rust to write a program that watches a directory:
1. Start a watcher
2. Loop: when an event hits the case, do something
Because I match on Ok(DebouncedEvent::Create(p)), I first remove the directory (which is being watched) and then recreate it, but the watcher fails to keep watching.
I thought the filesystem operations might not be atomic, so I sleep for 3 seconds, but it fails again.
Then I tried deleting only the files, not the directory itself, but it fails again.
Start a watcher:
// Create a channel to receive the events.
let (tx, rx) = channel();
// Create a watcher
let mut watcher: RecommendedWatcher = try!(Watcher::new(tx.clone(), Duration::from_secs(policy_interval as u64)));
// Path to be monitored
try!(watcher.watch(metrics_repo_path.as_path(), RecursiveMode::Recursive));
My loop:
loop
{ // Step 2: Start monitoring metrics repository
    match rx.recv()
    {
        Ok(DebouncedEvent::Create(p)) =>
        {
            eprintln!("OK OK, loop start");
            if let Some(ext) = p.extension()
            {
                if num_log_files == num_instances
                { // We have all logs for this epoch
                    remove_dir_contents(metrics_repo_path.as_path()).unwrap();
                    thread::sleep(Duration::from_millis(3000));
                    // // Remove old rates files
                    // // TODO: remove only files in the repo
                    // let _ = Command::new("rm")
                    //     .arg("-r")
                    //     .arg(metrics_repo_path.to_str().unwrap())
                    //     .output()
                    //     .expect("Failed to remove log files.");
                    // // Create a new rates folder
                    // let _ = Command::new("mkdir")
                    //     .arg(metrics_repo_path.to_str().unwrap())
                    //     .output()
                    //     .expect("Failed to create new rates folder.");
                }
                else
                { // No re-configuration was issued
                    epochs_since_reconfiguration += 1;
                }
                // Clear epoch information
                epoch_files.remove(epoch);
            }
        },
        Err(e) => panic!("Monitoring error: {:?}", e),
        _ => {}
    }
}
The helper function:
fn remove_dir_contents<P: AsRef<Path>>(path: P) -> io::Result<()> {
for entry in fs::read_dir(path)? {
fs::remove_file(entry?.path())?;
}
Ok(())
}
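A side note on this helper, sketched as an assumption rather than taken from the question: under RecursiveMode::Recursive the watched tree may contain subdirectories, and fs::remove_file returns an error when the entry is a directory, so a more defensive variant would branch on the entry type.
fn remove_dir_contents<P: AsRef<Path>>(path: P) -> io::Result<()> {
    for entry in fs::read_dir(path)? {
        let path = entry?.path();
        if path.is_dir() {
            // Remove nested directories and their contents.
            fs::remove_dir_all(&path)?;
        } else {
            fs::remove_file(&path)?;
        }
    }
    Ok(())
}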
I also tried just restarting the watch, but that fails too, like this:
remove_dir_contents(metrics_repo_path.as_path()).unwrap();
thread::sleep(Duration::from_millis(3000));
try!(watcher.watch(metrics_repo_path.as_path(), RecursiveMode::Recursive));
In my opinion my code should reach the
Ok(DebouncedEvent::Create(p))
arm, but it fails to get there even though a new file was created in the watched directory.
Any help is greatly appreciated.
I'm making a project on a Raspberry Pi 4 Model B based on Rust, using the Blurz Rust library.
So this is my code:
let sessionBLE = BluetoothSession::create_session(Option::None).unwrap();
let adapter: BluetoothAdapter = BluetoothAdapter::init(&sessionBLE).unwrap();
let discoverySessionBLE : BluetoothDiscoverySession = BluetoothDiscoverySession::create_session(&sessionBLE, adapter.get_id()).unwrap();
println!("IS DISCOVERABLE {}", adapter.is_discoverable().unwrap());
println!("IS POWERED {}", adapter.is_powered().unwrap());
println!("IS DISCOVERING {}", adapter.is_discovering().unwrap());
adapter.set_discoverable(true);
println!("start discovery");
discoverySessionBLE.start_discovery().unwrap();
thread::sleep(Duration::from_secs(2));
println!("<------------------------------------------>");
println!("IS DISCOVERABLE {}", adapter.is_discoverable().unwrap());
println!("IS POWERED {}", adapter.is_powered().unwrap());
println!("IS DISCOVERING {}", adapter.is_discovering().unwrap());
println!("device list number (count) {}", adapter.get_device_list().unwrap().len());
println!("device list {:?}", adapter.get_device_list().unwrap());
let devices = adapter.get_device_list().unwrap();
println!("Test array.: {}", adapter.get_device_list().unwrap()[1]);
let uuid = adapter.get_uuids().unwrap()[1].to_string();
println!("Connecting..");
let deviceBle : BluetoothDevice = adapter.get_first_device().unwrap();
deviceBle.connect(10000);
thread::sleep(Duration::from_secs(2));
println!("device BluetoothDevice {}", deviceBle.get_address().unwrap());
println!("device RSSI {}", deviceBle.get_rssi().unwrap());
println!("device CONNECETED {}", deviceBle.is_connected().unwrap());
println!("device GATT {:?}", deviceBle.get_gatt_services().unwrap());
Everything works, but there's a problem: I don't understand how to connect to a specific device. This is the list found after the scan:
["/org/bluez/hci0/dev_D4_CA_6E_F0_9E_21", "/org/bluez/hci0/dev_00_A0_50_FC_85_B4", "/org/bluez/hci0/dev_7D_A6_E2_28_E2_21", "/org/bluez/hci0/dev_7D_26_EF_98_8A_F4", "/org/bluez/hci0/dev_42_14_C3_01_25_25", "/org/bluez/hci0/dev_75_0C_80_CD_04_1E", "/org/bluez/hci0/dev_52_71_26_A1_52_AA", "/org/bluez/hci0/dev_40_C3_D7_65_CC_BF", "/org/bluez/hci0/dev_72_95_B8_34_82_A3", "/org/bluez/hci0/dev_6B_99_43_81_D0_31", "/org/bluez/hci0/dev_45_66_4A_20_7C_0B", "/org/bluez/hci0/dev_46_99_9C_AA_BD_36", "/org/bluez/hci0/dev_CC_52_AF_CB_49_12", "/org/bluez/hci0/dev_C4_22_C6_95_9C_1A", "/org/bluez/hci0/dev_00_A0_50_FC_18_46", "/org/bluez/hci0/dev_00_A0_50_FD_15_F4"]
I can't connect to a specific device. I need to connect to the third device, "7D_A6_E2_28_E2_21", but from the documentation I don't understand how; I only see a single method:
let deviceBle : BluetoothDevice = adapter.get_first_device().unwrap();
But it gets only the first device. What can I do to get the third device?
It is pretty straightforward with BluetoothDevice::new:
let device_3 = BluetoothDevice::new(&sessionBLE, devices[2].clone());
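If hard-coding the index feels fragile, here is a small sketch (the address string is taken from the scan output above; the matching logic is my own assumption, not part of blurz) that selects the device by the address embedded in its object path:
// Pick the device whose BlueZ object path ends with the target address.
let target = "7D_A6_E2_28_E2_21";
let path = devices
    .iter()
    .find(|p| p.ends_with(target))
    .expect("target device not found")
    .clone();
let device_3 = BluetoothDevice::new(&sessionBLE, path);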
Here's an example, but what should I wait on to decide when it is done? Is there a better way to wait for the channel to be empty and all the threads to have completed? The full example is at http://github.com/posix4e/rust_webcrawl
loop {
let n_active_threads = running_threads.compare_and_swap(0, 0, Ordering::SeqCst);
match rx.try_recv() {
Ok(new_site) => {
let new_site_copy = new_site.clone();
let tx_copy = tx.clone();
counter += 1;
print!("{} ", counter);
if !found_urls.contains(&new_site) {
found_urls.insert(new_site);
running_threads.fetch_add(1, Ordering::SeqCst);
let my_running_threads = running_threads.clone();
pool.execute(move || {
for new_url in get_websites_helper(new_site_copy) {
if new_url.starts_with("http") {
tx_copy.send(new_url).unwrap();
}
}
my_running_threads.fetch_sub(1, Ordering::SeqCst);
});
}
}
Err(TryRecvError::Empty) if n_active_threads == 0 => break,
Err(TryRecvError::Empty) => {
writeln!(&mut std::io::stderr(),
"Channel is empty, but there are {} threads running",
n_active_threads);
thread::sleep_ms(10);
},
Err(TryRecvError::Disconnected) => unreachable!(),
}
}
This is actually a very complicated question, one with a great potential for race conditions! As I understand it, you:
Have an unbounded queue
Have a set of workers that operate on the queue items
The workers can put an unknown amount of items back into the queue
Want to know when everything is "done"
One obvious issue is that it may never be done. If every worker puts one item back into the queue, you've got an infinite loop.
That being said, I feel like the solution is to track
How many items are queued
How many items are in progress
When both of these values are zero, then you are done. Easier said than done...
use std::sync::Arc;
use std::sync::atomic::{AtomicUsize,Ordering};
use std::sync::mpsc::{channel,TryRecvError};
use std::thread;
fn main() {
let running_threads = Arc::new(AtomicUsize::new(0));
let (tx, rx) = channel();
// We prime the channel with the first bit of work
tx.send(10).unwrap();
loop {
// In an attempt to avoid a race condition, we fetch the
// active thread count before checking the channel. Otherwise,
// we might read nothing from the channel, and *then* a thread
// finishes and added something to the queue.
let n_active_threads = running_threads.compare_and_swap(0, 0, Ordering::SeqCst);
match rx.try_recv() {
Ok(id) => {
// I lie a bit and increment the counter to start
// with. If we let the thread increment this, we might
// read from the channel before the thread ever has a
// chance to run!
running_threads.fetch_add(1, Ordering::SeqCst);
let my_tx = tx.clone();
let my_running_threads = running_threads.clone();
// You could use a threadpool, but I'm spawning
// threads to only rely on stdlib.
thread::spawn(move || {
println!("Working on {}", id);
// Simulate work
thread::sleep_ms(100);
if id != 0 {
my_tx.send(id - 1).unwrap();
// Send multiple sometimes
if id % 3 == 0 && id > 2 {
my_tx.send(id - 2).unwrap();
}
}
my_running_threads.fetch_sub(1, Ordering::SeqCst);
});
},
Err(TryRecvError::Empty) if n_active_threads == 0 => break,
Err(TryRecvError::Empty) => {
println!("Channel is empty, but there are {} threads running", n_active_threads);
// We sleep a bit here, to avoid quickly spinning
// through an empty channel while the worker threads
// work.
thread::sleep_ms(1);
},
Err(TryRecvError::Disconnected) => unreachable!(),
}
}
}
I make no guarantees that this implementation is perfect (I probably should guarantee that it's broken, because threading is hard). One big caveat is that I don't intimately know the meanings of all the variants of Ordering, so I chose the one that looked to give the strongest guarantees.
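A small aside of my own, not part of the original answer: compare_and_swap(0, 0, ...) is only used here as a read, and on current Rust (where compare_and_swap is deprecated) that read can be written as a plain load:
// Equivalent read of the counter without the compare-and-swap.
let n_active_threads = running_threads.load(Ordering::SeqCst);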