Too many connections from pinging in Rust?

I have code that checks the network connection, like this:
use std::process::Command;
use std::thread;
use std::time::Duration;

fn network_connection() {
    loop {
        let output = Command::new("ping")
            .arg("-c")
            .arg("1")
            .arg("google.com")
            .output()
            .expect("[ ERR ] Failed to execute network check process");
        let network_status = output.status;
        if !network_status.success() {
            Command::new("init")
                .arg("6");
        }
        thread::sleep(Duration::from_millis(60000));
    }
}
fn main() {
    thread::spawn(|| {
        network_connection();
    });
    ...
This should ping Google every 60 seconds, but when I look at the number of requests sent to the router, it's around 200 requests per 10 minutes.
Is this spawning more than one thread?
main() runs only once.
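One way to check is to timestamp every iteration. Below is a minimal sketch (standard library only, keeping the ping target and 60-second period from the question). Note that Command::new("init").arg("6"); on its own only builds the command and never runs it; the sketch adds the .status() call that actually executes it:

use std::process::Command;
use std::thread;
use std::time::{Duration, SystemTime, UNIX_EPOCH};

fn network_connection() {
    loop {
        // Log a timestamp per iteration so the real ping frequency is visible.
        let now = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .unwrap()
            .as_secs();
        let output = Command::new("ping")
            .args(["-c", "1", "google.com"])
            .output()
            .expect("[ ERR ] Failed to execute network check process");
        println!("[{}] ping ok: {}", now, output.status.success());
        if !output.status.success() {
            // .status() is what actually runs the command; without it the
            // builder is constructed and immediately dropped.
            let _ = Command::new("init").arg("6").status();
        }
        thread::sleep(Duration::from_secs(60));
    }
}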

Related

Rust: How to move a child (CommandChild) to a thread to be killed on window exit

I am trying to run a Streamlit server behind the scenes of a Tauri application.
The Streamlit server is packaged with PyInstaller into a single binary file and works as expected when standalone.
I have a Rust main.rs file that needs to run the Streamlit binary using a Command (in this case it's a tauri::api::process::Command using new_sidecar).
It spawns the server, but when the app closes, my Streamlit server is not disposed of.
On window exit, I want to send a kill command to kill the server instance in the child.
Here is an example of my code:
#![cfg_attr(
    all(not(debug_assertions), target_os = "windows"),
    windows_subsystem = "windows"
)]
use async_std::task;
use std::sync::mpsc::sync_channel;
use std::thread;
use std::time::Duration;
use tauri::api::process::{Command, CommandEvent};
use tauri::{Manager, WindowEvent};

fn main() {
    let (tx_kill, rx_kill) = sync_channel(1);
    tauri::Builder::default()
        .setup(|app| {
            println!("App Setup Start");
            let t = Command::new_sidecar("streamlit").expect("failed to create sidecar");
            let (mut rx, child) = t.spawn().expect("Failed to spawn server");
            let splashscreen_window = app.get_window("splashscreen").unwrap();
            let main_window = app.get_window("main").unwrap();
            // Listen for server port then refresh main window
            tauri::async_runtime::spawn(async move {
                while let Some(event) = rx.recv().await {
                    if let CommandEvent::Stdout(output) = event {
                        if output.contains("Network URL:") {
                            let tokens: Vec<&str> = output.split(":").collect();
                            let port = tokens.last().unwrap();
                            println!("Connect to port {}", port);
                            main_window.eval(&format!(
                                "window.location.replace('http://localhost:{}')",
                                port
                            ));
                            task::sleep(Duration::from_secs(2)).await;
                            splashscreen_window.close().unwrap();
                            main_window.show().unwrap();
                        }
                    }
                }
            });
            // Listen for kill command
            thread::spawn(move || loop {
                let event = rx_kill.recv();
                if event.unwrap() == -1 {
                    child.kill().expect("Failed to close API");
                }
            });
            Ok(())
        })
        .on_window_event(move |event| match event.event() {
            WindowEvent::Destroyed => {
                println!("Window destroyed");
                tx_kill.send(-1).expect("Failed to send close signal");
            }
            _ => {}
        })
        .run(tauri::generate_context!())
        .expect("error while running application");
}
But I am getting an error when I try to kill the child instance:
thread::spawn(move || loop {
    let event = rx_kill.recv();
    if event.unwrap() == -1 {
        child.kill().expect("Failed to close API");
    }
});
use of moved value: `child`
move occurs because `child` has type `CommandChild`, which does not implement the `Copy` trait
Any ideas?
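One possible fix, sketched under the assumption the error message points at: CommandChild::kill takes self by value (and CommandChild is not Copy), so calling it inside a loop would move child on the first iteration. Holding the child in an Option and take()-ing it out moves it exactly once:

// Hypothetical rework of the kill-listener thread from the question;
// `child` and `rx_kill` are the values created in setup() above.
thread::spawn(move || {
    // Wrap the child so it can be moved out exactly once.
    let mut child = Some(child);
    while let Ok(event) = rx_kill.recv() {
        if event == -1 {
            if let Some(c) = child.take() {
                c.kill().expect("Failed to close API");
            }
            break; // the server is gone; stop listening
        }
    }
});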

Multiple WebSocket servers in Rust

I referred to this and also tried the tungstenite library, but I was able to run only one server at a time; it captured the whole thread.
I tried running multiple servers on different threads, but they never listened for anything and the program just exited.
Is there any way I can run multiple WebSocket servers on different ports, and create and destroy servers at runtime?
Edit: If I run one server on the main thread and another on a different thread, it works. It looks like I'd have to keep the main thread busy somehow... but is there a better way?
Here's some example code. It uses:
use std::net::TcpListener;
use std::thread::spawn;
use tungstenite::accept;
This is the normal code that blocks the main thread:
let server = TcpListener::bind("127.0.0.1:9002").expect("err: ");
for stream in server.incoming() {
    spawn(move || {
        let mut websocket = accept(stream.unwrap()).unwrap();
        loop {
            let msg = websocket.read_message().unwrap();
            println!("{}", msg);
            // We do not want to send back ping/pong messages.
            if msg.is_binary() || msg.is_text() {
                websocket.write_message(msg).unwrap();
            }
        }
    });
}
Here's the code with a thread:
spawn(|| {
    let server = TcpListener::bind("127.0.0.1:9001").expect("err: ");
    for stream in server.incoming() {
        spawn(move || {
            let mut websocket = accept(stream.unwrap()).unwrap();
            loop {
                let msg = websocket.read_message().unwrap();
                println!("{}", msg);
                // We do not want to send back ping/pong messages.
                if msg.is_binary() || msg.is_text() {
                    websocket.write_message(msg).unwrap();
                }
            }
        });
    }
});
But the above code needs the main thread to keep running somehow. I'm indeed able to run multiple servers on different threads, but I need something to occupy the main thread.
Rust programs terminate when the end of main() is reached. What you need to do is wait until your secondary threads have finished.
std::thread::spawn returns a JoinHandle, which has a join method that does exactly that: it waits (blocks) until the thread the handle refers to finishes, and returns an error if the thread panicked.
So, to keep your program alive as long as any threads are running, you need to collect all of these handles, and join() them one by one. Unlike a busy-loop, this will not waste CPU resources unnecessarily.
use std::net::TcpListener;
use std::thread::spawn;
use tungstenite::accept;

fn main() {
    let mut handles = vec![];
    // Spawn 3 identical servers on ports 9001, 9002, 9003
    for i in 1..=3 {
        let handle = spawn(move || {
            let server = TcpListener::bind(("127.0.0.1", 9000 + i)).expect("err: ");
            for stream in server.incoming() {
                spawn(move || {
                    let mut websocket = accept(stream.unwrap()).unwrap();
                    loop {
                        let msg = websocket.read_message().unwrap();
                        println!("{}", msg);
                        // We do not want to send back ping/pong messages.
                        if msg.is_binary() || msg.is_text() {
                            websocket.write_message(msg).unwrap();
                        }
                    }
                });
            }
        });
        handles.push(handle);
    }
    // Wait for each thread to finish before exiting
    for handle in handles {
        if let Err(e) = handle.join() {
            eprintln!("{:?}", e)
        }
    }
}
When you do all the work in a thread (or threads) and the main thread has nothing to do, it is usually set to wait on (join) those threads.
This has the additional advantage that if your secondary thread finishes or panics, your program will also finish. Or you can wrap the whole create-thread/join-thread sequence in a loop and make it more resilient:
fn main() {
    loop {
        let th = std::thread::spawn(|| {
            // Do the real work here
            std::thread::sleep(std::time::Duration::from_secs(1));
            panic!("oh!");
        });
        if let Err(e) = th.join() {
            eprintln!("Thread panic: {:?}", e)
        }
    }
}
Link to playground; I've changed the loop into a for _ in ..3 because the playground does not like infinite loops.

How to cheaply send a delayed message?

My requirement is very simple, and a very common one in many programs: send a specified message to my channel after a specified time.
I've checked tokio for topics related to delay, interval, or timeout, but none of them seem straightforward to implement.
What I've come up with so far is to spawn an asynchronous task, then wait or sleep for a certain amount of time, and finally send the message.
But, obviously, spawning an asynchronous task is a relatively heavy operation. Is there a better solution?
async fn my_handler(sender: mpsc::Sender<i32>, dur: Duration) {
    tokio::spawn(async move {
        time::sleep(dur).await;
        sender.send(0).await;
    });
}
You could try adding a second channel and a continuously running task that buffers messages until the time they are to be received. Implementing this is more involved than it sounds; I hope I'm handling cancellations right here:
use std::cmp::Reverse;
use tokio::sync::mpsc::{self, Receiver, Sender};
use tokio::time::{sleep_until, timeout_at, Instant};

fn make_timed_channel<T: Ord + Send + Sync + 'static>() -> (Sender<(Instant, T)>, Receiver<T>) {
    // Ord is an unnecessary requirement arising from me stuffing both the Instant and the T into the binary heap.
    // You could drop this requirement by using the priority_queue crate instead.
    let (sender1, receiver1) = mpsc::channel::<(Instant, T)>(42);
    let (sender2, receiver2) = mpsc::channel::<T>(42);
    let mut receiver1 = Some(receiver1);
    tokio::spawn(async move {
        let mut buf = std::collections::BinaryHeap::<Reverse<(Instant, T)>>::new();
        loop {
            // Pretend we're a bounded channel, or exit if the upstream closed
            if buf.len() >= 42 || receiver1.is_none() {
                match buf.pop() {
                    Some(Reverse((time, element))) => {
                        sleep_until(time).await;
                        if sender2.send(element).await.is_err() {
                            break;
                        }
                    }
                    None => break,
                }
            }
            // We have some deadline to send a message at
            else if let Some(Reverse((then, _))) = buf.peek() {
                if let Ok(recv) = timeout_at(*then, receiver1.as_mut().unwrap().recv()).await {
                    match recv {
                        Some(recv) => buf.push(Reverse(recv)),
                        None => receiver1 = None,
                    }
                } else {
                    if sender2.send(buf.pop().unwrap().0 .1).await.is_err() {
                        break;
                    }
                }
            }
            // We're empty, wait around
            else {
                match receiver1.as_mut().unwrap().recv().await {
                    Some(recv) => buf.push(Reverse(recv)),
                    None => receiver1 = None,
                }
            }
        }
    });
    (sender1, receiver2)
}
Playground
Whether this is more efficient than spawning tasks, you'd have to benchmark. (I doubt it; IIRC Tokio has a much fancier solution than a BinaryHeap for waking up at the next timeout.)
One optimization you could make, if you don't need a Receiver<T> but just something that .poll().await can be called on: you could drop the second channel and maintain the BinaryHeap inside a custom receiver.
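For illustration, a hypothetical usage of make_timed_channel as defined above (the one-second deadline and the i32 payload are made up):

use std::time::Duration;
use tokio::time::Instant;

#[tokio::main]
async fn main() {
    let (tx, mut rx) = make_timed_channel::<i32>();
    // Schedule the value 0 for delivery roughly one second from now.
    tx.send((Instant::now() + Duration::from_secs(1), 0)).await.unwrap();
    // Waits (asynchronously) until the deadline passes.
    println!("{:?}", rx.recv().await); // Some(0)
}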

How can I restart the watcher when it has failed?

Hi everyone,
I am using Rust to write a program that watches a directory:
1. Start a watcher.
2. Loop: when an event hits the matching case, do something.
Because I use Ok(DebouncedEvent::Create(p)), I first remove the directory (which is being watched) and then recreate it, but the watcher fails to keep watching.
I thought the filesystem operations might not be atomic, so I sleep for 3 seconds, but it fails again.
Then I tried deleting the files but not the directory itself, but it fails again.
Start a watcher:
// Create a channel to receive the events.
let (tx, rx) = channel();
// Create a watcher
let mut watcher: RecommendedWatcher =
    try!(Watcher::new(tx.clone(), Duration::from_secs(policy_interval as u64)));
// Path to be monitored
try!(watcher.watch(metrics_repo_path.as_path(), RecursiveMode::Recursive));
My loop:
loop {
    // Step 2: Start monitoring metrics repository
    match rx.recv() {
        Ok(DebouncedEvent::Create(p)) => {
            eprintln!("OK OK, loop start");
            if let Some(ext) = p.extension() {
                if num_log_files == num_instances {
                    // We have all logs for this epoch
                    remove_dir_contents(metrics_repo_path.to_str()).unwrap();
                    thread::sleep(Duration::from_millis(3000));
                    // // Remove old rates files
                    // // TODO: remove only files in the repo
                    // let _ = Command::new("rm")
                    //     .arg("-r")
                    //     .arg(metrics_repo_path.to_str().unwrap())
                    //     .output()
                    //     .expect("Failed to remove log files.");
                    // // Create a new rates folder
                    // let _ = Command::new("mkdir")
                    //     .arg(metrics_repo_path.to_str().unwrap())
                    //     .output()
                    //     .expect("Failed to create new rates folder.");
                } else {
                    // No re-configuration was issued
                    epochs_since_reconfiguration += 1;
                }
                // Clear epoch information
                epoch_files.remove(epoch);
            }
        }
        Err(e) => panic!("Monitoring error: {:?}", e),
        _ => {}
    }
}
The fn:
fn remove_dir_contents<P: AsRef<Path>>(path: P) -> io::Result<()> {
    for entry in fs::read_dir(path)? {
        fs::remove_file(entry?.path())?;
    }
    Ok(())
}
I also tried just restarting the watcher, but that fails too, like this:
remove_dir_contents(metrics_repo_path.to_str()).unwrap();
thread::sleep(Duration::from_millis(3000));
try!(watcher.watch(metrics_repo_path.as_path(), RecursiveMode::Recursive));
In my opinion, my code should reach the condition
Ok(DebouncedEvent::Create(p))
but it fails to reach it, even though a new file was created in the watched directory.
Any help is greatly appreciated.
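A possible direction, assuming the notify 4.x debounced API used above: when the watched directory itself is deleted, the watch dies with it, so after recreating the directory the path has to be re-registered (or the whole watcher rebuilt) rather than relying on the old watch. A hypothetical helper:

use std::path::Path;
use notify::{RecommendedWatcher, RecursiveMode, Watcher};

// Re-register a watch after the watched directory was removed and recreated.
fn rewatch(watcher: &mut RecommendedWatcher, path: &Path) -> notify::Result<()> {
    // The old watch may already be gone along with the directory,
    // so ignore errors from unwatch.
    let _ = watcher.unwatch(path);
    watcher.watch(path, RecursiveMode::Recursive)
}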

Ping in reverse order

I run the ping-pong example in two different console windows, as described in that tutorial.
use libp2p::futures::StreamExt;
use libp2p::ping::{Ping, PingConfig};
use libp2p::swarm::{Swarm, SwarmEvent};
use libp2p::{identity, Multiaddr, PeerId};
use std::error::Error;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let local_key = identity::Keypair::generate_ed25519();
    let local_peer_id = PeerId::from(local_key.public());
    println!("Local peer id: {:?}", local_peer_id);
    let transport = libp2p::development_transport(local_key).await?;
    let behaviour = Ping::new(PingConfig::new().with_keep_alive(true));
    let mut swarm = Swarm::new(transport, behaviour, local_peer_id);
    swarm.listen_on("/ip4/0.0.0.0/tcp/0".parse()?)?;
    if let Some(addr) = std::env::args().nth(1) {
        let remote: Multiaddr = addr.parse()?;
        swarm.dial(remote)?;
        println!("Dialed {}", addr)
    }
    loop {
        match swarm.select_next_some().await {
            SwarmEvent::NewListenAddr { address, .. } => println!("Listening on {:?}", address),
            SwarmEvent::Behaviour(event) => println!("{:?}", event),
            _ => {}
        }
    }
}
console 1:
cargo run
Local peer id: PeerId("12D3KooWGi2AZgQL5sXDx4WYcPbtpXQDJJf2JTzEfHMKoYARohUN")
Listening on "/ip4/127.0.0.1/tcp/53912"
Listening on "/ip4/192.168.100.4/tcp/53912"
Event { peer: PeerId("12D3KooWQTRrShA41Kh7qoQYUc3G63uDZcXPSFw68AmzZqmaaHSh"), result: Ok(Pong) }
Event { peer: PeerId("12D3KooWQTRrShA41Kh7qoQYUc3G63uDZcXPSFw68AmzZqmaaHSh"), result: Ok(Ping { rtt: 1.112947ms }) }
Event { peer: PeerId("12D3KooWQTRrShA41Kh7qoQYUc3G63uDZcXPSFw68AmzZqmaaHSh"), result: Ok(Pong) }
Event { peer: PeerId("12D3KooWQTRrShA41Kh7qoQYUc3G63uDZcXPSFw68AmzZqmaaHSh"), result: Ok(Ping { rtt: 1.682508ms }) }
console 2:
cargo run -- /ip4/192.168.100.4/tcp/53912
Local peer id: PeerId("12D3KooWQTRrShA41Kh7qoQYUc3G63uDZcXPSFw68AmzZqmaaHSh")
Dialed /ip4/192.168.100.4/tcp/53912
Listening on "/ip4/127.0.0.1/tcp/53913"
Listening on "/ip4/192.168.100.4/tcp/53913"
Event { peer: PeerId("12D3KooWGi2AZgQL5sXDx4WYcPbtpXQDJJf2JTzEfHMKoYARohUN"), result: Ok(Pong) }
Event { peer: PeerId("12D3KooWGi2AZgQL5sXDx4WYcPbtpXQDJJf2JTzEfHMKoYARohUN"), result: Ok(Ping { rtt: 869.493µs }) }
Event { peer: PeerId("12D3KooWGi2AZgQL5sXDx4WYcPbtpXQDJJf2JTzEfHMKoYARohUN"), result: Ok(Pong) }
Event { peer: PeerId("12D3KooWGi2AZgQL5sXDx4WYcPbtpXQDJJf2JTzEfHMKoYARohUN"), result: Ok(Ping { rtt: 2.109459ms }) }
What confuses me is the order of the ping and pong messages. This example creates two nodes which send ping-pong messages to each other. But why do they start with a Pong message every time I run these nodes? I expected first a Ping and then a Pong event. Am I missing something?
That seems to simply be down to the library's semantics: if you go look at the documentation for PingSuccess (the "success" side of a ping operation), the two variants are:
Pong: received a ping and sent back a pong.
Ping: sent a ping and received back a pong. Includes the round-trip time.
So the events are not symmetric: while "pong" means "I received a ping and sent a pong", "ping" means "I received a response to my ping", which is why it can report the RTT. And so the timeline is:
ping ->
<- pong
where sending the pong triggers PingSuccess::Pong on the answering peer, and receiving it triggers PingSuccess::Ping on the peer that sent the original ping.
Thus the order you see here.
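To make the mapping concrete, here is a sketch of how the event arm in the loop above could log the two cases separately. It assumes this libp2p version exposes PingEvent with a result field of type Result<PingSuccess, PingFailure>, which is what the logged Ok(Pong)/Ok(Ping { rtt }) output suggests:

use libp2p::ping::{PingEvent, PingSuccess};

// Hypothetical helper for the SwarmEvent::Behaviour(event) arm above.
fn log_ping_event(event: PingEvent) {
    match event.result {
        // This node received a ping and answered it with a pong.
        Ok(PingSuccess::Pong) => println!("answered a ping from {:?}", event.peer),
        // A pong for this node's earlier ping arrived; only now is the RTT known.
        Ok(PingSuccess::Ping { rtt }) => {
            println!("{:?} answered our ping in {:?}", event.peer, rtt)
        }
        Err(e) => eprintln!("ping failure with {:?}: {:?}", event.peer, e),
    }
}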
If you take a look at the protocol source, you will notice that the protocol always pings first. But in the example, only the incoming events are logged. So both nodes first send a ping, which is not logged, then start receiving the pong responses and the ping requests.
