Ping in reverse order - Rust

I run the ping-pong example in two different console windows, as described in the tutorial.
use libp2p::futures::StreamExt;
use libp2p::ping::{Ping, PingConfig};
use libp2p::swarm::{Swarm, SwarmEvent};
use libp2p::{identity, Multiaddr, PeerId};
use std::error::Error;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let local_key = identity::Keypair::generate_ed25519();
    let local_peer_id = PeerId::from(local_key.public());
    println!("Local peer id: {:?}", local_peer_id);

    let transport = libp2p::development_transport(local_key).await?;
    let behaviour = Ping::new(PingConfig::new().with_keep_alive(true));
    let mut swarm = Swarm::new(transport, behaviour, local_peer_id);
    swarm.listen_on("/ip4/0.0.0.0/tcp/0".parse()?)?;

    if let Some(addr) = std::env::args().nth(1) {
        let remote: Multiaddr = addr.parse()?;
        swarm.dial(remote)?;
        println!("Dialed {}", addr)
    }

    loop {
        match swarm.select_next_some().await {
            SwarmEvent::NewListenAddr { address, .. } => println!("Listening on {:?}", address),
            SwarmEvent::Behaviour(event) => println!("{:?}", event),
            _ => {}
        }
    }
}
Console 1:
cargo run
Local peer id: PeerId("12D3KooWGi2AZgQL5sXDx4WYcPbtpXQDJJf2JTzEfHMKoYARohUN")
Listening on "/ip4/127.0.0.1/tcp/53912"
Listening on "/ip4/192.168.100.4/tcp/53912"
Event { peer: PeerId("12D3KooWQTRrShA41Kh7qoQYUc3G63uDZcXPSFw68AmzZqmaaHSh"), result: Ok(Pong) }
Event { peer: PeerId("12D3KooWQTRrShA41Kh7qoQYUc3G63uDZcXPSFw68AmzZqmaaHSh"), result: Ok(Ping { rtt: 1.112947ms }) }
Event { peer: PeerId("12D3KooWQTRrShA41Kh7qoQYUc3G63uDZcXPSFw68AmzZqmaaHSh"), result: Ok(Pong) }
Event { peer: PeerId("12D3KooWQTRrShA41Kh7qoQYUc3G63uDZcXPSFw68AmzZqmaaHSh"), result: Ok(Ping { rtt: 1.682508ms }) }
Console 2:
cargo run -- /ip4/192.168.100.4/tcp/53912
Local peer id: PeerId("12D3KooWQTRrShA41Kh7qoQYUc3G63uDZcXPSFw68AmzZqmaaHSh")
Dialed /ip4/192.168.100.4/tcp/53912
Listening on "/ip4/127.0.0.1/tcp/53913"
Listening on "/ip4/192.168.100.4/tcp/53913"
Event { peer: PeerId("12D3KooWGi2AZgQL5sXDx4WYcPbtpXQDJJf2JTzEfHMKoYARohUN"), result: Ok(Pong) }
Event { peer: PeerId("12D3KooWGi2AZgQL5sXDx4WYcPbtpXQDJJf2JTzEfHMKoYARohUN"), result: Ok(Ping { rtt: 869.493µs }) }
Event { peer: PeerId("12D3KooWGi2AZgQL5sXDx4WYcPbtpXQDJJf2JTzEfHMKoYARohUN"), result: Ok(Pong) }
Event { peer: PeerId("12D3KooWGi2AZgQL5sXDx4WYcPbtpXQDJJf2JTzEfHMKoYARohUN"), result: Ok(Ping { rtt: 2.109459ms }) }
What confuses me is the order of the ping and pong messages. This example creates two nodes that send ping-pong messages to each other. But why do they start with a Pong message every time I run these nodes? I expected first a Ping and then a Pong event. Am I missing something?

That seems to simply be down to the library's semantics: if you look at the documentation for PingSuccess (the "success" side of a ping operation), the two variants are:
Pong: received a ping and sent back a pong.
Ping: sent a ping and received back a pong; includes the round-trip time.
So the event names do not mean "a ping arrived" and "a pong arrived": "Pong" means "I received a ping and sent back a pong", while "Ping" means "I received the response to my own ping", which is why it can report the RTT. The timeline is
ping ->
    triggers PingSuccess::Pong on the peer that received the ping
<- pong
    triggers PingSuccess::Ping on the peer that sent the ping
Since both nodes ping each other, each node answers the peer's ping (logging Pong) before its own ping completes the round trip (logging Ping). Thus the order you see here.

If you take a look at the protocol source, you will notice that the protocol always pings first. But in the example only the incoming events are logged. So both nodes first send a ping (which is not logged), then start receiving the pong responses and the ping requests.
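To make this visible in the example output, you can match on the success variants instead of Debug-printing the whole event. A sketch, assuming the same libp2p version as the code above (the ping types were renamed in later releases, so PingEvent/PingSuccess may appear there as ping::Event/ping::Success):

use libp2p::ping::{PingEvent, PingSuccess};

// Hypothetical helper: call it from the SwarmEvent::Behaviour arm of the
// event loop above instead of Debug-printing the event.
fn describe(event: PingEvent) {
    match event.result {
        // The remote pinged us and we answered with a pong.
        Ok(PingSuccess::Pong) => println!("answered a ping from {}", event.peer),
        // Our own ping completed its round trip, so the RTT is known.
        Ok(PingSuccess::Ping { rtt }) => println!("ping to {} took {:?}", event.peer, rtt),
        Err(e) => println!("ping error with {}: {:?}", event.peer, e),
    }
}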

Related

Too many connections from pinging in Rust?

I have code to check the network connection, like this:
use std::process::Command;
use std::thread;
use std::time::Duration;

fn network_connection() {
    loop {
        let output = Command::new("ping")
            .arg("-c")
            .arg("1")
            .arg("google.com")
            .output()
            .expect("[ ERR ] Failed to execute network check process");
        let network_status = output.status;
        if !network_status.success() {
            // Note: this only builds the command; it is never spawned.
            Command::new("init")
                .arg("6");
        }
        thread::sleep(Duration::from_millis(60000));
    }
}

fn main() {
    thread::spawn(|| {
        network_connection();
    });
    ...
This should ping Google every 60 seconds, but when I look at the number of requests sent to the router, it's around 200 requests per 10 minutes.
Is this spawning more threads than one?
main() only runs once.
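For reference, a minimal single-threaded sketch that logs each check, so the observed rate can be compared with what the router reports (the hostname and 60-second interval are from the question; the counter and logging are added for diagnosis). Note too that resolving google.com typically costs a DNS query per run, so the router may count more than one packet per check.

use std::process::Command;
use std::thread;
use std::time::{Duration, Instant};

fn main() {
    let start = Instant::now();
    let mut checks = 0u64;
    loop {
        checks += 1;
        // One log line per check: if the router sees far more pings than
        // this counter grows, something other than this loop sends them.
        println!("check #{} at {:?}", checks, start.elapsed());
        let status = Command::new("ping")
            .args(["-c", "1", "google.com"])
            .status()
            .expect("failed to run ping");
        if !status.success() {
            eprintln!("network check failed");
        }
        thread::sleep(Duration::from_secs(60));
    }
}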

Reading a line in a BufReader creates an infinite loop

I'm trying to receive a message (server side) on the network using TcpListener in Rust.
Here's the server code:
use std::io::{BufRead, BufReader};
use std::net::TcpListener;

// Instantiate TcpListener
let server = TcpListener::bind("0.0.0.0:52001").expect("Could not bind");
match server.accept() { // Wait for a new connection from a client
    Ok((stream, addr)) => {
        println!("New connection from {}", addr);
        // Wrap the stream in a BufReader
        let mut reader = BufReader::new(&stream);
        let mut message = String::new();
        loop { // Enter the loop once a client is connected
            reader.read_line(&mut message).expect("Cannot read new line");
            println!("message received: {}", message);
            message.clear();
        }
    }
    Err(e) => {
        println!("Fail: {:?}", e)
    }
}
Here's my Kotlin client:
Socket("192.168.134.138", 52001).use { client ->
    client.getOutputStream().use { out ->
        out.write("test\n".toByteArray())
    }
}
while (true) {
    Thread.sleep(15_000)
}
The client sends the line test\n, ending with a linebreak so the server can read it.
The intended behaviour is that the server prints message received: test and then blocks at the read_line() call waiting for the next line.
It partly works: I do receive the test, but read_line() does not seem to block or wait for another message, so it creates an infinite loop. In the terminal I'm getting:
New connection from 192.168.134.123:7869
message received: test
message received:
message received:
message received:
message received:
Process finished with exit code 130 (interrupted by signal 2: SIGINT)
And I have to stop the program forcefully.
Any ideas?
Your Kotlin client closes the socket as soon as the use block ends, so the server reaches end-of-stream; after that, read_line() returns immediately instead of blocking. To detect the end of the stream, you need to check whether read_line() returned Ok(0):
From the docs:
If this function returns Ok(0), the stream has reached EOF.
loop { // Entering the loop when a client is connected
    let mut message = String::new();
    if reader.read_line(&mut message).expect("Cannot read new line") == 0 {
        break;
    }
    println!("message received: {}", message);
}
Another option is to use the BufReader::lines() iterator:
for line in reader.lines() {
    let message = line.expect("Cannot read new line");
    println!("message received: {}", message);
}
This approach is a bit inefficient, as it allocates a new String on every iteration. For best performance, allocate a single String and reuse it, as @BlackBeans pointed out in a comment:
let mut message = String::new();
loop { // Entering the loop when a client is connected
    message.clear();
    if reader.read_line(&mut message).expect("Cannot read new line") == 0 {
        break;
    }
    println!("message received: {}", message);
}

TcpListener stuck in accept() even when client thinks the connection is already established

let addr: SocketAddr = self.listen_bind.parse().unwrap();
let mut listener = TcpListener::bind(&addr).await?;
info!("Nightfort listening on {}", addr);
loop {
    info!("debug1");
    match listener.accept().await {
        Ok((stream, addr)) => {
            info!("debug2");
            let watcher = self.watcher.clone();
            info!("debug3");
            tokio::spawn(async move {
                info!("debug4");
                if let Err(e) = Nightfort::process(watcher, stream, addr).await {
                    error!("Error on this ranger: {}, error: {:?}", addr, e);
                }
            });
        }
        Err(e) => error!("Socket conn error {}", e),
    }
    // let (stream, addr) = listener.accept().await?;
}
I spent two days troubleshooting this weird issue. The Rust process runs fine on my local macOS, on Linux, and in Docker on Linux, but not on AWS Linux or Kubernetes on AWS. The main issue I found: the process hangs on accept() even though a client thinks it has established a connection to the server and has started sending messages. ps shows the server process in S status. The code was originally written in nightly Rust with alpha libraries, and I thought there might be a bug in a dependency, so I updated the code and switched to stable Rust with the latest releases of the dependencies, but the issue is still there.
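One way to separate the code from the environment is a bare-bones acceptor with no async runtime at all; if a sketch like this also hangs on accept on the same host, the problem is likely in the network path (security groups, load balancer, NAT) rather than in the Rust code. The port 9000 here is arbitrary:

use std::net::TcpListener;

// Minimal synchronous acceptor for isolating environment issues.
fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("0.0.0.0:9000")?;
    println!("listening on {}", listener.local_addr()?);
    for stream in listener.incoming() {
        let stream = stream?;
        println!("accepted connection from {}", stream.peer_addr()?);
    }
    Ok(())
}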

How can I retrieve the port from a crust Session?

I'm trying the examples from the crust crate but cannot figure out how to obtain the port of a peer I'm connected to.
The crust example includes the following function:
use crust::Service;

pub fn print_connected_nodes(&self, service: &Service) {
    println!("Node count: {}", self.nodes.len());
    for (id, node) in &self.nodes {
        let ip = service.get_peer_ip_addr(node).unwrap();
        let status = if service.is_connected(node) {
            "Connected   "
        } else {
            "Disconnected"
        };
        println!("[{} - {}] {} {:?}", id, ip, status, node);
    }
    println!();
}
There I can establish a connection and obtain the IP address with let ip = service.get_peer_ip_addr(node), but after reading the documentation I cannot find any method to obtain the port.
Is there another method to obtain the port?
Looking at the source for get_peer_ip_addr, it uses get_peer_socket_addr to get the socket address, which contains both the IP and the port. Unfortunately, get_peer_socket_addr is private, so you can't get at the port. There is an open issue about this in the crust bug tracker.
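For reference, a std::net::SocketAddr bundles both the IP and the port, so if the crate ever exposes it, the port is one call away. A standalone sketch, unrelated to the crust API:

use std::net::{IpAddr, SocketAddr};

// SocketAddr carries both pieces; splitting it is trivial.
fn split(addr: SocketAddr) -> (IpAddr, u16) {
    (addr.ip(), addr.port())
}

fn main() {
    let addr: SocketAddr = "192.168.100.4:53912".parse().unwrap();
    let (ip, port) = split(addr);
    println!("ip = {}, port = {}", ip, port);
}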

How to retrieve information from the tokio-proto connection handshake?

I'm figuring out how to use the tokio-proto crate, particularly the handshake performed when a connection is established. I've got the example from the official documentation working:
impl<T: AsyncRead + AsyncWrite + 'static> ClientProto<T> for ClientLineProto {
    type Request = String;
    type Response = String;

    /// `Framed<T, LineCodec>` is the return value of `io.framed(LineCodec)`
    type Transport = Framed<T, line::LineCodec>;
    type BindTransport = Box<Future<Item = Self::Transport, Error = io::Error>>;

    fn bind_transport(&self, io: T) -> Self::BindTransport {
        // Construct the line-based transport
        let transport = io.framed(line::LineCodec);

        // Send the handshake frame to the server.
        let handshake = transport.send("You ready?".to_string())
            // Wait for a response from the server. If the transport errors out,
            // we don't care about the transport handle anymore, just the error.
            .and_then(|transport| transport.into_future().map_err(|(e, _)| e))
            .and_then(|(line, transport)| {
                // The server sent back a line; check whether it is the
                // expected handshake line.
                match line {
                    Some(ref msg) if msg == "Bring it!" => {
                        println!("CLIENT: received server handshake");
                        Ok(transport)
                    }
                    Some(ref msg) if msg == "No! Go away!" => {
                        // At this point, the server is at capacity. There are a
                        // few things we could do: set a backoff timer and try
                        // again in a bit, or try a different remote server.
                        // However, we're just going to error out the connection.
                        println!("CLIENT: server is at capacity");
                        let err = io::Error::new(io::ErrorKind::Other, "server at capacity");
                        Err(err)
                    }
                    _ => {
                        println!("CLIENT: server handshake INVALID");
                        let err = io::Error::new(io::ErrorKind::Other, "invalid handshake");
                        Err(err)
                    }
                }
            });

        Box::new(handshake)
    }
}
But the official docs only mention a handshake without stateful information. Is there a common way to retrieve and store useful data from the handshake?
For example, if during the handshake (in the first message after the connection is established) the server sends some key that should be used later by the client, how should the ClientProto implementation look into that key? And where should it be stored?
You can add fields to ClientLineProto, so this should work:
pub struct ClientLineProto {
    handshakes: Arc<Mutex<HashMap<String, String>>>,
}
And then you can reference it and store data as needed:
let mut handshakes = self.handshakes.lock().unwrap();
handshakes.insert(handshake_key, "Blah blah handshake data".to_string());
This sort of access works in bind_transport() for storing things. If you create the Arc<Mutex<HashMap>> in your main() function, you will have access to it in the serve() method as well, which means you can pass it into the Service object instantiation; the handshakes will then be available during call().
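Putting those pieces together, a minimal sketch (the names are illustrative, not part of the tokio-proto API) of creating the shared map once and handing clones to both the proto and the service:

use std::collections::HashMap;
use std::sync::{Arc, Mutex};

type Handshakes = Arc<Mutex<HashMap<String, String>>>;

pub struct ClientLineProto {
    handshakes: Handshakes,
}

fn main() {
    // One shared map, created up front.
    let handshakes: Handshakes = Arc::new(Mutex::new(HashMap::new()));

    // The proto gets one clone; pass another clone into your Service
    // instantiation so data recorded during bind_transport() is also
    // visible in call().
    let proto = ClientLineProto { handshakes: handshakes.clone() };

    // Record something, as bind_transport() would:
    proto.handshakes.lock().unwrap()
        .insert("peer-1".to_string(), "handshake data".to_string());

    // Read it back elsewhere through another clone:
    let map = handshakes.lock().unwrap();
    println!("{:?}", map.get("peer-1"));
}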
