I am attempting to have a tokio::select! loop in which an Interval ticks every second while I also listen for UDP messages coming in on a UdpFramed Stream.
When there are no messages, the Interval ticks just fine, but when a message is received, the loop seems to block on f.next(), and I don't understand why.
Shouldn't next() call poll_next() on the Stream and yield an item only when one is available? And thus shouldn't it skip this select! arm and just keep on ticking?
use futures::StreamExt;
use log::info;
use socket2::{Domain, Protocol, SockAddr, Socket, Type};
use std::io;
use std::net::{Ipv4Addr, SocketAddrV4};
use std::time::Duration;
use tokio::net::UdpSocket;
use tokio::select;
use tokio::time::interval;
use tokio_util::codec::BytesCodec;
use tokio_util::udp::UdpFramed;

//MULTICAST Constants
const IP_ANY: [u8; 4] = [0, 0, 0, 0];

#[tokio::main]
async fn main() -> io::Result<()> {
    pretty_env_logger::init();
    info!("Tokio Select Example");
    //Create a udp ip4 socket
    let socket = Socket::new(Domain::IPV4, Type::DGRAM, Some(Protocol::UDP))?;
    //Allow this port to be reused by other sockets
    socket.set_reuse_address(true)?;
    socket.set_reuse_port(true)?;
    //Create IPV4 any address
    let address = SocketAddrV4::new(IP_ANY.into(), 5353);
    println!("Created Address");
    //Bind to wildcard 0.0.0.0
    socket.bind(&SockAddr::from(address))?;
    println!("Bound Socket");
    //Join multicast group
    socket.join_multicast_v4(&Ipv4Addr::new(224, 0, 0, 251), address.ip())?;
    println!("Joined Multicast");
    //Convert to std::net udp socket
    let udp_std_socket: std::net::UdpSocket = socket.into();
    //Convert to tokio udp socket
    let udp_socket = UdpSocket::from_std(udp_std_socket)?;
    println!(
        "Created a UDP Socket at {}, {}",
        address.ip().to_string(),
        address.port().to_string()
    );
    let mut f = UdpFramed::new(udp_socket, BytesCodec::new());
    let mut interval = interval(Duration::from_secs(1));

    loop {
        select! {
            result = tokio::time::timeout(Duration::from_millis(200), f.next()) => {
                println!("{:?}", result);
            }
            default = interval.tick() => {
                println!("Tick!");
            }
        }
    }
}
Quote from the documentation of UdpSocket::from_std():
This function is intended to be used to wrap a UDP socket from the standard library in the Tokio equivalent. The conversion assumes nothing about the underlying socket; it is left up to the user to set it in non-blocking mode.
You are not setting the underlying socket in non-blocking mode.
This works:
use futures::StreamExt;
use socket2::{Domain, Protocol, SockAddr, Socket, Type};
use std::io;
use std::net::SocketAddrV4;
use std::time::Duration;
use tokio::net::UdpSocket;
use tokio::select;
use tokio::time::interval;
use tokio_util::codec::BytesCodec;
use tokio_util::udp::UdpFramed;

//MULTICAST Constants
const IP_ANY: [u8; 4] = [0, 0, 0, 0];

#[tokio::main]
async fn main() -> io::Result<()> {
    println!("Tokio Select Example");
    //Create a udp ip4 socket
    let socket = Socket::new(Domain::IPV4, Type::DGRAM, Some(Protocol::UDP))?;
    //Allow this port to be reused by other sockets
    socket.set_reuse_address(true)?;
    socket.set_reuse_port(true)?;
    socket.set_nonblocking(true)?;
    //Create IPV4 any address
    let address = SocketAddrV4::new(IP_ANY.into(), 15253);
    println!("Created Address");
    //Bind to wildcard 0.0.0.0
    socket.bind(&SockAddr::from(address))?;
    println!("Bound Socket");
    //Convert to tokio udp socket
    let udp_socket = UdpSocket::from_std(socket.into())?;
    println!(
        "Created a UDP Socket at {}, {}",
        address.ip().to_string(),
        address.port().to_string()
    );
    let mut f = UdpFramed::new(udp_socket, BytesCodec::new());
    let mut interval = interval(Duration::from_secs(1));

    loop {
        println!("A");
        select! {
            result = tokio::time::timeout(Duration::from_millis(200), f.next()) => {
                println!("{:?}", result);
            }
            _ = interval.tick() => {
                println!("Tick!");
            }
        }
        println!("Z");
    }
}
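To see the message arm fire without netcat, one quick check is to send the listener a datagram from a second process. A minimal sender sketch, assuming the example above is running on the same machine and bound to port 15253:

use std::net::UdpSocket;

fn main() -> std::io::Result<()> {
    // Bind an ephemeral local port and fire one datagram at the listener's port
    let socket = UdpSocket::bind("0.0.0.0:0")?;
    socket.send_to(b"hello from the test sender", "127.0.0.1:15253")?;
    Ok(())
}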
Related
I want to set up a socket to listen for incoming connections and do some logic based on the incoming messages.
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};

fn main() {
    set_listening_server("192.168.80.180", 2048);
}

fn set_listening_server(ip: &str, port: i32) {
    assert!(port > 1000 && ip.len() > 0);
    println!("Function is OK!");
    let addr = format!("{}:{}", ip, port);
    let listener = match TcpListener::bind(&addr) {
        Ok(listener) => {
            println!("Listening on {}", &addr);
            loop {}
        }
        Err(e) => {
            println!("Error binding to {}: {}", &addr, e);
            return;
        }
    };
}
The above is how I have set up the socket. At first I had the loop {} right after the set_listening_server call, but I figured that as soon as the function finished executing, the socket was dropped. Moving the loop inside the function solves that issue, but is there perhaps a way to declare listener globally?
I want to have separate functions to handle the steps of the communication; for example, I want to have something like:
fn service_connection() {
    for stream in listener.incoming() {
        let stream = stream.unwrap();
        println!("Connection established!");
    }
}
How can I access listener if it is declared inside set_listening_server?
Would calling this service_connection function inside the loop be the correct approach?
EDIT:
If I want to return the socket, how can I do it correctly?
fn set_listening_server(ip: &str, port: i32) -> TcpListener {
    assert!(port > 1000 && ip.len() > 0);
    println!("Function is OK!");
    let addr = format!("{}:{}", ip, port);
    let listener = TcpListener::bind(&addr);
    return listener;
}
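For reference, one way to make the return compile is to propagate the bind error and hand back an io::Result<TcpListener>, then pass a reference to the listener into the helper instead of making it global. This is only a sketch built on the function names above, not the accepted approach from the thread:

use std::net::{TcpListener, TcpStream};

fn set_listening_server(ip: &str, port: i32) -> std::io::Result<TcpListener> {
    assert!(port > 1000 && !ip.is_empty());
    let addr = format!("{}:{}", ip, port);
    // `?` propagates the bind error to the caller instead of looping here
    let listener = TcpListener::bind(&addr)?;
    println!("Listening on {}", addr);
    Ok(listener)
}

fn service_connection(listener: &TcpListener) {
    for stream in listener.incoming() {
        let _stream: TcpStream = stream.unwrap();
        println!("Connection established!");
    }
}

fn main() -> std::io::Result<()> {
    // The listener lives in main, so it is not dropped until the program exits
    let listener = set_listening_server("192.168.80.180", 2048)?;
    service_connection(&listener);
    Ok(())
}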
I am currently attempting to write a server and a client using Tokio and the broadcast channel. I have a loop that essentially listens for connections and, after reading from the TcpStream, sends through the channel.
What I end up getting is a print each time I connect to the server and bytes are read, but I never get a 'Received'.
Here is the code that I have attempted:
use dbjade::serverops::ServerOp;
use tokio::io::BufReader;
use tokio::net::TcpStream;
use tokio::{net::TcpListener, io::AsyncReadExt};
use tokio::sync::broadcast;

const ADDR: &str = "localhost:7676"; // Your own address : TODO change to be configured
const CHANNEL_NUM: usize = 100;

use std::io;
use std::net::SocketAddr;
use bincode;

#[tokio::main]
async fn main() {
    // Create listener instance that binds to a certain address
    let listener = TcpListener::bind(ADDR).await.map_err(|err| panic!("Failed to bind: {err}")).unwrap();
    let (tx, mut rx) = broadcast::channel::<(ServerOp, SocketAddr)>(CHANNEL_NUM);

    loop {
        if let Ok((mut socket, addr)) = listener.accept().await {
            let tx = tx.clone();
            let mut rx = tx.subscribe();
            println!("Received stream from: {}", addr);
            let mut buf = vec![0, 255];
            tokio::select! {
                result = socket.read(&mut buf) => {
                    match result {
                        Ok(res) => println!("Bytes Read: {res}"),
                        Err(_) => println!(""),
                    }
                    tx.send((ServerOp::Dummy, addr)).unwrap();
                }
                result = rx.recv() => {
                    let (msg, addr) = result.unwrap();
                    println!("Received: {msg}");
                }
            }
        }
    }
}
The main problem in your code is these two lines:
let tx = tx.clone();
let mut rx = tx.subscribe();
You are redefining tx and rx, and you do it inside the loop, so the next iteration never has the same tx and rx and they cannot be connected across iterations. So when you call rx.recv(), it is not the rx you expect to be on the other end of the channel; the rx you define at the beginning is unused. Variable shadowing is a common problem in Rust. The general way to catch it is to read all of the compiler's warnings and resolve every "unused" variable, import, and so on. I would argue that turning these warnings into errors by default wouldn't hurt either (a sketch of how to do that follows the fixed code below). So that's what I did: I removed all the unused stuff and connected the correct channel ends. I also removed dbjade, since I have no idea where to get it, and for the sake of the example replaced it with a "Dummy" string.
use tokio::{net::TcpListener, io::AsyncReadExt};
use tokio::sync::broadcast;

const ADDR: &str = "localhost:7676"; // Your own address : TODO change to be configured
const CHANNEL_NUM: usize = 100;

use std::net::SocketAddr;

#[tokio::main]
async fn main() {
    // Create listener instance that binds to a certain address
    let listener = TcpListener::bind(ADDR).await.map_err(|err| panic!("Failed to bind: {err}")).unwrap();
    let (tx, mut rx) = broadcast::channel::<(String, SocketAddr)>(CHANNEL_NUM);

    loop {
        if let Ok((mut socket, addr)) = listener.accept().await {
            println!("Received stream from: {}", addr);
            let mut buf = vec![0, 255];
            tokio::select! {
                result = socket.read(&mut buf) => {
                    match result {
                        Ok(res) => println!("Bytes Read: {res}"),
                        Err(_) => println!("Err"),
                    }
                    tx.send(("Dummy".to_string(), addr)).unwrap();
                }
                result = rx.recv() => {
                    let (msg, _) = result.unwrap();
                    println!("Received: {msg}");
                }
            }
        }
    }
}
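As an aside, the suggestion above about promoting the "unused" warnings to errors can be done with a crate-level lint attribute. A minimal sketch, assuming you are happy failing the build on any unused item (#![deny(warnings)] is the broader hammer):

// At the very top of main.rs (the crate root): make the `unused` lint group an error,
// so forgotten variables, imports, and channel ends stop the build instead of warning.
#![deny(unused)]

fn main() {
    let greeting = "hello";
    println!("{greeting}"); // used, so this still compiles cleanly
}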
I have four modules. The client is sending messages and the server is receiving them. Once the server receives a message, it tries to send it into the MPSC channel. I put the receiver in the other .rs file, where I intend to receive the message.
I am not getting any message on the receiver side.
Maybe the infinite loop on the server side creates a problem, but is there a way to make this channel communication work?
client.rs
use std::io::prelude::*;
use std::os::unix::net::UnixDatagram;
use std::path::Path;
use std::sync::mpsc;

pub fn tcp_datagram_client() {
    pub static FILE_PATH: &'static str = "/tmp/datagram.sock";
    let socket = UnixDatagram::unbound().unwrap();
    match socket.connect(FILE_PATH) {
        Ok(socket) => socket,
        Err(e) => {
            println!("Couldn't connect: {:?}", e);
            return;
        }
    };
    println!("TCP client Connected to TCP Server {:?}", socket);
    loop {
        socket
            .send(b"Hello from client to server")
            .expect("recv function failed");
    }
}

fn main() {
    tcp_datagram_client();
}
server.rs
use std::os::unix::net::UnixDatagram;
use std::path::Path;
use std::str::from_utf8;
use std::sync::mpsc::Sender;

fn unlink_socket(path: impl AsRef<Path>) {
    let path = path.as_ref();
    if path.exists() {
        if let Err(e) = std::fs::remove_file(path) {
            eprintln!("Couldn't remove the file: {:?}", e);
        }
    }
}

static FILE_PATH: &'static str = "/tmp/datagram.sock";

pub fn tcp_datagram_server(tx: Sender<String>) {
    unlink_socket(FILE_PATH);
    let socket = match UnixDatagram::bind(FILE_PATH) {
        Ok(socket) => socket,
        Err(e) => {
            eprintln!("Couldn't bind: {:?}", e);
            return;
        }
    };
    let mut buf = vec![0; 1024];
    println!("Waiting for client to connect...");
    loop {
        let received_bytes = socket.recv(&mut buf).expect("recv function failed");
        println!("Received {:?}", received_bytes);
        let received_message = from_utf8(&buf).expect("utf-8 convert failed");
        tx.send(received_message.to_string());
    }
}
message_receiver.rs
use crate::server;
use std::sync::mpsc;

pub fn handle_messages() {
    let (tx, rx) = mpsc::channel();
    server::tcp_datagram_server(tx);
    let message_from_tcp_server = rx.recv().unwrap();
    println!("{:?}", message_from_tcp_server);
}
main.rs
mod server;
mod message_receiver;

fn main() {
    message_receiver::handle_messages();
}
Once the TCP client is connected:
TCP client Connected to TCP Server UnixDatagram { fd: 3, local: (unnamed), peer: "/tmp/datagram.sock" (pathname) }
I receive no messages on the channel receiver end:
Waiting for client to connect...
Maybe the infinite loop on the server side creates a problem
Yes, quite literally: your server code runs an infinite loop to continuously handle messages from the client(s), so the call to tcp_datagram_server never returns.
but is there a way to make this channel communication work?
Of course. It seems you are simply missing a second thread for your message_receiver; wrapping your tcp_datagram_server(tx) call in std::thread::spawn should do it. You could also add a loop to keep processing messages, matching the one in tcp_datagram_server:
pub fn handle_messages() {
    let (tx, rx) = mpsc::channel();
    std::thread::spawn(|| tcp_datagram_server(tx));
    loop {
        let message_from_tcp_server = rx.recv().unwrap();
        println!("{}", message_from_tcp_server);
    }
}
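Equivalently, since the standard mpsc Receiver can be consumed as a blocking iterator, the receiving loop can also be written as follows; this is a small stylistic variation, not required for the fix:

use crate::server;
use std::sync::mpsc;

pub fn handle_messages() {
    let (tx, rx) = mpsc::channel::<String>();
    std::thread::spawn(|| server::tcp_datagram_server(tx));
    // Iterating over the receiver blocks until a message arrives and
    // ends once every Sender has been dropped.
    for message_from_tcp_server in rx {
        println!("{}", message_from_tcp_server);
    }
}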
I have this code for a reverse shell over TCP in Rust:
fn pipe_thread<R, W>(mut r: R, mut w: W) -> std::thread::JoinHandle<()>
where
    R: std::io::Read + Send + 'static,
    W: std::io::Write + Send + 'static,
{
    std::thread::spawn(move || {
        let mut buffer = [0; 1024];
        loop {
            let len = r.read(&mut buffer).unwrap();
            if len == 0 {
                println!("Connection lost");
                std::process::exit(0x0100);
            }
            w.write(&buffer[..len]).unwrap();
            w.flush().unwrap();
        }
    })
}

fn listen() -> std::io::Result<()> {
    let listener = std::net::TcpListener::bind(format!("{}:{}", "0.0.0.0", "55100"))?;
    println!("Started listener");
    let (stream, _) = listener.accept()?;
    let t1 = pipe_thread(std::io::stdin(), stream.try_clone()?);
    println!("Connection received");
    let t2 = pipe_thread(stream, std::io::stdout());
    t1.join().unwrap();
    t2.join().unwrap();
    return Ok(());
}
How would I do the exact same thing but with UDP instead of TCP?
Netcat example:
netcat -ul 55100
The following code listens for UDP on port 55100. When it is sent a packet, it saves the sender's address in the addr variable. Then, when the user inputs a command, a UDP packet is sent to that address.
To test it, first send it a UDP packet from port 55101 with nc -u localhost 55100 -p 55101; this sets addr to localhost:55101. Then listen on port 55101 with nc -ul localhost 55101 and send a command from the Rust program so that the nc listener knows where to send its packets. Now you should be able to send UDP messages back and forth between netcat and the Rust program.
fn main() -> std::io::Result<()> {
    let socket = std::net::UdpSocket::bind(format!("{}:{}", "0.0.0.0", "55100"))?;
    println!("Started listener");

    // addr is the last address that sent me a udp packet, or None if I have never received one
    use std::sync::{Arc, Mutex};
    let addr: Arc<Mutex<Option<std::net::SocketAddr>>> = Arc::from(Mutex::new(None));

    // Clone addr so it can be moved into the closure
    let addr_clone = addr.clone();
    let socket_clone = socket.try_clone().unwrap();

    // Spawn listening loop
    std::thread::spawn(move || loop {
        let mut buffer = [0; 1024];
        let (len, src_addr) = socket_clone.recv_from(&mut buffer).unwrap();
        *addr_clone.lock().unwrap() = Some(src_addr);
        print!("{}", std::str::from_utf8(&buffer[..len]).unwrap());
    });

    loop {
        // Get user input
        let mut buffer = String::new();
        std::io::stdin().read_line(&mut buffer).unwrap();
        // Send the command to the last address that sent me a udp packet, or panic if I have never received one
        let addr_option = *addr.lock().unwrap();
        if let Some(addr) = addr_option {
            socket.send_to(buffer.as_bytes(), addr).unwrap();
        } else {
            panic!("Can't send udp because I don't know where to send it.");
        }
    }
}
I'm new to Rust and I'm trying to configure a simple TCP socket server that listens for connections and replies with the same message it received.
The thing is, this works as I want except when connecting multiple clients. The first client that connects can send and receive messages, but if a second client connects, the first one keeps working while the second never receives messages; in fact, the message never reaches the code that handles it. And if I disconnect the first socket, the server starts spamming forever that it received a message from the first socket, with the same content as the last message it sent.
I am pretty sure I did something wrong in my code, but I can't find it.
This is my server struct:
use std::collections::HashMap;
use std::io::Read;
use std::io::Write;
use std::net::Shutdown;
use std::net::TcpListener;
use std::net::TcpStream;
use std::str;
use std::sync::{Arc, RwLock};
use threadpool::ThreadPool;

#[derive(Clone, Debug)]
pub struct Server {
    id: Arc<RwLock<u32>>,
    connections: Arc<RwLock<HashMap<u32, TcpStream>>>,
    url: String,
    thread_pool: ThreadPool,
}

impl Server {
    pub fn new(url: String) -> Server {
        let server = Server {
            id: Arc::new(RwLock::new(0)),
            connections: Arc::new(RwLock::new(HashMap::new())),
            url,
            thread_pool: ThreadPool::new(10),
        };
        server
    }

    pub fn start(&self) {
        let listener = TcpListener::bind(&self.url).expect("Could not start the server");
        println!("Server started successfully");
        for stream in listener.incoming() {
            match stream {
                Ok(stream) => {
                    let mut self_clone = self.clone();
                    self.thread_pool.execute(move || {
                        self_clone.on_client_connect(stream.try_clone().unwrap());
                    });
                }
                Err(error) => eprintln!("Error when tried to use stream. Error = {:?}", error),
            }
        }
    }

    fn on_client_connect(&mut self, stream: TcpStream) {
        println!("Client connected from {}", stream.local_addr().unwrap());
        let mut id = self.id.write().unwrap();
        {
            *id += 1;
        }
        self.connections
            .write()
            .unwrap()
            .insert(*id, stream.try_clone().unwrap());
        let mut stream = stream.try_clone().unwrap();
        let mut buffer = [0; 1024];
        while match stream.read(&mut buffer) {
            Ok(size) => {
                println!(
                    "Message received from {} - {}",
                    id,
                    str::from_utf8(&buffer).unwrap()
                );
                stream.write_all(&buffer[0..size]).unwrap();
                true
            }
            Err(error) => {
                println!(
                    "Error when reading message from socket. Error = {:?}",
                    error
                );
                stream.shutdown(Shutdown::Both).unwrap();
                false
            }
        } {}
    }
}
And in my main.rs I'm just calling the connect function and the server starts working
In this piece of code in your on_client_connect function, you're acquiring a write lock for self.id:
let mut id = self.id.write().unwrap();
{
    *id += 1;
}
However, the id variable, which holds the lock, is not dropped until the end of the function, which means all other clients will wait for the lock to be released, and that won't happen until the function currently holding it has completed (which happens when that client disconnects).
You can solve this by rewriting the above code to hold the lock only while incrementing, and then storing the ID value in a plain variable:
let id: u32 = {
    let mut id_lock = self.id.write().unwrap();
    *id_lock += 1;
    *id_lock
    // id_lock is dropped at the end of this block, so the lock is released
};
Even better, you can use AtomicU32, which is still thread-safe yet does not require locking at all:
use std::sync::atomic::{AtomicU32, Ordering};

pub struct Server {
    id: Arc<AtomicU32>,
    // ...
}

// Fetch the previous value, then increment `self.id` by one, in a thread-safe and lock-free manner
let id: u32 = self.id.fetch_add(1, Ordering::Relaxed);
Also, when the connection is closed your code goes into an infinite loop because you're not handling the case where stream.read() returns Ok(0), which indicates that the connection was closed:
while match stream.read(&mut buffer) {
    Ok(0) => false, // handle connection closed...
    Ok(size) => { /* ... */ }
    Err(err) => { /* ... */ }
} {}
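For completeness, here is a fragment of what the read loop could look like with the Ok(0) case handled, assuming the same stream and buffer variables and the u32 id from the locking fix above; note that it also only converts the size bytes that were actually read, rather than the whole buffer:

loop {
    match stream.read(&mut buffer) {
        // Ok(0): the peer closed the connection, so stop reading
        Ok(0) => break,
        Ok(size) => {
            println!(
                "Message received from {} - {}",
                id,
                String::from_utf8_lossy(&buffer[..size])
            );
            stream.write_all(&buffer[..size]).unwrap();
        }
        Err(error) => {
            println!("Error when reading message from socket. Error = {:?}", error);
            let _ = stream.shutdown(Shutdown::Both);
            break;
        }
    }
}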