Actix WebSocket: how to handle JSON messages from the client?

I'm making a small fighting game with Actix Web and Actix WebSocket. My client sends JSON that contains attack data, and I'm wondering how to deserialize that JSON into the corresponding Rust struct. It seems that StreamHandler doesn't let me handle anything other than plain text.
impl StreamHandler<Result<ws::Message, ws::ProtocolError>> for MyWs {
    fn handle(&mut self, msg: Result<ws::Message, ws::ProtocolError>, ctx: &mut Self::Context) {
        match msg {
            Ok(ws::Message::Ping(msg)) => ctx.pong(&msg),
            Ok(ws::Message::Text(text)) => ctx.text(text),
            Ok(ws::Message::Binary(bin)) => ctx.binary(bin),
            _ => (),
        }
    }
}
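StreamHandler has no built-in JSON handling; the Text frame is just a string, so one approach is to run it through serde_json yourself. A minimal sketch, assuming the attack data is modelled with serde's Deserialize (the Attack struct and its fields are made up for illustration):
use actix::{Actor, StreamHandler};
use actix_web_actors::ws;
use serde::Deserialize;

// Hypothetical shape of the client's attack payload.
#[derive(Debug, Deserialize)]
struct Attack {
    target: String,
    damage: u32,
}

struct MyWs;

impl Actor for MyWs {
    type Context = ws::WebsocketContext<Self>;
}

impl StreamHandler<Result<ws::Message, ws::ProtocolError>> for MyWs {
    fn handle(&mut self, msg: Result<ws::Message, ws::ProtocolError>, ctx: &mut Self::Context) {
        match msg {
            Ok(ws::Message::Ping(msg)) => ctx.pong(&msg),
            Ok(ws::Message::Text(text)) => {
                // The text frame is just a string: parse it with serde_json.
                match serde_json::from_str::<Attack>(&text) {
                    Ok(attack) => println!("attack received: {:?}", attack),
                    Err(e) => ctx.text(format!("invalid attack message: {}", e)),
                }
            }
            _ => (),
        }
    }
}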

Related

How to use an extractor with from_request

I have the following route handler, which sends a 400 BAD REQUEST in case of a parse error in FormData.
#[post("/newsletter")]
pub async fn publish_newsletter(
    form: web::Form<FormData>,
    ...
) -> Result<HttpResponse, PublishError> {
    ...
}
To provide better UX, I want to opt out of this behavior. I'd like to redirect the user to the same page and display the error as a flash message.
But I can't seem to figure out how to extract FormData using Form::from_request.
I have tried using HttpRequest and web::dev::Payload extractors:
#[post("/newsletter")]
pub async fn publish_newsletter(
    ...
    req: HttpRequest,
    mut payload: dev::Payload,
) -> Result<HttpResponse, PublishError> {
    let form: web::Form<FormData> = match web::Form::from_request(&req, &mut payload).await {
        Ok(data) => data,
        Err(e) => {
            // send flash message
            // redirect to same page
            return Ok(see_other("/newsletter"));
        }
    };
    ...
}
But ultimately I'm faced with this error:
the trait bound `actix_web::dev::Payload: FromRequest` is not satisfied
I've solved it in the following way:
#[post("/newsletter")]
pub async fn publish_newsletter(
    form: Result<web::Form<FormData>, actix_web::Error>,
    ...
) -> Result<HttpResponse, PublishError> {
    let form = match form {
        Ok(form) => form,
        Err(e) => return Ok(send_flash_message_and_redirect(e, "/newsletter")),
    };
    ...
}
Shoutout to the helpful folks at Actix Web Discord.

How do I create a stream for actix-web HttpResponse to send a file chunk by chunk?

I want to stream an encrypted file with actix-web in Rust. I have a loop that decrypts the encrypted file chunk by chunk using sodiumoxide, and I want to send the decrypted chunks to the client.
My loop looks like this:
while stream.is_not_finalized() {
    match in_file.read(&mut buffer) {
        Ok(num_read) if num_read > 0 => {
            let (decrypted, _tag) = stream
                .pull(&buffer[..num_read], None)
                .map_err(|_| error::ErrorInternalServerError("Incorrect password"))
                .unwrap();
            // here I want to send decrypted to HttpResponse
            continue;
        }
        Err(e) => error::ErrorInternalServerError(e),
        _ => error::ErrorInternalServerError("Decryption error"), // reached EOF
    };
}
I found the streaming method, which takes a Stream as a parameter. How can I create a stream that I can feed chunk by chunk?
Depending on your workload (how big your data chunks are, how time-consuming the decryption is, etc.) you have a few options for building the stream. The most legitimate way that comes to mind is to use some kind of thread pool with a channel to communicate between the worker thread and your handler. Tokio's mpsc is an option here: its Receiver already implements Stream, and you can feed it from your thread via the Sender's try_send. As long as you use an unbounded channel, or a bounded channel with enough capacity, that should work.
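For illustration, here is a minimal sketch of that channel-based approach. It assumes a current actix-web 4 / tokio 1.x setup plus the tokio-stream crate (the answer above targets an older tokio where mpsc::Receiver implemented Stream directly), and the placeholder chunks stand in for the read-and-decrypt loop:
use actix_web::{web::Bytes, HttpResponse};
use tokio_stream::wrappers::ReceiverStream;

async fn download() -> HttpResponse {
    // The handler keeps the receiving half; the worker thread owns the sender.
    let (tx, rx) = tokio::sync::mpsc::channel::<Result<Bytes, std::io::Error>>(32);

    // Do the blocking read/decrypt work off the async executor.
    tokio::task::spawn_blocking(move || {
        for chunk in [b"chunk-1".to_vec(), b"chunk-2".to_vec()] {
            // placeholder for "read + decrypt one chunk"
            if tx.blocking_send(Ok(Bytes::from(chunk))).is_err() {
                break; // receiver dropped: the client went away
            }
        }
    });

    // ReceiverStream turns the channel into a Stream that .streaming() accepts.
    HttpResponse::Ok().streaming(ReceiverStream::new(rx))
}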
Another option, for cases where the decryption isn't time-consuming enough to be considered blocking, or if you just want to see how a Stream can be implemented for actix, is the following:
use std::pin::Pin;
use std::task::{Context, Poll};

use actix_web::{get, App, HttpResponse, HttpServer, Responder};
use pin_project::pin_project;
use sodiumoxide::crypto::secretstream::{Pull, Stream};
use tokio::{fs::File, io::AsyncRead};

#[pin_project]
struct Streamer {
    crypto_stream: Stream<Pull>,
    #[pin]
    file: File,
}

impl tokio::stream::Stream for Streamer {
    type Item = Result<actix_web::web::Bytes, actix_web::Error>;

    fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
        let this = self.project();
        let mut buffer = [0; BUFFER_LENGTH];

        if this.crypto_stream.is_not_finalized() {
            match this.file.poll_read(cx, &mut buffer) {
                Poll::Ready(res) => match res {
                    Ok(bytes_read) if bytes_read > 0 => {
                        // only decrypt the bytes that were actually read
                        let value = this.crypto_stream.pull(&buffer[..bytes_read], None);
                        match value {
                            Ok((decrypted, _tag)) => Poll::Ready(Some(Ok(decrypted.into()))),
                            Err(_) => Poll::Ready(Some(Err(
                                actix_web::error::ErrorInternalServerError("Incorrect password"),
                            ))),
                        }
                    }
                    Err(err) => {
                        Poll::Ready(Some(Err(actix_web::error::ErrorInternalServerError(err))))
                    }
                    // reached EOF before the crypto stream was finalized
                    _ => Poll::Ready(Some(Err(actix_web::error::ErrorInternalServerError(
                        "Decryption error",
                    )))),
                },
                Poll::Pending => Poll::Pending,
            }
        } else {
            // Stream finishes when it returns None
            Poll::Ready(None)
        }
    }
}
and use it from your handler:
let in_file = File::open(FILE_NAME).await?;
let stream = Stream::init_pull(&header, &key)?;

let stream = Streamer {
    crypto_stream: stream,
    file: in_file,
};

HttpResponse::Ok()
    // .content_type("text/text")
    .streaming(stream)
Note that you need pin_project, and tokio with the ["stream", "fs"] features, as dependencies for this to work.

How can I implement a pull-based system using Tokio?

I want to implement a pull-based system between a server and a client where the server will only push data when the client asks for it.
I was playing with Tokio and managed to create a push-based system that pushes a string at an interval of 1 ms.
let done = listener
    .incoming()
    .for_each(move |socket| {
        let server_queue = _cqueue.clone();
        let (reader, mut writer) = socket.split();
        let sender = Interval::new_interval(std::time::Duration::from_millis(1))
            .for_each(move |_| {
                writer
                    .poll_write(server_queue.pull().borrow())
                    .map_err(|_| {
                        tokio::timer::Error::shutdown();
                    })
                    .unwrap();
                return Ok(());
            })
            .map_err(|e| println!("{}", e));
        tokio::spawn(sender);
        return Ok(());
    })
    .map_err(|e| println!("Future_error {}", e));
Is there a way to send only when the client asks for it without having to use a reader?
Let's think back for a moment on the kind of events that could lead to this "sending of data". You can think of multiple ways:
The client connects to the server. By contract, this is "asking for data". You've implemented this case.
The client sends an in-band message on the socket/pipe connecting the client and server. For that, you need to take the AsyncRead part of your socket and the AsyncWrite part that you've already used, and build a duplex channel so you can read and talk at the same time.
The client sends an out-of-band message, typically on another proto-host-port triplet and using a different protocol. Your current server recognizes it and sends the client that data. To do this, you need a reader for the other triplet, and you need a messaging structure in place to relay this to the one place having access to the AsyncWrite part of your socket.
The short answer is no, you cannot really act on an event that you're not listening for.
@Shepmaster I was just wondering if there was an existing library that can be used to handle this "neatly"
There is, and then there isn't.
Most libraries are centered around a specific problem. In your case, you've opted to work at the lowest possible level by having a TCP socket (implementing AsyncRead + AsyncWrite).
To do anything, you're going to need to decide on:
A transport format
A protocol
I tend to wrap code into this when I need a quick and dirty implementation of a duplex stream:
use futures::sync::mpsc::{unbounded, UnboundedSender};
use std::sync::Arc;
use futures::{future, stream, Future, Sink, Stream};
use tokio::io::{AsyncRead, AsyncWrite};
use tokio::codec::{Decoder, Encoder, Framed};
use std::io;
use std::fmt::Debug;
use futures_locks::RwLock as FutLock;

enum Message<T: Send + Debug + 'static> {
    Content(T),
    Done,
}

impl<T: Send + Debug + 'static> From<T> for Message<T> {
    fn from(message: T) -> Message<T> {
        Message::Content(message)
    }
}

struct DuplexStream<T: Send + Debug + 'static> {
    writer: Arc<FutLock<UnboundedSender<Message<T>>>>,
    handlers: Arc<FutLock<Option<Box<dyn Stream<Item = Message<T>, Error = ()> + Send>>>>,
}

impl<T: Send + Debug + 'static> DuplexStream<T> {
    pub fn from<R, U>(framed_socket: Framed<R, U>) -> Arc<DuplexStream<T>>
    where
        U: Send + Encoder<Item = T> + Decoder<Item = T> + 'static,
        R: Send + AsyncRead + AsyncWrite + 'static,
    {
        let (tx, rx) = framed_socket.split();

        // Assemble the combined upstream stream
        let (upstream_tx, upstream_rx) = unbounded();
        let upstream = upstream_rx
            .take_while(|item| match item {
                Message::Done => future::ok(false),
                _ => future::ok(true),
            })
            .fold(tx, |o, m| {
                o.send(match m {
                    Message::Content(i) => i,
                    _ => unreachable!(),
                })
                .map_err(|_| ())
            })
            .map(|_| Message::Done)
            .into_stream();

        // Assemble the downstream stream
        let downstream = rx
            .map_err(|_| ())
            .map(|r| Message::Content(r))
            .chain(stream::once(Ok(Message::Done)));

        Arc::new(DuplexStream {
            writer: Arc::new(FutLock::new(upstream_tx)),
            handlers: Arc::new(FutLock::new(Some(Box::new(
                upstream.select(downstream).take_while(|m| match m {
                    Message::Content(_) => future::ok(true),
                    Message::Done => future::ok(false),
                }),
            )))),
        })
    }

    pub fn start(self: Arc<Self>) -> Box<dyn Stream<Item = T, Error = io::Error> + Send> {
        Box::new(
            self.handlers
                .write()
                .map_err(|_| io::Error::new(io::ErrorKind::NotFound, "Stream closed"))
                .map(|mut handler| -> Box<dyn Stream<Item = T, Error = io::Error> + Send> {
                    match handler.take() {
                        Some(e) => Box::new(
                            e.map(|r| match r {
                                Message::Content(i) => i,
                                _ => unreachable!(),
                            })
                            .map_err(|_| io::Error::new(io::ErrorKind::NotFound, "Stream closed")),
                        ),
                        None => Box::new(stream::once(Err(io::Error::new(
                            io::ErrorKind::AddrInUse,
                            "Handler already taken",
                        )))),
                    }
                })
                .into_stream()
                .flatten(),
        )
    }

    pub fn close(self: Arc<Self>) -> Box<dyn Future<Item = (), Error = io::Error> + Send> {
        self.inner_send(Message::Done)
    }

    pub fn send(self: Arc<Self>, message: T) -> Box<dyn Future<Item = (), Error = io::Error> + Send> {
        self.inner_send(message.into())
    }

    pub fn inner_send(self: Arc<Self>, message: Message<T>) -> Box<dyn Future<Item = (), Error = io::Error> + Send> {
        Box::new(
            self.writer
                .write()
                .map_err(|_| io::Error::new(io::ErrorKind::NotFound, "The mutex has disappeared"))
                .and_then(|guard| {
                    future::result(
                        guard
                            .unbounded_send(message)
                            .map_err(|_| io::Error::new(io::ErrorKind::BrokenPipe, "The sink has gone away")),
                    )
                }),
        )
    }
}
This struct has a multitude of advantages, but a few drawbacks. The main advantage is that you can deal with the read and write part on the same object the same way you would in another language. The object itself implements Clone (since it's an Arc), every method is usable everywhere (particularly useful for old futures code) and as long as you keep a copy of it somewhere and don't call close() it'll keep running (as long as the underlying AsyncRead + AsyncWrite implementation is still there).
This does not absolve you from points 1 and 2, but you can (and should) leverage tokio::codec::Framed for point 1, and implement point 2 as business logic.
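For point 1, a minimal sketch of what that Framed wrapping might look like, assuming tokio 0.1's codec module and a simple line-delimited text protocol (LinesCodec yields one String per line, which is the message type the test below uses):
use tokio::codec::{Framed, LinesCodec};
use tokio::net::TcpStream;

// Wrap a raw socket so DuplexStream::from sees whole String messages,
// one per line, instead of raw bytes.
fn framed(socket: TcpStream) -> Framed<TcpStream, LinesCodec> {
    Framed::new(socket, LinesCodec::new())
}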
An example (it's actually a test ;-) ) of the usage:
#[test]
fn it_writes() {
    let stream = DuplexStream::from(make_w());
    let stream_write = Arc::clone(&stream);
    let stream_read = Arc::clone(&stream);
    let dup = Arc::clone(&stream);

    tokio::run(lazy(move || {
        let stream_write = Arc::clone(&stream_write);
        stream_read
            .start()
            .and_then(move |i| {
                let stream_write = Arc::clone(&stream_write);
                stream_write.send("foo".to_string()).map(|_| i)
            })
            .collect()
            .map(|r| {
                assert_eq!(
                    r,
                    vec![
                        "foo".to_string(),
                        "bar".to_string(),
                        "bazfoo".to_string(),
                        "foo".to_string()
                    ]
                )
            })
            .map_err(|_| {
                assert_eq!(true, false);
            })
    }));
}

Actix SyncArbiter registry

I'm trying to implement a pool of 10 Redis connections using a SyncArbiter, for different actors to use. Say we have an actor named Bob that has to use a Redis actor to accomplish its task.
While this is achievable in the following manner:
// crate, use and mod statements have been omitted to lessen clutter

/// FILE main.rs

pub struct AppState {
    pub redis: Addr<Redis>,
    pub bob: Addr<Bob>
}

fn main() {
    let system = actix::System::new("theatre");

    server::new(move || {
        let redis_addr = SyncArbiter::start(10, || Redis::new("redis://127.0.0.1").unwrap());
        let bob_addr = SyncArbiter::start(10, || Bob::new());

        let state = AppState {
            redis: redis_addr,
            bob: bob_addr
        };

        App::with_state(state).resource("/bob/eat", |r| {
            r.method(http::Method::POST)
                .with_async(controllers::bob::eat)
        })
    })
    .bind("0.0.0.0:8080")
    .unwrap()
    .start();

    println!("Server started.");
    system.run();
}
/// FILE controllers/bob.rs

pub struct Food {
    name: String,
    kcal: u64
}

pub fn eat(
    (req, state): (Json<Food>, State<AppState>),
) -> impl Future<Item = HttpResponse, Error = Error> {
    state
        .bob
        .send(Eat::new(req.into_inner()))
        .from_err()
        .and_then(|res| match res {
            Ok(val) => {
                println!("==== BODY ==== {:?}", val);
                Ok(HttpResponse::Ok().into())
            }
            Err(_) => Ok(HttpResponse::InternalServerError().into()),
        })
}
/// FILE actors/redis.rs

#[derive(Debug)]
pub struct Redis {
    pub client: Client
}

pub struct RunCommand(Cmd);

impl RunCommand {
    pub fn new(cmd: Cmd) -> Self {
        RunCommand(cmd)
    }
}

impl Message for RunCommand {
    type Result = Result<RedisResult<String>, ()>;
}

impl Actor for Redis {
    type Context = SyncContext<Self>;
}

impl Handler<RunCommand> for Redis {
    type Result = Result<RedisResult<String>, ()>;

    fn handle(&mut self, msg: RunCommand, _context: &mut Self::Context) -> Self::Result {
        println!("Redis received command!");
        Ok(Ok("OK".to_string()))
    }
}

impl Redis {
    pub fn new(url: &str) -> Result<Self, RedisError> {
        let client = match Client::open(url) {
            Ok(client) => client,
            Err(error) => return Err(error)
        };

        let redis = Redis {
            client: client,
        };

        Ok(redis)
    }
}
/// FILE actors/bob.rs

pub struct Bob;

pub struct Eat(Food);

impl Message for Eat {
    type Result = Result<(), ()>;
}

impl Actor for Bob {
    type Context = SyncContext<Self>;
}

impl Handler<Eat> for Bob {
    type Result = Result<(), ()>;

    fn handle(&mut self, msg: Eat, _context: &mut Self::Context) -> Self::Result {
        println!("Bob received {:?}", &msg);
        // How to get a Redis actor and pass data to it here?
        Ok(())
    }
}

impl Bob {
    pub fn new() -> Self {
        Bob {}
    }
}
From the above handle implementation in Bob, it's unclear how Bob could get the address of a Redis actor, or send any message to any actor running in a SyncArbiter.
The same could be achieved using a regular Arbiter and a Registry, but as far as I am aware, Actix doesn't allow multiple actors of the same type (e.g. we can't start 10 Redis actors using a regular Arbiter).
To formalize my questions:
Is there a Registry for SyncArbiter actors?
Can I start multiple actors of the same type in a regular Arbiter?
Is there a better / more canonical way to implement a connection pool?
EDIT
Versions:
actix 0.7.9
actix_web 0.7.19
futures = "0.1.26"
rust 1.33.0
I found the answer myself.
Out of the box, there is no way for an Actor with a SyncContext to be retrieved from the registry.
Given my above example: for the actor Bob to send any kind of message to the Redis actor, it needs to know the Redis actor's address. Bob can get the address of Redis explicitly - contained in a message sent to it, or read from some kind of shared state.
I wanted a system similar to Erlang's, so I decided against passing actor addresses through messages: it seemed too laborious and error-prone, and in my mind it defeats the purpose of having an actor-based concurrency model (since not every actor can message every other actor).
Therefore I investigated the idea of shared state, and decided to implement my own SyncRegistry as an analog to the standard Actix Registry - which does exactly what I want, but not for actors with a SyncContext.
Here is the naive solution i coded up: https://gist.github.com/monorkin/c463f34764ab23af2fd0fb0c19716177
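The gist is the authoritative version, but the core idea is roughly this: a process-wide map from each actor type's TypeId to its type-erased Addr, guarded by a Mutex so handlers on SyncArbiter threads can reach it. A simplified sketch of that idea (not the gist's exact code; lazy_static and the Arc<Mutex<Addr<_>>> shape are assumptions chosen to match the usage shown below):
use std::any::{Any, TypeId};
use std::collections::HashMap;
use std::marker::PhantomData;
use std::sync::{Arc, Mutex};

use actix::{Actor, Addr};
use lazy_static::lazy_static;

lazy_static! {
    // One process-wide map from actor type to its type-erased address.
    static ref ADDRESSES: Mutex<HashMap<TypeId, Box<dyn Any + Send>>> =
        Mutex::new(HashMap::new());
}

pub struct SyncRegistry<A: Actor> {
    _marker: PhantomData<A>,
}

impl<A: Actor> SyncRegistry<A>
where
    Addr<A>: Send,
{
    pub fn set(addr: Addr<A>) {
        let entry: Arc<Mutex<Addr<A>>> = Arc::new(Mutex::new(addr));
        ADDRESSES
            .lock()
            .unwrap()
            .insert(TypeId::of::<A>(), Box::new(entry));
    }

    pub fn get() -> Option<Arc<Mutex<Addr<A>>>> {
        ADDRESSES
            .lock()
            .unwrap()
            .get(&TypeId::of::<A>())
            // downcast back to the concrete address type for this actor
            .and_then(|boxed| boxed.downcast_ref::<Arc<Mutex<Addr<A>>>>())
            .cloned()
    }
}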
With the following setup:
fn main() {
    let system = actix::System::new("theatre");

    let addr = SyncArbiter::start(10, || Redis::new("redis://redis").unwrap());
    SyncRegistry::set(addr);

    let addr = SyncArbiter::start(10, || Bob::new());
    SyncRegistry::set(addr);

    server::new(move || {
        let state = AppState {};

        App::with_state(state).resource("/foo", |r| {
            r.method(http::Method::POST)
                .with_async(controllers::foo::create)
        })
    })
    .bind("0.0.0.0:8080")
    .unwrap()
    .start();

    println!("Server started.");
    system.run();
}
The actor Bob can get the address of Redis in the following manner, from any point in the program:
impl Handler<Eat> for Bob {
    type Result = Result<(), ()>;

    fn handle(&mut self, msg: Eat, _context: &mut Self::Context) -> Self::Result {
        let redis = match SyncRegistry::<Redis>::get() {
            Some(redis) => redis,
            _ => return Err(())
        };

        let cmd = redis::cmd("XADD")
            .arg("things_to_eat")
            .arg("*")
            .arg("data")
            .arg(&msg.0)
            .to_owned();

        redis.clone().lock().unwrap().send(RunCommand::new(cmd)).wait().unwrap();

        Ok(())
    }
}

A client for HTTP server push (streaming) in Rust?

For lack of a better example, let's say I want to write a simple client in Rust that could establish a connection and receive data from Twitter's HTTP Streaming API. Is this possible yet? I've been keeping an eye on Iron and Nickel, which seem like good frameworks, but I don't think they have this feature yet.
The HTTP client hyper supports reading responses incrementally (like anything that implements Rust's Reader trait), but I wasn't able to find anything that parses the response incrementally, or that implements Twitter's particular protocol (objects delimited with \r\n).
That said, I was able to implement a quick'n'dirty proof of concept:
EDIT: See and play with it on github.
use rustc_serialize::json::Json;
use std::str;

pub trait JsonObjectStreamer {
    fn json_objects(&mut self) -> JsonObjects<Self>;
}

impl<T: Buffer> JsonObjectStreamer for T {
    fn json_objects(&mut self) -> JsonObjects<T> {
        JsonObjects { reader: self }
    }
}

pub struct JsonObjects<'a, B> where B: 'a {
    reader: &'a mut B
}

impl<'a, B> Iterator for JsonObjects<'a, B> where B: Buffer + 'a {
    type Item = Json;

    fn next(&mut self) -> Option<Json> {
        let mut line_bytes = match self.reader.read_until(b'\r') {
            Ok(bytes) => bytes,
            Err(_) => return None,
        };

        if line_bytes.last() == Some(&b'\r') {
            // drop the \r
            line_bytes.pop();

            // skip the \n
            match self.reader.read_char() {
                Ok(_) => (),
                Err(_) => return None,
            }
        }

        let line = match str::from_utf8(&line_bytes) {
            Ok(line) => line,
            Err(_) => return None
        };

        Json::from_str(line).ok()
    }
}
Usage (assuming you have dropped it in a src/json_streamer.rs file in your project):
#![feature(io)]

extern crate hyper;
extern crate "rustc-serialize" as rustc_serialize;

mod json_streamer;

use hyper::Client;
use std::old_io::BufferedReader;
use json_streamer::JsonObjectStreamer;

fn main() {
    let mut client = Client::new();
    let res = client.get("http://localhost:4567/").send().unwrap();

    for obj in BufferedReader::new(res).json_objects() {
        println!("object arrived: {}", obj);
    }
}
I've used this tiny Sinatra app to test it:
require 'sinatra'
require 'json'

class Stream
  def each
    hash = { index: 0 }

    loop do
      hash[:index] += 1
      yield hash.to_json + "\r\n"
      sleep 0.5
    end
  end
end

get '/' do
  Stream.new
end
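The Rust code above predates Rust 1.0 (old_io, the Buffer trait and rustc-serialize are all long gone). For readers landing here today, a roughly equivalent sketch with the current ecosystem might look like this; it assumes reqwest with the "stream" feature, tokio 1.x, futures-util and serde_json, and the same \r\n-delimited protocol as the Sinatra app above:
// Assumed dependencies: reqwest = { version = "0.11", features = ["stream"] },
// tokio = { version = "1", features = ["full"] }, futures-util, serde_json.
use futures_util::StreamExt;
use serde_json::Value;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let res = reqwest::get("http://localhost:4567/").await?;
    let mut stream = res.bytes_stream();
    let mut buf: Vec<u8> = Vec::new();

    while let Some(chunk) = stream.next().await {
        buf.extend_from_slice(&chunk?);

        // Objects are delimited with \r\n, so parse every complete line we have so far.
        while let Some(pos) = buf.iter().position(|&b| b == b'\n') {
            let line: Vec<u8> = buf.drain(..=pos).collect();
            let line = String::from_utf8_lossy(&line);
            let line = line.trim();
            if line.is_empty() {
                continue;
            }
            let obj: Value = serde_json::from_str(line)?;
            println!("object arrived: {}", obj);
        }
    }

    Ok(())
}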
