PS: The answer below helped, but it's not the answer I need; I have a new problem and I have edited the question.
I'm trying to make a custom transporter for the hyper http crate, so I can transport http packets in my own way.
Hyper's HTTP client can be passed a custom connector implementing the Connect trait (https://docs.rs/hyper/0.14.2/hyper/client/connect/trait.Connect.html) here:
pub fn build<C, B>(&self, connector: C) -> Client<C, B> where C: Connect + Clone, B: HttpBody + Send, B::Data: Send,
If we look at
impl<S, T> Connect for S where
S: Service<Uri, Response = T> + Send + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
S::Future: Unpin + Send,
T: AsyncRead + AsyncWrite + Connection + Unpin + Send + 'static,
we see that the type T, which is the Response type, must implement AsyncRead + AsyncWrite, so I've chosen type Response = Cursor<Vec<u8>>.
Here's my custom transporter, with a Response of type std::io::Cursor wrapped in CustomResponse so I can implement AsyncWrite and AsyncRead for it:
use hyper::service::Service;
use core::task::{Context, Poll};
use core::future::Future;
use std::pin::Pin;
use std::io::Cursor;
use hyper::client::connect::{Connection, Connected};
use tokio::io::{AsyncRead, AsyncWrite};
#[derive(Clone)]
pub struct CustomTransporter;
unsafe impl Send for CustomTransporter {}
impl CustomTransporter {
pub fn new() -> CustomTransporter {
CustomTransporter{}
}
}
impl Connection for CustomTransporter {
fn connected(&self) -> Connected {
Connected::new()
}
}
pub struct CustomResponse {
//w: Cursor<Vec<u8>>,
v: Vec<u8>,
i: i32
}
unsafe impl Send for CustomResponse {
}
impl Connection for CustomResponse {
fn connected(&self) -> Connected {
println!("connected");
Connected::new()
}
}
impl AsyncRead for CustomResponse {
fn poll_read(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &mut tokio::io::ReadBuf<'_>
) -> Poll<std::io::Result<()>> {
self.i+=1;
if self.i >=3 {
println!("poll_read for buf size {}", buf.capacity());
buf.put_slice(self.v.as_slice());
println!("did poll_read");
Poll::Ready(Ok(()))
} else {
println!("poll read pending, i={}", self.i);
Poll::Pending
}
}
}
impl AsyncWrite for CustomResponse {
fn poll_write(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &[u8]
) -> Poll<Result<usize, std::io::Error>>{
//let v = vec!();
println!("poll_write____");
let s = match std::str::from_utf8(buf) {
Ok(v) => v,
Err(e) => panic!("Invalid UTF-8 sequence: {}", e),
};
println!("result: {}, size: {}, i: {}", s, s.len(), self.i);
if self.i>=0{
//r
Poll::Ready(Ok(s.len()))
}else{
println!("poll_write pending");
Poll::Pending
}
}
fn poll_flush(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>
) -> Poll<Result<(), std::io::Error>> {
println!("poll_flush");
if self.i>=0{
println!("DID poll_flush");
Poll::Ready(Ok(()))
}else{
println!("poll_flush pending");
Poll::Pending
}
}
fn poll_shutdown(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>
) -> Poll<Result<(), std::io::Error>>
{
println!("poll_shutdown");
Poll::Ready(Ok(()))
}
}
impl Service<hyper::Uri> for CustomTransporter {
type Response = CustomResponse;
type Error = hyper::http::Error;
type Future = Pin<Box<dyn Future<Output = Result<Self::Response, Self::Error>> + Send>>;
fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
println!("poll_ready");
Poll::Ready(Ok(()))
//Poll::Pending
}
fn call(&mut self, req: hyper::Uri) -> Self::Future {
println!("call");
// create the body
let body: Vec<u8> = "HTTP/1.1 200 OK\nDate: Mon, 27 Jul 2009 12:28:53 GMT\nServer: Apache/2.2.14 (Win32)\nLast-Modified: Wed, 22 Jul 2009 19:15:56 GMT\nContent-Length: 88\nContent-Type: text/html\nConnection: Closed<html><body><h1>Hello, World!</h1></body></html>".as_bytes()
.to_owned();
// Create the HTTP response
let resp = CustomResponse{
//w: Cursor::new(body),
v: body,
i: 0
};
// create a response in a future.
let fut = async move{
Ok(resp)
};
println!("gonna return from call");
// Return the response as an immediate future
Box::pin(fut)
}
}
Then I use it like this:
let connector = CustomTransporter::new();
let client: Client<CustomTransporter, hyper::Body> = Client::builder().build(connector);
let mut res = client.get(url).await.unwrap();
However, it gets stuck and hyper never reads my response, but it writes the GET to it.
Here's a complete project for testing: https://github.com/lzunsec/rust_hyper_custom_transporter/blob/39cd036fc929057d975a71969ccbe97312543061/src/custom_req.rs
Run it like this:
cargo run http://google.com
I cannot simply implement Send for the Future, and I cannot change the Future with a wrapper. What should I do here?
It looks like the problem is that your Service::Future is missing the Send constraint. The future returned in call is already Send, so it will work with this simple change:
impl Service<hyper::Uri> for CustomTransporter {
type Response = CustomResponse;
type Error = hyper::http::Error;
type Future = Pin<Box<dyn Future<Output = Result<Self::Response, Self::Error>> + Send>>;
// ^^^^
...
Your code has a few other errors: an un-inferred vec!(), a self: Pin<...> missing mut, CustomResponse needing to be pub...
You can specify the B of client by using inference:
let client: Client<CustomTransporter, hyper::Body> = Client::builder().build(connector);
Or by using the turbofish operator on build:
let client = Client::builder().build::<CustomTransporter, hyper::Body>(connector);
I don't know enough about creating custom hyper transports to know if it's functional, but these fixes make it compile. Hopefully it helps you make progress.
I have the following setup:
use core::task::Poll;
use tokio::io::ReadBuf;
use core::task::Context;
use core::pin::Pin;
use std::error::Error;
use tokio::io::AsyncRead;
struct Dummy;
impl AsyncRead for Dummy {
fn poll_read(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &mut ReadBuf<'_>,
) -> Poll<tokio::io::Result<()>> {
Poll::Pending
}
}
fn request_peers() -> impl futures::stream::Stream<Item = impl futures::Future<Output = tokio::io::Result<impl tokio::io::AsyncRead>>> {
futures::stream::iter((0..10).map(move |i| {
futures::future::ok(Dummy{})
}))
}
async fn connect (
peers: impl futures::stream::Stream<Item = impl futures::Future<Output = tokio::io::Result<impl tokio::io::AsyncRead>>>
) -> impl futures::stream::Stream<Item = impl tokio::io::AsyncRead> {
todo!()
}
#[tokio::main]
async fn main() {
let peers = request_peers();
let connected_peers = connect(peers).await;
}
playground link
I want to connect all peers by awaiting a future and ignore the peers which do not connect. Ideally, I would want to keep the peers in a future::stream::Stream. I thought that the following might work:
use core::task::Poll;
use tokio::io::ReadBuf;
use core::task::Context;
use core::pin::Pin;
use std::error::Error;
use tokio::io::AsyncRead;
struct Dummy;
impl AsyncRead for Dummy {
fn poll_read(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &mut ReadBuf<'_>,
) -> Poll<tokio::io::Result<()>> {
Poll::Pending
}
}
fn request_peers() -> impl futures::stream::Stream<Item = impl futures::Future<Output = tokio::io::Result<impl tokio::io::AsyncRead>>> {
futures::stream::iter((0..10).map(move |i| {
println!("instantiated");
futures::future::ok(Dummy{})
}))
}
use futures::{StreamExt};
fn connect (
peers: impl futures::stream::Stream<Item = impl futures::Future<Output = tokio::io::Result<impl tokio::io::AsyncRead>>>
) -> impl futures::stream::Stream<Item = impl tokio::io::AsyncRead> {
peers.filter_map(|peer_fut| async move {
if let Ok(peer) = peer_fut.await {
tokio::time::sleep(core::time::Duration::from_secs(1)).await;
println!("connected");
Some(peer)
} else {
None
}
})
}
#[tokio::main]
async fn main() {
let peers = request_peers();
let connected_peers = connect(peers);
connected_peers.for_each_concurrent(None, |peer| async {
println!("processed")
}).await;
}
playground link
But the peers are not connected concurrently, so this will take 10 seconds to finish - instead of ~1 sec.
I notice if I return a Vec instead of future::stream::Stream it will connect the peers concurrently with the following code snippet:
use futures::{StreamExt};
async fn connect (
peers: impl futures::stream::Stream<Item = impl futures::Future<Output = tokio::io::Result<impl tokio::io::AsyncRead>>>
) -> Vec<impl tokio::io::AsyncRead> {
let mut peers = peers.map(|peer_fut| async move {
if let Ok(peer) = peer_fut.await {
tokio::time::sleep(core::time::Duration::from_secs(1)).await;
println!("connected");
Some(peer)
} else {
None
}
})
.buffer_unordered(50)
.collect::<Vec<_>>().await;
peers.into_iter().flatten().collect()
}
#[tokio::main]
async fn main() {
let peers = request_peers();
let connected_peers = connect(peers).await;
futures::stream::iter(connected_peers).for_each_concurrent(None, |peer| async {
println!("processed")
}).await;
}
playground link
Is there a way to do this without converting to Vec and instead keeping the futures::stream::Stream ?
This sounds like a good use case for FuturesUnordered.
You create a number of futures (e.g. by running map and collect on a Vec), then convert them into a FuturesUnordered, which is itself a stream that asynchronously yields the result of whichever future completes first.
If any futures return an error result, it could be skipped or handled appropriately.
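As a minimal sketch (reusing the request_peers/Dummy setup and the tokio runtime from the question; only connect changes), the connection futures can be collected into a FuturesUnordered and the filtered result returned as the stream of connected peers:
use futures::stream::{FuturesUnordered, StreamExt};
async fn connect(
    peers: impl futures::stream::Stream<Item = impl futures::Future<Output = tokio::io::Result<impl tokio::io::AsyncRead>>>,
) -> impl futures::stream::Stream<Item = impl tokio::io::AsyncRead> {
    // Push every connection future into a FuturesUnordered. It is itself a
    // Stream that yields each result as soon as it completes, so the
    // one-second "connects" overlap instead of running one after another.
    let connecting: FuturesUnordered<_> = peers
        .map(|peer_fut| async move {
            let peer = peer_fut.await.ok()?; // skip peers that fail to connect
            tokio::time::sleep(core::time::Duration::from_secs(1)).await;
            println!("connected");
            Some(peer)
        })
        .collect()
        .await;
    // Keep only the successfully connected peers.
    connecting.filter_map(|maybe_peer| async move { maybe_peer })
}
It is then consumed exactly like the Vec version in the question: let connected_peers = connect(peers).await; followed by for_each_concurrent.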
Forgive me in advance for the bad title. I will try to be clear in the description.
I am making an application that needs to work with tokio_postgres and tiberius.
I need to provide query parameters for both connectors. These are their signatures.
postgresql
tokio_postgres::client::Client
pub async fn query<T>(&self, statement: &T, params: &[&(dyn ToSql + Sync)]) -> Result<Vec<Row>, Error>
tiberius
tiberius::query::Query
pub fn bind(&mut self, param: impl IntoSql<'a> + 'a)
As you may observe, tokio_postgres accepts a reference to a slice of trait objects, which is really convenient. But my bottleneck is the param of tiberius.
Here's my code:
#[async_trait]
pub trait Transaction<T: Debug> {
/// Performs the necessary to execute a query against the database
async fn query<'a>(stmt: String, params: &'a [&'a (dyn QueryParameters<'a> + Sync)], datasource_name: &'a str)
-> Result<DatabaseResult<T>, Box<(dyn std::error::Error + Sync + Send + 'static)>>
{
let database_connection = if datasource_name == "" {
DatabaseConnection::new(&DEFAULT_DATASOURCE.properties).await
} else { // Get the specified one
DatabaseConnection::new(
&DATASOURCES.iter()
.find( |ds| ds.name == datasource_name)
.expect(&format!("No datasource found with the specified parameter: `{}`", datasource_name))
.properties
).await
};
if let Err(_db_conn) = database_connection {
todo!();
} else {
// No errors
let db_conn = database_connection.ok().unwrap();
match db_conn.database_type {
DatabaseType::PostgreSql => {
let mut m_params: Vec<&(dyn ToSql + Sync)> = Vec::new();
for p in params.iter() {
m_params.push(&p as &(dyn ToSql + Sync))
}
postgres_query_launcher::launch::<T>(db_conn, stmt, params).await
},
DatabaseType::SqlServer =>
sqlserver_query_launcher::launch::<T>(db_conn, stmt, params).await
}
}
}
}
where QueryParameters is:
pub trait QueryParameters<'a> {}
impl<'a> QueryParameters<'a> for i32 {}
impl<'a> QueryParameters<'a> for i64 {}
impl<'a> QueryParameters<'a> for &'a str {}
impl<'a> QueryParameters<'a> for String {}
impl<'a> QueryParameters<'a> for &'a String {}
impl<'a> QueryParameters<'a> for &'a [u8] {}
impl<'a> QueryParameters<'a> for &'a (dyn ToSql + Sync + Send) {}
impl<'a> QueryParameters<'a> for &'a dyn IntoSql<'a> {}
1st question:
I want to cast the &'a dyn QueryParameters<'a> to a &'a (dyn ToSql + Sync). Is it possible to cast from one trait object to another like this?
2nd question:
The .bind() method of the tiberius client only accepts values that impl IntoSql<'a>.
But I need to mix values of different types in my collection, even though they all already implement IntoSql<'a>. I would like to know how to... cast??? those values of type &'a dyn QueryParameters<'a> into something the function accepts.
Are those things possible?
NOTE: The launch methods from both modules are just wrappers over the method calls shown above, but they accept params: &'a [&'a dyn QueryParameters<'a>] as a parameter.
Edit:
pub async fn launch<'a, T>(
db_conn: DatabaseConnection,
stmt: String,
params: &'a [&'a dyn QueryParameters<'a>],
) -> Result<DatabaseResult<T>, Box<(dyn std::error::Error + Send + Sync + 'static)>>
where
T: Debug
{
let mut sql_server_query = Query::new(stmt);
params.into_iter().for_each( |param| sql_server_query.bind( param ));
let client: &mut Client<TcpStream> = &mut db_conn.sqlserver_connection
.expect("Error querying the SqlServer database") // TODO Better msg
.client;
let _results: Vec<Row> = sql_server_query.query(client).await?
.into_results().await?
.into_iter()
.flatten()
.collect::<Vec<_>>();
Ok(DatabaseResult::new(vec![]))
}
That's the most problematic part for me: .bind(impl IntoSql<'a> + 'a) means I should call this method for every parameter that I want to bind. I would like to cast &dyn QueryParameters<'a> to impl ..., but I don't know if that is even possible.
But, if I change the method signature to:
pub async fn launch<'a, T>(
db_conn: DatabaseConnection,
stmt: String,
params: &'a [impl IntoSql<'a> + 'a],
) -> Result<DatabaseResult<T>, Box<(dyn std::error::Error + Send + Sync + 'static)>>
I can only accept values of the same type. Imagine an insert query, for example: I need the flexibility to accept i32, i64, &str... depending on the column type. So this isn't valid for my case.
Edit 2
I've found a way to solve the postgres side of the issue.
trait AsAny {
fn as_any(&self) -> &dyn std::any::Any;
}
impl AsAny for i32 {
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
pub trait QueryParameters<'a> {
fn as_postgres_param(&self) -> &(dyn ToSql + Sync + 'a);
}
impl<'a> QueryParameters<'a> for i32 {
fn as_postgres_param(&self) -> &(dyn ToSql + Sync + 'a) {
let a: Box<&dyn AsAny> = Box::new(self);
match a.as_any().downcast_ref::<i32>() {
Some(b) => b,
None => panic!("Bad conversion of parameters"),
}
}
}
I don't know if it's elegant, or if it harms performance (it surely does), but now I can write:
let mut m_params: Vec<&(dyn ToSql + Sync)> = Vec::new();
for param in params {
m_params.push(param.as_postgres_param());
}
let query_result = client.query(&stmt, m_params.as_slice()).await;
But I still can't figure out how to work with the impl IntoSql<'a> + 'a of tiberius.
Essentially, you need a &dyn QueryParameter to work as both a &dyn ToSql and an impl IntoSql, right? Let's start from scratch:
trait QueryParameter {}
The &dyn ToSql part is easy since you can use the trick shown in this answer. You need your QueryParameter trait to have a method that converts from &self to &dyn ToSql. Like so:
trait QueryParameter {
fn as_to_sql(&self) -> &dyn ToSql;
The impl IntoSql is trickier since consuming trait objects is a dicey affair. However, to implement the trait, we only need to construct a ColumnData. And we'll see in a second that it's just that simple:
trait QueryParameter {
fn as_column_data(&self) -> ColumnData<'_>;
because we can next implement IntoSql for &dyn QueryParameter like I mentioned in your other question:
impl<'a> IntoSql<'a> for &'a dyn QueryParameter {
fn into_sql(self) -> ColumnData<'a> {
self.as_column_data()
}
}
And besides the implementations of QueryParameter itself, that's it! We need to sprinkle in some Sync since ToSql and IntoSql require it, but this is a (mostly) working example:
use tiberius::{ColumnData, IntoSql, Query};
use tokio_postgres::types::ToSql;
trait QueryParameter: Sync {
fn as_to_sql(&self) -> &(dyn ToSql + Sync);
fn as_column_data(&self) -> ColumnData<'_>;
}
impl QueryParameter for i32 {
fn as_to_sql(&self) -> &(dyn ToSql + Sync) { self }
fn as_column_data(&self) -> ColumnData<'_> { ColumnData::I32(Some(*self)) }
}
impl QueryParameter for i64 {
fn as_to_sql(&self) -> &(dyn ToSql + Sync) { self }
fn as_column_data(&self) -> ColumnData<'_> { ColumnData::I64(Some(*self)) }
}
impl QueryParameter for &'_ str {
fn as_to_sql(&self) -> &(dyn ToSql + Sync) { self }
fn as_column_data(&self) -> ColumnData<'_> { ColumnData::String(Some((*self).into())) }
}
impl QueryParameter for String {
fn as_to_sql(&self) -> &(dyn ToSql + Sync) { self }
fn as_column_data(&self) -> ColumnData<'_> { ColumnData::String(Some(self.into())) }
}
impl<'a> IntoSql<'a> for &'a dyn QueryParameter {
fn into_sql(self) -> ColumnData<'a> {
self.as_column_data()
}
}
async fn via_tiberius(stmt: &str, params: &[&dyn QueryParameter]) {
let mut client: tiberius::Client<_> = todo!();
let mut query = Query::new(stmt);
for &param in params {
query.bind(param)
}
let _ = query.execute(&mut client).await;
}
async fn via_tokio_postgres(stmt: &str, params: &[&dyn QueryParameter]) {
let client: tokio_postgres::Client = todo!();
let params: Vec<_> = params.iter().map(|p| p.as_to_sql()).collect();
let _ = client.query(stmt, &params).await;
}
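A hypothetical call site, just to show the point of the trait: one mixed-type slice of &dyn QueryParameter feeds either backend (the statement text and column names are made up, and the via_* helpers above still contain todo!() clients, so this is illustrative only):
async fn demo() {
    let alias = String::from("bob");
    // i32, i64, &str and String all implement QueryParameter,
    // so they can share one slice of trait objects.
    let params: &[&dyn QueryParameter] = &[&1_i32, &9_i64, &"alice", &alias];
    via_tokio_postgres("INSERT INTO users (id, big_id, name, alias) VALUES ($1, $2, $3, $4)", params).await;
    via_tiberius("INSERT INTO users (id, big_id, name, alias) VALUES (@P1, @P2, @P3, @P4)", params).await;
}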
I'm trying to implement an async read wrapper that will add read timeout functionality. The objective is that the API stays plain AsyncRead. In other words, I don't want to add io.read(buf).timeout(t) everywhere in the code. Instead, the read instance itself should return the appropriate io::ErrorKind::TimedOut after the given timeout expires.
I can't poll the delay to Ready, though. It's always Pending. I've tried with async-std, futures, smol-timeout - the same result. While the timeout does trigger when awaited, it just doesn't when polled. I know timeouts aren't easy; something needs to wake them up. What am I doing wrong? How do I pull this through?
use async_std::{
future::Future,
io,
pin::Pin,
task::{sleep, Context, Poll},
};
use std::time::Duration;
pub struct PrudentIo<IO> {
expired: Option<Pin<Box<dyn Future<Output = ()> + Sync + Send>>>,
timeout: Duration,
io: IO,
}
impl<IO> PrudentIo<IO> {
pub fn new(timeout: Duration, io: IO) -> Self {
PrudentIo {
expired: None,
timeout,
io,
}
}
}
fn delay(t: Duration) -> Option<Pin<Box<dyn Future<Output = ()> + Sync + Send + 'static>>> {
if t.is_zero() {
return None;
}
Some(Box::pin(sleep(t)))
}
impl<IO: io::Read + Unpin> io::Read for PrudentIo<IO> {
fn poll_read(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &mut [u8],
) -> Poll<io::Result<usize>> {
if let Some(ref mut expired) = self.expired {
match expired.as_mut().poll(cx) {
Poll::Ready(_) => {
println!("expired ready");
// too much time passed since last read/write
return Poll::Ready(Err(io::ErrorKind::TimedOut.into()));
}
Poll::Pending => {
println!("expired pending");
// in good time
}
}
}
let res = Pin::new(&mut self.io).poll_read(cx, buf);
println!("read {:?}", res);
match res {
Poll::Pending => {
if self.expired.is_none() {
// No data, start checking for a timeout
self.expired = delay(self.timeout);
}
}
Poll::Ready(_) => self.expired = None,
}
res
}
}
impl<IO: io::Write + Unpin> io::Write for PrudentIo<IO> {
fn poll_write(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &[u8],
) -> Poll<io::Result<usize>> {
Pin::new(&mut self.io).poll_write(cx, buf)
}
fn poll_flush(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<io::Result<()>> {
Pin::new(&mut self.io).poll_flush(cx)
}
fn poll_close(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<io::Result<()>> {
Pin::new(&mut self.io).poll_close(cx)
}
}
#[cfg(test)]
mod io_tests {
use super::*;
use async_std::io::ReadExt;
use async_std::prelude::FutureExt;
use async_std::{
io::{copy, Cursor},
net::TcpStream,
};
use std::time::Duration;
#[async_std::test]
async fn fail_read_after_timeout() -> io::Result<()> {
let mut output = b"______".to_vec();
let io = PendIo;
let mut io = PrudentIo::new(Duration::from_millis(5), io);
let mut io = Pin::new(&mut io);
insta::assert_debug_snapshot!(io.read(&mut output[..]).timeout(Duration::from_secs(1)).await, @"Ok(io::Err(timeou))");
Ok(())
}
#[async_std::test]
async fn timeout_expires() {
let later = delay(Duration::from_millis(1)).expect("some").await;
insta::assert_debug_snapshot!(later, @r"()");
}
/// Mock IO always pending
struct PendIo;
impl io::Read for PendIo {
fn poll_read(
self: Pin<&mut Self>,
_cx: &mut Context<'_>,
_buf: &mut [u8],
) -> Poll<futures_io::Result<usize>> {
Poll::Pending
}
}
impl io::Write for PendIo {
fn poll_write(
self: Pin<&mut Self>,
_cx: &mut Context<'_>,
_buf: &[u8],
) -> Poll<futures_io::Result<usize>> {
Poll::Pending
}
fn poll_flush(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<futures_io::Result<()>> {
Poll::Pending
}
fn poll_close(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<futures_io::Result<()>> {
Poll::Pending
}
}
}
Async timeouts work as follows:
1. You create the timeout future.
2. The runtime calls poll on the timeout, which checks whether the timeout has expired.
3. If it has expired, it returns Ready and is done.
4. If it has not expired, it somehow registers a callback for when the right time has passed, which calls cx.waker().wake() or similar.
5. When the time has passed, the callback from step 4 is invoked; it calls wake() on the proper waker, which instructs the runtime to call poll again.
6. This time poll will return Ready. Done!
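To make steps 4 and 5 concrete, here is a minimal hand-rolled sleep future (a sketch only: real runtimes use a timer queue rather than a thread per timer, and this is not how async-std's sleep is implemented):
use std::{
    future::Future,
    pin::Pin,
    sync::{Arc, Mutex},
    task::{Context, Poll, Waker},
    thread,
    time::{Duration, Instant},
};
struct Sleep {
    deadline: Instant,
    waker: Arc<Mutex<Option<Waker>>>,
    started: bool,
}
impl Sleep {
    fn new(d: Duration) -> Self {
        Sleep {
            deadline: Instant::now() + d,
            waker: Arc::new(Mutex::new(None)),
            started: false,
        }
    }
}
impl Future for Sleep {
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        // Steps 2/3: if the deadline has passed, we are done.
        if Instant::now() >= self.deadline {
            return Poll::Ready(());
        }
        // Step 4: remember the current waker and arrange for it to be called.
        *self.waker.lock().unwrap() = Some(cx.waker().clone());
        if !self.started {
            self.started = true;
            let deadline = self.deadline;
            let waker = Arc::clone(&self.waker);
            thread::spawn(move || {
                thread::sleep(deadline.saturating_duration_since(Instant::now()));
                // Step 5: wake the task so the runtime polls this future again.
                if let Some(w) = waker.lock().unwrap().take() {
                    w.wake();
                }
            });
        }
        Poll::Pending
    }
}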
The problem with your code is that you create the delay from inside the poll() implementation: self.expired = delay(self.timeout);. But then you return Pending without polling the timeout even once. This way, there is no callback registered anywhere that would call the Waker. No waker, no timeout.
I see several solutions:
A. Do not initialize PrudentIo::expired to None, but create the timeout directly in the constructor. That way the timeout will always be polled at least once before the io, and so it will be woken. But you will always create a timeout, even if it is not actually needed.
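For example, a minimal sketch of option A, reusing the question's delay helper and PrudentIo fields:
impl<IO> PrudentIo<IO> {
    pub fn new(timeout: Duration, io: IO) -> Self {
        PrudentIo {
            // Arm the timer up front so it is polled (and its waker registered)
            // on the very first poll_read. `delay` returns None for a zero timeout.
            expired: delay(timeout),
            timeout,
            io,
        }
    }
}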
B. When creating the timeout do a recursive poll:
Poll::Pending => {
if self.expired.is_none() {
// No data, start checking for a timeout
self.expired = delay(self.timeout);
return self.poll_read(cx, buf);
}
This will call the io twice, unnecessarily, so it may not be optimal.
C. Add a call to poll after creating the timeout:
Poll::Pending => {
if self.expired.is_none() {
// No data, start checking for a timeout
self.expired = delay(self.timeout);
self.expired.as_mut().unwrap().as_mut().poll(cx);
}
Maybe you should match the output of poll in case it returns Ready, but hey, it's a brand-new timeout, it's most likely still pending, and it seems to work nicely.
// This is another solution. I think it is better.
// In this variant `expired` is assumed to be an `Option<Timer>` field, with
// smol's `FutureExt` in scope so `.poll(cx)` can be called on the timer.
impl<IO: io::AsyncRead + Unpin> io::AsyncRead for PrudentIo<IO> {
fn poll_read(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &mut [u8],
) -> Poll<io::Result<usize>> {
let this = self.get_mut();
let io = Pin::new(&mut this.io);
if let Poll::Ready(res) = io.poll_read(cx, buf) {
return Poll::Ready(res);
}
loop {
if let Some(expired) = this.expired.as_mut() {
ready!(expired.poll(cx));
this.expired.take();
return Poll::Ready(Err(io::ErrorKind::TimedOut.into()));
}
let timeout = Timer::after(this.timeout);
this.expired = Some(timeout);
}
}
}
// 1. smol used, not async_std.
// 2. IO should be 'static.
// 3. when timeout, read_poll return Poll::Ready::Err(io::ErrorKind::Timeout)
use {
smol::{future::FutureExt, io, ready, Timer},
std::{
future::Future,
pin::Pin,
task::{Context, Poll},
time::Duration,
},
};
// --
pub struct PrudentIo<IO> {
expired: Option<Pin<Box<dyn Future<Output = io::Result<usize>>>>>,
timeout: Duration,
io: IO,
}
impl<IO> PrudentIo<IO> {
pub fn new(timeout: Duration, io: IO) -> Self {
PrudentIo {
expired: None,
timeout,
io,
}
}
}
impl<IO: io::AsyncRead + Unpin + 'static> io::AsyncRead for PrudentIo<IO> {
fn poll_read(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &mut [u8],
) -> Poll<io::Result<usize>> {
let this = self.get_mut();
loop {
if let Some(expired) = this.expired.as_mut() {
let res = ready!(expired.poll(cx))?;
this.expired.take();
return Ok(res).into();
}
let timeout = this.timeout.clone();
let (io, read_buf) = unsafe {
// Safety: ONLY used in poll_read method.
(&mut *(&mut this.io as *mut IO), &mut *(buf as *mut [u8]))
};
let fut = async move {
let timeout_fut = async {
Timer::after(timeout).await;
io::Result::<usize>::Err(io::ErrorKind::TimedOut.into())
};
let read_fut = io::AsyncReadExt::read(io, read_buf);
let res = read_fut.or(timeout_fut).await;
res
}
.boxed_local();
this.expired = Some(fut);
}
}
}
impl<IO: io::AsyncWrite + Unpin> io::AsyncWrite for PrudentIo<IO> {
fn poll_write(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &[u8],
) -> Poll<io::Result<usize>> {
Pin::new(&mut self.io).poll_write(cx, buf)
}
fn poll_flush(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<io::Result<()>> {
Pin::new(&mut self.io).poll_flush(cx)
}
fn poll_close(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<io::Result<()>> {
Pin::new(&mut self.io).poll_close(cx)
}
}
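A hypothetical usage sketch of the smol-based wrapper above, with a made-up NeverReady reader standing in for an idle connection, so the read fails with ErrorKind::TimedOut after 50 ms:
struct NeverReady;
impl io::AsyncRead for NeverReady {
    fn poll_read(
        self: Pin<&mut Self>,
        _cx: &mut Context<'_>,
        _buf: &mut [u8],
    ) -> Poll<io::Result<usize>> {
        Poll::Pending
    }
}
fn main() {
    smol::block_on(async {
        let mut reader = PrudentIo::new(Duration::from_millis(50), NeverReady);
        let mut buf = [0u8; 64];
        // The caller just performs a plain read; the timeout is internal to PrudentIo.
        let err = smol::io::AsyncReadExt::read(&mut reader, &mut buf)
            .await
            .unwrap_err();
        assert_eq!(err.kind(), io::ErrorKind::TimedOut);
        println!("read timed out as expected");
    });
}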
I am writing a server that allocates some compressed data on startup. Now when I serve a hyper response I do not want to copy these bytes, but I cannot figure out a way to do this with hyper.
I have tried implementing HttpBody for my own type, but the lifetime restriction on the trait blocks me from doing this.
Am I missing something? Here is a minimal example of what I am trying to do:
use hyper::{service::Service, Body, Request, Response, Server};
use std::net::SocketAddr;
use std::sync::Arc;
use std::{
future::Future,
pin::Pin,
task::{Context, Poll},
};
fn main() {
let addr = SocketAddr::new("0.0.0.0".parse().unwrap(), 8080);
println!("Server startup...");
let rt = tokio::runtime::Runtime::new().unwrap();
rt.block_on(async {
let app = MakeSvc::new().await;
let ret = app.clone();
let server = Server::bind(&addr).serve(app);
println!("Running on {}", &addr);
server.await.unwrap();
})
}
#[derive(Debug, Clone)]
pub struct WrapperApp {
pub cache: Arc<Vec<u8>>,
}
impl WrapperApp {
//Let's say I allocate some bytes here.
async fn new() -> Self {
Self {
cache: Arc::new(Vec::new()),
}
}
}
impl Service<Request<Body>> for WrapperApp {
type Response = Response<Body>;
type Error = hyper::Error;
type Future = Pin<Box<dyn Future<Output = Result<Self::Response, Self::Error>> + Send>>;
fn poll_ready(&mut self, _: &mut Context) -> Poll<Result<(), Self::Error>> {
Poll::Ready(Ok(()))
}
fn call(&mut self, req: Request<Body>) -> Self::Future {
let a = Arc::clone(&self.cache);
return Box::pin(async { Ok(Response::builder().body(Body::from(a)).unwrap()) });
}
}
#[derive(Debug, Clone)]
pub struct MakeSvc {
app: WrapperApp,
}
impl MakeSvc {
pub async fn new() -> Self {
Self {
app: WrapperApp::new().await,
}
}
}
impl<T> Service<T> for MakeSvc {
type Response = WrapperApp;
type Error = hyper::Error;
type Future = Pin<Box<dyn Future<Output = Result<Self::Response, Self::Error>> + Send>>;
fn poll_ready(&mut self, _: &mut Context) -> Poll<Result<(), Self::Error>> {
Poll::Ready(Ok(()))
}
fn call(&mut self, _: T) -> Self::Future {
let app = self.app.clone();
let fut = async move { Ok(app) };
Box::pin(fut)
}
}
error[E0277]: the trait bound `Body: From<Arc<Vec<u8>>>` is not satisfied
--> src/main.rs:50:61
|
50 | return Box::pin(async { Ok(Response::builder().body(Body::from(a)).unwrap()) });
| ^^^^^^^^^^ the trait `From<Arc<Vec<u8>>>` is not implemented for `Body`
|
= help: the following implementations were found:
<Body as From<&'static [u8]>>
<Body as From<&'static str>>
<Body as From<Box<(dyn futures_core::stream::Stream<Item = std::result::Result<hyper::body::Bytes, Box<(dyn std::error::Error + Send + Sync + 'static)>>> + Send + 'static)>>>
<Body as From<Cow<'static, [u8]>>>
and 4 others
= note: required by `from`
Here is the Cargo.toml that goes with this. The example breaks at the place where I am trying to use the ref:
[package]
name = "hyper-ptr"
version = "0.1.0"
authors = ["Pierre Laas <lanklaas123@gmail.com>"]
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
hyper={version="0.14", features=["server","http1","tcp"]}
tokio={version="1.0.0", features=["macros","io-util","rt-multi-thread"]}
serde={version="*", features=["derive","rc"]}
serde_json="*"
flate2="*"
openssl="*"
rand="*"
From the documentation, there is no From implementation that doesn't require either 'static references or ownership of the body.
Instead, you can clone the cache. Body's From implementations require ownership of the Vec<u8> or static data (if that is useful for your case).
Notice that you need to dereference the Arc first:
use hyper::Body; // 0.14.2
use std::sync::Arc;
const FOO: [u8; 1] = [8u8];
fn main() {
let cache = Arc::new(vec![0u8]);
let body = Body::from((*cache).clone());
let other_body = Body::from(&FOO[..]);
println!("{:?}", cache);
}
Playground
I can't figure out how to provide a Stream where I await async functions to get the data needed for the values of the stream.
I've tried to implement the Stream trait directly, but I run into issues because I'd like to use async features like awaiting, and the compiler does not want me to call async functions there.
I assume that I'm missing some background on what the goal of Stream is and I'm just attacking this incorrectly and perhaps I shouldn't be looking at Stream at all, but I don't know where else to turn. I've seen the other functions in the stream module that could be useful, but I'm unsure how I could store any state and use these functions.
As a slightly simplified version of my actual goal, I want to provide a stream of 64-byte Vecs from an AsyncRead object (i.e. a TCP stream), but also store a little state inside whatever logic ends up producing values for the stream - in this example, a counter.
pub struct Receiver<T>
where
T: AsyncRead + Unpin,
{
readme: T,
num: u64,
}
// ..code for a simple `new() -> Self` function..
impl<T> Stream for Receiver<T>
where
T: AsyncRead + Unpin,
{
type Item = Result<Vec<u8>, io::Error>;
fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
let mut buf: [u8; 64] = [0; 64];
match self.readme.read_exact(&mut buf).await {
Ok(()) => {
self.num += 1;
Poll::Ready(Some(Ok(buf.to_vec())))
}
Err(e) => Poll::Ready(Some(Err(e))),
}
}
}
This fails to build, saying
error[E0728]: `await` is only allowed inside `async` functions and blocks
I'm using rustc 1.36.0-nightly (d35181ad8 2019-05-20) and my Cargo.toml looks like this:
[dependencies]
futures-preview = { version = "0.3.0-alpha.16", features = ["compat", "io-compat"] }
pin-utils = "0.1.0-alpha.4"
Answer copy/pasted from the reddit post by user Matthias247:
It's unfortunately not possible at the moment - Streams have to be implemented by hand and can not utilize async fn. Whether it's possible to change this in the future is unclear.
You can work around it by defining a different Stream trait which makes use of Futures like:
trait Stream<T> {
type NextFuture: Future<Output=T>;
fn next(&mut self) -> Self::NextFuture;
}
This article and this futures-rs issue have more information around it.
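For reference, a hand-written poll_next for the Receiver in the question could look roughly like this (a sketch assuming futures 0.3's AsyncRead; the buf/filled fields are additions here so partial reads carry over between polls):
use futures::io::AsyncRead;
use futures::stream::Stream;
use std::io;
use std::pin::Pin;
use std::task::{Context, Poll};
pub struct Receiver<T> {
    readme: T,
    num: u64,
    buf: [u8; 64],
    filled: usize, // how many bytes of `buf` are valid so far
}
impl<T: AsyncRead + Unpin> Stream for Receiver<T> {
    type Item = Result<Vec<u8>, io::Error>;
    fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
        let this = self.get_mut();
        // Keep reading until a full 64-byte chunk is buffered; when the reader
        // returns Pending it has already registered the waker for us.
        while this.filled < 64 {
            match Pin::new(&mut this.readme).poll_read(cx, &mut this.buf[this.filled..]) {
                Poll::Pending => return Poll::Pending,
                // EOF ends the stream (any partial chunk is discarded).
                Poll::Ready(Ok(0)) => return Poll::Ready(None),
                Poll::Ready(Ok(n)) => this.filled += n,
                Poll::Ready(Err(e)) => return Poll::Ready(Some(Err(e))),
            }
        }
        this.filled = 0;
        this.num += 1;
        Poll::Ready(Some(Ok(this.buf.to_vec())))
    }
}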
You can do it with the gen-stream crate:
#![feature(generators, generator_trait, gen_future)]
use {
futures::prelude::*,
gen_stream::{gen_await, GenTryStream},
pin_utils::unsafe_pinned,
std::{
io,
marker::PhantomData,
pin::Pin,
sync::{
atomic::{AtomicU64, Ordering},
Arc,
},
task::{Context, Poll},
},
};
pub type Inner = Pin<Box<dyn Stream<Item = Result<Vec<u8>, io::Error>> + Send>>;
pub struct Receiver<T> {
inner: Inner,
pub num: Arc<AtomicU64>,
_marker: PhantomData<T>,
}
impl<T> Receiver<T> {
unsafe_pinned!(inner: Inner);
}
impl<T> From<T> for Receiver<T>
where
T: AsyncRead + Unpin + Send + 'static,
{
fn from(mut readme: T) -> Self {
let num = Arc::new(AtomicU64::new(0));
Self {
inner: Box::pin(GenTryStream::from({
let num = num.clone();
static move || loop {
let mut buf: [u8; 64] = [0; 64];
match gen_await!(readme.read_exact(&mut buf)) {
Ok(()) => {
num.fetch_add(1, Ordering::Relaxed);
yield Poll::Ready(buf.to_vec())
}
Err(e) => return Err(e),
}
}
})),
num,
_marker: PhantomData,
}
}
}
impl<T> Stream for Receiver<T>
where
T: AsyncRead + Unpin,
{
type Item = Result<Vec<u8>, io::Error>;
fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
self.inner().poll_next(cx)
}
}