How to POST a file using reqwest? - rust

The documentation for reqwest v0.9.18 shows the following example of posting a file:
let file = fs::File::open("from_a_file.txt")?;
let client = reqwest::Client::new();
let res = client.post("http://httpbin.org/post")
.body(file)
.send()?;
The latest documentation for reqwest v0.11 no longer includes this example, and trying to build it fails with the following error when calling body():
the trait `From<std::fs::File>` is not implemented for `Body`
What is the updated method for sending a file?

The specific example you're linking to is from before the reqwest crate switched to async. If you want to use that exact example, then instead of reqwest::Client, you need to use reqwest::blocking::Client. This also requires enabling the blocking feature.
To be clear, you can actually still find that example; it's just located in the docs for reqwest::blocking::RequestBuilder's body() method instead.
// reqwest = { version = "0.11", features = ["blocking"] }
use reqwest::blocking::Client;
use std::fs::File;
fn main() -> Result<(), Box<dyn std::error::Error>> {
let file = File::open("from_a_file.txt")?;
let client = Client::new();
let res = client.post("http://httpbin.org/post")
.body(file)
.send()?;
Ok(())
}
Also check out reqwest's multipart Form and RequestBuilder's multipart() method; the blocking Form, for instance, has a file() method that reads a file from disk for you (see the sketch below).
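A minimal sketch of that blocking multipart API, assuming the multipart feature is enabled alongside blocking:
// reqwest = { version = "0.11", features = ["blocking", "multipart"] }
use reqwest::blocking::{multipart, Client};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Form::file() reads the file from disk and guesses its MIME type.
    let form = multipart::Form::new().file("file", "from_a_file.txt")?;
    let client = Client::new();
    let res = client.post("http://httpbin.org/post")
        .multipart(form)
        .send()?;
    println!("{}", res.status());
    Ok(())
}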
If you do want to use async, then you can use FramedRead from the tokio-util crate, along with the TryStreamExt trait from the futures crate.
Just make sure to enable the stream feature for reqwest and the codec feature for tokio-util.
// futures = "0.3"
use futures::stream::TryStreamExt;
// reqwest = { version = "0.11", features = ["stream"] }
use reqwest::{Body, Client};
// tokio = { version = "1.0", features = ["full"] }
use tokio::fs::File;
// tokio-util = { version = "0.6", features = ["codec"] }
use tokio_util::codec::{BytesCodec, FramedRead};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let file = File::open("from_a_file.txt").await?;
let client = Client::new();
let res = client
.post("http://httpbin.org/post")
.body(file_to_body(file))
.send()
.await?;
Ok(())
}
fn file_to_body(file: File) -> Body {
let stream = FramedRead::new(file, BytesCodec::new());
Body::wrap_stream(stream)
}

If you want to use multipart/form-data and you are using Tokio
already, this approach could help you.
1. Setup Dependencies
# Cargo.toml
[dependencies]
tokio = { version = "1.19", features = ["macros", "rt-multi-thread"] }
reqwest = { version = "0.11.11", features = ["stream","multipart","json"] }
tokio-util = { version = "0.7.3", features = ["codec"] }
anyhow = "1.0"
2. Upload file using multipart/form-data
use reqwest::{multipart, Body, Client};
use tokio::fs::File;
use tokio_util::codec::{BytesCodec, FramedRead};
async fn reqwest_multipart_form(url: &str) -> anyhow::Result<String> {
let client = Client::new();
let file = File::open(".gitignore").await?;
// read file body stream
let stream = FramedRead::new(file, BytesCodec::new());
let file_body = Body::wrap_stream(stream);
//make form part of file
let some_file = multipart::Part::stream(file_body)
.file_name("gitignore.txt")
.mime_str("text/plain")?;
//create the multipart form
let form = multipart::Form::new()
.text("username", "seanmonstar")
.text("password", "secret")
.part("file", some_file);
//send request
let response = client.post(url).multipart(form).send().await?;
let result = response.text().await?;
Ok(result)
}
3. Unit Testing
#[cfg(test)]
mod tests {
use super::*;
#[tokio::test]
async fn test_post_form_file() {
let url = "http://httpbin.org/post?a=1&b=true";
let get_json = reqwest_multipart_form(url).await.unwrap();
println!("users: {:#?}", get_json);
}
}

The streamer crate can do that for you with its hyper feature enabled:
use hyper::{Body, Request};
let file = File::open("from_a_file.txt").unwrap();
let mut streaming = Streamer::new(file);
// optional, set the field name
// streaming.meta.set_name("txt");
// optional, set the file name
streaming.meta.set_filename("from_a_file.txt");
// length sent as a chunk, the default is 64kB if not set
streaming.meta.set_buf_len(1024 * 1024);
let body: Body = streaming.streaming();
// build a request
let request: Request<Body> = Request::post("<uri-here>").body(body).expect("failed to build a request");
streamer will stream your file in 1-megabyte chunks.

How to add tracing to a Rust microservice?

I built a microservice in Rust. I receive messages, request a document based on the message, and call a REST API with the results. I built the REST API with warp and send out the result with reqwest. We use Jaeger for tracing and the "b3" format. I have no experience with tracing and am a Rust beginner.
Question: What do I need to add to the warp / reqwest source below to propagate the tracing information and add my own span?
My version endpoint (for simplicity) looks like:
pub async fn version() -> Result<impl warp::Reply, Infallible> {
Ok(warp::reply::with_status(VERSION, http::StatusCode::OK))
}
I assume I have to extract e.g. the traceid / trace information here.
A reqwest call I do looks like this:
pub async fn get_document_content_as_text(
account_id: &str,
hash: &str,
) -> Result<String, Box<dyn std::error::Error>> {
let client = reqwest::Client::builder().build()?;
let res = client
.get(url)
.bearer_auth(TOKEN)
.send()
.await?;
if res.status().is_success() {}
let text = res.text().await?;
Ok(text)
}
I assume I have to add the traceid / trace information here.
You need to add a tracing filter into your warp filter pipeline.
From the documentation example:
use warp::Filter;
let route = warp::any()
.map(warp::reply)
.with(warp::trace(|info| {
// Create a span using tracing macros
tracing::info_span!(
"request",
method = %info.method(),
path = %info.path(),
)
}));
I'll assume that you're using tracing within your application and using opentelemetry and opentelemetry-jaeger to wire it up to an external service. The specific provider you're using doesn't matter. Here's a super simple setup to get that all working that I'll assume you're using on both applications:
# Cargo.toml
[dependencies]
opentelemetry = "0.17.0"
opentelemetry-jaeger = "0.16.0"
tracing = "0.1.33"
tracing-subscriber = { version = "0.3.11", features = ["env-filter"] }
tracing-opentelemetry = "0.17.2"
reqwest = "0.11.11"
tokio = { version = "1.21.1", features = ["macros", "rt", "rt-multi-thread"] }
warp = "0.3.2"
use tracing_subscriber::layer::SubscriberExt;
use tracing_subscriber::util::SubscriberInitExt;
opentelemetry::global::set_text_map_propagator(opentelemetry_jaeger::Propagator::new());
tracing_subscriber::registry()
.with(tracing_opentelemetry::layer().with_tracer(
opentelemetry_jaeger::new_pipeline()
.with_service_name("client") // or "server"
.install_simple()
.unwrap())
).init();
Let's say the "client" application is set up like so:
#[tracing::instrument]
async fn call_hello() {
let client = reqwest::Client::default();
let _resp = client
.get("http://127.0.0.1:3030/hello")
.send()
.await
.unwrap()
.text()
.await
.unwrap();
}
#[tokio::main]
async fn main() {
// ... initialization above ...
call_hello().await;
}
The traces produced by the client are a bit chatty because of other crates, but fairly simple, and they do not include the server side:
Let's say the "server" application is set up like so:
#[tracing::instrument]
fn hello_handler() -> &'static str {
tracing::info!("got hello message");
"hello world"
}
#[tokio::main]
async fn main() {
// ... initialization above ...
let routes = warp::path("hello")
.map(hello_handler);
warp::serve(routes).run(([127, 0, 0, 1], 3030)).await;
}
Likewise, the traces produced by the server are pretty bare-bones:
The key part to marrying these two traces is to declare the client-side trace as the parent of the server-side trace. This can be done over HTTP requests with the traceparent and tracestate headers as designed by the W3C Trace Context standard. There is a TraceContextPropagator available from the opentelemetry crate that can be used to "extract" and "inject" these values (though as you'll see, it's not very easy to work with, since it only works on HashMap<String, String>s).
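(For reference, a traceparent value has the shape version-traceid-parentid-flags, for example: traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01.)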
For the "client" to send these headers, you'll need to:
get the current tracing Span
get the opentelemetry Context from the Span (if you're not using tracing at all, you can skip the first step and use Context::current() directly)
create the propagator and a map of fields to propagate, and "inject" them from the Context
use those fields as headers for reqwest
use opentelemetry::propagation::TextMapPropagator;
use opentelemetry::sdk::propagation::TraceContextPropagator;
use reqwest::header::{HeaderName, HeaderValue};
use std::collections::HashMap;
use tracing_opentelemetry::OpenTelemetrySpanExt;
#[tracing::instrument]
async fn call_hello() {
let span = tracing::Span::current();
let context = span.context();
let propagator = TraceContextPropagator::new();
let mut fields = HashMap::new();
propagator.inject_context(&context, &mut fields);
let headers = fields
.into_iter()
.map(|(k, v)| {(
HeaderName::try_from(k).unwrap(),
HeaderValue::try_from(v).unwrap(),
)})
.collect();
let client = reqwest::Client::default();
let _resp = client
.get("http://127.0.0.1:3030/hello")
.headers(headers)
.send()
.await
.unwrap()
.text()
.await
.unwrap();
}
For the "server" to make use of those headers, you'll need to:
pull them out from the request and store them in a HashMap
use the propagator to "extract" the values into a Context
set that Context as the parent of the current tracing Span (if you didn't use tracing, you could .attach() it instead)
use opentelemetry::propagation::TextMapPropagator;
use opentelemetry::sdk::propagation::TraceContextPropagator;
use std::collections::HashMap;
use tracing_opentelemetry::OpenTelemetrySpanExt;
#[tracing::instrument]
fn hello_handler(traceparent: Option<String>, tracestate: Option<String>) -> &'static str {
let fields: HashMap<_, _> = [
dbg!(traceparent).map(|value| ("traceparent".to_owned(), value)),
dbg!(tracestate).map(|value| ("tracestate".to_owned(), value)),
]
.into_iter()
.flatten()
.collect();
let propagator = TraceContextPropagator::new();
let context = propagator.extract(&fields);
let span = tracing::Span::current();
span.set_parent(context);
tracing::info!("got hello message");
"hello world"
}
#[tokio::main]
async fn main() {
// ... initialization above ...
let routes = warp::path("hello")
.and(warp::header::optional("traceparent"))
.and(warp::header::optional("tracestate"))
.map(hello_handler);
warp::serve(routes).run(([127, 0, 0, 1], 3030)).await;
}
With all that, hopefully your traces have now been associated with one another!
Full code is available here and here.
Please, someone let me know if there is a better way! It seems ridiculous to me that there isn't better integration available. Sure some of this could maybe be a bit simpler and/or wrapped up in some nice middleware for your favorite client and server of choice... But I haven't found a crate or snippet of that anywhere!

rust AWS multipart upload using rusoto, multithreaded (rayon) panicked at 'there is no reactor running...'

I'm trying to upload a file to AWS in Rust, using the rusoto_s3 S3 client. I managed to get the multipart upload code working when the parts are sent from a single thread; however, that is not what I want. I want to upload big files and be able to send the parts from multiple threads, so I did a little googling and came across rayon.
For reference, multipart upload works as follows:
Initiate the multipart upload -> AWS will return an upload ID
Use this ID to send the individual parts, passing the file chunk and the part number -> AWS will return an ETag for each part
Once all the parts are sent, send a complete-upload request containing an array of the completed parts (ETag and part number).
I'm new to Rust, coming from a C++ and Java background. Here is my code:
#[tokio::test]
async fn if_multipart_then_upload_multiparts_dicom() {
let now = Instant::now();
dotenv().ok();
let local_filename = "./files/test_big.DCM";
let destination_filename = "24_time_test.dcm";
let mut file = std::fs::File::open(local_filename).unwrap();
const CHUNK_SIZE: usize = 7_000_000;
let mut buffer = Vec::with_capacity(CHUNK_SIZE);
let client = super::get_client().await;
let create_multipart_request = CreateMultipartUploadRequest {
bucket: client.bucket_name.to_owned(),
key: destination_filename.to_owned(),
..Default::default()
};
// Start the multipart upload and note the upload_id generated
let response = client
.s3
.create_multipart_upload(create_multipart_request)
.await
.expect("Couldn't create multipart upload");
let upload_id = response.upload_id.unwrap();
// Create upload parts
let create_upload_part = |body: Vec<u8>, part_number: i64| -> UploadPartRequest {
UploadPartRequest {
body: Some(body.into()),
bucket: client.bucket_name.to_owned(),
key: destination_filename.to_owned(),
upload_id: upload_id.to_owned(),
part_number: part_number,
..Default::default()
}
};
let completed_parts = Arc::new(Mutex::new(vec![]));
rayon::scope(|scope| {
let mut part_number = 1;
loop {
let maximum_bytes_to_read = CHUNK_SIZE - buffer.len();
println!("maximum_bytes_to_read: {}", maximum_bytes_to_read);
file.by_ref()
.take(maximum_bytes_to_read as u64)
.read_to_end(&mut buffer)
.unwrap();
println!("length: {}", buffer.len());
println!("part_number: {}", part_number);
if buffer.len() == 0 {
// The file has ended.
break;
}
let next_buffer = Vec::with_capacity(CHUNK_SIZE);
let data_to_send = buffer;
let completed_parts_cloned = completed_parts.clone();
scope.spawn(move |_| {
let part = create_upload_part(data_to_send.to_vec(), part_number);
{
let part_number = part.part_number;
let client = executor::block_on(super::get_client());
let response = executor::block_on(client.s3.upload_part(part));
completed_parts_cloned.lock().unwrap().push(CompletedPart {
e_tag: response
.expect("Couldn't complete multipart upload")
.e_tag
.clone(),
part_number: Some(part_number),
});
}
});
buffer = next_buffer;
part_number = part_number + 1;
}
});
let completed_upload = CompletedMultipartUpload {
parts: Some(completed_parts.lock().unwrap().to_vec()),
};
let complete_req = CompleteMultipartUploadRequest {
bucket: client.bucket_name.to_owned(),
key: destination_filename.to_owned(),
upload_id: upload_id.to_owned(),
multipart_upload: Some(completed_upload),
..Default::default()
};
client
.s3
.complete_multipart_upload(complete_req)
.await
.expect("Couldn't complete multipart upload");
println!(
"time taken: {}, with chunk:: {}",
now.elapsed().as_secs(),
CHUNK_SIZE
);
}
here is a sample of the output and error I'm getting:
maximum_bytes_to_read: 7000000
length: 7000000
part_number: 1
maximum_bytes_to_read: 7000000
length: 7000000
part_number: 2
maximum_bytes_to_read: 7000000
thread '<unnamed>' panicked at 'there is no reactor running, must be called from the context of a Tokio 1.x runtime', C:\Users\DNDT\.cargo\registry\src\github.com-1ecc6299db9ec823\tokio-1.2.0\src\runtime\blocking\pool.rs:85:33
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread '<unnamed>' panicked at 'there is no reactor running, must be called from the context of a Tokio 1.x runtime', C:\Users\DNDT\.cargo\registry\src\github.com-1ecc6299db9ec823\tokio-1.2.0\src\runtime\blocking\pool.rs:85:33
length: 7000000
I googled this error, but I did not get a clear understanding of what it actually means:
there is no reactor running, must be called from the context of a Tokio runtime
Here is what I found:
another question with the same error
and another question
These suggest it's a compatibility issue: rusoto_s3 might be using a version of tokio that is not compatible with the version of tokio I have.
Here are some relevant dependencies:
tokio = { version = "1", features = ["full"] }
tokio-compat-02 = "0.1.2"
rusoto_s3 = "0.46.0"
rusoto_core = "0.46.0"
rusoto_credential = "0.46.0"
rayon = "1.5.0"
I think the main issue comes from wanting to run async code on a rayon thread. I tried changing my async code to blocking code using executor::block_on, and I also spent some time trying to make the compiler happy: I have multiple threads that all want to write to let completed_parts = Arc::new(Mutex::new(vec![]));, so I did some cloning to keep the compiler happy.
Also, in case the crates I use matter, here they are:
extern crate dotenv;
extern crate tokio;
use bytes::Bytes;
use dotenv::dotenv;
use futures::executor;
use futures::*;
use rusoto_core::credential::{EnvironmentProvider, ProvideAwsCredentials};
use rusoto_s3::util::{PreSignedRequest, PreSignedRequestOption};
use rusoto_s3::PutObjectRequest;
use rusoto_s3::StreamingBody;
use rusoto_s3::{
CompleteMultipartUploadRequest, CompletedMultipartUpload, CompletedPart,
CreateMultipartUploadRequest, UploadPartRequest, S3,
};
use std::io::Read;
use std::sync::{Arc, Mutex};
use std::time::Duration;
use std::time::Instant;
use tokio::fs;
I'm new to Rust, so there are a lot of moving pieces to get this one right!
Thanks @Jmb for the discussion. I got rid of the threads and spawn a tokio task instead, as follows:
Create a vector to hold all the futures so we can wait for them:
let mut multiple_parts_futures = Vec::new();
Spawn the async task:
loop { // loop over the file chunks
...
let send_part_task_future = tokio::task::spawn(async move {
// Upload part
...
}
Then, later, wait for all the futures:
let _results = futures::future::join_all(multiple_parts_futures).await;
Worth mentioning: the completed parts need to be sorted:
let mut completed_parts_vector = completed_parts.lock().unwrap().to_vec();
completed_parts_vector.sort_by_key(|part| part.part_number);
The whole code is:
#[tokio::test]
async fn if_multipart_then_upload_multiparts_dicom() {
let now = Instant::now();
dotenv().ok();
let local_filename = "./files/test_big.DCM";
let destination_filename = generate_unique_name();
let destination_filename_clone = destination_filename.clone();
let mut file = std::fs::File::open(local_filename).unwrap();
const CHUNK_SIZE: usize = 6_000_000;
let mut buffer = Vec::with_capacity(CHUNK_SIZE);
let client = super::get_client().await;
let create_multipart_request = CreateMultipartUploadRequest {
bucket: client.bucket_name.to_owned(),
key: destination_filename.to_owned(),
..Default::default()
};
// Start the multipart upload and note the upload_id generated
let response = client
.s3
.create_multipart_upload(create_multipart_request)
.await
.expect("Couldn't create multipart upload");
let upload_id = response.upload_id.unwrap();
let upload_id_clone = upload_id.clone();
// Create upload parts
let create_upload_part = move |body: Vec<u8>, part_number: i64| -> UploadPartRequest {
UploadPartRequest {
body: Some(body.into()),
bucket: client.bucket_name.to_owned(),
key: destination_filename_clone.to_owned(),
upload_id: upload_id_clone.to_owned(),
part_number: part_number,
..Default::default()
}
};
let create_upload_part_arc = Arc::new(create_upload_part);
let completed_parts = Arc::new(Mutex::new(vec![]));
let mut part_number = 1;
let mut multiple_parts_futures = Vec::new();
loop {
let maximum_bytes_to_read = CHUNK_SIZE - buffer.len();
println!("maximum_bytes_to_read: {}", maximum_bytes_to_read);
file.by_ref()
.take(maximum_bytes_to_read as u64)
.read_to_end(&mut buffer)
.unwrap();
println!("length: {}", buffer.len());
println!("part_number: {}", part_number);
if buffer.len() == 0 {
// The file has ended.
break;
}
let next_buffer = Vec::with_capacity(CHUNK_SIZE);
let data_to_send = buffer;
let completed_parts_cloned = completed_parts.clone();
let create_upload_part_arc_cloned = create_upload_part_arc.clone();
let send_part_task_future = tokio::task::spawn(async move {
let part = create_upload_part_arc_cloned(data_to_send.to_vec(), part_number);
{
let part_number = part.part_number;
let client = super::get_client().await;
let response = client.s3.upload_part(part).await;
completed_parts_cloned.lock().unwrap().push(CompletedPart {
e_tag: response
.expect("Couldn't complete multipart upload")
.e_tag
.clone(),
part_number: Some(part_number),
});
}
});
multiple_parts_futures.push(send_part_task_future);
buffer = next_buffer;
part_number = part_number + 1;
}
let client = super::get_client().await;
println!("waiting for futures");
let _results = futures::future::join_all(multiple_parts_futures).await;
let mut completed_parts_vector = completed_parts.lock().unwrap().to_vec();
completed_parts_vector.sort_by_key(|part| part.part_number);
println!("futures done");
let completed_upload = CompletedMultipartUpload {
parts: Some(completed_parts_vector),
};
let complete_req = CompleteMultipartUploadRequest {
bucket: client.bucket_name.to_owned(),
key: destination_filename.to_owned(),
upload_id: upload_id.to_owned(),
multipart_upload: Some(completed_upload),
..Default::default()
};
client
.s3
.complete_multipart_upload(complete_req)
.await
.expect("Couldn't complete multipart upload");
println!(
"time taken: {}, with chunk:: {}",
now.elapsed().as_secs(),
CHUNK_SIZE
);
}

How to write using tokio Framed LinesCodec?

The following code reads from my server successfully; however, I cannot seem to get the correct syntax or semantics to write back to the server when a particular line command is recognized. Do I need to create a FramedWrite? Most examples I have found split the socket, but that seems like overkill; I expect the codec to handle bidirectional I/O by providing some async write method.
# Cargo.toml
[dependencies]
tokio = { version = "0.3", features = ["full"] }
tokio-util = { version = "0.4", features = ["codec"] }
//! main.rs
use tokio::net::{ TcpStream };
use tokio_util::codec::{ Framed, LinesCodec };
use tokio::stream::StreamExt;
use std::error::Error;
use std::net::{IpAddr, Ipv4Addr, SocketAddr};
#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
let saddr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 8081);
let conn = TcpStream::connect(saddr).await?;
let mut server = Framed::new(conn, LinesCodec::new_with_max_length(1024));
while let Some(Ok(line)) = server.next().await {
match line.as_str() {
"READY" => println!("Want to write a line to the stream"),
_ => println!("{}", line),
}
}
Ok({})
}
According to the documentation, Framed implements the Stream and Sink traits. Sink defines only the bare minimum of low-level sending methods; to get the high-level awaitable methods like send() and send_all(), you need to use the SinkExt extension trait.
For example (playground):
use futures::sink::SinkExt;
// ...
while let Some(Ok(line)) = server.next().await {
match line.as_str() {
"READY" => server.send("foo").await?,
_ => println!("{}", line),
}
}
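For reference, here is the question's client with that change applied, as one runnable sketch; this assumes the question's tokio 0.3 / tokio-util 0.4 versions plus futures 0.3 for SinkExt:
// futures = "0.3"
use futures::sink::SinkExt;
use std::error::Error;
use std::net::{IpAddr, Ipv4Addr, SocketAddr};
use tokio::net::TcpStream;
use tokio::stream::StreamExt;
use tokio_util::codec::{Framed, LinesCodec};

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let saddr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 8081);
    let conn = TcpStream::connect(saddr).await?;
    let mut server = Framed::new(conn, LinesCodec::new_with_max_length(1024));
    while let Some(Ok(line)) = server.next().await {
        match line.as_str() {
            // Reply over the same framed connection when the server says READY.
            "READY" => server.send("foo").await?,
            _ => println!("{}", line),
        }
    }
    Ok(())
}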

Grabbing a response header value with reqwest in rust

I've mainly been experimenting with the reqwest module over the past few days to see what I can accomplish, but I came across a problem which I'm not able to resolve. I'm trying to retrieve a response header's value after doing a POST request. The code I tried is:
extern crate reqwest;
fn main() {
let client = reqwest::Client::new();
let res = client
.post("https://google.com")
.header("testerheader", "test")
.send();
println!("Headers:\n{:#?}", res.headers().get("content-length").unwrap());
}
This code fails to compile with the following error:
error[E0599]: no method named `headers` found for opaque type `impl std::future::Future` in the current scope
The latest reqwest is async by default, so in your example res is a future, not the actual response. Either you need to await the response or use reqwest's blocking API.
async/await
In your Cargo.toml add tokio as a dependency.
[dependencies]
tokio = { version = "0.2.22", features = ["full"] }
reqwest = "0.10.8"
Use tokio as the async runtime and await the response.
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let client = reqwest::Client::new();
let res = client
.post("https://google.com")
.header("testerheader", "test")
.send()
.await?;
println!(
"Headers:\n{:#?}",
res.headers().get("content-length").unwrap()
);
Ok(())
}
Blocking API
In your Cargo.toml enable the blocking feature.
[dependencies]
reqwest = { version = "0.10.8", features = ["blocking"] }
Now you can use the Client from the reqwest::blocking module.
fn main() {
let client = reqwest::blocking::Client::new();
let res = client
.post("https://google.com")
.header("testerheader", "test")
.send()
.unwrap();
println!(
"Headers:\n{:#?}",
res.headers().get("content-length").unwrap()
);
}

Is there an easy way to get the system's external IP in Rust? [duplicate]

How can I make an HTTP request from Rust? I can't seem to find anything in the core library.
I don't need to parse the output, just make a request and check the HTTP response code.
Bonus marks if someone can show me how to URL encode the query parameters on my URL!
The easiest way to make HTTP requests in Rust is with the reqwest crate:
use std::error::Error;
fn main() -> Result<(), Box<dyn Error>> {
let resp = reqwest::blocking::get("https://httpbin.org/ip")?.text()?;
println!("{:#?}", resp);
Ok(())
}
In Cargo.toml:
[dependencies]
reqwest = { version = "0.11", features = ["blocking"] }
Async
Reqwest also supports making asynchronous HTTP requests using Tokio:
use std::error::Error;
#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
let resp = reqwest::get("https://httpbin.org/ip")
.await?
.text()
.await?;
println!("{:#?}", resp);
Ok(())
}
In Cargo.toml:
[dependencies]
reqwest = "0.11"
tokio = { version = "1", features = ["full"] }
Hyper
Reqwest is an easy-to-use wrapper around Hyper, which is a popular HTTP library for Rust. You can use Hyper directly if you need more control over managing connections. A Hyper-based example is below, largely inspired by an example in its documentation:
use hyper::{body::HttpBody as _, Client, Uri};
use std::error::Error;
#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
let client = Client::new();
let res = client
.get(Uri::from_static("http://httpbin.org/ip"))
.await?;
println!("status: {}", res.status());
let buf = hyper::body::to_bytes(res).await?;
println!("body: {:?}", buf);
}
In Cargo.toml:
[dependencies]
hyper = { version = "0.14", features = ["full"] }
tokio = { version = "1", features = ["full"] }
Original answer (Rust 0.6)
I believe what you're looking for is now in rust-http, and Chris Morgan's answer is the standard way in current Rust for the foreseeable future. I'm not sure how far I can take you (and hope I'm not taking you the wrong direction!), but you'll want something like:
// Rust 0.6 -- old code
extern mod std;
use std::net_ip;
use std::uv;
fn main() {
let iotask = uv::global_loop::get();
let result = net_ip::get_addr("www.duckduckgo.com", &iotask);
io::println(fmt!("%?", result));
}
As for encoding, there are some examples in the unit tests in src/libstd/net_url.rs.
Update: This answer refers to fairly ancient history. For the current best practices, please look at Isaac Aggrey's answer instead.
I've been working on rust-http, which has become the de facto HTTP library for Rust (Servo uses it); it's far from complete and very poorly documented at present. Here's an example of making a request and doing something with the status code:
extern mod http;
use http::client::RequestWriter;
use http::method::Get;
use http::status;
use std::os;
fn main() {
let request = RequestWriter::new(Get, FromStr::from_str(os::args()[1]).unwrap());
let response = match request.read_response() {
Ok(response) => response,
Err(_request) => unreachable!(), // Uncaught condition will have failed first
};
if response.status == status::Ok {
println!("Oh goodie, I got me a 200 OK response!");
} else {
println!("That URL ain't returning 200 OK, it returned {} instead", response.status);
}
}
Run this code with a URL as the sole command-line argument and it'll check the status code! (HTTP only; no HTTPS.)
Compare with src/examples/client/client.rs for an example that does a little more.
rust-http is tracking the master branch of Rust, so there are likely to be breaking changes soon. In fact, no version of rust-http works on the just-released Rust 0.8: a breaking change to the privacy rules just before the release left something that rust-http depends on in extra::url inaccessible. This has since been fixed, but it still leaves rust-http incompatible with Rust 0.8.
As for the query string encoding matter, at present that should be done with extra::url::Query (a typedef for ~[(~str, ~str)]). Appropriate functions for conversions:
extra::url::query_to_str
extra::url::query_from_str (sorry, this can't be used at present as it's private; a PR to make it public is about to come. In the meantime, this link shouldn't actually work; it's only accessible because of https://github.com/mozilla/rust/issues/7476.)
Using curl bindings. Stick this in your Cargo.toml:
[dependencies.curl]
git = "https://github.com/carllerche/curl-rust"
...and this in the src/main.rs:
extern crate curl;
use curl::http;
fn main(){
let resp = http::handle()
.post("http://localhost:3000/login", "username=dude&password=sikrit")
.exec().unwrap();
println!("code={}; headers={}; body={}",
resp.get_code(), resp.get_headers(), resp.get_body());
}
I prefer crates with a low dependency count, so I would recommend these:
MinReq (0 deps)
use minreq;
fn main() -> Result<(), minreq::Error> {
let o = minreq::get("https://speedtest.lax.hivelocity.net").send()?;
let s = o.as_str()?;
print!("{}", s);
Ok(())
}
HTTP_Req (35 deps)
use {http_req::error, http_req::request, std::io, std::io::Write};
fn main() -> Result<(), error::Error> {
let mut a = Vec::new();
request::get("https://speedtest.lax.hivelocity.net", &mut a)?;
io::stdout().write(&a)?;
Ok(())
}
To elaborate on Isaac Aggrey's answer, here's an example of making a POST request with query parameters using the reqwest library.
Cargo.toml
[package]
name = "play_async"
version = "0.1.0"
edition = "2018"
[dependencies]
reqwest = "0.10.4"
tokio = { version = "0.2.21", features = ["macros"] }
Code
use reqwest::Client;
type Error = Box<dyn std::error::Error>;
type Result<T, E = Error> = std::result::Result<T, E>;
async fn post_greeting() -> Result<()> {
let client = Client::new();
let req = client
// or use .post, etc.
.get("https://webhook.site/1dff66fd-07ff-4cb5-9a77-681efe863747")
.header("Accepts", "application/json")
.query(&[("hello", "1"), ("world", "ABCD")]);
let res = req.send().await?;
println!("{}", res.status());
let body = res.bytes().await?;
let v = body.to_vec();
let s = String::from_utf8_lossy(&v);
println!("response: {} ", s);
Ok(())
}
#[tokio::main]
async fn main() -> Result<()> {
post_greeting().await?;
Ok(())
}
Go to https://webhook.site, create your webhook link, and change the code to match. You'll see the request received on the server in real time.
This example was originally based on Bastian Gruber's example and has been updated for modern Rust syntax and newer crate versions.
Building upon Patrik Stas' answer, if you want to do an HTTP form URL-encoded POST, here is what you have to do. In this case, it's to get an OAuth client_credentials token.
Cargo.toml
[dependencies]
reqwest = "0.10.4"
tokio = { version = "0.2.21", features = ["macros"] }
Code
use reqwest::{Client, Method};
type Error = Box<dyn std::error::Error>;
type Result<T, E = Error> = std::result::Result<T, E>;
async fn print_access_token() -> Result<()> {
let client = Client::new();
let host = "login.microsoftonline.com";
let tenant = "TENANT";
let client_id = "CLIENT_ID";
let client_secret = "CLIENT_SECRET";
let scope = "https://graph.microsoft.com/.default";
let grant_type = "client_credentials";
let url_string = format!("https://{}/{}/oauth2/v2.0/token", host, tenant);
let body = format!(
"client_id={}&client_secret={}&scope={}&grant_type={}",
client_id, client_secret, scope, grant_type,
);
let req = client.request(Method::POST, &url_string).body(body);
let res = req.send().await?;
println!("{}", res.status());
let body = res.bytes().await?;
let v = body.to_vec();
let s = String::from_utf8_lossy(&v);
println!("response: {} ", s);
Ok(())
}
#[tokio::main]
async fn main() -> Result<()> {
print_access_token().await?;
Ok(())
}
This will print something like the following.
200 OK
response: {"token_type":"Bearer","expires_in":3599,"ext_expires_in":3599,"access_token":"ACCESS_TOKEN"}
Dropping a version here that uses the surf crate (dual to the tide crate):
let res = surf::get("https://httpbin.org/get").await?;
assert_eq!(res.status(), 200);
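For a runnable version, surf is typically paired with async-std as the executor; a minimal sketch assuming surf 2.x and async-std with the attributes feature:
// async-std = { version = "1", features = ["attributes"] }
// surf = "2"
#[async_std::main]
async fn main() -> surf::Result<()> {
    // surf::get builds the request; awaiting the builder sends it.
    let mut res = surf::get("https://httpbin.org/get").await?;
    println!("status: {}", res.status());
    println!("{}", res.body_string().await?);
    Ok(())
}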
Using hyper "0.13"
Also using hyper-tls for HTTPS support.
File Cargo.toml
[dependencies]
hyper = "0.13"
hyper-tls = "0.4.1"
tokio = { version = "0.2", features = ["full"] }
Code
extern crate hyper;
use hyper::Client;
use hyper::body::HttpBody as _;
use tokio::io::{stdout, AsyncWriteExt as _};
use hyper_tls::HttpsConnector;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
// HTTP only
// let client = Client::new();
// http or https connections
let client = Client::builder().build::<_, hyper::Body>(HttpsConnector::new());
let mut resp = client.get("https://catfact.ninja/fact".parse()?).await?;
println!("Response: {}", resp.status());
while let Some(chunk) = resp.body_mut().data().await {
stdout().write_all(&chunk?).await?;
}
Ok(())
}
Adapted from https://hyper.rs/guides/client/basic/
A simple HTTP request with the wsd crate:
fn test() {
wsd::http::get("https://docs.rs/", |data| {
println!("status = {}, data = {}", data.status(), data.text());
});
}
