How should I get a copy of this object in Rust?

I've got a small program that is supposed to print out the response status of an HTTP GET request and also print out the raw HTML of that response. (I am using the latest version of the reqwest crate for this.)
fn main() {
    let req = reqwest::blocking::get("https://www.rust-lang.org");
    let rawhtml = req.clone().unwrap().text().unwrap();
    let status = req.unwrap().status();
    println!("status: {}\n\n", status);
    println!("{}", rawhtml);
}
running with cargo run gives me error[E0599] saying
method cannot be called on Result<reqwest::blocking::Response, reqwest::Error> due to unsatisfied trait bounds
So if I cannot clone the object, how am I to use more than one of its methods that consume self? I don't want to have to make multiple GET requests when all the info I need should be in one.

First of all, reqwest::blocking::Response doesn't implement Clone or provide an alternative, so "getting a copy" is not an option; you need to structure your program so it doesn't need a copy.
The problem you're having with your current code is not mainly with the Response; it's that you're calling Result::unwrap, which does consume the Result it's called on — in order to give you its contents. The right thing to do here is to call unwrap only once.
let req = reqwest::blocking::get("https://www.rust-lang.org")
    .unwrap();
let rawhtml = req.text().unwrap();
let status = req.status();
This still won't compile, but that's because you are calling the methods in the wrong order: you must ask for whatever things you need from the headers of the response before using the body. This is not an arbitrary constraint; it's because HTTP gives you that information in that order, and so having the API work this way allows reqwest to not need to store the entire response as it is downloaded — the Response object is not just an unchanging data structure but actually represents the response as it is being sent to your computer.
This version will work:
let req = reqwest::blocking::get("https://www.rust-lang.org")
    .unwrap();
let status = req.status();
let rawhtml = req.text().unwrap();
status and rawhtml both implement Clone, so you can keep those around and make copies as much as you need, unlike the Response.
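Putting it together, the whole program would look something like this (a sketch based on the docs; I haven't compiled it):
fn main() {
    // get() returns a Result; unwrap it once to obtain the Response.
    let resp = reqwest::blocking::get("https://www.rust-lang.org").unwrap();
    // Read header-derived data (the status) before consuming the body with text().
    let status = resp.status();
    let rawhtml = resp.text().unwrap();
    println!("status: {}\n", status);
    println!("{}", rawhtml);
}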
(Disclaimer: I haven't actually used reqwest myself; this answer is based on reading the docs and source, and general Rust principles.)

Related

Rust/WebAssembly -- streaming HTTP request: convert JsValue from ReadableStreamDefaultReader.read into vector

I'm new to Rust and trying to implement a web page that shows a graph with many edges. I plan to use WebAssembly to lay out the graph and determine the positions of the nodes (and a WebGL library to draw the graph).
Context
I want Rust/wasm to make a streaming request for a biggish (potentially 7 MB) binary composed of 16-bit integers, delimited by the max value, representing target node indices for each index.
Because of the potentially big file, I'd like Rust to stream it and start laying out the graph as soon as it has the first chunk.
Problem
Eventually, I'd like to turn the JsValue chunks of the response body into vectors of 16-bit integers, but just coercing the chunks into some kind of array type would be enough to unblock me.
Attempted solutions
Below is the start of my lib.rs, which as far as I can tell at least works.
use js_sys::Uint8Array;
use wasm_bindgen::prelude::*;
use wasm_bindgen::JsCast;
use wasm_bindgen_futures::JsFuture;
use web_sys::{ReadableStreamDefaultReader, Response};

#[wasm_bindgen]
extern "C" {
    #[wasm_bindgen(js_namespace = console)]
    fn log(s: &str);
}

#[wasm_bindgen]
pub async fn fetch_and_compute_graph() -> Result<JsValue, JsValue> {
    let window = web_sys::window().unwrap();
    let resp_promise = window.fetch_with_str(&"./edges.bin");
    let resp_value = JsFuture::from(resp_promise).await?;
    let resp: Response = resp_value.dyn_into().unwrap();
    log(&format!("Response status code: {}", resp.status()));
    if resp.status() != 200 {
        return Err(JsValue::FALSE);
    }
    let reader_obj = resp.body().unwrap().get_reader();
    let stream_reader: ReadableStreamDefaultReader = reader_obj.dyn_into().unwrap();
Below are two snippets I've tried putting after the above code:
Update: I've realised that JsFuture::from(stream_reader.read()).await?; results in a JS object with two properties. So maybe I can just cast to Object
1. Type casting solution?
let chunk_obj = JsFuture::from(stream_reader.read()).await?;
let chunk_bytes: Uint8Array = chunk_obj.dyn_into().unwrap();
I would have expected this to work, given that the other type casts do. Maybe I've got the type wrong; the API docs aren't clear on what the promise should resolve to. However, manually inspecting one of the chunks resulting from a fetch call in my browser console confirmed that it was a Uint8Array.
Here's as much of a stack trace as I could get:
Uncaught (in promise) RuntimeError: unreachable executed
__wbg_adapter_14 http://localhost:8080/pkg/rust_wasm_centrality.js:204
real http://localhost:8080/pkg/rust_wasm_centrality.js:189
promise callback*getImports/imports.wbg.__wbg_then_11f7a54d67b4bfad http://localhost:8080/pkg/rust_wasm_centrality.js:326
__wbg_adapter_14 http://localhost:8080/pkg/rust_wasm_centrality.js:204
real http://localhost:8080/pkg/rust_wasm_centrality.js:189
rust_wasm_centrality_bg.wasm:24649:1
Uncaught (in promise) RuntimeError: unreachable executed
__wbg_adapter_14 http://localhost:8080/pkg/rust_wasm_centrality.js:204
real http://localhost:8080/pkg/rust_wasm_centrality.js:189
promise callback*getImports/imports.wbg.__wbg_then_11f7a54d67b4bfad http://localhost:8080/pkg/rust_wasm_centrality.js:326
__wbg_adapter_14 http://localhost:8080/pkg/rust_wasm_centrality.js:204
real http://localhost:8080/pkg/rust_wasm_centrality.js:189
rust_wasm_centrality_bg.wasm:24649:1
2. Deserialisation solution?
let chunk_obj = JsFuture::from(stream_reader.read()).await?;
let bytes = serde_wasm_bindgen::from_value(chunk_obj)?;
This seems tantalisingly close to working. The output in the browser console suggests that the chunk has been converted to a JS or JSON object, which seems odd to me.
Loading graph result: Error: invalid type: JsValue(Object({"done":false,"value":{"0":170,"1":4,"2":180,"3":4,"4":194,"5":6,"6":213,"7":18,"8":40,"9":19,"10":38,"11":3,"12":175,"13":2,"14":90,"15":10,"16":204,"17":1,"18":110,"19":0,"20":223,"21":31,"22":1,"23":1,"24":55,"25":2,"26":77,"27":2,"28":75,"29":2,"30":78,"31":3,"32":36,"33":2,"34":3,"35":6,"36":112,"37":27,"38":187,"39":8,"40":37,"41":0,"42":19,"43":0,"44":7,"45":0,"46":32,"47":0,"48":148,"49":0,"50":27,"51":51,"52":39,"53":0,"54":6,"55":0,"56":14,"57":0,"58":8,"59":0,"60":12,"61":0,"62":22,"63":0,"64":236,"65":0,"66":211,"67":13,"68":11,"69":0,"70":2,"71":0,"72":54,"73":1,"74":232,"75":32,"76":196,"77":54,"78":159,"79":20,"80":156,"81":0,"82":120,"83":35,"84":118,"85":2,"86":250,"87":23,"88":217,"89":27,"90":190,"91":6,"92":121,"93":2,"94":211,"95":25,"96":206,"97":9,"98":111,"99":19,"100":22,"101":40,"102":207,"103":9,"104":30,"105":58,"106":34,"107":22,"108":141,"109":40,"110":218,"111":15,"112":144,"113":10,"114":68,"115":…
localhost:8080:93:17
Other code
The contents of my Cargo.toml may also be useful.
name = "rust-wasm-centrality"
version = "0.1.0"
authors = ["Simon Crowe <simon.r.crowe#pm.me>"]
description = "Display a network graph and cenrality-ranked table of nodes"
license = "MIT"
repository = "https://github.com/simoncrowe/rust-wasm-centrality"
edition = "2021"
[lib]
crate-type = ["cdylib"]
[profile.release]
lto = "thin"
[dependencies]
wasm-bindgen = "0.2.63"
wasm-bindgen-futures = "0.4.33"
js-sys = "0.3.60"
serde = { version = "1.0", features = ["derive"] }
serde_bytes = "0.11"
serde-wasm-bindgen = "0.4"
[dependencies.web-sys]
version = "0.3.60"
features = [
    'console',
    'ReadableStream',
    'ReadableStreamDefaultReader',
    'Response',
    'Window',
]
I realised what was wrong soon after I posted this question. The chunks that the promises returned by read() resolve into are objects like this: {"done": false, "value": [...]}. So I just needed to do some casting and reflection using Rust's JS bindings to get the array object I needed.
I'm leaving this question and answer up in case someone else has similar issues when getting to grips with Rust and WebAssembly.
Specifically, I needed to cast the result of the read method on ReadableStreamDefaultReader from JsValue to Object, then access its value property and cast that to Uint8Array. Once I had the array, I could just call to_vec on it. Below is the code needed to make a GET request and convert the first chunk of the response body to Vec<u8>.
let window = web_sys::window().unwrap();
let resp_promise = window.fetch_with_str(&"./edges.bin");
let resp_value = JsFuture::from(resp_promise).await?;
let resp: Response = resp_value.dyn_into().unwrap();
log(&format!("Response status code: {}", resp.status()));
if resp.status() != 200 {
    return Err(JsValue::FALSE);
}
let reader_value = resp.body().unwrap().get_reader();
let reader: ReadableStreamDefaultReader = reader_value.dyn_into().unwrap();
let result_value = JsFuture::from(reader.read()).await?;
let result: Object = result_value.dyn_into().unwrap();
let chunk_value = js_sys::Reflect::get(&result, &JsValue::from_str("value")).unwrap();
let chunk_array: Uint8Array = chunk_value.dyn_into().unwrap();
let chunk = chunk_array.to_vec();
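If you want every chunk rather than just the first, the same read/cast steps can be repeated in a loop until the done flag comes back true. A rough sketch continuing from the code above (untested; process_chunk is a hypothetical stand-in for whatever per-chunk layout work you do):
loop {
    let result_value = JsFuture::from(reader.read()).await?;
    let result: Object = result_value.dyn_into().unwrap();
    // Each read() resolves to an object shaped like {"done": bool, "value": Uint8Array}.
    let done = js_sys::Reflect::get(&result, &JsValue::from_str("done"))
        .unwrap()
        .as_bool()
        .unwrap_or(true);
    if done {
        break;
    }
    let chunk_value = js_sys::Reflect::get(&result, &JsValue::from_str("value")).unwrap();
    let chunk_array: Uint8Array = chunk_value.dyn_into().unwrap();
    let chunk: Vec<u8> = chunk_array.to_vec();
    // process_chunk is a hypothetical placeholder for the graph-layout work.
    process_chunk(&chunk);
}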

Add heap-allocated string to Panic handler

I am in the context of a web application, where each request is assigned a unique correlation ID.
I am running in a wasm environment with the wasm32-unknown-unknown target. One request is always served by one thread, and the entire environment is torn down afterwards.
I would like to register a panic handler that if a request panics, it also logs this request ID.
This has proven to be difficult, as anything passed to set_hook needs to satisfy a 'static lifetime bound, which a request ID obviously doesn't.
I would like code along the following lines to compile
// Assume we have a request here from somewhere.
let request = get_request_from_framework();
// This is at the start of the request
panic::set_hook(Box::new(|info| {
    let request_id = request.get_request_id();
    // Log panic messages here with request_id here
}));
Potential solutions
I have a few potential approaches. I am not sure which one is best, or if there are any approaches that I am missing.
1. Leaking the memory
As I know my environment is torn down after each request, one way to get a String into the 'static lifetime is to leak it, like this:
let request_id = uuid::Uuid::new_v4().to_string();
let request_id: &'static str = Box::leak(request_id.into_boxed_str());
request_id
This will work in practice, as the request id is theoretically 'static (as after the request is served, the application is closed) - however it has the disadvantage that if I ever move this code into a non-wasm environment, we'll end up leaking memory pretty quickly.
2. Threadlocal
As I know that each request is served by one thread, I could stuff the request id into a ThreadLocal, and read from that ThreadLocal on panics.
pub fn create_request_id() {
    let request_id = uuid::Uuid::new_v4().to_string();
    CURRENT_REQUEST_ID.with(|current_request_id| {
        *current_request_id.borrow_mut() = request_id;
    });
}
thread_local! {
    pub static CURRENT_REQUEST_ID: RefCell<String> = RefCell::new(uuid::Uuid::new_v4().to_string());
}
// And then inside the panic handler get the request_id with something like
let request_id = CURRENT_REQUEST_ID.with(|current_request_id| {
    let current_request_id = current_request_id.try_borrow();
    match current_request_id {
        Ok(current_request_id) => current_request_id.clone(),
        Err(_) => "Unknown".to_string(),
    }
});
This seems like the "best" solution I can come up with. However, I'm not sure what the performance implications of initializing a ThreadLocal on each request are, particularly because we panic extremely rarely, so I'd hate to pay a big cost up front for something I almost never use.
3. Catch_unwind
I experimented with the catch_unwind API, as that seemed like a good choice: I would wrap the handling of each request with catch_unwind. However, it seems that wasm32-unknown-unknown currently doesn't support catch_unwind.
What is the best solution here? Is there any way to get something that's heap-allocated into a Rust panic hook that I'm not aware of?
As per your example, you could move the id into the closure:
// Assume we have a request here from somewhere.
let request = get_request_from_framework();
let request_id = request.get_request_id();
// This is at the start of the request
panic::set_hook(Box::new(move |info| {
    let panic_message = format!("Request {} failed", request_id);
    // Log panic messages here with request_id here
}));
Playground
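A self-contained sketch of the same idea (the hard-coded id is just a stand-in for request.get_request_id()):
use std::panic;

fn main() {
    // Stand-in for request.get_request_id(); any owned String works.
    let request_id = String::from("3f2a1c9e");
    // `move` transfers ownership of request_id into the closure, so the
    // closure satisfies the 'static bound without leaking or a thread-local.
    panic::set_hook(Box::new(move |info| {
        eprintln!("Request {} failed: {}", request_id, info);
    }));
    panic!("boom");
}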

Iterating over Stream in Rust

I'm trying to iterate over logs from a docker container by using the bollard crate.
Here's my code:
use std::default::Default;
use bollard::container::LogsOptions;
use bollard::Docker;
fn main() {
    let docker = Docker::connect_with_http_defaults().unwrap();
    let options = Some(LogsOptions::<String> {
        stdout: true,
        ..Default::default()
    });
    let data = docker.logs("2f6c52410d", options);
    // ...
}
docker.logs() returns impl Stream<Item = Result<LogOutput, Error>>. I'd like to iterate over the results, but I have no idea how to do that. I've managed to find an example that uses try_collect::<Vec<LogOutput>>() from the futures-util crate, but I'd like to iterate over the results in a while loop instead of collecting them into a vector. I know that I can iterate over a vector, but performing tasks as the items arrive will be better for my use case.
I've tried to call the poll_next() method on the stream, but it requires a mysterious Context object which I don't understand. The poll_next() method was unavailable until I used the pin_mut!() macro on the stream.
How do I iterate over the stream? What should I read to understand what's going on here? I know that streams are related to Futures, but calling await or next() doesn't work here.
You typically bring in your library of choice's StreamExt trait, and then do something like
while let Some(foo) = stream.next().await {
    // ...
}
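For bollard specifically, that would look roughly like the sketch below (untested; it assumes the futures-util crate and a tokio runtime, and pins the stream with Box::pin so that next() is available):
use bollard::container::LogsOptions;
use bollard::Docker;
use futures_util::StreamExt; // provides .next() on Streams

#[tokio::main]
async fn main() {
    let docker = Docker::connect_with_http_defaults().unwrap();
    let options = Some(LogsOptions::<String> {
        stdout: true,
        ..Default::default()
    });
    // logs() returns impl Stream<Item = Result<LogOutput, Error>>;
    // Box::pin gives an Unpin handle so StreamExt::next() can be called on it.
    let mut stream = Box::pin(docker.logs("2f6c52410d", options));
    while let Some(item) = stream.next().await {
        match item {
            Ok(log_output) => println!("{}", log_output),
            Err(e) => eprintln!("error reading logs: {}", e),
        }
    }
}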

Can you return type tiberius::QueryResult from function that uses Sql Client?

When trying to return a tiberius::QueryResult I am unable to do so, because it references data owned by the function. How do I return the stream if this is not allowed?
pub async fn sql_conn(str_query: &str) -> std::result::Result<tiberius::QueryResult<'_>, tiberius::error::Error> {
    let mut config = Config::new();
    config.host("host");
    config.port(1433);
    config.authentication(AuthMethod::sql_server("usr", "pw"));
    config.trust_cert();
    let tcp = TcpStream::connect(config.get_addr()).await?;
    tcp.set_nodelay(true)?;
    let mut client = Client::connect(config, tcp.compat_write()).await?;
    let stream = client.query(str_query, &[]).await?;
    Ok(stream)
}
Error:
cannot return value referencing local variable `client`
returns a value referencing data owned by the current function
The reason this isn't working is because your query result object references your client and depends on resources that it uses. Most likely, that's because your query result is streaming and the client owns the connection required for that streaming to occur.
Rust won't let you return the query result because it needs the client and the client, as a local variable, is destroyed when the function returns, since it goes out of scope. If Rust let you return the query result, it would likely reference the closed client, and your program would either fail or segfault. This is a common problem in many languages that don't provide garbage collection, and Rust is specifically designed not to allow you to make this mistake.
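You can reproduce the same class of error without tiberius at all; any function that tries to return a borrow of one of its locals is rejected the same way:
fn broken() -> &'static str {
    let owned = String::from("hello");
    // Rejected: the returned &str borrows from `owned`, which is dropped
    // when the function returns.
    owned.as_str()
}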
There are a couple of options here. First, you can create a function which creates the SQL connection and returns a client, then use the client and the query results it returns in the function where you want the data. That way, both the client and the query results will have the right lifetimes.
You could also try to create a struct which instantiates and holds your client and then use that to make the query. For example (untested):
struct Connection<S> {
    client: tiberius::Client<S>,
}
impl<S: futures::AsyncRead + futures::AsyncWrite + Unpin + Send> Connection<S> {
    async fn query(&mut self, query: &str) -> Result<tiberius::QueryResult<'_>, tiberius::error::Error> {
        self.client.query(query, &[]).await
    }
}
This is essentially the same as the first situation, just with a different structure.
The third option is to both instantiate the client and totally consume the results in the same function, and then return some structure (like a Vec) with the results. This means that you will have to consume the entirety of the data, which you may not want to do for efficiency reasons, but it does solve the lifetime issue, and depending on your scenario, may be a valid option.
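A sketch of that third option, reusing the connection code from the question (untested; it assumes into_first_result is available in your tiberius version to drain the stream into a Vec<Row>):
pub async fn sql_query_rows(str_query: &str) -> Result<Vec<tiberius::Row>, tiberius::error::Error> {
    let mut config = Config::new();
    config.host("host");
    config.port(1433);
    config.authentication(AuthMethod::sql_server("usr", "pw"));
    config.trust_cert();
    let tcp = TcpStream::connect(config.get_addr()).await?;
    tcp.set_nodelay(true)?;
    let mut client = Client::connect(config, tcp.compat_write()).await?;
    let stream = client.query(str_query, &[]).await?;
    // Consume the whole result set before returning, so nothing borrowed
    // from `client` escapes the function.
    let rows = stream.into_first_result().await?;
    Ok(rows)
}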

Some errors E0425 & E0599 write_fmt

mod loginfo {
    use std::io::Error;
    use chrono::prelude::*;
    use std::io::prelude::*;
    use std::fs::OpenOptions;

    const LOG_SYS: &'static str = "log.txt";
    const LOG_ERR: &'static str = "log_error.txt";

    pub fn set_log_error(info: String) -> Result<(), String> {
        let mut handler = OpenOptions::new().append(true)
            .open(LOG_ERR);
        if handler.is_err() {
            create_file(LOG_ERR.to_owned()).unwrap();
            set_log_error(info).unwrap();
        }
        if let Err(_errno) = handler.write_fmt(
            format_args!("{:?}\t{:?} ->[Last OS error({:?})]\n",
                Utc::now().to_rfc2822().to_string(), info,
                Error::last_os_error())) {
            panic!(
                "\nCannot write info log error\t Info\t:{:?}\n",
                Error::last_os_error());
        }
        Ok(())
    }

    pub fn set_log(info: String) -> Result<(), String> {
        let mut handler = OpenOptions::new().append(true)
            .open(LOG_SYS);
        if handler.is_err() {
            set_log_error("Cannot write info log".to_owned())
                .unwrap();
        }
        if let Err(_errno) = handler.write_fmt(
            format_args!("{:?}\t{:?}\n",
                Utc::now().to_rfc2822().to_string(), info)) {
            set_log_error("Cannot write data log file".to_owned())
                .unwrap();
        }
        Ok(())
    }

    pub fn create_file(filename: String) -> Result<(), String> {
        let handler = OpenOptions::new().write(true)
            .create(true).open(filename);
        if handler.is_err() {
            panic!(
                "\nCannot create log file\t Info\t:{:?}\n",
                Error::last_os_error());
        }
        Ok(())
    }
}
When compiling, I get the following error: "error[E0599]: no method named write_fmt found for enum std::result::Result<std::fs::File, std::io::Error> in the current scope --> src/loginfo.rs:19:38"
but despite using the right imports, I still get the same errors. Is this due to a bad implementation of the module?
Thank you in advance for your answers and remarks.
+1 @Masklinn OK, I think I understand; it would be easier to just write
pub fn foo_write_log(info: String) {
    let mut handler = OpenOptions::new().append(true)
        .create(true).open(LOG_SYS).expect("Cannot create log");
    handler.write_fmt(
        format_args!("{:?}\t{:?} ->[Last OS error({:?})]\n",
            Utc::now().to_rfc2822().to_string(), info,
            Error::last_os_error())).unwrap();
}
but despite using the right imports, I still get the same errors. Is this due to a bad implementation of the module?
Kind-of? If you look at the type specified in the error, handler is a Result<File, Error>. And while io::Write is implemented on File, it's not implemented on Result.
The problem is that while you're checking whether handler.is_err() you never get the file out of it, nor do you ever return in the error case. Normally you'd use something like match or if let or one of the higher-order methods (e.g. Result::map, Result::and_then) in order to handle or propagate the various cases.
And to be honest the entire thing is rather odd and awkward: your functions can fail but they panic instead (you never actually return an Err); if you're going to try and create a file when opening it for writing fails, why not just do that directly[0]; you're manually calling write_fmt and format_args, why not just use write!; write_fmt already returns an io::Error, so why discard it and then ask for it again via Error::last_os_error; etc.
It's also a bit strange to hand-roll your own logger thing when the rust ecosystem already has a bunch of them though you do you; and the naming is also somewhat awkward e.g. I'd expect something called set_X to actually set the X, so to me set_log would be a way to set the file being logged to.
[0] .create(true).append(true) should open the file in append mode if it exists and create it otherwise; not to mention your version has a concurrency issue: if the open-for-append fails you create the file in write mode, but someone else could have created the file (with content) between the two calls, in which case you're going to partially overwrite it.
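For what it's worth, a minimal version along those lines might look like this (a sketch, not tested; the write_log name and log-line format are just illustrative):
use std::fs::OpenOptions;
use std::io::Write;
use chrono::prelude::*;

const LOG_SYS: &str = "log.txt";

// Open in append mode, creating the file if it doesn't exist, and
// propagate the io::Error instead of panicking or re-reading last_os_error.
pub fn write_log(info: &str) -> std::io::Result<()> {
    let mut file = OpenOptions::new().create(true).append(true).open(LOG_SYS)?;
    writeln!(file, "{}\t{}", Utc::now().to_rfc2822(), info)
}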
