Is it possible to increase the channel size of OpenTelemetry span batch processor?
I'm currently getting the error "OpenTelemetry trace error occurred. cannot send span to the batch span processor because the channel is full" in a high-traffic scenario.
Here is my configuration code:
let mut exporter_metadata = MetadataMap::new();
exporter_metadata.insert("api-key", "<redacted>".parse()?);

let exporter = opentelemetry_otlp::new_exporter()
    .tonic()
    .with_endpoint("https://otlp.nr-data.net:4317")
    .with_tls_config(ClientTlsConfig::default())
    .with_metadata(exporter_metadata);

let trace_config = opentelemetry::sdk::trace::config()
    .with_resource(Resource::new(vec![
        KeyValue::new(
            opentelemetry_semantic_conventions::resource::SERVICE_NAME,
            "worker",
        ),
        KeyValue::new(
            opentelemetry_semantic_conventions::resource::SERVICE_INSTANCE_ID,
            "dev-instance",
        ),
        KeyValue::new("kind", "server"),
    ]))
    .with_sampler(Sampler::TraceIdRatioBased(1.0));

let tracer = opentelemetry_otlp::new_pipeline()
    .tracing()
    .with_exporter(exporter)
    .with_trace_config(trace_config)
    .install_batch(opentelemetry::runtime::Tokio)?;

let otel_filter = Targets::new().with_target("worker", LevelFilter::INFO);
let otel_layer = tracing_opentelemetry::layer()
    .with_tracer(tracer)
    .with_filter(otel_filter);
I've looked through the documentation, but found no option for configuring the size of the channel.
After digging through the source code, I've come up with the following solution:
In batch_config.rs, declare a new trait and implement it for OtlpTracePipeline:
use anyhow::Result;
use core::convert::Into;
use core::option::Option;
use core::option::Option::{None, Some};
use core::result::Result::Ok;
use opentelemetry::sdk::trace::{BatchConfig, BatchSpanProcessor, Config, TraceRuntime};
use opentelemetry::trace::{TraceError, TracerProvider};
use opentelemetry::{global, sdk};
use opentelemetry_otlp::{Error, OtlpTracePipeline, SpanExporter, SpanExporterBuilder};

pub trait BatchConfigurable {
    fn install_batch_manual<B, R>(
        self,
        trace_config: Option<Config>,
        exporter_builder: Option<B>,
        batch_config: BatchConfig,
        runtime: R,
    ) -> Result<sdk::trace::Tracer, TraceError>
    where
        B: Into<SpanExporterBuilder>,
        R: TraceRuntime;
}

fn build_batch_with_config_and_exporter<R: TraceRuntime>(
    exporter: SpanExporter,
    batch_config: BatchConfig,
    trace_config: Option<Config>,
    runtime: R,
) -> sdk::trace::Tracer {
    let batch = BatchSpanProcessor::builder(exporter, runtime)
        .with_batch_config(batch_config)
        .build();
    let mut provider_builder = sdk::trace::TracerProvider::builder().with_span_processor(batch);
    if let Some(config) = trace_config {
        provider_builder = provider_builder.with_config(config);
    }
    let provider = provider_builder.build();
    let tracer =
        provider.versioned_tracer("opentelemetry-otlp", Some(env!("CARGO_PKG_VERSION")), None);
    let _ = global::set_tracer_provider(provider);
    tracer
}

impl BatchConfigurable for OtlpTracePipeline {
    fn install_batch_manual<B, R>(
        self,
        trace_config: Option<Config>,
        exporter_builder: Option<B>,
        batch_config: BatchConfig,
        runtime: R,
    ) -> Result<sdk::trace::Tracer, TraceError>
    where
        B: Into<SpanExporterBuilder>,
        R: TraceRuntime,
    {
        let exporter_builder = exporter_builder.map(|b| b.into());
        Ok(build_batch_with_config_and_exporter(
            exporter_builder
                .ok_or(Error::NoExporterBuilder)?
                .build_span_exporter()?,
            batch_config,
            trace_config,
            runtime,
        ))
    }
}
And then use it like this:
let batch_config = BatchConfig::default().with_max_queue_size(1048576);

let tracer = opentelemetry_otlp::new_pipeline()
    .tracing()
    .install_batch_manual(
        Some(trace_config),
        Some(exporter),
        batch_config,
        opentelemetry::runtime::Tokio,
    )?;
Roughly speaking, install_batch_manual is equivalent to calling .with_exporter().with_trace_config().install_batch(), but with a customizable BatchConfig.
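As an aside, if I'm reading the SDK source correctly (worth verifying against your opentelemetry version), BatchConfig::default() also honors the spec-defined OTEL_BSP_* environment variables, so the queue (2048 entries by default per the OTel spec) can be enlarged without a custom trait:

// Assumption: the SDK reads the spec-defined OTEL_BSP_* variables when
// constructing BatchConfig::default(); verify against your SDK version.
// These must be set before the pipeline is installed.
std::env::set_var("OTEL_BSP_MAX_QUEUE_SIZE", "1048576");
std::env::set_var("OTEL_BSP_MAX_EXPORT_BATCH_SIZE", "4096");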
I have a wrapper around Rodio's Sink in HAudioSink. I also implemented a try_new_from_haudio function that, in short, creates a Sink instance, wraps it in HAudioSink and immediately starts playing the first audio.
Sink's docs state: "Dropping the Sink stops all sounds. You can use detach if you want the sounds to continue playing". So when try_new_from_haudio returns, it drops the original sink and the sound stops when it shouldn't.
So my question is: what should I do to avoid the drop when I create an instance of HAudioSink? Is ManuallyDrop the way to go?
struct HAudioSink {
    sink: Sink,
}

impl HAudioSink {
    pub fn try_new_from_haudio<T>(haudio: HAudio<T>) -> HResult<Self>
    where
        T: NativeType + Float + ToPrimitive,
    {
        let (_stream, stream_handle) = OutputStream::try_default()?;
        let sink = Sink::try_new(&stream_handle).unwrap();
        let nchannels = haudio.nchannels();
        let nframes = haudio.nframes();
        let sr = haudio.sr();
        let mut data_interleaved: Vec<f32> = Vec::with_capacity(nchannels * nframes);
        let values = haudio
            .inner()
            .inner()
            .values()
            .as_any()
            .downcast_ref::<PrimitiveArray<T>>()
            .unwrap();

        for f in 0..nframes {
            for ch in 0..nchannels {
                data_interleaved.push(values.value(f + ch * nframes).to_f32().unwrap());
            }
        }

        let source = SamplesBuffer::new(u16::try_from(nchannels).unwrap(), sr, data_interleaved);
        sink.append(source);

        Ok(HAudioSink { sink })
    }

    // Sleeps the current thread until the sound ends.
    pub fn sleep_until_end(&self) {
        self.sink.sleep_until_end();
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    // this test doesn't work
    #[test]
    fn play_test() {
        let sink = HAudioSink::try_new_from_file("../testfiles/gs-16b-2c-44100hz.wav").unwrap();
        sink.append_from_file("../testfiles/gs-16b-2c-44100hz.wav")
            .unwrap();
        sink.sleep_until_end();
    }
}
If I put sink.sleep_until_end() inside try_new_from_haudio, just before returning Ok, it works.
Check the following link for a reproducible example of this issue: https://github.com/RustAudio/rodio/issues/476
The problem is that the same applies to the _stream: "If this is dropped, playback will end & attached OutputStreamHandles will no longer work" (see the docs on OutputStream). So you have to store it alongside your Sink:
pub struct SinkWrapper {
    pub sink: Sink,
    pub stream: OutputStream,
}

impl SinkWrapper {
    pub fn new() -> Self {
        let (stream, stream_handle) = OutputStream::try_default().unwrap();
        let sink = Sink::try_new(&stream_handle).unwrap();

        // Add a dummy source for the sake of the example.
        let source = SineWave::new(440.0)
            .take_duration(Duration::from_secs_f32(1.))
            .amplify(2.);
        sink.append(source);

        Self { sink, stream }
    }
}
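A minimal usage sketch of the wrapper above (a hypothetical main, just to illustrate that keeping the OutputStream alive inside the struct lets playback finish):

fn main() {
    let wrapper = SinkWrapper::new();
    // Neither the Sink nor the OutputStream is dropped here, because both
    // live inside `wrapper` until the end of main.
    wrapper.sink.sleep_until_end();
}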
I want to capture the duration of execution of a span in Rust tracing and send that as a metric.
I have found that fmt() helps in printing it, as mentioned here: How can I log span duration with Rust tracing?
I have also tried this example about creating a layer and implementing on_new_span() and on_event(). I added on_close() as well to check what metadata we get there. The code I wrote is:
use tracing::{info, info_span};
use tracing_subscriber::prelude::*;

mod custom_layer;
use custom_layer::CustomLayer;

fn main() {
    tracing_subscriber::registry().with(CustomLayer).init();

    let outer_span = info_span!("Outer", level = 0, other_field = tracing::field::Empty);
    let _outer_entered = outer_span.enter();
    outer_span.record("other_field", &7);

    let inner_span = info_span!("inner", level = 1);
    let _inner_entered = inner_span.enter();

    info!(a_bool = true, answer = 42, message = "first example");
}
custom_layer.rs:
use std::collections::BTreeMap;
use tracing_subscriber::Layer;

pub struct CustomLayer;

impl<S> Layer<S> for CustomLayer
where
    S: tracing::Subscriber,
    S: for<'lookup> tracing_subscriber::registry::LookupSpan<'lookup>,
{
    fn on_new_span(
        &self,
        attrs: &tracing::span::Attributes<'_>,
        id: &tracing::span::Id,
        ctx: tracing_subscriber::layer::Context<'_, S>,
    ) {
        let span = ctx.span(id).unwrap();
        let mut fields = BTreeMap::new();
        let mut visitor = JsonVisitor(&mut fields);
        attrs.record(&mut visitor);
        let storage = CustomFieldStorage(fields);
        let mut extensions = span.extensions_mut();
        extensions.insert(storage);
    }

    fn on_record(
        &self,
        id: &tracing::span::Id,
        values: &tracing::span::Record<'_>,
        ctx: tracing_subscriber::layer::Context<'_, S>,
    ) {
        // Get the span whose data is being recorded
        let span = ctx.span(id).unwrap();

        // Get a mutable reference to the data we created in on_new_span
        let mut extensions_mut = span.extensions_mut();
        let custom_field_storage: &mut CustomFieldStorage =
            extensions_mut.get_mut::<CustomFieldStorage>().unwrap();
        let json_data: &mut BTreeMap<String, serde_json::Value> = &mut custom_field_storage.0;

        // And add to it using our old friend the visitor!
        let mut visitor = JsonVisitor(json_data);
        values.record(&mut visitor);
    }

    fn on_event(&self, event: &tracing::Event<'_>, ctx: tracing_subscriber::layer::Context<'_, S>) {
        // All of the span context
        let scope = ctx.event_scope(event).unwrap();
        let mut spans = vec![];
        for span in scope.from_root() {
            let extensions = span.extensions();
            let storage = extensions.get::<CustomFieldStorage>().unwrap();
            let field_data: &BTreeMap<String, serde_json::Value> = &storage.0;
            spans.push(serde_json::json!({
                "target": span.metadata().target(),
                "name": span.name(),
                "level": format!("{:?}", span.metadata().level()),
                "fields": field_data,
            }));
        }

        // The fields of the event
        let mut fields = BTreeMap::new();
        let mut visitor = JsonVisitor(&mut fields);
        event.record(&mut visitor);

        // And create our output
        let output = serde_json::json!({
            "target": event.metadata().target(),
            "name": event.metadata().name(),
            "level": format!("{:?}", event.metadata().level()),
            "fields": fields,
            "spans": spans,
        });
        println!("{}", serde_json::to_string_pretty(&output).unwrap());
    }

    fn on_close(&self, id: tracing::span::Id, ctx: tracing_subscriber::layer::Context<'_, S>) {
        // Get the span that is being closed
        let span = ctx.span(&id).unwrap();
        let output = serde_json::json!({
            "target": span.metadata().target(),
            "name": span.name(),
            "level": format!("{:?}", span.metadata().level()),
            "fields": format!("{:?}", span.metadata().fields()),
        });
        println!("On_close{}", serde_json::to_string_pretty(&output).unwrap());
    }
}

struct JsonVisitor<'a>(&'a mut BTreeMap<String, serde_json::Value>);

impl<'a> tracing::field::Visit for JsonVisitor<'a> {
    fn record_f64(&mut self, field: &tracing::field::Field, value: f64) {
        self.0
            .insert(field.name().to_string(), serde_json::json!(value));
    }

    fn record_i64(&mut self, field: &tracing::field::Field, value: i64) {
        self.0
            .insert(field.name().to_string(), serde_json::json!(value));
    }

    fn record_u64(&mut self, field: &tracing::field::Field, value: u64) {
        self.0
            .insert(field.name().to_string(), serde_json::json!(value));
    }

    fn record_bool(&mut self, field: &tracing::field::Field, value: bool) {
        self.0
            .insert(field.name().to_string(), serde_json::json!(value));
    }

    fn record_str(&mut self, field: &tracing::field::Field, value: &str) {
        self.0
            .insert(field.name().to_string(), serde_json::json!(value));
    }

    fn record_error(
        &mut self,
        field: &tracing::field::Field,
        value: &(dyn std::error::Error + 'static),
    ) {
        self.0.insert(
            field.name().to_string(),
            serde_json::json!(value.to_string()),
        );
    }

    fn record_debug(&mut self, field: &tracing::field::Field, value: &dyn std::fmt::Debug) {
        self.0.insert(
            field.name().to_string(),
            serde_json::json!(format!("{:?}", value)),
        );
    }
}

#[derive(Debug)]
struct CustomFieldStorage(BTreeMap<String, serde_json::Value>);
Cargo.toml
[package]
name = "tracing-custom-logging"
version = "0.1.0"
edition = "2021"

[dependencies]
serde_json = "1"
tracing = "0.1"
tracing-subscriber = "0.3.16"
snafu = "0.7.3"
thiserror = "1.0.31"
tracing-opentelemetry = "0.18.0"
Unfortunately I have not been able to get the duration of a span anywhere. Can you help me identify how/where I can get it from?
You cannot "get" the span duration from the tracing crate because it doesn't store it. It only stores the basic metadata and allows for hooking into framework events in a lightweight way. It is the job of the Subscriber to keep track of any additional data.
You could use the tracing-timing crate if you only need periodic histograms. Otherwise, you can't really use data from an existing layer which may already store timing data, because they often don't expose it. You'll have to keep track of it yourself.
Using the tracing-subscriber crate, you can create a Layer and store additional data using the Registry. Here's an example of how that can be done:
use std::time::Instant;
use tracing::span::{Attributes, Id};
use tracing::Subscriber;
use tracing_subscriber::layer::{Context, Layer};
use tracing_subscriber::registry::LookupSpan;

struct Timing {
    started_at: Instant,
}

pub struct CustomLayer;

impl<S> Layer<S> for CustomLayer
where
    S: Subscriber,
    S: for<'lookup> LookupSpan<'lookup>,
{
    fn on_new_span(&self, _attrs: &Attributes<'_>, id: &Id, ctx: Context<'_, S>) {
        let span = ctx.span(id).unwrap();
        span.extensions_mut().insert(Timing {
            started_at: Instant::now(),
        });
    }

    fn on_close(&self, id: Id, ctx: Context<'_, S>) {
        let span = ctx.span(&id).unwrap();
        let started_at = span.extensions().get::<Timing>().unwrap().started_at;
        println!(
            "span {} took {}",
            span.metadata().name(),
            (Instant::now() - started_at).as_micros(),
        );
    }
}
This just prints out the results where they are calculated, but you can emit the results elsewhere, or store it in some shared resource as you see fit.
Some example usage:
use std::time::Duration;
use tracing_subscriber::layer::SubscriberExt;
use tracing_subscriber::util::SubscriberInitExt;

#[tracing::instrument]
fn test(n: u64) {
    std::thread::sleep(Duration::from_secs(n));
}

fn main() {
    tracing_subscriber::registry::Registry::default()
        .with(CustomLayer)
        .init();

    test(1);
    test(2);
    test(3);
}
span test took 1000081
span test took 2000106
span test took 3000127
You may also need to be aware of on_enter() and on_exit(), which matter for async functions: their execution may be suspended and resumed later, and those callbacks tell you when that happens. Depending on what you're looking for, you may need to add filtering so that you only track the spans you're interested in (by name, target, or whatever).
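For completeness, here is a hedged sketch of what tracking busy time with on_enter()/on_exit() could look like, so that a suspended async span doesn't count its idle time (names like BusyTiming are mine, not from any crate):

use std::time::{Duration, Instant};
use tracing::span::{Attributes, Id};
use tracing::Subscriber;
use tracing_subscriber::layer::{Context, Layer};
use tracing_subscriber::registry::LookupSpan;

// Hypothetical per-span storage: accumulated busy time plus the moment of
// the most recent enter, if the span is currently entered.
struct BusyTiming {
    busy: Duration,
    last_entered: Option<Instant>,
}

pub struct BusyTimingLayer;

impl<S> Layer<S> for BusyTimingLayer
where
    S: Subscriber,
    S: for<'lookup> LookupSpan<'lookup>,
{
    fn on_new_span(&self, _attrs: &Attributes<'_>, id: &Id, ctx: Context<'_, S>) {
        ctx.span(id).unwrap().extensions_mut().insert(BusyTiming {
            busy: Duration::ZERO,
            last_entered: None,
        });
    }

    fn on_enter(&self, id: &Id, ctx: Context<'_, S>) {
        let span = ctx.span(id).unwrap();
        let mut ext = span.extensions_mut();
        if let Some(t) = ext.get_mut::<BusyTiming>() {
            t.last_entered = Some(Instant::now());
        }
    }

    fn on_exit(&self, id: &Id, ctx: Context<'_, S>) {
        let span = ctx.span(id).unwrap();
        let mut ext = span.extensions_mut();
        if let Some(t) = ext.get_mut::<BusyTiming>() {
            // Add only the time since the last enter, ignoring suspensions.
            if let Some(entered) = t.last_entered.take() {
                t.busy += entered.elapsed();
            }
        }
    }

    fn on_close(&self, id: Id, ctx: Context<'_, S>) {
        let span = ctx.span(&id).unwrap();
        let ext = span.extensions();
        if let Some(t) = ext.get::<BusyTiming>() {
            println!(
                "span {} was busy for {} microseconds",
                span.metadata().name(),
                t.busy.as_micros(),
            );
        }
    }
}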
I have a bunch of math that has real-time constraints. My main loop will just call this function repeatedly and it will always store results into an existing buffer. However, I want to spawn the threads at init time and then let them do their work and wait for more data. For synchronization I will use a Barrier and have that part working. What I can't get working, despite trying various iterations of Arc and crossbeam, is splitting up the thread spawning and the actual workload. This is what I have now:
pub const WORK_SIZE: usize = 524_288;
pub const NUM_THREADS: usize = 6;
pub const NUM_TASKS_PER_THREAD: usize = WORK_SIZE / NUM_THREADS;

fn main() {
    let mut work: Vec<f64> = Vec::with_capacity(WORK_SIZE);
    for i in 0..WORK_SIZE {
        work.push(i as f64);
    }

    crossbeam::scope(|scope| {
        let threads: Vec<_> = work
            .chunks(NUM_TASKS_PER_THREAD)
            .map(|chunk| scope.spawn(move |_| chunk.iter().cloned().sum::<f64>()))
            .collect();

        let threaded_time = std::time::Instant::now();
        let thread_sum: f64 = threads.into_iter().map(|t| t.join().unwrap()).sum();
        let threaded_micros = threaded_time.elapsed().as_micros() as f64;
        println!("threaded took: {:#?}", threaded_micros);

        let serial_time = std::time::Instant::now();
        let no_thread_sum: f64 = work.iter().cloned().sum();
        let serial_micros = serial_time.elapsed().as_micros() as f64;
        println!("serial took: {:#?}", serial_micros);

        assert_eq!(thread_sum, no_thread_sum);
        println!(
            "Threaded performance was {:?}",
            serial_micros / threaded_micros
        );
    })
    .unwrap();
}
But I can't find a way to spin these threads up in an init function and then pass work to them in a do_work function. I attempted something like this with Arcs and Mutexes but couldn't get everything straight there either. What I want to turn this into is something like the following:
use std::sync::{Arc, Barrier, Mutex};
use std::{slice::Chunks, thread::JoinHandle};

pub const WORK_SIZE: usize = 524_288;
pub const NUM_THREADS: usize = 6;
pub const NUM_TASKS_PER_THREAD: usize = WORK_SIZE / NUM_THREADS;

// simplified version of the actual work the code base will do
fn do_work(data: &[f64], result: Arc<Mutex<f64>>, barrier: Arc<Barrier>) {
    loop {
        barrier.wait();
        let sum = data.iter().cloned().sum::<f64>();
        *result.lock().unwrap() += sum;
    }
}

fn init(
    mut data: Chunks<'_, f64>,
    result: &Arc<Mutex<f64>>,
    barrier: &Arc<Barrier>,
) -> Vec<JoinHandle<()>> {
    let mut handles = Vec::with_capacity(NUM_THREADS);
    // spawn threads; in the actual code these would be stored in a lib crate struct
    for i in 0..NUM_THREADS {
        let result = result.clone();
        let barrier = barrier.clone();
        let chunk = data.nth(i).unwrap();
        handles.push(std::thread::spawn(move || {
            // Pass the particular thread the particular chunk it will operate on.
            do_work(chunk, result, barrier);
        }));
    }
    handles
}

fn main() {
    let mut work: Vec<f64> = Vec::with_capacity(WORK_SIZE);
    let result = Arc::new(Mutex::new(0.0));
    for i in 0..WORK_SIZE {
        work.push(i as f64);
    }
    let work_barrier = Arc::new(Barrier::new(NUM_THREADS + 1));
    let threads = init(work.chunks(NUM_TASKS_PER_THREAD), &result, &work_barrier);

    loop {
        work_barrier.wait();
        // the actual code base would do something with the summation stored in result
        println!("{:?}", result.lock().unwrap());
    }
}
I hope this expresses the intent clearly enough. The issue with this specific implementation is that the chunks don't seem to live long enough, and when I tried wrapping them in an Arc it just moved the "argument doesn't live long enough" error to the Arc::new(data.chunk(_)) line.
You can sidestep the lifetime problem by giving each thread an owned copy of its chunk, so the spawned threads don't borrow from main at all:

use std::sync::{Arc, Barrier, Mutex};
use std::thread;

pub const WORK_SIZE: usize = 524_288;
pub const NUM_THREADS: usize = 6;
pub const NUM_TASKS_PER_THREAD: usize = WORK_SIZE / NUM_THREADS;

// simplified version of the actual work the code base will do
fn do_work(data: &[f64], result: Arc<Mutex<f64>>, barrier: Arc<Barrier>) {
    loop {
        barrier.wait();
        let sum = data.iter().sum::<f64>();
        *result.lock().unwrap() += sum;
    }
}

fn init(
    work: Vec<f64>,
    result: Arc<Mutex<f64>>,
    barrier: Arc<Barrier>,
) -> Vec<thread::JoinHandle<()>> {
    let mut handles = Vec::with_capacity(NUM_THREADS);
    // spawn threads; in the actual code these would be stored in a lib crate struct
    for i in 0..NUM_THREADS {
        // Each thread gets an owned copy of its slice, so there is no
        // borrow of `work` escaping into the 'static spawn closure.
        let slice = work[i * NUM_TASKS_PER_THREAD..(i + 1) * NUM_TASKS_PER_THREAD].to_owned();
        let result = Arc::clone(&result);
        let w = Arc::clone(&barrier);
        handles.push(thread::spawn(move || {
            do_work(&slice, result, w);
        }));
    }
    handles
}

fn main() {
    let mut work: Vec<f64> = Vec::with_capacity(WORK_SIZE);
    let result = Arc::new(Mutex::new(0.0));
    for i in 0..WORK_SIZE {
        work.push(i as f64);
    }
    let work_barrier = Arc::new(Barrier::new(NUM_THREADS + 1));
    let _threads = init(work, Arc::clone(&result), Arc::clone(&work_barrier));

    loop {
        thread::sleep(std::time::Duration::from_secs(3));
        work_barrier.wait();
        // the actual code base would do something with the summation stored in result
        println!("{:?}", result.lock().unwrap());
    }
}
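One caveat, assuming the goal is that main reads the finished sum of each round: a single barrier wait only synchronizes the start of a round, so main can lock result before every worker has added its part. A second wait on the same barrier closes the round; a sketch of the change:

// In do_work's loop:
barrier.wait(); // all NUM_THREADS + 1 parties start the round together
*result.lock().unwrap() += data.iter().sum::<f64>();
barrier.wait(); // the round is complete once everyone arrives here again

// In main's loop:
work_barrier.wait(); // release the workers
work_barrier.wait(); // wait until all partial sums are in
println!("{:?}", result.lock().unwrap());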
I am working on a procedural generation system and want to be able to modify a mesh frame by frame in Bevy (Rust).
I have tried using assets.get_mut(), however this results in an error: help: trait `DerefMut` is required to modify through a dereference, but it is not implemented for `bevy::prelude::Res<'_, bevy::prelude::Assets<bevy::prelude::Mesh>>`
Any help would be greatly appreciated.
Here is what my current code looks like roughly:
// Function which is executed at the very start
fn setup(
    mut commands: Commands,
    mut meshes: ResMut<Assets<Mesh>>,
    mut materials: ResMut<Assets<StandardMaterial>>,
    asset_server: Res<AssetServer>,
) {
    let mut mesh = Mesh::from(bevy::prelude::shape::Icosphere {
        radius: 0.5,
        subdivisions: 10,
    });
    commands
        .spawn()
        .insert_bundle(PbrBundle {
            mesh: meshes.add(mesh),
            material: materials.add(colour.into()),
            ..Default::default()
        })
        .insert(Transform::from_xyz(0.0, 0.0, 0.0));
}
// Function which is executed every frame
fn update_planets(
    mut query: Query<(&Transform, &Handle<Mesh>)>,
    assets: Res<Assets<Mesh>>,
) {
    let (transform, handle) = query.get_single_mut().expect("");
    let mesh = assets.get_mut(handle.id); // Error caused here
    if let Some(mesh) = mesh {
        let positions = mesh.attribute(Mesh::ATTRIBUTE_POSITION).unwrap();
        if let VertexAttributeValues::Float32x3(values) = positions {
            let mut temporary = Vec::new();
            for i in values {
                let temp = Vec3::new(i[0], i[1], i[2]);
                ... // Modify temp here
                temporary.push(temp);
            }
            mesh.insert_attribute(Mesh::ATTRIBUTE_POSITION, temporary);
        }
    }
}
Fairly new to Bevy, but if you want to mutate an asset, I believe you should be using ResMut rather than Res.
i.e. assets: Res<Assets<Mesh>> should actually be mut assets: ResMut<Assets<Mesh>>.
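Putting that together, a minimal sketch of the corrected system (assuming a Bevy version of the same era as the question, where Assets::get_mut takes a reference to the handle):

fn update_planets(
    query: Query<&Handle<Mesh>>,
    mut assets: ResMut<Assets<Mesh>>, // ResMut instead of Res gives mutable access
) {
    let handle = query.single();
    if let Some(mesh) = assets.get_mut(handle) {
        // mutate the mesh's attributes here, e.g. ATTRIBUTE_POSITION
    }
}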
I have no idea how you're meant to go about parsing a multipart form besides doing it manually, using just the raw post-data string as input. I will try to adjust the Hyper example, but any help will be much appreciated.
Relevant issues that might be useful here:
Support Multipart Forms.
support rocket
Rocket's primary abstraction for data is the FromData trait. Given the POST data and the request, you can construct a given type:
pub trait FromData<'a>: Sized {
    type Error;
    type Owned: Borrow<Self::Borrowed>;
    type Borrowed: ?Sized;

    fn transform(
        request: &Request,
        data: Data,
    ) -> Transform<Outcome<Self::Owned, Self::Error>>;

    fn from_data(
        request: &Request,
        outcome: Transformed<'a, Self>,
    ) -> Outcome<Self, Self::Error>;
}
Then, it's just a matter of reading the API for multipart and inserting tab A into slot B:
#![feature(proc_macro_hygiene, decl_macro)]

use multipart::server::Multipart; // 0.16.1, default-features = false, features = ["server"]
use rocket::{
    data::{Data, FromData, Outcome, Transform, Transformed},
    post, routes, Request,
}; // 0.4.2
use std::io::Read;

#[post("/", data = "<upload>")]
fn index(upload: DummyMultipart) -> String {
    format!("I read this: {:?}", upload)
}

#[derive(Debug)]
struct DummyMultipart {
    alpha: String,
    one: i32,
    file: Vec<u8>,
}

// All of the errors in these functions should be reported
impl<'a> FromData<'a> for DummyMultipart {
    type Owned = Vec<u8>;
    type Borrowed = [u8];
    type Error = ();

    fn transform(_request: &Request, data: Data) -> Transform<Outcome<Self::Owned, Self::Error>> {
        let mut d = Vec::new();
        data.stream_to(&mut d).expect("Unable to read");
        Transform::Owned(Outcome::Success(d))
    }

    fn from_data(request: &Request, outcome: Transformed<'a, Self>) -> Outcome<Self, Self::Error> {
        let d = outcome.owned()?;

        let ct = request
            .headers()
            .get_one("Content-Type")
            .expect("no content-type");
        let idx = ct.find("boundary=").expect("no boundary");
        let boundary = &ct[(idx + "boundary=".len())..];

        let mut mp = Multipart::with_body(&d[..], boundary);

        // Custom implementation parts
        let mut alpha = None;
        let mut one = None;
        let mut file = None;

        mp.foreach_entry(|mut entry| match &*entry.headers.name {
            "alpha" => {
                let mut t = String::new();
                entry.data.read_to_string(&mut t).expect("not text");
                alpha = Some(t);
            }
            "one" => {
                let mut t = String::new();
                entry.data.read_to_string(&mut t).expect("not text");
                let n = t.parse().expect("not number");
                one = Some(n);
            }
            "file" => {
                let mut d = Vec::new();
                entry.data.read_to_end(&mut d).expect("not file");
                file = Some(d);
            }
            other => panic!("No known key {}", other),
        })
        .expect("Unable to iterate");

        let v = DummyMultipart {
            alpha: alpha.expect("alpha not set"),
            one: one.expect("one not set"),
            file: file.expect("file not set"),
        };
        // End custom

        Outcome::Success(v)
    }
}

fn main() {
    rocket::ignite().mount("/", routes![index]).launch();
}
I've never used either of these APIs for real, so there's no guarantee that this is a good implementation. In fact, all the panicking on error definitely means it's suboptimal. A production usage would handle all of those cleanly.
However, it does work:
% curl -X POST -F alpha=omega -F one=2 -F file=@hello http://localhost:8000/
I read this: DummyMultipart { alpha: "omega", one: 2, file: [104, 101, 108, 108, 111, 44, 32, 119, 111, 114, 108, 100, 33, 10] }
An advanced implementation would allow for some abstraction between the user-specific data and the generic multipart aspects. Something like Multipart<MyForm> would be nice.
The author of Rocket points out that this solution allows a malicious end user to POST an infinitely sized file, which would cause the machine to run out of memory. Depending on the intended use, you may wish to establish some kind of cap on the number of bytes read, potentially spilling to the filesystem past some threshold.
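For example, under Rocket 0.4's API the transform step could cap the body with std's Read::take (the 2 MiB limit here is an arbitrary illustration, not a recommendation):

fn transform(_request: &Request, data: Data) -> Transform<Outcome<Self::Owned, Self::Error>> {
    // Arbitrary cap for illustration; tune to your use case.
    const LIMIT: u64 = 2 * 1024 * 1024;
    let mut d = Vec::new();
    // Read::take stops after LIMIT bytes, so an unbounded body can no
    // longer exhaust memory; bytes past the cap are simply not read here.
    data.open()
        .take(LIMIT)
        .read_to_end(&mut d)
        .expect("Unable to read");
    Transform::Owned(Outcome::Success(d))
}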
Official support for multipart form parsing in Rocket is still being discussed. Until then, take a look at the official example on how to integrate the multipart crate with Rocket: https://github.com/abonander/multipart/blob/master/examples/rocket.rs