Send an Image with an HttpResponse ActixWeb - rust

I'm trying to send an image in an HttpResponse with Actix Web.
My problem is that I can only return a 'static body, but buffer is a [u8; 4096] and not 'static. Is there any way to send an image?
My response looks like this:
HttpResponse::Ok()
.content_type("image/jpeg")
.body(buffer)
buffer is:
let mut f = fs::File::open(x).expect("Something went wrong");
let mut buffer = [0;4096];
let n = f.read(&mut buffer[..]);
The full function:
fn img_response(x: PathBuf, y: Image)->HttpResponse{
let mut f = fs::File::open(x).expect("Something went wrong");
let mut buffer = [0;4096];
let n = f.read(&mut buffer[..]);
match y{
Image::JPG =>{
HttpResponse::Ok()
.content_type("image/jpeg")
.body(buffer)}
Image::PNG =>{
HttpResponse::Ok()
.content_type("image/png")
.body(buffer)}
Image::ICO => {
HttpResponse::Ok()
.content_type("image/x-icon")
.body(buffer)}
}
}
img_response gets called in my index function:
match path.extension().unwrap().to_str().unwrap(){
"png" => {return img_response(path, Image::PNG);}
"jpeg" => {return img_response(path, Image::JPG);}
"ico" => {return img_response(path, Image::ICO);}
};
full code: https://github.com/Benn1x/Kiwi
The code, compressed:
#![allow(non_snake_case)]
use actix_web::{ web, App, HttpRequest,HttpResponse , HttpServer};
use mime;
use std::path::PathBuf;
use serde_derive::Deserialize;
use std::process::exit;
use toml;
use std::fs::read_to_string;
use actix_web::http::header::ContentType;
use std::fs;
use std::io::prelude::*;
use std::io;
fn img_response(x: PathBuf)->HttpResponse{
let mut f = fs::File::open(x).expect("Something went wrong");
let mut buffer = [0;4096];
let n = f.read(&mut buffer[..]);
HttpResponse::Ok()
.content_type("image/jpeg")
.body(buffer)
}
async fn index(req: HttpRequest) -> HttpResponse {
let mut path: PathBuf = req.match_info().query("file").parse().unwrap();
match path.extension().unwrap().to_str().unwrap(){
"jpeg" => {return img_response(path);}
_ => {return img_response(path);}
}
}
#[actix_web::main]
async fn main() -> std::io::Result<()> {
HttpServer::new(move || {
App::new()
.route("/{file:.*}", web::get().to(index))
.service(actix_files::Files::new("/", ".").index_file("index.html"))
})
.bind(("127.0.0.1", 8080))?
.run()
.await
}
This is main.rs, reduced to just the function that returns an image.

HttpResponse::Ok() returns a HttpResponseBuilder.
Its method .body() takes a generic argument that has to implement the MessageBody trait.
Now here is your problem: [u8; 4096] does not implement MessageBody. What does implement MessageBody, however, is Vec<u8>.
Therefore, by changing your fixed-size array to a dynamic vector, your code compiles:
#![allow(non_snake_case)]
use actix_web::{web, App, HttpRequest, HttpResponse, HttpServer};
use std::fs;
use std::io::prelude::*;
use std::path::PathBuf;
fn img_response(x: PathBuf) -> HttpResponse {
let mut f = fs::File::open(x).expect("Something went wrong");
let mut buffer = vec![0; 4096];
let n = f.read(&mut buffer[..]);
HttpResponse::Ok().content_type("image/jpeg").body(buffer)
}
async fn index(req: HttpRequest) -> HttpResponse {
let mut path: PathBuf = req.match_info().query("file").parse().unwrap();
match path.extension().unwrap().to_str().unwrap() {
"jpeg" => {
return img_response(path);
}
_ => {
return img_response(path);
}
}
}
#[actix_web::main]
async fn main() -> std::io::Result<()> {
HttpServer::new(move || {
App::new()
.route("/{file:.*}", web::get().to(index))
.service(actix_files::Files::new("/", ".").index_file("index.html"))
})
.bind(("127.0.0.1", 8080))?
.run()
.await
}
There are still problems with your code, though:
The buffer doesn't get sliced to the correct size
Images larger than the buffer get cut down to the first 4096 bytes
Here is a working version of your code:
#![allow(non_snake_case)]
use actix_web::{web, App, HttpRequest, HttpResponse, HttpServer};
use std::fs;
use std::io::prelude::*;
use std::path::PathBuf;
fn img_response(x: PathBuf) -> HttpResponse {
let mut f = fs::File::open(x).expect("Something went wrong");
let mut image_data = vec![];
let mut buffer = [0; 4096];
loop {
let n = f.read(&mut buffer[..]).unwrap();
if n == 0 {
break;
}
image_data.extend_from_slice(&buffer[..n]);
}
HttpResponse::Ok()
.content_type("image/jpeg")
.body(image_data)
}
async fn index(req: HttpRequest) -> HttpResponse {
let path: PathBuf = req.match_info().query("file").parse().unwrap();
match path.extension().unwrap().to_str().unwrap() {
"jpeg" => {
return img_response(path);
}
_ => {
return img_response(path);
}
}
}
#[actix_web::main]
async fn main() -> std::io::Result<()> {
HttpServer::new(move || {
App::new()
.route("/{file:.*}", web::get().to(index))
.service(actix_files::Files::new("/", ".").index_file("index.html"))
})
.bind(("127.0.0.1", 8080))?
.run()
.await
}
Note that for large images, it would be beneficial to use the .streaming() body instead:
#![allow(non_snake_case)]
use actix_web::{web, App, Error, HttpRequest, HttpResponse, HttpServer};
use async_stream::{try_stream, AsyncStream};
use bytes::Bytes;
use std::fs;
use std::io::prelude::*;
use std::path::PathBuf;
fn img_response(x: PathBuf) -> HttpResponse {
let stream: AsyncStream<Result<Bytes, Error>, _> = try_stream! {
let mut f = fs::File::open(x)?;
let mut buffer = [0; 4096];
loop{
let n = f.read(&mut buffer[..])?;
if n == 0 {
break;
}
yield Bytes::copy_from_slice(&buffer[..n]);
}
};
HttpResponse::Ok()
.content_type("image/jpeg")
.streaming(stream)
}
async fn index(req: HttpRequest) -> HttpResponse {
let path: PathBuf = req.match_info().query("file").parse().unwrap();
match path.extension().unwrap().to_str().unwrap() {
"jpeg" => {
return img_response(path);
}
_ => {
return img_response(path);
}
}
}
#[actix_web::main]
async fn main() -> std::io::Result<()> {
HttpServer::new(move || {
App::new()
.route("/{file:.*}", web::get().to(index))
.service(actix_files::Files::new("/", ".").index_file("index.html"))
})
.bind(("127.0.0.1", 8080))?
.run()
.await
}

As an alternative to:
fn img_response(x: PathBuf) -> HttpResponse {
let stream: AsyncStream<Result<Bytes, Error>, _> = try_stream! {
let mut f = fs::File::open(x)?;
let mut buffer = [0; 4096];
loop{
let n = f.read(&mut buffer[..])?;
if n == 0 {
break;
}
yield Bytes::copy_from_slice(&buffer[..n]);
}
};
HttpResponse::Ok()
.content_type("image/jpeg")
.streaming(stream)
}
You could do something like (also streams the file):
async fn img_response(x: PathBuf) -> HttpResponse {
let file = actix_files::NamedFile::open_async(x).await.unwrap();
file.into_response()
}
https://github.com/actix/actix-web/discussions/2720#discussioncomment-2510752

Related

How to decode a large file without running out of memory?

I need to convert a ~4 GB IBM866-encoded XML file to UTF-8. I tried this
and this crate, but with both of them I ran out of memory.
I tried to do it this way:
fn ibm866_to_utf8(ibm866: &[u8]) -> Result<String, MyError> {
use encoding_rs::IBM866;
let (utf8, _, had_error) = IBM866.decode(ibm866);
if (had_error == true) {
Err(MyError::DecodingError)
} else {
Ok(utf8.to_string())
}
}
fn main() {
let path = "ibm866_file";
let mut file = File::open(path).unwrap();
let mut vec: Vec<u8> = Vec::with_capacity(file.metadata().unwrap().len() as usize);
file.read_to_end(&mut vec);
let utf8_string = ibm866_to_utf8(&vec).unwrap();
// write to file
}
I also tried to iterate over the file line by line, like this:
fn main() {
let path = "ibm866_file";
let mut file = File::open(path).unwrap();
let mut reader = BufReader::new(file);
let mut utf8_string = String::new();
for line in reader.lines() {
let utf8_line = ibm866_to_utf8(line.unwrap().as_bytes());
utf8_string = format!("{}{}", utf8_string, utf8_line.unwrap());
}
// write to file
}
But it panics when the reader meets a non-UTF-8 character.
How to decode large files properly?
link to file: https://drive.google.com/file/d/1fHFS5GWPhApoNRl3CRK-pRMNthIakcZY/view?usp=sharing
You need to use the streaming functionality of encoding_rs.
This requires a bit of boilerplate code, though, to properly feed the chunks read from a file into the conversion function.
This code seems to work on a simple example as well as on your multi-GB legends.xml file, and reports no conversion errors.
use std::{
fs::File,
io::{Read, Write},
};
use encoding_rs::{CoderResult, IBM866};
const BUF_SIZE: usize = 4096;
struct ConversionBuffers {
buf1: [u8; BUF_SIZE],
buf2: [u8; BUF_SIZE],
buf1_active: bool,
content: usize,
}
impl ConversionBuffers {
fn new() -> Self {
Self {
buf1: [0; BUF_SIZE],
buf2: [0; BUF_SIZE],
buf1_active: true,
content: 0,
}
}
fn move_leftovers_and_flip(&mut self, consumed: usize) {
let (src, dst) = if self.buf1_active {
(&mut self.buf1, &mut self.buf2)
} else {
(&mut self.buf2, &mut self.buf1)
};
let leftover = self.content - consumed;
dst[..leftover].clone_from_slice(&src[consumed..self.content]);
self.buf1_active = !self.buf1_active;
self.content = leftover;
}
fn append(&mut self, append_action: impl FnOnce(&mut [u8]) -> usize) {
let buf = if self.buf1_active {
&mut self.buf1[self.content..]
} else {
&mut self.buf2[self.content..]
};
let appended = append_action(buf);
self.content += appended;
}
fn get_data(&mut self) -> &[u8] {
if self.buf1_active {
&self.buf1[..self.content]
} else {
&self.buf2[..self.content]
}
}
}
fn main() {
let mut decoder = IBM866.new_decoder();
let mut file_in = File::open("test_ibm866.txt").unwrap();
let mut file_out = File::create("out_utf8-2.txt").unwrap();
let mut buffer_in = ConversionBuffers::new();
let mut buffer_out = vec![0u8; decoder.max_utf8_buffer_length(BUF_SIZE).unwrap_or(BUF_SIZE)];
let mut file_eof = false;
let mut errors = false;
loop {
if !file_eof {
buffer_in.append(|buf| {
let num_read = file_in.read(buf).unwrap();
if num_read == 0 {
file_eof = true;
}
num_read
});
}
let (result, num_consumed, num_produced, had_error) =
decoder.decode_to_utf8(buffer_in.get_data(), &mut buffer_out, file_eof);
if had_error {
errors = true;
}
let produced_data = &buffer_out[..num_produced];
file_out.write_all(produced_data).unwrap();
if file_eof && result == CoderResult::InputEmpty {
break;
}
buffer_in.move_leftovers_and_flip(num_consumed);
}
println!("Had conversion errors: {:?}", errors);
}
As @BurntSushi5 pointed out, there is the encoding_rs_io crate that lets us skip all the boilerplate code:
use std::fs::File;
use encoding_rs::IBM866;
use encoding_rs_io::DecodeReaderBytesBuilder;
fn main() {
let file_in = File::open("test_ibm866.txt").unwrap();
let mut file_out = File::create("out_utf8.txt").unwrap();
let mut decoded_stream = DecodeReaderBytesBuilder::new()
.encoding(Some(IBM866))
.build(file_in);
std::io::copy(&mut decoded_stream, &mut file_out).unwrap();
}

Why does this program block until someone connects to the FIFO / named pipe?

I found this script in the post Recommended way of IPC in Rust where a server and client are created with named pipes.
I want to understand how it works, so I started debugging. When I start the server with cargo run listen, the program reaches the open function and blocks there. I know this is a feature and not a bug, but I do not understand why it happens.
In the main function the listen function is called and then the listen function calls the open function:
use libc::{c_char, mkfifo};
use serde::{Deserialize, Serialize};
use std::env::args;
use std::fs::{File, OpenOptions};
use std::io::{Error, Read, Result, Write};
use std::os::unix::ffi::OsStrExt;
use std::path::{Path, PathBuf};
fn main() -> Result<()> {
let mut args = args();
let _ = args.next();
match args.next().as_ref().map(String::as_str) {
Some("listen") => listen()?,
Some("send") => {
let msg = args.next().unwrap();
send(msg)?;
}
_ => {
eprintln!("Please either listen or send.");
}
}
Ok(())
}
pub struct Fifo {
path: PathBuf,
}
impl Fifo {
pub fn new(path: PathBuf) -> Result<Self> {
let os_str = path.clone().into_os_string();
let slice = os_str.as_bytes();
let mut bytes = Vec::with_capacity(slice.len() + 1);
bytes.extend_from_slice(slice);
bytes.push(0); // zero terminated string
let _ = std::fs::remove_file(&path);
if unsafe { mkfifo((&bytes[0]) as *const u8 as *const c_char, 0o644) } != 0 {
Err(Error::last_os_error())
} else {
Ok(Fifo { path })
}
}
/// Blocks until anyone connects to this fifo.
pub fn open(&self) -> Result<FifoHandle> {
let mut pipe = OpenOptions::new().read(true).open(&self.path)?;
let mut pid_bytes = [0u8; 4];
pipe.read_exact(&mut pid_bytes)?;
let pid = u32::from_ne_bytes(pid_bytes);
drop(pipe);
let read = OpenOptions::new()
.read(true)
.open(format!("/tmp/rust-fifo-read.{}", pid))?;
let write = OpenOptions::new()
.write(true)
.open(format!("/tmp/rust-fifo-write.{}", pid))?;
Ok(FifoHandle { read, write })
}
}
impl Drop for Fifo {
fn drop(&mut self) {
let _ = std::fs::remove_file(&self.path);
}
}
#[derive(Serialize, Deserialize)]
pub enum Message {
Print(String),
Ack(),
}
pub struct FifoHandle {
read: File,
write: File,
}
impl FifoHandle {
pub fn open<P: AsRef<Path>>(path: P) -> Result<Self> {
let pid = std::process::id();
let read_fifo_path = format!("/tmp/rust-fifo-write.{}", pid);
let read_fifo = Fifo::new(read_fifo_path.into())?;
let write_fifo_path = format!("/tmp/rust-fifo-read.{}", pid);
let write_fifo = Fifo::new(write_fifo_path.into())?;
let mut pipe = OpenOptions::new().write(true).open(path.as_ref())?;
let pid_bytes: [u8; 4] = u32::to_ne_bytes(pid);
pipe.write_all(&pid_bytes)?;
pipe.flush()?;
let write = OpenOptions::new().write(true).open(&write_fifo.path)?;
let read = OpenOptions::new().read(true).open(&read_fifo.path)?;
Ok(Self { read, write })
}
pub fn send_message(&mut self, msg: &Message) -> Result<()> {
let msg = bincode::serialize(msg).expect("Serialization failed");
self.write.write_all(&usize::to_ne_bytes(msg.len()))?;
self.write.write_all(&msg[..])?;
self.write.flush()
}
pub fn recv_message(&mut self) -> Result<Message> {
let mut len_bytes = [0u8; std::mem::size_of::<usize>()];
self.read.read_exact(&mut len_bytes)?;
let len = usize::from_ne_bytes(len_bytes);
let mut buf = vec![0; len];
self.read.read_exact(&mut buf[..])?;
Ok(bincode::deserialize(&buf[..]).expect("Deserialization failed"))
}
}
fn listen() -> Result<()> {
let fifo = Fifo::new(PathBuf::from("/tmp/rust-fifo"))?;
loop {
let mut handle = fifo.open()?;
std::thread::spawn(move || {
match handle.recv_message().expect("Failed to receive message") {
Message::Print(p) => println!("{}", p),
Message::Ack() => panic!("Didn't expect Ack now."),
}
#[allow(deprecated)]
std::thread::sleep_ms(1000);
handle
.send_message(&Message::Ack())
.expect("Send message failed.");
});
}
}
fn send(s: String) -> Result<()> {
let mut handle = FifoHandle::open("/tmp/rust-fifo")?;
#[allow(deprecated)]
std::thread::sleep_ms(1000);
handle.send_message(&Message::Print(s))?;
match handle.recv_message()? {
Message::Print(p) => println!("{}", p),
Message::Ack() => {}
}
Ok(())
}

tokio::timeout on Stream always occurs

I'm trying to accept a UDP message, but only if it arrives within 5 seconds. I have a Stream abstraction built two ways: by manually implementing Stream, and with the combinators from the futures library. Either way, after the recv_from future resolves, the duration still expires and the stream returns an Err(Elapsed(())). This is not the expected behaviour: if a value is returned, no error should be returned afterwards.
The expected behaviour is that the stream resolves to either the timeout or the Vec, not one and then the other 5 seconds later.
use futures::{pin_mut, ready, stream::unfold, FutureExt};
use tokio::{
net::{udp, UdpSocket},
stream::{Stream, StreamExt},
time::{self, Duration},
};
use std::{
io,
net::SocketAddr,
pin::Pin,
task::{Context, Poll},
};
#[derive(Debug)]
pub(crate) struct UdpStream {
stream: udp::RecvHalf,
}
impl UdpStream {
fn new(stream: udp::RecvHalf) -> Self {
Self { stream }
}
fn stream(self) -> impl Stream<Item = io::Result<(Vec<u8>, SocketAddr)>> {
unfold(self.stream, |mut stream| async move {
let mut buf = [0; 4096];
match time::timeout(Duration::from_secs(5), stream.recv_from(&mut buf)).await {
Ok(Ok((len, src))) => {
Some((Ok((buf.iter().take(len).cloned().collect(), src)), stream))
}
e => {
println!("{:?}", e);
None
}
}
})
}
}
impl Stream for UdpStream {
type Item = io::Result<(Vec<u8>, SocketAddr)>;
fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
let socket = &mut self.stream;
pin_mut!(socket);
let mut buf = [0u8; 4096];
let (len, src) = ready!(Box::pin(socket.recv_from(&mut buf)).poll_unpin(cx))?;
Poll::Ready(Some(Ok((buf.iter().take(len).cloned().collect(), src))))
}
}
async fn listen_udp(addr: SocketAddr) -> io::Result<()> {
let udp = UdpSocket::bind(addr).await?;
let (mut udp_recv, mut udp_send) = udp.split();
let mut msg_stream = Box::pin(UdpStream::new(udp_recv).stream());
// use the manually implemented stream with this:
// let mut msg_stream = UdpStream::new(udp_recv).timeout(Duration::from_secs(5));
while let Some(msg) = msg_stream.next().await {
match msg {
Ok((buf, src)) => {
udp_send.send_to(&buf, &src).await?;
println!("Message recv: {:?}", buf);
}
Err(e) => {
eprintln!("timed out: {:?}", e);
}
}
}
Ok(())
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync + 'static>> {
listen_udp("127.0.0.1:9953".parse()?).await?;
Ok(())
}
You can try it by running this code and making UDP requests with echo "foo" | nc 127.0.0.1 9953 -u, or with dig.
Cargo.toml:
[package]
name = "udp_test"
version = "0.1.0"
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
tokio = { version = "0.2", features = ["full"] }
futures = "0.3"
After your stream returns the first (and only) element, it goes back to waiting for the next one; it never ends until the timeout.
Basically a stream abstraction is not necessary here. A future wrapped in a timeout will do:
use std::{io, net::SocketAddr};
use tokio::{
net::UdpSocket,
time::{self, Duration},
};
async fn listen_udp(addr: SocketAddr) -> io::Result<()> {
let mut udp = UdpSocket::bind(addr).await?;
let mut buf = [0; 4096];
match time::timeout(Duration::from_secs(5), udp.recv_from(&mut buf)).await? {
Ok((count, src)) => {
udp.send_to(&buf[..count], &src).await?;
println!("Message recv: {:?}", &buf[..count]);
}
Err(e) => {
eprintln!("timed out: {:?}", e);
}
}
Ok(())
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
listen_udp("127.0.0.1:9953".parse()?).await?;
Ok(())
}

Deserializing newline-delimited JSON from a socket using Serde

I am trying to use Serde to send a JSON struct from a client to a server. A newline from the client marks the end of a message. My server looks like this:
#[derive(Serialize, Deserialize, Debug)]
struct Point3D {
x: u32,
y: u32,
z: u32,
}
fn handle_client(mut stream: TcpStream) -> Result<(), Error> {
println!("Incoming connection from: {}", stream.peer_addr()?);
let mut buffer = [0; 512];
loop {
let bytes_read = stream.read(&mut buffer)?;
if bytes_read == 0 {
return Ok(());
}
let buf_str: &str = str::from_utf8(&buffer).expect("Boom");
let input: Point3D = serde_json::from_str(&buf_str)?;
let result: String = (input.x.pow(2) + input.y.pow(2) + input.z.pow(2)).to_string();
stream.write(result.as_bytes())?;
}
}
fn main() {
let args: Vec<_> = env::args().collect();
if args.len() != 2 {
eprintln!("Please provide --client or --server as argument");
std::process::exit(1);
}
if args[1] == "--server" {
let listener = TcpListener::bind("0.0.0.0:8888").expect("Could not bind");
for stream in listener.incoming() {
match stream {
Err(e) => eprintln!("failed: {}", e),
Ok(stream) => {
thread::spawn(move || {
handle_client(stream).unwrap_or_else(|error| eprintln!("{:?}", error));
});
}
}
}
} else if args[1] == "--client" {
let mut stream = TcpStream::connect("127.0.0.1:8888").expect("Could not connect to server");
println!("Please provide a 3D point as three comma separated integers");
loop {
let mut input = String::new();
let mut buffer: Vec<u8> = Vec::new();
stdin()
.read_line(&mut input)
.expect("Failed to read from stdin");
let parts: Vec<&str> = input.trim_matches('\n').split(',').collect();
let point = Point3D {
x: parts[0].parse().unwrap(),
y: parts[1].parse().unwrap(),
z: parts[2].parse().unwrap(),
};
stream
.write(serde_json::to_string(&point).unwrap().as_bytes())
.expect("Failed to write to server");
let mut reader = BufReader::new(&stream);
reader
.read_until(b'\n', &mut buffer)
.expect("Could not read into buffer");
print!(
"{}",
str::from_utf8(&buffer).expect("Could not write buffer as string")
);
}
}
}
How do I know what length of buffer to allocate before reading in the string? If my buffer is too large, serde fails to deserialize it with an error saying that there are invalid characters. Is there a better way to do this?
Place the TcpStream into a BufReader. This allows you to read until a specific byte (in this case a newline). You can then parse the read bytes with Serde:
use std::io::{BufRead, BufReader};
use std::io::Write;
fn handle_client(mut stream: TcpStream) -> Result<(), Error> {
let mut data = Vec::new();
let mut stream = BufReader::new(stream);
loop {
data.clear();
let bytes_read = stream.read_until(b'\n', &mut data)?;
if bytes_read == 0 {
return Ok(());
}
let input: Point3D = serde_json::from_slice(&data)?;
let value = input.x.pow(2) + input.y.pow(2) + input.z.pow(2);
write!(stream.get_mut(), "{}", value)?;
}
}
I'm being a little fancy by reusing the allocation of data, which means it's very important to reset the buffer at the beginning of each loop. I also avoid allocating memory for the result and just print directly to the output stream.

How can I change the return type of this function?

I'm going through the Matasano crypto challenges in Rust, with rust-crypto for the AES implementation. I have this function to do basic ECB-mode encryption (taken nearly verbatim from the rust-crypto repository's example):
pub fn aes_enc_ecb_128(key: &[u8], data: &[u8])
-> Result<Vec<u8>, symmetriccipher::SymmetricCipherError> {
let mut encryptor = aes::ecb_encryptor(
aes::KeySize::KeySize128,
key,
blockmodes::NoPadding);
let mut final_result = Vec::<u8>::new();
let mut read_buffer = buffer::RefReadBuffer::new(data);
let mut buffer = [0; 4096];
let mut write_buffer = buffer::RefWriteBuffer::new(&mut buffer);
loop {
let result = encryptor.encrypt(&mut read_buffer,
&mut write_buffer,
true);
final_result.extend(write_buffer
.take_read_buffer()
.take_remaining().iter().map(|&i| i));
match result {
Ok(BufferResult::BufferUnderflow) => break,
Ok(_) => {},
Err(e) => return Err(e)
}
}
Ok(final_result)
}
The above version compiles with no problem, and works as expected. However, to make it fit with the rest of my error handling scheme I'd like to change the return type to Result<Vec<u8>,&'static str>. This is the function with that change applied:
pub fn aes_enc_ecb_128(key: &[u8], data: &[u8])
-> Result<Vec<u8>, &'static str> {
let mut encryptor = aes::ecb_encryptor(
aes::KeySize::KeySize128,
key,
blockmodes::NoPadding);
let mut final_result = Vec::<u8>::new();
let mut read_buffer = buffer::RefReadBuffer::new(data);
let mut buffer = [0; 4096];
let mut write_buffer = buffer::RefWriteBuffer::new(&mut buffer);
loop {
let result = encryptor.encrypt(&mut read_buffer,
&mut write_buffer,
true);
final_result.extend(write_buffer
.take_read_buffer()
.take_remaining().iter().map(|&i| i));
match result {
Ok(BufferResult::BufferUnderflow) => break,
Ok(_) => {},
Err(_) => return Err("Encryption failed")
}
}
Ok(final_result)
}
When I attempt to compile this version, I get the following error (paths removed for clarity):
error: source trait is private
let result = encryptor.encrypt(&mut read_buffer,
&mut write_buffer,
true);
error: source trait is private
let r = decryptor.decrypt(&mut read_buffer, &mut write_buffer, true);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The only way I've been able to change this type is to wrap the original function in a conversion function like this:
pub fn converted_enc(key: &[u8], data: &[u8])
-> Result<Vec<u8>, &'static str> {
match aes_enc_ecb_128(key,data) {
Ok(v) => Ok(v),
Err(_) => Err("Encryption failed")
}
}
What should I do instead of the above in order to get the return value to fit with the rest of my API, and why is the more direct method failing?
I'm using the following versions of rust/cargo:
rustc 1.2.0-nightly (0cc99f9cc 2015-05-17) (built 2015-05-18)
cargo 0.2.0-nightly (ac61996 2015-05-17) (built 2015-05-17)
I think you have come across a compiler bug; your code should compile.
You can use crypto::symmetriccipher::Encryptor; as a workaround:
pub fn aes_enc_ecb_128(key: &[u8], data: &[u8])
-> Result<Vec<u8>, &'static str> {
use crypto::symmetriccipher::Encryptor;
let mut encryptor = aes::ecb_encryptor(
aes::KeySize::KeySize128,
key,
blockmodes::NoPadding);
let mut final_result = Vec::<u8>::new();
let mut read_buffer = buffer::RefReadBuffer::new(data);
let mut buffer = [0; 4096];
let mut write_buffer = buffer::RefWriteBuffer::new(&mut buffer);
loop {
let result = encryptor.encrypt(&mut read_buffer,
&mut write_buffer,
true);
final_result.extend(write_buffer
.take_read_buffer()
.take_remaining().iter().map(|&i| i));
match result {
Ok(BufferResult::BufferUnderflow) => break,
Ok(_) => {},
Err(_) => return Err("Encryption failed")
}
}
Ok(final_result)
}