Stack overflow issue in Rust when making a simple VM

I have come to you today with an error I sadly cannot seem to fix.
First, let me explain what I'm trying to do. I am writing a little VM in Rust and have just finished the bare minimum for it, as you can tell by how unfinished the code is. I have made a system where you can load a program into a chosen spot in memory, so you can jump to that spot for subroutines later on. The run function on the Arcate struct starts executing at address 0x0000.
As you can also see, I have gone with a data bus so I can add external memory devices later, e.g. another ArcateMem struct acting as a different "drive".
It seems I am getting a stack overflow, but since Rust's stack overflow messages are not very descriptive, all I am told is that it happens in main.
Thanks for your help, and sorry if it is a stupid mistake; I'm a little new to Rust.
main.rs
#![allow(dead_code)]
type Instr = u8;
type Program = Vec<Instr>;
#[derive(PartialEq, Eq)]
enum Signals {
BusWrite,
BusRead,
ErrNoInstruction,
Success,
Halt,
}
struct Arcate {
mem : ArcateMem,
ci: Instr,
pc: i32,
acr: i32,
gp1: i32,
gp2: i32,
gp3: i32,
gp4: i32,
gp5: i32,
gp6: i32,
gp7: i32,
gp8: i32,
}
impl Arcate {
fn new(memory: ArcateMem) -> Self {
Arcate {
mem: memory,
ci: 0,
pc: 0,
acr: 0,
gp1: 0,
gp2: 0,
gp3: 0,
gp4: 0,
gp5: 0,
gp6: 0,
gp7: 0,
gp8: 0,
}
}
fn fetchi(&mut self) {
let i: Instr = ArcateBus {data: 0, addr: self.pc, todo: Signals::BusRead, mem: self.mem}.exec();
self.ci = i;
self.pc += 1;
}
fn fetcha(&mut self) -> Instr{
let i: Instr = ArcateBus {data: 0, addr: self.pc, todo: Signals::BusRead, mem: self.mem}.exec();
self.pc += 1;
i
}
fn getr(&mut self, reg: Instr) -> i32 {
match reg {
0x01 => { self.ci as i32},
0x02 => { self.pc },
0x03 => { self.acr },
0x04 => { self.gp1 },
0x05 => { self.gp2 },
0x06 => { self.gp3 },
0x07 => { self.gp4 },
0x08 => { self.gp5 },
0x09 => { self.gp6 },
0x0a => { self.gp7 },
0x0b => { self.gp8 },
_ => { panic!("Register not found {}", reg) }
}
}
fn setr(&mut self, reg: Instr, val: i32) {
match reg {
0x01 => { self.ci = val as u8 },
0x02 => { self.pc = val },
0x03 => { self.acr = val },
0x04 => { self.gp1 = val },
0x05 => { self.gp2 = val },
0x06 => { self.gp3 = val },
0x07 => { self.gp4 = val },
0x08 => { self.gp5 = val },
0x09 => { self.gp6 = val },
0x0a => { self.gp7 = val },
0x0b => { self.gp8 = val },
_ => { panic!("Register not found {}", reg) }
}
}
fn dbg(&mut self, regs: bool, mem: bool) {
if regs {
println!("ci : {}", self.ci );
println!("pc : {}", self.pc );
println!("acr: {}", self.acr);
println!("gp1: {}", self.gp1);
println!("gp2: {}", self.gp2);
println!("gp3: {}", self.gp3);
println!("gp4: {}", self.gp4);
println!("gp5: {}", self.gp5);
println!("gp6: {}", self.gp6);
println!("gp7: {}", self.gp7);
println!("gp8: {}\n", self.gp8);
}
}
fn exec(&mut self) -> Signals {
match self.ci {
// 01: 2 args. movir imm, reg
0x01 => {
let imm1 = self.fetcha();
let imm2 = self.fetcha();
let imm = ((imm1 as i32) << 8) + imm2 as i32;
let reg = self.fetcha();
self.setr(reg, imm);
Signals::Success
}
// 02: 2 args. movrr reg, reg
0x02 => {
let regf = self.fetcha();
let regt = self.fetcha();
let regv = self.getr(regf);
self.setr(regt, regv);
Signals::Success
}
// 03: 2 args. movrm reg, mem
// 04: 2 args. movim imm, mem
// 05: 2 args. addrr reg, reg
0x05 => {
let rego = self.fetcha();
let regt = self.fetcha();
let regov = self.getr(rego);
let regtv = self.getr(regt);
self.acr = regov + regtv;
Signals::Success
}
// 06: 2 args. addir imm, reg
// ff: 0 args. halt
0xff => {
Signals::Halt
}
_ => {
Signals::ErrNoInstruction
}
}
}
fn load(&mut self, prog: Program, loc: usize) {
let mut ld = 0;
for i in loc..prog.len() {
self.mem.mem[i] = prog[ld];
println!("{:2x} -> {:12x}", prog[ld], i);
ld += 1;
}
}
fn run(&mut self, dbgR: bool) {
let mut sig: Signals = Signals::Success;
self.dbg(dbgR, false);
while sig == Signals::Success {
self.fetchi();
sig = self.exec();
self.dbg(dbgR, false);
}
}
}
struct ArcateBus {
data: Instr,
addr: i32,
todo: Signals,
mem : ArcateMem,
}
impl ArcateBus {
fn exec(&mut self) -> Instr {
if self.todo == Signals::BusWrite {
self.mem.mem[self.addr as usize] = self.data;
0x00
} else if self.todo == Signals::BusRead {
self.mem.mem[self.addr as usize]
}else {
0xFF
}
}
}
#[derive(Copy, Clone)]
struct ArcateMem {
mem: [Instr; 0xFFFF*0xFFFF],
}
impl ArcateMem {
fn new() -> Self {
ArcateMem {
mem: [0; 0xFFFF*0xFFFF],
}
}
}
fn main() {
let mem: ArcateMem = ArcateMem::new();
let mut arc: Arcate = Arcate::new(mem);
let prog = vec![
0x01, 0xfe, 0xfe, 0x04,
0x01, 0x01, 0x01, 0x05,
0x05, 0x04, 0x05,
0xff,
];
arc.load(prog, 0x0000);
arc.run(true);
}

I think the issue is in ArcateMem: you are allocating 0xFFFF*0xFFFF Instrs, which is about 4 GB. The way the code is written right now, that array lives on the stack, which generally can't support an allocation that large. You'll probably want to use a Box so the memory is allocated on the heap, which can handle allocations of this size.
You could possibly configure your operating system to allow a larger stack, but I'd recommend using the heap.
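As a rough sketch (not a drop-in fix: with a Box inside it, ArcateMem can no longer derive Copy, so ArcateBus would have to borrow the memory instead of taking it by value), the fixed-size array can be replaced with a boxed slice so the allocation happens on the heap:
struct ArcateMem {
    mem: Box<[Instr]>,
}

impl ArcateMem {
    fn new() -> Self {
        ArcateMem {
            // vec! allocates its storage on the heap, so no huge stack frame is built.
            mem: vec![0; 0xFFFF * 0xFFFF].into_boxed_slice(),
        }
    }
}
Indexing with self.mem.mem[addr] keeps working, because a boxed slice dereferences to [Instr].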

Related

How to add partitions in a Kafka Rust configuration

I want to configure this file to add a number-of-partitions option; by default it creates only 1 partition, but I need 10 for my data.
I don't have much knowledge of the rdkafka library in Rust, as I am using this plugin file directly.
Can anyone guide me to where I can find a solution, or point me in the right direction?
Thanks
use rdkafka::error::{KafkaError};
use rdkafka::{ClientConfig};
use rdkafka::producer::{FutureProducer, FutureRecord};
use std::fmt::Error;
use std::os::raw::{c_char, c_int, c_void};
use std::sync::mpsc::TrySendError;
use suricata::conf::ConfNode;
use suricata::{SCLogError, SCLogNotice};
const DEFAULT_BUFFER_SIZE: &str = "65535";
const DEFAULT_CLIENT_ID: &str = "rdkafka";
#[derive(Debug, Clone)]
struct ProducerConfig {
brokers: String,
topic: String,
client_id: String,
buffer: usize,
}
impl ProducerConfig {
fn new(conf: &ConfNode) -> Result<Self,Error> {
let brokers = if let Some(val) = conf.get_child_value("brokers"){
val.to_string()
}else {
SCLogError!("brokers parameter required!");
panic!();
};
let topic = if let Some(val) = conf.get_child_value("topic"){
val.to_string()
}else {
SCLogError!("topic parameter required!");
panic!();
};
let client_id = conf.get_child_value("client-id").unwrap_or(DEFAULT_CLIENT_ID);
let buffer_size = match conf
.get_child_value("buffer-size")
.unwrap_or(DEFAULT_BUFFER_SIZE)
.parse::<usize>()
{
Ok(size) => size,
Err(_) => {
SCLogError!("invalid buffer-size!");
panic!();
},
};
let config = ProducerConfig {
brokers: brokers.into(),
topic: topic.into(),
client_id: client_id.into(),
buffer: buffer_size,
};
Ok(config)
}
}
struct KafkaProducer {
producer: FutureProducer,
config: ProducerConfig,
rx: std::sync::mpsc::Receiver<String>,
count: usize,
}
impl KafkaProducer {
fn new(
config: ProducerConfig,
rx: std::sync::mpsc::Receiver<String>,
) -> Result<Self,KafkaError> {
let producer: FutureProducer = ClientConfig::new()
.set("bootstrap.servers", &config.brokers)
.set("client.id",&config.client_id)
.set("message.timeout.ms", "5000")
.create()?;
Ok(Self {
config,
producer,
rx,
count: 0,
})
}
fn run(&mut self) {
// Get a peekable iterator from the incoming channel. This allows us to
// get the next message from the channel without removing it, we can
// then remove it once its been sent to the server without error.
//
// Not sure how this will work with pipe-lining tho, will probably have
// to do some buffering here, or just accept that any log records
// in-flight will be lost.
let mut iter = self.rx.iter().peekable();
loop {
if let Some(buf) = iter.peek() {
self.count += 1;
if let Err(err) = self.producer.send_result(
FutureRecord::to(&self.config.topic)
.key("")
.payload(&buf),
) {
SCLogError!("Failed to send event to Kafka: {:?}", err);
break;
} else {
// Successfully sent. Pop it off the channel.
let _ = iter.next();
}
} else {
break;
}
}
SCLogNotice!("Producer finished: count={}", self.count,);
}
}
struct Context {
tx: std::sync::mpsc::SyncSender<String>,
count: usize,
dropped: usize,
}
unsafe extern "C" fn output_open(conf: *const c_void, init_data: *mut *mut c_void) -> c_int {
// Load configuration.
let config = ProducerConfig::new(&ConfNode::wrap(conf)).unwrap();
let (tx, rx) = std::sync::mpsc::sync_channel(config.buffer);
let mut kafka_producer = match KafkaProducer::new(config, rx) {
Ok(producer) => {
SCLogNotice!(
"KafKa Producer initialize success with brokers:{:?} | topic: {:?} | client_id: {:?} | buffer-size: {:?}",
producer.config.brokers,
producer.config.topic,
producer.config.client_id,
producer.config.buffer
);
producer
}
Err(err) => {
SCLogError!("Failed to initialize Kafka Producer: {:?}", err);
panic!()
}
};
let context = Context {
tx,
count: 0,
dropped: 0,
};
std::thread::spawn(move || {kafka_producer.run()});
// kafka_producer.run();
*init_data = Box::into_raw(Box::new(context)) as *mut _;
0
}
unsafe extern "C" fn output_close(init_data: *const c_void) {
let context = Box::from_raw(init_data as *mut Context);
SCLogNotice!(
"Kafka produce finished: count={}, dropped={}",
context.count,
context.dropped
);
std::mem::drop(context);
}
unsafe extern "C" fn output_write(
buffer: *const c_char,
buffer_len: c_int,
init_data: *const c_void,
) -> c_int {
let context = &mut *(init_data as *mut Context);
let buf = if let Ok(buf) = ffi::str_from_c_parts(buffer, buffer_len) {
buf
} else {
return -1;
};
context.count += 1;
if let Err(err) = context.tx.try_send(buf.to_string()) {
context.dropped += 1;
match err {
TrySendError::Full(_) => {
SCLogError!("Eve record lost due to full buffer");
}
TrySendError::Disconnected(_) => {
SCLogError!("Eve record lost due to broken channel{}",err);
}
}
}
00
}
unsafe extern "C" fn init_plugin() {
let file_type =
ffi::SCPluginFileType::new("kafka", output_open, output_close, output_write);
ffi::SCPluginRegisterFileType(file_type);
}
#[no_mangle]
extern "C" fn SCPluginRegister() -> *const ffi::SCPlugin {
// Rust plugins need to initialize some Suricata internals so stuff like logging works.
suricata::plugin::init();
// Register our plugin.
ffi::SCPlugin::new("Kafka Eve Filetype", "GPL-2.0", "1z3r0", init_plugin)
}
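For what it's worth, the partition count is a property of the topic rather than of the producer above, so it is not something the ClientConfig in KafkaProducer::new can set. One option, sketched below under the assumption that the plugin is allowed to create the topic itself and that rdkafka's admin API is available in your build, is to create the topic with the desired number of partitions before producing to it:
use rdkafka::admin::{AdminClient, AdminOptions, NewTopic, TopicReplication};
use rdkafka::client::DefaultClientContext;
use rdkafka::error::KafkaError;
use rdkafka::ClientConfig;

// Hypothetical helper: create `topic` with `partitions` partitions.
// A replication factor of 1 is an assumption; adjust it for your cluster.
async fn ensure_topic(brokers: &str, topic: &str, partitions: i32) -> Result<(), KafkaError> {
    let admin: AdminClient<DefaultClientContext> = ClientConfig::new()
        .set("bootstrap.servers", brokers)
        .create()?;
    let new_topic = NewTopic::new(topic, partitions, TopicReplication::Fixed(1));
    admin
        .create_topics(&[new_topic], &AdminOptions::new())
        .await?;
    Ok(())
}
Alternatively, create or alter the topic on the broker side (for example with the kafka-topics.sh tool) and leave the producer code unchanged.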

Procedural macro for retrieving data from a nested struct by index

I am trying to write a Rust derive macro for retrieving data from a nested struct by index. The struct only contains the primitive types u8, i8, u16, i16, u32, i32, u64, and i64, or other structs made of them. I have an enum which encapsulates the leaf field data in a common type, which I call Item. I want the macro to create a .get() implementation which returns an Item based on a u16 index.
Here is the desired behavior.
#[derive(Debug, PartialEq, PartialOrd, Copy, Clone)]
pub enum Item {
U8(u8),
I8(i8),
U16(u16),
I16(i16),
U32(u32),
I32(i32),
U64(u64),
I64(i64),
}
struct NestedData {
a: u16,
b: i32,
}
#[derive(GetItem)]
struct Data {
a: i32,
b: u64,
c: NestedData,
}
let data = Data {
a: 42,
b: 1000,
c: NestedData { a: 500, b: -2 },
};
assert_eq!(data.get(0).unwrap(), Item::I32(42));
assert_eq!(data.get(1).unwrap(), Item::U64(1000));
assert_eq!(data.get(2).unwrap(), Item::U16(500));
assert_eq!(data.get(3).unwrap(), Item::I32(-2));
For this particular example, I want the macro to expand to the following...
impl Data {
pub fn get(&self, index: u16) -> Result<Item, Error> {
match index {
0 => Ok(Item::I32(self.a)),
1 => Ok(Item::U64(self.b)),
2 => Ok(Item::U16(self.c.a)),
3 => Ok(Item::I32(self.c.b)),
_ => Err(Error::BadIndex),
}
}
}
I have a working macro for a single layer struct, but I am not sure about how to modify it to support nested structs. Here is where I am at...
use proc_macro2::TokenStream;
use quote::quote;
use syn::{Data, DataStruct, DeriveInput, Fields, Type, TypePath};
pub fn impl_get_item(input: DeriveInput) -> syn::Result<TokenStream> {
let model_name = input.ident;
let fields = match input.data {
Data::Struct(DataStruct {
fields: Fields::Named(fields),
..
}) => fields.named,
_ => panic!("The GetItem derive can only be applied to structs"),
};
let mut matches = TokenStream::new();
let mut item_index: u16 = 0;
for field in fields {
let item_name = field.ident;
let item_type = field.ty;
let ts = match item_type {
Type::Path(TypePath { path, .. }) if path.is_ident("u8") => {
quote! {#item_index => Ok(Item::U8(self.#item_name)),}
}
Type::Path(TypePath { path, .. }) if path.is_ident("i8") => {
quote! {#item_index => Ok(Item::I8(self.#item_name)),}
}
Type::Path(TypePath { path, .. }) if path.is_ident("u16") => {
quote! {#item_index => Ok(Item::U16(self.#item_name)),}
}
Type::Path(TypePath { path, .. }) if path.is_ident("i16") => {
quote! {#item_index => Ok(Item::I16(self.#item_name)),}
}
Type::Path(TypePath { path, .. }) if path.is_ident("u32") => {
quote! {#item_index => Ok(Item::U32(self.#item_name)),}
}
Type::Path(TypePath { path, .. }) if path.is_ident("i32") => {
quote! {#item_index => Ok(Item::I32(self.#item_name)),}
}
Type::Path(TypePath { path, .. }) if path.is_ident("u64") => {
quote! {#item_index => Ok(Item::U64(self.#item_name)),}
}
Type::Path(TypePath { path, .. }) if path.is_ident("i64") => {
quote! {#item_index => Ok(Item::I64(self.#item_name)),}
}
_ => panic!("{:?} uses unsupported type {:?}", item_name, item_type),
};
matches.extend(ts);
item_index += 1;
}
let output = quote! {
#[automatically_derived]
impl #model_name {
pub fn get(&self, index: u16) -> Result<Item, Error> {
match index {
#matches
_ => Err(Error::BadIndex),
}
}
}
};
Ok(output)
}
I'm not going to give a complete answer, as my proc-macro skills are non-existent, but I don't think the macro part is tricky once you've got the structure right.
The way I'd approach this is to define a trait that all the types implement. I'm going to call it Indexible (which is probably a bad name). The point of the trait is to provide the get function and a count of all the fields contained within the object.
trait Indexible {
fn nfields(&self) -> usize;
fn get(&self, idx:usize) -> Result<Item>;
}
I'm using fn nfields(&self) -> usize rather than fn nfields() -> usize as taking &self means I can use this on vectors and slices and probably some other types (It also makes the following code slightly neater).
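To illustrate that point, here is a sketch of a blanket impl for Vec<T> (assuming the Indexible trait above and the same Result<Item> convention as the other impls); it only works because nfields can look at the runtime length through &self:
impl<T: Indexible> Indexible for Vec<T> {
    fn nfields(&self) -> usize {
        // The total field count depends on how many elements the vector holds.
        self.iter().map(|e| e.nfields()).sum()
    }
    fn get(&self, idx: usize) -> Result<Item> {
        let mut idx = idx;
        for e in self {
            if idx < e.nfields() {
                return e.get(idx);
            }
            idx -= e.nfields();
        }
        Err(())
    }
}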
Next you need to implement this trait for your base types:
impl Indexible for u8 {
fn nfields(&self) -> usize { 1 }
fn get(&self, idx:usize) -> Result<Item> { Ok(Item::U8(*self)) }
}
...
Generating all of these is probably a good use for a macro (though a declarative one, not the derive proc macro you're asking about), as shown below.
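For example, a small declarative macro could stamp out all of the leaf impls in one go (a sketch; the macro name is made up and the variants follow the Item enum from the question):
macro_rules! impl_indexible_leaf {
    ($($ty:ty => $variant:ident),* $(,)?) => {
        $(
            impl Indexible for $ty {
                fn nfields(&self) -> usize { 1 }
                fn get(&self, _idx: usize) -> Result<Item> {
                    Ok(Item::$variant(*self))
                }
            }
        )*
    };
}

impl_indexible_leaf!(
    u8 => U8, i8 => I8, u16 => U16, i16 => I16,
    u32 => U32, i32 => I32, u64 => U64, i64 => I64,
);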
Next, you need to implement the trait for your own types. My implementations look like this:
impl Indexible for NestedData {
fn nfields(&self) -> usize {
self.a.nfields() +
self.b.nfields()
}
fn get(&self, idx:usize) -> Result<Item> {
let idx = idx;
// member a
if idx < self.a.nfields() {
return self.a.get(idx)
}
let idx = idx - self.a.nfields();
// member b
if idx < self.b.nfields() {
return self.b.get(idx)
}
Err(())
}
}
impl Indexible for Data {
fn nfields(&self) -> usize {
self.a.nfields() +
self.b.nfields() +
self.c.nfields()
}
fn get(&self, idx:usize) -> Result<Item> {
let idx = idx;
if idx < self.a.nfields() {
return self.a.get(idx)
}
let idx = idx - self.a.nfields();
if idx < self.b.nfields() {
return self.b.get(idx)
}
let idx = idx - self.b.nfields();
if idx < self.c.nfields() {
return self.c.get(idx)
}
Err(())
}
}
You can see a complete running version in the playground.
These look like they can be easily generated by a macro.
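To give an idea of what the derive side could look like, here is an untested sketch that mirrors the structure of your existing impl_get_item and assumes every field type itself implements Indexible:
use proc_macro2::TokenStream;
use quote::quote;
use syn::{Data, DataStruct, DeriveInput, Fields};

pub fn impl_indexible(input: DeriveInput) -> syn::Result<TokenStream> {
    let name = input.ident;
    let fields = match input.data {
        Data::Struct(DataStruct {
            fields: Fields::Named(fields),
            ..
        }) => fields.named,
        _ => panic!("Indexible can only be derived for structs with named fields"),
    };
    // Collect the field identifiers once; the same list drives both methods.
    let idents: Vec<_> = fields.iter().map(|f| f.ident.clone().unwrap()).collect();
    let output = quote! {
        #[automatically_derived]
        impl Indexible for #name {
            fn nfields(&self) -> usize {
                0 #( + self.#idents.nfields() )*
            }
            fn get(&self, idx: usize) -> Result<Item> {
                let mut idx = idx;
                #(
                    if idx < self.#idents.nfields() {
                        return self.#idents.get(idx);
                    }
                    idx -= self.#idents.nfields();
                )*
                Err(())
            }
        }
    };
    Ok(output)
}
Because the generated code only ever calls nfields and get on the fields, nesting falls out for free: a field whose type is another derived struct simply forwards into its own impl.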
If you want slightly better error messages for types that won't work, you can call the trait method explicitly on each member, e.g. Indexible::get(&self.a, ..).
It might seem that this approach would not be particularly efficient, but the compiler is able to determine that most of these pieces are constant and inline them. For example, using Rust 1.51 with -C opt-level=3, the following function
pub fn sum(data: &Data) -> usize {
let mut sum = 0;
for i in 0..data.nfields() {
sum += match data.get(i) {
Err(_) => panic!(),
Ok(Item::U8(v)) => v as usize,
Ok(Item::U16(v)) => v as usize,
Ok(Item::I32(v)) => v as usize,
Ok(Item::U64(v)) => v as usize,
_ => panic!(),
}
}
sum
}
compiles to just this
example::sum:
movsxd rax, dword ptr [rdi + 8]
movsxd rcx, dword ptr [rdi + 12]
movzx edx, word ptr [rdi + 16]
add rax, qword ptr [rdi]
add rax, rdx
add rax, rcx
ret
You can see this in the compiler explorer

rust bindgen unaligned tcache chunk error

I am working on bindings for the avahi lib in Rust, and I am encountering a runtime error:
malloc(): unaligned tcache chunk detected
the code at fault:
pub fn register_service(
&mut self,
name: String,
svc_type: String,
port: u16,
txt: &[String],
) -> Result<(), AvahiError> {
let group = match self.group {
Some(group) => group,
None => {
let group = unsafe {
ffi::avahi_entry_group_new(
self.client_inner,
Some(group_callback),
std::ptr::null_mut() as *mut c_void,
)
};
if group.is_null() {
return Err(AvahiError::GroupCreateError);
}
self.group.replace(group);
return self.register_service(name, svc_type, port, txt);
}
};
// avahi_entry_group_is_empty or any other function that uses group causes this error
if unsafe { ffi::avahi_entry_group_is_empty(group) != 0 } {
let name = CString::new(name).unwrap();
let svc_type = CString::new(svc_type).unwrap();
let ret = unsafe {
ffi::avahi_entry_group_add_service(
group,
ffi::AVAHI_IF_UNSPEC,
ffi::AVAHI_PROTO_UNSPEC,
0,
name.as_ptr(),
svc_type.as_ptr(),
std::ptr::null_mut(),
std::ptr::null_mut(),
port,
std::ptr::null_mut() as *mut i8,
)
};
if ret < 0 {
let msg = unsafe { ffi::avahi_strerror(ret) };
return Err(AvahiError::CreateService(unsafe {
CString::from_raw(msg as *mut i8)
.to_str()
.unwrap()
.to_owned()
}));
}
if unsafe { ffi::avahi_entry_group_commit(group) == 0 } {
return Err(AvahiError::EntryGroupCommit);
}
}
Ok(())
}
I am using this example as a reference, and I got the equivalent working in C, so I think the error must be coming from the bindings. I am also not sure I understand what this error means.
What am I doing wrong?
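Not necessarily the cause of this particular crash, but one thing in the error path is definitely undefined behavior: CString::from_raw must only be called on a pointer that originally came from CString::into_raw, because it takes ownership and frees the buffer through Rust's allocator when dropped. avahi_strerror returns a pointer to a string owned by the library, so it should be borrowed rather than adopted. A sketch of that branch (assuming the same ffi module and AvahiError type as above):
use std::ffi::CStr;

if ret < 0 {
    // Borrow the library-owned error string instead of taking ownership of it;
    // letting CString free it would corrupt the allocator's bookkeeping.
    let msg = unsafe { CStr::from_ptr(ffi::avahi_strerror(ret)) }
        .to_string_lossy()
        .into_owned();
    return Err(AvahiError::CreateService(msg));
}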

color_quant::NeuQuant compiled to WebAssembly outputs zero values

I am trying to load an image in the browser and use the NeuQuant algorithm to quantize the image buffer in Rust via WebAssembly. However, the NeuQuant output contains zero values, regardless of which PNG I feed it.
I expose two Rust methods to WASM:
alloc, for allocating a byte buffer
read_img, which reads and processes the image buffer
I know that I get zero values because I imported a JavaScript method called log_nr for logging plain u8 numbers. The buffer itself seems to contain valid pixel values.
extern crate color_quant;
extern crate image;
use color_quant::NeuQuant;
use image::{DynamicImage, GenericImage, Pixel, Rgb};
use std::collections::BTreeMap;
use std::mem;
use std::os::raw::c_void;
static NQ_SAMPLE_FACTION: i32 = 10;
static NQ_PALETTE_SIZE: usize = 256;
extern "C" {
fn log(s: &str, len: usize);
fn log_nr(nr: u8);
}
fn get_pixels(img: DynamicImage) -> Vec<u8> {
let mut pixels = Vec::new();
for (_, _, px) in img.pixels() {
let rgba = px.to_rgba();
for channel in px.channels() {
pixels.push(*channel);
}
}
pixels
}
#[no_mangle]
pub extern "C" fn alloc(size: usize) -> *mut c_void {
let mut buf = Vec::with_capacity(size);
let ptr = buf.as_mut_ptr();
mem::forget(buf);
return ptr as *mut c_void;
}
fn process_img(img: DynamicImage) {
let pixels: Vec<u8> = get_pixels(img);
let quantized = NeuQuant::new(NQ_SAMPLE_FACTION, NQ_PALETTE_SIZE, &pixels);
let q = quantized.color_map_rgb();
for c in &q {
unsafe {
log_nr(*c);
}
}
}
#[no_mangle]
pub extern "C" fn read_img(buff_ptr: *mut u8, buff_len: usize) {
let mut img: Vec<u8> = unsafe { Vec::from_raw_parts(buff_ptr, buff_len, buff_len) };
return match image::load_from_memory(&img) {
Ok(img) => {
process_img(img);
}
Err(err) => {
let err_msg: String = err.to_string().to_owned();
let mut ns: String = "[load_from_memory] ".to_owned();
ns.push_str(&err_msg);
unsafe {
log(&ns, ns.len());
}
}
};
}
fn main() {
println!("Hello from rust 2");
}
The JavaScript code is the following:
run('sample.png');
function run(img) {
return compile().then(m => {
return loadImgIntoMem(img, m.instance.exports.memory, m.instance.exports.alloc).then(r => {
return m.instance.exports.read_img(r.imgPtr, r.len);
});
})
}
function compile(wasmFile = 'distil_wasm.gc.wasm') {
return fetch(wasmFile)
.then(r => r.arrayBuffer())
.then(r => {
let module = new WebAssembly.Module(r);
let importObject = {}
for (let imp of WebAssembly.Module.imports(module)) {
if (typeof importObject[imp.module] === "undefined")
importObject[imp.module] = {};
switch (imp.kind) {
case "function": importObject[imp.module][imp.name] = () => {}; break;
case "table": importObject[imp.module][imp.name] = new WebAssembly.Table({ initial: 256, maximum: 256, element: "anyfunc" }); break;
case "memory": importObject[imp.module][imp.name] = new WebAssembly.Memory({ initial: 256 }); break;
case "global": importObject[imp.module][imp.name] = 0; break;
}
}
importObject.env = Object.assign({}, importObject.env, {
log: (ptr, len) => console.log(ptrToStr(ptr, len)),
log_nr: (nr) => console.log(nr),
});
return WebAssembly.instantiate(r, importObject);
});
}
function loadImgIntoMemEmscripten(img) {
return new Promise(resolve => {
fetch(img)
.then(r => r.arrayBuffer())
.then(buff => {
const imgPtr = Module._malloc(buff.byteLength);
const imgHeap = new Uint8Array(Module.HEAPU8.buffer, imgPtr, buff.byteLength);
imgHeap.set(new Uint8Array(buff));
resolve({ imgPtr });
});
});
}

Polymorphism in Rust and trait references (trait objects?)

I'm writing a process memory scanner with a console prompt interface in Rust.
I need scanner types such as a WinAPI scanner or a ring0 driver scanner, so I'm trying to implement polymorphism.
I have the following construction at the moment:
pub trait Scanner {
fn attach(&mut self, pid: u32) -> bool;
fn detach(&mut self);
}
pub struct WinapiScanner {
pid: u32,
hprocess: HANDLE,
addresses: Vec<usize>
}
impl WinapiScanner {
pub fn new() -> WinapiScanner {
WinapiScanner {
pid: 0,
hprocess: 0 as HANDLE,
addresses: Vec::<usize>::new()
}
}
}
impl Scanner for WinapiScanner {
fn attach(&mut self, pid: u32) -> bool {
let handle = unsafe { OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid) };
if handle == 0 as HANDLE {
self.pid = pid;
self.hprocess = handle;
true
} else {
false
}
}
fn detach(&mut self) {
unsafe { CloseHandle(self.hprocess) };
self.pid = 0;
self.hprocess = 0 as HANDLE;
self.addresses.clear();
}
}
In the future I'll have some more scanner types besides WinapiScanner, so, if I understand correctly, I should use a trait reference (&Scanner) to implement polymorphism. I'm trying to create the Scanner object like this (note the comments):
enum ScannerType {
Winapi
}
pub fn start() {
let mut scanner: Option<&mut Scanner> = None;
let mut scanner_type = ScannerType::Winapi;
loop {
let line = prompt();
let tokens: Vec<&str> = line.split_whitespace().collect();
match tokens[0] {
// commands
"scanner" => {
if tokens.len() != 2 {
println!("\"scanner\" command takes 1 argument")
} else {
match tokens[1] {
"list" => {
println!("Available scanners: winapi");
},
"winapi" => {
scanner_type = ScannerType::Winapi;
println!("Scanner type set to: winapi");
},
x => {
println!("Unknown scanner type: {}", x);
}
}
}
},
"attach" => {
if tokens.len() > 1 {
match tokens[1].parse::<u32>() {
Ok(pid) => {
scanner = match scanner_type {
// ----------------------
// Problem goes here.
// Object, created by WinapiScanner::new() constructor
// doesn't live long enough to borrow it here
ScannerType::Winapi => Some(&mut WinapiScanner::new())
// ----------------------
}
}
Err(_) => {
println!("Wrong pid");
}
}
}
},
x => println!("Unknown command: {}", x)
}
}
}
fn prompt() -> String {
use std::io::Write;
use std::io::BufRead;
let stdout = io::stdout();
let mut lock = stdout.lock();
let _ = lock.write(">> ".as_bytes());
let _ = lock.flush();
let stdin = io::stdin();
let mut lock = stdin.lock();
let mut buf = String::new();
let _ = lock.read_line(&mut buf);
String::from(buf.trim())
}
It's not a full program; I've pasted important parts only.
What am I doing wrong and how do I implement what I want in Rust?
Trait objects must be used behind a pointer. But references are not the only kind of pointer; a Box is a pointer too!
let mut scanner: Option<Box<Scanner>> = None;
scanner = match scanner_type {
ScannerType::Winapi => Some(Box::new(WinapiScanner::new()))
}
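Spelled out with current syntax (a sketch using the explicit dyn keyword; the attach call is only there to show how the boxed trait object is used afterwards):
let mut scanner: Option<Box<dyn Scanner>> = None;

// Inside the "attach" arm, once the pid has been parsed:
scanner = Some(match scanner_type {
    ScannerType::Winapi => Box::new(WinapiScanner::new()),
});
if let Some(s) = scanner.as_mut() {
    if !s.attach(pid) {
        println!("Failed to attach to pid {}", pid);
    }
}
The boxed value lives on the heap and is owned by the Option, so there is no temporary that goes out of scope at the end of the match arm, which is what the borrow checker was complaining about in the original code.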
