I need to build a byte array that represents commands to a device. It may look something like this:
let cmds = [
    0x01,             // cmd 1
    0x02,             // cmd 2
    0x03, 0xaa, 0xbb, // cmd 3
    0x04,             // cmd 4
    0x05, 0xaa,       // cmd 5
];
Some commands take parameters, some don't. Some parameters require calculations. Each command is fixed in size, so it's known at compile time how big the array needs to be.
It'd be nice to construct it like this, where I abstract groups of bytes into commands:
let cmds = [
    cmd1(),
    cmd2(),
    cmd3(0, true, [3, 4]),
    cmd4(),
    cmd5(0xaa)
];
I haven't found any way to do this with functions or macros. I am in no_std, so I am not using collections.
How to achieve something resembling this in Rust?
You can have each command function return an array or Vec of bytes:
fn cmd1() -> [u8; 1] { [0x01] }
fn cmd2() -> [u8; 1] { [0x02] }
fn cmd3(_a: u8, _b: bool, _c: [u8; 2]) -> [u8; 3] { [0x03, 0xaa, 0xbb] }
fn cmd4() -> [u8; 1] { [0x04] }
fn cmd5(a: u8) -> Vec<u8> { vec![0x05, a] }
And then build your commands like so:
let cmds = [
    &cmd1() as &[u8],
    &cmd2(),
    &cmd3(0, true, [3, 4]),
    &cmd4(),
    &cmd5(0xaa),
];
This builds an array of slices of bytes. To get the full stream of bytes, use flatten:
println!("{:?}", cmds);
println!("{:?}", cmds.iter().copied().flatten().collect::<Vec<_>>());
[[1], [2], [3, 170, 187], [4], [5, 170]]
[1, 2, 3, 170, 187, 4, 5, 170]
You can make this more elaborate by returning some types that implement a Command trait and collecting them into an array of trait objects, but I'll leave that up to OP.
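For the curious, here is a minimal sketch of that trait-object approach (the Command trait and its bytes method are names I'm inventing for illustration, not anything from OP's code):

trait Command {
    fn bytes(&self) -> &[u8];
}

struct Cmd1;
impl Command for Cmd1 {
    fn bytes(&self) -> &[u8] { &[0x01] }
}

struct Cmd5 { payload: [u8; 2] }
impl Command for Cmd5 {
    fn bytes(&self) -> &[u8] { &self.payload }
}

fn main() {
    let cmd1 = Cmd1;
    let cmd5 = Cmd5 { payload: [0x05, 0xaa] };
    // An array of trait objects; iterate over it and concatenate the byte slices as needed.
    let cmds: [&dyn Command; 2] = [&cmd1, &cmd5];
    for c in cmds {
        print!("{:?} ", c.bytes());
    }
    // Prints: [1] [5, 170]
}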
Edit: Here's a macro that can build the array directly, using the arrayvec crate:
use arrayvec::ArrayVec;
fn cmd1() -> [u8; 1] { [0x01] }
fn cmd2() -> [u8; 1] { [0x02] }
fn cmd3(_a: u8, _b: bool, _c: [u8; 2]) -> [u8; 3] { [0x03, 0xaa, 0xbb] }
fn cmd4() -> [u8; 1] { [0x04] }
fn cmd5(a: u8) -> [u8; 2] { [0x05, a] }
macro_rules! combine {
    ($($cmd:expr),+ $(,)?) => {
        {
            let mut vec = ArrayVec::new();
            $(vec.try_extend_from_slice(&$cmd).unwrap();)*
            vec.into_inner().unwrap()
        }
    }
}
fn main() {
    let cmds: [u8; 8] = combine![
        cmd1(),
        cmd2(),
        cmd3(0, true, [3, 4]),
        cmd4(),
        cmd5(0xaa),
    ];
    println!("{:?}", cmds);
}
If you're worried about performance, this example compiles the array into a single instruction:
movabs rax, -6195540508320529919 // equal to [0x01, 0x02, 0x03, 0xAA, 0xBB, 0x04, 0x05, 0xAA]
See it on the playground. It's limited to types that are Copy, the length of the array must be supplied, and it will panic at runtime if that size doesn't match the combined size of the results.
You can do it with no external dependencies if you do it as a macro:
macro_rules! cmd_array {
    (# [ $($acc:tt)* ]) => { [ $($acc)* ] };
    (# [ $($acc:tt)* ] cmd1(), $($tail:tt)*) => { cmd_array!{# [ $($acc)* 0x01, ] $($tail)* } };
    (# [ $($acc:tt)* ] cmd2(), $($tail:tt)*) => { cmd_array!{# [ $($acc)* 0x02, ] $($tail)* } };
    (# [ $($acc:tt)* ] cmd3 ($a:expr, $b:expr, $c:expr), $($tail:tt)*) => { cmd_array!{# [ $($acc)* 0x03, 0xaa, 0xbb, ] $($tail)* } };
    (# [ $($acc:tt)* ] cmd4(), $($tail:tt)*) => { cmd_array!{# [ $($acc)* 0x04, ] $($tail)* } };
    (# [ $($acc:tt)* ] cmd5 ($a:expr), $($tail:tt)*) => { cmd_array!{# [ $($acc)* 0x05, $a, ] $($tail)* } };
    ($($tail:tt)*) => {
        cmd_array!(# [] $($tail)*)
    }
}
fn main() {
    let cmds: [u8; 8] = cmd_array![
        cmd1(),
        cmd2(),
        cmd3(0, true, [3, 4]),
        cmd4(),
        cmd5(0xaa),
    ];
    println!("{:?}", cmds);
}
This macro is built using an incremental TT muncher to parse the commands, combined with push-down accumulation to build the final array.
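Roughly, the expansion proceeds like this (an illustrative trace of the macro above, not literal compiler output):

// cmd_array![cmd1(), cmd3(0, true, [3, 4]), cmd5(0xaa),]
// The catch-all rule starts the muncher with an empty accumulator:
//     cmd_array!{# [] cmd1(), cmd3(0, true, [3, 4]), cmd5(0xaa), }
// Each step consumes one command and pushes its bytes onto the accumulator:
//     cmd_array!{# [0x01, ] cmd3(0, true, [3, 4]), cmd5(0xaa), }
//     cmd_array!{# [0x01, 0x03, 0xaa, 0xbb, ] cmd5(0xaa), }
//     cmd_array!{# [0x01, 0x03, 0xaa, 0xbb, 0x05, 0xaa, ] }
// With no commands left, the first rule emits the accumulated bytes as an array literal:
//     [ 0x01, 0x03, 0xaa, 0xbb, 0x05, 0xaa, ]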
Related
I'm trying to load a shader onto an AMD video card. After all the buffers are created, I try to create a new compute pipeline. While debugging with print statements, I found that "Finished Creating the compute pipeline" is never printed. When I run it with `cargo run --release` it prints "Creating pipeline with shader", but after a few seconds it freezes my whole computer and I have to turn it off and back on again...
My Vulkano version is: 0.32.1;
My vulkano-shaders version is: 0.32.1;
My Video Card is: AMD RX570 4GB
Vulkano Physical device properties:
buffer_image_granularity: 64,
compute_units_per_shader_array: Some(
8,
),
conformance_version: Some(
1.2.0,
),
Cargo.toml:
[package]
name = "vulkano_matrix"
version = "0.1.0"
edition = "2021"
[dependencies]
vulkano = "0.32.1"
vulkano-shaders = "0.32.0"
rand = "0.8.4"
nalgebra="0.31.4"
colored = "2.0.0"
bytemuck = "1.12.3"
// main.rs
extern crate nalgebra as na;
use bytemuck::{Pod, Zeroable};
use colored::Colorize;
use na::{dmatrix, DMatrix};
use std::{
io::{stdin, stdout, Write},
sync::Arc,
time::Instant,
};
use vulkano::{
buffer::{BufferUsage, CpuAccessibleBuffer, DeviceLocalBuffer},
command_buffer::{
allocator::{CommandBufferAllocator, StandardCommandBufferAllocator},
AutoCommandBufferBuilder, PrimaryAutoCommandBuffer, PrimaryCommandBufferAbstract,
},
descriptor_set::{
allocator::StandardDescriptorSetAllocator, PersistentDescriptorSet, WriteDescriptorSet,
},
device::{
physical::PhysicalDevice, Device, DeviceCreateInfo, DeviceExtensions, Features,
QueueCreateInfo, QueueFlags,
},
instance::{Instance, InstanceCreateInfo},
memory::allocator::{MemoryAllocator, StandardMemoryAllocator},
pipeline::Pipeline,
pipeline::{ComputePipeline, PipelineBindPoint},
sync::GpuFuture,
VulkanLibrary,
};
#[derive(Clone, Copy)]
pub enum Padding {
None,
Fixed(usize, usize),
Same,
}
#[repr(C)]
#[derive(Default, Copy, Clone, Debug, Zeroable, Pod)]
struct Dimension {
pub rows: usize,
pub columns: usize,
pub channels: usize,
}
impl Dimension {
pub fn from_matrix<T>(mat: &DMatrix<T>) -> Self {
let shape = mat.shape();
Self {
rows: shape.0,
columns: shape.1,
channels: 1,
}
}
}
#[repr(C)]
#[derive(Default, Copy, Clone, Debug, Zeroable, Pod)]
struct BufferDimensions {
pub input_matrix: Dimension,
pub kernel: Dimension,
pub output_matrix: Dimension,
}
#[repr(C)]
#[derive(Default, Copy, Clone, Debug, Zeroable, Pod)]
struct ConvolutionOptions {
pub padding: [i32; 2],
pub stride: u32,
}
fn input(question: impl Into<String>) -> String {
let mut result = "".to_string();
print!("{} ", question.into().bold().cyan());
stdout().flush().expect("Could not flush stdout");
stdin()
.read_line(&mut result)
.expect("Could not read stdin");
result
}
fn main() {
let library = VulkanLibrary::new().expect("Could not find vulkan.dll");
let instance =
Instance::new(library, InstanceCreateInfo::default()).expect("Failed to Create Instance");
println!("Available GPUs:");
let physical_devices = instance
.enumerate_physical_devices()
.expect("Could not enumerate the physical devices")
.enumerate()
.map(|(i, physical)| {
println!(
"[{}]: \"{}\"; TYPE: \"{:?}\"; API_VERSION: \"{}\"",
i.to_string().bold().bright_magenta(),
physical.properties().device_name.to_string().bold().green(),
physical.properties().device_type,
physical.api_version()
);
physical
})
.collect::<Vec<Arc<PhysicalDevice>>>();
let physical_index = input(format!("Type the chosen [{}]:", "index".bright_magenta()))
.replace("\n", "")
.parse::<usize>()
.expect("Please type a number.");
let physical = physical_devices[physical_index].clone();
println!(
"Using {}; TYPE: \"{:?}\"; \n\n {:?} \n\n {:#?}",
physical.properties().device_name.to_string().bold().green(),
physical.properties().device_type,
physical.api_version(),
physical.properties()
);
return;
let queue_family_index = physical
.queue_family_properties()
.iter()
.position(|q| {
q.queue_flags.intersects(&QueueFlags {
compute: true,
..QueueFlags::empty()
})
})
.unwrap() as u32;
let (device, mut queues) = Device::new(
physical,
DeviceCreateInfo {
enabled_features: Features::empty(),
queue_create_infos: vec![QueueCreateInfo {
queue_family_index,
..Default::default()
}],
..Default::default()
},
)
.expect("Failed to create device");
let queue = queues.next().unwrap();
let memory_allocator = StandardMemoryAllocator::new_default(device.clone());
let descriptor_set_allocator = StandardDescriptorSetAllocator::new(device.clone());
let command_buffer_allocator =
StandardCommandBufferAllocator::new(device.clone(), Default::default());
let mut builder = AutoCommandBufferBuilder::primary(
&command_buffer_allocator,
queue.queue_family_index(),
vulkano::command_buffer::CommandBufferUsage::OneTimeSubmit,
)
.unwrap();
let stride = 1;
let get_result_shape = |input_shape: usize, padding: usize, ker_shape: usize| {
(input_shape + 2 * padding - ker_shape) / stride + 1
};
let padding = Padding::Same;
let input_data = dmatrix![1.0f32, 2., 3.; 4., 5., 6.; 7., 8., 9.];
let kernel_data = dmatrix![11.0f32, 19.; 31., 55.];
let input_shape = Dimension::from_matrix(&input_data);
let kernel_shape = Dimension::from_matrix(&kernel_data);
let padding = match padding {
Padding::None => (0, 0),
Padding::Fixed(x_p, y_p) => (x_p, y_p),
Padding::Same => {
let get_padding = |input_shape: usize, ker_shape: usize| {
(((stride - 1) as i64 * input_shape as i64 - stride as i64 + ker_shape as i64)
as f64
/ 2.0)
.ceil() as usize
};
(
/* rows */
get_padding(input_shape.rows, kernel_shape.rows),
/* columns */
get_padding(input_shape.columns, kernel_shape.columns),
)
}
};
let dimensions = BufferDimensions {
input_matrix: input_shape,
kernel: kernel_shape,
output_matrix: Dimension {
rows: get_result_shape(input_shape.rows, padding.0, kernel_shape.rows),
columns: get_result_shape(input_shape.columns, padding.1, kernel_shape.columns),
channels: 1,
},
};
let options = ConvolutionOptions {
padding: [padding.0 as i32, padding.1 as i32],
stride: stride as u32,
};
let dimensions_buffer = DeviceLocalBuffer::from_data(
&memory_allocator,
dimensions,
BufferUsage {
uniform_buffer: true,
..BufferUsage::empty()
},
&mut builder,
)
.expect("Failed to create uniform buffer.");
let options_buffer = DeviceLocalBuffer::from_data(
&memory_allocator,
options,
BufferUsage {
uniform_buffer: true,
..BufferUsage::empty()
},
&mut builder,
)
.expect("Failed to create uniform buffer.");
println!(
"{:?} {:?} {:?} {:?}",
input_data, dimensions, options, kernel_data
);
let input_buffer = DeviceLocalBuffer::from_iter(
&memory_allocator,
input_data.data.as_vec().to_owned(),
BufferUsage {
uniform_buffer: true,
..BufferUsage::empty()
},
&mut builder,
)
.expect("Failed to create uniform buffer.");
let kernel_buffer = DeviceLocalBuffer::from_iter(
&memory_allocator,
kernel_data.data.as_vec().to_owned(),
BufferUsage {
uniform_buffer: true,
..BufferUsage::empty()
},
&mut builder,
)
.expect("Failed to create uniform buffer.");
let output_buffer = CpuAccessibleBuffer::from_iter(
&memory_allocator,
BufferUsage {
storage_buffer: true,
..BufferUsage::empty()
},
false,
[0..(dimensions.output_matrix.channels
* dimensions.output_matrix.rows
* dimensions.output_matrix.columns)]
.map(|__| 0.0f32)
.to_owned(),
)
.expect("Failed to create storage buffer.");
println!("Loading shader");
let cs = cs::load(device.clone()).unwrap();
println!("Creating pipeline with shader"); // This line prints just fine
let compute_pipeline = ComputePipeline::new(
device.clone(),
cs.entry_point("main").unwrap(),
&(),
None,
|_| {},
)
.expect("Failed to create compute shader");
println!("Finished Creating the compute pipeline"); // THIS LINE NEVER GETS RUN
}
pub mod cs {
use vulkano_shaders::shader;
shader! {
ty: "compute",
path: "./matrix_convolution.glsl"
}
}
The shader is:
#version 450
#pragma shader_stage(compute)
layout(local_size_x=32, local_size_y=32, local_size_z=16) in;
struct Dimension {
uint rows;
uint columns;
uint channels;
};
layout(set=0, binding=0) buffer Dimensions {
Dimension input_matrix;
Dimension kernel;
Dimension output_matrix;
} dims_buf;
layout(set=0, binding=1) buffer readonly InputMatrix {
float[] input_matrix;
};
layout(set=0, binding=2) buffer readonly Kernel {
float[] kernel;
};
layout(set=0, binding=3) buffer writeonly OutputMatrix {
float[] output_matrix;
};
layout(set=0, binding=4) buffer Options {
ivec2 padding;
uint stride;
} options_buf;
void main() {
const uint raw_row = gl_GlobalInvocationID.x;
const uint raw_column = gl_GlobalInvocationID.y;
const uint raw_channel = gl_GlobalInvocationID.z;
}
I tried to run similar programs with different shaders and it worked just fine.
It turns out that the total workgroup size must not exceed the device limit reported in physical.properties().max_compute_work_group_invocations.
In other words, local_size_x * local_size_y * local_size_z must be less than or equal to max_compute_work_group_invocations.
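The shader above declares local_size_x=32, local_size_y=32, local_size_z=16, i.e. 32 * 32 * 16 = 16384 invocations per workgroup, which is far more than the limit most desktop GPUs report (commonly 1024). A sketch of a sanity check before building the pipeline, reusing the physical variable from the question's code and assuming the vulkano 0.32 property is a plain u32:

// Verify the shader's local size against the device limit before creating the pipeline.
let limit = physical.properties().max_compute_work_group_invocations;
let local_invocations: u32 = 32 * 32 * 16; // from layout(local_size_x=32, local_size_y=32, local_size_z=16)
assert!(
    local_invocations <= limit,
    "workgroup needs {} invocations but the device allows at most {}",
    local_invocations,
    limit
);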
code first:
use std::collections::HashMap;
macro_rules! arr {
    ([$($t:expr=>[$($c:expr),*]),*]) => {
        vec![
            $({
                let mut m = HashMap::new();
                m.insert($t, vec![$($c),*]);
                m
            }),*
        ]
    };
}
fn main() {
    let a = arr!([
        "A"=>[1,2,3],
        "B"=>[3,4]
    ]);
    println!("{:?}", a);
    //print: [{"A": [1, 2, 3]}, {"B": [3, 4]}]
}
I have the above macro to generate a Vec containing several HashMaps, where each HashMap value is itself a Vec:
{"A": [1, 2, 3]} => value vec length: 3,
{"B": [3, 4]} => value vec length: 2,
I want all the HashMap values to have the same length. How can I write the macro to enforce this?
You can change the macro so that it expands to a block (a second set of {} wrapping the macro body), which lets you set helper variables and then do a second pass over your vector, resizing anything that is smaller than the largest array.
In this case I've resized the arrays with the default value of the type to keep it simple. You may wish to wrap the data in Some().
This:
use std::cmp;
use std::collections::HashMap;
use std::default::Default;
macro_rules! arr {
    ([$($t:expr=>[$($c:expr),*]),*]) => {{
        let mut max = 0;
        let mut result = vec![
            $({
                let mut m = HashMap::new();
                m.insert($t, vec![$($c),*]);
                // Simply unwrap here as we know we inserted at this key above
                max = cmp::max(max, m.get($t).unwrap().len());
                m
            }),*
        ];
        for m in result.iter_mut() {
            for v in m.values_mut() {
                if v.len() < max {
                    v.resize_with(max, Default::default);
                }
            }
        }
        result
    }};
}
fn main() {
    let a = arr!([
        "A"=>[1,2,3],
        "B"=>[3,4]
    ]);
    println!("{:?}", a);
}
Yields:
[{"A": [1, 2, 3]}, {"B": [3, 4, 0]}]
Is there a way in Rust to initialize the first n elements of an array manually, and specify a default value to be used for the rest?
Specifically, when initializing structs, we can specify some fields, and use .. to initialize the remaining fields from another struct, e.g.:
let foo = Foo {
x: 1,
y: 2,
..Default::default()
};
Is there a similar mechanism for initializing part of an array manually? e.g.
let arr: [i32; 5] = [1, 2, ..3];
to get [1, 2, 3, 3, 3]?
Edit: I realized this can be done on stable. For the original answer, see below.
I had to juggle with the compiler so it will be able to infer the type of the array, but it works:
// A workaround on the same method on `MaybeUninit` being unstable.
// Copy-paste from https://doc.rust-lang.org/stable/src/core/mem/maybe_uninit.rs.html#943-953.
pub unsafe fn maybe_uninit_array_assume_init<T, const N: usize>(
    array: [core::mem::MaybeUninit<T>; N],
) -> [T; N] {
    // SAFETY:
    // * The caller guarantees that all elements of the array are initialized
    // * `MaybeUninit<T>` and T are guaranteed to have the same layout
    // * `MaybeUninit` does not drop, so there are no double-frees
    // And thus the conversion is safe
    (&array as *const _ as *const [T; N]).read()
}

macro_rules! array_with_default {
    (#count) => { 0usize };
    (#count $e:expr, $($rest:tt)*) => { 1usize + array_with_default!(#count $($rest)*) };
    [$($e:expr),* ; $default:expr; $default_size:expr] => {{
        // There is no hygiene for items, so we use unique names here.
        #[allow(non_upper_case_globals)]
        const __array_with_default_EXPRS_LEN: usize = array_with_default!(#count $($e,)*);
        #[allow(non_upper_case_globals)]
        const __array_with_default_DEFAULT_SIZE: usize = $default_size;
        let mut result = unsafe { ::core::mem::MaybeUninit::<
            [::core::mem::MaybeUninit<_>; {
                __array_with_default_EXPRS_LEN + __array_with_default_DEFAULT_SIZE
            }],
        >::uninit().assume_init() };
        let mut dest = result.as_mut_ptr();
        $(
            let expr = $e;
            unsafe {
                ::core::ptr::write((*dest).as_mut_ptr(), expr);
                dest = dest.add(1);
            }
        )*
        for default_value in [$default; __array_with_default_DEFAULT_SIZE] {
            unsafe {
                ::core::ptr::write((*dest).as_mut_ptr(), default_value);
                dest = dest.add(1);
            }
        }
        unsafe { maybe_uninit_array_assume_init(result) }
    }};
}
Playground.
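For reference, usage along the lines of the question (with the helper and macro above in scope) looks like this; it should print [1, 2, 3, 3, 3]:

fn main() {
    // Two explicit elements followed by three copies of the default value 3.
    let arr: [i32; 5] = array_with_default![1, 2; 3; 3];
    println!("{:?}", arr);
}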
Based on the example from #Denys, here is a macro that works on nightly. Note that I had problems matching the .. syntax (though I'm not entirely sure that's impossible; just didn't put much time into that):
#![feature(generic_const_exprs)]
#![allow(incomplete_features)]
use std::mem::MaybeUninit;
pub fn concat_arrays<T, const N: usize, const M: usize>(a: [T; N], b: [T; M]) -> [T; N + M] {
    unsafe {
        let mut result = MaybeUninit::<[T; N + M]>::uninit();
        let dest = result.as_mut_ptr().cast::<[T; N]>();
        dest.write(a);
        let dest = dest.add(1).cast::<[T; M]>();
        dest.write(b);
        result.assume_init()
    }
}

macro_rules! array_with_default {
    [$($e:expr),* ; $default:expr; $default_size:expr] => {
        concat_arrays([$($e),*], [$default; $default_size])
    };
}

fn main() {
    dbg!(array_with_default![1, 2; 3; 7]);
}
Playground.
As another option, you can build a default-filled array and just modify the positions you require at runtime:
#![feature(explicit_generic_args_with_impl_trait)]
fn array_with_default_and_positions<T: Copy, const SIZE: usize>(
    default: T,
    init_values: impl IntoIterator<Item = (usize, T)>,
) -> [T; SIZE] {
    let mut res = [default; SIZE];
    for (i, e) in init_values.into_iter() {
        res[i] = e;
    }
    res
}
Playground
Notice the use of #![feature(explicit_generic_args_with_impl_trait)], which is nightly; it could be replaced by a slice since T and usize are Copy:
fn array_with_default_and_positions_v2<T: Copy, const SIZE: usize>(
    default: T,
    init_values: &[(usize, T)],
) -> [T; SIZE] {
    let mut res = [default; SIZE];
    for &(i, e) in init_values.into_iter() {
        res[i] = e;
    }
    res
}
I have come to you guys today with an error I cannot seem to fix sadly.
First let me explain what I'm trying to do. I am writing a little VM in Rust. I have just finished writing the bare minimum for the VM as you can tell by how unfinished the code is. I have made a system where you can load the program into a certain spot in memory so you can jump to that spot for subroutines later on. The run function in the Arcate struct runs whatever operation is on 0x0000.
As you can also see, I have gone with using a data bus just so I can use external memory devices, like creating another ArcateMem struct as a different "drive".
It seems I am getting a stack overflow but since Rust has the best stack overflow messages all I am getting is that it is in main.
Thanks again for your help. Sorry if it is a stupid mistake I'm a little new to Rust.
main.rs
#![allow(dead_code)]
type Instr = u8;
type Program = Vec<Instr>;
#[derive(PartialEq, Eq)]
enum Signals {
BusWrite,
BusRead,
ErrNoInstruction,
Success,
Halt,
}
struct Arcate {
mem : ArcateMem,
ci: Instr,
pc: i32,
acr: i32,
gp1: i32,
gp2: i32,
gp3: i32,
gp4: i32,
gp5: i32,
gp6: i32,
gp7: i32,
gp8: i32,
}
impl Arcate {
fn new(memory: ArcateMem) -> Self {
Arcate {
mem: memory,
ci: 0,
pc: 0,
acr: 0,
gp1: 0,
gp2: 0,
gp3: 0,
gp4: 0,
gp5: 0,
gp6: 0,
gp7: 0,
gp8: 0,
}
}
fn fetchi(&mut self) {
let i: Instr = ArcateBus {data: 0, addr: self.pc, todo: Signals::BusRead, mem: self.mem}.exec();
self.ci = i;
self.pc += 1;
}
fn fetcha(&mut self) -> Instr{
let i: Instr = ArcateBus {data: 0, addr: self.pc, todo: Signals::BusRead, mem: self.mem}.exec();
self.pc += 1;
i
}
fn getr(&mut self, reg: Instr) -> i32 {
match reg {
0x01 => { self.ci as i32},
0x02 => { self.pc },
0x03 => { self.acr },
0x04 => { self.gp1 },
0x05 => { self.gp2 },
0x06 => { self.gp3 },
0x07 => { self.gp4 },
0x08 => { self.gp5 },
0x09 => { self.gp6 },
0x0a => { self.gp7 },
0x0b => { self.gp8 },
_ => { panic!("Register not found {}", reg) }
}
}
fn setr(&mut self, reg: Instr, val: i32) {
match reg {
0x01 => { self.ci = val as u8 },
0x02 => { self.pc = val },
0x03 => { self.acr = val },
0x04 => { self.gp1 = val },
0x05 => { self.gp2 = val },
0x06 => { self.gp3 = val },
0x07 => { self.gp4 = val },
0x08 => { self.gp5 = val },
0x09 => { self.gp6 = val },
0x0a => { self.gp7 = val },
0x0b => { self.gp8 = val },
_ => { panic!("Register not found {}", reg) }
}
}
fn dbg(&mut self, regs: bool, mem: bool) {
if regs {
println!("ci : {}", self.ci );
println!("pc : {}", self.pc );
println!("acr: {}", self.acr);
println!("gp1: {}", self.gp1);
println!("gp2: {}", self.gp2);
println!("gp3: {}", self.gp3);
println!("gp4: {}", self.gp4);
println!("gp5: {}", self.gp5);
println!("gp6: {}", self.gp6);
println!("gp7: {}", self.gp7);
println!("gp8: {}\n", self.gp8);
}
}
fn exec(&mut self) -> Signals {
match self.ci {
// 01: 2 args. movir imm, reg
0x01 => {
let imm1 = self.fetcha();
let imm2 = self.fetcha();
let imm = ((imm1 as i32) << 8) + imm2 as i32;
let reg = self.fetcha();
self.setr(reg, imm);
Signals::Success
}
// 02: 2 args. movrr reg, reg
0x02 => {
let regf = self.fetcha();
let regt = self.fetcha();
let regv = self.getr(regf);
self.setr(regt, regv);
Signals::Success
}
// 03: 2 args. movrm reg, mem
// 04: 2 args. movim imm, mem
// 05: 2 args. addrr reg, reg
0x05 => {
let rego = self.fetcha();
let regt = self.fetcha();
let regov = self.getr(rego);
let regtv = self.getr(regt);
self.acr = regov + regtv;
Signals::Success
}
// 06: 2 args. addir imm, reg
// ff: 0 args. halt
0xff => {
Signals::Halt
}
_ => {
Signals::ErrNoInstruction
}
}
}
fn load(&mut self, prog: Program, loc: usize) {
let mut ld = 0;
for i in loc..prog.len() {
self.mem.mem[i] = prog[ld];
println!("{:2x} -> {:12x}", prog[ld], i);
ld += 1;
}
}
fn run(&mut self, dbgR: bool) {
let mut sig: Signals = Signals::Success;
self.dbg(dbgR, false);
while sig == Signals::Success {
self.fetchi();
sig = self.exec();
self.dbg(dbgR, false);
}
}
}
struct ArcateBus {
data: Instr,
addr: i32,
todo: Signals,
mem : ArcateMem,
}
impl ArcateBus {
fn exec(&mut self) -> Instr {
if self.todo == Signals::BusWrite {
self.mem.mem[self.addr as usize] = self.data;
0x00
} else if self.todo == Signals::BusRead {
self.mem.mem[self.addr as usize]
}else {
0xFF
}
}
}
#[derive(Copy, Clone)]
struct ArcateMem {
mem: [Instr; 0xFFFF*0xFFFF],
}
impl ArcateMem {
fn new() -> Self {
ArcateMem {
mem: [0; 0xFFFF*0xFFFF],
}
}
}
fn main() {
let mem: ArcateMem = ArcateMem::new();
let mut arc: Arcate = Arcate::new(mem);
let prog = vec![
0x01, 0xfe, 0xfe, 0x04,
0x01, 0x01, 0x01, 0x05,
0x05, 0x04, 0x05,
0xff,
];
arc.load(prog, 0x0000);
arc.run(true);
}
I think the issue is around ArcateMem, you are allocating 0xFFFF*0xFFFF Instr, which is about 4GB. With the way the code is written right now, you are allocating this on the stack, which generally can't support allocations this large. You'll probably want to use Box<> to allocate your memory in the heap, as it is more likely able to deal with allocations this large.
It's possible you could configure your operating system to increase your stack size, but I'd recommend using the heap.
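A sketch of what that could look like (not your exact types; note that Box::new([0; N]) can still build the array on the stack before moving it into the box, so going through Vec and into_boxed_slice is the usual workaround):

type Instr = u8;

struct ArcateMem {
    // ~4 GB of VM memory, allocated on the heap.
    mem: Box<[Instr]>,
}

impl ArcateMem {
    fn new() -> Self {
        ArcateMem {
            // vec! allocates directly on the heap, avoiding a giant stack temporary.
            mem: vec![0; 0xFFFF * 0xFFFF].into_boxed_slice(),
        }
    }
}

This also means ArcateMem can no longer be Copy, so ArcateBus would need to borrow the memory (for example &mut ArcateMem) instead of copying it by value.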
I'm trying to deserialise a binary format (OpenType) which consists of data in multiple tables (binary structs). I would like to be able to deserialise the tables independently (because of how they're stored in the top-level file structure; imagine them being in separate files, so they have to be deserialised separately), but sometimes there are dependencies between them.
A simple example is the loca table, which contains an array of either 16-bit or 32-bit offsets, depending on the value of the indexToLocFormat field in the head table. As a more complex example, these loca table offsets are in turn used as offsets into the binary data of the glyf table to locate elements. So I need to get indexToLocFormat and loca: Vec<u32> "into" the deserializer somehow.
Obviously I need to implement Deserialize myself and write visitors, and I've got my head around doing that. When there are dependencies from a table to a subtable, I've also been able to work that out using deserialize_seed inside the table's visitor. But I don't know how to apply that to pass in information between tables.
I think I need to store what is essentially configuration information (the value of indexToLocFormat, the array of offsets) when constructing my deserializer object:
pub struct Deserializer<'de> {
    input: &'de [u8],
    ptr: usize,
    locaShortVersion: Option<bool>,
    glyfOffsets: Option<Vec<u32>>,
    ...
}
The problem is that I don't know how to retrieve that information when I'm inside the Visitor impl for the struct; I don't know how to get at the deserializer object at all, let alone how to type things so that I get at my Deserializer object with the configuration fields, not just a generic serde::de::Deserializer:
impl<'de> Visitor<'de> for LocaVisitor {
    type Value = Vec<u32>;

    fn expecting(&self, formatter: &mut std::fmt::Formatter) -> std::fmt::Result {
        write!(formatter, "A loca table")
    }

    fn visit_seq<A: SeqAccess<'de>>(self, mut seq: A) -> Result<Self::Value, A::Error> {
        let locaShortVersion = /* what goes here? */;
        if locaShortVersion {
            Ok(seq.next_element::<Vec<u16>>()?
                .ok_or_else(|| serde::de::Error::custom("Oops"))?
                .iter().map(|x| *x as u32).collect())
        } else {
            Ok(seq.next_element::<Vec<u32>>()?
                .ok_or_else(|| serde::de::Error::custom("Oops"))?)
        }
    }
}
(terrible code here; if you're wondering why I'm writing Yet Another OpenType Parsing Crate, it's because I want to both read and write font files.)
Actually, I think I've got it. The trick is to do the deserialization in stages. Rather than calling the deserializer module's from_bytes function (which wraps the struct creation and the T::deserialize call), do this instead:
use serde::de::DeserializeSeed; // Having this trait in scope is also key
let mut de = Deserializer::from_bytes(&binary_loca_table);
let ssd = SomeSpecialistDeserializer { /* configuration goes here */ };
let loca_table: Vec<u32> = ssd.deserialize(&mut de).unwrap();
In this case, I use a LocaDeserializer defined like so:
pub struct LocaDeserializer { locaIs32Bit: bool }
impl<'de> DeserializeSeed<'de> for LocaDeserializer {
    type Value = Vec<u32>;

    fn deserialize<D>(self, deserializer: D) -> std::result::Result<Self::Value, D::Error>
    where
        D: serde::de::Deserializer<'de>,
    {
        struct LocaDeserializerVisitor {
            locaIs32Bit: bool,
        }

        impl<'de> Visitor<'de> for LocaDeserializerVisitor {
            type Value = Vec<u32>;

            fn expecting(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
                write!(formatter, "a loca table")
            }

            fn visit_seq<A>(self, mut seq: A) -> std::result::Result<Vec<u32>, A::Error>
            where
                A: SeqAccess<'de>,
            {
                // Collect every offset in the sequence; short-format (16-bit)
                // offsets are stored divided by two, so scale them back up.
                let mut offsets = Vec::new();
                if self.locaIs32Bit {
                    while let Some(offset) = seq.next_element::<u32>()? {
                        offsets.push(offset);
                    }
                } else {
                    while let Some(offset) = seq.next_element::<u16>()? {
                        offsets.push(offset as u32 * 2);
                    }
                }
                Ok(offsets)
            }
        }

        deserializer.deserialize_seq(LocaDeserializerVisitor {
            locaIs32Bit: self.locaIs32Bit,
        })
    }
}
And now:
fn loca_de() {
    let binary_loca = vec![
        0x00, 0x01, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x1a,
    ];

    let mut de = Deserializer::from_bytes(&binary_loca);
    let cs: loca::LocaDeserializer = loca::LocaDeserializer { locaIs32Bit: false };
    let floca: Vec<u32> = cs.deserialize(&mut de).unwrap();
    println!("{:?}", floca);
    // [2, 0, 2, 0, 0, 52]

    let mut de = Deserializer::from_bytes(&binary_loca);
    let cs: loca::LocaDeserializer = loca::LocaDeserializer { locaIs32Bit: true };
    let floca: Vec<u32> = cs.deserialize(&mut de).unwrap();
    println!("{:?}", floca);
    // [65536, 65536, 26]
}
Serde is very nice - once you have got your head around it.