Is there a Rust equivalent of the following C++ sample (that I've written for this question):
union example {
    uint32_t fullValue;
    struct {
        unsigned sixteen1: 16;
        unsigned sixteen2: 16;
    };
    struct {
        unsigned three: 3;
        unsigned twentynine: 29;
    };
};
example e;
e.fullValue = 12345678;
std::cout << e.sixteen1 << ' ' << e.sixteen2 << ' ' << e.three << ' ' << e.twentynine;
For reference, I'm writing a CPU emulator, and being able to easily split out binary parts of a variable like this, and refer to them by different names, makes the code much simpler. I know how to do it in C++ (as above), but am struggling to work out the equivalent in Rust.
You could do this by creating a newtype struct and extracting the relevant bits using masking and/or shifts.
The code to do this is slightly longer (but not by much) and, importantly, avoids the undefined behavior you are triggering in C++ by reading a union member other than the one that was last written.
#[derive(Debug, Clone, Copy)]
struct Example(pub u32);

impl Example {
    pub fn sixteen1(self) -> u32 {
        self.0 & 0xffff
    }
    pub fn sixteen2(self) -> u32 {
        self.0 >> 16
    }
    pub fn three(self) -> u32 {
        self.0 & 7
    }
    pub fn twentynine(self) -> u32 {
        self.0 >> 3
    }
}

pub fn main() {
    let e = Example(12345678);
    println!("{} {} {} {}", e.sixteen1(), e.sixteen2(), e.three(), e.twentynine());
}
Update
You can make some macros for extracting certain bits:
// Create a u32 mask that's all 0 except for one patch of 1's that
// begins at index `start` and continues for `len` bits.
macro_rules! mask {
    ($start:expr, $len:expr) => {{
        assert!($start >= 0);
        assert!($len > 0);
        assert!($start + $len <= 32);
        if $len == 32 {
            assert!($start == 0);
            0xffffffffu32
        } else {
            ((1u32 << $len) - 1) << $start
        }
    }};
}
const _: () = assert!(mask!(3, 7) == 0b1111111000);
const _: () = assert!(mask!(0, 32) == 0xffffffff);
// Select `num_bits` bits from `value` starting at `start`.
// For example, select_bits!(0xabcd1234, 8, 12) == 0xd12
// because the created mask is 0x000fff00.
macro_rules! select_bits {
    ($value:expr, $start:expr, $num_bits:expr) => {{
        let mask = mask!($start, $num_bits);
        ($value & mask) >> mask.trailing_zeros()
    }};
}
const _: () = assert!(select_bits!(0xabcd1234, 8, 12) == 0xd12);
Then either use these directly on a u32 or make a struct to implement taking certain bits:
struct Example {
    v: u32,
}

impl Example {
    pub fn first_16(&self) -> u32 {
        select_bits!(self.v, 0, 16)
    }
    pub fn last_16(&self) -> u32 {
        select_bits!(self.v, 16, 16)
    }
    pub fn first_3(&self) -> u32 {
        select_bits!(self.v, 0, 3)
    }
    pub fn last_29(&self) -> u32 {
        select_bits!(self.v, 3, 29)
    }
}

fn main() {
    // Use hex for more easily checking the expected values.
    let e = Example { v: 0x12345678 };
    println!("{:x} {:x} {:x} {:x}", e.first_16(), e.last_16(), e.first_3(), e.last_29());

    // Or use decimal for checking against the provided C++ code.
    let e = Example { v: 12345678 };
    println!("{} {} {} {}", e.first_16(), e.last_16(), e.first_3(), e.last_29());
}
Original Answer
While Rust does have unions, it may be better to use a struct for your use case and just get bits from the struct's single value.
// Create a u32 mask that's all 0 except for one patch of 1's that
// begins at index `start` and continues for `len` bits.
macro_rules! mask {
    ($start:expr, $len:expr) => {{
        assert!($start >= 0);
        assert!($len > 0);
        assert!($start + $len <= 32);
        let mut mask = 0u32;
        for i in 0..$len {
            mask |= 1u32 << (i + $start);
        }
        mask
    }};
}
struct Example {
    v: u32,
}

impl Example {
    pub fn first_16(&self) -> u32 {
        self.get_bits(mask!(0, 16))
    }
    pub fn last_16(&self) -> u32 {
        self.get_bits(mask!(16, 16))
    }
    pub fn first_3(&self) -> u32 {
        self.get_bits(mask!(0, 3))
    }
    pub fn last_29(&self) -> u32 {
        self.get_bits(mask!(3, 29))
    }

    // Get the bits of `self.v` specified by `mask`.
    // Example:
    //   self.v == 0xa9bf01f3
    //   mask   == 0x00fff000
    //   The result is 0xbf0
    fn get_bits(&self, mask: u32) -> u32 {
        // Find how many trailing zeros `mask` (in binary) has.
        // For example, the mask 0xa0 == 0b10100000 has 5.
        let mut trailing_zeros_count_of_mask = 0;
        while mask & (1u32 << trailing_zeros_count_of_mask) == 0 {
            trailing_zeros_count_of_mask += 1;
        }
        (self.v & mask) >> trailing_zeros_count_of_mask
    }
}

fn main() {
    // Use hex for more easily checking the expected values.
    let e = Example { v: 0x12345678 };
    println!("{:x} {:x} {:x} {:x}", e.first_16(), e.last_16(), e.first_3(), e.last_29());

    // Or use decimal for checking against the provided C++ code.
    let e = Example { v: 12345678 };
    println!("{} {} {} {}", e.first_16(), e.last_16(), e.first_3(), e.last_29());
}
This setup makes it easy to select any range of bits you want. For example, if you want to get the middle 16 bits of the u32, you just define:
pub fn middle_16(&self) -> u32 {
    self.get_bits(mask!(8, 16))
}
And you don't even really need the struct. Instead of having get_bits() be a method, you could define it to take a u32 value and a mask, and then define functions like
pub fn first_3(v: u32) -> u32 {
    get_bits(v, mask!(0, 3))
}
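For completeness, here is a minimal sketch of that free function (my own addition, not part of the original answer), using the built-in u32::trailing_zeros in place of the manual loop from the get_bits method above:

// Free-function variant of `get_bits`: same logic as the method, with the
// value passed in explicitly.
fn get_bits(v: u32, mask: u32) -> u32 {
    (v & mask) >> mask.trailing_zeros()
}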
Note
This Rust code works the same regardless of your machine's endianness, because it only operates on integer values with shifts and masks and never reinterprets their bytes in memory. Still, double-check it if endianness could be a problem for you.
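For comparison, here is a sketch of the closest literal union translation (my own illustration). Rust unions have no bit-fields, so only the 16/16 split maps directly, every field read is unsafe, and, unlike the shift-based code above, the result does depend on byte order:

// An endianness-dependent union translation of the 16/16 split.
// Reading any union field requires `unsafe` because the compiler cannot
// know which field was last written.
#[repr(C)]
union Halves {
    full_value: u32,
    halves: [u16; 2],
}

fn main() {
    let e = Halves { full_value: 12345678 };
    unsafe {
        // On a little-endian machine this prints "24910 188";
        // on a big-endian machine the two halves are swapped.
        println!("{} {}", e.halves[0], e.halves[1]);
    }
}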
You could use the bitfield crate.
This appears to approximate what you are looking for at least on a syntactic level.
For reference, your original C++ code prints:
24910 188 6 1543209
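Those numbers are easy to verify with plain shifts and masks (12345678 is 0x00BC614E):

fn main() {
    let v: u32 = 12345678; // 0x00BC614E
    //  low 16 bits, high 16 bits, low 3 bits, high 29 bits
    println!("{} {} {} {}", v & 0xffff, v >> 16, v & 0b111, v >> 3);
    // prints: 24910 188 6 1543209
}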
Now there is no built-in functionality in Rust for bitfields, but there is the bitfield crate.
It allows specifying a newtype struct and then generates setters/getters for parts of the wrapped value.
For example, pub twentynine, set_twentynine: 31, 3; means that it should generate the setter set_twentynine() and getter twentynine() that set/get bits 3 through 31, both inclusive.
So, translating your C++ union into a Rust bitfield, this is how it could look:
use bitfield::bitfield;
bitfield! {
    pub struct Example(u32);
    pub full_value, set_full_value: 31, 0;
    pub sixteen1, set_sixteen1: 15, 0;
    pub sixteen2, set_sixteen2: 31, 16;
    pub three, set_three: 2, 0;
    pub twentynine, set_twentynine: 31, 3;
}

fn main() {
    let mut e = Example(0);
    e.set_full_value(12345678);
    println!(
        "{} {} {} {}",
        e.sixteen1(),
        e.sixteen2(),
        e.three(),
        e.twentynine()
    );
}
24910 188 6 1543209
Note that those generated setters/getters are small enough to have a very high chance of being inlined by the compiler, giving you zero overhead.
Of course if you want to avoid adding an additional dependency and instead want to implement the getters/setters by hand, look at #apilat's answer instead.
Alternative: the c2rust-bitfields crate:
use c2rust_bitfields::BitfieldStruct;

#[repr(C, align(1))]
#[derive(BitfieldStruct)]
struct Example {
    #[bitfield(name = "full_value", ty = "u32", bits = "0..=31")]
    #[bitfield(name = "sixteen1", ty = "u16", bits = "0..=15")]
    #[bitfield(name = "sixteen2", ty = "u16", bits = "16..=31")]
    #[bitfield(name = "three", ty = "u8", bits = "0..=2")]
    #[bitfield(name = "twentynine", ty = "u32", bits = "3..=31")]
    data: [u8; 4],
}

fn main() {
    let mut e = Example { data: [0; 4] };
    e.set_full_value(12345678);
    println!(
        "{} {} {} {}",
        e.sixteen1(),
        e.sixteen2(),
        e.three(),
        e.twentynine()
    );
}
24910 188 6 1543209
The advantage of this one is that you can specify the type of each part yourself; the first crate used u32 for all of them.
I'm unsure, however, how endianness plays into this one; it might yield different results on a system with a different endianness. It might require further research to be sure.
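One quick way to check is to write a known value and inspect the backing bytes directly (a sketch reusing the Example struct above; the constant is arbitrary):

fn main() {
    let mut e = Example { data: [0; 4] };
    e.set_full_value(0x12345678);
    // The order of these bytes reveals how `full_value` is laid out in `data`.
    println!("{:02x?}", e.data);
}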
Related
Currently I'm using the following code to return a number as a binary (base 2), octal (base 8), or hexadecimal (base 16) string.
fn convert(inp: u32, out: u32, numb: &String) -> Result<String, String> {
    match isize::from_str_radix(numb, inp) {
        Ok(a) => match out {
            2 => Ok(format!("{:b}", a)),
            8 => Ok(format!("{:o}", a)),
            16 => Ok(format!("{:x}", a)),
            10 => Ok(format!("{}", a)),
            0 | 1 => Err(format!("No base lower than 2!")),
            _ => Err(format!("printing in this base is not supported")),
        },
        Err(e) => Err(format!(
            "Could not convert {} to a number in base {}.\n{:?}\n",
            numb, inp, e
        )),
    }
}
Now I want to replace the inner match statement so I can return the number as an arbitrarily based string (e.g. base 3.) Are there any built-in functions to convert a number into any given radix, similar to JavaScript's Number.toString() method?
For now, you cannot do it using the standard library, but you can:
use my crate radix_fmt
or roll your own implementation:
fn format_radix(mut x: u32, radix: u32) -> String {
    let mut result = vec![];
    loop {
        let m = x % radix;
        x /= radix;

        // will panic if you use a bad radix (< 2 or > 36).
        result.push(std::char::from_digit(m, radix).unwrap());
        if x == 0 {
            break;
        }
    }
    result.into_iter().rev().collect()
}

fn main() {
    assert_eq!(format_radix(1234, 10), "1234");
    assert_eq!(format_radix(1000, 10), "1000");
    assert_eq!(format_radix(0, 10), "0");
}
If you wanted to eke out a little more performance, you can create a struct and implement Display or Debug for it. This avoids allocating a String. For maximum over-engineering, you can also have a stack-allocated array instead of the Vec.
Here is Boiethios' answer with these changes applied:
use std::fmt;

struct Radix {
    x: i32,
    radix: u32,
}

impl Radix {
    fn new(x: i32, radix: u32) -> Result<Self, &'static str> {
        if radix < 2 || radix > 36 {
            Err("Unsupported radix")
        } else {
            Ok(Self { x, radix })
        }
    }
}

impl fmt::Display for Radix {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        let mut x = self.x;
        // Good for binary formatting of `u128`s
        let mut result = ['\0'; 128];
        let mut used = 0;
        let negative = x < 0;
        if negative {
            x = -x;
        }
        let mut x = x as u32;
        loop {
            let m = x % self.radix;
            x /= self.radix;
            result[used] = std::char::from_digit(m, self.radix).unwrap();
            used += 1;
            if x == 0 {
                break;
            }
        }
        if negative {
            write!(f, "-")?;
        }
        for c in result[..used].iter().rev() {
            write!(f, "{}", c)?;
        }
        Ok(())
    }
}

fn main() {
    assert_eq!(Radix::new(1234, 10).unwrap().to_string(), "1234");
    assert_eq!(Radix::new(1000, 10).unwrap().to_string(), "1000");
    assert_eq!(Radix::new(0, 10).unwrap().to_string(), "0");
}
This could still be optimized by:
creating an ASCII array instead of a char array
not zero-initializing the array
Since these avenues require unsafe or an external crate like arraybuf, I have not included them. You can see sample code in internal implementation details of the standard library.
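That said, the ASCII-array half of that idea can be approximated in safe code: store ASCII digit bytes in a [u8; 128] and print them with std::str::from_utf8. A minimal sketch (the name RadixU32 is mine; it handles only unsigned values):

use std::fmt;

struct RadixU32 {
    x: u32,
    radix: u32,
}

impl fmt::Display for RadixU32 {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        let mut x = self.x;
        // 128 bytes covers even a binary-formatted u128.
        let mut buf = [0u8; 128];
        let mut used = 0;
        loop {
            let m = x % self.radix;
            x /= self.radix;
            // `from_digit` only ever yields ASCII characters for radix <= 36.
            buf[used] = std::char::from_digit(m, self.radix).unwrap() as u8;
            used += 1;
            if x == 0 {
                break;
            }
        }
        // Digits were produced least-significant first.
        buf[..used].reverse();
        // Safe: the buffer holds only ASCII digits.
        f.write_str(std::str::from_utf8(&buf[..used]).unwrap())
    }
}

fn main() {
    assert_eq!(RadixU32 { x: 1234, radix: 10 }.to_string(), "1234");
}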
Here is an extended solution based on the first comment that does not restrict the parameter x to u32:
fn format_radix(mut x: u128, radix: u32) -> String {
    let mut result = vec![];
    loop {
        let m = x % radix as u128;
        x /= radix as u128;

        // will panic if you use a bad radix (< 2 or > 36).
        result.push(std::char::from_digit(m as u32, radix).unwrap());
        if x == 0 {
            break;
        }
    }
    result.into_iter().rev().collect()
}
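A quick check (my own example) that the wider type behaves the same:

fn main() {
    assert_eq!(format_radix(255, 2), "11111111");
    assert_eq!(format_radix(u128::MAX, 16), "ffffffffffffffffffffffffffffffff");
}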
This is faster than the other answer:
use std::char::from_digit;

fn encode(mut n: u32, r: u32) -> Option<String> {
    let mut s = String::new();
    loop {
        if let Some(c) = from_digit(n % r, r) {
            s.insert(0, c)
        } else {
            return None;
        }
        n /= r;
        if n == 0 {
            break;
        }
    }
    Some(s)
}
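For example (my own check):

fn main() {
    assert_eq!(encode(255, 16), Some("ff".to_string()));
    assert_eq!(encode(0, 10), Some("0".to_string()));
}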
Note I also tried these, but they were slower:
https://doc.rust-lang.org/std/collections/struct.VecDeque.html#method.push_front
https://doc.rust-lang.org/std/string/struct.String.html#method.push
https://doc.rust-lang.org/std/vec/struct.Vec.html#method.insert
The following is only an example. If there's a native solution for this exact problem with reading bytes - cool, but my goal is to learn how to do it by myself, for any other purpose as well.
I'd like to do something like this: (pseudo-code below)
let mut reader = Reader::new(bytesArr);

let int32: i32 = reader.read(); // separate implementation to read 4 bytes and convert into int32
let int64: i64 = reader.read(); // separate implementation to read 8 bytes and convert into int64
I imagine it looking like this: (pseudo-code again)
impl Reader {
    fn read<T>(&mut self) -> T {
        // if T is i32 ... else if ...
    }
}
or like this:
impl Reader {
    fn read(&mut self) -> i32 {
        // ...
    }
    fn read(&mut self) -> i64 {
        // ...
    }
}
But haven't found anything relatable yet.
(I actually have, for the first case (if T is i32 ...), but it looked really unreadable and inconvenient)
You could do this by having a Readable trait which you implement on i32 and i64, which does the operation. Then on Reader you could have a generic function which takes any type that is Readable and return it, for example:
struct Reader {
    n: u8,
}

trait Readable {
    fn read_from_reader(reader: &mut Reader) -> Self;
}

impl Readable for i32 {
    fn read_from_reader(reader: &mut Reader) -> i32 {
        reader.n += 1;
        reader.n as i32
    }
}

impl Readable for i64 {
    fn read_from_reader(reader: &mut Reader) -> i64 {
        reader.n += 1;
        reader.n as i64
    }
}

impl Reader {
    fn read<T: Readable>(&mut self) -> T {
        T::read_from_reader(self)
    }
}

fn main() {
    let mut r = Reader { n: 0 };
    let int32: i32 = r.read();
    let int64: i64 = r.read();
    println!("{} {}", int32, int64);
}
You can try it on the playground
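In a real byte reader, the Readable implementations would typically pull from a buffer and decode with the from_le_bytes/from_be_bytes constructors. A sketch under that assumption (all names here are mine, and little-endian order is assumed):

use std::convert::TryInto;

// Hypothetical buffer-backed reader: tracks a cursor into a byte slice.
struct SliceReader<'a> {
    data: &'a [u8],
    pos: usize,
}

impl<'a> SliceReader<'a> {
    fn take(&mut self, n: usize) -> &'a [u8] {
        let s = &self.data[self.pos..self.pos + n]; // panics if out of bounds
        self.pos += n;
        s
    }

    fn read<T: Readable>(&mut self) -> T {
        T::read_from(self)
    }
}

trait Readable {
    fn read_from(r: &mut SliceReader) -> Self;
}

impl Readable for i32 {
    fn read_from(r: &mut SliceReader) -> Self {
        i32::from_le_bytes(r.take(4).try_into().unwrap())
    }
}

impl Readable for i64 {
    fn read_from(r: &mut SliceReader) -> Self {
        i64::from_le_bytes(r.take(8).try_into().unwrap())
    }
}

fn main() {
    let bytes = [1, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0];
    let mut r = SliceReader { data: &bytes, pos: 0 };
    let a: i32 = r.read();
    let b: i64 = r.read();
    println!("{} {}", a, b); // 1 2
}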
After some trials and searches, I found that implementing this in current Rust seems a bit difficult, but not impossible.
Here is the code, I'll explain it afterwards:
#![feature(generic_const_exprs)]

use std::{
    mem::{self, MaybeUninit},
    ptr,
};

static DATA: [u8; 8] = [u8::MAX; 8];

struct Reader;

impl Reader {
    fn read<T: Copy + Sized>(&self) -> T
    where
        [(); mem::size_of::<T>()]: ,
    {
        let mut buf = [unsafe { MaybeUninit::uninit().assume_init() }; mem::size_of::<T>()];
        unsafe {
            ptr::copy_nonoverlapping(DATA.as_ptr(), buf.as_mut_ptr(), buf.len());
            mem::transmute_copy(&buf)
        }
    }
}

fn main() {
    let reader = Reader;
    let v_u8: u8 = reader.read();
    dbg!(v_u8);
    let v_u16: u16 = reader.read();
    dbg!(v_u16);
    let v_u32: u32 = reader.read();
    dbg!(v_u32);
    let v_u64: u64 = reader.read();
    dbg!(v_u64);
}
Suppose the global static variable DATA is the target data you want to read.
In current Rust, we cannot directly use the size of a generic parameter as the length of an array. This does not work:
fn example<T: Copy + Sized>() {
    let mut _buf = [0_u8; mem::size_of::<T>()];
}
The compiler gives a weird error:
error: unconstrained generic constant
--> src\main.rs:34:31
|
34 | let mut _buf = [0_u8; mem::size_of::<T>()];
| ^^^^^^^^^^^^^^^^^^^
|
= help: try adding a `where` bound using this expression: `where [(); mem::size_of::<T>()]:`
There is an issue that is tracking it, if you want to go deeper into this error you can take a look.
We just follow the compiler's suggestion and add the where bound; this requires the generic_const_exprs feature to be enabled.
Next, the unsafe { MaybeUninit::uninit().assume_init() } is optional; it avoids the cost of initializing the array, since we will overwrite it completely anyway. You can replace it with 0_u8 if you don't like it.
Finally, copy the data you need and transmute this array to your generic type, return.
I think you will see the output you expect:
[src\main.rs:38] v_u8 = 255
[src\main.rs:41] v_u16 = 65535
[src\main.rs:44] v_u32 = 4294967295
[src\main.rs:47] v_u64 = 18446744073709551615
I would like a macro that splits one byte into a tuple of 2-8 u8 parts, using the bitreader crate.
I managed to achieve that with the following code:
use bitreader::BitReader;

trait Tupleprepend<T> {
    type ResultType;
    fn prepend(self, t: T) -> Self::ResultType;
}

macro_rules! impl_tuple_prepend {
    ( () ) => {};
    ( ( $t0:ident $(, $types:ident)* ) ) => {
        impl<$t0, $($types,)* T> Tupleprepend<T> for ($t0, $($types,)*) {
            type ResultType = (T, $t0, $($types,)*);
            fn prepend(self, t: T) -> Self::ResultType {
                let ($t0, $($types,)*) = self;
                (t, $t0, $($types,)*)
            }
        }
        impl_tuple_prepend! { ($($types),*) }
    };
}

impl_tuple_prepend! {
    (_1, _2, _3, _4, _5, _6, _7, _8)
}

macro_rules! split_byte (
    ($reader:ident, $bytes:expr, $count:expr) => {{
        ($reader.read_u8($count).unwrap(),)
    }};
    ($reader:ident, $bytes:expr, $count:expr, $($next_counts:expr),+) => {{
        let head = split_byte!($reader, $bytes, $count);
        let tail = split_byte!($reader, $bytes, $($next_counts),+);
        tail.prepend(head.0)
    }};
    ($bytes:expr $(, $count:expr)* ) => {{
        let mut reader = BitReader::new($bytes);
        split_byte!(reader, $bytes $(, $count)+)
    }};
);
Now I can use this code as I would like to:
let buf: &[u8] = &[0x72];
let (bit1, bit2, bits3to8) = split_byte!(&buf, 1, 1, 6);
Is there a way to avoid using Tupleprepend trait and create only 1 tuple instead of 8 in the worst scenario?
Because the number of bit widths directly corresponds to the number of returned values, I'd solve the problem using generics and arrays instead. The macro only exists to remove the typing of the [], which I don't really think is worth it.
fn split_byte<A>(b: u8, bit_widths: A) -> A
where
    A: Default + std::ops::IndexMut<usize, Output = u8>,
    for<'a> &'a A: IntoIterator<Item = &'a u8>,
{
    let mut result = A::default();
    let mut start = 0;
    for (idx, &width) in bit_widths.into_iter().enumerate() {
        let shifted = b >> (8 - width - start);
        let mask = (0..width).fold(0, |a, _| (a << 1) | 1);
        result[idx] = shifted & mask;
        start += width;
    }
    result
}

macro_rules! split_byte {
    ($b:expr, $($w:expr),+) => (split_byte($b, [$($w),+]));
}

fn main() {
    let [bit1, bit2, bits3_to_8] = split_byte!(0b1010_1010, 1, 1, 6);
    assert_eq!(bit1, 0b1);
    assert_eq!(bit2, 0b0);
    assert_eq!(bits3_to_8, 0b10_1010);
}
See also:
How does for<> syntax differ from a regular lifetime bound?
How to write a trait bound for adding two references of a generic type?
How do I write the lifetimes for references in a type constraint when one of them is a local reference?
If it's ok to target nightly Rust, I'd use the unstable min_const_generics feature:
#![feature(min_const_generics)]

fn split_byte<const N: usize>(b: u8, bit_widths: [u8; N]) -> [u8; N] {
    let mut result = [0; N];
    let mut start = 0;
    for (idx, &width) in bit_widths.iter().enumerate() {
        let shifted = b >> (8 - width - start);
        let mask = (0..width).fold(0, |a, _| (a << 1) | 1);
        result[idx] = shifted & mask;
        start += width;
    }
    result
}

macro_rules! split_byte {
    ($b:expr, $($w:expr),+) => (split_byte($b, [$($w),+]));
}

fn main() {
    let [bit1, bit2, bits3_to_8] = split_byte!(0b1010_1010, 1, 1, 6);
    assert_eq!(bit1, 0b1);
    assert_eq!(bit2, 0b0);
    assert_eq!(bits3_to_8, 0b10_1010);
}
See also:
Is it possible to control the size of an array using the type parameter of a generic?
My code looks like:
macro_rules! mask {
    ($bitmap: tt, [..$count: tt], for type = $ty: ty) => {{
        let bit_count = std::mem::size_of::<$ty>() * 8;
        let dec_bit_count = bit_count - 1;
        $bitmap & [(1 << ($count & dec_bit_count)) - 1, <$ty>::MAX][((($count & !dec_bit_count)) != 0) as usize]
    }};
}

fn main() {
    let bitmap: u8 = 0b_1111_1111;
    let masked_bitmap = mask!(bitmap, [..5], for type = u8);
    println!("{:#010b}", masked_bitmap);
}
The above code will mask the bitmap. In the above example, 0b_1111_1111 on being masked by [..5] will become 0b_0001_1111.
I want my macro to be like this:
macro_rules! mask {
    ($bitmap: tt, [..$count: tt]) => {{
        let bit_count = std::mem::size_of::<decltype($bitmap)>() * 8;
        let dec_bit_count = bit_count - 1;
        $bitmap & [(1 << ($count & dec_bit_count)) - 1, <decltype($bitmap)>::MAX][((($count & !dec_bit_count)) != 0) as usize]
    }};
}
But I have to pass type to the macro to get this done. Is there something like decltype() from C++ that I could use?
No, Rust does not have the ability to get the type of an arbitrary expression. typeof is a reserved keyword to potentially allow it in the future:
fn main() {
    let a: i32 = 42;
    let b: typeof(a) = a;
}
error[E0516]: `typeof` is a reserved keyword but unimplemented
--> src/main.rs:3:12
|
3 | let b: typeof(a) = a;
| ^^^^^^^^^ reserved keyword
There are RFCs suggesting that it be added.
See also:
How do I match the type of an expression in a Rust macro?
Is it possible to access the type of a struct member for function signatures or declarations?
`.type` for getting concrete type of a binding — issue #2704
For your specific case, I would use traits instead:
use std::ops::RangeTo;

trait Mask {
    fn mask(self, range: RangeTo<usize>) -> Self;
}

impl Mask for u8 {
    #[inline]
    fn mask(self, range: RangeTo<usize>) -> Self {
        // Feel free to make this your more complicated bitwise logic
        let mut m = 0;
        for _ in 0..range.end {
            m <<= 1;
            m |= 1;
        }
        self & m
    }
}

fn main() {
    let bitmap: u8 = 0b_1111_1111;
    let masked_bitmap = bitmap.mask(..5);
    println!("{:#010b}", masked_bitmap);
}
You could, however, use macros to implement the trait for several integer types at once:
macro_rules! impl_mask {
    ($($typ:ty),*) => {
        $(
            impl Mask for $typ {
                #[inline]
                fn mask(self, range: RangeTo<usize>) -> Self {
                    let mut m = 0;
                    for _ in 0..range.end {
                        m <<= 1;
                        m |= 1;
                    }
                    self & m
                }
            }
        )*
    };
}

impl_mask!(u8, u16, u32, u64, u128);
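With that in place, the same call works for any of the implemented widths; for example:

fn main() {
    let bitmap: u16 = 0b_1111_1111_1111_1111;
    println!("{:#018b}", bitmap.mask(..5)); // 0b0000000000011111
}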
I'm working through the Rust WASM tutorial for Conway's game of life.
One of the simplest functions in the file is Universe::render (the one that renders a string representing the game state). It causes an error when I run wasm-pack build:
Fatal: error in validating input
Error: failed to execute `wasm-opt`: exited with exit code: 1
full command: "/home/vaer/.cache/.wasm-pack/wasm-opt-4d7a65327e9363b7/wasm-opt" "/home/vaer/src/learn-rust/wasm-game-of-life/pkg/wasm_game_of_life_bg.wasm" "-o" "/home/vaer/src/learn-rust/wasm-game-of-life/pkg/wasm_game_of_life_bg.wasm-opt.wasm" "-O"
To disable `wasm-opt`, add `wasm-opt = false` to your package metadata in your `Cargo.toml`.
If I remove that function, the code builds without errors. If I replace it with the following function, the build fails with the same error:
pub fn wtf() -> String {
    String::from("wtf")
}
It seems like any function that returns a String causes this error. Why?
Following is the entirety of my code:
mod utils;

use wasm_bindgen::prelude::*;

// When the `wee_alloc` feature is enabled, use `wee_alloc` as the global
// allocator.
#[cfg(feature = "wee_alloc")]
#[global_allocator]
static ALLOC: wee_alloc::WeeAlloc = wee_alloc::WeeAlloc::INIT;

// Begin game of life impl
use std::fmt;

#[wasm_bindgen]
#[repr(u8)]
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum Cell {
    Dead = 0,
    Alive = 1,
}

#[wasm_bindgen]
pub struct Universe {
    width: u32,
    height: u32,
    cells: Vec<Cell>,
}

impl fmt::Display for Universe {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        for line in self.cells.as_slice().chunks(self.width as usize) {
            for &cell in line {
                let symbol = if cell == Cell::Dead { '◻' } else { '◼' };
                write!(f, "{}", symbol)?;
            }
            write!(f, "\n")?;
        }
        Ok(())
    }
}

impl Universe {
    fn get_index(&self, row: u32, column: u32) -> usize {
        (row * self.width + column) as usize
    }

    fn live_neighbor_count(&self, row: u32, column: u32) -> u8 {
        let mut count = 0;
        for delta_row in [self.height - 1, 0, 1].iter().cloned() {
            for delta_col in [self.width - 1, 0, 1].iter().cloned() {
                if delta_row == 0 && delta_col == 0 {
                    continue;
                }
                let neighbor_row = (row + delta_row) % self.height;
                let neighbor_col = (column + delta_col) % self.width;
                let idx = self.get_index(neighbor_row, neighbor_col);
                count += self.cells[idx] as u8;
            }
        }
        count
    }
}

/// Public methods, exported to JavaScript.
#[wasm_bindgen]
impl Universe {
    pub fn tick(&mut self) {
        let mut next = self.cells.clone();
        for row in 0..self.height {
            for col in 0..self.width {
                let idx = self.get_index(row, col);
                let cell = self.cells[idx];
                let live_neighbors = self.live_neighbor_count(row, col);
                let next_cell = match (cell, live_neighbors) {
                    // Rule 1: Any live cell with fewer than two live neighbours
                    // dies, as if caused by underpopulation.
                    (Cell::Alive, x) if x < 2 => Cell::Dead,
                    // Rule 2: Any live cell with two or three live neighbours
                    // lives on to the next generation.
                    (Cell::Alive, 2) | (Cell::Alive, 3) => Cell::Alive,
                    // Rule 3: Any live cell with more than three live
                    // neighbours dies, as if by overpopulation.
                    (Cell::Alive, x) if x > 3 => Cell::Dead,
                    // Rule 4: Any dead cell with exactly three live neighbours
                    // becomes a live cell, as if by reproduction.
                    (Cell::Dead, 3) => Cell::Alive,
                    // All other cells remain in the same state.
                    (otherwise, _) => otherwise,
                };
                next[idx] = next_cell;
            }
        }
        self.cells = next;
    }

    pub fn new() -> Universe {
        let width = 64;
        let height = 64;
        let cells = (0..width * height)
            .map(|i| {
                if i % 2 == 0 || i % 7 == 0 {
                    Cell::Alive
                } else {
                    Cell::Dead
                }
            })
            .collect();
        Universe {
            width,
            height,
            cells,
        }
    }

    pub fn render(&self) -> String {
        self.to_string()
    }
}
Simply removing the render function at the bottom of this file causes the build to succeed. Replacing the render function with any function returning a String causes the build to fail. Why?
It turns out that this is not expected behavior; it is a bug in wasm-pack. The generated wasm apparently uses the mutable-globals feature, which the wasm-opt binary bundled by wasm-pack only accepts when that feature is explicitly enabled.
The issue can be worked around for now by adding the following to the project's Cargo.toml:
[package.metadata.wasm-pack.profile.release]
wasm-opt = ["-Oz", "--enable-mutable-globals"]
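Alternatively, as the error message itself suggests, you can disable wasm-opt entirely by setting wasm-opt = false in the same section, at the cost of losing its optimizations.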