Zero-initialize a union in Rust

Is there a reasonable way to zero-initialize a union data type in Rust?
The approach mentioned by the Rust Reference works well if the user initializes the largest field in the union, but doesn't work if a smaller field is set. For example, given a type:
union TypeFromFfi {
    small: u8,
    medium: u64,
    large: u128,
}
These don't seem to zero-initialize all of the union's memory:
let data = unsafe { TypeFromFfi { small: 0 } };
let data = unsafe { TypeFromFfi { medium: 0 } };
This seems to:
let data = unsafe { TypeFromFfi { large: 0 } };
This is as I expect, but I am hoping to avoid this approach because it requires the caller to know which field of the union is the largest.
Is there a general way to zero-initialize a union? So far I have thought of:
let data = unsafe { std::mem::zeroed::<TypeFromFfi>() };
It seems to work, but is there something that I could be missing?
Could there be an issue with padding bytes from using std::mem::zeroed? I can't think of anything...

std::mem::zeroed() is the correct way. There is no problem with initializing padding bytes.
Do note, however, that if an all-zero bit pattern is valid for none of the possible fields, it is not yet clear whether zero-initializing the union is valid. If at every bit position zero is a valid pattern for at least one of the fields, everything is fine.
Also note that your union probably has to be marked #[repr(C)], otherwise you cannot rely on its layout (including the above-mentioned zero validity).
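For example, a minimal sketch of the whole pattern (reusing the field names from the question):

#[repr(C)]
union TypeFromFfi {
    small: u8,
    medium: u64,
    large: u128,
}

fn main() {
    // Zeroes the entire union, including any padding, no matter which field is largest.
    let data: TypeFromFfi = unsafe { std::mem::zeroed() };

    // Reading a field back is fine here because an all-zero bit pattern is a
    // valid value for every field of this particular union.
    assert_eq!(unsafe { data.large }, 0);
}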

Related

How to write a wrapper enum for C error codes?

I am writing a Rust wrapper for a C API. It contains a function that may fail, in which case it returns an error code encoded as an int. Let's call these SOME_ERROR and OTHER_ERROR, and they will have the values 1 and 2, respectively. I want to write an enum wrapping these error codes, as follows:
// Declared in a separate C header
const SOME_ERROR: c_int = 1;
const OTHER_ERROR: c_int = 2;

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
#[repr(i32)]
enum ErrorCodeWrapper {
    SomeError = SOME_ERROR,
    OtherError = OTHER_ERROR,
}
Here comes my first question. It does not seem to be possible to specify std::os::raw::c_int as the underlying type of an enum. But I do feel like it should be, as int isn't required to be 32 bits wide. Is there any way to achieve this?
I'd then like some methods to convert to and from a raw error code:
use std::os::raw::c_int;
impl ErrorCodeWrapper {
    fn from_raw(raw: c_int) -> Option<Self> {
        match raw {
            SOME_ERROR => Some(Self::SomeError),
            OTHER_ERROR => Some(Self::OtherError),
            _ => None,
        }
    }

    unsafe fn from_raw_unchecked(raw: c_int) -> Self {
        *(&raw as *const _ as *const Self)
    }

    fn as_raw(self) -> c_int {
        unsafe { *(&self as *const _ as *const c_int) }
    }
}
The only way I could find to "bit-cast" c_int to and from ErrorCodeWrapper is to do it C-style, by casting a pointer and then dereferencing it. This should work as ErrorCodeWrapper and int have the same size and alignment, and the value of every ErrorCodeWrapper variant maps to its corresponding error code. However, this solution is a bit too hacky for my taste; is there a more idiomatic one, like C++'s std::bit_cast?
Furthermore, is it possible to replace the match statement in ErrorCodeWrapper::from_raw with a simple validity check, for simpler code in the case of more variants?
The last bit of code, the necessary error implementations:
use std::{fmt::Display, error::Error};
impl Display for ErrorCodeWrapper {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "{}", match self {
            Self::SomeError => "some error",
            Self::OtherError => "some other error",
        })
    }
}
impl Error for ErrorCodeWrapper {}
Now let's imagine a second wrapper, SuperErrorCodeWrapper, that includes some or all of the variants of ErrorCodeWrapper, with the same description and everything. That would mean that either:
One could "factor out" the common variants of ErrorCodeWrapper and SuperErrorCodeWrapper into a separate enum. ErrorCodeWrapper and SuperErrorCodeWrapper would then have a variant containing this enum. However I am not really fond of this kind of nesting, which would seem arbitrary when focusing on one particular error.
Duplicating the common variants across both enums.
The latter would add a lot to the existing boilerplate. Could a macro be a viable option to handle this?
Is there a library that could handle all this for me?
Here comes my first question. It does not seem to be possible to specify std::os::raw::c_int as the underlying type of an enum. But I do feel like it should be, as int isn't required to be 32 bits wide. Is there any way to achieve this?
No. There was an RFC in 2016 (I can't even access the RFC text; it seems it was removed), but it was closed:
We discussed in the #rust-lang/lang meeting and decided that while the RFC is well-motivated, it doesn't sufficiently address the various implementation complexities that must be overcome nor the interaction with hygiene. It would make sense to extend the attribute system to support more general paths before considering this RFC (but that is a non-trivial undertaking).
The best you can do is to use #[cfg_attr] to cover all configurations. c_int is defined per target architecture, and all of the current options are:
#[cfg_attr(any(target_arch = "avr", target_arch = "msp430"), repr(i16))]
#[cfg_attr(not(any(target_arch = "avr", target_arch = "msp430")), repr(i32))]
enum ErrorCodeWrapper { ... }
is there a more idiomatic one, like C++'s std::bit_cast?
Yes; std::mem::transmute().
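As a sketch, the two pointer casts from the question could be rewritten with transmute, reusing the ErrorCodeWrapper enum from the question and assuming a target where c_int is i32 so the sizes match:

use std::os::raw::c_int;

impl ErrorCodeWrapper {
    unsafe fn from_raw_unchecked(raw: c_int) -> Self {
        // The caller must guarantee that `raw` is one of the declared discriminants.
        std::mem::transmute::<c_int, Self>(raw)
    }

    fn as_raw(self) -> c_int {
        // Always fine in this direction: every variant is stored as an i32 value.
        unsafe { std::mem::transmute::<Self, c_int>(self) }
    }
}

Like the pointer cast, the transmute in from_raw_unchecked is only sound when the value really is one of the listed error codes; from_raw remains the safe entry point.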
One could "factor out" the common variants of ErrorCodeWrapper and SuperErrorCodeWrapper into a separate enum. ErrorCodeWrapper and SuperErrorCodeWrapper would then have a variant containing this enum.
If you do that, you lose the ability to transmute() (or pointer cast, it doesn't matter), as they'll no longer be layout-compatible with int.
Could a macro be a viable option to handle this?
Probably yes.
Is there a library that could handle all this for me?
I don't know of a library that handles all of this (although it is possible that one exists), but there is thiserror (and friends) for the Error and Display implementations, and strum::FromRepr can help you with from_raw().
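For illustration, a rough sketch of the same enum with thiserror generating the Display and Error impls (messages and discriminants taken from the question; treat it as an outline rather than a tested drop-in):

use std::os::raw::c_int;
use thiserror::Error;

// Same constants as in the question.
const SOME_ERROR: c_int = 1;
const OTHER_ERROR: c_int = 2;

#[derive(Debug, Clone, Copy, PartialEq, Eq, Error)]
#[repr(i32)]
enum ErrorCodeWrapper {
    #[error("some error")]
    SomeError = SOME_ERROR,
    #[error("some other error")]
    OtherError = OTHER_ERROR,
}

strum's FromRepr derive can generate the from_raw-style lookup in a similar way, though the exact generated signature depends on the declared repr.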

struct with reference to element of a vector in another field

I have the below example where I want a struct which holds a vector of data in one field, and has another field which refers to the currently selected element. My understanding is that this is not possible in Rust, because I could remove the element in tables that selected points to, thereby creating a dangling pointer (you can't take a mutable borrow while an immutable borrow exists). The obvious workaround is to instead store a usize index for the element, rather than a &'a String. But this means I need to update the index if I remove an element from tables. Is there any way to avoid this using smart pointers, or just any better solutions in general? I've looked at other questions, but they are not quite the same as below and have extra information which makes them harder to follow for a beginner like myself, whereas below is a very minimal example.
struct Data<'a> {
    selected: &'a String,
    tables: Vec<String>,
}

fn main() {
    let tables = vec!["table1".to_string(), "table2".to_string()];
    let my_stuff = Data {
        selected: &tables[0],
        tables: tables,
    };
}
You quite rightly assessed that the way you wrote it is not possible: Rust guarantees memory safety, and storing the selection as a reference would make it possible to create a dangling pointer.
There are several solutions that I could see here.
Static Strings
This of course only works if you store compile-time static strings.
struct Data {
    selected: &'static str,
    tables: Vec<&'static str>,
}

fn main() {
    let tables = vec!["table1", "table2"];
    let my_stuff = Data {
        selected: &tables[0],
        tables,
    };
}
The reason this works is that static strings are immutable and guaranteed never to be deallocated. Also, in case this is confusing, I recommend reading up on the differences between String and str slices.
You can even go one step further and relax the 'static lifetime to a generic 'a. You then have to store the elements as &'a str in the vector, to ensure they cannot be edited.
That in turn allows you to borrow from Strings, as long as they can be borrowed for the entire lifetime of the Data object.
struct Data<'a> {
    selected: &'a str,
    tables: Vec<&'a str>,
}

fn main() {
    let str1 = "table1".to_string();
    let str2 = "table2".to_string();
    let tables = vec![str1.as_str(), str2.as_str()];
    let my_stuff = Data {
        selected: &tables[0],
        tables,
    };
}
Reference counting smart pointers
Depending on your situation, there are several types that are recommended:
Rc<...> - if your data is immutable. Otherwise, you need to create interior mutability with:
Rc<Cell<...>> - safest and best solution IF your problem is single-threaded and deals with simple data types
Rc<RefCell<...>> - for more complex data types that have to be updated in-place and can't just be moved in and out
Arc<Mutex<...>> - as soon as your problem stretches over multiple threads
In your case, the data is in fact simple and your program is single-threaded, so I'd go with:
use std::{cell::Cell, rc::Rc};

struct Data {
    selected: Rc<Cell<String>>,
    tables: Vec<Rc<Cell<String>>>,
}

fn main() {
    let tables = vec![
        Rc::new(Cell::new("table1".to_string())),
        Rc::new(Cell::new("table2".to_string())),
    ];
    let my_stuff = Data {
        selected: tables[0].clone(),
        tables,
    };
}
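Both handles point at the same Cell, so (continuing that snippet) an update made through the vector is visible through selected as well. Note that because String is not Copy, reading a Cell<String> back out goes through take() or replace() rather than a plain getter:

// Updates the value seen through `selected` too: both Rcs share one Cell.
my_stuff.tables[0].set("renamed".to_string());

// String is not Copy, so reading the Cell means moving the value out.
assert_eq!(my_stuff.selected.take(), "renamed");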
Of course, if you don't want to modify your strings after creation, you could go with:
use std::rc::Rc;
struct Data {
    selected: Rc<String>,
    tables: Vec<Rc<String>>,
}

fn main() {
    let tables = vec![Rc::new("table1".to_string()), Rc::new("table2".to_string())];
    let my_stuff = Data {
        selected: tables[0].clone(),
        tables,
    };
}
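Building on that snippet (assuming my_stuff is declared mut), the reference counting is exactly what prevents the dangling pointer from your original code: removing the entry from tables does not invalidate selected, because the Rc keeps the String alive.

// `selected` still holds a reference-counted handle to the same String,
// so removing the Vec entry cannot leave it dangling.
my_stuff.tables.remove(0);
assert_eq!(*my_stuff.selected, "table1");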
Hiding the data structure and using an index
As you already mentioned, you could use an index instead. Then you would have to hide the vector and provide getters/setters/modifiers to make sure the index is kept in sync when the vector changes.
I'll leave the implementation up to the reader and won't provide an example here :)
I hope this helped already, or at least gave you a couple of new ideas. I'm happy to see new people come to the community, so feel free to ask further questions if you have any :)

Best way in Rust to count leaves in a binary search tree?

I'm developing a basic implementation of a binary search tree in Rust. I was creating a method for counting leaves, but ran into some very strange looking code to get it to work. I wanted to clarify if the way I did it is:
Considered appropriate by Rust standards/convention
Efficient
I'm using an enum that differentiates between a node or nothing being present:
pub enum BST<T: Ord> {
    Node {
        value: T, // template with type T
        left: Box<BST<T>>,
        right: Box<BST<T>>,
    },
    Empty,
}
Now, count_leaves(&self) is first checking if the provided type is either a Node or Empty. If it's Empty, I can just return 0, but if it's a valid Node then I need to check if the left and right children are Empty. If so, then I can return a 1 because I'm at a leaf.
pub fn count_leaves(&self) -> u32 {
    match self {
        BST::Node {
            value: _,
            ref left,
            ref right,
        } => {
            match (&**left, &**right) {
                (BST::Empty, BST::Empty) => 1,
                _ => {
                    left.count_leaves() + right.count_leaves()
                }
            }
        },
        BST::Empty => 0
    }
}
So, to check if both left and right are BST::Empty, I wanted to use a tuple! But in doing so, Rust tries to move both left and right into the tuple. Since my type BST<T> does not implement the Copy trait, this is not possible. Also, since left and right are both boxes and borrowed, something simply like this is not possible:
match (left, right) {
    BST::Empty => {},
    _ => {}
}
In order to use this tuple, it looks like I need to first dereference the borrowed Box using *, then dereference the Box again into its contents using a second *, and then finally borrow using & to avoid a move. This gives the weird looking (&**left, &**right).
From my testing this works, but I thought it looked really strange. Should I rewrite this in a more readable way (if there is one)?
I've considered using Option<> instead of the enum with the Node and Empty, but I wasn't sure if that would lead to anything more readable or more efficient.
Thanks!
EDIT:
Just wanted to clarify that when I say leaves I mean a node in the tree with no children, not a non-empty node.
You're close; the only awkward part is the (&**left, &**right) dance. When possible you want to ignore the boxes in favor of implicitly using Deref to perform operations on them: a tiny is_empty helper plus a match guard lets you call straight through the Box.
fn is_empty(&self) -> bool {
    matches!(self, BST::Empty)
}

pub fn count_leaves(&self) -> u32 {
    match self {
        // Deref lets us call methods on the boxed children directly.
        BST::Node { left, right, .. } if left.is_empty() && right.is_empty() => 1,
        BST::Node { left, right, .. } => left.count_leaves() + right.count_leaves(),
        BST::Empty => 0,
    }
}
Don't worry much about the cost of checking the children before recursing. A recursive function call (or any function call, really) can be very cheap since your code is already at the processor; reading a value through a pointer does take a (very tiny) amount of time, so ideally you only do it once per value, but the compiler is made of eldritch sorcery and will probably figure out the best way to optimize your code either way. Another option which may help is to add an #[inline] hint to the function, asking the compiler to unroll the recursive call one or more times if it thinks that would help performance.
You may find it helpful to change the structure of your BST. Because your tree is an enum, it has to be matched every time you perform any operation on it.
use std::cmp::Ordering;

pub struct BST<T> {
    left: Option<Box<BST<T>>>,
    right: Option<Box<BST<T>>>,
    data: T,
}

impl<T> BST<T> {
    pub fn new_root(data: T) -> Self {
        BST {
            left: None,
            right: None,
            data,
        }
    }
    pub fn count_leaves(&self) -> u64 {
        match (&self.left, &self.right) {
            // A node with no children is a leaf.
            (None, None) => 1,
            _ => {
                self.left.as_ref().map_or(0, |x| x.count_leaves())
                    + self.right.as_ref().map_or(0, |x| x.count_leaves())
            }
        }
    }
}
impl<T: Ord> BST<T> {
    pub fn insert(&mut self, data: T) {
        let side = match self.data.cmp(&data) {
            Ordering::Less | Ordering::Equal => &mut self.left,
            Ordering::Greater => &mut self.right,
        };
        if let Some(node) = side {
            node.insert(data);
        } else {
            *side = Some(Box::new(Self::new_root(data)));
        }
    }
}
Now this works well, but it also introduces a new problem that I'm guessing you were attempting to avoid with your solution: you can't create an empty BST<T>, which may make initializing your program difficult. We can fix this with a small wrapper struct (e.g. pub struct BinarySearchTree<T>(Option<BST<T>>)), which is also roughly what std::collections::LinkedList does. You may also be surprised to learn that this roughly halves the memory footprint compared to the original post: Empty requires just as much space as Node (an enum is as large as its largest variant), so the enum version effectively allocates the entire next layer of the tree even though it is never used.
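A rough sketch of such a wrapper (the name and method set here are just illustrative):

pub struct BinarySearchTree<T>(Option<BST<T>>);

impl<T: Ord> BinarySearchTree<T> {
    pub fn new() -> Self {
        BinarySearchTree(None)
    }

    pub fn insert(&mut self, data: T) {
        let root = &mut self.0;
        if let Some(node) = root {
            node.insert(data);
        } else {
            *root = Some(BST::new_root(data));
        }
    }

    pub fn count_leaves(&self) -> u64 {
        self.0.as_ref().map_or(0, |root| root.count_leaves())
    }
}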

When should we use a struct as opposed to an enum?

Structs and enums are similar to each other.
When would it be better to use a struct as opposed to an enum (or vice-versa)? Can someone give a clear example where using a struct is preferable to using an enum?
Perhaps the easiest way to explain the fundamental difference is that an enum contains "variants", of which you can only ever have one at a time, whereas a struct contains one or more fields, all of which you must have.
So you might use an enum to model something like an error code, where you can only ever have one at a time:
enum ErrorCode {
    NoDataReceived,
    CorruptedData,
    BadResponse,
}
Enum variants can contain values if needed. For example, we could add a case to ErrorCode like so:
enum ErrorCode {
    NoDataReceived,
    CorruptedData,
    BadResponse,
    BadHTTPCode(u16),
}
In this case, an instance of ErrorCode::BadHTTPCode always contains a u16.
This makes each individual variant behave kind of like either a tuple struct or unit struct:
// Unit structs have no fields
struct UnitStruct;
// Tuple structs contain anonymous values.
struct TupleStruct(u16, &'static str);
However, the advantage of writing them as enum variants is that each of the cases of ErrorCode can be stored in a value of type ErrorCode, as below (this would not be possible with unrelated structs).
fn handle_error(error: ErrorCode) {
    match error {
        ErrorCode::NoDataReceived => println!("No data received."),
        ErrorCode::CorruptedData => println!("Data corrupted."),
        ErrorCode::BadResponse => println!("Bad response received from server."),
        ErrorCode::BadHTTPCode(code) => println!("Bad HTTP code received: {}", code)
    };
}

fn main() {
    handle_error(ErrorCode::NoDataReceived); // prints "No data received."
    handle_error(ErrorCode::BadHTTPCode(404)); // prints "Bad HTTP code received: 404"
}
You can then match on the enum to determine which variant you've been given, and perform different actions depending on which one it is.
By contrast, the third type of struct that I didn't mention above is the most commonly used - it's the type of struct that everyone is referring to when they simply say "struct".
struct Response {
    data: Option<Data>,
    response: HTTPResponse,
    error: String,
}

fn main() {
    let response = Response {
        data: Option::None,
        response: HTTPResponse::BadRequest,
        error: "Bad request".to_owned()
    };
}
Note that in that case, in order to create a Response, all of its fields must be given values.
Also, the way that the value of response is created (i.e. HTTPResponse::Something) implies that HTTPResponse is an enum. It might look something like this:
enum HTTPResponse {
    Ok,         // 200
    BadRequest, // 400
    NotFound,   // 404
}
Enums have multiple possibilities. Structs have only one possible "type" of thing they can be. Mathematically, we say a struct is a product type and an enum is a sum of products. If you only have one possibility, use a struct. For example, a point in space is always going to be three numbers. It's never going to be a string, or a function, or something else. So it should be a struct containing three numbers. On the other hand, if you're building a mathematical expression, it could be (for instance) a number or two expressions joined by an operator. It has multiple possibilities, so it should be an enum.
In short, if a struct works, use a struct. Rust can optimize around it, and it's going to be clearer to anyone reading your code what the value is supposed to be treated as.
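To make that concrete, a quick sketch of the two examples just described (the names are illustrative):

// A point is always exactly three numbers: a product type, so a struct.
struct Point {
    x: f64,
    y: f64,
    z: f64,
}

// An expression is one of several possibilities: a sum of products, so an enum.
enum Expr {
    Number(f64),
    BinaryOp(Box<Expr>, char, Box<Expr>),
}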
An Enum is a type with a constrained set of values.
enum Rainbow {
    Red,
    Orange,
    Yellow,
    Green,
    Blue,
    Indigo,
    Violet,
}

let color = Rainbow::Red;
match color {
    Rainbow::Red => { /* handle the Red case */ }
    // all the rest of the variants would be handled here
    _ => {}
}
You can store data in the Enum if you need it.
enum ParseData {
    Whitespace,
    Token(String),
    Number(i32),
}
fn parse(input: String) -> Result<String, ParseData>;
A struct is a way to represent a thing.
struct Window {
    title: String,
    position: Position,
    visible: bool,
}
Now you can make new Window objects that represent a window on your screen.

What are the performance implications of consuming self and returning it?

I've been reading questions like Why does a function that accepts a Box<MyType> complain of a value being moved when a function that accepts self works?, Preferable pattern for getting around the "moving out of borrowed self" checker, and How to capture self consuming variable in a struct?, and now I'm curious about the performance characteristics of consuming self but possibly returning it to the caller.
To make a simpler example, imagine I want to make a collection type that's guaranteed to be non-empty. To achieve this, the "remove" operation needs to consume the collection and optionally return itself.
struct NonEmptyCollection { ... }

impl NonEmptyCollection {
    fn pop(mut self) -> Option<Self> {
        if self.len() == 1 {
            None
        } else {
            // really remove the element here
            Some(self)
        }
    }
}
(I suppose it should return the value it removed from the list too, but it's just an example.) Now let's say I call this function:
let mut c = NonEmptyCollection::new(...);
if let Some(new_c) = c.pop() {
    c = new_c
} else {
    // never use c again
}
What actually happens to the memory of the object? What if I have some code like:
let mut opt: Option<NonEmptyCollection> = Some(NonEmptyCollection::new(...));
opt = opt.take().and_then(|c| c.pop());
The function's signature can't guarantee that the returned object is actually the same one, so what optimizations are possible? Does something like the C++ return value optimization apply, allowing the returned object to be "constructed" in the same memory it was in before? If I have the choice between an interface like the above, and an interface where the caller has to deal with the lifetime:
enum PopResult {
    StillValid,
    Dead,
}

impl NonEmptyCollection {
    fn pop(&mut self) -> PopResult {
        // really remove the element
        if self.len() == 0 { PopResult::Dead } else { PopResult::StillValid }
    }
}
is there ever a reason to choose this dirtier interface for performance reasons? In the answer to the second example I linked, trentcl recommends storing Options in a data structure to allow the caller to do a change in-place instead of doing remove followed by insert every time. Would this dirty interface be a faster alternative?
YMMV
Depending on the optimizer's whim, you may end up with:
close to a no-op,
a few register moves,
a number of bit-copies.
This will depend on:
whether the call is inlined or not,
whether the caller re-assigns to the original variable or creates a fresh variable (and how well LLVM handles reusing dead space),
the size_of::<Self>().
The only guarantee you get is that no deep copy will occur, as there is no .clone() call.
For anything else, you need to check the LLVM IR or assembly.

Resources