I am working on implementing a Sieve of Atkin as my first decently sized program in Rust. This algorithm takes a number and returns a vector of all primes below that number. There are two different vectors I need in this function:
A BitVec: 1 for prime, 0 for not prime (flipped back and forth as part of the algorithm).
A vector containing all known primes.
The size of the BitVec is known as soon as the function is called. While the final size of the vector containing all known primes is not known, there are relatively accurate upper limits for the number of primes in a range. Using these I can set the size of the vector to an upper bound and then shrink_to_fit it before returning. The upshot is that neither array should ever need to have its capacity increased while the algorithm is running; if that happens, something has gone horribly wrong with the algorithm.
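For reference, one classical bound of this kind is π(x) < 1.25506·x/ln x for x > 1 (due to Rosser and Schoenfeld). A minimal sketch of pre-sizing the primes vector with it; the helper name is mine, not part of the question:

fn primes_upper_bound(limit: usize) -> usize {
    // π(x) < 1.25506 * x / ln(x) for x > 1 (Rosser–Schoenfeld)
    if limit < 2 {
        return 0;
    }
    let x = limit as f64;
    (1.25506 * x / x.ln()).ceil() as usize
}

fn main() {
    let limit = 1_000_000;
    // Capacity is an over-estimate; for 10^6 the bound gives ~90,842
    // slots while the true count is 78,498.
    let mut primes: Vec<usize> = Vec::with_capacity(primes_upper_bound(limit));
    // ... run the sieve, pushing each prime found ...
    primes.shrink_to_fit();
}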
Therefore, I would like my function to panic if the capacity of either the vector or the bitvec is changed during the running of the function. Is this possible and if so how would I be best off implementing it?
Thanks,
You can assert that the Vec's capacity() and len() are different before each push:
assert_ne!(v.capacity(), v.len());
v.push(value);
If you want it done automatically you'd have to wrap your vec in a newtype:
struct FixedSizeVec<T>(Vec<T>);
impl<T> FixedSizeVec<T> {
pub fn push(&mut self, value: T) {
assert_ne!(self.0.len(), self.0.capacity());
self.0.push(value)
}
}
To save on forwarding unchanged methods you can impl Deref(Mut) for your newtype.
use std::ops::{Deref, DerefMut};
impl<T> Deref for FixedSizeVec<T> {
type Target = Vec<T>;
fn deref(&self) -> &Vec<T> {
&self.0
}
}
impl<T> DerefMut for FixedSizeVec<T> {
fn deref_mut(&mut self) -> &mut Vec<T> {
&mut self.0
}
}
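With those impls in place, the wrapper can be used like a Vec while keeping the checked push. A minimal sketch; the with_capacity constructor is my addition for illustration:

impl<T> FixedSizeVec<T> {
    // Hypothetical convenience constructor, not part of the answer above.
    pub fn with_capacity(capacity: usize) -> Self {
        FixedSizeVec(Vec::with_capacity(capacity))
    }
}

fn main() {
    let mut v = FixedSizeVec::with_capacity(2);
    v.push(1); // the inherent push wins over the Deref'd Vec::push, so the check runs
    v.push(2);
    println!("{}", v.len()); // len() is reached through Deref
    // v.push(3); // would panic: len == capacity
}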
An alternative to the newtype pattern is to create a new trait with a method that performs the check, and implement it for the vector like so:
trait PushCheck<T> {
fn push_with_check(&mut self, value: T);
}
impl<T> PushCheck<T> for std::vec::Vec<T> {
fn push_with_check(&mut self, value: T) {
let prev_capacity = self.capacity();
self.push(value);
assert_eq!(prev_capacity, self.capacity());
}
}
fn main() {
let mut v = Vec::new();
v.reserve(4);
dbg!(v.capacity());
v.push_with_check(1);
v.push_with_check(1);
v.push_with_check(1);
v.push_with_check(1);
// This push will panic
v.push_with_check(1);
}
The upside is that you aren't creating a new type, but the obvious downside is you need to remember to use the newly defined method.
The question is not new and there are two approaches, as far as I can tell:
Use Vec<T>, as suggested here
Manage the heap memory yourself, using std::alloc::alloc, as shown here
My question is whether these are indeed the two (good) alternatives.
Just to make this perfectly clear: Both approaches work. The question is whether there is another, maybe preferred way. The example below is introduced to identify where use of Vec is not good, and where other approaches therefore may be better.
Let's state the problem: Suppose there's a C library that requires some buffer to write into. This could be a compression library, for example. It is easiest to have Rust allocate the heap memory and manage it instead of allocating in C/C++ with malloc/new and then somehow passing ownership to Rust.
Let's go with the compression example. If the library allows incremental (streaming) compression, then I would need a buffer that keeps track of some offset.
Following approach 1 (that is: "abuse" Vec<T>) I would wrap Vec and use len and capacity for my purposes:
/// `Buffer` is basically a Vec
pub struct Buffer<T>(Vec<T>);
impl<T> Buffer<T> {
/// Create new buffer of length `len`
pub fn new(len: usize) -> Self {
Buffer(Vec::with_capacity(len))
}
/// Return length of `Buffer`
pub fn len(&self) -> usize {
return self.0.len()
}
/// Return total allocated size of `Buffer`
pub fn capacity(&self) -> usize {
return self.0.capacity()
}
/// Return remaining length of `Buffer`
pub fn remaining(&self) -> usize {
return self.0.capacity() - self.len()
}
/// Increment the offset
pub fn increment(&mut self, by: usize) {
unsafe { self.0.set_len(self.0.len() + by); }
}
/// Returns an unsafe mutable pointer to the buffer
pub fn as_mut_ptr(&mut self) -> *mut T {
unsafe { self.0.as_mut_ptr().add(self.0.len()) }
}
/// Returns ref to `Vec<T>` inside `Buffer`
pub fn as_vec(&self) -> &Vec<T> {
&self.0
}
}
The only interesting functions are increment and as_mut_ptr.
Buffer would be used like this
fn main() {
// allocate buffer for compressed data
let mut buf: Buffer<u8> = Buffer::new(1024);
loop {
// perform C function call (`compress` and `some_input` are placeholders)
let compressed_len: usize = compress(some_input, buf.as_mut_ptr(), buf.remaining());
// increment
buf.increment(compressed_len);
// ... break once the input is exhausted ...
}
// get Vec inside buf
let compressed_data = buf.as_vec();
}
Buffer<T> as shown here is clearly dangerous, for example if any reference type is used. Even T = bool may result in undefined behaviour. But the problems with uninitialised instances of T can be avoided by introducing a trait that limits the possible types T.
Also, if alignment matters, then Buffer<T> is not a good idea.
But otherwise, is such a Buffer<T> really the best way to do this?
There doesn't seem to be an out-of-the box solution. The bytes crate comes close, it offers a "container for storing and operating on contiguous slices of memory", but the interface is not flexible enough.
You absolutely can use a Vec's spare capacity to write into manually. That is why .set_len() is available. However, compress() must know that the given pointer points to uninitialized memory and thus is not allowed to read from it (unless it has written to it first), and you must guarantee that the returned length is the number of bytes initialized. I think these rules are roughly the same between Rust and C or C++ in this regard.
Writing this in Rust would look like this:
pub struct Buffer<T>(Vec<T>);
impl<T> Buffer<T> {
pub fn new(len: usize) -> Self {
Buffer(Vec::with_capacity(len))
}
/// SAFETY: `by` must be less than or equal to `space_len()` and the bytes at
/// `space_ptr_mut()` to `space_ptr_mut() + by` must be initialized
pub unsafe fn increment(&mut self, by: usize) {
self.0.set_len(self.0.len() + by);
}
pub fn space_len(&self) -> usize {
self.0.capacity() - self.0.len()
}
pub fn space_ptr_mut(&mut self) -> *mut T {
unsafe { self.0.as_mut_ptr().add(self.0.len()) }
}
pub fn as_vec(&self) -> &Vec<T> {
&self.0
}
}
unsafe fn compress(_input: i32, ptr: *mut u8, len: usize) -> usize {
// right now just writes 5 bytes if there's space for them
let written = usize::min(5, len);
for i in 0..written {
ptr.add(i).write(0);
}
written
}
fn main() {
let mut buf: Buffer<u8> = Buffer::new(1024);
let some_input = 5i32;
unsafe {
let compressed_len: usize = compress(some_input, buf.space_ptr_mut(), buf.space_len());
buf.increment(compressed_len);
}
let compressed_data = buf.as_vec();
println!("{:?}", compressed_data);
}
You can see it on the playground. If you run it through Miri, you'll see it picks up no undefined behavior, but if you over-advertise how much you've written (say return written + 10) then it does produce an error that reading uninitialized memory was detected.
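For reference, Miri runs on the nightly toolchain; checking a program like the one above locally looks something like this:

rustup +nightly component add miri
cargo +nightly miri run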
One of the reasons there isn't an out-of-the-box type for this is because Vec is that type:
fn main() {
let mut buf: Vec<u8> = Vec::with_capacity(1024);
let some_input = 5i32;
let spare_capacity = buf.spare_capacity_mut();
unsafe {
let compressed_len: usize = compress(
some_input,
spare_capacity.as_mut_ptr().cast(),
spare_capacity.len(),
);
buf.set_len(buf.len() + compressed_len);
}
println!("{:?}", buf);
}
Your Buffer type doesn't really add any convenience or safety, and a third-party crate can't either, because soundness here hinges on the correctness of compress().
Is such a Buffer really the best way to do this?
Yes, this is pretty much the lowest-cost way to provide a buffer for writing. Looking at the generated release assembly, it is just one call to allocate and that's it. You can get tricky by using a special allocator, or simply pre-allocate and reuse allocations if you're doing this many times (but be sure to measure, since the built-in allocator will do this anyway, just more generally).
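As a sketch of the reuse idea, assuming the compress stub from above: Vec::clear() drops the contents but keeps the allocation, so each iteration writes into the same buffer without reallocating. The compress_many name is mine:

fn compress_many(inputs: &[i32]) {
    let mut buf: Vec<u8> = Vec::with_capacity(1024);
    for &input in inputs {
        buf.clear(); // drops contents but keeps the capacity
        let spare = buf.spare_capacity_mut();
        unsafe {
            let n = compress(input, spare.as_mut_ptr().cast(), spare.len());
            buf.set_len(n); // len was 0 after clear(), so n is the full length
        }
        // ... use buf before the next iteration overwrites it ...
    }
}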
I'm a complete newbie in Rust and I'm trying to get some understanding of the basics of the language.
Consider the following trait
trait Function {
fn value(&self, arg: &[f64]) -> f64;
}
and two structs implementing it:
struct Add {}
struct Multiply {}
impl Function for Add {
fn value(&self, arg: &[f64]) -> f64 {
arg[0] + arg[1]
}
}
impl Function for Multiply {
fn value(&self, arg: &[f64]) -> f64 {
arg[0] * arg[1]
}
}
In my main() function I want to group two instances of Add and Multiply in a vector, and then call the value method. The following works:
fn main() {
let x = vec![1.0, 2.0];
let funcs: Vec<&dyn Function> = vec![&Add {}, &Multiply {}];
for f in funcs {
println!("{}", f.value(&x));
}
}
And so does:
fn main() {
let x = vec![1.0, 2.0];
let funcs: Vec<Box<dyn Function>> = vec![Box::new(Add {}), Box::new(Multiply {})];
for f in funcs {
println!("{}", f.value(&x));
}
}
Is there any better / less verbose way? Can I work around wrapping the instances in a Box? What is the takeaway with trait objects in this case?
Is there any better / less verbose way?
There isn't really a way to make this less verbose. Since you are using trait objects, you need to tell the compiler that the vector's items are dyn Function and not the concrete type. The compiler can't just infer that you meant dyn Function trait objects, because there could be other traits that Add and Multiply both implement.
You can't abstract out the calls to Box::new either. For that to work, you would have to somehow map over a heterogeneous collection, which isn't possible in Rust. However, if you are writing this a lot, you might consider adding helper constructor functions for each concrete impl:
impl Add {
fn new() -> Add {
Add {}
}
fn new_boxed() -> Box<Add> {
Box::new(Add::new())
}
}
It's idiomatic to include a new constructor wherever possible, but it's also common to include alternative convenience constructors.
This makes the construction of the vector a bit less noisy:
let funcs: Vec<Box<dyn Function>> = vec![Add::new_boxed(), Multiply::new_boxed()];
What is the takeaway with trait objects in this case?
There is always a small performance hit with using dynamic dispatch. If all of your objects are the same type, they can be densely packed in memory, which can be much faster for iteration. In general, I wouldn't worry too much about this unless you are creating a library crate, or if you really want to squeeze out the last nanosecond of performance.
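If the set of implementations is closed and known up front, a common alternative to trait objects (not something the question asked for, shown only for comparison) is an enum, which keeps static dispatch and dense storage:

enum Op {
    Add,
    Multiply,
}

impl Op {
    fn value(&self, arg: &[f64]) -> f64 {
        match self {
            Op::Add => arg[0] + arg[1],
            Op::Multiply => arg[0] * arg[1],
        }
    }
}

fn main() {
    let x = vec![1.0, 2.0];
    // All elements are the same concrete type, so no Box or dyn is needed.
    let funcs = vec![Op::Add, Op::Multiply];
    for f in &funcs {
        println!("{}", f.value(&x));
    }
}

The trade-off is that adding a new operation means touching the enum, whereas a trait can be implemented by downstream code.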
In the Rustonomicon's guide to PhantomData, there is a part about what happens if a Vec-like struct has *const T field, but no PhantomData<T>:
The drop checker will generously determine that Vec<T> does not own any values of type T. This will in turn make it conclude that it doesn't need to worry about Vec dropping any T's in its destructor for determining drop check soundness. This will in turn allow people to create unsoundness using Vec's destructor.
What does it mean? If I implement Drop for a struct and manually destroy all Ts in it, why should I care if compiler knows that my struct owns some Ts?
The PhantomData<T> within Vec<T> (held indirectly via a Unique<T> within RawVec<T>) communicates to the compiler that the vector may own instances of T, and therefore the vector may run destructors for T when the vector is dropped.
Deep dive: We have a combination of factors here:
We have a Vec<T> which has an impl Drop (i.e. a destructor implementation).
Under the rules of RFC 1238, this would usually imply a relationship between instances of Vec<T> and any lifetimes that occur within T, by requiring that all lifetimes within T strictly outlive the vector.
However, the destructor for Vec<T> specifically opts out of these semantics for just that destructor (of Vec<T> itself) via the use of special unstable attributes (see RFC 1238 and RFC 1327). This allows a vector to hold references that have the same lifetime as the vector itself. This is considered sound; after all, the vector itself will not dereference the data pointed to by such references (all it's doing is dropping values and deallocating the backing array), as long as an important caveat holds.
The important caveat: While the vector itself will not dereference pointers within its contained values while destructing itself, it will drop the values held by the vector. If those values of type T themselves have destructors, those destructors for T get run. And if those destructors access the data held within their references, then we would have a problem if we allowed dangling pointers within those references.
So, diving in even more deeply: the way we confirm dropck validity for a given structure S is this: we first check whether S itself has an impl Drop for S (and if so, we enforce rules on S with respect to its type parameters). But even after that step, we then recursively descend into the structure of S itself, and check for each of its fields that everything is kosher according to dropck. (Note that we do this even if a type parameter of S is tagged with #[may_dangle].)
In this specific case, we have a Vec<T> which (indirectly via RawVec<T>/Unique<T>) owns a collection of values of type T, represented in a raw pointer *const T. However, the compiler attaches no ownership semantics to *const T; that field alone in a structure S implies no relationship between S and T, and thus enforces no constraint in terms of the relationship of lifetimes within the types S and T (at least from the viewpoint of dropck).
Therefore, if the Vec<T> had solely a *const T, the recursive descent into the structure of the vector would fail to capture the ownership relation between the vector and the instances of T contained within the vector. That, combined with the #[may_dangle] attribute on T, would cause the compiler to accept unsound code (namely cases where destructors for T end up trying to access data that has already been deallocated).
BUT: Vec<T> does not solely contain a *const T. There is also a PhantomData<T>, and that conveys to the compiler "hey, even though you can assume (due to the #[may_dangle] T) that the destructor for Vec won't access data of T when the vector is dropped, it is still possible that some destructor of T itself will access data of T as the vector is dropped."
The end effect: Given Vec<T>, if T doesn't have a destructor, then the compiler provides you with more flexibility (namely, it allows a vector to hold data with references to data that lives for the same amount of time as the vector itself, even though such data may be torn down before the vector is). But if T does have a destructor (and that destructor is not otherwise communicating to the compiler that it won't access any referenced data), then the compiler is more strict, requiring any referenced data to strictly outlive the vector (thus ensuring that when the destructor for T runs, all the referenced data will still be valid).
If one wants to try to understand this via concrete exploration, you can try comparing how the compiler differs in its treatment of little container types that vary in their use of #[may_dangle] and PhantomData.
Here is some sample code I have whipped up to illustrate this:
// Illustration of a case where PhantomData is providing necessary ownership
// info to rustc.
//
// MyBox2<T> uses just a `*const T` to hold the `T` it owns.
// MyBox3<T> has both a `*const T` AND a PhantomData<T>; the latter communicates
// its ownership relationship with `T`.
//
// Skim down to `fn f2()` to see the relevant case,
// and compare it to `fn f3()`. When you run the program,
// the output will include:
//
// drop PrintOnDrop(mb2b, PrintOnDrop("v2b", 13, INVALID), Valid)
//
// (However, in the absence of #[may_dangle], the compiler will constrain
// things in a manner that may indeed imply that PhantomData is unnecessary;
// pnkfelix is not 100% sure of this claim yet, though.)
// NB: these feature gates and the `alloc::heap` API date from an older
// nightly toolchain; current Rust exposes allocation via `std::alloc` instead.
#![feature(alloc, dropck_eyepatch, generic_param_attrs, heap_api)]
extern crate alloc;
use alloc::heap;
use std::fmt;
use std::marker::PhantomData;
use std::mem;
use std::ptr;
#[derive(Copy, Clone, Debug)]
enum State { INVALID, Valid }
#[derive(Debug)]
struct PrintOnDrop<T: fmt::Debug>(&'static str, T, State);
impl<T: fmt::Debug> PrintOnDrop<T> {
fn new(name: &'static str, t: T) -> Self {
PrintOnDrop(name, t, State::Valid)
}
}
impl<T: fmt::Debug> Drop for PrintOnDrop<T> {
fn drop(&mut self) {
println!("drop PrintOnDrop({}, {:?}, {:?})",
self.0,
self.1,
self.2);
self.2 = State::INVALID;
}
}
struct MyBox1<T> {
v: Box<T>,
}
impl<T> MyBox1<T> {
fn new(t: T) -> Self {
MyBox1 { v: Box::new(t) }
}
}
struct MyBox2<T> {
v: *const T,
}
impl<T> MyBox2<T> {
fn new(t: T) -> Self {
unsafe {
let p = heap::allocate(mem::size_of::<T>(), mem::align_of::<T>());
let p = p as *mut T;
ptr::write(p, t);
MyBox2 { v: p }
}
}
}
unsafe impl<#[may_dangle] T> Drop for MyBox2<T> {
fn drop(&mut self) {
unsafe {
// We want this to be *legal*. This destructor is not
// allowed to call methods on `T` (since it may be in
// an invalid state), but it should be allowed to drop
// instances of `T` as it deconstructs itself.
//
// (Note however that the compiler has no knowledge
// that `MyBox2<T>` owns an instance of `T`.)
ptr::read(self.v);
heap::deallocate(self.v as *mut u8,
mem::size_of::<T>(),
mem::align_of::<T>());
}
}
}
struct MyBox3<T> {
v: *const T,
_pd: PhantomData<T>,
}
impl<T> MyBox3<T> {
fn new(t: T) -> Self {
unsafe {
let p = heap::allocate(mem::size_of::<T>(), mem::align_of::<T>());
let p = p as *mut T;
ptr::write(p, t);
MyBox3 { v: p, _pd: Default::default() }
}
}
}
unsafe impl<#[may_dangle] T> Drop for MyBox3<T> {
fn drop(&mut self) {
unsafe {
ptr::read(self.v);
heap::deallocate(self.v as *mut u8,
mem::size_of::<T>(),
mem::align_of::<T>());
}
}
}
fn f1() {
// `let (v, _mb1);` and `let (_mb1, v)` won't compile due to dropck
let v1; let _mb1;
v1 = PrintOnDrop::new("v1", 13);
_mb1 = MyBox1::new(PrintOnDrop::new("mb1", &v1));
}
fn f2() {
{
let (v2a, _mb2a); // Sound, but not distinguished from below by rustc!
v2a = PrintOnDrop::new("v2a", 13);
_mb2a = MyBox2::new(PrintOnDrop::new("mb2a", &v2a));
}
{
let (_mb2b, v2b); // Unsound!
v2b = PrintOnDrop::new("v2b", 13);
_mb2b = MyBox2::new(PrintOnDrop::new("mb2b", &v2b));
// namely, v2b dropped before _mb2b, but latter contains
// value that attempts to access v2b when being dropped.
}
}
fn f3() {
let v3; let _mb3; // `let (v, mb3);` won't compile due to dropck
v3 = PrintOnDrop::new("v3", 13);
_mb3 = MyBox3::new(PrintOnDrop::new("mb3", &v3));
}
fn main() {
f1(); f2(); f3();
}
Caveat emptor — I'm not that strong in the extremely deep theory that truly answers your question. I'm just a layperson who has used Rust a bit and has read the related RFCs. Always refer back to those original sources for a less-diluted version of the truth.
RFC 769 introduced the actual Drop-Check Rule:
Let v be some value (either temporary or named) and 'a be some lifetime (scope); if the type of v owns data of type D, where
(1.) D has a lifetime- or type-parametric Drop implementation, and
(2.) the structure of D can reach a reference of type &'a _, and
(3.) either:
(A.) the Drop impl for D instantiates D at 'a directly, i.e. D<'a>, or,
(B.) the Drop impl for D has some type parameter with a trait bound T where T is a trait that has at least one method,
then 'a must strictly outlive the scope of v.
It then goes further to define some of those terms, including what it means for one type to own another. This goes further to mention PhantomData specifically:
Therefore, as an additional special case to the criteria above for when the type E owns data of type D, we include:
If E is PhantomData<T>, then recurse on T.
A key problem occurs when two variables are defined at the same time:
struct Noisy<'a>(&'a str);
impl<'a> Drop for Noisy<'a> {
fn drop(&mut self) { println!("Dropping {}", self.0 )}
}
fn main() {
let (mut v, s) = (Vec::new(), "hi".to_string());
let noisy = Noisy(&s);
v.push(noisy);
}
As I understand it, without The Drop-Check Rule and indicating that Vec owns Noisy, code like this might compile. When the Vec is dropped, the drop implementation could access an invalid reference; introducing unsafety.
Returning to your points:
If I implement Drop for a struct and manually destroy all Ts in it, why should I care if compiler knows that my struct owns some Ts?
The compiler must know that you own the value because you can/will call drop. Since the implementation of drop is arbitrary, if you are going to call it, the compiler must forbid you from accepting values that would cause unsafe behavior during drop.
Always remember that any arbitrary T can be a value, a reference, a value containing a reference, etc. When trying to puzzle out these types of things, it's important to try to use the most complicated variant for any thought experiments.
All of that should provide enough pieces to connect-the-dots; for full understanding, reading the RFC a few times is probably better than relying on my flawed interpretation.
Then it gets more complicated. RFC 1238 further modifies The Drop-Check Rule, removing this specific reasoning. It does say:
parametricity is a necessary but not sufficient condition to justify the inferences that dropck makes
Continuing to use PhantomData seems the safest thing to do, but it may not be required. An anonymous Twitter benefactor pointed out this code:
use std::marker::PhantomData;
#[derive(Debug)] struct MyGeneric<T> { x: Option<T> }
#[derive(Debug)] struct MyDropper<T> { x: Option<T> }
#[derive(Debug)] struct MyHiddenDropper<T> { x: *const T }
#[derive(Debug)] struct MyHonestHiddenDropper<T> { x: *const T, boo: PhantomData<T> }
impl<T> Drop for MyDropper<T> { fn drop(&mut self) { } }
impl<T> Drop for MyHiddenDropper<T> { fn drop(&mut self) { } }
impl<T> Drop for MyHonestHiddenDropper<T> { fn drop(&mut self) { } }
fn main() {
// Does Compile! (magic annotation on destructor)
{
let (a, mut b) = (0, vec![]);
b.push(&a);
}
// Does Compile! (no destructor)
{
let (a, mut b) = (0, MyGeneric { x: None });
b.x = Some(&a);
}
// Doesn't Compile! (has destructor, no attribute)
{
let (a, mut b) = (0, MyDropper { x: None });
b.x = Some(&a);
}
// Doesn't Compile! (has destructor; the raw pointer hides the ownership)
{
let (a, mut b) = (0, MyHiddenDropper { x: 0 as *const _ });
b.x = &&a;
}
// Doesn't Compile! (has destructor; PhantomData spells out the ownership)
{
let (a, mut b) = (0, MyHonestHiddenDropper { x: 0 as *const _, boo: PhantomData });
b.x = &&a;
}
}
This suggests that the changes in RFC 1238 made the compiler more conservative, such that simply having a lifetime or type parameter is enough to prevent it from compiling.
You can also note that Vec doesn't have this problem because it uses the unsafe_destructor_blind_to_params attribute described in the RFC.
I can shuffle a regular vector quite simply like this:
extern crate rand;
use rand::Rng;
fn shuffle(coll: &mut Vec<i32>) {
rand::thread_rng().shuffle(coll);
}
The problem is, my code now requires the use of a std::collections::VecDeque instead, which causes this code to not compile.
What's the simplest way of getting around this?
As of Rust 1.48, VecDeque supports the make_contiguous() method. That method doesn't allocate and has complexity of O(n), like shuffling itself. Therefore you can shuffle a VecDeque by calling make_contiguous() and then shuffling the returned slice:
use rand::prelude::*;
use std::collections::VecDeque;
pub fn shuffle<T>(v: &mut VecDeque<T>, rng: &mut impl Rng) {
v.make_contiguous().shuffle(rng);
}
Playground
Historical answer follows below.
Unfortunately, the rand::Rng::shuffle method is defined to shuffle slices. To meet its complexity guarantees, a VecDeque stores its elements in a ring buffer rather than one contiguous slice, so shuffle can never be directly invoked on a VecDeque.
The real requirements on the values argument of a shuffle algorithm are a finite sequence length, O(1) element access, and the ability to swap elements, all of which VecDeque fulfills. It would be nice if there were a trait that incorporated these, so that values could be generic over it, but there isn't one.
With the current library, you have two options:
Use Vec::from(deque) to move the VecDeque's contents into a temporary Vec, shuffle the vector, and move the contents back into the VecDeque (see the sketch after this list). The complexity of the operation will remain O(n), but it may require a potentially large and costly heap allocation of the temporary vector.
Implement the shuffle on VecDeque yourself. The Fisher-Yates shuffle used by rand::Rng is well understood and easy to implement. While in theory the standard library could switch to a different shuffle algorithm, that is not likely to happen in practice.
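A sketch of the first option; the function name is mine, and rand 0.6+ with the SliceRandom trait is assumed:

use rand::seq::SliceRandom;
use std::collections::VecDeque;

fn shuffle_via_vec<T>(deque: &mut VecDeque<T>, rng: &mut impl rand::Rng) {
    // Move the contents into a Vec, shuffle the slice, move them back.
    let mut v = Vec::from(std::mem::take(deque));
    v.shuffle(rng);
    *deque = VecDeque::from(v);
}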
A generic form of the second option, using a trait to express the len-and-swap requirement, and taking the code of rand::Rng::shuffle, could look like this:
use std::collections::VecDeque;
// Real requirement for shuffle
trait LenAndSwap {
fn len(&self) -> usize;
fn swap(&mut self, i: usize, j: usize);
}
// A copy of an earlier version of rand::Rng::shuffle, with the signature
// modified to accept any type that implements LenAndSwap
fn shuffle(values: &mut impl LenAndSwap, rng: &mut impl rand::Rng) {
let mut i = values.len();
while i >= 2 {
// invariant: elements with index >= i have been locked in place.
i -= 1;
// lock element i in place.
values.swap(i, rng.gen_range(0..=i));
}
}
// VecDeque trivially fulfills the LenAndSwap requirement, but
// we have to spell it out.
impl<T> LenAndSwap for VecDeque<T> {
fn len(&self) -> usize {
self.len()
}
fn swap(&mut self, i: usize, j: usize) {
self.swap(i, j)
}
}
fn main() {
let mut v: VecDeque<u64> = [1, 2, 3, 4].into_iter().collect();
shuffle(&mut v, &mut rand::thread_rng());
println!("{:?}", v);
}
You can use make_contiguous (documentation) to create a mutable slice that you can then shuffle:
use rand::prelude::*;
use std::collections::VecDeque;
fn main() {
let mut deque = VecDeque::new();
for p in 0..10 {
deque.push_back(p);
}
deque.make_contiguous().shuffle(&mut rand::thread_rng());
println!("Random deque: {:?}", deque)
}
Playground Link if you want to try it out online.
Shuffle the components of the VecDeque separately, starting with VecDeque::as_mut_slices:
use rand::seq::SliceRandom; // 0.6.5;
use std::collections::VecDeque;
fn shuffle(coll: &mut VecDeque<i32>) {
let mut rng = rand::thread_rng();
let (a, b) = coll.as_mut_slices();
a.shuffle(&mut rng);
b.shuffle(&mut rng);
}
As Lukas Kalbertodt points out, this solution never swaps elements between the two slices so a certain amount of randomization will not happen. Depending on your needs of randomization, this may be unnoticeable or a deal breaker.
Is there an equivalent of alloca to create variable length arrays in Rust?
I'm looking for the equivalent of the following C99 code:
void go(int n) {
int array[n];
// ...
}
It is not possible directly; there is no syntax in the language supporting it.
That said, this particular C99 feature is debatable: it has certain advantages (cache locality and bypassing malloc) but also disadvantages (it is easy to blow up the stack, it defeats a number of optimizations, it may turn static offsets into dynamic offsets, ...).
For now, I would advise you to use Vec instead. If you have performance issues, then you may look into the so-called "Small Vector Optimization". I have regularly seen the following pattern in C code where performance is required:
SomeType array[64] = {0};
SomeType *pointer, *dynamic_pointer = NULL; /* NULL so the free() check below is well-defined */
if (n <= 64) {
    pointer = array;
} else {
    pointer = dynamic_pointer = malloc(sizeof(SomeType) * n);
}
// ...
if (dynamic_pointer) { free(dynamic_pointer); }
Now, this is something that Rust supports easily (and better, in a way):
enum InlineVector<T, const N: usize> {
Inline(usize, [T; N]),
Dynamic(Vec<T>),
}
You can see an example simplistic implementation below.
What matters, however, is that you now have a type which:
uses the stack when less than N elements are required
moves off to the heap otherwise, to avoid blowing up the stack
Of course, it also always reserves enough space for N elements on the stack even if you only use 2; however, in exchange there is no call to alloca, so you avoid the issue of dynamic stack offsets.
And contrary to C, you still benefit from lifetime tracking so that you cannot accidentally return a reference to your stack-allocated array outside the function.
Note: prior to Rust 1.51, you wouldn't be able to parameterize by N, as const generics were not yet stable.
I will show off the most "obvious" methods:
enum SmallVector<T, const N: usize> {
Inline(usize, [T; N]),
Dynamic(Vec<T>),
}
impl<T: Copy + Clone, const N: usize> SmallVector<T, N> {
fn new(v: T, n: usize) -> Self {
if n <= N {
Self::Inline(n, [v; N])
} else {
Self::Dynamic(vec![v; n])
}
}
}
impl<T, const N: usize> SmallVector<T, N> {
fn as_slice(&self) -> &[T] {
match self {
Self::Inline(n, array) => &array[0..*n],
Self::Dynamic(vec) => vec,
}
}
fn as_mut_slice(&mut self) -> &mut [T] {
match self {
Self::Inline(n, array) => &mut array[0..*n],
Self::Dynamic(vec) => vec,
}
}
}
use std::ops::{Deref, DerefMut};
impl<T, const N: usize> Deref for SmallVector<T, N> {
type Target = [T];
fn deref(&self) -> &Self::Target {
self.as_slice()
}
}
impl<T, const N: usize> DerefMut for SmallVector<T, N> {
fn deref_mut(&mut self) -> &mut Self::Target {
self.as_mut_slice()
}
}
Usage:
fn main() {
let mut v = SmallVector::new(1u32, 4);
v[2] = 3;
println!("{}: {}", v.len(), v[2])
}
Which prints 4: 3 as expected.
No.
Doing that in Rust would entail the ability to store Dynamically Sized Types (DSTs) like [i32] on the stack, which the language doesn't support.
A deeper reason is that LLVM, to my knowledge, doesn't really support this. I'm given to believe that you can do it, but it significantly interferes with optimisations. As such, I'm not aware of any near-terms plans to allow this.