I am new to Rust. I want to write a function which can later be imported into Python as a module using the pyo3 crate.
Below is the Python implementation of the function I want to implement in Rust:
def pcompare(a, b):
    letters = []
    for i, letter in enumerate(a):
        if letter != b[i]:
            letters.append(f'{letter}{i + 1}{b[i]}')
    return letters
The first Rust implementation I wrote looks like this:
use pyo3::prelude::*;

#[pyfunction]
fn compare_strings_to_vec(a: &str, b: &str) -> PyResult<Vec<String>> {
    if a.len() != b.len() {
        panic!(
            "Reads are not the same length!
First string is length {} and second string is length {}.",
            a.len(),
            b.len()
        );
    }
    let a_vec: Vec<char> = a.chars().collect();
    let b_vec: Vec<char> = b.chars().collect();

    let mut mismatched_chars = Vec::new();

    for (mut index, (i, j)) in a_vec.iter().zip(b_vec.iter()).enumerate() {
        if i != j {
            index += 1;
            let mutation = format!("{i}{index}{j}");
            mismatched_chars.push(mutation);
        }
    }
    Ok(mismatched_chars)
}

#[pymodule]
fn compare_strings(_py: Python<'_>, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(compare_strings_to_vec, m)?)?;
    Ok(())
}
I built this in --release mode. The module could be imported into Python, but its performance was quite similar to that of the Python implementation.
My first question is: why are the Python and Rust functions similar in speed?
Now I am working on a parallel implementation in Rust. When just printing the result variable, the function works:
use rayon::prelude::*;

fn main() {
    let a: Vec<char> = String::from("aaaa").chars().collect();
    let b: Vec<char> = String::from("aaab").chars().collect();

    let length = a.len();
    let index: Vec<_> = (1..=length).collect();
    let mut mismatched_chars: Vec<String> = Vec::new();

    (a, index, b).into_par_iter().for_each(|(x, i, y)| {
        if x != y {
            let mutation = format!("{}{}{}", x, i, y).to_string();
            println!("{mutation}");
            //mismatched_chars.push(mutation);
        }
    });
}
However, when I try to push the mutation variable to the mismatched_chars vector:
use rayon::prelude::*;

fn main() {
    let a: Vec<char> = String::from("aaaa").chars().collect();
    let b: Vec<char> = String::from("aaab").chars().collect();

    let length = a.len();
    let index: Vec<_> = (1..=length).collect();
    let mut mismatched_chars: Vec<String> = Vec::new();

    (a, index, b).into_par_iter().for_each(|(x, i, y)| {
        if x != y {
            let mutation = format!("{}{}{}", x, i, y).to_string();
            //println!("{mutation}");
            mismatched_chars.push(mutation);
        }
    });
}
I get the following error:
error[E0596]: cannot borrow `mismatched_chars` as mutable, as it is a captured variable in a `Fn` closure
--> src/main.rs:16:13
|
16 | mismatched_chars.push(mutation);
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ cannot borrow as mutable
For more information about this error, try `rustc --explain E0596`.
error: could not compile `testing_compare_strings` due to previous error
I tried A LOT of different things. When I do:
use rayon::prelude::*;

fn main() {
    let a: Vec<char> = String::from("aaaa").chars().collect();
    let b: Vec<char> = String::from("aaab").chars().collect();

    let length = a.len();
    let index: Vec<_> = (1..=length).collect();
    let mut mismatched_chars: Vec<&str> = Vec::new();

    (a, index, b).into_par_iter().for_each(|(x, i, y)| {
        if x != y {
            let mutation = format!("{}{}{}", x, i, y).to_string();
            mismatched_chars.push(&mutation);
        }
    });
}
The error becomes:
error[E0596]: cannot borrow `mismatched_chars` as mutable, as it is a captured variable in a `Fn` closure
--> src/main.rs:16:13
|
16 | mismatched_chars.push(&mutation);
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ cannot borrow as mutable
error[E0597]: `mutation` does not live long enough
--> src/main.rs:16:35
|
10 | let mut mismatched_chars: Vec<&str> = Vec::new();
| -------------------- lifetime `'1` appears in the type of `mismatched_chars`
...
16 | mismatched_chars.push(&mutation);
| ----------------------^^^^^^^^^-
| | |
| | borrowed value does not live long enough
| argument requires that `mutation` is borrowed for `'1`
17 | }
| - `mutation` dropped here while still borrowed
I suspect that the solution is quite simple, but I cannot see it myself.
You have the right idea, but you will want to use an iterator chain with filter and map to remove or convert iterator items into different values. Rayon also provides a collect method, similar to regular iterators, to gather the items into a collection such as Vec<T> (any type implementing FromParallelIterator).
fn compare_strings_to_vec(a: &str, b: &str) -> Vec<String> {
    // Same as with the if statement, but just a little shorter to write.
    // Plus, it will print out the two values it is comparing if it errors.
    assert_eq!(a.len(), b.len(), "Reads are not the same length!");

    // Zip the character iterators from a and b together
    a.chars().zip(b.chars())
        // Iterate with the index of each item
        .enumerate()
        // Rayon function which turns a regular iterator into a parallel one
        .par_bridge()
        // Filter out values where the characters are the same
        .filter(|(_, (a, b))| a != b)
        // Convert the remaining values into an error string
        .map(|(index, (a, b))| format!("{}{}{}", a, index + 1, b))
        // Turn the items of this iterator into a Vec (or any other FromIterator type)
        .collect()
}
Rust Playground
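One usage caveat worth keeping in mind: par_bridge() does not preserve item order, so when there are several mismatches the collected Vec can come back in any order. A minimal sketch (the input strings are made up for illustration):

fn main() {
    // "axcy" vs "abcd" differ at positions 2 and 4 (1-based)
    let mut mismatches = compare_strings_to_vec("axcy", "abcd");
    // The parallel bridge gives no ordering guarantee, so sort before
    // displaying or comparing if the original order matters.
    mismatches.sort();
    println!("{:?}", mismatches); // ["x2b", "y4d"]
}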
Optimizing for speed
On the other hand, if you want speed we need to approach this problem from a different direction. You may have noticed that the rayon version is quite slow, since the cost of spawning threads and using concurrency structures is orders of magnitude higher than simply comparing the bytes on the original thread. In my benchmarks, I found that even with better workload distribution, additional threads were only helpful on my machine (64GB RAM, 16 cores) when the strings were at least 1-2 million bytes long. Given that you have stated they are typically ~30,000 bytes long, I think using rayon (or really any other threading for comparisons of this size) will only slow down your code.
Using criterion for benchmarking, I eventually came to this implementation. It generally gets about 2.8156 µs per run on strings of 30,000 characters with 10 different bytes. For comparison, the code posted in the original question usually gets around 61.156 µs on my system under the same conditions, so this should give a ~20x speedup. It can vary a bit, but it consistently got the best results in the benchmark. I'm guessing this should be fast enough for this step to no longer be the bottleneck in your code.
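For reference, a minimal criterion harness along these lines can reproduce this kind of measurement. The crate path and test strings below are placeholders, not part of the original setup:

use criterion::{black_box, criterion_group, criterion_main, Criterion};
// Placeholder import: point this at wherever compare_strings_to_vec lives in your crate.
use compare_strings::compare_strings_to_vec;

fn benchmark(c: &mut Criterion) {
    // Two 30,000-byte ASCII strings that differ at a single position.
    let a = "a".repeat(30_000);
    let mut b = a.clone();
    b.replace_range(15_000..15_001, "b");

    c.bench_function("compare_strings_to_vec", |bench| {
        bench.iter(|| compare_strings_to_vec(black_box(&a), black_box(&b)))
    });
}

criterion_group!(benches, benchmark);
criterion_main!(benches);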
The key focus of this implementation is to do the comparisons in batches. We can take advantage of the 128-bit registers on most CPUs to compare the input in 16-byte batches. When an inequality is found, the 16-byte section it covers is re-scanned for the exact position of the discrepancy. This gives a decent boost to performance. I initially thought that a usize would work better, but it seems that was not the case. I also attempted to use the portable_simd nightly feature to write a SIMD version of this code, but I was unable to match the speed of this code. I suspect this was either due to missed optimizations or a lack of experience on my part to use SIMD effectively.
I was worried about drops in speed due to the alignment of chunks not being enforced for u128 values, but it seems to mostly be a non-issue. First of all, it is generally quite difficult to find allocators which are willing to allocate at an address which is not a multiple of the system word size. Of course, this is due to practicality rather than any actual requirement. When I manually gave it unaligned slices (unaligned for u128s), it was not significantly affected. This is why I do not attempt to enforce that the start index of the slice be aligned to align_of::<u128>().
use std::mem::size_of;

fn compare_strings_to_vec(a: &str, b: &str) -> Vec<String> {
    let a_bytes = a.as_bytes();
    let b_bytes = b.as_bytes();

    let remainder = a_bytes.len() % size_of::<u128>();

    // Strongly suggest to the compiler we are iterating through u128
    a_bytes
        .chunks_exact(size_of::<u128>())
        .zip(b_bytes.chunks_exact(size_of::<u128>()))
        .enumerate()
        .filter(|(_, (a, b))| {
            let a_block: &[u8; 16] = (*a).try_into().unwrap();
            let b_block: &[u8; 16] = (*b).try_into().unwrap();
            u128::from_ne_bytes(*a_block) != u128::from_ne_bytes(*b_block)
        })
        .flat_map(|(word_index, (a, b))| {
            fast_path(a, b).map(move |x| word_index * size_of::<u128>() + x)
        })
        .chain(
            fast_path(
                &a_bytes[a_bytes.len() - remainder..],
                &b_bytes[b_bytes.len() - remainder..],
            )
            .map(|x| a_bytes.len() - remainder + x),
        )
        .map(|index| {
            format!(
                "{}{}{}",
                char::from(a_bytes[index]),
                index + 1,
                char::from(b_bytes[index])
            )
        })
        .collect()
}
/// Very similar to the regular route, but with nothing fancy; just get the indices of the mismatched bytes
#[inline(always)]
fn fast_path<'a>(a: &'a [u8], b: &'a [u8]) -> impl 'a + Iterator<Item = usize> {
    a.iter()
        .zip(b.iter())
        .enumerate()
        .filter_map(|(x, (a, b))| (a != b).then_some(x))
}
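As a quick sanity check, a minimal driver for the function above (not part of the benchmark setup):

fn main() {
    let mismatches = compare_strings_to_vec("aaaa", "aaab");
    // The only difference is at byte index 3, reported 1-based as "a4b".
    assert_eq!(mismatches, vec!["a4b".to_string()]);
    println!("{:?}", mismatches);
}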
You cannot directly mutate the captured variable mismatched_chars from multiple threads.
You can use an Arc<RwLock<...>> to share it between threads and mutate it safely.
use rayon::prelude::*;
use std::sync::{Arc, RwLock};

fn main() {
    let a: Vec<char> = String::from("aaaa").chars().collect();
    let b: Vec<char> = String::from("aaab").chars().collect();

    let length = a.len();
    let index: Vec<_> = (1..=length).collect();
    let mismatched_chars: Arc<RwLock<Vec<String>>> = Arc::new(RwLock::new(Vec::new()));

    (a, index, b).into_par_iter().for_each(|(x, i, y)| {
        if x != y {
            let mutation = format!("{}{}{}", x, i, y);
            mismatched_chars
                .write()
                .expect("could not acquire write lock")
                .push(mutation);
        }
    });

    for mismatch in mismatched_chars
        .read()
        .expect("could not acquire read lock")
        .iter()
    {
        eprintln!("{}", mismatch);
    }
}
Related
I'm experimenting with Rust by porting some C++ code. I write a lot of code that uses vectors as object pools by moving elements to the back in various ways and then resizing. Here's a ported function:
use rand::{thread_rng, Rng};

fn main() {
    for n in 1..11 {
        let mut a: Vec<u8> = (1..11).collect();
        keep_n_rand(&mut a, n);
        println!("{}: {:?}", n, a);
    }
}

fn keep_n_rand<T>(x: &mut Vec<T>, n: usize) {
    let mut rng = thread_rng();
    for i in n..x.len() {
        let j = rng.gen_range(0..i);
        if j < n {
            x.swap(i, j);
        }
    }
    x.truncate(n);
}
It keeps n elements chosen at random. It is done this way because it does not reduce the capacity of the vector so that more objects can be added later without allocating (on average). This might be iterated millions of times.
In C++, I would use x[j] = std::move(x[i]); because I am about to truncate the vector. While it has no impact in this example, if the swap were expensive, it would make sense to move. Is that possible and desirable in Rust? I can live with a swap. I'm just curious.
Correct me if I'm wrong: you're looking for a way to retain n random elements in a Vec and discard the rest. In that case, the easiest way would be to use partial_shuffle(), a rand function implemented for slices.
Shuffle a slice in place, but exit early.
Returns two mutable slices from the source slice. The first contains amount elements randomly permuted. The second has the remaining elements that are not fully shuffled.
use rand::{thread_rng, seq::SliceRandom};

fn main() {
    let mut rng = thread_rng();

    // Use the `RangeInclusive` (`..=`) syntax at times like this.
    for n in 1..=10 {
        let mut elements: Vec<u8> = (1..=10).collect();
        let (elements, _rest) = elements.as_mut_slice().partial_shuffle(&mut rng, n);
        println!("{n}: {elements:?}");
    }
}
Run this snippet on Rust Playground.
elements is shadowed, going from a Vec to a &mut [T]. If you're only going to use it inside the function, that's probably all you'll need. However, since it's a reference, you can't return it; the data it's pointing to is owned by the original vector, which will be dropped when it goes out of scope. If that's what you need, you'll have to turn the slice into a Vec.
While you can simply construct a new one from it using Vec::from, I suspect (but haven't tested) that it's more efficient to use Vec::split_off.
Splits the collection into two at the given index.
Returns a newly allocated vector containing the elements in the range [at, len). After the call, the original vector will be left containing the elements [0, at) with its previous capacity unchanged.
use rand::{thread_rng, seq::SliceRandom};

fn main() {
    let mut rng = thread_rng();

    for n in 1..=10 {
        let mut elements: Vec<u8> = (1..=10).collect();
        elements.as_mut_slice().partial_shuffle(&mut rng, n);
        let elements = elements.split_off(elements.len() - n);
        // `elements` is still a `Vec`; this time, containing only
        // the shuffled elements. You can use it as the return value.
        println!("{n}: {elements:?}");
    }
}
Run this snippet on Rust Playground.
Since this function lives on a performance-critical path, I'd recommend benchmarking it against your current implementation. At the time of writing this, criterion is the most popular way to do that. That said, rand is an established library, so I imagine it will perform as well or better than a manual implementation.
Sample Benchmark
I don't know what kind of numbers you're working with, but here's a sample benchmark with for n in 1..=100 and (1..=100).collect() (i.e. 100 instead of 10 in both places) without the print statements:
manual time: [73.683 µs 73.749 µs 73.821 µs]
rand with slice time: [68.074 µs 68.147 µs 68.226 µs]
rand with vec time: [54.147 µs 54.213 µs 54.288 µs]
Bizarrely, splitting off a Vec performed vastly better than not. Unless I made an error in my benchmarks, the compiler is probably doing something under the hood that you'll need a more experienced Rustacean than me to explain.
Benchmark Implementation
Cargo.toml
[dependencies]
rand = "0.8.5"
[dev-dependencies]
criterion = "0.4.0"
[[bench]]
name = "rand_benchmark"
harness = false
[[bench]]
name = "rand_vec_benchmark"
harness = false
[[bench]]
name = "manual_benchmark"
harness = false
benches/manual_benchmark.rs
use criterion::{criterion_group, criterion_main, Criterion};

fn manual_solution() {
    for n in 1..=100 {
        let mut elements: Vec<u8> = (1..=100).collect();
        keep_n_rand(&mut elements, n);
    }
}

fn keep_n_rand<T>(elements: &mut Vec<T>, n: usize) {
    use rand::{thread_rng, Rng};

    let mut rng = thread_rng();
    for i in n..elements.len() {
        let j = rng.gen_range(0..i);
        if j < n {
            elements.swap(i, j);
        }
    }
    elements.truncate(n);
}

fn benchmark(c: &mut Criterion) {
    c.bench_function("manual", |b| b.iter(manual_solution));
}

criterion_group!(benches, benchmark);
criterion_main!(benches);
benches/rand_benchmark.rs
use criterion::{criterion_group, criterion_main, Criterion};

fn rand_solution() {
    use rand::{seq::SliceRandom, thread_rng};

    let mut rng = thread_rng();
    for n in 1..=100 {
        let mut elements: Vec<u8> = (1..=100).collect();
        let (_elements, _) = elements.as_mut_slice().partial_shuffle(&mut rng, n);
    }
}

fn benchmark(c: &mut Criterion) {
    c.bench_function("rand with slice", |b| b.iter(rand_solution));
}

criterion_group!(benches, benchmark);
criterion_main!(benches);
benches/rand_vec_benchmark.rs
use criterion::{criterion_group, criterion_main, Criterion};

fn rand_solution() {
    use rand::{seq::SliceRandom, thread_rng};

    let mut rng = thread_rng();
    for n in 1..=100 {
        let mut elements: Vec<u8> = (1..=100).collect();
        elements.as_mut_slice().partial_shuffle(&mut rng, n);
        let _elements = elements.split_off(elements.len() - n);
    }
}

fn benchmark(c: &mut Criterion) {
    c.bench_function("rand with vec", |b| b.iter(rand_solution));
}

criterion_group!(benches, benchmark);
criterion_main!(benches);
Is that possible and desirable in rust?
It is not possible unless you constrain T: Copy or T: Clone: while C++ uses non-destructive moves (the source is left in a valid but unspecified state), Rust uses destructive moves (the source is gone), and moving x[i] out of the vector would leave a hole behind that the compiler cannot allow.
There are ways around it using unsafe, but they require being very careful and it's probably not worth the hassle (you can look at Vec::swap_remove for a taste; it basically does what you're doing here, except only between j and the last element of the vec).
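If cloning is acceptable for your element type, a minimal sketch of the Clone-constrained variant of your function could look like this (the temporary binding sidesteps any borrow-order questions):

use rand::{thread_rng, Rng};

fn keep_n_rand<T: Clone>(x: &mut Vec<T>, n: usize) {
    let mut rng = thread_rng();
    for i in n..x.len() {
        let j = rng.gen_range(0..i);
        if j < n {
            // Clone stands in for the C++ move: the element at `i` is
            // about to be discarded by `truncate`, so only `x[j]` matters.
            let kept = x[i].clone();
            x[j] = kept;
        }
    }
    x.truncate(n);
}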
I'd also recommend verified_tinker's solution, as I'm not convinced your shuffle is unbiased.
I'm trying to write some Rust code to decode GPS data from an SDR receiver. I'm reading samples in from a file and converting the binary data to a series of complex numbers, which is a time-consuming process. However, there are times when I want to stream samples in without keeping them in memory (e.g. one very large file processed only one way or samples directly from the receiver) and other times when I want to keep the whole data set in memory (e.g. one small file processed in multiple different ways) to avoid repeating the work of parsing the binary file.
Therefore, I want to write functions or structs with iterators to be as general as possible, but I know they aren't sized, so I need to put them in a Box. I would have expected something like this to work.
This is the simplest example I could come up with to demonstrate the same basic problem.
fn sum_squares_plus(iter: Box<Iterator<Item = usize>>, x: usize) -> usize {
    let mut ans: usize = 0;
    for i in iter {
        ans += i * i;
    }
    ans + x
}

fn main() {
    // Pretend this is an expensive operation that I don't want to repeat five times
    let small_data: Vec<usize> = (0..10).collect();

    for x in 0..5 {
        // Want to iterate over immutable references to the elements of small_data
        let iterbox: Box<Iterator<Item = usize>> = Box::new(small_data.iter());
        println!("{}: {}", x, sum_squares_plus(iterbox, x));
    }

    // 0..100 is more than 0..10 and I'm only using it once,
    // so I want to 'stream' it instead of storing it all in memory
    let x = 55;
    println!("{}: {}", x, sum_squares_plus(Box::new(0..100), x));
}
I've tried several different variants of this, but none seem to work. In this particular case, I'm getting
error[E0271]: type mismatch resolving `<std::slice::Iter<'_, usize> as std::iter::Iterator>::Item == usize`
--> src/main.rs:15:52
|
15 | let iterbox: Box<Iterator<Item = usize>> = Box::new(small_data.iter());
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected reference, found usize
|
= note: expected type `&usize`
found type `usize`
= note: required for the cast to the object type `dyn std::iter::Iterator<Item = usize>`
I'm not worried about concurrency and I'd be happy to just get it working sequentially on a single thread, but a concurrent solution would be a nice bonus.
The current error you're running into is here:
let iterbox: Box<Iterator<Item = usize>> = Box::new(small_data.iter());
You're declaring that you want an iterator that returns usize items, but small_data.iter() is an iterator that returns references to usize items (&usize). That's why you get the error "expected reference, found usize". usize is a small, cloneable type, so you can simply use the .cloned() iterator adapter to get an iterator that actually returns a usize.
let iterbox: Box<Iterator<Item = usize>> = Box::new(small_data.iter().cloned());
Once you're past that hurdle, the next problem is that the iterator over small_data contains a reference to small_data. Since sum_squares_plus is defined to accept a Box<Iterator<Item = usize>>, that signature implies that the Iterator trait object within the box has a 'static lifetime. The iterator you're providing does not, because it borrows small_data. To fix that, you need to adjust the sum_squares_plus definition to
fn sum_squares_plus<'a>(iter: Box<Iterator<Item = usize> + 'a>, x: usize) -> usize
Note the 'a lifetime annotations. The code should then compile, but unless there are constraints other than what's clearly defined here, a more idiomatic and efficient approach would be to avoid trait objects and the associated allocations. The code below should work using static dispatch without any trait objects.
fn sum_squares_plus<I: Iterator<Item = usize>>(iter: I, x: usize) -> usize {
    let mut ans: usize = 0;
    for i in iter {
        ans += i * i;
    }
    ans + x
}

fn main() {
    // Pretend this is an expensive operation that I don't want to repeat five times
    let small_data: Vec<usize> = (0..10).collect();

    for x in 0..5 {
        println!("{}: {}", x, sum_squares_plus(small_data.iter().cloned(), x));
    }

    // 0..100 is more than 0..10 and I'm only using it once,
    // so I want to 'stream' it instead of storing it all in memory
    let x = 55;
    println!("{}: {}", x, sum_squares_plus(0..100, x));
}
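For completeness, a sketch of the original trait-object signature with the lifetime fix applied (written with the current dyn syntax; otherwise the same program as in the question):

fn sum_squares_plus<'a>(iter: Box<dyn Iterator<Item = usize> + 'a>, x: usize) -> usize {
    let mut ans: usize = 0;
    for i in iter {
        ans += i * i;
    }
    ans + x
}

fn main() {
    let small_data: Vec<usize> = (0..10).collect();

    for x in 0..5 {
        // The boxed iterator borrows small_data; the 'a parameter now allows that.
        let iterbox: Box<dyn Iterator<Item = usize> + '_> =
            Box::new(small_data.iter().cloned());
        println!("{}: {}", x, sum_squares_plus(iterbox, x));
    }

    let x = 55;
    println!("{}: {}", x, sum_squares_plus(Box::new(0..100), x));
}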
I'm looking for the string which occurs most frequently in the second part of the tuple of Vec<(String, Vec<String>)>:
use itertools::Itertools; // 0.8.0

fn main() {
    let edges: Vec<(String, Vec<String>)> = vec![];

    let x = edges
        .iter()
        .flat_map(|x| &x.1)
        .map(|x| &x[..])
        .sorted()
        .group_by(|x| x)
        .max_by_key(|x| x.len());
}
Playground
This:
takes the iterator
flat-maps to the second part of the tuple
turns elements into a &str
sorts it (via itertools)
groups it by string (via itertools)
finds the group with the highest count
This supposedly gives me the group with the most frequently occurring string, except it doesn't compile:
error[E0599]: no method named `max_by_key` found for type `itertools::groupbylazy::GroupBy<&&str, std::vec::IntoIter<&str>, [closure#src/lib.rs:9:19: 9:24]>` in the current scope
--> src/lib.rs:10:10
|
10 | .max_by_key(|x| x.len());
| ^^^^^^^^^^
|
= note: the method `max_by_key` exists but the following trait bounds were not satisfied:
`&mut itertools::groupbylazy::GroupBy<&&str, std::vec::IntoIter<&str>, [closure#src/lib.rs:9:19: 9:24]> : std::iter::Iterator`
I'm totally lost in these types.
You didn't read the documentation for a function you are using; this is not a good idea. From the GroupBy docs:
This type implements IntoIterator (it is not an iterator itself),
because the group iterators need to borrow from this value. It should
be stored in a local variable or temporary and iterated.
Personally, I'd just use a BTreeMap or HashMap:
let mut counts = BTreeMap::new();

for word in edges.iter().flat_map(|x| &x.1) {
    *counts.entry(word).or_insert(0) += 1;
}

let max = counts.into_iter().max_by_key(|&(_, count)| count);
println!("{:?}", max);
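As a self-contained sketch, with made-up sample data (the edges values here are purely illustrative):

use std::collections::BTreeMap;

fn main() {
    let edges: Vec<(String, Vec<String>)> = vec![
        ("a".to_string(), vec!["x".to_string(), "y".to_string()]),
        ("b".to_string(), vec!["y".to_string()]),
    ];

    let mut counts = BTreeMap::new();
    for word in edges.iter().flat_map(|x| &x.1) {
        *counts.entry(word).or_insert(0) += 1;
    }

    // The most frequent word together with its count.
    let max = counts.into_iter().max_by_key(|&(_, count)| count);
    println!("{:?}", max); // Some(("y", 2))
}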
If you really wanted to use the iterators, it could look something like this:
let groups = edges
    .iter()
    .flat_map(|x| &x.1)
    .sorted()
    .group_by(|&x| x);

let max = groups
    .into_iter()
    .map(|(key, group)| (key, group.count()))
    .max_by_key(|&(_, count)| count);
I am trying to measure the speed of Vec's [] indexing vs. .get(index) using the following code:
extern crate time;

fn main() {
    let v = vec![1; 1_000_000];

    let before_rec1 = time::precise_time_ns();
    for (i, v) in (0..v.len()).enumerate() {
        v[i]
    }
    let after_rec1 = time::precise_time_ns();
    println!("Total time: {}", after_rec1 - before_rec1);

    let before_rec2 = time::precise_time_ns();
    for (i, v) in (0..v.len()).enumerate() {
        v.get(i)
    }
    let after_rec2 = time::precise_time_ns();
    println!("Total time: {}", after_rec2 - before_rec2);
}
but this returns the following errors:
error: cannot index a value of type `usize`
--> src/main.rs:8:9
|
8 | v[i]
| ^^^^
error: no method named `get` found for type `usize` in the current scope
--> src/main.rs:17:11
|
17 | v.get(i)
| ^^^
I'm confused why this doesn't work, since enumerate should give me an index which, by its very name, I should be able to use to index the vector.
Why is this error being thrown?
I know I can/should use iteration rather than the C-style way of indexing, but for learning's sake, what do I use to iterate over the index values like I'm trying to do here?
You, pal, are mightily confused here.
fn main() {
    let v = vec![1; 1_000_000];
This v has type Vec<i32>.
for (i, v) in (0..v.len()).enumerate() {
    v[i]
}
You are iterating over a range of indexes, from 0 to v.len(), and using enumerate to generate indices as you go:
This v has type usize
In the loop, v == i, always
So... indeed, the compiler is correct, you cannot use [] on usize.
The program "fixed":
extern crate time;

fn main() {
    let v = vec![1; 1_000_000];

    let before_rec1 = time::precise_time_ns();
    for i in 0..v.len() {
        v[i];
    }
    let after_rec1 = time::precise_time_ns();
    println!("Total time: {}", after_rec1 - before_rec1);

    let before_rec2 = time::precise_time_ns();
    for i in 0..v.len() {
        v.get(i);
    }
    let after_rec2 = time::precise_time_ns();
    println!("Total time: {}", after_rec2 - before_rec2);
}
I would add a disclaimer, though, that if I were a compiler, this useless loop would be optimized into a no-op. If, after compiling with --release, your program reports 0, this is what happened.
Rust has built-in benchmarking support; I advise that you use it rather than going the naive way. And... you will also need to inspect the emitted assembly, which is the only way to make sure that you are measuring what you think you are (optimizing compilers are tricky like that).
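For illustration, a sketch of the built-in bench harness (nightly-only) for this comparison; black_box keeps the optimizer from deleting the loops, and the file name is just a suggestion:

// benches/index_vs_get.rs -- run with `cargo +nightly bench`
#![feature(test)]
extern crate test;

use test::{black_box, Bencher};

fn make_vec() -> Vec<i32> {
    vec![1; 1_000_000]
}

#[bench]
fn bench_index(b: &mut Bencher) {
    let v = make_vec();
    b.iter(|| {
        let mut sum = 0;
        for i in 0..v.len() {
            sum += black_box(v[i]);
        }
        sum
    });
}

#[bench]
fn bench_get(b: &mut Bencher) {
    let v = make_vec();
    b.iter(|| {
        let mut sum = 0;
        for i in 0..v.len() {
            sum += black_box(*v.get(i).unwrap());
        }
        sum
    });
}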
In Go, copying slices is standard-fare and looks like this:
// It will figure out the details to match slice sizes
n = copy(dst[n:], src[:m])
In Rust, I couldn't find a similar method as replacement. Something I came up with looks like this:
fn copy_slice(dst: &mut [u8], src: &[u8]) -> usize {
    let mut c = 0;
    for (&mut d, &s) in dst.iter_mut().zip(src.iter()) {
        d = s;
        c += 1;
    }
    c
}
Unfortunately, I get this compile-error that I am unable to solve:
error[E0384]: re-assignment of immutable variable `d`
--> src/main.rs:4:9
|
3 | for (&mut d, &s) in dst.iter_mut().zip(src.iter()) {
| - first assignment to `d`
4 | d = s;
| ^^^^^ re-assignment of immutable variable
How can I set d? Is there a better way to copy a slice?
Yes: use the method clone_from_slice(); it is generic over any element type that implements Clone.
fn main() {
    let mut x = vec![0; 8];
    let y = [1, 2, 3];

    x[..3].clone_from_slice(&y);

    println!("{:?}", x);
    // Output:
    // [1, 2, 3, 0, 0, 0, 0, 0]
}
The destination x is either a &mut [T] slice, or anything that derefs to that, like a mutable Vec<T> vector. You need to slice the destination and source so that their lengths match.
As of Rust 1.9, you can also use copy_from_slice(). This works the same way, but it uses the Copy trait instead of Clone and is a direct wrapper of memcpy. The compiler can optimize clone_from_slice to be equivalent to copy_from_slice when applicable, but copy_from_slice can still be useful.
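A minimal sketch mirroring the example above, using copy_from_slice:

fn main() {
    let mut x = vec![0u8; 8];
    let y = [1u8, 2, 3];

    // Both slices must have exactly the same length, so slice the destination.
    x[..3].copy_from_slice(&y);

    println!("{:?}", x);
    // Output:
    // [1, 2, 3, 0, 0, 0, 0, 0]
}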
This code works, even though I am not sure if it is the best way to do it.
fn copy_slice(dst: &mut [u8], src: &[u8]) -> usize {
    let mut c = 0;
    for (d, s) in dst.iter_mut().zip(src.iter()) {
        *d = *s;
        c += 1;
    }
    c
}
Apparently not specifying access permissions explicitly did the trick. However, I am still confused about this and my mental model doesn't yet cover what's truly going on there.
My solutions are mostly trial and error when it comes to these things, and I'd rather like to truly understand instead.
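For the record, what changed is the pattern, not an "access permission": the pattern &mut d dereferences the &mut u8 and binds a copy of the byte to the immutable variable d, while a plain d keeps the mutable reference so *d = *s writes through to the slice. A minimal sketch of the distinction (names are illustrative):

fn main() {
    let mut value = 1u8;

    {
        // Pattern `&mut v` dereferences the `&mut u8`, so `v` is just a copy...
        let &mut v = &mut value;
        // ...and `v = 9;` would be rejected (immutable binding), and even with
        // `mut v` it could never change `value` anyway.
        let _ = v;
    }

    // Binding the reference itself lets us write through it.
    let r = &mut value;
    *r = 9;
    assert_eq!(value, 9);
}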
Another variant would be
fn copy_slice(dst: &mut [u8], src: &[u8]) -> usize {
    dst.iter_mut().zip(src).map(|(x, y)| *x = *y).count()
}
Note that you have to use count in this case, since len would use the ExactSizeIterator shortcut and thus never call next, resulting in a no-op.
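Assuming the copy_slice above is in scope, a quick usage sketch confirming both the copy and the returned count:

fn main() {
    let mut dst = [0u8; 4];
    let src = [1u8, 2, 3];

    // `map` here is lazy; it is `count` that actually drives the writes.
    let copied = copy_slice(&mut dst, &src);

    assert_eq!(copied, 3);
    assert_eq!(dst, [1, 2, 3, 0]);
}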