I wrote a function that accepts a slice of single-digit numbers and returns a number.
use num::{FromPrimitive, Num}; // these traits come from the `num` crate
use std::ops::{Add, Mul};

pub fn from_digits<T>(digits: &[T]) -> T
where
    T: Num + FromPrimitive + Add + Mul + Copy,
{
    let mut ret: T = T::zero();
    let ten: T = T::from_i8(10).unwrap();
    for d in digits {
        ret = ret * ten + *d;
    }
    ret
}
For example, from_digits(&vec![1,2,3,4,5]) returns 12345. This seems to work fine.
Now, I want to use this function in other code:
let ret: Vec<Vec<i64>> = digits // &[i64]
    .iter() // Iter<i64>
    .rev() // impl Iterator<Item = i64>
    .permutations(len) // Permutations<Rev<Iter<i64>>>
    .map(|ds| from_digits(&ds)) // <-- ds: Vec<&i64>
    .collect();
The problem is that after permutations(), the type of the closure parameter in map is Vec<&i64>, not Vec<i64>. This causes a compile error, because the expected parameter type is &[T], not &[&T].
I don't understand why the type of ds became Vec<&i64>. I tried to change the from_digits like this:
pub fn from_digits<T>(digits: &[&T]) -> T
...
But I am not sure if this is the correct way to fix the issue. It also causes another problem: I can no longer pass simple data like vec![1,2,3] to the function.
Can you let me know the correct way to fix this?
The problem is that a slice's iter() method returns an iterator over &T, so here over &i64.
The fix is to use the Iterator::copied() or Iterator::cloned() adapter, which converts an iterator over &T into an iterator over T when T: Copy or T: Clone, respectively:
digits.iter().copied().rev() // ...
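Assuming permutations comes from the itertools crate (as the Permutations type in your comment suggests) and from_digits keeps its original &[T] signature, a sketch of the fixed pipeline looks like this; the collected result is then a plain Vec<i64>:
use itertools::Itertools; // for `permutations`

let digits: &[i64] = &[1, 2, 3];
let len = digits.len();
let ret: Vec<i64> = digits
    .iter()
    .copied()          // Iterator<Item = i64> instead of Item = &i64
    .rev()
    .permutations(len) // now yields Vec<i64> instead of Vec<&i64>
    .map(|ds| from_digits(&ds)) // &Vec<i64> coerces to &[i64]
    .collect();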
I modified code found on the internet to create a function that obtains the statistical mode of any Hashable type that implements Eq, but I do not understand some of the syntax. Here is the function:
use std::hash::Hash;
use std::collections::HashMap;

pub fn mode<'a, I, T>(items: I) -> &'a T
where
    I: IntoIterator<Item = &'a T>,
    T: Hash + Clone + Eq,
{
    let mut occurrences: HashMap<&T, usize> = HashMap::new();
    for value in items.into_iter() {
        *occurrences.entry(value).or_insert(0) += 1;
    }
    occurrences
        .into_iter()
        .max_by_key(|&(_, count)| count)
        .map(|(val, _)| val)
        .expect("Cannot compute the mode of zero items")
}
(I think requiring Clone may be overkill.)
The syntax I do not understand is in the closure passed to max_by_key:
|&(_, count)| count
What is the &(_, count) doing? I gather the underscore means I can ignore that part. Is this some sort of destructuring of a tuple in a parameter list? Does this make count a reference to the tuple's second item?
.max_by_key(|&(_, count)| count) is equivalent to .max_by_key(f) where f is this:
fn f<T>(t: &(T, usize)) -> usize {
    (*t).1
}
f() could also be written using pattern matching, like this:
fn f2<T>(&(_, count): &(T, usize)) -> usize {
    count
}
And f2() is much closer to the first closure you're asking about.
The second closure in the function, |(val, _)| val, works essentially the same way, except there is no reference to slightly complicate matters.
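Here is a minimal standalone sketch (not from the original answer) showing that the closure form and the pattern-matching function are interchangeable:
fn f2(&(_, count): &(&str, usize)) -> usize {
    count
}

fn main() {
    let pairs = vec![("a", 1usize), ("b", 3), ("c", 2)];
    // Closure form, exactly as in `mode`:
    let max1 = pairs.clone().into_iter().max_by_key(|&(_, count)| count);
    // Named-function form:
    let max2 = pairs.into_iter().max_by_key(f2);
    assert_eq!(max1, Some(("b", 3)));
    assert_eq!(max1, max2);
}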
I'm trying to write some Rust code to decode GPS data from an SDR receiver. I'm reading samples in from a file and converting the binary data to a series of complex numbers, which is a time-consuming process. However, there are times when I want to stream samples in without keeping them in memory (e.g. one very large file processed only one way or samples directly from the receiver) and other times when I want to keep the whole data set in memory (e.g. one small file processed in multiple different ways) to avoid repeating the work of parsing the binary file.
Therefore, I want to write functions or structs with iterators to be as general as possible, but I know they aren't sized, so I need to put them in a Box. I would have expected something like this to work.
This is the simplest example I could come up with to demonstrate the same basic problem.
fn sum_squares_plus(iter: Box<Iterator<Item = usize>>, x: usize) -> usize {
    let mut ans: usize = 0;
    for i in iter {
        ans += i * i;
    }
    ans + x
}

fn main() {
    // Pretend this is an expensive operation that I don't want to repeat five times
    let small_data: Vec<usize> = (0..10).collect();
    for x in 0..5 {
        // Want to iterate over immutable references to the elements of small_data
        let iterbox: Box<Iterator<Item = usize>> = Box::new(small_data.iter());
        println!("{}: {}", x, sum_squares_plus(iterbox, x));
    }

    // 0..100 is more than 0..10 and I'm only using it once,
    // so I want to 'stream' it instead of storing it all in memory
    let x = 55;
    println!("{}: {}", x, sum_squares_plus(Box::new(0..100), x));
}
I've tried several different variants of this, but none seem to work. In this particular case, I'm getting
error[E0271]: type mismatch resolving `<std::slice::Iter<'_, usize> as std::iter::Iterator>::Item == usize`
--> src/main.rs:15:52
|
15 | let iterbox: Box<Iterator<Item = usize>> = Box::new(small_data.iter());
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected reference, found usize
|
= note: expected type `&usize`
found type `usize`
= note: required for the cast to the object type `dyn std::iter::Iterator<Item = usize>`
I'm not worried about concurrency and I'd be happy to just get it working sequentially on a single thread, but a concurrent solution would be a nice bonus.
The current error you're running into is here:
let iterbox: Box<Iterator<Item = usize>> = Box::new(small_data.iter());
You're declaring that you want an iterator that returns usize items, but small_data.iter() is an iterator that returns references to usize items (&usize). That's why you get the error "expected reference, found usize". usize is a small, cloneable type, so you can simply use the .cloned() iterator adapter to get an iterator that actually returns usize.
let iterbox: Box<Iterator<Item = usize>> = Box::new(small_data.iter().cloned());
Once you're past that hurdle, the next problem is that the iterator returned over small_data holds a reference to small_data. Since sum_squares_plus is defined to accept a Box<Iterator<Item = usize>>, that signature implies the Iterator trait object inside the box has a 'static lifetime. The iterator you're providing does not, because it borrows small_data. To fix that, you need to adjust the sum_squares_plus definition to
fn sum_squares_plus<'a>(iter: Box<Iterator<Item = usize> + 'a>, x: usize) -> usize
Note the 'a lifetime annotations. The code should then compile. However, unless there are constraints beyond what's shown here, a more idiomatic and efficient approach is to avoid trait objects and the associated allocations altogether. The code below uses static dispatch, without any trait objects.
fn sum_squares_plus<I: Iterator<Item = usize>>(iter: I, x: usize) -> usize {
    let mut ans: usize = 0;
    for i in iter {
        ans += i * i;
    }
    ans + x
}

fn main() {
    // Pretend this is an expensive operation that I don't want to repeat five times
    let small_data: Vec<usize> = (0..10).collect();
    for x in 0..5 {
        println!("{}: {}", x, sum_squares_plus(small_data.iter().cloned(), x));
    }

    // 0..100 is more than 0..10 and I'm only using it once,
    // so I want to 'stream' it instead of storing it all in memory
    let x = 55;
    println!("{}: {}", x, sum_squares_plus(0..100, x));
}
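For completeness, here is a sketch of the boxed, dynamically dispatched variant with the 'a lifetime applied. It is written with the dyn Iterator syntax that current Rust requires; otherwise it follows the question's code:
fn sum_squares_plus<'a>(iter: Box<dyn Iterator<Item = usize> + 'a>, x: usize) -> usize {
    let mut ans: usize = 0;
    for i in iter {
        ans += i * i;
    }
    ans + x
}

fn main() {
    let small_data: Vec<usize> = (0..10).collect();
    for x in 0..5 {
        // The boxed iterator borrows `small_data`; the `'a` bound now permits that.
        let iterbox: Box<dyn Iterator<Item = usize> + '_> = Box::new(small_data.iter().cloned());
        println!("{}: {}", x, sum_squares_plus(iterbox, x));
    }
    let x = 55;
    println!("{}: {}", x, sum_squares_plus(Box::new(0..100), x));
}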
I have an ASCII string slice and I need to compute the sum of all characters when seen as bytes.
let word = "Hello, World";
let sum = word.as_bytes().iter().sum::<u8>();
I need to specify the type for sum, otherwise the code will not compile. The problem is that u8 is too small a type, and if the sum overflows the program will panic.
I'd like to avoid that, but I cannot find a way to specify a bigger type such as u16 or u32 for example, when using sum().
I may try to use fold(), but I was wondering if there is a way to use sum() by specifying another type.
let sum = word.as_bytes().iter().fold(0u32, |acc, x| acc + *x as u32);
You can use map to cast each byte to a bigger type:
let sum: u32 = word.as_bytes().iter().map(|&b| b as u32).sum();
or
let sum: u32 = word.as_bytes().iter().cloned().map(u32::from).sum();
The reason you can't sum into a u32 with your original attempt is that the Sum trait, which provides sum, has the following definition:
pub trait Sum<A = Self> {
    fn sum<I>(iter: I) -> Self
    where
        I: Iterator<Item = A>;
}
This means that by default sum returns the same type as the items of the iterator it is called on. You can see that this is the case for u8 by looking at its implementation of Sum:
fn sum<I>(iter: I) -> u8
where
    I: Iterator<Item = u8>,
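As a small runnable check (not part of the original answer) that widening each byte before summing avoids the u8 overflow:
fn main() {
    let word = "Hello, World";
    // Widen each byte to u32 before summing, so the accumulator cannot overflow here.
    let sum: u32 = word.as_bytes().iter().map(|&b| u32::from(b)).sum();
    // The fold version from the question gives the same result.
    let sum2 = word.as_bytes().iter().fold(0u32, |acc, &b| acc + u32::from(b));
    assert_eq!(sum, sum2);
    assert!(sum > u32::from(u8::MAX)); // already too large to fit in a u8
}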
This solution seems rather inelegant:
fn parse_range(&self, string_value: &str) -> Vec<u8> {
    let values: Vec<u8> = string_value
        .splitn(2, "-")
        .map(|part| part.parse().ok().unwrap())
        .collect();
    { values[0]..(values[1] + 1) }.collect()
}
Since splitn(2, "-") returns exactly two results for any valid string_value, it would be better to assign the tuple directly to two variables first and last rather than a seemingly arbitrary-length Vec. I can't seem to do this with a tuple.
There are two instances of collect(), and I wonder if it can be reduced to one (or even zero).
Trivial implementation
fn parse_range(string_value: &str) -> Vec<u8> {
    let pos = string_value.find(|c| c == '-').expect("No valid string");
    let (first, second) = string_value.split_at(pos);
    let first: u8 = first.parse().expect("Not a number");
    let second: u8 = second[1..].parse().expect("Not a number");
    { first..second + 1 }.collect()
}
I would recommend returning a Result<Vec<u8>, Error> instead of panicking with expect/unwrap.
Nightly implementation
My next thought was about the second collect. Here is a version that uses nightly features, where you don't need any collect at all.
#![feature(conservative_impl_trait, inclusive_range_syntax)]

fn parse_range(string_value: &str) -> impl Iterator<Item = u8> {
    let pos = string_value.find(|c| c == '-').expect("No valid string");
    let (first, second) = string_value.split_at(pos);
    let first: u8 = first.parse().expect("Not a number");
    let second: u8 = second[1..].parse().expect("Not a number");
    first..=second
}

fn main() {
    println!("{:?}", parse_range("3-7").collect::<Vec<u8>>());
}
Instead of calling collect the first time, just advance the iterator:
let mut values = string_value
    .splitn(2, "-")
    .map(|part| part.parse().unwrap());
let start = values.next().unwrap();
let end = values.next().unwrap();
Do not call .ok().unwrap(): that converts the Result, which carries useful error information, into an Option, which carries none. Just call unwrap directly on the Result.
As already mentioned, if you want to return a Vec, you'll want to call collect to create it. If you want to return an iterator, you can. It's not bad even in stable Rust:
fn parse_range(string_value: &str) -> std::ops::Range<u8> {
    let mut values = string_value
        .splitn(2, "-")
        .map(|part| part.parse().unwrap());
    let start = values.next().unwrap();
    let end = values.next().unwrap();
    start..end + 1
}

fn main() {
    assert!(parse_range("1-5").eq(1..6));
}
Sadly, inclusive ranges are not yet stable, so you'll need to continue to use +1 or switch to nightly.
Since splitn(2, "-") returns exactly two results for any valid string_value, it would be better to assign the tuple directly to two variables first and last rather than a seemingly arbitrary-length Vec. I can't seem to do this with a tuple.
This is not possible with Rust's type system. You are asking for dependent types, a way for runtime values to interact with the type system. You'd want splitn to return a (&str, &str) for a value of 2 and a (&str, &str, &str) for a value of 3. That gets even more complicated when the argument is a variable, especially when it's set at run time.
The closest workaround would be to have a runtime check that there are no more values:
assert!(values.next().is_none());
Such a check doesn't feel valuable to me.
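For illustration only, in context the check would sit right after the two next() calls from the snippet above:
let mut values = "3-7".splitn(2, "-").map(|part| part.parse::<u8>().unwrap());
let start = values.next().unwrap();
let end = values.next().unwrap();
assert!(values.next().is_none()); // splitn(2, ..) can never yield a third piece
assert_eq!((start, end), (3, 7));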
See also:
What is the correct way to return an Iterator (or any other trait)?
How do I include the end value in a range?
fn increment(number: &mut int) {
    // this fails with: binary operation `+` cannot be applied to type `&mut int`
    //let foo = number + number;
    let foo = number.add(number);
    println!("{}", foo);
}

fn main() {
    let mut test = 5;
    increment(&mut test);
    println!("{}", test);
}
Why does number + number fail but number.add(number) works?
As a bonus question: The above prints out
10
5
Am I right to assume that test is still 5 because the data is copied over to increment? The only way that the original test variable could be mutated by the increment function would be if it was sent as Box<int>, right?
number + number fails because they're two references, not two ints. The compiler also tells us why: the + operator isn't implemented for the type &mut int.
You have to dereference with the dereference operator * to get at the int. This would work: let sum = *number + *number;
number.add(number); works because the signature of add is fn add(&self, &int) -> int;
Am I right to assume that test is still 5 because the data is copied over to increment? The only way that the original test variable could be mutated by the increment function would be if it was sent as Box<int>, right?
test is not copied over; it is still 5 because it's never actually mutated. You could mutate it in the increment function if you wanted.
PS: to mutate it
fn increment(number: &mut int) {
    *number = *number + *number;
    println!("{}", number);
}
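For reference, here is the same fix in current Rust, where the int type no longer exists (i32 is used here as a stand-in):
fn increment(number: &mut i32) {
    *number = *number + *number; // dereference to reach the value behind the &mut
    println!("{}", number);
}

fn main() {
    let mut test = 5;
    increment(&mut test);
    println!("{}", test); // prints 10 now, because increment mutated it through the reference
}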