I'm trying to retrieve a key from a BTreeMap and manipulate it in subsequent code.
use std::collections::BTreeMap;
fn main() {
let mut map: BTreeMap<u64, u64> = BTreeMap::new();
map.entry(0).or_insert(0);
// It seems like this should work:
let y = map[0] + 1; // expected reference, found integral variable
// Other things I've tried:
let value_at_0 = map[0]; // expected reference, found integral variable
let ref value_at_0 = map[0]; // expected reference, found integral variable
let y = value_at_0 + 1;
let y = (&map[0]) + 1; // expected reference, found integral variable
let y = (&mut map[0]) + 1; // binary operation `+` cannot be applied to type `&mut u64`
let y = (*map[0]) + 1; // type `u64` cannot be dereferenced
println!("{}", y);
}
The error is particularly confusing, since I would think an integral variable would be precisely the kind of thing you could add 1 to.
To show what I would like this code to do, here is how this would be implemented in Python:
>>> map = {}
>>> map.setdefault(0, 0)
0
>>> y = map[0] + 1
>>> print(y)
1
For SEO purposes, since my Googling failed, the originating error in somewhat more complex code is expected reference, found u64
For reference, the full compilation error is:
error[E0308]: mismatched types
--> ./map_weirdness.rs:8:15
|
8 | let y = map[0] + 1; // expected reference, found integral variable
| ^^^^^^ expected reference, found integral variable
|
= note: expected type `&u64`
= note: found type `{integer}`
The bug was in what was being passed to [], though the error highlighted all of map[0], suggesting that the problem was with the type of the value map[0] produces when it was actually in how that value was computed: BTreeMap's Index implementation takes the key by reference. The correct implementation needs to pass a reference to [] as follows:
use std::collections::BTreeMap;
fn main() {
let mut map: BTreeMap<u64, u64> = BTreeMap::new();
map.entry(0).or_insert(0);
let y = map[&0] + 1;
println!("{}", y);
}
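An alternative, if the panic-on-missing-key behavior of [] is undesirable: get also takes the key by reference, but returns an Option so the missing-key case is handled explicitly. A small sketch:

```rust
use std::collections::BTreeMap;

fn main() {
    let mut map: BTreeMap<u64, u64> = BTreeMap::new();
    map.entry(0).or_insert(0);

    // `get` takes &u64 like Index does, but returns Option<&u64>
    // instead of panicking when the key is absent.
    let y = map.get(&0).map(|v| v + 1).unwrap_or(0);
    println!("{}", y); // prints 1
}
```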
I wanted to implement a function computing the number of digits within any generic type of integer. Here is the code I came up with:
extern crate num;
use num::Integer;
fn int_length<T: Integer>(mut x: T) -> u8 {
if x == 0 {
return 1;
}
let mut length = 0u8;
if x < 0 {
length += 1;
x = -x;
}
while x > 0 {
x /= 10;
length += 1;
}
length
}
fn main() {
println!("{}", int_length(45));
println!("{}", int_length(-45));
}
And here is the compiler output
error[E0308]: mismatched types
--> src/main.rs:5:13
|
5 | if x == 0 {
| ^ expected type parameter, found integral variable
|
= note: expected type `T`
found type `{integer}`
error[E0308]: mismatched types
--> src/main.rs:10:12
|
10 | if x < 0 {
| ^ expected type parameter, found integral variable
|
= note: expected type `T`
found type `{integer}`
error: cannot apply unary operator `-` to type `T`
--> src/main.rs:12:13
|
12 | x = -x;
| ^^
error[E0308]: mismatched types
--> src/main.rs:15:15
|
15 | while x > 0 {
| ^ expected type parameter, found integral variable
|
= note: expected type `T`
found type `{integer}`
error[E0368]: binary assignment operation `/=` cannot be applied to type `T`
--> src/main.rs:16:9
|
16 | x /= 10;
| ^ cannot use `/=` on type `T`
I understand that the problem comes from my use of constants within the function, but I don't understand why the trait specification as Integer doesn't solve this.
The documentation for Integer says it implements the PartialOrd, etc. traits with Self (which I assume refers to Integer). By using integer constants which also implement the Integer trait, aren't the operations defined, and shouldn't the compiler compile without errors?
I tried suffixing my constants with i32, but the error message is the same, replacing _ with i32.
Many things are going wrong here:
As Shepmaster says, 0 and 1 cannot be converted to everything implementing Integer. Use Zero::zero and One::one instead.
10 can definitely not be converted to anything implementing Integer, you need to use NumCast for that
a /= b is not sugar for a = a / b; it is a separate trait (DivAssign) that Integer does not require.
-x is a unary operation which is not part of Integer but requires the Neg trait (since it only makes sense for signed types).
Here's an implementation. Note that you need the bound Neg<Output = T> to make sure that negating a T yields a T:
extern crate num;
use num::{Integer, NumCast};
use std::ops::Neg;
fn int_length<T>(mut x: T) -> u8
where
T: Integer + Neg<Output = T> + NumCast,
{
if x == T::zero() {
return 1;
}
let mut length = 0;
if x < T::zero() {
length += 1;
x = -x;
}
while x > T::zero() {
x = x / NumCast::from(10).unwrap();
length += 1;
}
length
}
fn main() {
println!("{}", int_length(45));
println!("{}", int_length(-45));
}
The problem is that the Integer trait can be implemented by anything. For example, you could choose to implement it on your own struct! There wouldn't be a way to convert the literal 0 or 1 to your struct. I'm too lazy to show an example of implementing it, because there's 10 or so methods. ^_^
num::Zero and num::One
This is why Zero::zero and One::one exist. You can (very annoyingly) create all the other constants from repeated calls to those.
use num::{One, Zero}; // 0.4.0
fn three<T>() -> T
where
T: Zero + One,
{
let mut three = Zero::zero();
for _ in 0..3 {
three = three + One::one();
}
three
}
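To see the same pattern without pulling in the num crate, here is a dependency-free sketch with hand-rolled stand-ins for Zero and One (the traits and the i32 impls below are illustrative, not num's actual definitions):

```rust
use std::ops::Add;

// Minimal stand-ins for num::Zero and num::One, so the sketch
// compiles without the external crate.
trait Zero { fn zero() -> Self; }
trait One { fn one() -> Self; }

impl Zero for i32 { fn zero() -> Self { 0 } }
impl One for i32 { fn one() -> Self { 1 } }

// Build the constant 3 from repeated additions of one().
fn three<T: Zero + One + Add<Output = T>>() -> T {
    let mut three = T::zero();
    for _ in 0..3 {
        three = three + T::one();
    }
    three
}

fn main() {
    println!("{}", three::<i32>()); // prints 3
}
```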
From and Into
You can also use the From and Into traits to convert to your generic type:
use num::Integer; // 0.4.0
use std::ops::{DivAssign, Neg};
fn int_length<T>(mut x: T) -> u8
where
T: Integer + Neg<Output = T> + DivAssign,
u8: Into<T>,
{
let zero = 0.into();
if x == zero {
return 1;
}
let mut length = 0u8;
if x < zero {
length += 1;
x = -x;
}
while x > zero {
x /= 10.into();
length += 1;
}
length
}
fn main() {
println!("{}", int_length(45));
println!("{}", int_length(-45));
}
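For comparison, the same shape can be written without num at all, trading the Integer bound for plain std bounds. An assumption of this sketch: T: From<u8>, which holds for i32 and i64 but not for i8 or i16, so it is strictly narrower than the num version.

```rust
use std::ops::{DivAssign, Neg};

// Dependency-free variant: PartialOrd (which implies PartialEq),
// Neg, DivAssign and From<u8> stand in for num's Integer.
fn int_length<T>(mut x: T) -> u8
where
    T: PartialOrd + Neg<Output = T> + DivAssign + From<u8>,
{
    let zero = T::from(0);
    if x == zero {
        return 1;
    }
    let mut length = 0u8;
    if x < zero {
        length += 1; // one extra character for the minus sign
        x = -x;
    }
    while x > zero {
        x /= T::from(10);
        length += 1;
    }
    length
}

fn main() {
    println!("{}", int_length(45i32));  // 2
    println!("{}", int_length(-45i32)); // 3
}
```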
See also:
How do I use floating point number literals when using generic types?
I am attempting to generate a Vec<(Point, f64)>:
let grid_size = 5;
let points_in_grid = (0..grid_size).flat_map(|x| {
(0..grid_size)
.map(|y| Point::new(f64::from(x), f64::from(y)))
.collect::<Vec<Point>>()
});
let origin = Point::origin();
let points_and_distances = points_in_grid
.map(|point| (point, point.distance_to(&origin)))
.collect::<Vec<(Point, f64)>>();
I get the following error:
use of moved value: point
I understand that I cannot use point in both elements of the tuple, but when I attempt to store a reference, I get an error regarding lifetime.
I am presuming your Point struct looks like the following:
#[derive(Debug)]
struct Point(f64, f64);
impl Point {
fn new(x: f64, y: f64) -> Self { Point(x, y) }
fn origin() -> Self { Point(0.,0.) }
fn distance_to(&self, other: &Point) -> f64 {
((other.0 - self.0).powi(2) + (other.1 - self.1).powi(2)).sqrt()
}
}
Now let's look at an even simpler example that will not compile:
let x = Point::new(2.5, 1.0);
let y = x;
let d = x.distance_to(&y);
Which gives the error:
error[E0382]: use of moved value: `x`
--> <anon>:15:13
|
14 | let y = x;
| - value moved here
15 | let d = x.distance_to(&y);
| ^ value used here after move
|
= note: move occurs because `x` has type `Point`, which does not implement the `Copy` trait
Because x has been moved into y, it now can't have a reference taken in order to call the distance_to function.
The important thing to note here is that order matters - if we swap the lines over we can call distance_to by borrowing x, the borrow will end and then x can be moved into y.
let x = Point::new(2.5, 1.0);
let d = x.distance_to(&x); // the borrow of x ends here
let y = x; // compiles: x can now be moved
In your case, a very similar thing is happening when constructing the tuple. point gets moved into the tuple, and then tries to borrow it to form the second element. The simplest solution is to do the same thing as here: swap the order of the elements of the tuple.
let points_and_distances = points_in_grid
.map(|point| (point.distance_to(&origin), point))
.collect::<Vec<(f64, Point)>>(); // compiles
Playground link
N.B. if you want to retain the order:
.map(|(a, b)| (b, a))
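Another option that keeps the original (Point, f64) order is to bind the distance to a local first, so the borrow of point ends before point is moved into the tuple. A sketch, assuming the Point definition above:

```rust
#[derive(Debug)]
struct Point(f64, f64);

impl Point {
    fn new(x: f64, y: f64) -> Self { Point(x, y) }
    fn origin() -> Self { Point(0., 0.) }
    fn distance_to(&self, other: &Point) -> f64 {
        ((other.0 - self.0).powi(2) + (other.1 - self.1).powi(2)).sqrt()
    }
}

fn main() {
    let grid_size = 5;
    let points_in_grid = (0..grid_size).flat_map(|x| {
        (0..grid_size).map(move |y| Point::new(f64::from(x), f64::from(y)))
    });
    let origin = Point::origin();
    let points_and_distances: Vec<(Point, f64)> = points_in_grid
        .map(|point| {
            // The borrow for distance_to ends here, before the move.
            let d = point.distance_to(&origin);
            (point, d)
        })
        .collect();
    println!("{}", points_and_distances.len()); // prints 25
}
```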
In this example, the compiler can not infer the matrix type:
type Mat4x4<T> = [T; 16];
fn main() {
let m: Mat4x4 = [0.4323f32; 16];
println!("{:?}", m);
}
The working code is:
type Mat4x4<T> = [T; 16];
fn main() {
let m: Mat4x4<f32> = [0.4323f32; 16];
println!("{:?}", m);
}
Is this an expected act?
This is not a type inference issue:
type Mat4x4<T> = [T; 16];
fn main() {
let m: Mat4x4 = [0.4323f32; 16];
println!("{:?}", m);
}
Yields the following error message:
error[E0107]: wrong number of type arguments: expected 1, found 0
--> src/main.rs:4:12
|
4 | let m: Mat4x4 = [0.4323f32; 16];
| ^^^^^^ expected 1 type argument
The complaint here is that Mat4x4 is not a type, it's a template or blueprint to create a type.
An analogy would be that Mat4x4 is a waffle iron, and Mat4x4<f32> is a waffle that comes out of it. If you are served the waffle iron (with maple syrup on top, of course) you will likely be disappointed!
The same applies here: when you give the compiler the blueprint where it expects the final product, it signals you that it was not what it expected.
You cannot omit required type parameters, but you can supply a dummy argument (_) and it will be inferred:
let m: Mat4x4<_> = [0.4323f32; 16];
Alternatively, you could add a default type parameter so the <…> can be omitted when T is exactly f32. Note that this is still not type inference: for any other element type you must spell it out, e.g. Mat4x4<f64>.
type Mat4x4<T = f32> = [T; 16];
let m: Mat4x4 = [0.4323f32; 16];
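As a runnable sketch of the default-parameter variant, showing both the bare alias and an explicit instantiation:

```rust
// The default type parameter makes the bare alias name usable for the
// common case; any other element type still has to be written out.
type Mat4x4<T = f32> = [T; 16];

fn main() {
    let m: Mat4x4 = [0.4323f32; 16]; // T defaults to f32
    let d: Mat4x4<f64> = [0.25; 16]; // explicit for anything else
    println!("{:?}", (m[0], d[0]));
}
```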
This is as simple as it gets, yet I have no clue why it doesn't work.
fn main() {
let vector = vec![("foo".to_string(), "bar".to_string())];
let string = vector[0].0 + vector[0].1;
}
Error
src/main.rs:3:29: 3:40 error: mismatched types:
expected `&str`,
found `collections::string::String`
(expected &-ptr,
found struct `collections::string::String`) [E0308]
src/main.rs:3 let string = vector[0].0 + vector[0].1;
^~~~~~~~~~~
So then I change it to this:
fn main() {
let vector = vec![("foo".to_string(), "bar".to_string())];
let string = &*vector[0].0 + &*vector[0].1;
}
Get another error
src/main.rs:3:15: 3:28 error: binary operation `+` cannot be applied to type `&str` [E0369]
src/main.rs:3 let string = &*vector[0].0 + &*vector[0].1;
^~~~~~~~~~~~~
src/main.rs:3:15: 3:28 help: run `rustc --explain E0369` to see a detailed explanation
src/main.rs:3:15: 3:28 note: an implementation of `std::ops::Add` might be missing for `&str`
src/main.rs:3 let string = &*vector[0].0 + &*vector[0].1;
^~~~~~~~~~~~~
I've exhausted all the combinations I could think of. What am I missing here?
This does not work because concatenation is defined only on String, and it consumes its left operand:
let s = "hello ".to_string();
let c = s + "world";
println!("{}", c); // hello world
println!("{}", s); // compilation error
Therefore it needs by-value access to the string, but that cannot be obtained by indexing into a vector - indexing can only return references into the vector, not values.
There are several ways to overcome this, for example, you can clone the string:
let string = vector[0].0.clone() + &vector[0].1;
Or you can use formatting:
let string = format!("{}{}", vector[0].0, vector[0].1);
Or you can take the value out of the vector with remove() or swap_remove():
let string = match vector.swap_remove(0) {
(left, right) => left + &right
};
The latter, naturally, is appropriate only if it's okay for you to modify the vector. If you want to do this for the whole vector, it is better to iterate over it by value, consuming it in the process:
for (left, right) in vector {
let string = left + &right;
}
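Put together as a complete example that consumes the vector (note left + &right: String's Add implementation takes its right-hand operand as &str):

```rust
fn main() {
    let vector = vec![
        ("foo".to_string(), "bar".to_string()),
        ("baz".to_string(), "qux".to_string()),
    ];
    // Iterating by value moves each tuple out of the vector, so
    // `left` can be consumed by `+`, which needs its right operand
    // as a &str.
    let joined: Vec<String> = vector
        .into_iter()
        .map(|(left, right)| left + &right)
        .collect();
    println!("{:?}", joined); // ["foobar", "bazqux"]
}
```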
For a given set of iterators a, b, c, one can chain them successfully with a.chain(b).chain(c). Since the CLI utility I am trying to write receives a vector of paths (strings, --dirs "a/b/c" "d/e/f" ...), I would like to use walk_dir on each of them and then chain the results together. My first thought is:
fn main() {
let a = 0..3;
let b = 3..6;
let c = 6..9;
let v = vec![b, c];
v.iter().cloned().fold(a, |acc, e| acc.chain(e));
}
http://is.gd/hfNQd2, returns
<anon>:6:40: 6:52 error: mismatched types:
expected `core::ops::Range<_>`,
found `core::iter::Chain<core::ops::Range<_>, core::ops::Range<_>>`
(expected struct `core::ops::Range`,
found struct `core::iter::Chain`) [E0308]
<anon>:6 v.iter().cloned().fold(a, |acc, e| acc.chain(e));
Another attempt http://is.gd/ZKdxZM, although a.chain(b).chain(c) works.
Use flat_map:
fn main() {
let a = 0..3;
let b = 3..6;
let c = 6..9;
let v = vec![a, b, c];
let chained: Vec<_> = v.iter().flat_map(|it| it.clone()).collect();
println!("{:?}", chained); // [0, 1, 2, 3, 4, 5, 6, 7, 8]
}
As the error message states, the type of a Range is different from the type of Chain<Range, Range>, and the type of the accumulator in the call to fold must stay consistent. Otherwise, what would the return type of the fold be if there were no items in the vector?
The simplest solution is to use a trait object, specifically Box<Iterator>:
type MyIter = Box<Iterator<Item=i32>>;
fn main() {
let a = 0..3;
let b = 3..6;
let c = 6..9;
let v = vec![b, c];
let z = v.into_iter().fold(Box::new(a) as MyIter, |acc, e| {
Box::new(acc.chain(Box::new(e) as MyIter)) as MyIter
});
for i in z {
println!("{}", i);
}
}
This adds a level of indirection but unifies the two concrete types (Range, Chain) as a single type.
A potentially more efficient but longer-to-type version would be to create an enum that represented either a Range or a Chain, and then implement Iterator for that new type.
Actually, I don't think the enum would work as it would require a recursive definition, which is not allowed.
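For what it's worth, the boxed fold can also start from an empty iterator, which avoids treating the first range specially. A sketch using the modern Box<dyn Iterator> spelling:

```rust
fn main() {
    let v = vec![0..3, 3..6, 6..9];
    // Start the fold from an empty boxed iterator and chain every
    // range onto it; each step is re-boxed so the accumulator keeps
    // a single type.
    let z = v.into_iter().fold(
        Box::new(std::iter::empty()) as Box<dyn Iterator<Item = i32>>,
        |acc, e| Box::new(acc.chain(e)),
    );
    let all: Vec<i32> = z.collect();
    println!("{:?}", all); // [0, 1, 2, 3, 4, 5, 6, 7, 8]
}
```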