I have the following data structure. When I do a binary search on it and pass in a &limit_price, it only matches if the values in size and orders match as well. Because I am ignoring those fields for PartialEq, I would assume it should only match on the price field.
Am I missing something?
use derivative::Derivative;
use ordered_float::OrderedFloat;

#[derive(Derivative)]
#[derivative(Hash, PartialEq, Eq, PartialOrd, Ord, Debug)]
struct LimitPrice {
    price: OrderedFloat<f32>,
    #[derivative(Hash = "ignore", PartialEq = "ignore", PartialOrd = "ignore")]
    size: OrderedFloat<f32>,
    #[derivative(Hash = "ignore", PartialEq = "ignore", PartialOrd = "ignore")]
    orders: Vec<Order>,
}
A reproducible example shows that only Ord is the problem: the derived Ord still compares every field.
Any time you implement Ord and PartialOrd, you need to ensure that the implementations agree. With #[derive] this is automatic, but because derivative allows you to skip fields it also makes it possible to get Ord and PartialOrd that don't agree with each other. In this code, it would make sense for them to both have the same "ignore" annotation.
#[derivative(Hash = "ignore", PartialEq = "ignore", PartialOrd = "ignore", Ord = "ignore")]
//                                                                         ^^^^^^^^^^^^^^
Ord and PartialOrd have a different relationship than Eq and PartialEq. Eq is a marker trait – it has no behavior of its own, only a contract about the behavior of PartialEq. Ord, on the other hand, carries both a promise about the behavior of PartialOrd (that partial_cmp always returns Some) and its own behavior (a function that returns an Ordering directly). This is why you don't need an "ignore" annotation for Eq but you do need one for Ord.
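To see it end to end, here is a minimal sketch (the Order stand-in and the example prices are mine; it assumes the derivative and ordered_float crates): once Ord also ignores size and orders, binary_search matches on price alone.

use derivative::Derivative;
use ordered_float::OrderedFloat;

#[derive(Debug)]
struct Order; // hypothetical stand-in for the question's Order type

#[derive(Derivative)]
#[derivative(Hash, PartialEq, Eq, PartialOrd, Ord, Debug)]
struct LimitPrice {
    price: OrderedFloat<f32>,
    #[derivative(Hash = "ignore", PartialEq = "ignore", PartialOrd = "ignore", Ord = "ignore")]
    size: OrderedFloat<f32>,
    #[derivative(Hash = "ignore", PartialEq = "ignore", PartialOrd = "ignore", Ord = "ignore")]
    orders: Vec<Order>,
}

fn main() {
    // Sorted by price, with differing sizes.
    let book = vec![
        LimitPrice { price: OrderedFloat(1.0), size: OrderedFloat(10.0), orders: vec![] },
        LimitPrice { price: OrderedFloat(2.0), size: OrderedFloat(20.0), orders: vec![] },
    ];
    // The probe has a different size, but binary_search still finds the
    // entry, because comparison now looks at price alone.
    let probe = LimitPrice { price: OrderedFloat(2.0), size: OrderedFloat(0.0), orders: vec![] };
    assert_eq!(book.binary_search(&probe), Ok(1));
}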
In the following example code the trait Foo requires that the associated type X implement the Clone trait.
When using the impl Foo<X = Baz> syntax in the do_it function signature, cargo check does not complain that Baz does not implement the Clone trait.
However, cargo check does complain about this issue in the impl Foo for Bar block.
I would have expected impl Foo<X = Baz> to complain in the same way.
trait Foo {
    type X: Clone;
}

struct Bar;
struct Baz;

impl Foo for Bar {
    type X = Baz; // <- complains Baz does not impl Clone trait
}

fn do_it(foo: impl Foo<X = Baz>) {} // <- does not complain
This is not the case if X is a generic parameter. In that case, cargo check indicates that the Clone trait bound is not satisfied by foo: impl Foo<Baz>:
trait Foo<X>
where
    X: Clone,
{
}

struct Bar;
struct Baz;

fn do_it(foo: impl Foo<Baz>) {} // <- complains Baz does not impl Clone trait
Is this intended behavior and if so why?
This is described in the RFC introducing associated types, in a short sentence:
The BOUNDS and WHERE_CLAUSE on associated types are obligations for the implementor of the trait, and assumptions for users of the trait:
trait Graph {
    type N: Show + Hash;
    type E: Show + Hash;
    ...
}

impl Graph for MyGraph {
    // Both MyNode and MyEdge must implement Show and Hash
    type N = MyNode;
    type E = MyEdge;
    ...
}

fn print_nodes<G: Graph>(g: &G) {
    // here, can assume G::N implements Show
    ...
}
What this means is that the one responsible for proving that the bounds hold is not the user of the trait (do_it() in your example) but the implementor of the trait. This is in contrast to generic parameters of traits, where the proof obligation is on the user.
The difference should be clear when you look at it: with generic parameters, the types are foreign and unknown inside the trait implementation, so it must assume the bounds hold. The user of the trait, on the other hand, has concrete types for them (even if they're themselves generic, they're still concrete types from the point of view of the trait), and so it should prove that the bounds hold. With associated types, the story is reversed: the implementor knows the concrete type, while the user assumes a generic type (even if, as in your code, it constrains them to a specific type, in the general case it is still unknown).
Note that where bounds on associated types (type Foo where Self::Foo: Clone) behave differently again from normal associated type bounds: the user has to prove them too (I think both sides need to prove them, but I'm not sure). The RFC I linked does propose such bounds, but as far as I know that part was never implemented then; they were eventually implemented as part of generic associated types (GATs) with different semantics. This is because they're expected to be used with generic parameters on associated types, so they behave like generic parameters on traits, or where clauses on them.
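To make the division of labor concrete, here is a small sketch (the duplicate function is a hypothetical user of the trait):

trait Foo {
    type X: Clone;
}

struct Bar;

impl Foo for Bar {
    type X = String; // the implementor must prove String: Clone here (it holds)
}

// The user gets to *assume* the bound: no `F::X: Clone` clause is needed,
// because every implementor of Foo already had to prove it.
fn duplicate<F: Foo>(x: F::X) -> (F::X, F::X) {
    (x.clone(), x)
}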
I want to use a tuple struct as the key of a HashMap. The tuple struct has exactly one component, which is a u32. The struct itself does not implement Hash, so it can't be used directly as a key. However, I can always use the underlying u32 as the key.
use std::collections::HashMap;

struct St(u32); // defined in a crate not owned by me

let s = St(1);
let mut m = HashMap::<u32, i32>::new();
m.insert(s.0, 2);
Question: is there a way to avoid hard-coding the u32 in the HashMap declaration and instead use the actual component type of St, so that if St changes it to something like isize, the code still works? Something like C++'s decltype:
struct St(isize); // the crate changes this
let mut m = HashMap::<decltype(St.0), i32>::new();
Rust has no equivalent of C++'s decltype, but you can always make a type alias that is used in both places:
type StImpl = u32;
struct St(StImpl);
let mut m = HashMap::<StImpl, i32>::new();
And, of course, you could also have St just directly derive Hash and Eq to use it in the hash map in the first place:
#[derive(Hash, PartialEq, Eq)]
struct St(isize);
let mut m = HashMap::<St, i32>::new();
Now the derived impls will automatically update if you change the internals of St.
In response to the question edit, I'll propose what I would do here. If you really want to future-proof your code against changes to the library you don't control, you can wrap St in a custom struct whose Hash and Eq delegate to the internals. Something like
use std::hash::{Hash, Hasher};

// Derive Clone, Copy, and anything else useful St implements here.
struct StWrapper(St);

impl Hash for StWrapper {
    fn hash<H: Hasher>(&self, hasher: &mut H) {
        self.0.0.hash(hasher);
    }
}

impl PartialEq for StWrapper {
    fn eq(&self, rhs: &Self) -> bool {
        self.0.0 == rhs.0.0
    }
}

impl Eq for StWrapper {}
Our implementations all reference the inside of St (via self.0.0), but we never actually name its type, so as long as the inner type is something that at least gives us Eq and Hash, this will continue to work even if the specific type changes. So we can use StWrapper as our key type in a HashMap and be assured that it's protected against future changes to the library.
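Usage would then look something like this (a sketch; St is the library type from the question, and StWrapper is the wrapper defined above):

use std::collections::HashMap;

fn main() {
    let mut m: HashMap<StWrapper, i32> = HashMap::new();
    m.insert(StWrapper(St(1)), 2);
    // Lookup goes through our delegating Hash and Eq impls.
    assert_eq!(m.get(&StWrapper(St(1))), Some(&2));
}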
It is up to you to weigh whether your particular use case warrants this much boilerplate to future-proof against changes to another library, or whether it's better to just use u32 and absorb the cost if the implementation does change. Note that, while this approach does require a decent bit of code, it is very likely to be a zero-cost abstraction at runtime, since St, StWrapper, and the inner type should all compile down to the same representation in most cases (not guaranteed, but likely), and all of the hash and eq calls are resolved statically.
How would I add const generics? Let's say I have a type foo:
pub struct foo<const bar: i64> {
    value: f64,
}
and I want to implement Mul so I can multiply two foos together. I want to treat bar as a dimension, so foo<baz>{value: x} * foo<quux>{value: k} == foo<{baz + quux}>{value: x * k}, as follows:
impl<const baz: i64, const quux: i64> Mul<foo<quux>> for foo<baz> {
    type Output = foo<{baz + quux}>;

    fn mul(self, rhs: foo<quux>) -> Self::Output {
        Self::Output {
            value: self.value * rhs.value,
        }
    }
}
I get an error telling me I need to add a where bound on {baz + quux} within the definition of the output type. What exactly does this mean, and how do I implement it? I can't find any relevant information on where clauses.
The solution
I got a variation on your code to work here:
impl<const baz: i64, const quux: i64> Mul<Foo<quux>> for Foo<baz>
where
    Foo<{baz + quux}>: Sized,
{
    type Output = Foo<{baz + quux}>;

    fn mul(self, rhs: Foo<quux>) -> Self::Output {
        Self::Output {
            value: self.value * rhs.value,
        }
    }
}
How I got there
I've reproduced the full error that you get without the added where clause below:
error: unconstrained generic constant
--> src/main.rs:11:5
|
11 | type Output = Foo<{baz + quux}>;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
help: try adding a `where` bound using this expression: `where [u8; {baz + quux}]: Sized`
Now, the clause that it suggests is not very useful, for one reason: the length parameter of an array must be a usize, but our values baz and quux (and their sum) are i64. I'd imagine that the compiler authors included that particular suggestion because the primary use case for const generics is embedding array sizes in types. I've opened an issue on GitHub about this diagnostic.
Why is this necessary?
A where clause specifies constraints on some generic code element (a function, type, trait, or, in this case, an implementation) based on the traits and lifetimes that one or more generic parameters, or a derivative thereof, must satisfy. There are equivalent shorthands for many cases, but the overall requirement is that the constraints are fully specified.
In our case, it may seem superficially that this implementation works for any combination of baz and quux, but this is not the case, due to integer overflow: if we supply sufficiently large values of the same sign for both, their sum cannot be represented by an i64. In other words, i64 is not closed under addition.
The constraint that we add indirectly requires that the sum of the two values is in the set of possible values of an i64, by requiring something of the type which consumes it. Hence, supplying 2^62 for both baz and quux is not valid: the sum overflows i64, so the resulting type Foo<{baz + quux}> does not exist, and it cannot possibly implement the Sized trait. While this is technically a stricter constraint than we need (Sized is a stronger requirement than a type simply existing), all Foo<bar> that exist implement Sized, so in our case they amount to the same thing. On the other hand, without the constraint, no where clause, explicit or shorthand, specifies this requirement.
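Putting it all together, here is a minimal end-to-end sketch (the const parameters are uppercased here to follow naming convention; the {BAZ + QUUX} bound requires a nightly toolchain with the incomplete generic_const_exprs feature):

#![feature(generic_const_exprs)]

use std::ops::Mul;

pub struct Foo<const BAR: i64> {
    value: f64,
}

impl<const BAZ: i64, const QUUX: i64> Mul<Foo<QUUX>> for Foo<BAZ>
where
    Foo<{ BAZ + QUUX }>: Sized,
{
    type Output = Foo<{ BAZ + QUUX }>;

    fn mul(self, rhs: Foo<QUUX>) -> Self::Output {
        Self::Output {
            value: self.value * rhs.value,
        }
    }
}

fn main() {
    let area = Foo::<2> { value: 6.0 };   // dimension 2
    let length = Foo::<1> { value: 2.0 }; // dimension 1
    let volume: Foo<3> = area * length;   // dimensions add: 2 + 1 = 3
    assert_eq!(volume.value, 12.0);
}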
I have a variable tokens: &[AsRef<str>] and I want to concatenate it into a single string:
// tokens: &[AsRef<str>]
let text = tokens.join(""); // Error
let text = tokens.iter().map(|x| x.as_ref()).collect::<Vec<_>>().join(""); // Ok, but...
The second is awkward and inefficient, because it copies the references into a new intermediate Vec.
According to the source code, join can be applied to tokens if its type is &[Borrow<str>]:
// if tokens: &[Borrow<str>]
let text = tokens.join(""); // OK

// so I want to convert &[AsRef<str>] to &[Borrow<str>]
let text = convert_to_borrow(tokens).join("");
How should I do that? Why is Join implemented for types that implement Borrow but not AsRef?
It may be slightly slower, but you can collect an iterator of &strs directly into a String.
let text: String = tokens.iter().map(|s| s.as_ref()).collect();
This is possible because String implements FromIterator<&'_ str>. This method grows the String by repeatedly calling push_str, which may mean it has to be reallocated a number of times, but it does not create an intermediate Vec<&str>. Depending on the size of the slices and strings being used this may be slower (although in some cases it could also be slightly faster). If the difference would be significant to you, you should benchmark both versions.
There is no way to treat a slice of T: AsRef<str> as if it were a slice of T: Borrow<str> because not everything that implements AsRef implements Borrow, so in generic code the compiler can't know what Borrow implementation to apply.
Trentcl's answer gives you a solution to your actual problem by relying only on the AsRef<str> implementation. What follows is more of an answer to the more general question in your title.
Certain traits carry with them invariants which implementations must enforce. In particular, if implementations of Borrow<T> also implement Eq, Hash, and Ord then the implementations of those traits for T must behave identically. This requirement is a way of saying that the borrowed value is "the same" as the original value, but just viewed in a different way. For example the String: Borrow<str> implementation must return the entire string slice; it would be incorrect to return a subslice.
AsRef does not have this restriction. An implementation of AsRef<T> can implement traits like Hash and Eq in a completely different way from T. If you need to return a reference to just a part of a struct, then AsRef can do it, while Borrow cannot.
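For example, this sketch (a hypothetical Person type; the field names are illustrative) is a perfectly valid AsRef but would be a dubious Borrow:

struct Person {
    name: String,
    age: u32,
}

// Fine: AsRef may expose any view of the value, including a single field.
impl AsRef<str> for Person {
    fn as_ref(&self) -> &str {
        &self.name
    }
}

// A Borrow<str> impl doing the same would be incorrect unless Person's
// Eq, Ord, and Hash all behaved exactly like the name's alone, ignoring age.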
All this means that you cannot derive a valid Borrow<T> implementation from an arbitrary AsRef<T> implementation: the AsRef implementation may not enforce the invariants that Borrow requires.
However, the other way around does work. You can create an implementation of AsRef<T>, given an arbitrary Borrow<T>:
use std::borrow::Borrow;
use std::convert::AsRef;
use std::marker::PhantomData;

pub struct BorrowAsRef<'a, T: ?Sized, U: ?Sized>(&'a T, PhantomData<U>);

impl<'a, T, U> AsRef<U> for BorrowAsRef<'a, T, U>
where
    T: Borrow<U> + ?Sized,
    U: ?Sized,
{
    fn as_ref(&self) -> &U {
        self.0.borrow()
    }
}

pub trait ToAsRef<T: ?Sized, U: ?Sized> {
    fn to_as_ref(&self) -> BorrowAsRef<'_, T, U>;
}

impl<T, U> ToAsRef<T, U> for T
where
    T: ?Sized + Borrow<U>,
    U: ?Sized,
{
    fn to_as_ref(&self) -> BorrowAsRef<'_, T, U> {
        BorrowAsRef(self, PhantomData)
    }
}

fn borrowed(v: &impl Borrow<str>) {
    needs_as_ref(&v.to_as_ref())
}

fn needs_as_ref(v: &impl AsRef<str>) {
    println!("as_ref: {:?}", v.as_ref())
}
Why is Join implemented for types that implement Borrow but not AsRef?
This is a blanket implementation for all types that implement Borrow<str>, which means it can't also be implemented for types that implement AsRef<str>. Even with the unstable feature min_specialization enabled, it wouldn't work, because having an implementation of AsRef is not more "specific" than having an implementation of Borrow. So they had to pick one or the other.
It could be argued that AsRef would have been a better choice because it covers more types. But unfortunately I don't think this can be changed now because it would be a breaking change.
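To illustrate why the two blanket impls cannot coexist, here is a sketch with a simplified stand-in trait (MyJoin is hypothetical; the real Join trait in std is unstable and internal):

use std::borrow::Borrow;

trait MyJoin {
    fn my_join(&self, sep: &str) -> String;
}

impl<S: Borrow<str>> MyJoin for [S] {
    fn my_join(&self, sep: &str) -> String {
        self.iter().map(|s| s.borrow()).collect::<Vec<_>>().join(sep)
    }
}

// Uncommenting this second blanket impl is rejected with
// error[E0119]: conflicting implementations, because nothing stops a
// single type from implementing both Borrow<str> and AsRef<str>.
//
// impl<S: AsRef<str>> MyJoin for [S] {
//     fn my_join(&self, sep: &str) -> String {
//         self.iter().map(|s| s.as_ref()).collect::<Vec<_>>().join(sep)
//     }
// }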
I want to make an instance of Ord which compares my objects by a struct field. Maybe I am missing something here.
#[deriving(Eq, Clone)]
struct SortableLine<T> {
    comparablePart: ~T,
    line: ~str
}

impl Ord for SortableLine<~Ord> {
    fn lt(&self, other: &SortableLine<~Ord>) -> bool {
        return self.comparablePart.lt(&other.comparablePart);
    }
}
This fails with:
cannot call a method whose type contains a self-type through an object
Is there a way to make the ordering of the parent object based on the ordering of a field comparison? Thanks
Your type parameters are the problem; you're trying to use trait objects when that's not what you've actually got or want. Here's how you should implement it: with generics.
#[deriving(Eq, Clone)]
struct SortableLine<T> {
    comparable_part: ~T,
    line: ~str
}

impl<T: Ord> Ord for SortableLine<T> {
    fn lt(&self, other: &SortableLine<T>) -> bool {
        return self.comparable_part < other.comparable_part;
    }
}
Note two other changes:
I used a < b rather than a.lt(&b). I reckon it's simpler, though it's much less important in an Ord impl.
I changed comparablePart to comparable_part (oh, and spacing in a couple of places) to fit in with standard Rust styles.
This sort of thing has the often-convenient side effect that you don't need it to be a SortableLine; it can just be a Line, and it will be orderable if it's made of orderable parts, and not otherwise.
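For reference, here is the same idea in today's Rust, where ~T and ~str have become Box<T> and String, and Ord is implemented via cmp rather than lt (a sketch that keeps comparison consistent across all four traits):

use std::cmp::Ordering;

#[derive(Clone)]
struct SortableLine<T> {
    comparable_part: Box<T>,
    line: String,
}

impl<T: Ord> PartialEq for SortableLine<T> {
    fn eq(&self, other: &Self) -> bool {
        self.comparable_part == other.comparable_part
    }
}

impl<T: Ord> Eq for SortableLine<T> {}

impl<T: Ord> PartialOrd for SortableLine<T> {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

impl<T: Ord> Ord for SortableLine<T> {
    // Compare by the comparable part only; equality is defined the same
    // way above, so Eq, PartialEq, PartialOrd, and Ord all agree.
    fn cmp(&self, other: &Self) -> Ordering {
        self.comparable_part.cmp(&other.comparable_part)
    }
}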