I would like to create a BTreeMap in Rust, but to compare my keys I need another data structure, which I store in a variable. Unfortunately, it seems that trait implementations in Rust can't capture local state, so my Ord implementation can't depend on local variables.
Does this mean I have to reimplement the whole BTreeMap so that it takes closures?
Though it's not optimal (because of the extra data/pointer you have to store, and which you could get wrong), you can store the local variable (or, depending on size and use case, a reference to it) alongside your values:
use std::cmp::{PartialOrd, Ord, Ordering};
use std::collections::BTreeSet;
fn main() {
dbg!(by_distance_from(-5));
}
fn by_distance_from(x: i32) -> Vec<i32> {
let v = vec![-5, 0, 1, 3, 10];
struct Comp<'a> {
cmp: &'a Cmp,
v: i32,
}
struct Cmp {
x: i32,
}
let cmp = Cmp { x };
impl PartialOrd for Comp<'_> {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
self.v.abs_diff(self.cmp.x).partial_cmp(&other.v.abs_diff(other.cmp.x))
}
}
impl Ord for Comp<'_> {
fn cmp(&self, other: &Self) -> Ordering {
self.partial_cmp(other).unwrap()
}
}
impl PartialEq for Comp<'_> {
fn eq(&self, other: &Self) -> bool {
self.v == other.v
}
}
impl Eq for Comp<'_> {}
let s: BTreeSet<_> = v.into_iter().map(|v| Comp {
cmp: &cmp,
v,
}).collect();
s.into_iter().map(|Comp { v, ..}| v).collect()
}
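Since the original question was about BTreeMap rather than BTreeSet, the same wrapper idea carries over by wrapping the map keys. Below is a minimal self-contained sketch; Cmp, Key and the tie-breaking rule are illustrative choices, not part of the answer above:
use std::cmp::Ordering;
use std::collections::BTreeMap;
struct Cmp {
    x: i32,
}
// Key wrapper that carries a reference to the comparator state.
struct Key<'a> {
    cmp: &'a Cmp,
    v: i32,
}
impl PartialEq for Key<'_> {
    fn eq(&self, other: &Self) -> bool {
        self.v == other.v
    }
}
impl Eq for Key<'_> {}
impl Ord for Key<'_> {
    fn cmp(&self, other: &Self) -> Ordering {
        // Order by distance from x, breaking ties by the value itself so that
        // distinct keys never compare as Equal.
        (self.v.abs_diff(self.cmp.x), self.v)
            .cmp(&(other.v.abs_diff(other.cmp.x), other.v))
    }
}
impl PartialOrd for Key<'_> {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(Ord::cmp(self, other))
    }
}
fn main() {
    let cmp = Cmp { x: -5 };
    let mut m = BTreeMap::new();
    m.insert(Key { cmp: &cmp, v: 3 }, "three");
    m.insert(Key { cmp: &cmp, v: -5 }, "minus five");
    m.insert(Key { cmp: &cmp, v: 10 }, "ten");
    // Iterates in distance-from-x order: -5, 3, 10.
    for (k, name) in &m {
        println!("{} => {}", k.v, name);
    }
}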
What I am trying to do is implement the strategy pattern in Rust. In this example I have several sorting algorithms that I've coded:
use std::cmp::{ Ord, Ordering };
pub trait Sorter {
fn sort<T>(arr: &mut Vec<T>, desc: bool) where T: Ord + Copy;
}
pub struct BubbleSort;
pub struct SelectionSort;
impl Sorter for BubbleSort {
fn sort<T>(arr: &mut Vec<T>, desc: bool) where T: Ord + Copy{
...
}
}
impl Sorter for SelectionSort {
fn sort<T>(arr: &mut Vec<T>, desc: bool) where T: Ord + Copy{
...
}
}
Then I want to pass one of them to a function to use it. Something like this:
fn sort<T>(sorter: dyn Sorter, arr: &mut Vec<T>) {
sorter::sort(arr);
}
Also, is it possible to add them to, let's say, a Vec and iterate over it, so I can call the functions from there?
Is this possible, or do I need to make instances of the structs and use methods instead?
You can statically parameterise on the sorter type:
fn sort<Strategy: Sorter, T>(arr: &mut Vec<T>, descending: bool) where T: Ord + Copy {
Strategy::sort(arr, descending);
}
fn main() {
let mut v: Vec<u8> = Vec::new();
sort::<SelectionSort, _>(&mut v, false);
}
In other situations you might want to instantiate the structure and work with instance methods, but since the method is generic it can't be dynamically dispatched, so that isn't very useful here. You could, however, have an intermediate enum serving as a dynamic dispatcher (note that this variant assumes the Sorter trait's method is changed to take &self):
fn sort<Strategy: Sorter, T>(strategy: Strategy, arr: &mut Vec<T>, descending: bool) where T: Ord + Copy {
strategy.sort(arr, descending);
}
enum Dynamic {
BubbleSort,
SelectionSort
}
impl Sorter for Dynamic {
fn sort<T>(&self, arr: &mut Vec<T>, desc: bool) where T: Ord + Copy {
match self {
Self::BubbleSort => sort(BubbleSort, arr, desc),
Self::SelectionSort => sort(SelectionSort, arr, desc),
}
}
}
fn main() {
let mut v: Vec<u8> = Vec::new();
sort(Dynamic::BubbleSort, &mut v, false);
}
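On the second part of the question (putting strategies in a Vec and iterating over it): with the enum dispatcher this works directly, since Dynamic is an ordinary value. A minimal sketch of an alternative main, assuming the &self-based trait used by the Dynamic impl above:
fn main() {
    let strategies = vec![Dynamic::BubbleSort, Dynamic::SelectionSort];
    let mut v: Vec<u8> = vec![3, 1, 2];
    // Each iteration dispatches through the enum to the selected algorithm.
    for strategy in &strategies {
        strategy.sort(&mut v, false);
    }
}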
With Rust traits, I can express a Monoid type class (forgive me for the naming of the methods):
trait Monoid {
fn append(self, other: Self) -> Self;
fn neutral() -> Self;
}
Then, I can also implement the trait for strings or integers:
impl Monoid for i32 {
fn append(self, other: i32) -> i32 {
self + other
}
fn neutral() -> Self { 0 }
}
However, how could I now add another implementation on i32 for the multiplication case?
impl Monoid for i32 {
fn append(self, other: i32) -> i32 {
self * other
}
fn neutral() -> Self { 1 }
}
I tried something like what is done in functional, but that solution seems to rely on having an additional type parameter on the trait instead of using Self for the elements, which gives me a warning.
The preferred solution would be using marker traits for the operations, something I also tried but didn't succeed in.
The answer, as pointed out by @rodrigo, is to use marker structs.
The following example shows a working snippet:
trait Op {}
struct Add;
struct Mul;
impl Op for Add {}
impl Op for Mul {}
trait Monoid<T: Op>: Copy {
fn append(self, other: Self) -> Self;
fn neutral() -> Self;
}
impl Monoid<Add> for i32 {
fn append(self, other: i32) -> i32 {
self + other
}
fn neutral() -> Self {
0
}
}
impl Monoid<Mul> for i32 {
fn append(self, other: i32) -> i32 {
self * other
}
fn neutral() -> Self {
1
}
}
pub enum List<T> {
Nil,
Cons(T, Box<List<T>>),
}
fn combine<O: Op, T: Monoid<O>>(l: &List<T>) -> T {
match l {
List::Nil => <T as Monoid<O>>::neutral(),
List::Cons(h, t) => h.append(combine(&*t)),
}
}
fn main() {
let list = List::Cons(
5,
Box::new(List::Cons(
2,
Box::new(List::Cons(
4,
Box::new(List::Cons(
5,
Box::new(List::Cons(-1, Box::new(List::Cons(8, Box::new(List::Nil))))),
)),
)),
)),
);
println!("{}", combine::<Add, _>(&list));
println!("{}", combine::<Mul, _>(&list))
}
I have a function algo which works with a type S1. I also have a type S2 which contains all of the fields of S1 plus some additional ones. How should I modify algo to also accept S2 as input, without creating a temporary variable of type S1 filled with the data from S2?
struct Moo1 {
f1: String,
f2: i32,
}
struct Moo2 {
f1: String,
f2: i32,
other_fields: f32,
}
struct S1 {
x: i32,
v: Vec<Moo1>,
}
struct S2 {
x: i32,
v: Vec<Moo2>,
}
//before fn algo(s: &S1)
fn algo<???>(???) {
//work with x and v (only with f1 and f2)
}
Where I'm stuck
Let's assume algo has this implementation (my real application has another implementation):
fn algo(s: &S1) {
println!("s.x: {}", s.x);
for y in &s.v {
println!("{} {}", y.f1, y.f2);
}
}
To access the fields of Moo1 and Moo2 I introduce the trait AsMoo, and to access the x and v fields I introduce the trait AsS:
trait AsMoo {
fn f1(&self) -> &str;
fn f2(&self) -> i32;
}
trait AsS {
fn x(&self) -> i32;
// fn v(&self) -> ???;
}
fn algo<S: AsS>(s: &S) {
println!("s.x: {}", s.x());
}
I'm stuck at the implementation of the AsS::v method. I don't want to allocate memory just to run my algo, but I need something like a Vec<&AsMoo> in some way.
Maybe I need to return some kind of Iterator<&AsMoo>, but I have no idea how to do that, and it looks overly complex for this problem.
Maybe I should use macros instead?
Any problem in computer science can be solved by adding another layer of indirection; except, of course, the problem of having too many such layers.
You are therefore correct that you are missing an S trait to generalize S1 and S2. In S, you can use a feature called associated types:
trait Moo {
fn f1(&self) -> &str;
fn f2(&self) -> i32;
}
trait S {
type Mooer: Moo;
fn x(&self) -> i32;
fn v(&self) -> &[Self::Mooer];
}
The bit type Mooer: Moo; says: I don't quite know what the exact type Mooer will end up being, but it'll implement the Moo trait.
This lets you write:
impl S for S1 {
type Mooer = Moo1;
fn x(&self) -> i32 { self.x }
fn v(&self) -> &[Self::Mooer] { &self.v }
}
impl S for S2 {
type Mooer = Moo2;
fn x(&self) -> i32 { self.x }
fn v(&self) -> &[Self::Mooer] { &self.v }
}
fn algo<T: S>(s: &T) {
println!("s.x: {}", s.x());
for y in s.v() {
println!("{} {}", y.f1(), y.f2());
}
}
And your generic algo knows that whatever type Mooer ends up being, it conforms to the Moo trait so the interface of Moo is available.
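For completeness (not shown above), Moo1 and Moo2 still need Moo implementations so that the Mooer: Moo bound is satisfied. A minimal sketch based on the field definitions from the question:
impl Moo for Moo1 {
    fn f1(&self) -> &str { &self.f1 }
    fn f2(&self) -> i32 { self.f2 }
}
impl Moo for Moo2 {
    fn f1(&self) -> &str { &self.f1 }
    fn f2(&self) -> i32 { self.f2 }
}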
I am using typenum in Rust to add compile-time dimension checking to some types I am working with. I would like to combine it with a dynamic type so that an expression with mismatched dimensions would fail at compile time if given two incompatible typenum types, but compile fine and fail at runtime if one or more of the types is Dynamic. Is this possible in Rust? If so, how would I combine Unsigned and Dynamic?
extern crate typenum;
use typenum::Unsigned;
use std::marker::PhantomData;
struct Dynamic {}
// N needs to be some kind of union type of Unsigned and Dynamic, but don't know how
struct Vector<E, N: Unsigned> {
vec: Vec<E>,
_marker: PhantomData<(N)>,
}
impl<E, N: Unsigned> Vector<E, N> {
fn new(vec: Vec<E>) -> Self {
assert!(N::to_usize() == vec.len());
Vector {
vec: vec,
_marker: PhantomData,
}
}
}
fn add<E, N: Unsigned>(vector1: &Vector<E, N>, vector2: &Vector<E, N>) {
print!("Implement addition here")
}
fn main() {
use typenum::{U3, U4};
let vector3 = Vector::<usize, U3>::new(vec![1, 2, 3]);
let vector4 = Vector::<usize, U4>::new(vec![1, 2, 3, 4]);
// Can I make the default be Dynamic here?
let vector4_dynamic = Vector::new(vec![1, 2, 3, 4]);
add(&vector3, &vector4); // should fail to compile
add(&vector3, &vector4_dynamic); // should fail at runtime
}
Specifying a default for type parameters has, sadly, still not been stabilized, so you'll need to use a nightly compiler in order for the following to work.
If you're playing with defaulted type parameters, be aware that the compiler will first try to infer the types based on usage, and only fall back to the default when there's not enough information. For example, if you were to pass a vector declared with an explicit N and a vector declared without N to add, the compiler would infer that the second vector's N must be the same as the first vector's N, instead of selecting Dynamic for the second vector's N. Therefore, if the sizes don't match, the runtime error would happen when constructing the second vector, not when adding them together.
It's possible to define multiple impl blocks for different sets of type parameters. For example, we can have an implementation of new when N: Unsigned and another when N is Dynamic.
extern crate typenum;
use std::marker::PhantomData;
use typenum::Unsigned;
struct Dynamic;
struct Vector<E, N> {
vec: Vec<E>,
_marker: PhantomData<N>,
}
impl<E, N: Unsigned> Vector<E, N> {
fn new(vec: Vec<E>) -> Self {
assert!(N::to_usize() == vec.len());
Vector {
vec: vec,
_marker: PhantomData,
}
}
}
impl<E> Vector<E, Dynamic> {
fn new(vec: Vec<E>) -> Self {
Vector {
vec: vec,
_marker: PhantomData,
}
}
}
However, this approach with two impls providing a new method doesn't work well with defaulted type parameters; the compiler will complain about the ambiguity instead of inferring the default when calling new. So instead, we need to define a trait that unifies N: Unsigned and Dynamic. This trait will contain a method to help us perform the assert in new correctly depending on whether the size is fixed or dynamic.
#![feature(default_type_parameter_fallback)]
use std::marker::PhantomData;
use std::ops::Add;
use typenum::Unsigned;
struct Dynamic;
trait FixedOrDynamic {
fn is_valid_size(value: usize) -> bool;
}
impl<T: Unsigned> FixedOrDynamic for T {
fn is_valid_size(value: usize) -> bool {
Self::to_usize() == value
}
}
impl FixedOrDynamic for Dynamic {
fn is_valid_size(_value: usize) -> bool {
true
}
}
struct Vector<E, N: FixedOrDynamic = Dynamic> {
vec: Vec<E>,
_marker: PhantomData<N>,
}
impl<E, N: FixedOrDynamic> Vector<E, N> {
fn new(vec: Vec<E>) -> Self {
assert!(N::is_valid_size(vec.len()));
Vector {
vec: vec,
_marker: PhantomData,
}
}
}
In order to support add receiving a fixed and a dynamic vector, but not fixed vectors of different lengths, we need to introduce another trait. For each N: Unsigned, only N itself and Dynamic will implement the trait.
trait SameOrDynamic<N> {
type Output: FixedOrDynamic;
fn length_check(left_len: usize, right_len: usize) -> bool;
}
impl<N: Unsigned> SameOrDynamic<N> for N {
type Output = N;
fn length_check(_left_len: usize, _right_len: usize) -> bool {
true
}
}
impl<N: Unsigned> SameOrDynamic<Dynamic> for N {
type Output = N;
fn length_check(left_len: usize, right_len: usize) -> bool {
left_len == right_len
}
}
impl<N: Unsigned> SameOrDynamic<N> for Dynamic {
type Output = N;
fn length_check(left_len: usize, right_len: usize) -> bool {
left_len == right_len
}
}
impl SameOrDynamic<Dynamic> for Dynamic {
type Output = Dynamic;
fn length_check(left_len: usize, right_len: usize) -> bool {
left_len == right_len
}
}
fn add<E, N1, N2>(vector1: &Vector<E, N1>, vector2: &Vector<E, N2>) -> Vector<E, N2::Output>
where N1: FixedOrDynamic,
N2: FixedOrDynamic + SameOrDynamic<N1>,
{
assert!(N2::length_check(vector1.vec.len(), vector2.vec.len()));
unimplemented!()
}
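Putting the pieces together, usage might look like the following sketch (assuming the Dynamic fallback applies as described above; the element-wise addition itself is still left unimplemented, so actually running this would hit the unimplemented!()):
use typenum::{U3, U4};
fn main() {
    let vector3 = Vector::<usize, U3>::new(vec![1, 2, 3]);
    let vector4 = Vector::<usize, U4>::new(vec![1, 2, 3, 4]);
    // With default_type_parameter_fallback, N falls back to Dynamic here.
    let vector4_dynamic = Vector::new(vec![1, 2, 3, 4]);
    // add(&vector3, &vector4);      // fails to compile: U3 and U4 never match
    add(&vector3, &vector4_dynamic); // compiles; the length mismatch is caught by a runtime assert
}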
If you don't actually need to support calling add with a fixed and a dynamic vector, then you can simplify this drastically:
fn add<E, N: FixedOrDynamic>(vector1: &Vector<E, N>, vector2: &Vector<E, N>) -> Vector<E, N> {
// TODO: perform length check when N is Dynamic
unimplemented!()
}
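One way to fill in that TODO (a sketch): because both arguments share the same N, a plain length assertion covers the Dynamic case and is trivially true for fixed sizes:
fn add<E, N: FixedOrDynamic>(vector1: &Vector<E, N>, vector2: &Vector<E, N>) -> Vector<E, N> {
    // For a fixed N the lengths already match by construction;
    // for Dynamic this is the real runtime check.
    assert_eq!(vector1.vec.len(), vector2.vec.len());
    unimplemented!()
}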
You could just keep using Vec<T> for what it does best, and use your Vector<T, N> for checked length vectors. To accomplish that, you can define a trait for addition and implement it for different combinations of the two types of vector:
trait MyAdd<T> {
type Output;
fn add(&self, other: &T) -> Self::Output;
}
impl <T, N: Unsigned> MyAdd<Vector<T, N>> for Vector<T, N> {
type Output = Vector<T, N>;
fn add(&self, other: &Vector<T, N>) -> Self::Output {
Vector::new(/* implement addition here */)
}
}
impl <T, N: Unsigned> MyAdd<Vec<T>> for Vector<T, N> {
type Output = Vector<T, N>;
fn add(&self, other: &Vec<T>) -> Self::Output {
Vector::new(/* implement addition here */)
}
}
impl <T> MyAdd<Vec<T>> for Vec<T> {
type Output = Vec<T>;
fn add(&self, other: &Vec<T>) -> Self::Output {
Vec::new(/* implement addition here */)
}
}
impl <T, N: Unsigned> MyAdd<Vector<T, N>> for Vec<T> {
type Output = Vector<T, N>;
fn add(&self, other: &Vector<T, N>) -> Self::Output {
Vector::new(/* implement addition here */)
}
}
Now you can use it almost in the same way as you were trying to:
fn main() {
use typenum::{U3, U4};
let vector3 = Vector::<usize, U3>::new(vec![1, 2, 3]);
let vector4 = Vector::<usize, U4>::new(vec![1, 2, 3, 4]);
let vector4_dynamic = vec![1, 2, 3, 4];
vector3.add(&vector4); // Compile error!
vector3.add(&vector4_dynamic); // Runtime error on length assertion
}
You could avoid creating your own trait by using the built-in std::ops::Add, but then you wouldn't be able to implement it for Vec: the left side of the .add would always have to be Vector<E, N>, with Vec limited to being the argument. You could get around that with another Vec "newtype" wrapper, similar to your Vector<E, N> but without the length check and phantom type.
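A sketch of that newtype idea (DynVec is a hypothetical name, and element-wise addition is assumed for illustration):
use std::ops::Add;
// Length-unchecked vector newtype; unlike a bare Vec<T>, it can appear on
// the left-hand side of `+` because Add is implemented for it here.
struct DynVec<T>(Vec<T>);
impl<T: Add<Output = T> + Copy> Add for DynVec<T> {
    type Output = DynVec<T>;
    fn add(self, other: DynVec<T>) -> DynVec<T> {
        // Runtime check, since there is no phantom length type to compare.
        assert_eq!(self.0.len(), other.0.len());
        DynVec(self.0.iter().zip(&other.0).map(|(&a, &b)| a + b).collect())
    }
}
fn main() {
    let a = DynVec(vec![1, 2, 3]);
    let b = DynVec(vec![4, 5, 6]);
    let c = a + b;
    println!("{:?}", c.0); // [5, 7, 9]
}
Additional impls between DynVec and Vector<E, N> would follow the same pattern as the MyAdd impls above.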
I want to populate a binary heap with floats; more specifically, I'd like to implement a min-heap.
It seems that floats do not implement Ord and thus aren't usable out of the box. My attempts to wrap them have so far failed. However, it seems that if I could wrap them, I could also implement Ord in such a way that it would effectively make BinaryHeap a min-heap.
Here's an example of a wrapper I tried:
#[derive(PartialEq, PartialOrd)]
struct MinNonNan(f64);
impl Eq for MinNonNan {}
impl Ord for MinNonNan {
fn cmp(&self, other: &MinNonNan) -> Ordering {
let ord = self.partial_cmp(other).unwrap();
match ord {
Ordering::Greater => Ordering::Less,
Ordering::Less => Ordering::Greater,
Ordering::Equal => ord
}
}
}
The problem is that pop returns the values as though it were a max-heap.
What exactly do I need to do to populate a BinaryHeap with f64 values as a min-heap?
Crate-based solution
Instead of writing your own MinNonNan, consider using the ordered-float crate + the std::cmp::Reverse type.
type MinNonNan = Reverse<NotNan<f64>>;
Manual solution
Since you are #[derive]ing PartialOrd, the .gt(), .lt(), etc. methods still compare normally, i.e. MinNonNan(42.0) < MinNonNan(47.0) is still true. The Ord bound only requires you to provide a strictly ordered type; it doesn't mean the implementation will use .cmp() instead of </>/<=/>= everywhere, nor will the compiler suddenly change those operators to use the Ord implementation.
If you want to flip the order, you need to implement PartialOrd as well.
#[derive(PartialEq)]
struct MinNonNan(f64);
impl PartialOrd for MinNonNan {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
other.0.partial_cmp(&self.0)
}
}
impl Ord for MinNonNan {
fn cmp(&self, other: &MinNonNan) -> Ordering {
self.partial_cmp(other).unwrap()
}
}
Working Examples
Crate-based solution
use ordered_float::NotNan; // 2.7.0
use std::{cmp::Reverse, collections::BinaryHeap};
fn main() {
let mut minheap = BinaryHeap::new();
minheap.push(Reverse(NotNan::new(2.0).unwrap()));
minheap.push(Reverse(NotNan::new(1.0).unwrap()));
minheap.push(Reverse(NotNan::new(42.0).unwrap()));
if let Some(Reverse(nn)) = minheap.pop() {
println!("{}", nn.into_inner());
}
}
Manual solution
use std::{cmp::Ordering, collections::BinaryHeap};
#[derive(PartialEq)]
struct MinNonNan(f64);
impl Eq for MinNonNan {}
impl PartialOrd for MinNonNan {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
other.0.partial_cmp(&self.0)
}
}
impl Ord for MinNonNan {
fn cmp(&self, other: &MinNonNan) -> Ordering {
self.partial_cmp(other).unwrap()
}
}
fn main() {
let mut minheap = BinaryHeap::new();
minheap.push(MinNonNan(2.0));
minheap.push(MinNonNan(1.0));
minheap.push(MinNonNan(42.0));
if let Some(MinNonNan(root)) = minheap.pop() {
println!("{:?}", root);
}
}