I'm trying to write a Rust trait X which requires that any type implementing X can convert to other implementations of X.
So I'm trying to make the declaration of X enforce this like so:
trait X<T> : From<T> where T: X {}
But the compiler tells me that it can't find any type arguments in my bound on T: writing T: X needs type information, i.e. T: X<...>. But that way I am always one type argument short; e.g.
trait X<T, U> : From<T> where T: X<U> {}
Can I get around this in some way? Doing where T: X<_> is not allowed.
Rather than trying to restrict the implementers, I think it would be simpler to provide the implementation as part of the trait:
trait Length {
    fn unit_in_meters() -> f64;
    fn value(&self) -> f64;
    fn new(value: f64) -> Self;

    fn convert_to<T: Length>(&self) -> T {
        T::new(self.value() * Self::unit_in_meters() / T::unit_in_meters())
    }
}

struct Mm {
    v: f64,
}

impl Length for Mm {
    fn unit_in_meters() -> f64 { 0.001 }
    fn value(&self) -> f64 { self.v }
    fn new(value: f64) -> Mm {
        Mm { v: value }
    }
}

struct Inch {
    v: f64,
}

impl Length for Inch {
    fn unit_in_meters() -> f64 { 0.0254 }
    fn value(&self) -> f64 { self.v }
    fn new(value: f64) -> Inch {
        Inch { v: value }
    }
}

fn main() {
    let foot = Inch::new(12f64);
    let foot_in_mm: Mm = foot.convert_to();
    println!("One foot in mm: {}", foot_in_mm.value());
}
For fun, with associated constants (the old associated_consts nightly feature; associated constants have been stable since Rust 1.20, so on a current compiler you can drop the feature attribute below) you can swap the method for a constant conversion factor.
#![feature(associated_consts)]

trait Length {
    const UNIT_IN_METERS: f64;

    fn value(&self) -> f64;
    fn new(value: f64) -> Self;

    fn convert_to<T: Length>(&self) -> T {
        T::new(self.value() * Self::UNIT_IN_METERS / T::UNIT_IN_METERS)
    }
}

struct Mm {
    v: f64,
}

impl Length for Mm {
    const UNIT_IN_METERS: f64 = 0.001;
    fn value(&self) -> f64 { self.v }
    fn new(value: f64) -> Mm {
        Mm { v: value }
    }
}

struct Inch {
    v: f64,
}

impl Length for Inch {
    const UNIT_IN_METERS: f64 = 0.0254;
    fn value(&self) -> f64 { self.v }
    fn new(value: f64) -> Inch {
        Inch { v: value }
    }
}

fn main() {
    let foot = Inch::new(12f64);
    let foot_in_mm: Mm = foot.convert_to();
    println!("One foot in mm: {}", foot_in_mm.value());
}
How would you design this better in Rust? More specifically, is there a way to collapse the redundancy using traits or enums?
Background: I have a C++/Python background and this is my first attempt to see how the language actually flows after reading the Rust book. Not having class inheritance is something I don't yet know how to design around.
trait TemperatureConversion {
    // https://www.nist.gov/pml/weights-and-measures/si-units-temperature
    fn to_celcius(&self) -> f64;
    fn to_fahrenheit(&self) -> f64;
    fn to_kelvin(&self) -> f64;
}

struct Celcius {
    value: f64,
}

struct Fahrenheit {
    value: f64,
}

struct Kelvin {
    value: f64,
}

impl Celcius {
    fn new(value: f64) -> Celcius {
        Celcius { value }
    }
}

impl Fahrenheit {
    fn new(value: f64) -> Fahrenheit {
        Fahrenheit { value }
    }
}

impl Kelvin {
    fn new(value: f64) -> Kelvin {
        Kelvin { value }
    }
}

impl TemperatureConversion for Celcius {
    fn to_celcius(&self) -> f64 {
        self.value
    }

    fn to_fahrenheit(&self) -> f64 {
        (self.value * 1.8) + 32.0
    }

    fn to_kelvin(&self) -> f64 {
        self.value + 273.15
    }
}

impl TemperatureConversion for Fahrenheit {
    fn to_celcius(&self) -> f64 {
        (self.value - 32.0) / 1.8
    }

    fn to_fahrenheit(&self) -> f64 {
        self.value
    }

    fn to_kelvin(&self) -> f64 {
        (self.value - 32.0) / 1.8 + 273.15
    }
}

impl TemperatureConversion for Kelvin {
    fn to_celcius(&self) -> f64 {
        self.value - 273.15
    }

    fn to_fahrenheit(&self) -> f64 {
        (self.value - 273.15) * 1.8 + 32.0
    }

    fn to_kelvin(&self) -> f64 {
        self.value
    }
}

fn main() {
    let c = Celcius::new(100.0);
    println!("100C = {:.2}F or {:.2}K", c.to_fahrenheit(), c.to_kelvin());

    let f = Fahrenheit::new(100.0);
    println!("100F = {:.2}C or {:.2}K", f.to_celcius(), f.to_kelvin());

    let k = Kelvin::new(100.0);
    println!("100K = {:.2}C or {:.2}F", k.to_celcius(), k.to_fahrenheit());
}
Edit: I believe this is the fix:
struct KelvinTemperature {
    kelvin: f64,
}

impl KelvinTemperature {
    fn new(kelvin: f64) -> KelvinTemperature {
        KelvinTemperature { kelvin }
    }

    fn from_celcius(value: f64) -> KelvinTemperature {
        KelvinTemperature {
            kelvin: value + 273.15,
        }
    }

    fn from_fahrenheit(value: f64) -> KelvinTemperature {
        KelvinTemperature {
            kelvin: (value - 32.0) / 1.8 + 273.15,
        }
    }

    fn to_celcius(&self) -> f64 {
        self.kelvin - 273.15
    }

    fn to_fahrenheit(&self) -> f64 {
        (self.kelvin - 273.15) * 1.8 + 32.0
    }

    fn to_kelvin(&self) -> f64 {
        self.kelvin
    }
}

fn main() {
    let temperature = KelvinTemperature::from_celcius(100.0);
    println!(
        "{:.2}C = {:.2}F = {:.2}K",
        temperature.to_celcius(),
        temperature.to_fahrenheit(),
        temperature.to_kelvin()
    );
}
The best design would likely be to avoid reinventing the wheel and recognize that someone else has already done this better than we ever will in the time available. The uom (Units of Measurement) crate provides types for almost every unit you can think of, as well as combinations of them (even composite units like K*ft^2/sec). However, that does not make for a very helpful explanation, so let's set it aside for now.
The first issue I see with this code is that it isn't very easy to extend. If you want to add a new temperature unit, you need to add methods to the TemperatureConversion trait and then implement a whole set of conversion functions for every pair of units. The first change I would make is to turn Temperature into a trait so it is easier to work with.
pub trait Temperature: Copy {
    fn to_kelvin(self) -> f64;
    fn from_kelvin(x: f64) -> Self;

    /// Convert self to a different unit of temperature
    fn convert<T: Temperature>(self) -> T {
        T::from_kelvin(self.to_kelvin())
    }
}
This also gives us the benefit of letting us use it to constrain type parameters later.
pub fn calculate_stuff<T: Temperature>(a: T, b: T) -> T;
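For example, a made-up helper like this (midpoint is not from the original post, purely an illustration) needs nothing beyond the two required methods:
// Average two temperatures of the same unit by going through kelvin,
// so the arithmetic itself is unit-agnostic.
pub fn midpoint<T: Temperature>(a: T, b: T) -> T {
    T::from_kelvin((a.to_kelvin() + b.to_kelvin()) / 2.0)
}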
Next, since we know that temperatures will all be implemented in the same way and there might be a bunch of them, it may be easier to make a macro for them.
macro_rules! define_temperature {
    ($name:ident, $kelvin_at_zero:literal, $kelvin_per_unit:literal) => {
        #[derive(Debug, Copy, Clone, PartialEq, PartialOrd)]
        pub struct $name(f64);

        impl Temperature for $name {
            fn to_kelvin(self) -> f64 {
                self.0 * $kelvin_per_unit + $kelvin_at_zero
            }

            fn from_kelvin(x: f64) -> Self {
                Self((x - $kelvin_at_zero) / $kelvin_per_unit)
            }
        }
    };
}
define_temperature! {Kelvin, 0.0, 1.0}
define_temperature! {Celsius, 273.15, 1.0}
define_temperature! {Fahrenheit, 255.372, 0.55556}
The macro makes it easy to implement a bunch of different units based on their conversion rates, but the trait is not too restrictive so we could potentially implement units that do not follow a linear scale.
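As a quick sanity check (not part of the original answer), the macro-generated units can be used like this, assuming everything above lives in the same module so the private tuple fields are accessible:
fn main() {
    let boiling = Celsius(100.0);
    let in_f: Fahrenheit = boiling.convert();
    let in_k: Kelvin = boiling.convert();
    // Prints roughly Celsius(100.0) -> Fahrenheit(212.0) and Kelvin(373.15),
    // give or take floating-point and rounding error in the constants.
    println!("{:?} = {:?} = {:?}", boiling, in_f, in_k);
}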
I want to have a trait that can be implemented for T and &T but has methods that always return T.
What I would like to do is something like this
use std::borrow::ToOwned;

trait Foo<X: ToOwned> {
    fn f(&self, x: X) -> f64;
    fn g(&self) -> X::Owned;
}

struct Float(f64);

impl Foo<f64> for Float {
    fn f(&self, x: f64) -> f64 {
        x + self.0
    }

    fn g(&self) -> f64 {
        self.0 * 2.0
    }
}

struct List(Vec<f64>);

impl Foo<&Vec<f64>> for List {
    fn f(&self, x: &Vec<f64>) -> f64 {
        x.iter().sum()
    }

    // Error here - `&Vec<f64>` return type expected
    fn g(&self) -> Vec<f64> {
        self.0.iter().map(|&x| 2.0 * x).collect()
    }
}

fn main() {
    let float = Float(2.0);
    println!("{} {}", float.f(3.0), float.g());

    let list = List(vec![0.0, 1.0, 2.0]);
    println!("{} {:?}", list.f(&vec![1.0, 2.0]), list.g());
}
I know that one option is to have a trait that defines the output type like so
trait FooReturn {
    type Output;
}

trait Foo<X: FooReturn> {
    fn f(&self, x: X) -> f64;
    fn g(&self) -> X::Output;
}
and then implement that trait for all relevant types, but I was wondering whether there is a more standard/robust way to do this.
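For reference, spelled out for the Float and List types from above, that option would look roughly like this (a sketch of the idea, not necessarily the most ergonomic design):
struct Float(f64);
struct List(Vec<f64>);

impl FooReturn for f64 {
    type Output = f64;
}

impl<'a> FooReturn for &'a Vec<f64> {
    type Output = Vec<f64>;
}

impl Foo<f64> for Float {
    fn f(&self, x: f64) -> f64 {
        x + self.0
    }

    fn g(&self) -> f64 {
        self.0 * 2.0
    }
}

impl<'a> Foo<&'a Vec<f64>> for List {
    fn f(&self, x: &'a Vec<f64>) -> f64 {
        x.iter().sum()
    }

    // `X::Output` is `Vec<f64>` here, so an owned value can be returned
    fn g(&self) -> Vec<f64> {
        self.0.iter().map(|&x| 2.0 * x).collect()
    }
}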
This is how you would do it once specialization is complete. As things stand, though, I couldn't even get this simple example to compile on 1.55.0-nightly.
#![feature(specialization)]

trait MaybeOwned {
    type Owned;
}

default impl<X> MaybeOwned for X {
    type Owned = X;
}

impl<'a, X> MaybeOwned for &'a X {
    type Owned = X;
}

trait Foo<X: MaybeOwned> {
    fn f(&self, x: &X) -> f64;
    fn g(&self) -> <X as MaybeOwned>::Owned;
}
I'm trying to provide blanket implementations of the trait IDimTransformer for all types implementing IDimOffset0Transformer and IDimOffset1Transformer, but the compiler gives me an error.
trait IDimTransformer {
    fn get_offset_x(&self, x: i32) -> i32;
    fn get_offset_z(&self, z: i32) -> i32;
}

trait IDimOffset0Transformer: IDimTransformer {}
trait IDimOffset1Transformer: IDimTransformer {}

impl<T: IDimOffset0Transformer> IDimTransformer for T {
    fn get_offset_x(&self, x: i32) -> i32 {
        return x;
    }

    fn get_offset_z(&self, z: i32) -> i32 {
        return z;
    }
}

impl<T: IDimOffset1Transformer> IDimTransformer for T {
    fn get_offset_x(&self, x: i32) -> i32 {
        return x - 1;
    }

    fn get_offset_z(&self, z: i32) -> i32 {
        return z - 1;
    }
}
Example of use
struct Layer {}

impl IDimOffset1Transformer for Layer {}

fn some_func(dim: &impl IDimTransformer, x: i32, z: i32) -> i32 {
    return dim.get_offset_x(x) + dim.get_offset_z(z);
}

fn some_func_1(layer: &Layer, x: i32, z: i32) -> i32 {
    return some_func(layer, x, z);
}
Compiler error
error[E0119]: conflicting implementations of trait `layer::IDimTransformer`:
  --> src/layer.rs:59:1
   |
49 | impl<T: IDimOffset0Transformer> IDimTransformer for T {
   | ------------------------------------------------------ first implementation here
...
59 | impl<T: IDimOffset1Transformer> IDimTransformer for T {
   | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ conflicting implementation
The two blanket impls conflict because trait bounds are not considered when checking for overlap: nothing stops a single type from implementing both IDimOffset0Transformer and IDimOffset1Transformer, so both impls could apply to it. I think the pattern you are looking for looks like this in Rust:
trait Transformer {
    fn get_offset_x(x: i32) -> i32;
    fn get_offset_z(z: i32) -> i32;
}

struct Offset0Transformer;

impl Transformer for Offset0Transformer {
    fn get_offset_x(x: i32) -> i32 {
        x
    }

    fn get_offset_z(z: i32) -> i32 {
        z
    }
}

struct Offset1Transformer;

impl Transformer for Offset1Transformer {
    fn get_offset_x(x: i32) -> i32 {
        x - 1
    }

    fn get_offset_z(z: i32) -> i32 {
        z - 1
    }
}

struct Layer { x: i32, z: i32 }

impl Layer {
    fn add_transformed<T: Transformer>(&self) -> i32 {
        T::get_offset_x(self.x) + T::get_offset_z(self.z)
    }
}

fn main() {
    let layer = Layer { x: 5, z: 2 };
    let result = layer.add_transformed::<Offset1Transformer>();
    println!("got: {}", result);
}
Most of the time you won't need this, though. Thinking through what your code is trying to do and looking for a simpler way usually gets you smaller, better code.
In the example below, either one of the two impls would compile on its own, but the compiler doesn't let me override the generic implementation for specific types.
Is there any other way to achieve this?
trait Giver<T, U> {
    fn give_first(&self) -> T;
    fn give_second(&self) -> U;
}

struct Pair<T, U>(T, U);

impl<T, U> Giver<T, U> for Pair<T, U> where T: Clone, U: Clone {
    fn give_first(&self) -> T {
        (self.0).clone()
    }

    fn give_second(&self) -> U {
        (self.1).clone()
    }
}

// works
impl Giver<i32, i32> for Pair<f32, f32> {
    fn give_first(&self) -> i32 {
        1
    }

    fn give_second(&self) -> i32 {
        2
    }
}

// error[E0119]: conflicting implementations of trait `Giver<f32, f32>` for type `Pair<f32, f32>`:
impl Giver<f32, f32> for Pair<f32, f32> {
    fn give_first(&self) -> f32 {
        1.0
    }

    fn give_second(&self) -> f32 {
        2.0
    }
}
#![feature(specialization)]

trait Giver<T, U> {
    fn give_first(&self) -> T;
    fn give_second(&self) -> U;
}

#[derive(Debug)]
struct Pair<T, U>(T, U);

impl<T, U> Giver<T, U> for Pair<T, U> where T: Clone, U: Clone {
    default fn give_first(&self) -> T {
        (self.0).clone()
    }

    default fn give_second(&self) -> U {
        (self.1).clone()
    }
}

impl Giver<i32, i32> for Pair<f32, f32> {
    fn give_first(&self) -> i32 {
        1
    }

    fn give_second(&self) -> i32 {
        2
    }
}

impl Giver<f32, f32> for Pair<f32, f32> {
    fn give_first(&self) -> f32 {
        3.0
    }

    fn give_second(&self) -> f32 {
        4.0
    }
}

fn main() {
    {
        let p = Pair(0.0, 0.0);
        let first: i32 = p.give_first();
        let second: i32 = p.give_second();
        println!("{}, {}", first, second); // 1, 2
    }
    {
        let p: Pair<f32, f32> = Pair(0.0, 0.0);
        let first: f32 = p.give_first();
        let second: f32 = p.give_second();
        println!("{}, {}", first, second); // 3, 4
    }
}
Rust nightly seems to support this if we explicitly specify the expected return type; I tried it with #![feature(specialization)]. There might be a more elegant solution (I'd wait for someone with more knowledge on this).
output: https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=d618cd9b534a5ba49199a2efdcf607bd
I'm trying to implement a mean method for iterators, like it is done with sum.
However, sum is a method on Iterator, so I decided to implement an extension trait for any type that implements Iterator:
pub trait Mean<A = Self>: Sized {
    fn mean<I: Iterator<Item = A>>(iter: I) -> f64;
}

impl Mean for u64 {
    fn mean<I: Iterator<Item = u64>>(iter: I) -> f64 {
        // use zip to start enumeration from 1, not 0
        iter.zip(1..)
            .fold(0., |s, (e, i)| (e as f64 + s * (i - 1) as f64) / i as f64)
    }
}

impl<'a> Mean<&'a u64> for u64 {
    fn mean<I: Iterator<Item = &'a u64>>(iter: I) -> f64 {
        iter.zip(1..)
            .fold(0., |s, (&e, i)| (e as f64 + s * (i - 1) as f64) / i as f64)
    }
}

trait MeanIterator: Iterator {
    fn mean(self) -> f64;
}

impl<T: Iterator> MeanIterator for T {
    fn mean(self) -> f64 {
        Mean::mean(self)
    }
}

fn main() {
    assert_eq!([1, 2, 3, 4, 5].iter().mean(), 3.);
}
The error:
error[E0282]: type annotations needed
  --> src/main.rs:26:9
   |
26 |         Mean::mean(self)
   |         ^^^^^^^^^^ cannot infer type for `Self`
Is there any way to fix the code, or is it impossible in Rust?
like it is done with sum
Let's review how sum works:
pub fn sum<S>(self) -> S
where
    S: Sum<Self::Item>,
sum is implemented on any iterator, so long as the result type S implements Sum for the iterated value. The caller gets to pick the result type. Sum is defined as:
pub trait Sum<A = Self> {
    fn sum<I>(iter: I) -> Self
    where
        I: Iterator<Item = A>;
}
Sum::sum takes an iterator of A and produces a value of the type the trait is implemented for.
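For example (a small illustration, not part of the original answer), the caller can pick S with either a type annotation or the turbofish:
fn main() {
    let ints = [1u32, 2, 3];
    let a: u32 = ints.iter().copied().sum(); // annotation picks S
    let b = ints.iter().copied().sum::<u32>(); // turbofish picks S
    assert_eq!(a, b);
}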
We can copy this structure, changing Sum to Mean, and fill in the straightforward implementations:
trait MeanExt: Iterator {
    fn mean<M>(self) -> M
    where
        M: Mean<Self::Item>,
        Self: Sized,
    {
        M::mean(self)
    }
}

impl<I: Iterator> MeanExt for I {}

trait Mean<A = Self> {
    fn mean<I>(iter: I) -> Self
    where
        I: Iterator<Item = A>;
}

impl Mean for f64 {
    fn mean<I>(iter: I) -> Self
    where
        I: Iterator<Item = f64>,
    {
        let mut sum = 0.0;
        let mut count: usize = 0;

        for v in iter {
            sum += v;
            count += 1;
        }

        if count > 0 {
            sum / (count as f64)
        } else {
            0.0
        }
    }
}

impl<'a> Mean<&'a f64> for f64 {
    fn mean<I>(iter: I) -> Self
    where
        I: Iterator<Item = &'a f64>,
    {
        iter.copied().mean()
    }
}

fn main() {
    let mean: f64 = [1.0, 2.0, 3.0].iter().mean();
    println!("{:?}", mean);

    let mean: f64 = std::array::IntoIter::new([-1.0, 2.0, 1.0]).mean();
    println!("{:?}", mean);
}
You can do it like this, for example:
pub trait Mean {
    fn mean(self) -> f64;
}

impl<F, T> Mean for T
where
    T: Iterator<Item = F>,
    F: std::borrow::Borrow<f64>,
{
    fn mean(self) -> f64 {
        self.zip(1..)
            .fold(0., |s, (e, i)| (*e.borrow() + s * (i - 1) as f64) / i as f64)
    }
}

fn main() {
    assert_eq!([1f64, 2f64, 3f64, 4f64, 5f64].iter().mean(), 3.);
    assert_eq!(vec![1f64, 2f64, 3f64, 4f64, 5f64].iter().mean(), 3.);
    assert_eq!(vec![1f64, 2f64, 3f64, 4f64, 5f64].into_iter().mean(), 3.);
}
I used the Borrow trait so that the same impl supports iterators over both f64 and &f64.