I'm brand new to Rust, coming from Java, and I'm experiencing some difficulties with ownership and lifetimes. I'd like to do some formal calculus, but obviously I'm not doing things the right way... Can you please show me why?
In my code I define these:
pub trait Function {
    fn differentiate(&self) -> Box<dyn Function>;
}

pub struct Add<'a>(&'a Box<dyn Function>, &'a Box<dyn Function>);

impl<'a> Function for Add<'a> {
    fn differentiate(&self) -> Box<dyn Function> {
        let x = self.0.differentiate();
        let y = self.1.differentiate();
        let add = Add(&x, &y);
        Box::new(add)
    }
}
The compiler tells me I have a borrowing problem with x and y. I understand why, but I can't figure out how to solve it; I tried let x: Box<dyn Function + 'a> = ..., but then I got lifetime problems when defining add and on the last line:
expected `Box<(dyn Function + 'static)>`
found `Box<dyn Function>`
You cannot return an object that references local variables.
This is nothing special to Rust; it is like that in every language that has references (Java doesn't have this problem, because in Java everything lives behind a garbage-collected reference). Writing this in C/C++ would be undefined behaviour. The borrow checker is here to prevent undefined behaviour, so it rightfully complains.
Here is a wild guess of what you might have wanted to do.
I'm unsure why you use references here, so I removed them. Your code looks like Add should own its members.
pub trait Function {
    fn differentiate(&self) -> Box<dyn Function>;
}

pub struct Add(Box<dyn Function>, Box<dyn Function>);

impl Function for Add {
    fn differentiate(&self) -> Box<dyn Function> {
        let x = self.0.differentiate();
        let y = self.1.differentiate();
        let add = Add(x, y);
        Box::new(add)
    }
}
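For example, with a hypothetical leaf type Const (my own addition, not part of the question), this design can be exercised like so:
struct Const(f64);

impl Function for Const {
    fn differentiate(&self) -> Box<dyn Function> {
        Box::new(Const(0.0)) // the derivative of a constant is 0
    }
}

fn main() {
    let f = Add(Box::new(Const(1.0)), Box::new(Const(2.0)));
    let _df: Box<dyn Function> = f.differentiate();
}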
An alternative design would be to require differentiable functions to be clonable (because you'll probably want to be able to use them in different places), and avoid dynamic dispatch (and the indirection required by trait objects) altogether. Here is the implementation of two simple operations as an example.
trait Differentiable: Clone {
    type Output;
    fn differentiate(&self) -> Self::Output;
}

#[derive(Clone)]
struct Add<L, R>(L, R);

impl<L: Differentiable, R: Differentiable> Differentiable for Add<L, R> {
    type Output = Add<L::Output, R::Output>;
    fn differentiate(&self) -> Self::Output {
        Add(self.0.differentiate(), self.1.differentiate())
    }
}

#[derive(Clone)]
struct Mul<L, R>(L, R);

impl<L: Differentiable, R: Differentiable> Differentiable for Mul<L, R> {
    type Output = Add<Mul<L::Output, R>, Mul<L, R::Output>>;
    fn differentiate(&self) -> Self::Output {
        Add(
            Mul(self.0.differentiate(), self.1.clone()),
            Mul(self.0.clone(), self.1.differentiate()),
        )
    }
}
Note that this easily allows adding useful constraints, such as making them callable (if you actually want to be able to evaluate them). These, alongside the identity function and the constant function, should probably be enough for you to "create" polynomial calculus.
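As an illustration (my own sketch; Const and X are hypothetical leaf types standing in for the constant and identity functions):
#[derive(Clone)]
struct Const(f64); // f(x) = c

#[derive(Clone)]
struct X; // f(x) = x

impl Differentiable for Const {
    type Output = Const;
    fn differentiate(&self) -> Const {
        Const(0.0)
    }
}

impl Differentiable for X {
    type Output = Const;
    fn differentiate(&self) -> Const {
        Const(1.0)
    }
}

fn main() {
    // d/dx (x * x) = 1 * x + x * 1
    let f = Mul(X, X);
    let _df = f.differentiate(); // Add(Mul(Const(1.0), X), Mul(X, Const(1.0)))
}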
The simplest fix is to remove the references, because your Box is already an owning smart pointer. So:
pub trait Function {
    fn differentiate(&self) -> Box<dyn Function>;
}

pub struct Add(Box<dyn Function>, Box<dyn Function>);

impl Function for Add {
    fn differentiate(&self) -> Box<dyn Function> {
        let x = self.0.differentiate();
        let y = self.1.differentiate();
        Box::new(Add(x, y))
    }
}
I think what you are trying to do is return a type after adding two types that implement differentiation.
There are a few different ways to think about this... Here's one:
pub trait Differentiable {
    type Result;
    fn differentiate(self) -> Self::Result;
}

pub struct Add<OP1, OP2>(OP1, OP2);

impl<OP1, OP2> Differentiable for Add<OP1, OP2>
where
    OP1: Differentiable,
    OP2: Differentiable,
{
    type Result = Add<OP1::Result, OP2::Result>;
    fn differentiate(self) -> Self::Result {
        let x = self.0.differentiate();
        let y = self.1.differentiate();
        Add(x, y)
    }
}
I'm trying to wrap an async function in a struct. For example:
use std::future::Future;

struct X;
struct Y;

async fn f(x: &X) -> Y {
    Y
}

struct MyStruct<F, Fut>(F)
where
    F: Fn(&X) -> Fut,
    Fut: Future<Output = Y>;

fn main() {
    MyStruct(f);
}
The compiler complains about this with the following (unhelpful) error:
error[E0308]: mismatched types
--> src/main.rs:16:5
|
16 | MyStruct(f);
| ^^^^^^^^ one type is more general than the other
|
= note: expected associated type `<for<'_> fn(&X) -> impl Future {f} as FnOnce<(&X,)>>::Output`
found associated type `<for<'_> fn(&X) -> impl Future {f} as FnOnce<(&X,)>>::Output`
Is something like this actually possible? As I understand it, f desugars to something like:
fn f<'a>(x: &'a X) -> impl Future<Output = Y> + 'a {
    Y
}
so I'd need to somehow express in MyStruct that Fut has the same lifetime as x.
I actually don't know much about async.
However, when it comes to trait bounds, typically fewer bounds are better. In other words, only declare those trait bounds that you really need.
In the case of a struct, as long as you don't need an associated type within your struct, you are mostly good without any bounds. This is pretty much what @George Glavan wrote in his answer.
When you add methods to your struct, you are more likely to use traits and thus require trait bounds. Sometimes it is useful to combine the trait bounds of multiple functions by declaring them on the impl block itself, though this has some restrictions. You should also consider whether each function really needs all these constraints.
For instance, consider the following code:
struct X;
struct Y;

struct MyStruct<F>(F);

impl<F> MyStruct<F> {
    pub fn new(f: F) -> Self {
        MyStruct(f)
    }

    pub fn invoke<'a, Fut>(&self) -> Fut
    where
        F: Fn(&'a X) -> Fut,
    {
        (self.0)(&X)
    }
}
I added a new and an invoke function. The former doesn't require any traits, thus it doesn't have trait bounds. The latter only calls the function, so it bounds F by Fn. And this is good enough, because in the end, the caller must already know what the return type is, i.e. whether it is some Future or not.
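For example, with a plain closure instead of an async fn (a minimal sketch continuing the code above):
fn main() {
    // The closure is higher-ranked over the reference's lifetime,
    // so it satisfies F: Fn(&'a X) -> Fut for any 'a.
    let s = MyStruct::new(|_x: &X| Y);
    let _y: Y = s.invoke();
}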
However, there are a few cases where one really needs additional trait bounds, which involve additional generics, such as a function's return type. In such a case, you can declare additional (phantom) generics on the struct, e.g.:
use std::future::Future;
use std::marker::PhantomData;

struct X;
struct Y;

struct MyStruct<F, Fut> {
    func: F,
    _fut: PhantomData<Fut>,
}

impl<'a, F, Fut> MyStruct<F, Fut>
where
    F: Fn(&'a X) -> Fut,
    Fut: Future<Output = Y> + Send + Sync + 'a,
{
    pub fn new(f: F) -> Self {
        MyStruct {
            func: f,
            _fut: PhantomData,
        }
    }

    pub fn invoke(&self) {
        (self.func)(&X);
    }
}
Notice that in this example the trait bounds apply to both new and invoke, and both are over-restricted. Still, you don't need to over-restrict the struct itself.
I want to define a lazy squared() method without unnecessary runtime overhead (no dyn keyword) that can be called on any Iterator<Item = u8> and returns another Iterator<Item = u8>, like so:
fn main() {
    vec![1u8, 2, 3, 4, 5]
        .into_iter()
        .filter(|&x| x > 1)
        .squared()
        .filter(|&x| x < 20);
}
I know how to define squared() as a standalone function:
fn squared<I: Iterator<Item = u8>>(iter: I) -> impl Iterator<Item = u8> {
    iter.map(|x| x * x)
}
To define that method on Iterator<Item = u8> though, I have to first define a trait.
Here's where I struggle — traits cannot use the impl keyword in return values.
I'm looking for something like the following, which does not work:
trait Squarable<I: Iterator<Item = u8>> {
    fn squared(self) -> I;
}

impl<I, J> Squarable<I> for J
where
    I: Iterator<Item = u8>,
    J: Iterator<Item = u8>,
{
    fn squared(self) -> I {
        self.map(|x| x * x)
    }
}
I had many failed attempts at solving the problem, including changing the return type of squared to Map<u8, fn(u8) -> u8> and tinkering with IntoIterables, but nothing worked so far. Any help would be greatly appreciated!
First of all, your output iterator should probably be an associated type and not a trait parameter, since that type is an output of the trait (it's not something that the caller can control).
trait Squarable {
    type Output: Iterator<Item = u8>;
    fn squared(self) -> Self::Output;
}
That being said, there are a few different possible approaches to solve this problem, each with different advantages and disadvantages.
Using trait objects
The first is to use trait objects, e.g. dyn Iterator<Item = u8>, to erase the type at runtime. This comes at a slight runtime cost, but is definitely the simplest solution in stable Rust today:
trait Squarable {
    fn squared(self) -> Box<dyn Iterator<Item = u8>>;
}

impl<I: 'static + Iterator<Item = u8>> Squarable for I {
    fn squared(self) -> Box<dyn Iterator<Item = u8>> {
        Box::new(self.map(|x| x * x))
    }
}
Using a custom iterator type
In stable Rust, this is definitely the cleanest from the point of view of the user of the trait, but it takes a bit more code to implement because you need to write your own iterator type. For a simple map iterator this is pretty straightforward:
trait Squarable: Sized {
    fn squared(self) -> SquaredIter<Self>;
}

impl<I: Iterator<Item = u8>> Squarable for I {
    fn squared(self) -> SquaredIter<I> {
        SquaredIter(self)
    }
}

struct SquaredIter<I>(I);

impl<I: Iterator<Item = u8>> Iterator for SquaredIter<I> {
    type Item = u8;

    fn next(&mut self) -> Option<u8> {
        self.0.next().map(|x| x * x)
    }
}
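With this in place, calling it might look like this (a quick sketch):
fn main() {
    let squares: Vec<u8> = vec![1u8, 2, 3].into_iter().squared().collect();
    assert_eq!(squares, vec![1, 4, 9]);
}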
Using the explicit Map type
<I as Iterator>::map(f) has the type std::iter::Map<I, F>, so if the type F of the mapping function is known, we can use that type explicitly, at no runtime cost. This does expose the specific type as part of the function's return type though, which makes it harder to replace in the future without breaking dependent code. In most cases the concrete function type cannot even be named; here we can use F = fn(u8) -> u8 because the closure does not capture any state, but for capturing closures that won't work.
trait Squarable: Sized {
    fn squared(self) -> std::iter::Map<Self, fn(u8) -> u8>;
}

impl<I: Iterator<Item = u8>> Squarable for I {
    fn squared(self) -> std::iter::Map<Self, fn(u8) -> u8> {
        self.map(|x| x * x)
    }
}
Using an associated type
An alternative to the above is to give the trait an associated type. This still has the restriction that the function type must be known, but it's a bit more general since the Map<...> type is tied to the implementation instead of the trait itself.
trait Squarable {
    type Output: Iterator<Item = u8>;
    fn squared(self) -> Self::Output;
}

impl<I: Iterator<Item = u8>> Squarable for I {
    type Output = std::iter::Map<Self, fn(u8) -> u8>;

    fn squared(self) -> Self::Output {
        self.map(|x| x * x)
    }
}
Using impl in associated type
This is similar to "Using an associated type" above, but you can hide the actual type entirely, apart from the fact that it is an iterator. I personally think this is the preferable solution, but unfortunately it is still unstable (it depends on the type_alias_impl_trait feature), so you can only use it in nightly Rust.
#![feature(type_alias_impl_trait)]

trait Squarable {
    type Output: Iterator<Item = u8>;
    fn squared(self) -> Self::Output;
}

impl<I: Iterator<Item = u8>> Squarable for I {
    type Output = impl Iterator<Item = u8>;

    fn squared(self) -> Self::Output {
        self.map(|x| x * x)
    }
}
When implementing a trait, we often use the keyword self; a sample is as follows. I want to understand what each use of self in this code sample represents.
struct Circle {
    x: f64,
    y: f64,
    radius: f64,
}

trait HasArea {
    fn area(&self) -> f64; // first self: &self is equivalent to &HasArea
}

impl HasArea for Circle {
    fn area(&self) -> f64 { // second self: &self is equivalent to &Circle
        std::f64::consts::PI * (self.radius * self.radius) // third self
    }
}
My understanding is:
The first self: &self is equivalent to &HasArea.
The second self: &self is equivalent to &Circle.
Is the third self representing Circle? If so, if self.radius was used twice, will that cause a move problem?
Additionally, more examples to show the different usage of the self keyword in varying context would be greatly appreciated.
You're mostly right.
The way I think of it is that in a method signature, self is a shorthand:
impl S {
    fn foo(self) {}      // equivalent to fn foo(self: S)
    fn foo(&self) {}     // equivalent to fn foo(self: &S)
    fn foo(&mut self) {} // equivalent to fn foo(self: &mut S)
}
It's not actually equivalent since self is a keyword and there are some special rules (for example for lifetime elision), but it's pretty close.
Back to your example:
impl HasArea for Circle {
    fn area(&self) -> f64 { // like fn area(self: &Circle) -> ...
        std::f64::consts::PI * (self.radius * self.radius)
    }
}
The self in the body is of type &Circle. You can't move out of a reference, so self.radius can't be a move even once. In this case radius implements Copy, so it's just copied out instead of moved. If it were a more complex type which didn't implement Copy then this would be an error.
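To illustrate the non-Copy case (my own example, not from the question):
struct Person {
    name: String, // String does not implement Copy
}

impl Person {
    fn name_copy(&self) -> String {
        // `self` is `&Person`. Returning `self.name` directly would fail with
        // "cannot move out of `self.name` which is behind a shared reference";
        // a non-Copy field has to be cloned (or borrowed) instead.
        self.name.clone()
    }
}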
You are mostly correct.
There is a neat trick to let the compiler tell you the type of variables rather than trying to infer them: let () = ...;.
Using the Playground I get for the 1st case:
9 | let () = self;
| ^^ expected &Self, found ()
and for the 2nd case:
16 | let () = self;
| ^^ expected &Circle, found ()
The first case is actually special, because HasArea is not a type, it's a trait.
So what is self? It's nothing concrete yet.
Said another way, it stands in for any possible concrete type that may implement HasArea. And thus the only guarantee we have about this type is that it provides at least the interface of HasArea.
The key point is that you can place additional bounds. For example you could say:
trait HasArea: Debug {
    fn area(&self) -> f64;
}
And in this case, Self: HasArea + Debug, meaning that self provides both the interfaces of HasArea and Debug.
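For example, a default method can then rely on both interfaces (describe is my own addition for illustration):
use std::fmt::Debug;

trait HasArea: Debug {
    fn area(&self) -> f64;

    fn describe(&self) {
        // Both the Debug and the HasArea interfaces are usable on `self` here.
        println!("{:?} has area {}", self, self.area());
    }
}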
The second and third cases are much easier: we know the exact concrete type for which the HasArea trait is implemented. It's Circle.
Therefore, the type of self in the fn area(&self) method is &Circle.
Note that if the type of the parameter is &Circle then it follows that in all its uses in the method it is &Circle. Rust has static typing (and no flow-dependent typing) so the type of a given binding does not change during its lifetime.
Things can get more complicated, however.
Imagine that you have two traits:
// Point was not defined in the original; a minimal stand-in is assumed here.
#[derive(Clone, Copy)]
struct Point(f64, f64);

struct Segment(Point, Point);

impl Segment {
    fn length(&self) -> f64 {
        // Euclidean distance between the two end points
        ((self.1).0 - (self.0).0).hypot((self.1).1 - (self.0).1)
    }
}

trait Segmentify {
    fn segmentify(&self) -> Vec<Segment>;
}

trait HasPerimeter {
    fn has_perimeter(&self) -> f64;
}
Then, you can implement HasPerimeter automatically for all shapes that can be broken down into a sequence of segments.
impl<T> HasPerimeter for T
where
    T: Segmentify,
{
    // Note: there is a "functional" implementation if you prefer
    fn has_perimeter(&self) -> f64 {
        let mut total = 0.0;
        for s in self.segmentify() {
            total += s.length();
        }
        total
    }
}
What is the type of self here? It's &T.
What's T? Any type that implements Segmentify.
And therefore, all we know about T is that it implements Segmentify and HasPerimeter, and nothing else (we could not use println!("{:?}", self); because T is not guaranteed to implement Debug).
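For instance, a hypothetical Triangle then gets HasPerimeter for free just by implementing Segmentify (my own illustration, using the Point stand-in from above):
struct Triangle(Point, Point, Point);

impl Segmentify for Triangle {
    fn segmentify(&self) -> Vec<Segment> {
        vec![
            Segment(self.0, self.1),
            Segment(self.1, self.2),
            Segment(self.2, self.0),
        ]
    }
}

fn main() {
    let t = Triangle(Point(0.0, 0.0), Point(3.0, 0.0), Point(0.0, 4.0));
    println!("perimeter = {}", t.has_perimeter()); // 3 + 5 + 4 = 12
}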
I am using the From trait to convert an i32 to a structure of my own. I use this conversion in a generic function do_stuff that doesn't compile:
use std::convert::*;

struct StoredValue {
    val: i32,
}

impl From<i32> for StoredValue {
    fn from(value: i32) -> StoredValue {
        return StoredValue { val: value };
    }
}

/// Generic function that stores a value and does stuff
fn do_stuff<T>(value: T) -> i32 where T: From<T> {
    let result = StoredValue::from(value);
    // .... do stuff and
    return 0;
}

fn main() {
    let result = do_stuff(0); // call with explicit type
}
and the compilation error:
main.rs:15:18: 15:35 error: the trait `core::convert::From<T>` is not implemented for the type `StoredValue` [E0277]
main.rs:15 let result = StoredValue::from(value);
Does it make sense to implement a generic version of From<T> for StoredValue?
Your generic function is saying, "I accept any type that implements being created from itself." Which isn't what you want.
There's a few things you could be wanting to say:
"I accept any type that can be converted into an i32 so that I can create a StoredValue." This works because you know StoredValue implements From<i32>.
fn do_stuff<T>(value: T) -> i32 where T: Into<i32> {
    let result = StoredValue::from(value.into());
    // ...
}
Or, "I accept any type that can be converted into a StoredValue." There is a handy trait that goes along with the From<T> trait, and it's called Into<T>.
fn do_stuff<T>(value: T) -> i32 where T: Into<StoredValue> {
    let result = value.into();
    // ...
}
The way to remember how/when to use these two traits that go hand in hand is this:
Use Into<T> when you know what you want the end result to be, i.e. from ? -> T.
Use From<T> when you know what you have to start with, but not the end result, i.e. T -> ?.
The reason these two traits can go hand in hand together, is if you have a T that implements Into<U>, and you have V that implements From<U> you can get from a T->U->V.
The Rust std lib has such a conversion already baked in: if a type T implements From<U>, then U implements Into<T>.
Because of this, when you implemented From<i32> for StoredValue you can assume there is an Into<StoredValue> impl for i32.
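Putting the second variant together (a compilable sketch; the body is a placeholder since the original elided it):
struct StoredValue {
    val: i32,
}

impl From<i32> for StoredValue {
    fn from(value: i32) -> StoredValue {
        StoredValue { val: value }
    }
}

fn do_stuff<T>(value: T) -> i32 where T: Into<StoredValue> {
    let stored: StoredValue = value.into();
    stored.val
}

fn main() {
    // i32 gets Into<StoredValue> for free from the blanket impl in std.
    assert_eq!(do_stuff(42), 42);
}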
To make do_stuff() work, it must be possible to convert type T into StoredValue. So its declaration should be
fn do_stuff<T>(value: T) -> i32 where StoredValue: From<T> {
Edit: I agree with Shepmaster that this would be better written as
fn do_stuff<T>(value: T) -> i32 where T: Into<StoredValue> {
    let result = value.into();
    // ...
}
Since there is a generic implementation that turns T: From<U> into U: Into<T>, this allows using both kinds of conversions, those implementing From and those implementing Into. With my first version, only conversions implementing From would work.
I don't understand some basics in Rust. I want to compute a function sinc(x), with x being a scalar or a slice, which modifies the values in place. I can implement methods for both types, calling them with x.sinc(), but I find it more convenient (and easier to read in long formulas) to make a function, e.g. sinc(&mut x). So how do you do that properly?
pub trait ToSinc<T> {
    fn sinc(self: &mut Self) -> &mut Self;
}

pub fn sinc<T: ToSinc<T>>(y: &mut T) -> &mut T {
    y.sinc()
}

impl ToSinc<f64> for f64 {
    fn sinc(self: &mut Self) -> &mut Self {
        *self = // omitted
        self
    }
}

impl<'a> ToSinc<&'a mut [f64]> for &'a mut [f64] {
    fn sinc(self: &mut Self) -> &mut Self {
        for yi in (**self).iter_mut() { ... }
        self
    }
}
This seems to work, but isn't the "double indirection" in the last impl costly? I also thought about doing
pub trait ToSinc<T> {
    fn sinc(self: Self) -> Self;
}

pub fn sinc<T: ToSinc<T>>(y: T) -> T {
    y.sinc()
}

impl<'a> ToSinc<&'a mut f64> for &'a mut f64 {
    fn sinc(self) -> Self {
        *self = ...
        self
    }
}

impl<'a> ToSinc<&'a mut [f64]> for &'a mut [f64] {
    fn sinc(self) -> Self {
        for yi in (*self).iter_mut() { ... }
        self
    }
}
This also works, the difference is that if x is a &mut [f64] slice, I can call sinc(x) instead of sinc(&mut x). So I have the impression there is less indirection going on in the second one, and I think that's good. Am I on the wrong track here?
I find it highly unlikely that any overhead from the double indirection would survive inlining in this case, but you're right that the second is to be preferred.
You have ToSinc<T>, but don't use T. Drop the type parameter.
That said, ToSinc should almost certainly be by-value for f64s:
impl ToSinc for f64 {
    fn sinc(self) -> Self {
        ...
    }
}
You might also want ToSinc for &mut [T] where T: ToSinc.
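A sketch of that impl (assuming the trait after dropping the parameter, i.e. trait ToSinc { fn sinc(self) -> Self; }, plus a Copy bound I added so elements can be taken by value):
impl<'a, T: ToSinc + Copy> ToSinc for &'a mut [T] {
    fn sinc(self) -> Self {
        for y in self.iter_mut() {
            // copy the element out, transform it by value, write it back
            *y = (*y).sinc();
        }
        self
    }
}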
You might well say, "ah - one of these is by value, and the other by mutable reference; isn't that inconsistent?"
The answer depends on what you actually intend the trait to be used as.
An interface for sinc-able types
If your interface represents those types that you can run sinc over, as traits of this kind are intended to be used, the goal would be to write functions
fn do_stuff<T: ToSinc>(value: T) { ... }
Now note that the interface is by-value. ToSinc takes self and returns Self: that is a value-to-value function. In fact, even when T is instantiated to some mutable reference, like &mut [f64], the function is unable to observe any mutation to the underlying memory.
In essence, these functions treat the underlying memory as an allocation source, and do value transformations on the data held in these allocations, much like a Box → Box operation is a by-value transformation of heap memory. Only the caller is able to observe mutations to the memory, but even then implementations which treat their input as a value type will return a pointer that prevents needing to access the data in this memory. The caller can just treat the source data as opaque in the same way that an allocator is.
Operations which depend on mutability, like writing to buffers, should probably not be using such an interface. Sometimes to support these cases it makes sense to build a mutating basis and a convenient by-value accessor. ToString is an interesting example of this, as it's just a wrapper over Display.
pub trait ToSinc: Sized {
    fn sinc_in_place(&mut self);

    fn sinc(mut self) -> Self {
        self.sinc_in_place();
        self
    }
}
where impls mostly just implement sinc_in_place and users tend to prefer sinc.
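For instance, an impl for f64 might look like this (a sketch; the question omitted the actual formula, so the usual sin(x)/x with sinc(0) = 1 is assumed):
impl ToSinc for f64 {
    fn sinc_in_place(&mut self) {
        *self = if *self == 0.0 { 1.0 } else { self.sin() / *self };
    }
}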
As fakery for ad-hoc overloading
In this case, one doesn't care if the trait is actually usable generically, or even that it's consistent. sinc("foo") might do a song and dance, for all we care.
As such, although the trait is needed it should be defined as weakly as possible:
pub trait Sincable {
    type Out;
    fn sinc(self) -> Self::Out;
}
Then your function is far more generic:
pub fn sinc<T: Sincable>(val: T) -> T::Out {
    val.sinc()
}
To implement a by-value function you do
impl Sincable for f64 {
    type Out = f64;
    fn sinc(self) -> f64 {
        0.4324
    }
}
and a by-mut-reference one is just
impl<'a, T> Sincable for &'a mut [T]
where
    T: Sincable<Out = T> + Copy,
{
    type Out = ();
    fn sinc(self) {
        for i in self {
            *i = sinc(*i);
        }
    }
}
since () is the default empty type. This acts just like an ad-hoc overloading would.
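Usage then looks like this (a small sketch building on the impls above):
fn main() {
    // By value on a scalar:
    let y = sinc(0.25_f64);
    // In place on a slice, via the &mut [T] impl (which returns ()):
    let mut buf = [0.0_f64, 0.25, 0.5];
    sinc(&mut buf[..]);
    println!("{} {:?}", y, buf);
}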
Playpen example of emulated ad-hoc overloading.