I'm learning about named lifetimes in Rust, and I'm having trouble understanding what they represent when they are used in the implementation of a trait. Specifically, I'm having trouble understanding this piece of code from libserialize/hex.rs. I've removed some comments for brevity's sake.
pub trait ToHex {
fn to_hex(&self) -> ~str;
}
static CHARS: &'static[u8] = bytes!("0123456789abcdef");
impl<'a> ToHex for &'a [u8] {
fn to_hex(&self) -> ~str {
let mut v = slice::with_capacity(self.len() * 2);
for &byte in self.iter() {
v.push(CHARS[(byte >> 4) as uint]);
v.push(CHARS[(byte & 0xf) as uint]);
}
unsafe {
str::raw::from_utf8_owned(v)
}
}
}
I understand the 'static lifetime in the CHARS definition, but I'm stumped on the lifetime defined in the ToHex implementation. What do named lifetimes represent in the implementation of a trait?
In that particular case, not much. &[u8] is not a fully specified type because the lifetime is missing, and implementations must be for fully specified types, so the implementation is parameterised over an arbitrary lifetime 'a (arbitrary because the generic parameter is otherwise unconstrained).
In this case, you don't use it again. There are cases where you will, however: when you wish to constrain a function argument or return value to the same lifetime.
You can then write things like this:
impl<'a, T> ImmutableVector<'a, T> for &'a [T] {
fn head(&self) -> Option<&'a T> {
if self.len() == 0 { None } else { Some(&self[0]) }
}
…
}
That means that the return value has the lifetime 'a: the lifetime of the underlying slice, rather than of the shorter &self borrow.
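To make that concrete, here is a small self-contained sketch (the trait is made up for illustration, a trimmed stand-in for the old ImmutableVector): because the returned reference borrows from the slice's data for 'a rather than from the &self borrow, it can outlive the local reference it was obtained through.
trait Head<'a, T> {
    fn head(&self) -> Option<&'a T>;
}

impl<'a, T> Head<'a, T> for &'a [T] {
    fn head(&self) -> Option<&'a T> {
        if self.is_empty() { None } else { Some(&self[0]) }
    }
}

fn main() {
    let data = [1, 2, 3];
    let first = {
        let slice: &[i32] = &data;
        slice.head() // the borrow of `slice` ends with this block...
    };
    // ...but the result is still usable: it borrows `data` (for 'a),
    // not the short-lived `&slice`.
    assert_eq!(first, Some(&1));
}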
Incidentally, just to mess things up, the lifetime could be written manually on each function:
impl<'a, T> ImmutableVector<'a, T> for &'a [T] {
fn head<'a>(&'a self) -> Option<&'a T> {
if self.len() == 0 { None } else { Some(&self[0]) }
}
…
}
… and that demonstrates that having to specify the lifetime of the type you are implementing for exists just so that the type is indeed fully specified. Declaring it once on the impl also saves you from writing it out on every function inside that uses that lifetime.
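The same structure carries over to modern Rust; here is a rough present-day sketch of the same impl (a sketch for illustration, not the actual code of any current crate):
const CHARS: &[u8] = b"0123456789abcdef";

pub trait ToHex {
    fn to_hex(&self) -> String;
}

// The impl is still parameterised over an otherwise unconstrained 'a,
// purely so that &'a [u8] is a fully specified type.
impl<'a> ToHex for &'a [u8] {
    fn to_hex(&self) -> String {
        let mut v = Vec::with_capacity(self.len() * 2);
        for &byte in self.iter() {
            v.push(CHARS[(byte >> 4) as usize]);
            v.push(CHARS[(byte & 0xf) as usize]);
        }
        // Safe because CHARS only contains ASCII bytes.
        String::from_utf8(v).unwrap()
    }
}

fn main() {
    let bytes: &[u8] = &[0xde, 0xad, 0xbe, 0xef];
    assert_eq!(bytes.to_hex(), "deadbeef");
}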
I have a trait Atom that has many associated types, of which one is an owned version OP and the other is a borrowed version P of essentially the same data. I have a function to_pow_view that creates a view from an owned version, and I have an equality operator.
Below is an attempt:
pub trait Atom: PartialEq {
// variants truncated for this example
type P<'a>: Pow<'a, R = Self>;
type OP: OwnedPow<R = Self>;
}
pub trait Pow<'a>: Clone + PartialEq {
type R: Atom;
}
#[derive(Debug, Copy, Clone)]
pub enum AtomView<'a, R: Atom> {
Pow(R::P<'a>),
}
#[derive(Debug, Copy, Clone, PartialEq)]
pub enum OwnedAtom<R: Atom> {
Pow(R::OP),
}
pub trait OwnedPow {
type R: Atom;
fn some_mutable_fn(&mut self);
fn to_pow_view<'a>(&'a self) -> <Self::R as Atom>::P<'a>;
// compiler said I should add 'a: 'b
fn test<'a: 'b, 'b>(&'a mut self, other: <Self::R as Atom>::P<'b>) {
if self.to_pow_view().eq(&other) {
self.some_mutable_fn();
}
}
}
impl<R: Atom> OwnedAtom<R> {
// compiler said I should add 'a: 'b, why?
pub fn eq<'a: 'b, 'b>(&'a self, other: AtomView<'b, R>) -> bool {
let a: AtomView<'_, R> = match self {
OwnedAtom::Pow(p) => {
let pp = p.to_pow_view();
AtomView::Pow(pp)
}
};
match (&a, &other) {
(AtomView::Pow(l0), AtomView::Pow(r0)) => l0 == r0,
}
}
}
// implementation
#[derive(Debug, Copy, Clone, PartialEq)]
struct Rep {}
impl Atom for Rep {
type P<'a> = PowD<'a>;
type OP = OwnedPowD;
}
#[derive(Debug, Copy, Clone, PartialEq)]
struct PowD<'a> {
data: &'a [u8],
}
impl<'a> Pow<'a> for PowD<'a> {
type R = Rep;
}
struct OwnedPowD {
data: Vec<u8>,
}
impl OwnedPow for OwnedPowD {
type R = Rep;
fn some_mutable_fn(&mut self) {
todo!()
}
fn to_pow_view<'a>(&'a self) -> <Self::R as Atom>::P<'a> {
PowD { data: &self.data }
}
}
fn main() {}
This code gives the error:
27 | fn test<'a: 'b, 'b>(&'a mut self, other: <Self::R as Atom>::P<'b>) {
| -- lifetime `'b` defined here
28 | if self.to_pow_view().eq(&other) {
| ------------------
| |
| immutable borrow occurs here
| argument requires that `*self` is borrowed for `'b`
29 | self.some_mutable_fn();
| ^^^^^^^^^^^^^^^^^^^^^^ mutable borrow occurs here
I expect this to work, since the immutable borrow should be dropped right after the eq function evaluates.
Something about the lifetimes in this code is wrong, already in the equality function eq: I would expect no relation between 'a and 'b; they should just live long enough to do the comparison. However, the compiler tells me that I should add 'a: 'b and I do not understand why. The same thing happens for the function test.
These problems lead me to believe that the lifetimes in to_pow_view are wrong, but no modification I tried made it work (except for removing the 'a lifetime on &'a self, but then the OwnedPowD does not compile anymore).
Link to playground
Can someone help understand what is going on?
Here's the point: you constrained Pow to be PartialEq. However, PartialEq is PartialEq<Self>. In other words, Pow<'a> only implements PartialEq<Pow<'a>> for the same 'a.
This is usually the case for any type with a lifetime and PartialEq, so why does it usually work, but not here?
It usually works because if we compare T<'a> == T<'b>, the compiler can shrink the lifetimes to the shortest of the two and compare that.
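For instance, with an ordinary covariant struct this compiles, even though the derived PartialEq only relates equal lifetimes (a small illustration, not taken from the question's code):
#[derive(Debug, PartialEq)]
struct Wrapper<'a> {
    data: &'a [u8],
}

fn main() {
    let long_lived = Wrapper { data: &[1, 2, 3] };
    {
        let bytes = vec![1u8, 2, 3];
        let short_lived = Wrapper { data: &bytes[..] };
        // Wrapper is covariant over 'a, so `long_lived` can be viewed as a
        // Wrapper with the shorter lifetime and compared with the derived
        // PartialEq<Self>.
        assert_eq!(long_lived, short_lived);
    }
}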
However, Pow is a trait. Traits are invariant over their lifetime parameters; in other words, the lifetime must stay exactly as given, neither longer nor shorter. This is because implementors may contain invariant types, for example Cell<&'a i32>. Here's an example of how it could be exploited if this were allowed:
use std::cell::Cell;
struct Evil<'a> {
v: Cell<&'a i32>,
}
impl PartialEq for Evil<'_> {
fn eq(&self, other: &Self) -> bool {
// We asserted the lifetimes are the same, so we can do that.
self.v.set(other.v.get());
false
}
}
fn main() {
let foo = Evil { v: Cell::new(&123) };
{
let has_short_lifetime = 456;
_ = foo == Evil { v: Cell::new(&has_short_lifetime) };
}
// Now `foo` contains a dangling reference to `has_short_lifetime`!
dbg!(foo.v.get());
}
The above code does not compile, because Evil is invariant over 'a; but if it did compile, it would allow UB from safe code. For that reason traits, whose implementors may contain types such as Evil, are also invariant over their lifetimes.
Because of that, the compiler cannot shrink the lifetime of other. It can shrink the lifetime of self.to_pow_view() (in test(); eq() is similar), because that isn't really shrinking: it just picks a shorter lifetime for to_pow_view()'s self. But because PartialEq is only implemented between Pows with the same lifetime, the Pow resulting from self.to_pow_view() must have the same lifetime as other. From that it follows that (a) 'a must outlive 'b, so that 'b can be picked out of it, and (b) the comparison borrows self for potentially all of 'a (since 'a may equal 'b), so self is still borrowed immutably when we try to borrow it mutably for some_mutable_fn().
Now that we understand the problem, we can think about a solution. Either we require that Pow is covariant over 'a (that it can be shrunk), or we require that it implements PartialEq<Pow<'b>> for any lifetime 'b. The first is not expressible in Rust, but the second is:
pub trait Pow<'a>: Clone + for<'b> PartialEq<<Self::R as Atom>::P<'b>> {
type R: Atom;
}
This triggers an error, because the automatically-derived PartialEq does not satisfy this requirement:
error: implementation of `PartialEq` is not general enough
--> src/main.rs:73:10
|
73 | impl<'a> Pow<'a> for PowD<'a> {
| ^^^^^^^ implementation of `PartialEq` is not general enough
|
= note: `PartialEq<PowD<'0>>` would have to be implemented for the type `PowD<'a>`, for any lifetime `'0`...
= note: ...but `PartialEq` is actually implemented for the type `PowD<'1>`, for some specific lifetime `'1`
So we need to implement PartialEq manually:
impl<'a, 'b> PartialEq<PowD<'b>> for PowD<'a> {
fn eq(&self, other: &PowD<'b>) -> bool {
self.data == other.data
}
}
And now it works.
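As a quick check of what the manual impl buys, here is a sketch reusing the question's PowD (assuming PowD no longer derives PartialEq and uses the manual impl above instead): two views with unrelated lifetimes can now be compared directly, with no 'a: 'b bound.
fn main() {
    let long_lived = vec![1u8, 2, 3];
    let long_view = PowD { data: &long_lived[..] };
    {
        let short_lived = vec![1u8, 2, 3];
        let short_view = PowD { data: &short_lived[..] };
        // PartialEq<PowD<'b>> for PowD<'a> holds for *any* pair of
        // lifetimes, so no shrinking (and no 'a: 'b bound) is needed.
        assert!(long_view == short_view);
    }
}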
I am trying to write a trait which works with a database and represents something which can be stored. To do this, the trait inherits from other traits, including serde's Deserialize trait.
trait Storable<'de>: Serialize + Deserialize<'de> {
fn global_id() -> &'static [u8];
fn instance_id(&self) -> Vec<u8>;
}
struct Example {
a: u8,
b: u8
}
impl<'de> Storable<'de> for Example {
fn global_id() -> &'static [u8] { b"p" }
fn instance_id(&self) -> Vec<u8> { vec![self.a, self.b] }
}
Next, I am trying to write this data using a generic function:
pub fn put<'de, S: Storable>(&mut self, obj: &'de S) -> Result<(), String> {
...
let value = bincode::serialize(obj, bincode::Infinite);
...
db.put(key, value).map_err(|e| e.to_string())
}
However, I am getting the following error:
error[E0106]: missing lifetime specifier
--> src/database.rs:180:24
|
180 | pub fn put<'de, S: Storable>(&mut self, obj: &'de S) -> Result<(), String> {
| ^^^^^^^^ expected lifetime parameter
Minimal example on the playground.
How would I resolve this, or possibly avoid it altogether?
You have defined Storable with a generic parameter, in this case a lifetime. That means that the generic parameter has to be propagated throughout the entire application:
fn put<'de, S: Storable<'de>>(obj: &'de S) -> Result<(), String> { /* ... */ }
You can also decide to make the generic specific. That can be done with a concrete type or lifetime (e.g. 'static), or by putting it behind a trait object.
Serde also has a comprehensive page about deserializer lifetimes. It mentions that you can choose to use DeserializeOwned as well.
trait Storable: Serialize + DeserializeOwned { /* ... */ }
You can use the same concept as DeserializeOwned for your own trait as well:
trait StorableOwned: for<'de> Storable<'de> { }
fn put<'de, S: StorableOwned>(obj: &'de S) -> Result<(), String> { /* ... */ }
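Note that, as written, nothing implements StorableOwned yet; you would normally add a blanket impl so that every type implementing Storable for all lifetimes qualifies automatically (a one-line sketch, mirroring how serde's DeserializeOwned works):
impl<T> StorableOwned for T where T: for<'de> Storable<'de> {}
With that in place, Example from the question (assuming it also derives Serialize and Deserialize) satisfies StorableOwned and can be passed to put without naming any lifetime at the call site.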
You have the 'de lifetime in the wrong place -- you need it to specify the argument to Storable, not the lifetime of the reference obj.
Instead of
fn to_json<'de, S: Storable>(obj: &'de S) -> String {
use
fn to_json<'de, S: Storable<'de>>(obj: &S) -> String {
Playground.
The lifetime of obj doesn't actually matter here, because you're not returning any values derived from it. All you need to prove is that S implements Storable<'de> for some lifetime 'de.
If you want to eliminate the 'de altogether, you should use DeserializeOwned, as the other answer describes.
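Put together, the DeserializeOwned route might look roughly like this (a sketch assuming serde is a dependency; the body of put and the bincode calls are elided, since bincode's API has changed since this question was asked):
use serde::{de::DeserializeOwned, Serialize};

trait Storable: Serialize + DeserializeOwned {
    fn global_id() -> &'static [u8];
    fn instance_id(&self) -> Vec<u8>;
}

fn put<S: Storable>(obj: &S) -> Result<(), String> {
    // build the key, serialize `obj`, and hand both to the database here;
    // note that no 'de lifetime parameter is needed anywhere
    let _key = (S::global_id(), obj.instance_id());
    Ok(())
}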
I don't understand some basics in Rust. I want to compute a function sinc(x), with x being a scalar or a slice, which modifies the values in place. I can implement methods for both types, calling them with x.sinc(), but I find it more convenient (and easier to read in long formulas) to make a function, e.g. sinc(&mut x). So how do you do that properly?
pub trait ToSinc<T> {
fn sinc(self: &mut Self) -> &mut Self;
}
pub fn sinc<T: ToSinc<T>>(y: &mut T) -> &mut T {
y.sinc()
}
impl ToSinc<f64> for f64 {
fn sinc(self: &mut Self) -> &mut Self {
*self = // omitted
self
}
}
impl<'a> ToSinc<&'a mut [f64]> for &'a mut [f64] {
fn sinc(self: &mut Self) -> &mut Self {
for yi in (**self).iter_mut() { ... }
self
}
}
This seems to work, but isn't the "double indirection" in the last impl costly? I also thought about doing
pub trait ToSinc<T> {
fn sinc(self: Self) -> Self;
}
pub fn sinc<T: ToSinc<T>>(y: T) -> T {
y.sinc()
}
impl<'a> ToSinc<&'a mut f64> for &'a mut f64 {
fn sinc(self) -> Self {
*self = ...
self
}
}
impl<'a> ToSinc<&'a mut [f64]> for &'a mut [f64] {
fn sinc(self) -> Self {
for yi in (*self).iter_mut() { ... }
self
}
}
This also works; the difference is that if x is a &mut [f64] slice, I can call sinc(x) instead of sinc(&mut x). So I have the impression there is less indirection going on in the second one, and I think that's good. Am I on the wrong track here?
I find it highly unlikely that any difference from the double indirection won't be inlined away in this case, but you're right that the second is to be preferred.
You have ToSinc<T>, but don't use T. Drop the type parameter.
That said, ToSinc should almost certainly be by-value for f64s:
impl ToSinc for f64 {
fn sinc(self) -> Self {
...
}
}
You might also want ToSinc for &mut [T] where T: ToSinc.
You might well say, "ah - one of these is by value, and the other by mutable reference; isn't that inconsistent?"
The answer depends on what you actually intend the trait to be used as.
An interface for sinc-able types
If your interface represents those types that you can run sinc over, as traits of this kind are intended to be used, the goal would be to write functions
fn do_stuff<T: ToSinc>(value: T) { ... }
Now note that the interface is by-value. ToSinc takes self and returns Self: that is a value-to-value function. In fact, even when T is instantiated to some mutable reference, like &mut [f64], the function is unable to observe any mutation to the underlying memory.
In essence, these functions treat the underlying memory as an allocation source, and do value transformations on the data held in those allocations, much like a Box → Box operation is a by-value transformation of heap memory. Only the caller is able to observe mutations to the memory, but even then, implementations which treat their input as a value type will return a pointer that removes the need to look at the original memory. The caller can just treat the source data as opaque, in the same way that an allocator is.
Operations which depend on mutability, like writing to buffers, should probably not be using such an interface. Sometimes to support these cases it makes sense to build a mutating basis and a convenient by-value accessor. ToString is an interesting example of this, as it's just a wrapper over Display.
pub trait ToSinc: Sized {
fn sinc_in_place(&mut self);
fn sinc(mut self) -> Self {
self.sinc_in_place();
self
}
}
where impls mostly just implement sinc_in_place and users tend to prefer sinc.
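For this question that might look like the following sketch (the formula is the usual unnormalized sinc, sin(x)/x with value 1 at 0; names are illustrative):
impl ToSinc for f64 {
    fn sinc_in_place(&mut self) {
        *self = if *self == 0.0 { 1.0 } else { self.sin() / *self };
    }
}

// Slices get the in-place form element-wise; the by-value `sinc` comes for
// free from the default method, returning the same &mut [f64] it was given.
impl<'a> ToSinc for &'a mut [f64] {
    fn sinc_in_place(&mut self) {
        for y in self.iter_mut() {
            y.sinc_in_place();
        }
    }
}

fn main() {
    let y = 0.5_f64.sinc();      // users tend to prefer the by-value form
    let mut xs = [0.0, 0.5, 1.0];
    (&mut xs[..]).sinc();        // works for slices through the same trait
    assert_eq!(xs[1], y);
}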
As fakery for ad-hoc overloading
In this case, one doesn't care whether the trait is actually usable generically, or even whether it's consistent. sinc("foo") might do a song and dance, for all we care.
As such, although the trait is needed, it should be defined as weakly as possible:
pub trait Sincable {
type Out;
fn sinc(self) -> Self::Out;
}
Then your function is far more generic:
pub fn sinc<T: Sincable>(val: T) -> T::Out {
val.sinc()
}
To implement a by-value function you do
impl Sincable for f64 {
type Out = f64;
fn sinc(self) -> f64 {
0.4324
}
}
and a by-mut-reference one is just
impl<'a, T> Sincable for &'a mut [T]
where T: Sincable<Out=T> + Copy
{
type Out = ();
fn sinc(self) {
for i in self {
*i = sinc(*i);
}
}
}
since () is the unit type, the natural "no output" type. This acts just like ad-hoc overloading would.
Playpen example of emulated ad-hoc overloading.
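Call sites then read the way the question wanted (a short usage sketch against the Sincable trait and sinc function above):
fn main() {
    // scalar: by value, returns the new value
    let y = sinc(0.25_f64);

    // slice: by mutable reference, mutates in place and returns ()
    let mut xs = [0.25, 0.5, 1.0];
    sinc(&mut xs[..]);

    assert_eq!(xs[0], y);
}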
I'd like to do something along the following lines:
trait GetRef<'a> {
fn get_ref(&self) -> &'a [u8];
}
struct Foo<'a> {
buf: &'a [u8]
}
impl <'a> GetRef<'a> for Foo<'a> {
fn get_ref(&self) -> &'a [u8] {
&self.buf[1..]
}
}
struct Bar {
buf: Vec<u8>
}
// this is the part I'm struggling with:
impl <'a> GetRef<'a> for Bar {
fn get_ref(&'a self) -> &'a [u8] {
&self.buf[1..]
    }
}
The point of the explicit lifetime parameter in the GetRef trait is to allow the return value of get_ref() on a Foo object to outlive the Foo itself, by tying the return value's lifetime to the lifetime of Foo's buffer.
However, I haven't found a way to implement GetRef for Bar in a way that the compiler accepts. I've tried several variations of the above, but can't seem to find one that works. Is there any reason that this fundamentally cannot be done? If not, how can I do this?
Tying a trait lifetime variable to &self lifetime
Not possible.
Is there any reason that this fundamentally cannot be done?
Yes. An owning vector is something different from a borrowed slice. Your trait GetRef only makes sense for things that already represent a "loan" and don't own the slice. For an owning type like Bar you can't safely return a borrowed slice that outlives Self. That's exactly what the borrow checker prevents, to avoid dangling pointers.
What you tried to do is to link the lifetime parameter to the lifetime of Self. But the lifetime of Self is not a property of its type. It just depends on the scope this value was defined in. And that's why your approach cannot work.
Another way of looking at it is: In a trait you have to be explicit about whether Self is borrowed by a method and its result or not. You defined the GetRef trait to return something that is not linked to Self w.r.t. lifetimes. So, no borrowing. So, it's not implementable for types that own the data. You can't create a borrowed slice referring to a Vec's elements without borrowing the Vec.
If not, how can I do this?
Depends on what exactly you mean by “this”. If you want to write a “common denominator” trait that can be implemented for both borrowed and owning slices, you have to do it like this:
trait GetRef {
fn get_ref(&self) -> &[u8];
}
The meaning of this trait is that get_ref borrows Self and returns a kind of “loan” because of the current lifetime elision rules. It's equivalent to the more explicit form
trait GetRef {
fn get_ref<'s>(&'s self) -> &'s [u8];
}
It can be implemented for both types now:
impl<'a> GetRef for Foo<'a> {
fn get_ref(&self) -> &[u8] { &self.buf[1..] }
}
impl GetRef for Bar {
fn get_ref(&self) -> &[u8] { &self.buf[1..] }
}
You could give &self and the result different lifetimes in your trait, like this:
trait GetRef<'a, 'b> {
fn get_ref(&'b self) -> &'a [u8];
}
struct Foo<'a> {
buf: &'a [u8]
}
impl <'a, 'b> GetRef<'a, 'b> for Foo<'a> {
fn get_ref(&'b self) -> &'a [u8] {
&self.buf[1..]
}
}
struct Bar {
buf: Vec<u8>
}
// Bar, however, cannot contain anything that outlives itself
impl<'a> GetRef<'a, 'a> for Bar {
fn get_ref(&'a self) -> &'a [u8] {
&self.buf[1..]
}
}
fn main() {
let a = vec!(1 as u8, 2, 3);
let b = a.clone();
let tmp;
{
let x = Foo{buf: &a};
tmp = x.get_ref();
}
{
let y = Bar{buf: b};
// Bar's buf cannot outlive Bar
// tmp = y.get_ref();
}
}
In theory, Dynamically-Sized Types (DST) have landed and we should now be able to use dynamically sized type instances. Practically speaking, I can neither make it work, nor understand the tests around it.
Everything seems to revolve around the Sized? keyword... but how exactly do you use it?
I can put some types together:
// Note that this code example predates Rust 1.0
// and is no longer syntactically valid
trait Foo for Sized? {
fn foo(&self) -> u32;
}
struct Bar;
struct Bar2;
impl Foo for Bar { fn foo(&self) -> u32 { return 9u32; }}
impl Foo for Bar2 { fn foo(&self) -> u32 { return 10u32; }}
struct HasFoo<Sized? X> {
pub f:X
}
...but how do I create an instance of HasFoo, which is a DST, holding either a Bar or a Bar2?
Attempting to do so always seems to result in:
<anon>:28:17: 30:4 error: trying to initialise a dynamically sized struct
<anon>:28 let has_foo = &HasFoo {
I understand broadly speaking that you can't have a bare dynamically sized type; you can only interface with one through a pointer, but I can't figure out how to do that.
Disclaimer: these are just the results of a few experiments I did, combined with reading Niko Matsakis's blog.
DSTs are types where the size is not necessarily known at compile time.
Before DSTs
A slice like [i32] or a bare trait like IntoIterator was not a valid type for a value (for example a struct field), because it does not have a known size.
A struct could look like this:
// [i32; 2] is a fixed-sized vector with 2 i32 elements
struct Foo {
f: [i32; 2],
}
or like this:
// & is basically a pointer.
// The compiler always knows the size of a
// pointer on a specific architecture, so whatever
// size the [i32] has, its address (the pointer) is
// a statically-sized type too
struct Foo2<'a> {
f: &'a [i32],
}
but not like this:
// f is (statically) unsized, so Foo is unsized too
struct Foo {
f: [i32],
}
This was true for enums and tuples too.
With DSTs
You can declare a struct (or enum or tuple) like Foo above, containing an unsized type. A type containing an unsized type will be unsized too.
While defining Foo was easy, creating an instance of Foo is still hard and subject to change. Since you technically can't create an unsized value directly, you have to create a sized counterpart first. For example (imagining Foo were generic over its field type), Foo { f: [1, 2, 3] } would be a Foo<[i32; 3]>, which has a statically known size, and you then need some plumbing to let the compiler coerce it into its statically unsized counterpart Foo<[i32]>. The way to do this in safe and stable Rust is still being worked on as of Rust 1.5 (here is the RFC for DST coercions for more info).
Luckily, defining a new DST is not something you will be likely to do, unless you are creating a new type of smart pointer (like Rc), which should be a rare enough occurrence.
Imagine Rc is defined like our Foo above. Since it has all the plumbing to do the coercion from sized to unsized, it can be used to do this:
use std::rc::Rc;
trait Foo {
fn foo(&self) {
println!("foo")
}
}
struct Bar;
impl Foo for Bar {}
fn main() {
let data: Rc<Foo> = Rc::new(Bar);
// we're creating a statically typed Bar and coercing it
// (via the `: Rc<Foo>` annotation on the left-hand side)
// to its unsized, bare-trait counterpart.
// Foo (spelled `dyn Foo` today) is a trait object type,
// so the pointed-to value has no statically known size
data.foo();
}
playground example
?Sized bound
Since you're unlikely to create a new DST, what are DSTs useful for in your everyday Rust coding? Most frequently, they let you write generic code that works both on sized types and on their existing unsized counterparts. Most often these will be Vec<T> / [T] slices or String / str.
The way you express this is through the ?Sized "bound". ?Sized is in some ways the opposite of a bound; it actually says that T can be either sized or unsized, so it widens the possible types we can use, instead of restricting them the way bounds typically do.
Contrived example time! Let's say that we have a FooSized struct that just wraps a reference and a simple Print trait that we want to implement for it.
struct FooSized<'a, T>(&'a T)
where
T: 'a;
trait Print {
fn print(&self);
}
We want to define a blanket impl for all the wrapped T's that implement Display.
impl<'a, T> Print for FooSized<'a, T>
where
T: 'a + fmt::Display,
{
fn print(&self) {
println!("{}", self.0)
}
}
Let's try to make it work:
// Does not compile: "hello" is a &'static str, so T would have to be str,
// which is not Sized
// let h_s = FooSized("hello");
// h_s.print();
// to make it work we wrap a &str (a sized type) instead of a str itself
let s = "hello"; // &'static str
let h_s = FooSized(&s); // FooSized<'_, &str>
h_s.print();
Eh... this is awkward... Luckily we have a way to generalize the struct to work directly with str (and unsized types in general): ?Sized
//same as before, only added the ?Sized bound
struct Foo<'a, T: ?Sized>(&'a T)
where
T: 'a;
impl<'a, T: ?Sized> Print for Foo<'a, T>
where
T: 'a + fmt::Display,
{
fn print(&self) {
println!("{}", self.0)
}
}
now this works:
let h = Foo("hello");
h.print();
playground
For a less contrived (but simple) actual example, you can look at the Borrow trait in the standard library.
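For instance, Borrow plus ?Sized is what lets a HashMap<String, V> be queried with a plain &str; here is a small sketch of that pattern (not the standard library's exact code):
use std::borrow::Borrow;
use std::collections::HashMap;
use std::hash::Hash;

// Q: ?Sized lets Q be str itself, so a map keyed by String accepts &str keys.
fn lookup<'m, K, V, Q>(map: &'m HashMap<K, V>, key: &Q) -> Option<&'m V>
where
    K: Borrow<Q> + Hash + Eq,
    Q: ?Sized + Hash + Eq,
{
    map.get(key)
}

fn main() {
    let mut map = HashMap::new();
    map.insert("answer".to_string(), 42);
    assert_eq!(lookup(&map, "answer"), Some(&42));
}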
Back to your question
trait Foo for ?Sized {
fn foo(&self) -> i32;
}
the for ?Sized syntax is now obsolete. It used to refer to the type of Self, declaring that Foo can be implemented by an unsized type, but this is now the default. Any trait can now be implemented for an unsized type, i.e. you can now have:
trait Foo {
fn foo(&self) -> i32;
}
//[i32] is unsized, but the compiler does not complain for this impl
impl Foo for [i32] {
fn foo(&self) -> i32 {
5
}
}
If you don't want your trait to be implementable for unsized types, you can use the Sized bound:
// now the impl Foo for [i32] is illegal
trait Foo: Sized {
fn foo(&self) -> i32;
}
To amend the example that Paolo Falabella has given, here is a different way of looking at it, using a struct with a named field.
struct Foo<'a, T>
where
T: 'a + ?Sized,
{
printable_object: &'a T,
}
impl<'a, T> Print for Foo<'a, T>
where
T: 'a + ?Sized + fmt::Display,
{
fn print(&self) {
println!("{}", self.printable_object);
}
}
fn main() {
let h = Foo {
printable_object: "hello",
};
h.print();
}
At the moment, to create a HasFoo storing a type-erased Foo you need to first create one with a fixed concrete type and then coerce a pointer to it to the DST form, that is
let has_foo: &HasFoo<Foo> = &HasFoo { f: Bar };
Calling has_foo.f.foo() then does what you expect.
In future these DST casts will almost certainly be possible with as, but for the moment coercion via an explicit type hint is required.
Here is a complete example based on huon's answer. The important trick is to make the type that you want to contain the DST a generic type where the generic need not be sized (via ?Sized). You can then construct a concrete value using Bar1 or Bar2 and then immediately convert it.
struct HasFoo<F: ?Sized = dyn Foo>(F);
impl HasFoo<dyn Foo> {
fn use_it(&self) {
println!("{}", self.0.foo())
}
}
fn main() {
// Could likewise use `&HasFoo` or `Rc<HasFoo>`, etc.
let ex1: Box<HasFoo> = Box::new(HasFoo(Bar1));
let ex2: Box<HasFoo> = Box::new(HasFoo(Bar2));
ex1.use_it();
ex2.use_it();
}
trait Foo {
fn foo(&self) -> u32;
}
struct Bar1;
impl Foo for Bar1 {
fn foo(&self) -> u32 {
9
}
}
struct Bar2;
impl Foo for Bar2 {
fn foo(&self) -> u32 {
10
}
}