Using rand::Distribution<T> as a field in a struct

I'm new to Rust, and still getting used to all the idioms in the language. I want to create an object that uses a random distribution, so I imported the rand and rand_distr crates, and am defining the object as follows:
pub struct Bandit<const K: usize> {
levers: [f32; K],
dist: Box<dyn Distribution<f32>>,
}
The Distribution trait is defined in the rand crate.
The above code is giving me the following error:
error[E0038]: the trait `rand_distr::Distribution` cannot be made into an object
--> src/testbed.rs:8:15
|
8 | dist: Box<dyn Distribution<f32>>,
| ^^^^^^^^^^^^^^^^^^^^^ `rand_distr::Distribution` cannot be made into an object
|
note: for a trait to be "object safe" it needs to allow building a vtable to allow the call to be resolvable dynamically; for more information visit <https://doc.rust-lang.org/reference/items/traits.html#object-safety>
--> /Users/brendan/.cargo/registry/src/github.com-1ecc6299db9ec823/rand-0.8.5/src/distributions/distribution.rs:37:8
|
37 | fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> T;
| ^^^^^^ the trait cannot be made into an object because method `sample` has generic type parameters
My workaround for this is to define my own enum which I can map to a distribution later:
pub enum DistType {
Normal { mean: f32, variance: f32 },
Uniform,
}
impl DistType {
pub fn distribution(&self) -> Box<impl Distribution<f32>> {
match self {
DistType::Normal { mean, variance } => /* return normal */,
DistType::Uniform => /* return uniform */,
}
}
}
However, I don't like this solution, for the following reasons:
If I can't store the distribution object, it means I have to create one whenever I need to take random samples. I suspect this will create some performance issues.
If possible, I would like to use dyn instead of impl. There may be many more of these types of distributions.
I would prefer to avoid creating custom types like the enum I defined above because I would like to avoid unnecessary indirection.
Is there a way to persist dyn Trait objects that are not object safe?
Tried the dyn vs. impl keywords. Reviewed the semantics of each and definitely do not want impl in this case.
Tried searching for ways to persist traits that are not object safe and could not find anything
Looked for examples in the rand_distr crate, but all the examples create local distribution variables and use them immediately.

Generally, you cannot have a dyn Trait if one of the trait's methods has a generic type parameter. However, there is a pattern to handle this case, called type erasure.
The idea is that we create a new trait, ErasedDistribution, whose sample() method takes a &mut dyn Rng instead of a generic parameter R: Rng. Since every Rng can be used through a &mut dyn Rng, we can implement our trait for every Distribution out there. And since &mut dyn Rng is itself an Rng, we can also implement Distribution for dyn ErasedDistribution. Now we can store a Box<dyn ErasedDistribution<f32>> and act as if it were a Box<dyn Distribution<f32>>: we can store any Distribution inside it, call Distribution's methods (such as sample_iter()) on it, or pass it to code that expects a Distribution.
The first step is to create our trait:
use rand::distributions::Distribution;
pub trait ErasedDistribution<T> {
fn sample(&self, rng: &mut dyn rand::RngCore) -> T;
}
Rng is not object safe, so we cannot pass &mut dyn Rng. However, we got lucky: Rng has RngCore as a supertrait, and RngCore is object-safe. Furthermore, there is a blanket implementation of Rng for every RngCore, so we can just take &mut dyn RngCore.
If this were not the case, our erasure work would be more involved (but not impossible): we would also have had to create an ErasedRng trait and take &mut dyn ErasedRng as a parameter.
The next step is to create a blanket implementation, so every Distribution will also implement ErasedDistribution:
impl<T, D: Distribution<T> + ?Sized> ErasedDistribution<T> for D {
fn sample(&self, rng: &mut dyn rand::RngCore) -> T {
<Self as Distribution<T>>::sample(self, rng)
}
}
The final step is to implement Distribution for dyn ErasedDistribution. I also implemented it for &dyn ErasedDistribution, because some of Distribution's methods take self by value and we can't move a dyn ErasedDistribution (we aren't covered by the blanket implementation of Distribution for references because it is limited to Sized types - I sent a PR to relax that):
impl<T> Distribution<T> for dyn ErasedDistribution<T> {
fn sample<R: rand::Rng + ?Sized>(&self, mut rng: &mut R) -> T {
<dyn ErasedDistribution<T> as ErasedDistribution<T>>::sample(self, &mut rng)
}
}
impl<T> Distribution<T> for &'_ dyn ErasedDistribution<T> {
fn sample<R: rand::Rng + ?Sized>(&self, mut rng: &mut R) -> T {
<dyn ErasedDistribution<T> as ErasedDistribution<T>>::sample(&**self, &mut rng)
}
}
Now you can store a Box<dyn ErasedDistribution<f32>>. You may need to take a reference explicitly before calling Distribution's methods, because Rust defaults to calling them on dyn ErasedDistribution instead of &dyn ErasedDistribution.

Related

Conflicting implementation error that only appears when type parameters are introduced (and disappears with helper trait)

Below I define three traits. For each trait, I
implement on &'a T where T implements the trait
implement on &'a usize.
In each case, however, I observe something slightly different.
Case 1
For the trait GetOneVer1 I observe no errors.
Case 2
For the trait GetOneVer2 I get an error:
conflicting implementations of trait `GetOneVer2<_>` for type `&usize`
downstream crates may implement trait `GetOneVer2<_>` for type `usize` (rustc E0119)
I think I understand this error -- the compiler is worried that in the future someone will implement the trait on usize and then that implementation will automatically extend to &'a usize because I implement the trait on &'a T where T implements the trait. That means we could potentially have two competing implementations of the same trait.
Case 3
For the trait GetOneVer3 I get no error. Note that this case is essentially identical to case 2. The only difference is that, in this case, I define an extra trait "InheritTrait" and require that T implements InheritTrait in my implementation on &'a T.
Questions
Why does case 2 produce errors whereas cases 1 and 3 do not? The only difference I see is that, in case 2, the trait has a generic type parameter, and in case 3 there are both a generic type parameter and a "helper" trait.
There's a commented line in case 3 that auto-implements InheritTrait on immutable references. Uncommenting this line produces no errors, and (I believe) achieves my ultimate goal of auto-implementing a trait on immutable references. Will this method (using a helper trait that has no methods or type parameters) work to bypass the problem posed by case 2, in general, or are there other subtleties I should worry about?
// ===============================================================================
// TRAIT 1: NO ERROR
// ===============================================================================
// define trait
pub trait GetOneVer1 {
fn get_one(&self) -> usize;
}
// implement on &'a T, where T implements the trait
impl<'a, T: GetOneVer1> GetOneVer1 for &'a T {
fn get_one(&self) -> usize {
1
}
}
// implement on &'a usize
impl<'a> GetOneVer1 for &'a usize {
fn get_one(&self) -> usize {
1
}
}
// ===============================================================================
// TRAIT 2: ERROR
// ===============================================================================
// define trait
pub trait GetOneVer2<Key> {
fn get_one(&self, index: Key) -> usize;
}
// implement on &'a T, where T implements the trait
impl<'a, Key, T: GetOneVer2<Key>> GetOneVer2<Key> for &'a T {
fn get_one(&self, index: Key) -> usize {
1
}
}
// implement on &'a usize
impl<'a, Key> GetOneVer2<Key> for &'a usize {
fn get_one(&self, index: Key) -> usize {
1
}
}
// ===============================================================================
// TRAIT 3: NO ERROR (CORRECTED BY ADDING A SECOND TRAIT REQUIREMENT)
// ===============================================================================
// define "helper trait"
pub trait InheritTrait {}
// impl<'a, T: InheritTrait> InheritTrait for &'a T {} // uncommenting this line does NOT produce any errors
// define trait
pub trait GetOneVer3<Key> {
fn get_one(&self, index: Key) -> usize;
}
// implement on &'a T, where T implements the trait
impl<'a, Key, T: GetOneVer3<Key> + InheritTrait> GetOneVer3<Key> for &'a T {
fn get_one(&self, index: Key) -> usize {
1
}
}
// implement on &'a usize
impl<'a, Key> GetOneVer3<Key> for &'a usize {
fn get_one(&self, index: Key) -> usize {
1
}
}
fn main() {
println!("Hello, world!");
}
Why does case 2 produce errors whereas cases 1 and 3 do not? The only difference I see is that, in case 2, the trait has a generic type parameter, and in case 3 there are both a generic type parameter and a "helper" trait.
The generic parameter is exactly the problem.
Who can implement GetOneVer1 for usize? Only you (who defined the trait) or the owner of usize (std).
Who can implement GetOneVer2<Key> for usize for some Key? You, usize's owner, or anyone who owns Key. And since Key is generic, that means everyone.
The orphan rules have no problem with a potential conflict between the owner of the trait and the owner of the type, because the compiler can see this impl does not exist (it depends only on upstream crates, std in this example). However, in the second case, downstream crates may implement it, and thus the compiler cannot prove this impl will never exist.
In the third case, we re-added the guarantee: as long as no impl InheritTrait for usize exists (which only you or std can provide), &usize cannot get the blanket implementation for references. Indeed, if you add this impl, the compiler will error.
There's a commented line in case 3 that auto-implements InheritTrait on immutable references. Uncommenting this line produces no errors, and (I believe) achieves my ultimate goal of auto-implementing a trait on immutable references. Will this method (using a helper trait that has no methods or type parameters) work to bypass the problem posed by case 2, in general, or are there other subtleties I should worry about?
It will not work, because you still need impl InheritTrait for usize, and as we saw, adding that impl causes the very conflict you are trying to avoid.
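For local types, though, the helper-trait gate does work as described in case 3. A minimal sketch (with hypothetical names GetOne, InheritTrait, and Local) showing a type opting in to the blanket reference impl:

```rust
pub trait InheritTrait {}

pub trait GetOne<Key> {
    fn get_one(&self, index: Key) -> usize;
}

// Blanket impl on references, gated by the helper trait.
impl<'a, Key, T: GetOne<Key> + InheritTrait> GetOne<Key> for &'a T {
    fn get_one(&self, index: Key) -> usize {
        (**self).get_one(index)
    }
}

// A local type can opt in to both traits...
struct Local(usize);
impl InheritTrait for Local {}
impl<Key> GetOne<Key> for Local {
    fn get_one(&self, _index: Key) -> usize {
        self.0
    }
}
// ...but also writing `impl<Key> GetOne<Key> for &Local` would now
// conflict with the blanket impl, exactly as the answer describes.

fn main() {
    let local = Local(1);
    let by_ref: &Local = &local;
    // Resolved through the gated blanket impl on &Local.
    assert_eq!(by_ref.get_one("any key"), 1);
}
```

The gate is the orphan-rule guarantee: only this crate (or the owner of Local) can ever implement InheritTrait for Local, so the compiler can check for conflicts locally.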

Can Clone be implemented for a trait object with finite lifetime (without using unsafe code)?

First things first: What am I trying to achieve?
I'm trying to use Iterator::fold() with an accumulator that itself is an Iterator, but in addition I need a clone of that iterator internally.
fn whatever<T,U,V,W>(some_iterator : T, other_cloneable_iterator_init : U) -> impl Iterator<Item=W>
where T: Iterator<Item=V>,
U: Iterator<Item=W>+Clone
{
some_iterator.fold(other_cloneable_iterator_init, |other_cloneable_iterator, value| {
let computed_value =
some_function_that_consumes_an_iterator(other_cloneable_iterator.clone(), value);
other_cloneable_iterator.filter(|other_value| some_function(computed_value, other_value))
})
}
This of course does not work as written above, because the return type of the closure given to fold is not the same as the type of the initializer.
Why did this problem lead to this question?
They do however have in common that they both implement the Iterator<Item=W>+Clone traits. This almost screams "erase the type by making it a trait object".
If it were just the Iterator<Item=W> trait, I'd do just that. However, the Clone trait is not object-safe. Searching for "clone boxed trait object" online yields various discussions, which all either require 'static lifetime on the trait object (what Iterators usually do not have), or using unsafe code (which I would like to avoid) like the dyn-clone crate.
The actual question:
So, if one wants to avoid using unsafe code, and does not have the luxury of getting 'static lifetimes, how can one implement Clone for a boxed trait object?
I'm talking about code like the following, which has lifetime issues and does not compile:
trait Cu32Tst<'a> : Iterator<Item=u32>+'a {
fn box_clone(&self) -> Box<dyn Cu32Tst+'a>;
}
impl<'a, T : Iterator<Item=u32>+Clone+'a> Cu32Tst<'a> for T {
fn box_clone(&self) -> Box<dyn Cu32Tst+'a> {
Box::new(self.clone())
}
}
impl<'a> Clone for Box<dyn Cu32Tst<'a>>{
fn clone(&self) -> Self {
self.box_clone()
}
}
The reason I want Clone on the Box itself is that, in order to use it as the accumulator type, I must be able to create a new Box<dyn IteratorTraitObject> out of the output of the Iterator::filter() method, which is only Clone if the iterator it's called on is Clone. (I'd then have to implement Iterator for the Box as well, but that could just forward to the contained value.)
Long story short: Can Clone be implemented for a trait object with finite lifetime, and if yes, how?
You need to modify the code to use Box<dyn Cu32Tst<'a>> rather than Box<dyn Cu32Tst + 'a>. The latter presumably implements Cu32Tst<'static>, which is not what you want for the blanket implementation of Clone for Cu32Tst<'a>. This compiles:
trait Cu32Tst<'a>: Iterator<Item = u32> + 'a {
fn box_clone(&self) -> Box<dyn Cu32Tst<'a>>;
}
impl<'a, T: Iterator<Item = u32> + Clone + 'a> Cu32Tst<'a> for T {
fn box_clone(&self) -> Box<dyn Cu32Tst<'a>> {
Box::new(self.clone())
}
}
impl<'a> Clone for Box<dyn Cu32Tst<'a>> {
fn clone(&self) -> Self {
self.as_ref().box_clone()
}
}
Playground
Note that the Clone implementation invokes box_clone() using self.as_ref().box_clone(). This is because the blanket impl of Cu32Tst matches against the Box<dyn Cu32Tst> as it's both Clone and Iterator, so the box itself gets a box_clone() method. As a result, self.box_clone() would compile, but would cause infinite recursion.
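Putting the corrected code to use, here is a sketch that clones a boxed, borrowing (hence non-'static) iterator trait object; the data variable and the collect calls are illustrative additions:

```rust
trait Cu32Tst<'a>: Iterator<Item = u32> + 'a {
    fn box_clone(&self) -> Box<dyn Cu32Tst<'a>>;
}

impl<'a, T: Iterator<Item = u32> + Clone + 'a> Cu32Tst<'a> for T {
    fn box_clone(&self) -> Box<dyn Cu32Tst<'a>> {
        Box::new(self.clone())
    }
}

impl<'a> Clone for Box<dyn Cu32Tst<'a>> {
    fn clone(&self) -> Self {
        // as_ref() forces dispatch on the trait object, not the Box.
        self.as_ref().box_clone()
    }
}

fn main() {
    // A borrowing, cloneable iterator: its lifetime is tied to `data`.
    let data = vec![1u32, 2, 3];
    let boxed: Box<dyn Cu32Tst<'_>> = Box::new(data.iter().copied());
    // Clone goes through box_clone(), no 'static bound required.
    let cloned = boxed.clone();
    assert_eq!(cloned.collect::<Vec<u32>>(), vec![1, 2, 3]);
    assert_eq!(boxed.collect::<Vec<u32>>(), vec![1, 2, 3]);
}
```

Note that Box<dyn Cu32Tst<'a>> itself satisfies the blanket impl (it is Iterator and, via our impl, Clone), which is exactly why the as_ref() in clone() matters.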

TryFrom<&[u8]> trait bound in trait

I'm trying to implement a common trait for a bunch of types created from binary data (read from disk). The majority of the trait methods could use default implementations, and only conversions etc. would need to be implemented separately. I would like to use the TryFrom<&[u8]> trait for conversions from binary data to my types, but I don't know how to express (in the context of the trait) that the lifetime of the &[u8] and the lifetimes of the values created from it are unrelated. Here is a minimal example of the problem.
use std::convert::TryFrom;
struct Foo;
// Value of Foo can be created from &[u8] but it doesn't borrow anything.
impl TryFrom<&[u8]> for Foo {
type Error = ();
fn try_from(v: &[u8]) -> Result<Self, ()> {
Ok(Foo)
}
}
trait Bar<'a>
where
Self: TryFrom<&'a [u8], Error = ()>, // `&` without an explicit lifetime name cannot be used here
{
fn baz() -> Self {
let vec = Vec::new();
Self::try_from(&vec).unwrap() // ERROR: vec does not live long enough (nothing is borrowed)
}
}
An alternative solution would be to make the conversions trait methods, but it would be nicer to use common std traits. Is there a way to achieve this? (Or I could use const generics, but I don't want to rely on a nightly compiler.)
What you want are "higher ranked trait bounds" (HRTB, or simply hearty boy). They look like this: for<'a> T: 'a. This example just means: "for every possible lifetime 'a, T must ...". In your case:
trait Bar
where
Self: for<'a> TryFrom<&'a [u8], Error = ()>,
You can also specify that requirement as super trait bound directly instead of where clause:
trait Bar: for<'a> TryFrom<&'a [u8], Error = ()> { ... }
And yes, now it just means that all implementors of Bar have to implement TryFrom<&'a [u8], Error = ()> for all possible lifetimes. That's what you want.
Working Playground
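Assembling the pieces, here is the question's minimal example with the HRTB applied; the baz() default method now compiles because the local Vec only needs to live for the duration of the try_from call, not for some caller-chosen lifetime:

```rust
use std::convert::TryFrom;

struct Foo;

// Foo can be created from &[u8] of *any* lifetime; nothing is borrowed.
impl TryFrom<&[u8]> for Foo {
    type Error = ();
    fn try_from(_v: &[u8]) -> Result<Self, ()> {
        Ok(Foo)
    }
}

// The higher-ranked bound: implementors must support every lifetime 'a.
trait Bar
where
    Self: for<'a> TryFrom<&'a [u8], Error = ()>,
{
    fn baz() -> Self {
        let vec = Vec::new();
        // Works: 'a here is the short lifetime of this borrow of `vec`.
        Self::try_from(&vec).unwrap()
    }
}

impl Bar for Foo {}

fn main() {
    let _foo = Foo::baz();
}
```

Because the impl of TryFrom<&[u8]> for Foo is generic over the elided lifetime, it already satisfies the for<'a> bound with no extra work.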

Eliminate lifetime parameter from a trait whose implementation wraps a HashMap?

I'd like to wrap a few methods of HashMap such as insert and keys. This attempt compiles, and the tests pass:
use std::collections::HashMap;
use std::hash::Hash;
pub trait Map<'a, N: 'a> {
type ItemIterator: Iterator<Item=&'a N>;
fn items(&'a self) -> Self::ItemIterator;
fn insert(&mut self, item: N);
}
struct MyMap<N> {
map: HashMap<N, ()>
}
impl<N: Eq + Hash> MyMap<N> {
fn new() -> Self {
MyMap { map: HashMap::new() }
}
}
impl<'a, N: 'a + Eq + Hash> Map<'a, N> for MyMap<N> {
type ItemIterator = std::collections::hash_map::Keys<'a, N, ()>;
fn items(&'a self) -> Self::ItemIterator {
self.map.keys()
}
fn insert(&mut self, item: N) {
self.map.insert(item, ());
}
}
#[cfg(test)]
mod tests {
use super::*;
#[derive(Eq, Hash, PartialEq, Debug)]
struct MyItem;
#[test]
fn test() {
let mut map = MyMap::new();
let item = MyItem { };
map.insert(&item);
let foo = map.items().collect::<Vec<_>>();
for it_item in map.items() {
assert_eq!(it_item, &&item);
}
assert_eq!(foo, vec![&&item]);
}
}
I'd like to eliminate the need for the lifetime parameter in Map if possible, but so far haven't found a way. The problem seems to result from the definition of std::collections::hash_map::Keys, which requires a lifetime parameter.
Attempts to redefine the Map trait work until it becomes necessary to supply the lifetime parameter on Keys:
use std::collections::HashMap;
use std::hash::Hash;
pub trait Map<N> {
type ItemIterator: Iterator<Item=N>;
fn items(&self) -> Self::ItemIterator;
fn insert(&mut self, item: N);
}
struct MyMap<N> {
map: HashMap<N, ()>
}
impl<N: Eq + Hash> MyMap<N> {
fn new() -> Self {
MyMap { map: HashMap::new() }
}
}
// ERROR: "unconstrained lifetime parameter"
impl<'a, N> Map<N> for MyMap<N> {
type ItemIterator = std::collections::hash_map::Keys<'a, N, ()>;
}
The compiler issues an error about an unconstrained lifetime parameter that I haven't been able to fix without re-introducing the lifetime into the Map trait.
The main goal of this experiment was to see how I could also eliminate Box from previous attempts. As this question explains, that's another way to return an iterator. So I'm not interested in that approach at the moment.
How can I set up Map and an implementation without introducing a lifetime parameter or using Box?
Something to think about: since hash_map::Keys has a generic lifetime parameter, it is probably necessary for some reason, so your trait that abstracts over Keys will probably need it too.
In this case, in the definition of Map, you need some way to specify how long the ItemIterator's Item lives. (The Item is &'a N).
This was your definition:
type ItemIterator: Iterator<Item=&'a N>
You are trying to say that for any struct that implements Map, the struct's associated ItemIterator must be an iterator of references; however, this constraint alone is useless without any further information: we also need to know how long the reference lives for (hence why type ItemIterator: Iterator<Item=&N> throws an error: it is missing this information, and it cannot currently be elided AFAIK).
So, you choose 'a to name a generic lifetime that you guarantee each &'a N will be valid for. Now, in order to satisfy the borrow checker, prove that &'a N will be valid for 'a, and establish some useful promises about 'a, you specify that:
Any value for the reference &self given to items() must live at least as long as 'a. This ensures that for each of the returned items (&'a N), the &self reference must still be valid in order for the item reference to remain valid, in other words, the items must outlive self. This invariant allows you to reference &self in the return value of items(). You have specified this with fn items(&'a self). (Side note: my_map.items() is really shorthand for MyMap::items(&my_map)).
Each of the Ns themselves must also remain valid for as long as 'a. This is important if the objects contain any references that won't live forever (aka non-'static references); this ensures that all of the references that the item N contains live at least as long as 'a. You have specified this with the constraint N: 'a.
So, to recap, the definition of Map<'a, N> requires that an implementors' items() function must return an ItemIterator of references that are valid for 'a to items that are valid for 'a. Now, your implementation:
impl<'a, N: 'a + Eq + Hash> Map<'a, N> for MyMap<N> { ... }
As you can see, the 'a parameter is completely unconstrained, so you can use any 'a with the methods from Map on an instance of MyMap, as long as N fulfills its constraints of N: 'a + Eq + Hash. 'a should automatically become the longest lifetime for which both N and the map passed to items() are valid.
Anyway, what you're describing here is known as a streaming iterator, which has been a problem for years. For some relevant discussion, see the approved but currently unimplemented RFC 1598 (but prepare to be overwhelmed).
Finally, as some people have commented, it's possible that your Map trait might be a bad design from the start since it may be better expressed as a combination of the built-in IntoIterator<Item=&'a N> and a separate trait for insert(). This would mean that the default iterator used in for loops, etc. would be the items iterator, which is inconsistent with the built-in HashMap, but I am not totally clear on the purpose of your trait so I think your design likely makes sense.
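Since this answer was written, generic associated types (the feature tracked by the RFC mentioned above) have stabilized, in Rust 1.65. Assuming such a compiler, the lifetime can move off the trait and onto the associated type, which is a sketch of the design the question was after:

```rust
use std::collections::HashMap;
use std::hash::Hash;

// No lifetime parameter on the trait itself: the associated type
// carries it instead (a generic associated type, stable since 1.65).
pub trait Map<N> {
    type ItemIterator<'a>: Iterator<Item = &'a N>
    where
        Self: 'a,
        N: 'a;
    fn items(&self) -> Self::ItemIterator<'_>;
    fn insert(&mut self, item: N);
}

struct MyMap<N> {
    map: HashMap<N, ()>,
}

impl<N: Eq + Hash> Map<N> for MyMap<N> {
    type ItemIterator<'a>
        = std::collections::hash_map::Keys<'a, N, ()>
    where
        Self: 'a,
        N: 'a;

    fn items(&self) -> Self::ItemIterator<'_> {
        self.map.keys()
    }

    fn insert(&mut self, item: N) {
        self.map.insert(item, ());
    }
}

fn main() {
    let mut map = MyMap { map: HashMap::new() };
    map.insert(42u32);
    assert_eq!(map.items().collect::<Vec<_>>(), vec![&42u32]);
}
```

The where Self: 'a clause plays the role that fn items(&'a self) played in the original: it ties the lifetime of the yielded references to the borrow of the map.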

Why do I get the error "the trait `Foo` is not implemented for `&mut T`" even though T implements the trait?

I have this source:
pub fn draw<G, C>(&self, font: &mut C, draw_state: &DrawState, transform: Matrix2d, g: &mut G)
where
C: CharacterCache,
G: Graphics<Texture = <C as CharacterCache>::Texture>,
{
self.properties.draw(
self.text.as_str(),
&mut font,
&draw_state,
transform,
g,
);
}
And the error
the trait bound `&mut C: graphics::character::CharacterCache` is not satisfied
(the trait `graphics::character::CharacterCache` is not implemented for `&mut C`)
The only aspect of C that is defined is that it implements CharacterCache, yet the error says the opposite.
DrawState, Matrix2d, CharacterCache and its implementations, Texture, and self.properties (Text) are provided by the Piston 2d graphics library. There must be something about traits in general that I'm misunderstanding.
The Text::draw function signature:
fn draw<C, G>(
&self,
text: &str,
cache: &mut C,
draw_state: &DrawState,
transform: Matrix2d,
g: &mut G,
) where
C: CharacterCache,
G: Graphics<Texture = C::Texture>,
T, &T, and &mut T are all different types; and that means that &mut &mut T is likewise a different type. Traits are not automatically implemented for references to a type. If you wish to implement a trait for either of the references, you need to write it out explicitly.
As an example, this exhibits the same problem:
trait Foo {}
#[derive(Debug, Copy, Clone)]
struct S;
impl Foo for S {}
fn example<T>(_: T)
where
T: Foo,
{}
fn main() {
let mut s = S;
example(s);
example(&s); // the trait bound `&S: Foo` is not satisfied
example(&mut s); // the trait bound `&mut S: Foo` is not satisfied
}
Explicit implementations of the trait for the references solve the problem:
impl<'a> Foo for &'a S {}
impl<'a> Foo for &'a mut S {}
In many cases, you can delegate the function implementations to the non-reference implementation.
If this should always be true, you can make it so by applying it to all references to a type that implements a trait:
impl<'a, T> Foo for &'a T where T: Foo {}
impl<'a, T> Foo for &'a mut T where T: Foo {}
If you don't have control over the traits, you may need to specify that you take a reference to a generic type that implements the trait:
fn example<T>(_: &mut T)
where
for<'a> &'a mut T: Foo,
{}
See also:
When should I not implement a trait for references to implementors of that trait?
The error message says that "graphics::character::CharacterCache is not implemented for &mut C"; and indeed, all you have said in your where-clause is that C: CharacterCache, not &mut C: CharacterCache.
(In general, one cannot conclude &mut Type: Trait if all one knows is Type: Trait)
I'm assuming that the .draw method that you are invoking on self.properties: Text wants a &mut C for its argument, so you might be able to pass in either font or &mut *font, but I'm guessing that your extra level of indirection via &mut font is causing a problem there.
In other words:
self.properties.draw(
self.text.as_str(),
&mut font,
// ~~~~~~~~~ is not the same as `font` or `&mut *font`
&draw_state,
transform,
g,
);
Sidenote for Experienced Rustaceans:
This kind of coding "mistake" (putting in an extra level of indirection) actually occurs more than you might think when programming in Rust.
However, one often does not notice it, because the compiler will often compare the expected type with the type that was provided and apply so-called deref coercions to turn the given value into an appropriate argument.
So if you consider the following code:
fn gimme_ref_to_i32(x: &i32, amt: i32) -> i32 { *x + amt }
fn gimme_mutref_to_i32(x: &mut i32, amt: i32) { *x += amt; }
let mut concrete = 0;
gimme_mutref_to_i32(&mut concrete, 1);
gimme_mutref_to_i32(&mut &mut concrete, 20);
let i1 = gimme_ref_to_i32(&concrete, 300);
let i2 = gimme_ref_to_i32(& &concrete, 4000);
println!("concrete: {} i1: {} i2: {}", concrete, i1, i2);
it will run without a problem; the compiler will automatically insert dereferences underneath the borrow, turning &mut &mut concrete into &mut *(&mut concrete), and & &concrete into & *(&concrete) (aka &mut concrete and &concrete respectively, in this case).
(You can read more about the history of Deref Coercions by reading the associated RFC.)
However, this magic does not save us when the function we are calling is expecting a reference to a type parameter, like so:
fn gimme_mutref_to_abs<T: AddAssign>(x: &mut T, amt: T) { *x += amt; }
let mut abstract_ = 0;
gimme_mutref_to_abs(&mut abstract_, 1);
gimme_mutref_to_abs(&mut &mut abstract_, 1);
// ^^^^ ^^^^^^^^^^^^^^
// compiler wants &mut T where T: AddAssign
println!("abstract: {}", abstract_);
In this code, the Rust compiler starts off assuming that the input type (&mut &mut i32) will decompose into some type &mut T that satisfies T: AddAssign.
It checks the first case that can possibly match: peel off the first &mut, and then see if the remainder (&mut i32) could possibly be the T that we are searching for.
&mut i32 does not implement AddAssign, so that attempt to solve the trait constraints fails.
Here's the crucial thing: the compiler does not then decide to try applying any coercions (including deref coercions); it just gives up. I have not managed to find a historical record of the basis for giving up here, but my memory from conversations (and from knowledge of the compiler) is that trait resolution is expensive, so the compiler chooses not to search for potential trait impls at each step of a coercion. Instead, the programmer is expected to figure out an appropriate conversion expression that turns the given type T into some intermediate type U that the compiler can accept as the expected type.
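In practice that conversion expression is an explicit reborrow. A small sketch, continuing the gimme_mutref_to_abs example from above, of how peeling off the extra level of indirection by hand fixes the call:

```rust
use std::ops::AddAssign;

fn gimme_mutref_to_abs<T: AddAssign>(x: &mut T, amt: T) {
    *x += amt;
}

fn main() {
    let mut abstract_ = 0;
    // One level of indirection too many: &mut &mut i32.
    let extra = &mut &mut abstract_;
    // An explicit reborrow (&mut **extra) yields the &mut i32 that the
    // generic function expects; the compiler will not insert this for us.
    gimme_mutref_to_abs(&mut **extra, 1);
    assert_eq!(abstract_, 1);
}
```

This is the same fix as writing &mut *font (or just font) instead of &mut font in the question's draw call.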