I have two different structs with similar functions. Suppose the program chooses which struct to use based on user input. I want to write something like this:
fn main() {
    ...
    let obj = if a == first {
        first_object // struct First
    } else {
        second_object // struct Second
    };
    obj.read();
    obj.write();
    obj.some_another_method();
}
I have tried to make an enumeration
pub enum ObjectKind {
    FirstObject(First),
    SecondObject(Second),
}
But I cannot use methods in this case
let something = ObjectKind::FirstObject(first_object);
something.read();
//no method named `read` found for enum `structs::ObjectKind` in the current scope
//method not found in `structs::ObjectKind`
An enum is a proper type in and of itself, it's not a "union" of existing types. You can just define the methods on the enum to forward to the relevant object e.g.
impl ObjectKind {
    fn read(&self) {
        match self {
            ObjectKind::FirstObject(f) => f.read(),
            ObjectKind::SecondObject(s) => s.read(),
        }
    }
}
As it would probably be a bit repetitive, you can use a macro to make the forwarding easier.
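As a rough illustration of that macro idea (my own sketch, not from the original answer; it assumes both structs expose the same &self methods returning unit):

struct First;
struct Second;

impl First {
    fn read(&self) { println!("First::read"); }
    fn write(&self) { println!("First::write"); }
}

impl Second {
    fn read(&self) { println!("Second::read"); }
    fn write(&self) { println!("Second::write"); }
}

enum ObjectKind {
    FirstObject(First),
    SecondObject(Second),
}

// Generate one forwarding method on the enum per listed method name.
macro_rules! forward_methods {
    ($($method:ident),* $(,)?) => {
        impl ObjectKind {
            $(
                fn $method(&self) {
                    match self {
                        ObjectKind::FirstObject(f) => f.$method(),
                        ObjectKind::SecondObject(s) => s.$method(),
                    }
                }
            )*
        }
    };
}

forward_methods!(read, write);

fn main() {
    let obj = ObjectKind::FirstObject(First);
    obj.read();
    obj.write();
}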
Alternatively, you might be able to define a trait for the common behaviour and use a trait object to dynamically dispatch between the two, but that can be somewhat restrictive.
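For completeness, here is a minimal sketch of that trait-object route (again my own illustration; the user-input check is stood in for by a boolean):

trait Object {
    fn read(&self);
    fn write(&self);
}

struct First;
struct Second;

impl Object for First {
    fn read(&self) { println!("First::read"); }
    fn write(&self) { println!("First::write"); }
}

impl Object for Second {
    fn read(&self) { println!("Second::read"); }
    fn write(&self) { println!("Second::write"); }
}

fn main() {
    let use_first = true; // stands in for the `a == first` check from the question
    // Both branches produce the same type: a boxed trait object.
    let obj: Box<dyn Object> = if use_first {
        Box::new(First)
    } else {
        Box::new(Second)
    };
    obj.read();
    obj.write();
}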
I am writing a Rust application. I'd like to have a method to display a value whether it is a string or a number. I came up with the following solution, but it duplicates the code. Is there a better way to do it in Rust?
Notice: I am not looking for a built-in function to print variables. It's just an example. I am looking for a way to implement the same feature for two types.
trait Display {
    fn display(self);
}

impl Display for String {
    fn display(self) {
        println!("You wrote: {}", self);
    }
}

impl Display for i32 {
    fn display(self) {
        println!("You wrote: {}", self);
    }
}

fn main() {
    let name: String = String::from("Tom Smykowski");
    name.display();

    let age: i32 = 22;
    age.display();
}
You came close. But there is already a trait for converting things to strings - std::fmt::Display (and the automatically-implemented ToString) - so you don't need to have your own trait:
fn display<T: std::fmt::Display>(v: T) {
    println!("You wrote: {v}");
}

fn main() {
    let name: String = String::from("Tom Smykowski");
    display(name);

    let age: i32 = 22;
    display(age);
}
Even if you don't need to display the types but do something else with them, we can take the idea from Display - instead of defining the whole functionality, define only the pieces that are different. For example, you can create a trait to convert the numbers to strings (or the opposite), or just have functions for each different piece - for example, printing itself without "You wrote: ".
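For instance, here is a minimal sketch of that idea (my own illustration; the AsText trait and its method name are made up): only the conversion differs per type, while the shared printing lives in one generic function.

trait AsText {
    fn as_text(&self) -> String;
}

impl AsText for String {
    fn as_text(&self) -> String {
        self.clone()
    }
}

impl AsText for i32 {
    fn as_text(&self) -> String {
        self.to_string()
    }
}

// The common behaviour is written once, generically.
fn display<T: AsText>(v: T) {
    println!("You wrote: {}", v.as_text());
}

fn main() {
    display(String::from("Tom Smykowski"));
    display(22);
}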
I came up with the following solution, but it is duplicating the code. Is there a better way to do it in Rust?
Add a simple declarative macro on top; that pattern is very common in the stdlib. For example:
macro_rules! impl_display {
    ($t:ty) => {
        impl Display for $t {
            fn display(self) {
                println!("You wrote: {self}");
            }
        }
    };
}

impl_display!(String);
impl_display!(i32);
impl_display!(i64);
impl_display!(f32);
Although:
usually the implementations would be different, though not always. Implementing an operation on all numeric types, or on all unsigned numbers, is one of the most common contexts where you'll see this in the stdlib: the stdlib has no numeric trait, but methods are usually implemented on all numeric types, so there's a handful of macros used for all of them, and when new methods are added they're just added to the relevant macro.
here you're already relying on the existence and implementation of std::fmt::Display, so you should just use that; your own trait is not really useful.
Say I have a trait that looks like this:
use std::{error::Error, fmt::Debug};
use super::CheckResult;
/// A Checker is a component that is responsible for checking a
/// particular aspect of the node under investigation, be that metrics,
/// system information, API checks, load tests, etc.
#[async_trait::async_trait]
pub trait Checker: Debug + Sync + Send {
    type Input: Debug;

    /// This function is expected to take input, whatever that may be,
    /// and return a vec of check results.
    async fn check(&self, input: &Self::Input) -> anyhow::Result<Vec<CheckResult>>;
}
And say I have two implementations of this trait:
pub struct ApiData {
    some_response: String,
}

pub struct MetricsData {
    number_of_events: u64,
}

pub struct ApiChecker;

impl Checker for ApiChecker {
    type Input = ApiData;

    // implement check function
}

pub struct MetricsChecker;

impl Checker for MetricsChecker {
    type Input = MetricsData;

    // implement check function
}
In my code I have a Vec of these Checkers that looks like this:
pub struct MyServer {
    checkers: Vec<Box<dyn Checker>>,
}
What I want to do is figure out, based on what Checkers are in this Vec, what data I need to fetch. For example, if it just contained an ApiChecker, I would only need to fetch the ApiData. If both ApiChecker and MetricsChecker were there, I'd need both ApiData and MetricsData. You can also imagine a third checker where Input = (ApiData, MetricsData). In that case I'd still just need to fetch ApiData and MetricsData once.
I imagine an approach where the Checker trait has an additional function on it that looks like this:
fn required_data(&self) -> HashSet<DataId>;
This could then return something like [DataId::Api, DataId::Metrics]. I would then run this for all Checkers in my vec, and I'd end up with a complete list of the data I need to get. I could then do some complicated set of checks like this:
let mut required_data = HashSet::new();
for checker in checkers {
    required_data.extend(checker.required_data());
}

let mut api_data: Option<ApiData> = None;
if required_data.contains(&DataId::Api) {
    api_data = Some(get_api_data());
}
And so on for each of the data types.
I'd then pass them into the check calls like this:
api_checker.check(
    api_data.expect("There was some logic error and we didn't get the API data even though a Checker declared that it needed it")
);
The reasons I want to fetch the data outside of the Checkers is:
To avoid fetching the same data multiple times.
To support memoization between unrelated calls where the arguments are the same (this could be done inside some kind of Fetcher trait implementation for example).
To support generic retry logic.
By now you can probably see that I've got two big problems:
The declaration of what data a specific Checker needs is duplicated, once in the function signature and again from the required_data function. This naturally introduces bug potential. Ideally this information would only be declared once.
Similarly, in the calling code, I have to trust that the data that the Checkers said they needed was actually accurate (the expect in the previous snippet). If it's not, and we didn't get data we needed, there will be problems.
I think both of these problems would be solved if the function signature, and specifically the Input associated type, was able to express this "required data" declaration on its own. Unfortunately I'm not sure how to do that. I see there is a nightly feature in any that implements Provider and Demand: https://doc.rust-lang.org/std/any/index.html#provider-and-demand. This sort of sounds like what I want, but I have to use stable Rust, plus I figure I must be missing something and there is an easier way to do this without going rogue with semi dynamic typing.
tl;dr: How can I inspect what types the arguments are for a function (keeping in mind that the input might be more complex than just one thing, such as a struct or tuple) at runtime from outside the trait implementer? Alternatively, is there a better way to design this code that would eliminate the need for this kind of reflection?
Your problems start way earlier than you mention:
checkers: Vec<Box<dyn Checker>>
This is an incomplete type. The associated type Input means that Checker<Input = ApiData> and Checker<Input = MetricsData> are incompatible. How would you call checkers[0].check(input)? What type would input be? If you want a collection of "checkers" then you'll need a unified API, where the arguments to .check() are all the same.
I would suggest a different route altogether: Instead of providing the input, provide a type that can retrieve the input that they ask for. That way there's no need to coordinate what type the checkers will ask for in a type-safe way, it'll be inherent to the methods the checkers themselves call. And if your primary concern is repeatedly retrieving the same data for different checkers, then all you need to do is implement caching in the provider. Same with retry logic.
Here's my suggestion:
struct DataProvider { /* cached api and metrics */ }

impl DataProvider {
    fn fetch_api_data(&mut self) -> anyhow::Result<ApiData> { todo!() }
    fn fetch_metrics_data(&mut self) -> anyhow::Result<MetricsData> { todo!() }
}

#[async_trait::async_trait]
trait Checker {
    async fn check(&self, data: &mut DataProvider) -> anyhow::Result<Vec<CheckResult>>;
}

struct ApiAndMetricsChecker;

#[async_trait::async_trait]
impl Checker for ApiAndMetricsChecker {
    async fn check(&self, data: &mut DataProvider) -> anyhow::Result<Vec<CheckResult>> {
        let _api_data = data.fetch_api_data()?;
        let _metrics_data = data.fetch_metrics_data()?;
        // do something with api and metrics data
        todo!()
    }
}
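As a rough sketch of the caching idea mentioned above (my own illustration; the field layout and the inline "fetch" are made up, and it reuses the anyhow crate already used in the answer's code): the provider remembers what it has already fetched, so repeated calls from different checkers perform the expensive fetch at most once.

#[derive(Debug)]
struct ApiData {
    some_response: String,
}

struct DataProvider {
    api_data: Option<ApiData>, // cached after the first fetch
}

impl DataProvider {
    fn new() -> Self {
        DataProvider { api_data: None }
    }

    fn fetch_api_data(&mut self) -> anyhow::Result<&ApiData> {
        if self.api_data.is_none() {
            // The real (expensive) fetch would go here; retry logic could live here too.
            self.api_data = Some(ApiData { some_response: "ok".to_string() });
        }
        Ok(self.api_data.as_ref().unwrap())
    }
}

fn main() -> anyhow::Result<()> {
    let mut provider = DataProvider::new();
    let first = provider.fetch_api_data()?.some_response.clone();
    let second = provider.fetch_api_data()?.some_response.clone(); // served from the cache
    println!("{first} {second}");
    Ok(())
}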
I need to use an attribute of the generic type of a struct in a macro.
A slightly contrived but minimal example would be if I wanted to implement a method for a generic struct, that returned the minimum value of its generic type.
struct Barn<T> {
    hay: T,
}
macro_rules! impl_min_hay {
    ($barntype:ident) => {
        impl $barntype {
            fn min_hay(&self) -> ????T {
                ????T::MIN
            }
        }
    }
}

type SmallBarn = Barn<i8>;
type BigBarn = Barn<i64>;

impl_min_hay!(SmallBarn);
impl_min_hay!(BigBarn);

fn main() {
    let barn = SmallBarn { hay: 5 };
    println!("{}", barn.min_hay());
}
How would I resolve from SmallBarn to get the generic type and thus its MIN attribute?
The actual problem I am trying to solve is a change to this macro. The macro is applied to, among others, BooleanChunked, which is defined as:
pub type BooleanChunked = ChunkedArray<BooleanType>
And I need to use an attribute of BooleanType
The only general solution I can think of is to define a trait that allows you to get at the type parameter (the syntax for this is <Type as Trait>::AssociatedType):
trait HasHayType {
    type HayType;
}

impl<T> HasHayType for Barn<T> {
    type HayType = T;
}

macro_rules! impl_min_hay {
    ($barntype:ident) => {
        impl $barntype {
            fn min_hay(&self) -> <$barntype as HasHayType>::HayType {
                <$barntype as HasHayType>::HayType::MIN
            }
        }
    }
}
Here's the complete program on play.rust-lang.org.
That said, once you have a trait, you don't really need the macro – you can just implement min_hay on the trait (this example makes use of the widely used num-traits crate, because this approach needs a trait for "things that have minimum values"):
use num_traits::Bounded;

trait HasHayType {
    type HayType: Bounded;

    fn min_hay(&self) -> Self::HayType;
}

impl<T: Bounded> HasHayType for Barn<T> {
    type HayType = T;

    fn min_hay(&self) -> T {
        T::min_value()
    }
}
And here's what that looks like as a complete program.
(And of course, once you've done that too, you don't really need the separate trait either: you can inline the definition of HasHayType into Barn, using a where clause if you want to be able to handle Barns with non-numerical hay types in addition to the ones where you'd use the macro. Presumably, though, the actual situation you have is more complex than the cut-down example you used for the question, so I gave the more complex versions in case the simplified versions wouldn't work.)
As a side note, min_hay doesn't actually need the &self parameter here; you could remove it, in order to be able to learn the minimum amount of hay without needing a barn to put it in.
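Putting those two remarks together, here is a minimal sketch of the inlined version (my own illustration, still using num-traits for the "has a minimum value" bound, and with the &self parameter dropped):

use num_traits::Bounded;

struct Barn<T> {
    hay: T,
}

impl<T: Bounded> Barn<T> {
    // No separate trait and no &self: an associated function on Barn itself.
    fn min_hay() -> T {
        T::min_value()
    }
}

fn main() {
    let _barn = Barn { hay: 5_i8 };
    println!("{}", Barn::<i8>::min_hay()); // prints -128
}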
I have an enum defined like this:
#[derive(Clone, Debug)]
pub enum JsonState {
    NameReadingState(String),
    StringState(String),
}

impl JsonState {
    pub fn get_name_read(self) -> String {
        if let JsonState::NameReadingState(name) = self {
            return name;
        } else {
            panic!(
                "Error: attempted to get name from non name state {:#?}",
                self
            );
        }
    }
}
If I were to call get_name_read on an instance of JsonState would the string be moved out of the enum or would it be copied? My understanding is that since I am passing self and not &self I am taking ownership of the instance inside the function and so I should be able to simply move the string out of it.
It is moved.
This is, in my opinion, one of the great advantages of Rust over C++: if you don't see a .clone() anywhere, then you are not cloning! In Rust, there are no implicit deep copies like in C++. If you want to create a copy/clone then you have to do it explicitly by calling a method that clones your instance.
All of this comes with one exception: types that implement Copy. These types use copy semantics instead of move semantics. It should be noted that Copy can only be implemented for types "whose values can be duplicated simply by copying bits", i.e. very simple types. String and any other types that manage heap memory do not implement Copy.
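Here is a small sketch (reusing the enum from the question) of what that means in practice: once get_name_read consumes the value, the original binding can no longer be used, and uncommenting the last line makes the compiler reject the program.

#[derive(Clone, Debug)]
pub enum JsonState {
    NameReadingState(String),
    StringState(String),
}

impl JsonState {
    pub fn get_name_read(self) -> String {
        if let JsonState::NameReadingState(name) = self {
            name
        } else {
            panic!("Error: attempted to get name from non name state {:#?}", self);
        }
    }
}

fn main() {
    let state = JsonState::NameReadingState(String::from("field"));
    let name = state.get_name_read(); // the String is moved out, not copied
    println!("{name}");
    // println!("{:?}", state); // error[E0382]: use of moved value: `state`
}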
When writing callbacks for generic interfaces, it can be useful for them to define their own local data which they are responsible for creating and accessing.
In C I would just use a void pointer, C-like example:
struct SomeTool {
    int type;
    void *custom_data;
};

void invoke(SomeTool *tool) {
    StructOnlyForThisTool *data = malloc(sizeof(*data));
    /* ... fill in the data ... */
    tool->custom_data = data;
}

void execute(SomeTool *tool) {
    StructOnlyForThisTool *data = tool->custom_data;
    if (data->foo_bar) { /* do something */ }
}
When writing something similar in Rust, replacing void * with Option<Box<Any>>, I find that accessing the data is unreasonably verbose, e.g.:
use std::any::Any;

struct SomeTool {
    type_: i32, // `type` is a keyword in Rust, so the field needs another name
    custom_data: Option<Box<dyn Any>>,
}

fn invoke(tool: &mut SomeTool) {
    let custom_data = StructOnlyForThisTool { /* my custom data */ };
    /* ... fill in the data ... */
    tool.custom_data = Some(Box::new(custom_data));
}

fn execute(tool: &mut SomeTool) {
    let data = tool.custom_data.as_ref().unwrap().downcast_ref::<StructOnlyForThisTool>().unwrap();
    if data.foo_bar { /* do something */ }
}
There is one line here which I'd like to be able to write in a more compact way:
tool.custom_data.as_ref().unwrap().downcast_ref::<StructOnlyForThisTool>().unwrap()
tool.custom_data.as_ref().unwrap().downcast_mut::<StructOnlyForThisTool>().unwrap()
While each method makes sense on its own, in practice it's not something I'd want to write throughout a code-base, and not something I'm going to want to type out often or remember easily.
By convention, the uses of unwrap here aren't dangerous because:
While only some tools define custom data, the ones that do always define it.
When the data is set, by convention the tool only ever sets its own data. So there is no chance of having the wrong data.
Any time these conventions aren't followed, it's a bug and should panic.
Given these conventions, and assuming accessing custom-data from a tool is something that's done often - what would be a good way to simplify this expression?
Some possible options:
Remove the Option, just use Box<Any> with Box::new(()) representing None so access can be simplified a little.
Use a macro or function to hide the verbosity, passing in the Option<Box<Any>>: this will work of course, but I'd prefer not to, and would use it as a last resort.
Add a trait to Option<Box<Any>> which exposes a method such as tool.custom_data.unwrap_box::<StructOnlyForThisTool>() with matching unwrap_box_mut.
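As a rough sketch of that third option (my own illustration; the unwrap_box names are just the ones proposed above), an extension trait on Option<Box<dyn Any>> could look like this:

use std::any::Any;

trait UnwrapBox {
    fn unwrap_box<T: Any>(&self) -> &T;
    fn unwrap_box_mut<T: Any>(&mut self) -> &mut T;
}

impl UnwrapBox for Option<Box<dyn Any>> {
    fn unwrap_box<T: Any>(&self) -> &T {
        self.as_ref()
            .expect("custom_data is not set")
            .downcast_ref::<T>()
            .expect("custom_data has the wrong type")
    }

    fn unwrap_box_mut<T: Any>(&mut self) -> &mut T {
        self.as_mut()
            .expect("custom_data is not set")
            .downcast_mut::<T>()
            .expect("custom_data has the wrong type")
    }
}

With that trait in scope, the access would shrink to tool.custom_data.unwrap_box::<StructOnlyForThisTool>().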
Update 1): Since asking this question, a point I didn't include seems relevant.
There may be multiple callback functions like execute which must all be able to access the custom_data. At the time I didn't think this was important to point out.
Update 2): Wrapping this in a function which takes tool isn't practical, since the borrow checker then prevents further access to members of tool until the cast variable goes out of scope. I found the only reliable way to do this was to write a macro.
If the implementation really only has a single method with a name like execute, that is a strong indication to consider using a closure to capture the implementation data. SomeTool can incorporate an arbitrary callable in a type-erased manner using a boxed FnMut, as shown in this answer. execute() then boils down to invoking the closure stored in the implementation field using (self.impl_)(). For a more general approach, that will also work when you have more methods on the implementation, read on.
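Here is a minimal sketch of that closure variant (my own illustration, assuming execute is the only entry point; new_specific_tool and its captured flag are made up):

struct SomeTool {
    impl_: Box<dyn FnMut()>,
}

impl SomeTool {
    fn execute(&mut self) {
        // Invoke the stored closure; it carries its own captured data.
        (self.impl_)();
    }
}

fn new_specific_tool(foo_bar: bool) -> SomeTool {
    // foo_bar plays the role of the tool-specific StructOnlyForThisTool data.
    SomeTool {
        impl_: Box::new(move || {
            if foo_bar {
                println!("doing something");
            }
        }),
    }
}

fn main() {
    let mut tool = new_specific_tool(true);
    tool.execute();
}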
An idiomatic and type-safe equivalent of the type+dataptr C pattern is to store the implementation type and pointer to data together as a trait object. The SomeTool struct can contain a single field, a boxed SomeToolImpl trait object, where the trait specifies tool-specific methods such as execute. This has the following characteristics:
You no longer need an explicit type field because the run-time type information is incorporated in the trait object.
Each tool's implementation of the trait methods can access its own data in a type-safe manner without casts or unwraps. This is because the trait object's vtable automatically invokes the correct function for the correct trait implementation, and it is a compile-time error to try to invoke a different one.
The "fat pointer" representation of the trait object has the same performance characteristics as the type+dataptr pair - for example, the size of SomeTool will be two pointers, and accessing the implementation data will still involve a single pointer dereference.
Here is an example implementation:
struct SomeTool {
    impl_: Box<dyn SomeToolImpl>,
}

impl SomeTool {
    fn execute(&mut self) {
        self.impl_.execute();
    }
}

trait SomeToolImpl {
    fn execute(&mut self);
}

struct SpecificTool1 {
    foo_bar: bool,
}

impl SpecificTool1 {
    pub fn new(foo_bar: bool) -> SomeTool {
        let my_data = SpecificTool1 { foo_bar: foo_bar };
        SomeTool { impl_: Box::new(my_data) }
    }
}

impl SomeToolImpl for SpecificTool1 {
    fn execute(&mut self) {
        println!("I am {}", self.foo_bar);
    }
}

struct SpecificTool2 {
    num: u64,
}

impl SpecificTool2 {
    pub fn new(num: u64) -> SomeTool {
        let my_data = SpecificTool2 { num: num };
        SomeTool { impl_: Box::new(my_data) }
    }
}

impl SomeToolImpl for SpecificTool2 {
    fn execute(&mut self) {
        println!("I am {}", self.num);
    }
}

pub fn main() {
    let mut tool1: SomeTool = SpecificTool1::new(true);
    let mut tool2: SomeTool = SpecificTool2::new(42);
    tool1.execute();
    tool2.execute();
}
Note that, in this design, it doesn't make sense to make implementation an Option because we always associate the tool type with the implementation. While it is perfectly valid to have an implementation without data, it must always have a type associated with it.