Consider the following type alias and function definitions:
#[cfg(feature = "my-feature")]
pub type CompositeOutputType = (usize, String);
#[cfg(not(feature = "my-feature"))]
pub type CompositeOutputType = usize;

pub fn my_function(input: usize) -> CompositeOutputType {
    #[cfg(feature = "my-feature")]
    (input, ""); // why does this throw an error without the semicolon?
    #[cfg(not(feature = "my-feature"))]
    input
}
fn main() {
    #[cfg(feature = "my-feature")]
    let (output_1, output_2) = my_function(1);
    #[cfg(not(feature = "my-feature"))]
    let output = my_function(1);
    println!("output: {:?}", output);
}
Is there a better way to write this in Rust, where the function output depends on the feature? Note that I can also define CompositeOutputType as a struct instead; then I do not need to worry about the output being a single element or a tuple, as it will always be a single object. Hence my main function might simplify, but I was wondering if there is a better way to write this, to simplify the output of my_function.
Modules?
#[cfg(feature = "my-feature")]
mod inner {
    pub type CompositeOutputType = (usize, String);

    pub fn my_function(input: usize) -> CompositeOutputType {
        (input, "".to_string())
    }
}

#[cfg(not(feature = "my-feature"))]
mod inner {
    pub type CompositeOutputType = usize;

    pub fn my_function(input: usize) -> CompositeOutputType {
        input
    }
}

pub use inner::*;
Though the entire thing seems misdesigned to me: changing the interface of a function depending on a feature seems extremely wonky and error-prone, even with compiler support. If a feature requires changing the calling code anyway, you might as well expose different types and functions entirely.
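For instance, a minimal sketch of that alternative — instead of one function whose signature shifts with the feature, the feature gates an extra function with its own name and return type (the names here are invented for illustration):

```rust
// Always available: the plain version.
pub fn my_value(input: usize) -> usize {
    input
}

// Compiled only when the feature is on; callers opt in by name,
// so no call site ever sees a signature change underneath it.
#[cfg(feature = "my-feature")]
pub fn my_value_with_label(input: usize) -> (usize, String) {
    (input, String::new())
}

fn main() {
    // The plain function exists regardless of the feature.
    println!("{}", my_value(1));
}
```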
Is there a way to apply Rust's feature-flag support to function parameters? I have a function which takes a bunch of inputs, but one of the inputs should only be passed if a feature is active. Of course I can make that parameter optional like:
pub fn my_function(input1: usize, input2: String, input3: Option<Vec<u32>>) -> () {...}
But is there a way to instead make use of the feature flag and define something like this?:
pub fn my_function(
    input1: usize,
    input2: String,
    #[cfg(feature = "my-feature")]
    input3: Vec<u32>
) -> () {...}
Similarly, later calling it as: my_function(0, "", #[cfg(feature = "my-feature")] vec![0])?
Rust's feature flags are a thin wrapper around conditional compilation. Imagine how you'd write your code if the feature were always enabled, then again if it were always disabled, and wrap the lines that differ between the two with #[cfg(...)] attributes. To define a function that differs, you can define two different versions and have them call a shared inner function if they're nearly the same, or just define them separately if they should do different things:
struct Thing;

impl Thing {
    #[cfg(not(feature = "my-feature"))]
    pub fn my_function(input1: usize, input2: String) {
        Thing::my_function_inner(input1, input2, None);
    }

    #[cfg(feature = "my-feature")]
    pub fn my_function(input1: usize, input2: String, input3: Vec<u32>) {
        Thing::my_function_inner(input1, input2, Some(input3));
    }

    #[inline]
    fn my_function_inner(input1: usize, input2: String, input3: Option<Vec<u32>>) {
        todo!();
    }
}
Alternatively, you can simply tag the conditional argument with a cfg expression and use conditional compilation within the function body:
impl Thing {
    pub fn my_function(
        input1: usize,
        input2: String,
        #[cfg(feature = "my-feature")] input3: Vec<u32>,
    ) {
        // common implementation

        #[cfg(feature = "my-feature")]
        { /* `input3` is only available when the feature is enabled */ }

        #[cfg(not(feature = "my-feature"))]
        { /* only run when the feature is disabled */ }
    }
}
Choose whichever makes the most sense for your use case. The compiler will likely optimize them similarly.
To call this, you can either rely on the feature flag always being enabled/disabled and just use the corresponding version, or you can include code for both like so:
fn main() {
    #[cfg(not(feature = "my-feature"))]
    Thing::my_function(0, "".to_string());
    #[cfg(feature = "my-feature")]
    Thing::my_function(0, "".to_string(), vec![0]);
}
EDIT: To support conditionally compiled return types, you can declare your return type to be a type alias that is itself conditionally compiled, like so:
#[cfg(feature = "my-feature")]
type ReturnType = (usize, usize);
#[cfg(not(feature = "my-feature"))]
type ReturnType = (usize,);
fn my_function() -> ReturnType { todo!() }
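A caller can then branch on the same cfg so each arm matches the alias; a complete sketch (the bodies are invented placeholders, not from the original answer):

```rust
#[cfg(feature = "my-feature")]
type ReturnType = (usize, usize);
#[cfg(not(feature = "my-feature"))]
type ReturnType = (usize,);

fn my_function() -> ReturnType {
    // Each return is compiled only when its cfg matches the alias above.
    #[cfg(feature = "my-feature")]
    return (1, 2);
    #[cfg(not(feature = "my-feature"))]
    return (1,);
}

fn main() {
    // The destructuring pattern is gated the same way, so it always
    // agrees with whichever alias definition was compiled in.
    #[cfg(feature = "my-feature")]
    let (a, _b) = my_function();
    #[cfg(not(feature = "my-feature"))]
    let (a,) = my_function();
    println!("a = {}", a);
}
```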
Thanks to #ChayimFriedman and #Jmb.
I am modeling an API where method overloading would be a good fit. My naïve attempt failed:
// fn attempt_1(_x: i32) {}
// fn attempt_1(_x: f32) {}
// Error: duplicate definition of value `attempt_1`
I then added an enum and worked through to:
enum IntOrFloat {
    Int(i32),
    Float(f32),
}

fn attempt_2(_x: IntOrFloat) {}
fn main() {
    let i: i32 = 1;
    let f: f32 = 3.0;

    // Can't pass the value directly
    // attempt_2(i);
    // attempt_2(f);
    // Error: mismatched types: expected enum `IntOrFloat`

    attempt_2(IntOrFloat::Int(i));
    attempt_2(IntOrFloat::Float(f));
    // Ugly that the caller has to explicitly wrap the parameter
}
Doing some quick searches, I've found some references that talk about overloading, and all of them seem to end in "we aren't going to allow this, but give traits a try". So I tried:
enum IntOrFloat {
    Int(i32),
    Float(f32),
}

trait IntOrFloatTrait {
    fn to_int_or_float(&self) -> IntOrFloat;
}

impl IntOrFloatTrait for i32 {
    fn to_int_or_float(&self) -> IntOrFloat {
        IntOrFloat::Int(*self)
    }
}

impl IntOrFloatTrait for f32 {
    fn to_int_or_float(&self) -> IntOrFloat {
        IntOrFloat::Float(*self)
    }
}

fn attempt_3(_x: &dyn IntOrFloatTrait) {}

fn main() {
    let i: i32 = 1;
    let f: f32 = 3.0;

    attempt_3(&i);
    attempt_3(&f);
    // Better, but the caller still has to explicitly take the reference
}
Is this the closest I can get to method overloading? Is there a cleaner way?
Yes, there is, and you almost got it already. Traits are the way to go, but you don't need trait objects; use generics:
#[derive(Debug)]
enum IntOrFloat {
    Int(i32),
    Float(f32),
}

trait IntOrFloatTrait {
    fn to_int_or_float(&self) -> IntOrFloat;
}

impl IntOrFloatTrait for i32 {
    fn to_int_or_float(&self) -> IntOrFloat {
        IntOrFloat::Int(*self)
    }
}

impl IntOrFloatTrait for f32 {
    fn to_int_or_float(&self) -> IntOrFloat {
        IntOrFloat::Float(*self)
    }
}

fn attempt_4<T: IntOrFloatTrait>(x: T) {
    let v = x.to_int_or_float();
    println!("{:?}", v);
}

fn main() {
    let i: i32 = 1;
    let f: f32 = 3.0;

    attempt_4(i);
    attempt_4(f);
}
Here's another way that drops the enum. It's an iteration on Vladimir's answer.
trait Tr {
    fn go(&self) -> ();
}

impl Tr for i32 {
    fn go(&self) {
        println!("i32")
    }
}

impl Tr for f32 {
    fn go(&self) {
        println!("f32")
    }
}

fn attempt_1<T: Tr>(t: T) {
    t.go()
}

fn main() {
    attempt_1(1 as i32);
    attempt_1(1 as f32);
}
Function Overloading is Possible!!! (well, sorta...)
This Rust Playground example is more detailed and shows usage of a struct variant, which may be better for documenting the parameters.
For more serious flexible overloading where you want to have sets of any number of parameters of any sort of type, you can take advantage of the From<T> trait for conversion of a tuple to enum variants, and have a generic function that converts tuples passed into it to the enum type.
So code like this is possible:
fn main() {
    let f = Foo { };
    f.do_something(3.14_f32);           // One f32 (the suffix pins the literal to f32).
    f.do_something((1, 2));             // Two i32's...
    f.do_something(("Yay!", 42, 3.14)); // A str, i32, and f64 !!
}
First, define the different sets of parameter combinations as an enum:
// The variants should consist of unambiguous sets of types.
enum FooParam {
    Bar(i32, i32),
    Baz(f32),
    Qux(&'static str, i32, f64),
}
Now, the conversion code; a macro can be written to do the tedious From<T> implementations, but here's what it could produce:
impl From<(i32, i32)> for FooParam {
    fn from(p: (i32, i32)) -> Self {
        FooParam::Bar(p.0, p.1)
    }
}

impl From<f32> for FooParam {
    fn from(p: f32) -> Self {
        FooParam::Baz(p)
    }
}

impl From<(&'static str, i32, f64)> for FooParam {
    fn from(p: (&'static str, i32, f64)) -> Self {
        FooParam::Qux(p.0, p.1, p.2)
    }
}
And then finally, implement the struct with generic method:
struct Foo {}

impl Foo {
    fn do_something<T: Into<FooParam>>(&self, t: T) {
        use FooParam::*;

        let fp = t.into();

        match fp {
            Bar(a, b) => print!("Bar: {:?}, {:?}\n", a, b),
            Baz(a) => print!("Baz: {:?}\n", a),
            Qux(a, b, c) => {
                print!("Qux: {:?}, {:?}, {:?}\n", a, b, c)
            }
        }
    }
}
Note: The trait bound on T needs to be specified.
Also, the variants need to be composed of combinations of types that the compiler wouldn't find ambiguous - which is an expectation for overloaded methods in other languages as well (Java/C++).
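To see why that matters: if two variants held the same payload type, both would need a From impl for that type, and Rust's coherence rules forbid duplicate impls. A minimal sketch (the enum here is invented; the second impl is shown commented out because it would not compile):

```rust
enum AmbiguousParam {
    First(f32),
    Second(f32), // same payload type as First — ambiguous
}

impl From<f32> for AmbiguousParam {
    fn from(p: f32) -> Self {
        AmbiguousParam::First(p)
    }
}

// error[E0119]: conflicting implementations of trait `From<f32>`
// impl From<f32> for AmbiguousParam {
//     fn from(p: f32) -> Self {
//         AmbiguousParam::Second(p)
//     }
// }

fn main() {
    // Only one conversion can exist, so `into()` must mean `First`.
    let p: AmbiguousParam = 2.0f32.into();
    match p {
        AmbiguousParam::First(v) => println!("First({})", v),
        AmbiguousParam::Second(v) => println!("Second({})", v),
    }
}
```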
This approach has possibilities... it would be awesome if there were a derive macro available - or one were written - that did the From<T> implementations automatically when applied to an enum. Something like this:
// THIS DOESN'T EXIST - so don't expect the following to work.
// This is just an example of a macro that could be written to
// help in using the above approach to function overloading.
#[derive(ParameterOverloads)]
enum FooParam {
    Bar(i32, i32),
    Baz(f32),
    Qux(&'static str, i32, f64),
}
// If this were written, it could eliminate the tedious
// implementations of From<...>.
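In the meantime, a declarative macro can already generate those From impls. Below is one possible sketch (`overload_from!` is a made-up name, not a standard macro; a derive would be cleaner but this works on stable):

```rust
#[derive(Debug, PartialEq)]
enum FooParam {
    Bar(i32, i32),
    Baz(f32),
}

macro_rules! overload_from {
    // Single-field variant: convert from the bare value.
    ($enum:ident :: $variant:ident ( $ty:ty )) => {
        impl From<$ty> for $enum {
            fn from(v: $ty) -> Self {
                $enum::$variant(v)
            }
        }
    };
    // Multi-field variant: convert from a tuple; the indices after `;`
    // name the tuple fields to unpack into the variant.
    ($enum:ident :: $variant:ident ( $($ty:ty),+ ; $($idx:tt),+ )) => {
        impl From<($($ty),+,)> for $enum {
            fn from(t: ($($ty),+,)) -> Self {
                $enum::$variant($(t.$idx),+)
            }
        }
    };
}

overload_from!(FooParam::Baz(f32));
overload_from!(FooParam::Bar(i32, i32; 0, 1));

fn do_something<T: Into<FooParam>>(t: T) -> FooParam {
    t.into()
}

fn main() {
    assert_eq!(do_something(3.5f32), FooParam::Baz(3.5));
    assert_eq!(do_something((1, 2)), FooParam::Bar(1, 2));
}
```

Having to spell out the tuple indices is the tedious part a real derive macro would eliminate.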
The Builder
Another approach that addresses the case where you have multiple optional parameters to an action or configuration is the builder pattern. The examples below deviate somewhat from the recommendations in the link. Typically, there's a separate builder class/struct which finalizes the configuration and returns the configured object when a final method is invoked.
One of the most relevant situations this can apply to is where you want a constructor that takes a variable number of optional arguments - since Rust doesn't have built-in overloading, we can't have multiple versions of ___::new(). But we can get a similar effect using a chain of methods that return self.
fn main() {
    // Create.
    let mut bb = BattleBot::new("Berzerker".into());

    // Configure.
    bb.flame_thrower(true)
        .locomotion(TractorTreads)
        .power_source(Uranium);

    println!("{:#?}", bb);
}
Each of the configuration methods has a signature similar to:
fn power_source(&mut self, ps: PowerSource) -> &mut Self {
    self.power_source = ps;
    self
}
These methods could also be written to consume self and return non-reference copies or clones of self.
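A minimal sketch of that consuming style, with an invented Config type standing in for the example above — each method takes `self` by value and hands it back, so the chain can hang directly off the constructor:

```rust
#[derive(Debug)]
struct Config {
    power: u32,
    armed: bool,
}

impl Config {
    fn new() -> Self {
        Config { power: 0, armed: false }
    }

    // Consuming style: takes `self` by value and returns it.
    fn power(mut self, p: u32) -> Self {
        self.power = p;
        self
    }

    fn armed(mut self, a: bool) -> Self {
        self.armed = a;
        self
    }
}

fn main() {
    // No `mut` binding needed; the value moves through the chain.
    let cfg = Config::new().power(9).armed(true);
    println!("{:?}", cfg);
}
```

The trade-off: the consuming version chains nicely off `new()`, but an already-bound value must be rebound (`cfg = cfg.power(10)`) rather than tweaked through `&mut`.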
This approach can also be applied to actions. For instance, we could have a Command object that can be tuned with chained methods, which then performs the command when .exec() is invoked.
Applying this same idea to an "overloaded" method that we want to take a variable number of parameters, we modify our expectations a bit and have the method take an object that can be configured with the builder pattern.
let mut params = DrawParams::new();

graphics.draw_obj(params.model_path("./planes/X15.m3d")
    .skin("./skins/x15.sk")
    .location(23.64, 77.43, 88.89)
    .rotate_x(25.03)
    .effect(MotionBlur));
Alternatively, we could decide on having a GraphicsObject struct that has several config tuning methods, then performs the drawing when .draw() is invoked.
#![feature(rustc_private)]
#![feature(box_syntax)]

extern crate rustc;
extern crate rustc_driver;

use rustc::hir::intravisit as hir_visit;
use rustc::hir;
use rustc_driver::driver::{CompileController, CompileState};

pub struct SomeVisitor<'a, 'tcx: 'a> {
    pub map: &'a hir::map::Map<'tcx>,
}

impl<'v, 'tcx: 'v> rustc::hir::intravisit::Visitor<'tcx> for SomeVisitor<'v, 'tcx> {
    fn nested_visit_map<'this>(&'this mut self) -> hir_visit::NestedVisitorMap<'this, 'tcx> {
        hir_visit::NestedVisitorMap::All(self.map)
    }
}

fn hir(s: &mut CompileState) {
    let krate = s.hir_crate.unwrap();
    let map = s.hir_map.unwrap();
    let mut visitor = SomeVisitor { map };
    hir_visit::walk_crate(&mut visitor, krate);
}

fn main() {
    {
        let mut controller = CompileController::basic();
        controller.after_hir_lowering.callback = box hir;
    }
}
playground
I understand why I am getting the lifetime error and it is very easy to solve it by adding explicit lifetimes for the function hir.
pub fn walk_crate<'v, V: hir_visit::Visitor<'v>>(visitor: &mut V, krate: &'v Crate) {}
Because of this definition the lifetime for the reference needs to live for 'tcx.
fn hir<'v, 'tcx>(s: &'tcx mut CompileState<'v, 'tcx>) {
    let krate = s.hir_crate.unwrap();
    let map = s.hir_map.unwrap();
    let mut visitor = SomeVisitor { map };
    hir_visit::walk_crate(&mut visitor, krate);
}
But then the function hir becomes incompatible for the callback. playground
I assume that I may need to use HRTB here?
Update:
My current workaround is to use transmute. (playground). Surely there must be a better way?
The solution was to realize that map also contains a krate as a reference, but with the correct lifetime, so I don't have to introduce explicit lifetimes:
hir_visit::walk_crate(&mut visitor, visitor.map.krate());
playground
During build.rs time, I'm generating Rust code that calls a C function with a variable number of arguments:
//build.rs
fn generate_code(param_name: &str, param_type: &str);
This generates:
c_func(x, {param_name});
where param_type is any integer type.
The problem is with "integer promotions". For example, this code gives a compile-time error because p should be promoted to a c_uint by the C calling conventions for variadics and Rust does not do it implicitly:
use std::os::raw::c_uint;

extern "C" {
    fn c_func(x: c_uint, ...);
}

fn main() {
    let x: c_uint = 1;
    let p: u8 = 2;
    unsafe { c_func(x, p); }
}
To promote the integer, I need to know the size of c_uint on the target platform so I can convert integer types with size <= c_uint to c_uint (and signed types to c_int). Maybe there is some kind of trick I can do with the help of generated code (like c_func(x, p.promote());), where the promote function is part of some magic trait that promotes to the suitable type?
I do not believe there's any way of accessing that information in the build script.
where the promote function is part of some magic trait that promotes to the suitable type?
That sounds like a workable solution. You can implement this trait for various types depending on the target platform:
extern crate libc;

trait Promote {
    type Output;
    fn promote(self) -> Self::Output;
}

// This can be made into a macro for all the "easy" cases
impl Promote for u8 {
    type Output = libc::c_uint;
    fn promote(self) -> Self::Output {
        self as Self::Output
    }
}

#[cfg(target_pointer_width = "32")]
impl Promote for u64 {
    type Output = libc::uint64_t;
    fn promote(self) -> Self::Output {
        self as Self::Output
    }
}

#[cfg(target_pointer_width = "64")]
impl Promote for u64 {
    type Output = libc::c_uint;
    fn promote(self) -> Self::Output {
        self as Self::Output
    }
}

extern "C" {
    fn foo(a: u8, ...);
}

fn main() {
    unsafe {
        foo(1, 1u8.promote(), 1u64.promote());
    }
}
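The "easy" cases hinted at in the comment above can indeed be macro-generated. A sketch using only std (`promote_to!` is a made-up name; the list assumes the common case where c_int/c_uint are at least 32 bits wide):

```rust
use std::os::raw::{c_int, c_uint};

trait Promote {
    type Output;
    fn promote(self) -> Self::Output;
}

// Generates one Promote impl per `from => to` pair.
macro_rules! promote_to {
    ($($from:ty => $to:ty),+ $(,)?) => {
        $(
            impl Promote for $from {
                type Output = $to;
                fn promote(self) -> Self::Output {
                    self as $to
                }
            }
        )+
    };
}

promote_to! {
    u8  => c_uint,
    u16 => c_uint,
    i8  => c_int,
    i16 => c_int,
}

fn main() {
    // Narrow unsigned types widen to c_uint, signed ones to c_int,
    // mirroring C's default argument promotions for variadics.
    let a: c_uint = 2u8.promote();
    let b: c_int = (-3i16).promote();
    println!("{} {}", a, b);
}
```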
I have an object that can be in either of two modes: a source or a sink. It is always in one of them and it is always known at compile time (when passed the object you know if you are going to read or write to it obviously).
I can put all the methods on the same object and just assume I won't be called improperly (or raise an error when I am), or I was thinking I could make two tuple structs of the single underlying object and attach the methods to those tuple structs instead. The methods are almost entirely disjoint.
It is kind of abusing the fact that both tuple structs have the same layout and there is zero overhead for the casts and tuple storage.
Think of this similar to the Java ByteBuffer and related classes where you write then flip then read then flip back and write more. Except this would catch errors in usage.
However, it does seem a little unusual and might be overly confusing for such a small problem. And it seems like there is a better way to do this -- only requirement is zero overhead so no dynamic dispatch.
https://play.rust-lang.org/?gist=280d2ec2548e4f38e305&version=stable
#[derive(Debug)]
struct Underlying {
    a: u32,
    b: u32,
}

#[derive(Debug)]
struct FaceA(Underlying);

impl FaceA {
    fn make() -> FaceA { FaceA(Underlying { a: 1, b: 2 }) }
    fn doa(&self) { println!("FaceA do A {:?}", *self); }
    fn dou(&self) { println!("FaceA do U {:?}", *self); }
    fn tob(&self) -> &FaceB { unsafe { std::mem::transmute::<&FaceA, &FaceB>(self) } }
}

#[derive(Debug)]
struct FaceB(Underlying);

impl FaceB {
    fn dob(&self) { println!("FaceB do B {:?}", *self); }
    fn dou(&self) { println!("FaceB do U {:?}", *self); }
    fn toa(&self) -> &FaceA { unsafe { std::mem::transmute::<&FaceB, &FaceA>(self) } }
}

fn main() {
    let a = FaceA::make();
    a.doa();
    a.dou();

    let b = a.tob();
    b.dob();
    b.dou();

    let aa = b.toa();
    aa.doa();
    aa.dou();
}
First of all, it seems like you don't understand how ownership works in Rust; you may want to read the Ownership chapter of the Rust Book. The way you keep re-aliasing the original FaceA is exactly how you would enable the very thing you say you want to avoid. Also, all the borrows are immutable, so it's not clear how you intend to do any sort of mutation.
As such, I've written a new example from scratch that involves going between two types with disjoint interfaces (view on playpen).
#[derive(Debug)]
pub struct Inner {
    pub value: i32,
}

impl Inner {
    pub fn new(value: i32) -> Self {
        Inner {
            value: value,
        }
    }
}

#[derive(Debug)]
pub struct Upper(Inner);

impl Upper {
    pub fn new(inner: Inner) -> Self {
        Upper(inner)
    }

    pub fn into_downer(self) -> Downer {
        Downer::new(self.0)
    }

    pub fn up(&mut self) {
        self.0.value += 1;
    }
}

#[derive(Debug)]
pub struct Downer(Inner);

impl Downer {
    pub fn new(inner: Inner) -> Self {
        Downer(inner)
    }

    pub fn into_upper(self) -> Upper {
        Upper::new(self.0)
    }

    pub fn down(&mut self) {
        self.0.value -= 1;
    }
}

fn main() {
    let mut a = Upper::new(Inner::new(0));
    a.up();

    let mut b = a.into_downer();
    b.down();
    b.down();
    b.down();

    let mut c = b.into_upper();
    c.up();

    show_i32(c.0.value);
}

#[inline(never)]
fn show_i32(v: i32) {
    println!("v: {:?}", v);
}
Here, the into_upper and into_downer methods consume the subject value, preventing anyone from using it afterwards (try accessing a after the call to a.into_downer()).
This should not be particularly inefficient; there is no heap allocation going on here, and Rust is pretty good at moving values around efficiently. If you're curious, this is what the main function compiles down to with optimisations enabled:
mov edi, -1
jmp _ZN8show_i3220h2a10d619fa41d919UdaE
It literally inlines the entire program (save for the show function that I specifically told it not to inline). Unless profiling shows this to be a serious performance problem, I wouldn't worry about it.