I have a derive-style procedural macro, where I would like to calculate the potential maximum length of the serialized version of a struct. For example, given the struct TestStruct below, I would like to call some function with all the fields to calculate the worst-case length.
#[derive(MyLengthProcMacro)]
struct TestStruct {
    f1: u32,
    f2: i64,
    f3: SomeOtherStruct
}
For primitives this could look something like:
use quote::quote;
use syn::Type;

fn get_string_length(ty: &Type) -> usize {
    let type_string = quote! { #ty }.to_string();
    match type_string.replace(" ", "").as_str() {
        "str" => panic!("String slices must be annotated with a length using #[at_arg()]"),
        "tuple" => panic!("Tuples are not supported!"),
        "char" => "a".len(),
        "bool" => "false".len(),
        "isize" => format!("{:?}", std::isize::MAX).len(),
        "usize" => format!("{:?}", std::usize::MAX).len(),
        "u8" => format!("{:?}", std::u8::MAX).len(),
        "u16" => format!("{:?}", std::u16::MAX).len(),
        "u32" => format!("{:?}", std::u32::MAX).len(),
        "u64" => format!("{:?}", std::u64::MAX).len(),
        "u128" => format!("{:?}", std::u128::MAX).len(),
        "i8" => format!("{:?}", std::i8::MIN).len(),
        "i16" => format!("{:?}", std::i16::MIN).len(),
        "i32" => format!("{:?}", std::i32::MIN).len(),
        "i64" => format!("{:?}", std::i64::MIN).len(),
        "i128" => format!("{:?}", std::i128::MIN).len(),
        "f32" => format!("{:?}", std::f32::MIN).len(),
        "f64" => format!("{:?}", std::f64::MIN).len(),
        _ => {
            // println!("Unexpected type: {:?}", type_string);
            0
        }
    }
}
The problem I am asking about here concerns non-primitive fields, e.g. f3: SomeOtherStruct. Is there a way to access the fields of fields in a proc macro?
No.
Macros are expanded before that kind of analysis is done, so the compiler has no idea what SomeOtherStruct is at that point.
However, this is not how you should implement that macro! Your approach doesn't allow the user to use type aliases.
What you should do instead is use your trait recursively: for each field, sum <T as MyLength>::my_length(), where T is the type of the field.
Then manually implement your trait for the primitive types.
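To make that concrete, here is a minimal sketch of the trait side of this approach. The names MaxLength and max_length are made up for illustration; the derive macro would generate the struct impls by summing the fields.

pub trait MaxLength {
    /// Worst-case serialized length of this type.
    fn max_length() -> usize;
}

// Manual impls for the primitives, mirroring the match arms above.
impl MaxLength for u32 {
    fn max_length() -> usize {
        u32::MAX.to_string().len() // "4294967295" -> 10
    }
}

impl MaxLength for i64 {
    fn max_length() -> usize {
        i64::MIN.to_string().len() // includes the leading '-'
    }
}

// Roughly what the derive would expand to for a nested struct and for TestStruct:
struct SomeOtherStruct { inner: u32 }

impl MaxLength for SomeOtherStruct {
    fn max_length() -> usize {
        <u32 as MaxLength>::max_length()
    }
}

struct TestStruct { f1: u32, f2: i64, f3: SomeOtherStruct }

impl MaxLength for TestStruct {
    fn max_length() -> usize {
        <u32 as MaxLength>::max_length()
            + <i64 as MaxLength>::max_length()
            + <SomeOtherStruct as MaxLength>::max_length()
    }
}

Because each generated impl only names the field's type, type aliases resolve correctly, and nested structs work as long as they also implement (or derive) the trait.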
I'm new to Rust. Consider the following code:
fn my_function(text: &str) -> Result<String, _WHAT_SHOULD_I_USE_HERE_> {
    let word = OtherLib::text(&text)?;
    ...
    Ok(word)
}
OtherLib::text has a return type of Result<String, anyhow::Error>, but my problem is that Rust forces me to write a return type on my_function, and I am not using anyhow in my project.
Is there any way to solve this without installing anyhow? And is there an easy way to auto-complete this in VS Code?
What I expected:
fn my_function(text: &str) -> Result<String, _JUST_INFER_IT_> {
and the caller of my function will get
Result<String, Error1 | Error2>
By the way, using _ does not work for me:
fn my_function(text: &str) -> Result<String, _> {
// the placeholder `_` is not allowed within types on item signatures for return types
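For reference, one common dependency-free way to give such a function a concrete error type is to box the error. A sketch, using a standard-library error as a stand-in for OtherLib (which is hypothetical here):

use std::error::Error;
use std::num::ParseIntError;

// Stand-in for OtherLib::text: any call returning a Result whose error
// type implements std::error::Error.
fn other_lib_text(text: &str) -> Result<i32, ParseIntError> {
    text.parse::<i32>()
}

// Boxing the error gives one concrete type to name in the signature,
// and `?` converts the library error automatically via From.
fn my_function(text: &str) -> Result<String, Box<dyn Error>> {
    let word = other_lib_text(text)?;
    Ok(word.to_string())
}

fn main() {
    println!("{:?}", my_function("42"));   // Ok("42")
    println!("{:?}", my_function("nope")); // Err(...)
}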
Part 2
I think Rust doesn't have union types 😅, so, based on OtherLib::text, I came up with the following imaginary type:
imaginary type: Result<String, ErrorKindEnum1 | ErrorKindEnum2 | Error>
current type : Result<String, Error>
where | is a union and the Error at the end is just one value/kind of ErrorKindEnum3, for example ErrorKindEnum3::Error1.
My new problem is how I can match on it like the following, because VS Code doesn't give me any IntelliSense:
match value {
    ErrorKindEnum1::Error1 => todo!(),
    ErrorKindEnum1::Error2 => todo!(),
    ErrorKindEnum2::Error1 => todo!(),
    ...
    ErrorKindEnum3::Error1 => todo!(), // this is okay
    ErrorKindEnum3::Error2 => todo!(), // this should error. Not possible
}
As much as possible, without type casting or hardcoding types 😅.
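For context, the closest Rust equivalent to the Error1 | Error2 union imagined above is an enum with one variant per error source, plus From impls so that ? converts automatically; a match over it is then exhaustive over exactly the variants that can occur. A sketch using stand-ins for the question's hypothetical types:

// Stand-ins for the question's hypothetical library error types.
#[derive(Debug)]
enum ErrorKindEnum1 { Error1, Error2 }
#[derive(Debug)]
enum ErrorKindEnum2 { Error1 }

// The "union" is spelled as an enum with one variant per error source.
#[derive(Debug)]
enum MyError {
    Lib1(ErrorKindEnum1),
    Lib2(ErrorKindEnum2),
}

// From impls let `?` convert each library error into MyError.
impl From<ErrorKindEnum1> for MyError {
    fn from(e: ErrorKindEnum1) -> Self { MyError::Lib1(e) }
}
impl From<ErrorKindEnum2> for MyError {
    fn from(e: ErrorKindEnum2) -> Self { MyError::Lib2(e) }
}

// Hypothetical fallible calls standing in for the real libraries.
fn lib1_call(fail_fast: bool) -> Result<String, ErrorKindEnum1> {
    if fail_fast { Err(ErrorKindEnum1::Error1) } else { Err(ErrorKindEnum1::Error2) }
}
fn lib2_call() -> Result<String, ErrorKindEnum2> { Err(ErrorKindEnum2::Error1) }

fn my_function(text: &str) -> Result<String, MyError> {
    let word = lib1_call(text.is_empty())?; // ErrorKindEnum1 -> MyError via From
    let extra = lib2_call()?;               // ErrorKindEnum2 -> MyError via From
    Ok(format!("{word}{extra}"))
}

fn main() {
    // The match is exhaustive over exactly the errors that can occur.
    match my_function("hello") {
        Ok(s) => println!("{s}"),
        Err(MyError::Lib1(ErrorKindEnum1::Error1)) => println!("lib1: error 1"),
        Err(MyError::Lib1(ErrorKindEnum1::Error2)) => println!("lib1: error 2"),
        Err(MyError::Lib2(ErrorKindEnum2::Error1)) => println!("lib2: error 1"),
    }
}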
Part 3
enum MyError {
    A, B, C
}

fn greet() -> MyError {
    if ... {
        return MyError::A;
    }
    if ... {
        return MyError::B;
    }
}
fn main() {
    match greet() {
        MyError::A => todo!(),
        MyError::B => todo!(),
        MyError::C => todo!(), // Rust can't tell me this is impossible?
    }
}
I am using the Serde crate to deserialise a JSON file, which has a nested structure like this:
struct Nested {
    a: Vec<Foo>,
    b: u8,
}

struct Foo {
    c: Bar,
    d: Vec<f32>,
}

struct Bar {
    e: u32,
    f: String,
}
Part of the application's purpose is to check for missing parameters (or parameters with incorrect types) and then display a nicely printed list of errors found in the file, so I need to handle parameters that are missing or wrongly typed.
I came across this great post that helped solve my issue: wrap each parameter in an enum that holds the parsed value on success, the raw value on failure, or a final variant if the parameter was missing (since the nested structures might also be missing, I wrapped them in the same enum):
pub enum TryParse<T> {
    Parsed(T),
    Unparsed(Value),
    NotPresent
}

struct Nested {
    a: TryParse<Vec<Foo>>,
    b: TryParse<u8>,
}

struct Foo {
    c: TryParse<Bar>,
    d: TryParse<Vec<f32>>,
}

struct Bar {
    e: TryParse<u32>,
    f: TryParse<String>,
}
However, I'm not sure how to access the values now without unpacking every step into a match statement. For example, I can access b very easily:
match file.b {
    Parsed(val) => {println!("Got parameter of {}", val)},
    Unparsed(val) => {println!("Invalid type: {:?}", val)},
    NotPresent => {println!("b not found")},
};
However, I'm not sure how to access the nested ones (c, d, e and f). I can't use unwrap or expect, since this isn't technically a Result, so how do I unpack these?
if file.a.c.e.Parsed() && file.a.c.e == 32 {... //invalid
if file.a.d && file.a.d.len() == 6... //invalid
I know this in a way flies against Rust's 'handle every outcome' philosophy, and I do want to handle them, but I want to know if there is a nicer way than 400 nested match statements (the above example is very simplified; the files I am using have up to 6 nested layers, each parameter in the top node has at least 3 layers, and some are vectors as well)…
Perhaps I need to implement a function similar to unwrap() on my TryParse? Or would it be better to wrap each parameter in a Result, extend that with the Deserialize trait, and then somehow store an error in the Err variant that says whether it was a type error or a missing parameter?
EDIT
I tried adding the following, some of which works and some of which does not:
impl<T> TryParse<T> {
    pub fn is_ok(self) -> bool { // works
        match self {
            Self::Parsed(_t) => true,
            _ => false,
        }
    }

    pub fn is_absent(self) -> bool { // works
        match self {
            Self::NotPresent => true,
            _ => false,
        }
    }

    pub fn is_invalid(self) -> bool { // works
        match self {
            Self::Unparsed(_) => true,
            _ => false,
        }
    }

    pub fn get(self) -> Result<T, dyn Error> { // doesn't work
        match self {
            Self::Parsed(t) => Ok(t),
            Self::Unparsed(e) => Err(e),
            Self::NotPresent => Err("Invalid")
        }
    }
}
I can't believe it is this hard just to get the result. Should I just avoid nested enums, or get rid of the TryParse enum/functions altogether and wrap everything in a Result, so the user simply knows whether it worked or didn't (but with no explanation of why it failed)?
Implementing unwrap() is one possibility. Using Result with a custom error type is another. You can deserialize directly into a Result with #[serde(deserialize_with = "...")], or by using a newtype wrapper.
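For illustration, here is roughly what the deserialize_with route could look like. This is only a sketch: the helper name lenient is made up, it relies on serde and serde_json, and it does not handle the missing-parameter case, which would still need something like Option or the TryParse enum above.

use serde::de::{Deserialize, DeserializeOwned, Deserializer};
use serde_json::Value;

// Hypothetical helper for #[serde(deserialize_with = "...")]: the outer
// deserialization always succeeds, and a failed typed parse is recorded
// as Err(raw value) instead of aborting the whole file.
fn lenient<'de, D, T>(deserializer: D) -> Result<Result<T, Value>, D::Error>
where
    D: Deserializer<'de>,
    T: DeserializeOwned,
{
    let raw = Value::deserialize(deserializer)?;
    Ok(T::deserialize(&raw).map_err(|_| raw))
}

#[derive(serde::Deserialize)]
struct Nested {
    #[serde(deserialize_with = "lenient")]
    b: Result<u8, Value>,
}

fn main() {
    let nested: Nested = serde_json::from_str(r#"{ "b": "not a number" }"#).unwrap();
    match &nested.b {
        Ok(v) => println!("parsed b = {v}"),
        Err(raw) => println!("b did not parse, raw value was {raw}"),
    }
}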
However, an under-used power of pattern matching is nested patterns. For example, instead of if file.a.c.e.Parsed() && file.a.c.e == 32 you can write:
if let TryParse::Parsed(a) = &file.a {
    // Unfortunately we cannot combine this `if let` with the surrounding `if let`,
    // because `Vec` doesn't support pattern matching (currently).
    if let [Foo {
        c:
            TryParse::Parsed(Bar {
                e: TryParse::Parsed(32),
                ..
            }),
        ..
    }, ..] = a.as_slice()
    {
        // ...
    }
}
This may not be the most Rust-y way of doing it, but for those like me moving from a language like C/Python/C++, this is the way I have done it; it still allows me to quickly check whether I have an error and use the match syntax to handle it. Thanks to @Chayim Friedman for assisting with this; his way is probably better, but this made the most sense to me:
use serde::de::{Deserialize, DeserializeOwned, Deserializer};
use serde_json::Value;

#[derive(Debug)]
pub enum TryParse<T> {
    Parsed(T),
    Unparsed(Value),
    NotPresent
}

impl<'de, T: DeserializeOwned> Deserialize<'de> for TryParse<T> {
    fn deserialize<D: Deserializer<'de>>(deserializer: D) -> Result<Self, D::Error> {
        match Option::<Value>::deserialize(deserializer)? {
            None => Ok(TryParse::NotPresent),
            Some(value) => match T::deserialize(&value) {
                Ok(t) => Ok(TryParse::Parsed(t)),
                Err(_) => Ok(TryParse::Unparsed(value)),
            },
        }
    }
}
impl<T> TryParse<T> {
    // pub fn is_ok(self) -> bool { ---> Use get().is_ok(), built into Result
    //     match self {
    //         Self::Parsed(_t) => true,
    //         _ => false,
    //     }
    // }

    pub fn is_absent(self) -> bool {
        match self {
            Self::NotPresent => true,
            _ => false,
        }
    }

    pub fn is_invalid(self) -> bool {
        match self {
            Self::Unparsed(_) => true,
            _ => false,
        }
    }

    pub fn get(&self) -> Result<&T, String> {
        match self {
            Self::Parsed(t) => Ok(t),
            Self::Unparsed(v) => Err(format!("Unable to Parse {:?}", v)),
            Self::NotPresent => Err("Parameter not Present".to_string())
        }
    }

    // pub fn get_direct(&self) -> &T {
    //     match self {
    //         Self::Parsed(t) => t,
    //         _ => panic!("Can't get this value!"),
    //     }
    // }
}
match &nested.a.get().unwrap()[1].c.get().expect("Missing C Parameter").e {
    Parsed(val) => {println!("Got value of E: {}", val)},
    Unparsed(val) => {println!("Invalid Type: {:?}", val)},
    NotPresent => {println!("Param E Not Found")},
};
// Note the use of '&' at the beginning, because we need to borrow a reference to it
I know I need to change my mindset to the Rust way of thinking, and I am completely open to other suggestions if they can demonstrate some working code.
I am working on some language bindings to Arrayfire using the arrayfire-rust crate.
Arrayfire has a typed struct Array<T> which represents a matrix. All acceptable types implement the HasAfEnum trait. This trait has a number of associated types, whose values are not the same for the types that implement this trait.
Since I need a reference to the array in an RwLock for safe language interop, I have defined the following struct:
pub struct ExAfRef(pub RwLock<ExAfArray>);

impl ExAfRef {
    pub fn new(slice: &[u8], dim: Dim4, dtype: ExAfDType) -> Self {
        Self(RwLock::new(ExAfArray::new(slice, dim, dtype)))
    }

    pub fn value(&self) -> ExAfArray {
        match self.0.try_read() {
            Ok(refer) => (*refer),
            Err(_) => unreachable!(),
        }
    }
}
which is contained by a struct:
pub struct ExAf {
    pub resource: ResourceArc<ExAfRef>,
}

impl ExAf {
    pub fn new(slice: &[u8], dim: Dim4, dtype: ExAfDType) -> Self {
        Self {
            resource: ResourceArc::new(ExAfRef::new(slice, dim, dtype)),
        }
    }

    // This function is broken
    pub fn af_value<T: HasAfEnum>(&self) -> &Array<T> {
        self.resource.value().value()
    }
}
With the help of the following enum:
pub enum ExAfArray {
    U8(Array<u8>),
    S32(Array<i32>),
    S64(Array<i64>),
    F32(Array<f32>),
    F64(Array<f64>),
}

impl ExAfArray {
    pub fn new(slice: &[u8], dim: Dim4, dtype: ExAfDType) -> Self {
        let array = Array::new(slice, dim);
        match dtype {
            ExAfDType::U8 => ExAfArray::U8(array),
            ExAfDType::S32 => ExAfArray::S32(array.cast::<i32>()),
            ExAfDType::S64 => ExAfArray::S64(array.cast::<i64>()),
            ExAfDType::F32 => ExAfArray::F32(array.cast::<f32>()),
            ExAfDType::F64 => ExAfArray::F64(array.cast::<f64>()),
        }
    }

    // This function is broken
    pub fn value<T: HasAfEnum>(&self) -> &Array<T> {
        // match self {
        //     ExAfArray::U8(array) => array,
        //     ExAfArray::S32(array) => array,
        //     ExAfArray::S64(array) => array,
        //     ExAfArray::F32(array) => array,
        //     ExAfArray::F64(array) => array,
        // }
        if let ExAfArray::U8(array) = self {
            return array;
        } else if let ExAfArray::S32(array) = self {
            return array;
        } else if let ExAfArray::S64(array) = self {
            return array;
        } else if let ExAfArray::F32(array) = self {
            return array;
        } else {
            let ExAfArray::F64(array) = self;
            return array;
        }
    }

    pub fn get_type(&self) -> ExAfDType {
        match self {
            ExAfArray::U8(array) => ExAfDType::U8,
            ExAfArray::S32(array) => ExAfDType::S32,
            ExAfArray::S64(array) => ExAfDType::S64,
            ExAfArray::F32(array) => ExAfDType::F32,
            ExAfArray::F64(array) => ExAfDType::F64,
        }
    }
}
I have used an enum because generic structs are not supported in my language-interop "framework", and because the HasAfEnum trait has associated types, dynamic dispatch using dyn is not viable (at least to my knowledge).
This has worked fine for initializing new arrays. However, when I need to apply some operation to an array, I need to be able to access the value stored in the enum variant, and I am unable to write a type signature for such an accessor function: dynamic dispatch is not usable, and generics require too much boilerplate.
Since all variants are tuple variants, is there some way I can access the value of the tuple variant using a built-in enum feature?
EDIT:
I am using rustler
In short: no, there is not currently a way in Rust to do what you seem to be trying to do.
Your functions are broken because you are trying to use generics orthogonally to how they work. When a generic function is called in Rust, the caller fills in the type parameters, not the callee. However, your enum in a sense "knows" what the concrete array type is, so only it can determine what that type parameter is supposed to be. If this mismatch is blocking your progress, this usually calls for a reconsideration of your code structure.
This also explains why there is no built-in enum method that does what you're trying to do. That method would run into the same issue as your value method. When you want to inspect the contents of an enum in Rust, you need to pattern match on it.
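To see why no built-in method can have that shape, here is a self-contained toy analogue (not from the original answer): the enum knows which variant it holds, but a generic method's T is chosen by the caller, so the only way to get at the contents is to match.

enum Number {
    Int(i64),
    Float(f64),
}

impl Number {
    // This cannot be written: which T would it be? The caller picks T,
    // not the enum, so the signature promises something the body can't keep.
    // fn value<T>(&self) -> &T { ... }

    // What does work is matching and handling each variant where its
    // concrete type is known.
    fn describe(&self) -> String {
        match self {
            Number::Int(i) => format!("int {i}"),
            Number::Float(f) => format!("float {f}"),
        }
    }
}

fn main() {
    println!("{}", Number::Int(3).describe());
    println!("{}", Number::Float(2.5).describe());
}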
There is at least one way to try to accomplish your goal, but I would not really recommend it. One change that makes the code closer to viable is passing a closure into the function to make the modification (the syntax below is not currently valid Rust, but it gets the idea across):
pub fn modify<'a, F>(&'a self, op: F)
where
    F: for<T: HasAfEnum> FnOnce(&'a Array<T>),
{
    // This looks repetitive, but the idea is that in each branch
    // the type parameter T takes on the appropriate type for the variant
    match self {
        ExAfArray::U8(array) => op(array),
        ExAfArray::S32(array) => op(array),
        ExAfArray::S64(array) => op(array),
        ExAfArray::F32(array) => op(array),
        ExAfArray::F64(array) => op(array),
    }
}
Unfortunately the for<T> FnTrait(T) syntax does not exist yet and I'm not even sure if there's a proposal for it to be added. This can be worked around through a macro:
pub(crate) fn call_unary<F, T, U>(arg: T, f: F) -> U
where
    F: FnOnce(T) -> U,
{
    f(arg)
}

macro_rules! modify {
    ($ex_af_array:expr, $op:expr) => {
        match &$ex_af_array {
            ExAfArray::U8(array) => call_unary(array, $op),
            ExAfArray::S32(array) => call_unary(array, $op),
            ExAfArray::S64(array) => call_unary(array, $op),
            ExAfArray::F32(array) => call_unary(array, $op),
            ExAfArray::F64(array) => call_unary(array, $op),
        }
    };
}
The call_unary helper is needed to ensure type inference works properly. ($op)(array) will fail to compile when the types of the arguments to $op need to be inferred.
Now this solution mostly covers the functionality that for<T> FnTrait(T) would provide, but it's not very clean code (especially after the macro body is sanitized), and the compiler errors will be poor if the macro is misused.
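For completeness, hypothetical usage of the modify! macro might look like the following. It assumes the types above are in scope and that arrayfire's Array exposes an elements() method; treat the exact method call as an assumption.

// Build an enum-wrapped array, then run a closure on whichever variant it holds.
// The closure is expanded once per match arm, so its return type must be the
// same for every variant (usize here).
let ex = ExAfArray::new(&[1u8, 2, 3, 4], Dim4::new(&[4, 1, 1, 1]), ExAfDType::U8);
let n = modify!(ex, |array| array.elements());
println!("wrapped array holds {} elements", n);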
Apologies for the confusing title.
Say I have user_ids: Vec<u32> and a function fn get_year(user_id: u32) -> Result<Option<u32>, Error> (this function involves a database lookup).
I would like to get the years for all those user_ids and collect them into a HashMap where the key is the id and the value is the year. It should return an error immediately if any get_year call returns an error.
I am able to get it working with a for loop:
use std::collections::HashMap;

#[derive(Debug)]
enum Error {
    ConnectionFail
}

fn get_year(user_id: u32) -> Result<Option<u32>, Error> {
    Err(Error::ConnectionFail)
}

fn get_age_lookup(user_ids: Vec<u32>) -> Result<HashMap<u32, u32>, Error> {
    let mut age_lookup = HashMap::new();
    for &user_id in user_ids.iter() {
        let year_result = get_year(user_id)?;
        match year_result {
            None => (),
            Some(year) => { age_lookup.insert(user_id, year); }
        };
    }
    Ok(age_lookup)
}

fn main() {
    let age_lookup = get_age_lookup(vec![1, 2, 3]);
    println!("result: {:?}!", age_lookup);
}
But I am not able to do it without for. I can get it working, but that requires two passes, which is probably not efficient. I wonder if there is a way to make it work with only one iteration, without for?
fn get_age_lookup(user_ids: Vec<u32>) -> Result<HashMap<u32, u32>, Error> {
    let age_tuples = user_ids
        .into_iter()
        .map(|user_id| {
            get_year(user_id).map(|year_result| match year_result {
                None => None,
                Some(year) => Some((user_id, year)),
            })
        })
        .collect::<Result<Vec<Option<(u32, u32)>>, Error>>()?;
    Ok(age_tuples.into_iter().filter_map(|x| x).collect::<HashMap<u32, u32>>())
}
You are very close. The missing ingredient here is Result::transpose(), which turns a Result<Option<T>, E> into an Option<Result<T, E>>. You can filter_map() over that, which eliminates the Ok(None) values and simultaneously unwraps the nested Option, so you have an iterator of Result<T, E> at that point. Finally, collect() can convert an iterator of Results into a collection wrapped in a Result: if any item in the sequence is Err, it short-circuits and propagates the error.
fn get_age_lookup(user_ids: Vec<u32>) -> Result<HashMap<u32, u32>, Error> {
    user_ids
        .into_iter()
        .map(|user_id| {
            get_year(user_id).map(|year_result| match year_result {
                None => None,
                Some(year) => Some((user_id, year)),
            })
        })
        .filter_map(|x| x.transpose())
        .collect()
}
As an aside, you can further simplify your match using the Option::map() method, which applies a transformation to the value inside Some. So this:
match year_result {
    None => None,
    Some(year) => Some((user_id, year)),
}
Can become:
year_result.map(|year| (user_id, year))
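Putting the transpose() step and the Option::map() simplification together, the whole function condenses to the following (same types as in the question):

fn get_age_lookup(user_ids: Vec<u32>) -> Result<HashMap<u32, u32>, Error> {
    user_ids
        .into_iter()
        .map(|user_id| get_year(user_id).map(|year| year.map(|y| (user_id, y))))
        .filter_map(|x| x.transpose())
        .collect()
}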
I've written this function to parse a comma-separated string and return either a Vec<&str> or a custom error:
fn parse_input(s: &str) -> Result<Vec<&str>, ParseError> {
    match s.splitn(2, ',').next() {
        Ok(s) => s.collect::<Vec<&str>>(),
        Err(err) => Err(ParseError::InvalidInput)
    }
}
The compiler gives me this response:
Ok(s) => s.collect::<Vec<&str>>(),
^^^^^ expected enum `Option`, found enum `Result`
...
Err(err) => Err(ParseError::InvalidInput)
^^^^^^^^ expected enum `Option`, found enum `Result`
My problem is that I don't understand how to change the code to satisfy the compiler. What is wrong with this function?
.next() returns an Option<&str>, i.e. Some(s) or None.
fn parse_input(s: &str) -> Result<Vec<&str>, ParseError> {
    match s.splitn(2, ',').next() {
        Some(s) => s.collect::<Vec<&str>>(),
        None => Err(ParseError::InvalidInput),
    }
}
Just like you wrapped the error with Err to make it a Result, the non-error needs to be wrapped with Ok.
fn parse_input(s: &str) -> Result<Vec<&str>, ParseError> {
    match s.splitn(2, ',').next() {
        Some(s) => Ok(s.collect::<Vec<&str>>()),
        None => Err(ParseError::InvalidInput),
    }
}
Whether it’s the pattern-matched Some(s) or the outer parameter s, s.collect() doesn’t make sense. Going by your description, maybe you want to split the string on commas, collect that into a Vec, and produce an error if the result doesn’t consist of exactly two parts?
fn parse_input(s: &str) -> Result<Vec<&str>, ParseError> {
    let parts: Vec<_> = s.split(',').collect();
    if parts.len() == 2 {
        Ok(parts)
    } else {
        Err(ParseError::InvalidInput)
    }
}
Maybe a pair would be better? Also, if more than one comma is acceptable and you just want to split on the first one, split_once fits perfectly.
fn parse_input(s: &str) -> Result<(&str, &str), ParseError> {
    s.split_once(',').ok_or(ParseError::InvalidInput)
}
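For reference, a small self-contained example of how the split_once version behaves (ParseError is defined minimally here just to make it runnable):

#[derive(Debug, PartialEq)]
enum ParseError {
    InvalidInput,
}

fn parse_input(s: &str) -> Result<(&str, &str), ParseError> {
    s.split_once(',').ok_or(ParseError::InvalidInput)
}

fn main() {
    // split_once splits on the first comma only.
    assert_eq!(parse_input("key,a,b"), Ok(("key", "a,b")));
    assert_eq!(parse_input("no comma here"), Err(ParseError::InvalidInput));
    println!("ok");
}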