I have a type Instruction and would like to use std::convert::TryFrom to convert from a string.
Should I implement it for String or for &str? If I use &str, I am obliged to use the &* pattern or as_ref().
I have something like this (Rust Playground permalink):
use std::convert::TryFrom;
enum Instruction {
Forward, /* others removed for brevity */
}
#[derive(Debug)]
struct InstructionParseError(char);
impl std::convert::TryFrom<&str> for Instruction {
type Error = InstructionParseError;
fn try_from(input: &str) -> Result<Self, Self::Error> {
match input {
"F" => Ok(Instruction::Forward),
_ => unimplemented!(), // For brevity
}
}
}
fn main() {
// I use a string because this input can come from stdio.
let instr = String::from("F");
let instr = Instruction::try_from(&*instr);
}
I read this answer: Should Rust implementations of From/TryFrom target references or values? but I am wondering what the best option is: Implement both? Use advanced impl typing?
One solution I thought of after reading @SirDarius' comment is to also implement it for String and use as_ref() or &* inside:
use std::convert::TryFrom;
enum Instruction {
Forward, /* others removed for brevity */
}
#[derive(Debug)]
struct InstructionParseError(char);
impl std::convert::TryFrom<String> for Instruction {
type Error = InstructionParseError;
fn try_from(input: String) -> Result<Self, Self::Error> {
Instruction::try_from(input.as_ref())
}
}
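With both impls in place, owned and borrowed strings both convert; a quick usage sketch (variable names are illustrative):
fn main() {
    let owned = String::from("F");
    let _ = Instruction::try_from(owned); // uses the TryFrom<String> impl
    let _ = Instruction::try_from("F");   // uses the TryFrom<&str> impl
}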
As described in Should Rust implementations of From/TryFrom target references or values?, maybe in the future, if some change is made to the blanket implementations, AsRef will be usable.
*** This doesn't actually work, as SirDarius points out below.
Use T: AsRef<str>.
impl<T: AsRef<str>> std::convert::TryFrom<T> for Instruction {
type Error = InstructionParseError;
fn try_from(input: T) -> Result<Self, Self::Error> {
let input: &str = input.as_ref();
match input {
"F" => Ok(Instruction::Forward),
_ => unimplemented!(), // For brevity
}
}
}
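For reference, the reason this generic impl is rejected: the standard library already has the blanket impl<T, U: Into<T>> TryFrom<U> for T, and the compiler cannot rule out a type implementing both AsRef<str> and Into<Instruction>, so the two impls overlap and rustc reports E0119 (conflicting implementations). A plain helper function generic over AsRef<str>, built on the TryFrom<&str> impl from the question, avoids the overlap; a minimal sketch (parse_instruction is a made-up name):
fn parse_instruction<S: AsRef<str>>(input: S) -> Result<Instruction, InstructionParseError> {
    // The S: AsRef<str> bound pins as_ref() to return &str.
    Instruction::try_from(input.as_ref())
}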
I am trying to understand the following enum from this repo
#[repr(C)]
#[derive(BorshSerialize, BorshDeserialize, Debug, Clone)]
pub struct InitEscrowArgs {
pub data: EscrowReceive,
}
#[repr(C)]
#[derive(BorshSerialize, BorshDeserialize, Debug, Clone)]
pub struct ExchangeArgs {
pub data: EscrowReceive,
}
#[derive(BorshSerialize, BorshDeserialize, Clone)]
pub enum EscrowInstruction {
InitEscrow(InitEscrowArgs),
Exchange(ExchangeArgs),
CancelEscrow(),
}
and its use in this match from this repo:
pub fn process(
program_id: &Pubkey,
accounts: &[AccountInfo],
instruction_data: &[u8],
) -> ProgramResult {
let instruction = EscrowInstruction::try_from_slice(instruction_data)?;
match instruction {
EscrowInstruction::InitEscrow(args) => {
msg!("Instruction: Init Escrow");
Self::process_init_escrow(program_id, accounts, args.data.amount)
}
EscrowInstruction::Exchange(args) => {
msg!("Instruction: Exchange Escrow");
Self::process_exchange(program_id, accounts, args.data.amount)
}
EscrowInstruction::CancelEscrow() => {
msg!("Instruction: Cancel Escrow");
Self::process_cancel(program_id, accounts)
}
}
}
I understand that this try_from_slice method gets some sort of byte array and deserializes it.
I do not understand how it determines which enum variant to use.
The enum has 3 choices, InitEscrow / Exchange / CancelEscrow, but what determines which one the match is supposed to select?
It seems to me that InitEscrowArgs and ExchangeArgs both take the same struct, both containing data of the EscrowReceive type.
The try_from_slice method is part of the BorshDeserialize trait, which is derived on the enum in question. So the choice between enum variants is made by the implementation of the deserializer.
To see what is really going on, I've built the simplest possible example:
use borsh::BorshDeserialize;
#[derive(BorshDeserialize)]
enum Enum {
Variant1(u8),
Variant2,
}
By using cargo expand and a little manual cleanup, we can get the following equivalent code:
impl borsh::de::BorshDeserialize for Enum {
fn deserialize(buf: &mut &[u8]) -> Result<Self, std::io::Error> {
let variant_idx: u8 = borsh::BorshDeserialize::deserialize(buf)?;
let return_value = match variant_idx {
0u8 => Enum::Variant1(borsh::BorshDeserialize::deserialize(buf)?),
1u8 => Enum::Variant2,
_ => {
let msg = format!("Unexpected variant index: {}", variant_idx);
return Err(std::io::Error::new(
std::io::ErrorKind::InvalidInput,
msg,
));
}
};
Ok(return_value)
}
}
where the inner deserialize calls refer to impl BorshDeserialize for u8:
fn deserialize(buf: &mut &[u8]) -> Result<Self> {
if buf.is_empty() {
return Err(Error::new(
ErrorKind::InvalidInput,
ERROR_UNEXPECTED_LENGTH_OF_INPUT,
));
}
let res = buf[0];
*buf = &buf[1..];
Ok(res)
}
So it works the following way:
The deserializer tries to pull one byte from the input; if there is none, this is an error.
This byte is interpreted as the index of an enum variant; if it doesn't match one of the variants, this is an error.
If the variant contains any data, the deserializer tries to pull this data from the input; if that fails (according to the inner type's implementation), this is an error.
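To make the byte layout concrete, here is a small round trip; a sketch which assumes a borsh 0.x version (the era of the Solana code above) where BorshSerialize provides try_to_vec (newer 1.x releases expose borsh::to_vec instead):
use borsh::{BorshDeserialize, BorshSerialize};

#[derive(BorshSerialize, BorshDeserialize, Debug, PartialEq)]
enum Enum {
    Variant1(u8),
    Variant2,
}

fn main() {
    // First byte: the variant index; remaining bytes: the variant's payload.
    let bytes = Enum::Variant1(42).try_to_vec().unwrap();
    assert_eq!(bytes, [0, 42]);
    assert_eq!(Enum::try_from_slice(&bytes).unwrap(), Enum::Variant1(42));

    let bytes = Enum::Variant2.try_to_vec().unwrap();
    assert_eq!(bytes, [1]); // index only, no payload
}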
I'm wondering if there's a "proper" way of converting an enum to a &str and back.
The problem I'm trying to solve:
In the clap crate, args/subcommands are defined and identified by &strs (which I'm assuming don't fully take advantage of the type checker). I'd like to pass a Command enum to my application instead of a &str, which would be verified by the type checker and also save me from typing (typo-ing?) strings all over the place.
This is what I came up with from searching StackOverflow and std:
use std::str::FromStr;
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum Command {
EatCake,
MakeCake,
}
impl FromStr for Command {
type Err = ();
fn from_str(s: &str) -> std::result::Result<Self, Self::Err> {
match s.to_ascii_lowercase().as_str() {
"eat-cake" => Ok(Self::EatCake),
"make-cake" => Ok(Self::MakeCake),
_ => Err(()),
}
}
}
impl<'a> From<Command> for &'a str {
fn from(c: Command) -> Self {
match c {
Command::EatCake => "eat-cake",
Command::MakeCake => "make-cake",
}
}
}
fn main() {
let command_from_str: Command = "eat-cake".to_owned().parse().unwrap();
let str_from_command: &str = command_from_str.into();
assert_eq!(command_from_str, Command::EatCake);
assert_eq!(str_from_command, "eat-cake");
}
And here's a working playground:
https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=b5e9ac450fd6a79b855306e96d4707fa
Here's an abridged version of what I'm running in clap.
let matches = App::new("cake")
.setting(AppSettings::SubcommandRequiredElseHelp)
// ...
.subcommand(
SubCommand::with_name(Command::MakeCake.into())
// ...
)
.subcommand(
SubCommand::with_name(Command::EatCake.into())
// ...
)
.get_matches();
It seems to work, but I'm not sure if I'm missing something / a bigger picture.
Related:
How to use an internal library Enum for Clap Args
How do I return an error within match statement while implementing from_str in rust?
Thanks!
The strum crate may save you some work. Using strum I was able to get the simple main() you have to work without any additional From implementations.
use strum_macros::{Display, EnumString, IntoStaticStr};
#[derive(Debug, Clone, Copy, PartialEq)]
#[derive(Display, EnumString, IntoStaticStr)] // strum macros.
pub enum Command {
#[strum(serialize = "eat-cake")]
EatCake,
#[strum(serialize = "make-cake")]
MakeCake,
}
fn main() {
let command_from_str: Command = "eat-cake".to_owned().parse().unwrap();
let str_from_command: &str = command_from_str.into();
assert_eq!(command_from_str, Command::EatCake);
assert_eq!(str_from_command, "eat-cake");
}
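For reference, each derive covers one direction: EnumString provides the FromStr impl behind parse(), IntoStaticStr provides From<Command> for &'static str (what into() uses here), and Display lets you format a Command directly. The SubCommand::with_name(Command::MakeCake.into()) calls from the question should therefore compile unchanged.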
(English is not my native language; please excuse typing errors.)
In my project there is an error type:
pub type RvzResult<T> = Result<T, RvzError>;
#[derive(Error, Debug)]
pub enum RvzError {
// some error types...
OtherErr(Box<dyn Error>),
}
One day I had a Mutex object and used it like this:
pub fn some_func() -> RvzResult<()> {
// ...
let lock = the_mutex.lock()?;
// ...
}
But rustc wasn't so happy: error[E0277]: `?` couldn't convert the error to `RvzError`
I tried to impl From trait like this:
impl<T> From<PoisonError<T>> for RvzError {
fn from(err: PoisonError<T>) -> Self {
Self::OtherErr(Box::new(err))
}
}
It failed: error[E0310]: the parameter type `T` may not live long enough
A PoisonError is not just an error code, but has a special function: it allows you to bypass the lock-poisoning check and access the data anyway. The <T> parameter of PoisonError is the lock guard object that it can be asked to return, so the PoisonError in your function is a borrow of the_mutex.
Therefore, it cannot be made into a Box<dyn Error> (which is implicitly + 'static) and should not be returned from some_func() anyway. Instead, create a new error which doesn't try to contain the original PoisonError value — either a variant of RvzError:
impl<T> From<PoisonError<T>> for RvzError {
fn from(_: PoisonError<T>) -> Self {
// assumes a new unit variant `Poison` added to RvzError
Self::Poison
}
}
Or a boxed string error (using this impl):
impl<T> From<PoisonError<T>> for RvzError {
fn from(_: PoisonError<T>) -> Self {
Self::OtherErr(Box::<dyn Error>::from("PoisonError"))
}
}
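With either From impl in place, the original function compiles; a minimal sketch (the_mutex is assumed here to be a Mutex<i32> passed in):
use std::sync::Mutex;

pub fn some_func(the_mutex: &Mutex<i32>) -> RvzResult<()> {
    // `?` now converts the PoisonError into RvzError via the From impl.
    let _lock = the_mutex.lock()?;
    Ok(())
}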
I want to convert multiple environment variables to a static struct.
I can do it manually:
Env {
is_development: env::var("IS_DEVELOPMENT")
.unwrap()
.parse::<bool>()
.unwrap(),
server: Server {
host: env::var("HOST").unwrap(),
port: env::var("PORT")
.unwrap()
.parse::<u16>()
.unwrap(),
},
}
But when there are multiple values, it becomes bloated. Is there a way to make a generic helper function that will give me the value I specify or panic? Something like this (or another solution):
fn get_env_var<T>(env_var_name: String) -> T {
// panic is ok here
let var = env::var(env_var_name).unwrap();
T::from(var)
}
get_env_var<u16>("PORT") // here i got u16
get_env_var<bool>("IS_DEVELOPMENT") // here is my boolean
Full example
use crate::server::logger::log_raw;
use dotenv::dotenv;
use serde::Deserialize;
use std::env;
#[derive(Deserialize, Debug, Clone)]
pub struct Server {
pub host: String,
pub port: u16,
}
#[derive(Deserialize, Debug, Clone)]
pub struct Env {
pub is_development: bool,
pub server: Server,
}
impl Env {
pub fn init() -> Self {
dotenv().expect(".env loading fail");
// how can I specify what type I expect?
fn get_env_var<T>(env_var_name: String) -> T {
// panic is ok here
let var = env::var(env_var_name).unwrap();
T::from(var)
}
// instead of this
Env {
is_development: env::var("IS_DEVELOPMENT")
.unwrap()
.parse::<bool>()
.unwrap(),
server: Server {
host: env::var("HOST").unwrap(),
port: env::var("PORT")
.unwrap()
.parse::<u16>()
.unwrap(),
},
}
// do something like this
/*
Env {
is_development: get_env_var::<bool>("IS_DEVELOPMENT"),
server: Server {
host: get_env_var::<String>("HOST"),
port: get_env_var::<u16>("PORT"),
},
}
*/
}
}
lazy_static! {
pub static ref ENV: Env = Env::init();
}
Like in your manual version where you use str::parse, you can require the same thing str::parse does, which is FromStr. If you include a T: FromStr bound, then you'll be able to call var.parse::<T>().
use std::env;
use std::fmt::Debug;
use std::str::FromStr;
fn get_env_var<T>(env_var_name: &str) -> T
where
T: FromStr,
T::Err: Debug,
{
let var = env::var(env_var_name).unwrap();
var.parse::<T>().unwrap()
}
Then run the following by executing PORT=1234 IS_DEVELOPMENT=true cargo run:
fn main() {
println!("{}", get_env_var::<u16>("PORT"));
println!("{}", get_env_var::<bool>("IS_DEVELOPMENT"));
}
Then it will output:
1234
true
Alternatively, you might want to be able to handle VarError::NotPresent and fall back to a default.
use std::env::{self, VarError};
use std::fmt::Debug;
use std::str::FromStr;
fn get_env_var<T>(env_var_name: &str) -> Result<T, VarError>
where
T: FromStr,
T::Err: Debug,
{
let var = env::var(env_var_name)?;
Ok(var.parse().unwrap())
}
Now if you only executed PORT=1234 cargo run, then it makes it easy to fall back like this:
let is_dev = get_env_var::<bool>("IS_DEVELOPMENT")
.or_else(|err| match err {
VarError::NotPresent => Ok(false),
err => Err(err),
})
.unwrap();
println!("{:?}", is_dev);
If you want to fall back to Default when the error is VarError::NotPresent:
fn get_env_var<T>(env_var_name: &str) -> T
where
T: FromStr,
T::Err: Debug,
T: Default,
{
let var = match env::var(env_var_name) {
Err(VarError::NotPresent) => return T::default(),
res => res.unwrap(),
};
var.parse().unwrap()
}
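Usage sketch for this variant: with IS_DEVELOPMENT unset, it falls back to bool::default(), i.e. false:
fn main() {
    let is_dev: bool = get_env_var("IS_DEVELOPMENT");
    println!("{}", is_dev); // prints "false" when the variable is unset
}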
Rust genericity, inspired by Haskell's, works through traits, and specifically trait bounds. This means that when you write
fn get_env_var<T>(env_var_name: String) -> T
since there is no trait bound on T, there are essentially no capabilities for it (this is rather unlike C++).
Therefore, as far as rustc is concerned, pretty much the only thing it can do with a T is... take one as a parameter, then return it as-is.
Thus, to do anything useful with a T (including creating one, whether from something else or de novo), you need to use the correct trait and provide the correct trait bounds.
The From trait is entirely the wrong trait to involve here: it specifies total (never-failing) conversions, e.g. converting a u16 to a u32, which can never fail.
Whether it's converting a String to a bool or to a u16, the conversion is quite obviously less than total: there is an infinity of string values which are not sequences of decimal digits describing a number below 2^16.
In Rust, the signifier of fallibility tends to be Try. There is a TryFrom trait; however, for historical reasons, and as documented in its signature, the str::parse method is hooked on the FromStr trait.
This means that in order to declare that your T can be created from a string (and use the parse method to create one), you need to bound T by FromStr, and of course indicate that the conversion may fail, returning whatever error T generates when it can't be parsed from a string:
fn get_env_var<T: FromStr>(env_var_name: String) -> Result<T, T::Err> {
let var = env::var(env_var_name).unwrap();
var.parse()
}
Incidentally, taking a String as input is usually avoided unless you really have to[0]. Usually you'd take an &str; that's a lot more flexible, as it can be used e.g. with string literals (which are of type &'static str).
So
fn get_env_var<T: FromStr>(env_var_name: &str) -> Result<T, T::Err> {
let var = env::var(env_var_name).unwrap();
var.parse()
}
[0] Or sometimes for efficiency purposes.
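Put together, the Env::init from the question can use the &str-taking helper above; a sketch (unwrap keeps the panic-on-error behaviour the question said was acceptable, and each T is inferred from the field type):
impl Env {
    pub fn init() -> Self {
        dotenv().expect(".env loading fail");
        Env {
            is_development: get_env_var("IS_DEVELOPMENT").unwrap(),
            server: Server {
                host: get_env_var("HOST").unwrap(),
                port: get_env_var("PORT").unwrap(),
            },
        }
    }
}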
I'm trying to implement a method that looks like:
fn concretify<T: Any>(rc: Rc<Any>) -> Option<T> {
Rc::try_unwrap(rc).ok().and_then(|trait_object| {
let b: Box<Any> = unimplemented!();
b.downcast().ok().map(|b| *b)
})
}
However, try_unwrap doesn't work on trait objects (which makes sense, as they're unsized). My next thought was to find some function that unwraps Rc<Any> into Box<Any> directly. The closest thing I could find would be:
if Rc::strong_count(&rc) == 1 {
Some(unsafe {
Box::from_raw(Rc::into_raw(rc))
})
} else {
None
}
However, Rc::into_raw() appears to require that the type contained in the Rc be Sized, and I'd ideally like to avoid unsafe blocks.
Is there any way to implement this?
Playground Link, I'm looking for an implementation of rc_to_box here.
Unfortunately, it appears that the API of Rc lacks the necessary method to get ownership of the wrapped type when it is !Sized.
The only method which may return the interior item of an Rc is Rc::try_unwrap; however, it returns Result<T, Rc<T>>, which requires that T be Sized.
In order to do what you wish, you would need to have a method with a signature: Rc<T> -> Result<Box<T>, Rc<T>>, which would allow T to be !Sized, and from there you could extract Box<Any> and perform the downcast call.
However, this method is impossible due to how Rc is implemented. Here is a stripped down version of Rc:
struct RcBox<T: ?Sized> {
strong: Cell<usize>,
weak: Cell<usize>,
value: T,
}
pub struct Rc<T: ?Sized> {
ptr: *mut RcBox<T>,
_marker: PhantomData<T>,
}
Therefore, the only Box you can get out of Rc<T> is Box<RcBox<T>>.
Note that the design is severely constrained here:
single-allocation mandates that all 3 elements be in a single struct
T: ?Sized mandates that T be the last field
so there is little room for improvement in general.
However, in your specific case, it is definitely possible to improve on the generic situation. It does, of course, require unsafe code. And while it works fairly well with Rc, implementing it with Arc would be complicated by the potential data-races.
Oh... and the code is provided as is, no warranty implied ;)
use std::any::Any;
use std::{cell, mem, ptr};
use std::rc::Rc;
struct RcBox<T: ?Sized> {
strong: cell::Cell<usize>,
_weak: cell::Cell<usize>,
value: T,
}
fn concretify<T: Any>(rc: Rc<Any>) -> Option<T> {
// Will be responsible for freeing the memory if there is no other weak
// pointer by the end of this function.
let _guard = Rc::downgrade(&rc);
unsafe {
let killer: &RcBox<Any> = {
let killer: *const RcBox<Any> = mem::transmute(rc);
&*killer
};
if killer.strong.get() != 1 { return None; }
// Do not forget to decrement the count if we do take ownership,
// as otherwise memory will not get released.
let result = killer.value.downcast_ref().map(|r| {
killer.strong.set(0);
ptr::read(r as *const T)
});
// Do not forget to destroy the content of the box if we did not
// take ownership
if result.is_none() {
let _: Rc<Any> = mem::transmute(killer as *const RcBox<Any>);
}
result
}
}
fn main() {
let x: Rc<Any> = Rc::new(1);
println!("{:?}", concretify::<i32>(x));
}
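For the record, running this main prints Some(1).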
I don't think it's possible to implement your concretify function if you're expecting it to move the original value back out of the Rc; see this question for why.
If you're willing to return a clone, it's straightforward:
fn concretify<T: Any+Clone>(rc: Rc<Any>) -> Option<T> {
rc.downcast_ref().map(Clone::clone)
}
Here's a test:
#[derive(Debug,Clone)]
struct Foo(u32);
#[derive(Debug,Clone)]
struct Bar(i32);
fn main() {
let rc_foo: Rc<Any> = Rc::new(Foo(42));
let rc_bar: Rc<Any> = Rc::new(Bar(7));
let foo: Option<Foo> = concretify(rc_foo);
println!("Got back: {:?}", foo);
let bar: Option<Foo> = concretify(rc_bar);
println!("Got back: {:?}", bar);
}
This outputs:
Got back: Some(Foo(42))
Got back: None
Playground
If you want something more "movey", and creating your values is cheap, you could also make a dummy, use downcast_mut() instead of downcast_ref(), and then std::mem::swap with the dummy, as sketched below.
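A minimal sketch of that idea, assuming T: Default supplies a cheap dummy value (concretify_swap is a made-up name):
use std::any::Any;
use std::rc::Rc;

fn concretify_swap<T: Any + Default>(mut rc: Rc<Any>) -> Option<T> {
    // get_mut returns Some only when this Rc is the unique reference.
    let val = Rc::get_mut(&mut rc)?.downcast_mut::<T>()?;
    let mut dummy = T::default();
    // Swap the real value out, leaving the dummy behind in the Rc.
    std::mem::swap(val, &mut dummy);
    Some(dummy)
}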