How can a float value be converted to a String? For whatever reason, the documentation and all online sources I can find are only concerned with the other way around.
let value: f32 = 17.65;
let value_as_str: String = .....
Sometimes, the answer is easy: to_string().
let pi = 3.1415926;
let s = pi.to_string(); // : String
Background
The foundation for "creating a readable string representation of something" is in the fmt module. Probably the most important trait in this module is Display. Display is an abstraction over types that can be formatted as a user-facing string (pretty much exactly what you want). Usually the Display trait is used by println!() and friends. So you can already convert your float to string with the format!() macro:
let s = format!("{}", pi);
But there is something else: the ToString trait. This trait talks about types that can be converted to a String. And now, there is a magic implementation:
impl<T> ToString for T
where
    T: Display + ?Sized,
This means: every type which implements Display also automatically implements ToString! So instead of writing format!("{}", your_value) you can simply write your_value.to_string()!
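For example, implementing Display on one of your own types is enough to get to_string() for free. (A minimal sketch; Celsius is a made-up type for illustration, not from any crate.)
use std::fmt;

struct Celsius(f32);

impl fmt::Display for Celsius {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}°C", self.0)
    }
}

// The blanket impl above now provides `to_string()` automatically:
let s = Celsius(21.5).to_string(); // "21.5°C"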
While these wildcard implementations are extremely useful and versatile, they have one disadvantage: finding methods is much harder. As you point out, the documentation of f32 doesn't mention to_string() at all. This is not very good, but it is a known issue. We're trying to improve this situation!
Advanced formatting
The to_string() method uses the default formatting options, so it's equivalent to format!("{}", my_value). But sometimes, you want to tweak how the value is converted into a string. To do that, you have to use format!() and the full power of the fmt format specifier. You can read about those in the module documentation. One example:
let s = format!("{:.2}", pi);
This will result in a string with exactly two digits after the decimal point ("3.14").
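Precision combines with the other formatting options as well; for example, adding a width and alignment (a small extra sketch, output shown in the comment):
let s = format!("{:>8.3}", pi); // "   3.142" (width 8, right-aligned, three decimals)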
If you want to convert your float into a string using scientific notation, you can use the {:e} (or {:E}) format specifier which corresponds to the LowerExp (or UpperExp) trait.
let s = format!("{:e}", pi * 1_000_000.0);
This will result in "3.1415926e6".
Problem
I want to write a single function that allows me to convert from type A (in this case, u8) to type B (a custom type), and then back from type B to type A. According to the first paragraph of the Rust by Example entry on the From and Into traits:
The From and Into traits are inherently linked, and this is actually part of its implementation. If you are able to convert type A from type B, then it should be easy to believe that we should be able to convert type B to type A.
However, implementing From (i.e. impl From<A> for B) only allowed me to convert from A to B, albeit in two different ways (still not sure why two ways are necessary, but anyway). Can I convert from B to A using the same implementation? Or is there no way to use the information already there?
What I have tried
I tried implementing From (or TryFrom in this case) on my type NormalMneumonic like so:
impl TryFrom<&str> for NormalMneumonic {
    type Error = &'static str;

    fn try_from(value: &str) -> Result<Self, Self::Error> {
        match value {
            "JP" => Ok(Self::Jump),
            // --snip--
            _ => Err("Input is not a normal mneumonic."),
        }
    }
}
With that I'm able to do
let mneumonic_1 /*: Result<NormalMneumonic, _>*/ = NormalMneumonic::try_from("JP");
let mneumonic_2: Result<NormalMneumonic, _> = "JP".try_into();
assert_eq!(mneumonic_1, mneumonic_2);
but I haven't found a way to convert, in this case, from NormalMneumonic back into &str.
I'm looking for something like
let mneumonic_string = NormalMneumonic::Jump.try_into(); // or perhaps &str::try_from(NormalMneumonic::Jump)
Some context
I'm trying to write an assembler and a linker for a simplified assembly language using Rust. One of the data types I have defined to help with that is NormalMneumonic, which is just an enum with a variant for each valid mneumonic.
On writing the assembler, I'll need to read some text file and write some binary, and on linking I'll need to read back some binary files and write a different binary file. With that in mind, I was looking for a way to convert back and forth between a string slice (&str) or a byte (u8) and a NormalMneumonic variant.
From the quote I mentioned, I thought converting back and forth between types was the use case for the From trait, but it seems the book in this case just uses misleading language.
No, a From<A> for B implementation will not create a From<B> for A implementation.
The From trait consists of only a single method, fn from(a: A) -> B. With just this signature, would you be able to create the reverse implementation for all A and B? Of course not! And the compiler will not look at the existing implementation's body to try to deduce the other. For one thing, many conversions are lossy or fallible, and may have hurdles in one direction that don't exist in the other. So even if the compiler did look at the existing implementation, it's not practical or even possible in general.
From the quote I mentioned, I thought converting back and forth between types was the use case for the From trait, but it seems the book in this case just uses misleading language.
Indeed, you've misinterpreted the quote. It is essentially saying the same thing twice, but in a different context: "convert type A from type B" is the same operation as "convert type B to type A", both are B -> A, just the subject of the phrasing has changed. And this reflects the only difference between From and Into. The syntaxes A::from(b) and b.into() (with inferred A) cannot be done with a single trait.
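To make that concrete, here is a minimal sketch (Wrapper is an illustrative type, not from the question): implementing From in one direction provides both syntaxes for that direction, and nothing for the reverse.
struct Wrapper(u8);

impl From<u8> for Wrapper {
    fn from(value: u8) -> Self {
        Wrapper(value)
    }
}

let a = Wrapper::from(7); // "convert type Wrapper from type u8"
let b: Wrapper = 7u8.into(); // "convert type u8 to type Wrapper"
// Neither line yields `From<Wrapper> for u8`; that requires its own impl.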
If you're looking to make your life easier when dealing with enums, as already mentioned, the strum crate has many derive macros designed to:
convert to string: IntoStaticStr and/or ToString
convert from a string: EnumString
convert from u8: FromRepr
(converting to u8 can be done with just as u8 if #[repr(u8)] is added)
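As a sketch of that last point (the discriminant value 0x00 here is illustrative):
#[repr(u8)]
enum NormalMneumonic {
    Jump = 0x00,
    // --snip--
}

let byte = NormalMneumonic::Jump as u8; // 0x00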
See these existing answers for other options:
How do I get an enum as a string?
Can I convert a string to enum without macros in Rust?
How do I match enum values with an integer?
Maybe strum_macros can help.
It can generate code to convert an enum value to a &str:
use strum_macros::IntoStaticStr;

#[derive(IntoStaticStr)]
enum NormalMneumonic {
    Jump,
    // --snip--
}

let s: &'static str = NormalMneumonic::Jump.into(); // "Jump"
let hex = "100000000000000000".as_bytes().to_hex();
// hex == "313030303030303030303030303030303030"
println!("{:x}", 100000000000000000000000u64);
// literal out of range for u64
How can I get that value?
In Python, I would just call hex(100000000000000000000000) and I get '0x152d02c7e14af6800000'.
to_hex() comes from the hex crate.
One needs to be aware of the range of representable values for different numeric types in Rust. In this particular case, the value exceeds the range of a u64, but fits in a u128. The following code outputs the same result as the Python example:
fn main() {
    let my_string = "100000000000000000000000".to_string(); // `parse()` works with `&str` and `String`!
    let my_int = my_string.parse::<u128>().unwrap();
    let my_hex = format!("{:X}", my_int);
    println!("{}", my_hex);
}
Checked with the Rust Playground:
152D02C7E14AF6800000
In the general case, explicit arbitrary-precision arithmetic is required. A few suggestions from What's the best crate for arbitrary precision arithmetic in Rust? on Reddit:
num_bigint works on stable and does not have unsafe code.
ramp uses unsafe and does not work on stable Rust, but it is faster.
rust-gmp and rug bind to the state-of-the-art bigint implementation in C (GMP). They are the fastest and have the most features. You probably want to use one of those.
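For instance, with num_bigint the original value can be parsed and printed in hex directly (a brief sketch; it assumes BigUint's FromStr and LowerHex implementations, which current versions provide):
use num_bigint::BigUint;

fn main() {
    let n: BigUint = "100000000000000000000000".parse().unwrap();
    println!("{:x}", n); // 152d02c7e14af6800000
}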
Rust has FromStr; however, as far as I can see, it only takes Unicode text input. Is there an equivalent for [u8] arrays?
By "parse" I mean take ASCII characters and return an integer, like C's atoi does.
Or do I need to either...
Convert the u8 array to a string first, then call FromStr.
Call out to libc's atoi.
Write an atoi in Rust.
In nearly all cases the first option is reasonable; however, there are cases where files may be very large, with no predefined encoding... or contain mixed binary and text, where it's most straightforward to read integer numbers as bytes.
No, the standard library has no such feature, but it doesn't need one.
As stated in the comments, the raw bytes can be converted to a &str via:
str::from_utf8
str::from_utf8_unchecked
Neither of these performs an extra allocation. The first one ensures the bytes are valid UTF-8, the second does not. Use the checked form until profiling proves that it's a bottleneck, then switch to the unchecked form once it's proven safe to do so.
If bytes deeper in the data need to be parsed, a slice of the raw bytes can be obtained before conversion:
use std::str;

fn main() {
    let raw_data = b"123132";
    let the_bytes = &raw_data[1..4];
    let the_string = str::from_utf8(the_bytes).expect("not UTF-8");
    let the_number: u64 = the_string.parse().expect("not a number");
    assert_eq!(the_number, 231);
}
As with other code, these lines can be extracted into a function or a trait to allow for reuse. However, once that path is followed, it would be a good idea to look into one of the many great crates aimed at parsing. This is especially true if there's a need to parse binary data in addition to textual data.
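For instance, the slice-validate-parse steps might collapse into a small helper like this (parse_u64 is a hypothetical name, not a standard function):
use std::str;

fn parse_u64(bytes: &[u8]) -> Option<u64> {
    // Validate UTF-8, then parse; `None` on invalid UTF-8 or a malformed number.
    str::from_utf8(bytes).ok()?.parse().ok()
}

fn main() {
    assert_eq!(parse_u64(b"231"), Some(231));
    assert_eq!(parse_u64(b"abc"), None);
}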
I do not know of any way in the standard library, but maybe the atoi crate works for you? Full disclosure: I am its author.
use atoi::atoi;
let (number, digits) = atoi::<u32>(b"42 is the answer"); // returns (42, 2)
You can check if the second element of the tuple is a zero to see if the slice starts with a digit.
let (number, digits) = atoi::<u32>(b"x"); // returns (0, 0)
let (number, digits) = atoi::<u32>(b"0"); // returns (0, 1)
I'm working on a library which will provide a trait for axis-aligned bounding boxes (AABB) operations. The trait is declared like this:
trait Aabb {
    type Precision: Zero + One + Num + PartialOrd + Copy;
    // [...]
}
I don't care which precision the user chooses, as long as these constraints are respected (though I don't really expect integer types to be chosen).
I'm having trouble using literals. Some operations require constant values, as an example:
let extension = 0.1;
aabb.extend(extension);
This doesn't work because Aabb::extend expects Aabb::Precision and not a float. My solution was something like this:
let mut ten = Aabb::Precision::zero();
for _ in 0..10 {
    ten = ten + Aabb::Precision::one();
}
let aabb_extension = Aabb::Precision::one() / ten;
This works, but I need to resort to this every time I need a specific number and it is getting cumbersome. Is this really the only way?
I need to resort to this every time I need a specific number and it is getting cumbersome. Is this really the only way?
Basically, yes. Unless you can answer the question of "how do you support converting a literal 0 to MyCustomTypeThatImplementsTheTrait".
You can't have it both ways — you can't ask for something to be generic and then use concrete literals.
You can have different workarounds. Providing base values like "zero" and "one", or having a "convert a specific type to yourself" method, for example.
You could also re-evaluate what you are attempting to do; perhaps you are thinking at too low a level. Indeed, what does it mean to "extend by 0.1" a type that represents points as floating point values between 0 and 1?
Maybe it would be better to have an expand_by_percentage method instead, or something else that makes sense in the domain.
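As a sketch of the "convert a specific type to yourself" workaround mentioned above, the NumCast trait from the num-traits crate can express it (a suggestion on my part, not something the original Aabb trait already requires):
use num_traits::NumCast;

fn tenth<P: NumCast>() -> P {
    // `NumCast::from` returns `None` when the value is not representable.
    P::from(0.1).expect("0.1 is not representable in this Precision type")
}

fn main() {
    let e: f32 = tenth();
    assert_eq!(e, 0.1);
}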
See also:
How do I use integer number literals when using generic types?
Cannot create a generic function that uses a literal zero
Dividing a const by a generic in Rust
How can I create an is_prime function that is generic over various integer types?
In this case, I would recommend that you create your own trait and provide default implementations of the methods.
For example, I would naively imagine:
trait ApproximateValue: Zero + One {
    fn approximate(val: f64) -> Self {
        // some algorithm to create `val` from Zero and One
    }
}
then, your Precision associated type will have a bound of ApproximateValue and you will just call Precision::approximate(0.1).
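A version of that sketch that actually compiles might look like the following (from_ratio is a hypothetical helper; Zero and One come from the num-traits crate):
use num_traits::{One, Zero};
use std::ops::Div;

trait ApproximateValue: Zero + One + Div<Output = Self> {
    // Build `num / den` using nothing but `zero()`, `one()` and division.
    fn from_ratio(num: u32, den: u32) -> Self {
        fn count<T: Zero + One>(n: u32) -> T {
            let mut acc = T::zero();
            for _ in 0..n {
                acc = acc + T::one();
            }
            acc
        }
        count::<Self>(num) / count::<Self>(den)
    }
}

// Blanket impl: any type with the required operations qualifies.
impl<T: Zero + One + Div<Output = T>> ApproximateValue for T {}

fn main() {
    let x: f64 = ApproximateValue::from_ratio(1, 10);
    assert!((x - 0.1).abs() < 1e-12);
}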
In the Rust programming language, I am trying to convert an integer into its string representation, so I write something like:
use std::int::to_str_bytes;
...
to_str_bytes(x, 10);
...but it says that I have to specify a third argument. The documentation is here: http://static.rust-lang.org/doc/master/std/int/fn.to_str_bytes.html, but I am not clever enough to understand what it expects as the third argument.
Using x.to_str() as in Njol's answer is the straightforward way to get a string representation of an integer. However, x.to_str() returns an owned (and therefore heap-allocated) string (~str). As long as you don't need to store the resulting string permanently, you can avoid the expense of an extra heap allocation by allocating the string representation on the stack. This is exactly the point of the std::int::to_str_bytes function - to provide a temporary string representation of a number.
The third argument, of type f: |v: &[u8]| -> U, is a closure that takes a byte slice (I don't think Rust has stack-allocated strings). You use it like this:
let mut f = std::io::stdout();
let result = std::int::to_str_bytes(100, 16, |v| {
    f.write(v);
    Some(())
});
to_str_bytes returns whatever the closure does, in this case Some(()).
int seems to implement ToStr: http://static.rust-lang.org/doc/master/std/to_str/trait.ToStr.html
so you should be able to simply use x.to_str() or to_str(x).