I am loading data from another language. The numbers can be very large and are serialized as a byte array of u8s, one decimal digit per byte.
These are loaded into Rust as a Vec of u8s:
vec![1, 0, 0]
This represents 100. I also have a u32 representing the scale.
I'm trying to load this into a rust_decimal, but am stuck.
measure_value.value -> a vec of u8
measure_value.scale -> a u32
let r_dec = rust_decimal::Decimal::????
This is the implementation I have so far, but it feels inelegant!
pub fn proto_to_decimal(input: &DecimalValueProto) -> Result<Decimal, String> {
    let mut num: i128 = 0;
    // Casting down from usize to i32 is fallible.
    // NB: `len() - 1` underflows (and panics) if `value` is empty.
    let mut power: i32 = (input.value.len() - 1)
        .try_into()
        .map_err(|_| "Failed to convert proto to decimal")?;
    for digit in input.value.iter() {
        let expansion: i128 = if power == 0 {
            *digit as i128
        } else {
            (*digit as i128) * 10_i128.pow(power as u32)
        };
        num += expansion;
        power -= 1;
    }
    Ok(Decimal::from_i128_with_scale(num, input.scale))
}
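A tighter alternative: assuming each u8 in value is one base-10 digit with the most significant first (as the vec![1, 0, 0] example implies), you can fold the digits Horner-style, which drops the power bookkeeping and sidesteps the len() - 1 underflow on empty input:
pub fn proto_to_decimal(input: &DecimalValueProto) -> Result<Decimal, String> {
    let num = input
        .value
        .iter()
        // Horner's method: shift the accumulator one decimal place,
        // then add the next digit; `checked_*` turns overflow into `None`.
        .try_fold(0_i128, |acc, &digit| {
            acc.checked_mul(10)?.checked_add(digit as i128)
        })
        .ok_or_else(|| "value does not fit in an i128".to_string())?;
    Ok(Decimal::from_i128_with_scale(num, input.scale))
}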
Currently I'm using the following code to return a number as a binary (base 2), octal (base 8), or hexadecimal (base 16) string.
fn convert(inp: u32, out: u32, numb: &String) -> Result<String, String> {
    match isize::from_str_radix(numb, inp) {
        Ok(a) => match out {
            2 => Ok(format!("{:b}", a)),
            8 => Ok(format!("{:o}", a)),
            16 => Ok(format!("{:x}", a)),
            10 => Ok(format!("{}", a)),
            0 | 1 => Err(format!("No base lower than 2!")),
            _ => Err(format!("printing in this base is not supported")),
        },
        Err(e) => Err(format!(
            "Could not convert {} to a number in base {}.\n{:?}\n",
            numb, inp, e
        )),
    }
}
Now I want to replace the inner match statement so I can return the number as a string in an arbitrary base (e.g. base 3). Are there any built-in functions to convert a number into any given radix, similar to JavaScript's Number.toString() method?
For now, you cannot do it using the standard library, but you can:
use my crate radix_fmt
or roll your own implementation:
fn format_radix(mut x: u32, radix: u32) -> String {
    let mut result = vec![];
    loop {
        let m = x % radix;
        x /= radix;
        // Will panic if you use a bad radix (< 2 or > 36).
        result.push(std::char::from_digit(m, radix).unwrap());
        if x == 0 {
            break;
        }
    }
    result.into_iter().rev().collect()
}

fn main() {
    assert_eq!(format_radix(1234, 10), "1234");
    assert_eq!(format_radix(1000, 10), "1000");
    assert_eq!(format_radix(0, 10), "0");
}
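If you go the crate route instead, usage would look roughly like this. This is a sketch from memory of radix_fmt's radix(value, base) helper, which returns a value implementing Display; check the crate docs for the exact signature:
use radix_fmt::radix; // external dependency: radix_fmt

fn main() {
    // 1234 in base 3 is 1200201
    assert_eq!(radix(1234, 3).to_string(), "1200201");
}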
If you wanted to eke out a little more performance, you can create a struct and implement Display or Debug for it. This avoids allocating a String. For maximum over-engineering, you can also have a stack-allocated array instead of the Vec.
Here is Boiethios' answer with these changes applied:
struct Radix {
    x: i32,
    radix: u32,
}

impl Radix {
    fn new(x: i32, radix: u32) -> Result<Self, &'static str> {
        if radix < 2 || radix > 36 {
            Err("Unsupported radix")
        } else {
            Ok(Self { x, radix })
        }
    }
}
use std::fmt;

impl fmt::Display for Radix {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        // Buffer sized for the worst case: binary formatting of `u128`s.
        let mut result = ['\0'; 128];
        let mut used = 0;
        let negative = self.x < 0;
        // `unsigned_abs` avoids overflow when `self.x == i32::MIN`.
        let mut x = self.x.unsigned_abs();
        loop {
            let m = x % self.radix;
            x /= self.radix;
            result[used] = std::char::from_digit(m, self.radix).unwrap();
            used += 1;
            if x == 0 {
                break;
            }
        }
        if negative {
            write!(f, "-")?;
        }
        for c in result[..used].iter().rev() {
            write!(f, "{}", c)?;
        }
        Ok(())
    }
}

fn main() {
    assert_eq!(Radix::new(1234, 10).unwrap().to_string(), "1234");
    assert_eq!(Radix::new(1000, 10).unwrap().to_string(), "1000");
    assert_eq!(Radix::new(0, 10).unwrap().to_string(), "0");
}
This could still be optimized by:
creating an ASCII array instead of a char array
not zero-initializing the array
Since these avenues require unsafe or an external crate like arraybuf, I have not included them. You can see sample code in internal implementation details of the standard library.
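The first bullet can be done safely, though. Here is a sketch (the RadixAscii type and its unsigned field are my own simplification): digits are written as ASCII bytes from the back of a byte buffer, then emitted with a single write_str instead of one write! per char:
use std::fmt;

struct RadixAscii {
    x: u32,
    radix: u32, // assumed to be in 2..=36
}

impl fmt::Display for RadixAscii {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        let mut x = self.x;
        // Still zero-initialized; skipping that is where unsafe would come in.
        let mut buf = [0u8; 128];
        let mut i = buf.len();
        loop {
            i -= 1;
            buf[i] = std::char::from_digit(x % self.radix, self.radix).unwrap() as u8;
            x /= self.radix;
            if x == 0 {
                break;
            }
        }
        // Every digit is ASCII, so the slice is valid UTF-8.
        f.write_str(std::str::from_utf8(&buf[i..]).unwrap())
    }
}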
Here is an extended solution, based on the first comment, that does not restrict the parameter x to u32:
fn format_radix(mut x: u128, radix: u32) -> String {
    let mut result = vec![];
    loop {
        let m = x % radix as u128;
        x /= radix as u128;
        // Will panic if you use a bad radix (< 2 or > 36).
        result.push(std::char::from_digit(m as u32, radix).unwrap());
        if x == 0 {
            break;
        }
    }
    result.into_iter().rev().collect()
}
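For example, this handles values far outside u32's range:
fn main() {
    assert_eq!(format_radix(1234, 10), "1234");
    assert_eq!(format_radix(u128::MAX, 16), "ffffffffffffffffffffffffffffffff");
}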
This is faster than the other answer:
use std::char::from_digit;

fn encode(mut n: u32, r: u32) -> Option<String> {
    let mut s = String::new();
    loop {
        if let Some(c) = from_digit(n % r, r) {
            s.insert(0, c);
        } else {
            return None;
        }
        n /= r;
        if n == 0 {
            break;
        }
    }
    Some(s)
}
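For example:
fn main() {
    assert_eq!(encode(255, 16), Some("ff".to_string()));
    assert_eq!(encode(255, 2), Some("11111111".to_string()));
}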
Note I also tried these, but they were slower:
https://doc.rust-lang.org/std/collections/struct.VecDeque.html#method.push_front
https://doc.rust-lang.org/std/string/struct.String.html#method.push
https://doc.rust-lang.org/std/vec/struct.Vec.html#method.insert
I would like to convert my byte array into a u64.
For example
b"00" should return 0u64
b"0a" should return 10u64
I am working on a blockchain, so I need something efficient.
My current function, for example, is not efficient at all:
let number_string = String::from_utf8_lossy(&my_bytes_array)
    .to_owned()
    .to_string();
let number = u64::from_str_radix(&number_string, 16).unwrap();
I have also tried
let number = u64::from_le_bytes(my_bytes_array);
But I got this error: mismatched types: expected array [u8; 8], found &[u8]
How about this?
pub fn hex_to_u64(x: &[u8]) -> Option<u64> {
    let mut result: u64 = 0;
    for i in x {
        // Shift the accumulated value one hex digit left,
        // then add the digit value of the next ASCII byte.
        result *= 16;
        result += (*i as char).to_digit(16)? as u64;
    }
    Some(result)
}
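Checking it against the examples from the question:
fn main() {
    assert_eq!(hex_to_u64(b"00"), Some(0));
    assert_eq!(hex_to_u64(b"0a"), Some(10));
    assert_eq!(hex_to_u64(b"zz"), None); // non-hex bytes yield None
}
As an aside, u64::from_le_bytes answers a different question: it reinterprets the eight raw bytes rather than parsing ASCII hex digits, and the type error only means a &[u8] must first be converted to [u8; 8], e.g. with try_into().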
I would like to have a macro that splits one byte into a tuple of 2-8 u8 parts, using the bitreader crate.
I managed to achieve that with the following code:
use bitreader::BitReader;

trait Tupleprepend<T> {
    type ResultType;
    fn prepend(self, t: T) -> Self::ResultType;
}

macro_rules! impl_tuple_prepend {
    ( () ) => {};
    ( ( $t0:ident $(, $types:ident)* ) ) => {
        impl<$t0, $($types,)* T> Tupleprepend<T> for ($t0, $($types,)*) {
            type ResultType = (T, $t0, $($types,)*);
            fn prepend(self, t: T) -> Self::ResultType {
                let ($t0, $($types,)*) = self;
                (t, $t0, $($types,)*)
            }
        }
        impl_tuple_prepend! { ($($types),*) }
    };
}

impl_tuple_prepend! {
    (_1, _2, _3, _4, _5, _6, _7, _8)
}

macro_rules! split_byte (
    ($reader:ident, $bytes:expr, $count:expr) => {{
        ($reader.read_u8($count).unwrap(),)
    }};
    ($reader:ident, $bytes:expr, $count:expr, $($next_counts:expr),+) => {{
        let head = split_byte!($reader, $bytes, $count);
        let tail = split_byte!($reader, $bytes, $($next_counts),+);
        tail.prepend(head.0)
    }};
    ($bytes:expr $(, $count:expr)* ) => {{
        let mut reader = BitReader::new($bytes);
        split_byte!(reader, $bytes $(, $count)+)
    }};
);
Now I can use this code as intended:
let buf: &[u8] = &[0x72];
let (bit1, bit2, bits3to8) = split_byte!(&buf, 1, 1, 6);
Is there a way to avoid the Tupleprepend trait and create only 1 tuple instead of 8 in the worst case?
Because the number of bit widths directly corresponds to the number of returned values, I'd solve the problem using generics and arrays instead. The macro only exists to remove the typing of the [], which I don't really think is worth it.
fn split_byte<A>(b: u8, bit_widths: A) -> A
where
    A: Default + std::ops::IndexMut<usize, Output = u8>,
    for<'a> &'a A: IntoIterator<Item = &'a u8>,
{
    let mut result = A::default();
    let mut start = 0;
    for (idx, &width) in bit_widths.into_iter().enumerate() {
        // Shift the requested bits down to the least-significant end,
        // then mask off exactly `width` bits.
        let shifted = b >> (8 - width - start);
        let mask = (0..width).fold(0, |a, _| (a << 1) | 1);
        result[idx] = shifted & mask;
        start += width;
    }
    result
}

macro_rules! split_byte {
    ($b:expr, $($w:expr),+) => (split_byte($b, [$($w),+]));
}

fn main() {
    let [bit1, bit2, bits3_to_8] = split_byte!(0b1010_1010, 1, 1, 6);
    assert_eq!(bit1, 0b1);
    assert_eq!(bit2, 0b0);
    assert_eq!(bits3_to_8, 0b10_1010);
}
See also:
How does for<> syntax differ from a regular lifetime bound?
How to write a trait bound for adding two references of a generic type?
How do I write the lifetimes for references in a type constraint when one of them is a local reference?
If it's ok to target nightly Rust, I'd use the unstable min_const_generics feature:
#![feature(min_const_generics)]

fn split_byte<const N: usize>(b: u8, bit_widths: [u8; N]) -> [u8; N] {
    let mut result = [0; N];
    let mut start = 0;
    for (idx, &width) in bit_widths.iter().enumerate() {
        let shifted = b >> (8 - width - start);
        let mask = (0..width).fold(0, |a, _| (a << 1) | 1);
        result[idx] = shifted & mask;
        start += width;
    }
    result
}

macro_rules! split_byte {
    ($b:expr, $($w:expr),+) => (split_byte($b, [$($w),+]));
}

fn main() {
    let [bit1, bit2, bits3_to_8] = split_byte!(0b1010_1010, 1, 1, 6);
    assert_eq!(bit1, 0b1);
    assert_eq!(bit2, 0b0);
    assert_eq!(bits3_to_8, 0b10_1010);
}
See also:
Is it possible to control the size of an array using the type parameter of a generic?
My Rust code runs in an environment where I have no access to std::string and std::* (but I do have access to core::str). How can I convert a Vec<char> to u32 without going through String, such as:
let num_in_chars: Vec<char> = vec!['1', '2'];
// some process here
// let num = ...
// This is how I could do it if I have access to `String`
// let num = num_in_chars.iter().collect::<String>().parse::<u32>().unwrap();
assert_eq!(12, num);
Thanks
Convert each char to a digit (in the map), then for each digit multiply the previous result by 10 and add the new digit:
/// Returns `None` in case of invalid digit.
pub fn vec_to_int(digits: impl IntoIterator<Item = char>) -> Option<u32> {
    const RADIX: u32 = 10;
    digits
        .into_iter()
        .map(|c| c.to_digit(RADIX))
        .try_fold(0, |ans, i| i.map(|i| ans * RADIX + i))
}

#[test]
fn it_works() {
    let nums = vec!['1', '2'];
    let num = vec_to_int(nums);
    assert_eq!(Some(12), num);
}

#[test]
fn invalid_digit() {
    let nums = vec!['1', 'a'];
    let num = vec_to_int(nums);
    assert_eq!(None, num);
}
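Note that nothing here needs std: to_digit, IntoIterator, and try_fold all live in core. The same call also works on a plain array instead of a Vec (arrays implement IntoIterator by value since Rust 1.53):
let num = vec_to_int(['1', '2']);
assert_eq!(Some(12), num);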