In the case of Brazil, the thousands separator is '.' and the decimal separator is ','.
Is there a more efficient way to do this using just the standard Rust library?
I'm currently using the following functions:
thousands_separator:
fn thousands_separator(value: f64, decimal: usize) -> String {
    let abs_value = value.abs(); // absolute value
    let round = format!("{:0.decimal$}", abs_value);
    let integer: String = round[..(round.len() - decimal - 1)].to_string();
    let fraction: String = round[(round.len() - decimal)..].to_string();
    //println!("round: {}", round);
    //println!("integer: {}", integer);
    //println!("fraction: {}", fraction);
    let size = 3;
    let thousands_sep = '.';
    let decimal_sep = ",";
    let integer_splitted = integer
        .chars()
        .enumerate()
        .flat_map(|(i, c)| {
            if (integer.len() - i) % size == 0 && i > 0 {
                Some(thousands_sep)
            } else {
                None
            }
            .into_iter()
            .chain(std::iter::once(c))
        })
        .collect::<String>();
    if value.is_sign_negative() {
        "-".to_string() + &integer_splitted + decimal_sep + &fraction
    } else {
        integer_splitted + decimal_sep + &fraction
    }
}
thousands_separator_alternative:
fn thousands_separator_alternative(value: f64, decimal: usize) -> String {
    let abs_value = value.abs(); // absolute value
    let round = format!("{:0.decimal$}", abs_value);
    let integer: String = round[..(round.len() - decimal - 1)].to_string();
    let fraction: String = round[(round.len() - decimal)..].to_string();
    println!("round: {}", round);
    println!("integer: {}", integer);
    println!("fraction: {}", fraction);
    let size = 3;
    let thousands_sep = '.';
    let decimal_sep = ",";
    // Get chars from string.
    let chars: Vec<char> = integer.chars().collect();
    // Allocate new string.
    let mut integer_splitted = String::new();
    // Add characters and thousands_sep in sequence.
    let mut i = 0;
    loop {
        let j = integer.len() - i;
        if j % size == 0 && i > 0 {
            integer_splitted.push(thousands_sep);
        }
        integer_splitted.push(chars[i]);
        //println!("i: {} ; j: {} ; integer_splitted: {}", i, j, integer_splitted);
        if i == integer.len() - 1 {
            break;
        }
        i += 1;
    }
    if value.is_sign_negative() {
        "-".to_string() + &integer_splitted + decimal_sep + &fraction
    } else {
        integer_splitted + decimal_sep + &fraction
    }
}
The end result is (https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=393859bc0628fcf61eb57dad57a0b945):
number: 67999.9999
A: formatted number: 68.000,00
round: 68000.00
integer: 68000
fraction: 00
B: formatted number: 68.000,00
number: 56345722178.365
A: formatted number: 56.345.722.178,36
round: 56345722178.36
integer: 56345722178
fraction: 36
B: formatted number: 56.345.722.178,36
number: -2987954368.996177
A: formatted number: -2.987.954.369,00
round: 2987954369.00
integer: 2987954369
fraction: 00
B: formatted number: -2.987.954.369,00
number: 0.999
A: formatted number: 1,00
round: 1.00
integer: 1
fraction: 00
B: formatted number: 1,00
number: -4321.99999
A: formatted number: -4.322,00
round: 4322.00
integer: 4322
fraction: 00
B: formatted number: -4.322,00
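For comparison, here is a slightly more compact sketch of the same right-to-left grouping using only the standard library. This is my own illustration, not a definitive answer; the name format_brl and the decimals parameter are placeholders, and it assumes decimals > 0 so the rounded string always contains a '.':
// A sketch of the same grouping approach, std only (names are placeholders).
fn format_brl(value: f64, decimals: usize) -> String {
    let rounded = format!("{:.decimals$}", value.abs());
    // Split off the fractional digits; assumes decimals > 0, so a '.' is present.
    let (int_part, frac_part) = rounded.split_once('.').unwrap_or((rounded.as_str(), ""));
    let mut grouped = String::with_capacity(int_part.len() + int_part.len() / 3);
    for (i, c) in int_part.chars().enumerate() {
        if i > 0 && (int_part.len() - i) % 3 == 0 {
            grouped.push('.'); // thousands separator
        }
        grouped.push(c);
    }
    let sign = if value.is_sign_negative() { "-" } else { "" };
    format!("{sign}{grouped},{frac_part}")
}

fn main() {
    assert_eq!(format_brl(56345722178.365, 2), "56.345.722.178,36");
    assert_eq!(format_brl(-4321.99999, 2), "-4.322,00");
}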
Related
I want to invert a 12-bit-long string representation of a binary number and convert it to decimal.
let a_str = "110011110101";
let b_str = invert(a_str); // value should be "001100001010"
println!("{}", isize::from_str_radix(a_str, 2).unwrap()); // decimal is 3317
println!("{}", isize::from_str_radix(b_str, 2).unwrap()); // decimal should be 778
You can use the bitwise not operator ! and mask out the unwanted bits:
let a = isize::from_str_radix(a_str, 2).unwrap();
println!("{a}");
println!("{}", !a & (1isize << a_str.len()).wrapping_sub(1));
Playground.
Or you can manipulate the string, but this is much more expensive (allocation and copy):
let b_str = a_str
    .chars()
    .map(|c| if c == '0' { '1' } else { '0' })
    .collect::<String>();
println!("{}", isize::from_str_radix(a_str, 2).unwrap());
println!("{}", isize::from_str_radix(&b_str, 2).unwrap());
Playground.
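For completeness, the two snippets above can be combined into a self-contained program (my own assembly of the answer's code, not part of the original):
fn main() {
    let a_str = "110011110101";

    // Bitwise approach: invert and mask back down to the original 12 bits.
    let a = isize::from_str_radix(a_str, 2).unwrap();
    let b = !a & (1isize << a_str.len()).wrapping_sub(1);
    println!("{a}"); // 3317
    println!("{b}"); // 778

    // String approach: flip each character, then parse.
    let b_str: String = a_str
        .chars()
        .map(|c| if c == '0' { '1' } else { '0' })
        .collect();
    assert_eq!(isize::from_str_radix(&b_str, 2).unwrap(), b);
}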
I'm attempting to write an addition algorithm for two floating-point numbers in Rust. I have nearly got it to work, but there are a few cases where the final mantissa is one off from what it should be. (I'm not yet dealing with subnormal numbers.) My algorithm is:
fn add_f32(a: f32, b: f32) -> f32 {
    let a_bits = a.to_bits();
    let b_bits = b.to_bits();
    let a_exp = (a_bits << 1) >> (23 + 1);
    let b_exp = (b_bits << 1) >> (23 + 1);
    let mut a_mant = a_bits & 0x007fffff;
    let mut b_mant = b_bits & 0x007fffff;
    let mut a_exp = (a_exp as i32).wrapping_sub(127);
    let mut b_exp = (b_exp as i32).wrapping_sub(127);
    if b_exp > a_exp {
        // If b has a larger exponent than a, swap a and b so that a has the larger exponent
        core::mem::swap(&mut a_mant, &mut b_mant);
        core::mem::swap(&mut a_exp, &mut b_exp);
    }
    let exp_diff = (a_exp - b_exp) as u32;
    // Add the implicit leading 1 bit to the mantissas
    a_mant |= 1 << 23;
    b_mant |= 1 << 23;
    // Append an extra bit to the mantissas to ensure correct rounding
    a_mant <<= 1;
    b_mant <<= 1;
    // If the shift causes an overflow, the b_mant is too small so is set to 0
    b_mant = b_mant.checked_shr(exp_diff).unwrap_or(0);
    let mut mant = a_mant + b_mant;
    let overflow = (mant >> 25) != 0;
    if !overflow {
        // Check to see if we round up
        if mant & 1 == 1 {
            mant += 1;
        }
    }
    // Check for overflow caused by rounding up
    let overflow = overflow || (mant >> 25) != 0;
    mant >>= 1;
    if overflow {
        if mant & 1 == 1 {
            mant += 1;
        }
        // Check to see if we round up
        mant >>= 1;
        a_exp += 1;
    }
    // Remove implicit leading one
    mant <<= 9;
    mant >>= 9;
    f32::from_bits(mant | ((a_exp.wrapping_add(127) as u32) << 23))
}
For example, the test
#[test]
fn test_add_small() {
    let a = f32::MIN_POSITIVE;
    let b = f32::from_bits(f32::MIN_POSITIVE.to_bits() + 1);
    let c = add_f32(a, b);
    let d = a + b;
    assert_eq!(c, d);
}
fails, with the actual answer being 00000001000000000000000000000000 (binary representation) and my answer being 00000001000000000000000000000001.
Is anyone able to help me with what is wrong with my code?
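Not part of the original post, but a small check of what the hardware does for this exact input may help narrow things down: the exact sum lies exactly halfway between two representable values, and IEEE 754 breaks such ties by rounding to the even mantissa rather than always rounding up:
fn main() {
    let a = f32::MIN_POSITIVE;                                // 1.0 * 2^-126
    let b = f32::from_bits(f32::MIN_POSITIVE.to_bits() + 1);  // (1 + 2^-23) * 2^-126
    // Exact sum: (1 + 2^-24) * 2^-125 -- the trailing 2^-24 bit is exactly half an ulp.
    let sum = a + b;
    println!("{:032b}", sum.to_bits()); // 00000001000000000000000000000000
    assert_eq!(sum.to_bits() & 0x007f_ffff, 0); // the tie rounds to the even mantissa
}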
I'm writing a method that receives an instance of bytes::Bytes representing a Type/Length/Value data structure, where byte 0 is the type, the next 4 bytes are the length, and the remaining bytes are the value. I implemented a unit test that is behaving in a very unexpected way.
Given the method:
fn split_into_packets(packet: &Bytes) -> Vec<Bytes> {
    let mut packets: Vec<Bytes> = Vec::new();
    let mut begin: usize = 1;
    while begin < packet.len() {
        let slice = &packet[1..5];
        print!("0: {} ", slice[0]);
        print!("1: {} ", slice[1]);
        print!("2: {} ", slice[2]);
        println!("3: {}", slice[3]);
        let size = u32::from_be_bytes(pop(slice));
        println!("{}", size);
    }
    return packets;
}
And the test:
let mut bytes = BytesMut::with_capacity(330);
bytes.extend_from_slice(b"R\x52\x00\x00\x00\x08\x00");
let packets = split_into_packets(&bytes.freeze());
I see the following on my console:
0: 82 1: 0 2: 0 3: 0
I expected it to be:
0: 0 1: 0 2: 0 3: 82
What's going on? What am I missing?
fn split_into_packets(packet: &Bytes) -> Vec<Bytes> { // packet = "R\x52\x00\x00\x00\x08\x00"
    let mut packets: Vec<Bytes> = Vec::new();
    let mut begin: usize = 1;
    while begin < packet.len() {
        let slice = &packet[1..5]; // slice = "\x52\x00\x00\x00"
        print!("0: {} ", slice[0]); // "\x52\x00\x00\x00"
                                        ^^^^
                                        |  |
                                        +--+--- this is slice[0] = 0x52 = 82 (in decimal)
        print!("1: {} ", slice[1]); // "\x52\x00\x00\x00"
                                            ^^^^
                                            |  |
                                            +--+--- this is slice[1] = 0x0 = 0 (in decimal)
        print!("2: {} ", slice[2]); // "\x52\x00\x00\x00"
                                                ^^^^
                                                |  |
                                                +--+--- this is slice[2] = 0x0 = 0 (in decimal)
        println!("3: {}", slice[3]); // "\x52\x00\x00\x00"
                                                     ^^^^
                                                     |  |
                                                     +--+--- this is slice[3] = 0x0 = 0 (in decimal)
        let size = u32::from_be_bytes(pop(slice));
        println!("{}", size);
    }
    return packets;
}
I hope the above explains why you get 82, 0, 0, 0 when printing the bytes one after another.
So, on to the next thing: how do we convert 4 bytes to a u32? There are two possibilities, which differ in how they interpret the bytes:
from_be_bytes: Converts bytes to u32 in big-endian: u32::from_be_bytes([0x12, 0x34, 0x56, 0x78])==0x12345678
from_le_bytes: Converts bytes to u32 in little-endian: u32::from_le_bytes([0x78, 0x56, 0x34, 0x12])==0x12345678
For endianness, you can e.g. consult the respective Wikipedia page.
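A minimal example (mine, not from the answer above) applying both conversions to the slice from the question:
fn main() {
    let bytes = [0x52u8, 0x00, 0x00, 0x00]; // the slice printed above
    // Big-endian: 0x52 is the most significant byte.
    assert_eq!(u32::from_be_bytes(bytes), 0x5200_0000); // 1375731712
    // Little-endian: 0x52 is the least significant byte.
    assert_eq!(u32::from_le_bytes(bytes), 0x0000_0052); // 82
}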
I'm doing some computational mathematics in Rust, and I have some large numbers which I store in an array of 24 values. I have functions that convert them to bytes and back, but the conversion doesn't work correctly for u32 values, whereas it works fine for u64. The code sample can be found below:
fn main() {
    let mut bytes = [0u8; 96]; // since u32 is 4 bytes in my system, 4*24 = 96
    let mut j;
    let mut k: u32;
    let mut num: [u32; 24] = [1335565270, 4203813549, 2020505583, 2839365494, 2315860270, 442833049, 1854500981, 2254414916, 4192631541, 2072826612, 1479410393, 718887683, 1421359821, 733943433, 4073545728, 4141847560, 1761299410, 3068851576, 1582484065, 1882676300, 1565750229, 4185060747, 1883946895, 4146];
    println!("original_num: {:?}", num);
    for i in 0..96 {
        j = i / 4;
        k = (i % 4) as u32;
        bytes[i as usize] = (num[j as usize] >> (4 * k)) as u8;
    }
    println!("num_to_bytes: {:?}", &bytes[..]);
    num = [0u32; 24];
    for i in 0..96 {
        j = i / 4;
        k = (i % 4) as u32;
        num[j as usize] |= (bytes[i as usize] as u32) << (4 * k);
    }
    println!("recovered_num: {:?}", num);
}
Rust playground
The above code does not retrieve the correct number from the byte array. But if I change all u32 to u64, all 4s to 8s, and reduce the size of num from 24 values to 12, it works fine. I assume there is some logical problem in the u32 version. The correctly working u64 version can be found in this Rust playground.
Learning how to create an MCVE is a crucial skill when programming. For example, why do you have an array at all? Why do you reuse variables?
Your original first number is 0x4F9B1BD6, the output first number is 0x000B1BD6.
Comparing the intermediate bytes shows that you have garbage:
let num = 0x4F9B1BD6_u32;
println!("{:08X}", num);
let mut bytes = [0u8; BYTES_PER_U32];
for i in 0..bytes.len() {
    let k = (i % BYTES_PER_U32) as u32;
    bytes[i] = (num >> (4 * k)) as u8;
}
for b in &bytes {
    print!("{:X}", b);
}
println!();
4F9B1BD6
D6BD1BB1
Printing out the values of k:
for i in 0..bytes.len() {
    let k = (i % BYTES_PER_U32) as u32;
    println!("{} / {}", k, 4 * k);
    bytes[i] = (num >> (4 * k)) as u8;
}
Shows that you are trying to shift by multiples of 4 bits:
0 / 0
1 / 4
2 / 8
3 / 12
I'm pretty sure that every common platform today uses 8 bits for a byte, not 4.
This is why magic numbers are bad. If you had used constants for the values, you would have noticed the problem much sooner.
since u32 is 4 bytes in my system
A u32 had better be 4 bytes on every system; that's why it's a u32.
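To illustrate, here is a sketch (mine, not part of the original answer) of the round trip with the shift corrected to 8 bits per byte and the magic numbers replaced by named constants:
const BYTES_PER_U32: usize = 4;
const BITS_PER_BYTE: u32 = 8;

fn main() {
    let num = 0x4F9B_1BD6_u32;
    let mut bytes = [0u8; BYTES_PER_U32];
    for (i, b) in bytes.iter_mut().enumerate() {
        // Shift by whole bytes (8 bits), not nibbles (4 bits).
        *b = (num >> (BITS_PER_BYTE * i as u32)) as u8; // little-endian byte order
    }
    let mut recovered = 0u32;
    for (i, b) in bytes.iter().enumerate() {
        recovered |= (*b as u32) << (BITS_PER_BYTE * i as u32);
    }
    assert_eq!(recovered, num);
}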
Overall, don't reinvent the wheel. Use the byteorder crate or equivalent:
extern crate byteorder;

use byteorder::{BigEndian, ReadBytesExt, WriteBytesExt};

const LENGTH: usize = 24;
const BYTES_PER_U32: usize = 4;

fn main() {
    let num: [u32; LENGTH] = [
        1335565270, 4203813549, 2020505583, 2839365494, 2315860270, 442833049, 1854500981,
        2254414916, 4192631541, 2072826612, 1479410393, 718887683, 1421359821, 733943433,
        4073545728, 4141847560, 1761299410, 3068851576, 1582484065, 1882676300, 1565750229,
        4185060747, 1883946895, 4146,
    ];
    println!("original_num: {:?}", num);
    let mut bytes = [0u8; LENGTH * BYTES_PER_U32];
    {
        let mut bytes = &mut bytes[..];
        for &n in &num {
            bytes.write_u32::<BigEndian>(n).unwrap();
        }
    }
    let mut num = [0u32; LENGTH];
    {
        let mut bytes = &bytes[..];
        for n in &mut num {
            *n = bytes.read_u32::<BigEndian>().unwrap();
        }
    }
    println!("recovered_num: {:?}", num);
}
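On current Rust the same round trip also works without an external crate. This is my own variant using u32::to_be_bytes / u32::from_be_bytes and chunks_exact, not part of the original answer; the array contents are placeholder values:
const LENGTH: usize = 24;
const BYTES_PER_U32: usize = 4;

fn main() {
    let num: [u32; LENGTH] = [0x4F9B_1BD6; LENGTH]; // placeholder values
    let mut bytes = [0u8; LENGTH * BYTES_PER_U32];
    for (chunk, n) in bytes.chunks_exact_mut(BYTES_PER_U32).zip(&num) {
        chunk.copy_from_slice(&n.to_be_bytes());
    }
    let mut recovered = [0u32; LENGTH];
    for (n, chunk) in recovered.iter_mut().zip(bytes.chunks_exact(BYTES_PER_U32)) {
        *n = u32::from_be_bytes(chunk.try_into().unwrap());
    }
    assert_eq!(recovered, num);
}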
I've been trying to generate primes between m and n with the following function:
// The variable sieve is a list of primes between 1 and 32000.
// The primes up to 100 are definitely correct.
fn sieve_primes(sieve: &Vec<usize>, m: &usize, n: &usize) -> Vec<usize> {
    let size: usize = *n - *m + 1;
    let mut list: Vec<usize> = Vec::with_capacity(size);
    for i in *m..(*n + 1) {
        list.push(i);
    }
    for i in sieve {
        for j in (((*m as f32) / (*i as f32)).ceil() as usize)..((((*n as f32) / (*i as f32)).floor() + 1.0) as usize) {
            println!("{} ", j);
            if j != 1 { list[i * j - *m] = 0; }
        }
    }
    let mut primes: Vec<usize> = Vec::new();
    for num in &list {
        if *num >= 2 { primes.push(*num); }
    }
    primes
}
This works for smaller (less than 1000000-ish) values of m and n, but it fails at runtime for numbers around the hundred-millions and billions.
The output for m = 99999999, n = 100000000 is:
33333334
thread '' panicked at 'index out of bounds: the len is 2 but the index is 3'
If you look at the numbers, this doesn't make any sense. First of all, it seems to skip the number 2 in the list of primes. Second, when i = 3 the for statement should simplify to for j in 33333333..33333334, which for some reason starts j at 33333334.
An f32 can only represent all integers up to 24 bits exactly, which corresponds to about 16 million (16777216, to be precise). Above that there are gaps: between 16777216 and 33554432 only even integers can be represented. So in your example 33333333 cannot be represented exactly as an f32, and the computation ends up with 33333334 instead.
You don't need to use float to round the result of an integer division. Using integers directly is both faster and doesn't have precision issues. For non-negative integers you can do the following:
fn main() {
    let a = 12;
    let b = 7;
    println!("rounded down: {}", a / b);
    println!("rounded: {}", (a + b / 2) / b);
    println!("rounded up: {}", (a + b - 1) / b);
}
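Applied to the sieve from the question, the range bounds could be computed like this (my own sketch, not part of the answer; p, m, n stand in for the question's *i, *m, *n):
fn multiples_range(p: usize, m: usize, n: usize) -> (usize, usize) {
    let start = (m + p - 1) / p; // ceil(m / p), no floating point involved
    let end = n / p;             // floor(n / p), inclusive upper bound
    (start, end)
}

fn main() {
    // With m = 99999999 and i = 3, the f32 version started j at 33333334;
    // integer division gives the expected 33333333.
    assert_eq!(multiples_range(3, 99_999_999, 100_000_000), (33_333_333, 33_333_333));
}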
You are casting integers to f32, but f32 is not precise enough. Use f64 instead.
fn main() {
    println!("{}", 33333333.0f32); // prints 33333332
}