I have an f32 value that I'd like to print. Since it's a float, I can represent integer values with this type as well:
let a_float: f32 = 3.0;
let another_float: f32 = 3.14;
// let's pretend this was user input;
// we didn't know what they'd enter, so we used f32 to cover our bases
let an_integer: f32 = 3.0; // an integral value, e.g. the user typed "3"
I'd like to print a value stored as an f32 with a minimum amount of precision, but using as much as necessary to represent the value stored. If my desired minimum precision was one (1), I'd expect the following transformation to be done on the float:
let a_float: f32 = 3.0;        // print 3.0
let another_float: f32 = 3.14; // print 3.14
let an_integer: f32 = 3.0;     // print 3.0
I know that I can set a fixed number of decimal places using std::fmt's precision, but that doesn't seem to give me what I want. Is there a way to achieve this without bringing in additional formatting crates? Pulling in additional crates isn't out of the question; I'm more interested in what I can do out of the box.
Rust already does this by default. Every float is printed with as many digits as are necessary to denote that particular float uniquely.
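For instance, a quick check of the round-trip property on a single value (a minimal sketch):
fn main() {
    let x: f32 = 3.14;
    let s = x.to_string();
    println!("{}", s); // 3.14
    // The default output is the shortest string that parses back to the same float
    assert_eq!(s.parse::<f32>().unwrap(), x);
}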
Here's a program to demonstrate this. It generates 10000 random floats, converts them to strings, and checks how many digits can be deleted from the fractional part without changing the value.
(Caveat: This does not show that there aren't cases where the number could be represented in fewer digits by rounding it in a different direction, which can happen sometimes if I remember correctly. I'm not a float formatting expert.)
use std::collections::HashMap;
use rand::{thread_rng, Rng};

/// Change this to choose the type analyzed
type Float = f32;

fn main() {
    let mut rng = thread_rng();
    let mut digit_histogram = HashMap::new();
    for _ in 0..10000 {
        let x: Float = rng.gen_range(0.0..10.0);
        let string = x.to_string();

        // Break up the string representation
        let before_exponent_pos = string.find('e').unwrap_or(string.len());
        let after_decimal_pos = string.find('.')
            .map(|p| p + 1)
            .unwrap_or(before_exponent_pos);
        let prefix = &string[..after_decimal_pos];
        let mut fractional_digits = &string[after_decimal_pos..before_exponent_pos];
        let suffix = &string[before_exponent_pos..];

        // What happens if we truncate the digits?
        let initial_digits = fractional_digits.len();
        let mut unnecessary_digits = 0;
        while !fractional_digits.is_empty() {
            fractional_digits = &fractional_digits[..fractional_digits.len() - 1];
            let shortened_string = format!("{}{}{}", prefix, fractional_digits, suffix);
            let shortened_x = shortened_string.parse::<Float>().unwrap();
            if shortened_x == x {
                unnecessary_digits += 1;
            } else {
                break;
            }
        }

        *digit_histogram
            .entry((initial_digits, unnecessary_digits))
            .or_insert(0) += 1;
    }

    // Summarize results.
    let mut digit_histogram = digit_histogram.into_iter().collect::<Vec<_>>();
    digit_histogram.sort_by_key(|pair| pair.0);
    for ((initial_digits, unnecessary_digits), occurrences) in digit_histogram {
        println!(
            "{} digits with {} unnecessary × {}",
            initial_digits, unnecessary_digits, occurrences,
        );
    }
}
Runnable on Rust Playground. Results:
2 digits with 0 unnecessary × 1
3 digits with 0 unnecessary × 6
4 digits with 0 unnecessary × 25
5 digits with 0 unnecessary × 401
6 digits with 0 unnecessary × 4061
7 digits with 0 unnecessary × 4931
8 digits with 0 unnecessary × 504
9 digits with 0 unnecessary × 62
10 digits with 0 unnecessary × 8
The program saw a wide variety of numbers of digits, but never any that could be deleted without changing the answer.
Pretty-printing with {:?} got me what I was looking for:
fn main() {
    let a_float: f32 = 3.0;        // print 3.0
    let another_float: f32 = 3.14; // print 3.14
    let an_integer: i32 = 3;       // print 3.0
    println!("{:?}", a_float);
    println!("{:?}", another_float);
    println!("{:?}", an_integer as f32);
}
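Note the difference between the Display ({}) and Debug ({:?}) formatters here: {} drops the trailing .0 on floats with integral values, while {:?} always keeps a decimal point. A minimal check:
fn main() {
    let x: f32 = 3.0;
    println!("{}", x);   // 3
    println!("{:?}", x); // 3.0
}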
Example code:
use num_bigint::BigUint;
use num_traits::identities::One;

fn main() {
    // Example: 10001 (17) => 1110 (14)
    let n = BigUint::from(17u32);
    println!("{}", n);
    // BigUint doesn't support `!`
    //let n = !n;
    let mask = (BigUint::one() << n.bits()) - 1u32;
    let n = n ^ mask;
    println!("{}", n);
}
The above code is doing a binary complement of a BigUint using a bit mask. Questions:
Is there a better way to do the binary complement than with a mask? It seems BigUint doesn't implement the `!` operator (though the mask may be necessary anyway, depending on how `!` would be defined).
If not, is there a better way to generate the mask? (Caching masks helps, but can use lots of memory.)
More context on the problem I'm actually looking at: binary complement sequences.
If you alternate between multiplying a number by 3 and flipping its bits, some interesting sequences arise. Example starting with 3:
0. 3 (11b) => 3*3 = 9 (1001b) => bit complement is 6 (0110b)
1. 6 (110b)
2. 13 (1101b)
3. 24 (11000b)
4. 55 (110111b)
5. 90 (1011010b)
6. 241 (11110001b)
7. 300 (100101100b)
8. 123 (1111011b)
9. 142 (10001110b)
10. 85 (1010101b)
11. 0 (0b)
One question is whether it reaches zero for all starting numbers or not. Some meander around for quite a while before reaching zero (425720 takes 87,037,147,316 iterations to reach 0). Being able to compute this efficiently can help in answering these questions. Mostly I'm learning a bit more rust with this though.
If you are looking for performance, num-bigint probably isn't the best choice. Everything that is really high-performance, though, seems to be GPL licensed.
Either way, here is a solution using the rug library, which directly supports `!` (not), and seems to be really fast:
use rug::{ops::NotAssign, Integer};

fn main() {
    // Example: 10001 (17) => 1110 (14)
    let mut n = Integer::from(17u32);
    println!("{}", n);
    n.not_assign();
    n.keep_bits_mut(n.significant_bits() - 1);
    println!("{}", n);
}
17
14
Note that not_assign also flips the sign (17 becomes -18, since !x == -x - 1 in two's complement). We can remove the sign bit through the keep_bits_mut function, which keeps only the low significant_bits() - 1 bits as an unsigned value; here that keeps the low 4 bits of -18, giving 14.
For example, here is a version of your algorithm:
use rug::{ops::NotAssign, Integer};

fn step(n: &mut Integer) {
    *n *= 3;
    n.not_assign();
    n.keep_bits_mut(n.significant_bits() - 1);
}

fn main() {
    let mut n = Integer::from(3);
    println!("{}", n);
    while n != 0 {
        step(&mut n);
        println!("{}", n);
    }
}
3
6
13
24
55
90
241
300
123
142
85
0
The best solution is probably to just do it yourself. You perform an allocation each time you create a BigUint, which really slows down your program. Since we are not doing complex math, we can simplify most of this to a couple of bitwise operations.
After a little bit of tinkering, here is how I implemented it. For convenience, I used the unstable nightly feature bigint_helper_methods to allow for the carrying_add function. This helped simplify the addition process.
#![feature(bigint_helper_methods)]

#[derive(Debug)]
pub struct BigUintHelper {
    words: Vec<u64>,
}

impl BigUintHelper {
    pub fn mul3_invert(&mut self) {
        let len = self.words.len();

        // Multiply everything by 3 by adding it to itself with a bit shift
        let mut carry = false;
        let mut prev_bit = 0;
        for word in &mut self.words[..len - 1] {
            let previous = *word;

            // Perform the addition operation
            let (next, next_carry) = previous.carrying_add((previous << 1) | prev_bit, carry);

            // Reset carried values for next round
            prev_bit = previous >> (u64::BITS - 1);
            carry = next_carry;

            // Invert the result as we go to avoid needing another pass
            *word = !next;
        }

        // Perform the last word separately since we may need to do the invert differently
        let previous = self.words[len - 1];
        let (next, next_carry) = previous.carrying_add((previous << 1) | prev_bit, carry);

        // Extra word from the combination of the carry bits
        match next_carry as u64 + (previous >> (u64::BITS - 1)) {
            0 => {
                // The carry was 0 so we do the normal process
                self.words[len - 1] = invert_bits(next);
                self.cleanup_end();
            }
            1 => {
                self.words[len - 1] = !next;
                // invert_bits(1) = 0
                self.cleanup_end();
            }
            2 => {
                self.words[len - 1] = !next;
                // invert_bits(2) = 1
                self.words.push(1);
            }
            _ => unreachable!(),
        }
    }

    /// Remove any high order words without any bits
    #[inline(always)]
    fn cleanup_end(&mut self) {
        while let Some(x) = self.words.pop() {
            if x != 0 {
                self.words.push(x);
                break;
            }
        }
    }

    /// Count how many rounds it takes to convert this value to 0.
    pub fn count_rounds(&mut self) -> u64 {
        let mut rounds = 0;
        while !self.words.is_empty() {
            self.mul3_invert();
            rounds += 1;
        }
        rounds
    }
}

impl From<u64> for BigUintHelper {
    fn from(x: u64) -> Self {
        BigUintHelper { words: vec![x] }
    }
}

#[inline(always)]
const fn invert_bits(x: u64) -> u64 {
    match x.leading_zeros() {
        0 => !x,
        y => ((1u64 << (u64::BITS - y)) - 1) ^ x,
    }
}
Rust Playground
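To sanity-check the implementation, here is a small usage example (run on the same nightly toolchain, since the type above uses carrying_add); the expected round count of 11 matches the sequence listed earlier:
fn main() {
    // 3 -> 6 -> 13 -> 24 -> 55 -> 90 -> 241 -> 300 -> 123 -> 142 -> 85 -> 0
    let mut n = BigUintHelper::from(3);
    assert_eq!(n.count_rounds(), 11);
}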
I am trying to get the length (the number of digits when interpreted in decimal) of an int in Rust. I found a way to do it; however, I am looking for a method that comes from the primitive itself. This is what I have:
let num = 90.to_string();
println!("num: {}", num.chars().count());
// num: 2
I am looking at https://docs.rs/digits/0.3.3/digits/struct.Digits.html#method.length. Is this a good candidate? How do I use it? Or are there other crates that do it for me?
A one-liner with less type conversion would be the ideal solution.
You could loop and check how often you can divide the number by 10 before it becomes a single digit.
Or in the other direction (because division is slower than multiplication), check how often you can multiply 10*10*...*10 until you reach the number:
fn length(n: u32, base: u32) -> u32 {
    let mut power = base;
    let mut count = 1;
    while n >= power {
        count += 1;
        if let Some(new_power) = power.checked_mul(base) {
            power = new_power;
        } else {
            break;
        }
    }
    count
}
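A quick sanity check of this function in base 10:
fn main() {
    assert_eq!(length(0, 10), 1);         // 0 still takes one digit to print
    assert_eq!(length(9999, 10), 4);
    assert_eq!(length(u32::MAX, 10), 10); // checked_mul stops the power from overflowing
}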
With nightly Rust (or in the future, when the int_log feature is stabilized), you can use:
#![feature(int_log)]
n.checked_log10().unwrap_or(0) + 1
Here is a one-liner that doesn't require strings or floating point:
println!("num: {}", successors(Some(n), |&n| (n >= 10).then(|| n / 10)).count());
It simply counts the number of times the initial number needs to be divided by 10 in order to reach 0.
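For a self-contained version (note that successors needs to be imported from std::iter):
use std::iter::successors;

fn main() {
    let n: u32 = 90;
    // 90 -> 9, then the chain stops: two elements, so two digits
    let digits = successors(Some(n), |&n| (n >= 10).then(|| n / 10)).count();
    println!("num: {}", digits); // num: 2
}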
EDIT: the first version of this answer used iterate from the (excellent and highly recommended) itertools crate, but @trentcl pointed out that successors from the stdlib does the same. For reference, here is the version using iterate:
println!("num: {}", iterate(n, |&n| n / 10).take_while(|&n| n > 0).count().max(1));
Here's a (barely) one-liner that's faster than doing a string conversion, using std::iter stuff:
let some_int = 9834;
let decimal_places = (0..).take_while(|i| 10u64.pow(*i) <= some_int).count();
The first method below relies on the following change-of-base formula, where a and b are logarithm bases:
log_a(x) = log_b(x) / log_b(a)
log_a(x) = log_2(x) / log_2(a)  // substituting 2 for b
The following function can be applied to finding the number of digits for bases that are a power of 2. This approach is very fast.
fn num_digits_base_pow2(n: u64, b: u32) -> u32 {
    (63 - n.leading_zeros()) / (31 - b.leading_zeros()) + 1
}
The leading zeros are counted for both n (the number we want to represent) and b (the base) to find their floor log2 values; the ratio of those floors, plus one, gives the number of digits in the desired base.
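For instance, counting hexadecimal digits (base 16 = 2^4):
fn main() {
    assert_eq!(num_digits_base_pow2(255, 16), 2); // 0xFF
    assert_eq!(num_digits_base_pow2(256, 16), 3); // 0x100
}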
For a general-purpose approach to finding the number of digits for arbitrary bases, the following mostly suffices, but beware that it undercounts by one when log_b(n) is an exact integer (e.g. n = 1, or n an exact power of the base):
fn num_digits(n: u64, b: u32) -> u32 {
    (n as f64).log(b as f64).ceil() as u32
}
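A quick illustration of both the common case and the caveat mentioned above:
fn main() {
    assert_eq!(num_digits(99, 10), 2); // ceil(log10(99)) = 2, correct
    // Caveat: when log_b(n) is an exact integer, the ceiling doesn't supply
    // the +1 a digit count needs, so n = 1 gives 0 rather than 1.
    assert_eq!(num_digits(1, 10), 0);
}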
If num is signed:
let digits = (num.abs() as f64 + 0.1).log10().ceil() as u32;
A nice property of numbers that is always good to have in mind is that the number of digits required to write a number $x$ in base $n$ is actually $\lceil \log_n(x + 1) \rceil$.
Therefore, one can simply write the following function (notice the cast from u32 to f32, since integers don't have a log function).
fn length(n: u32, base: u32) -> u32 {
    let n = (n + 1) as f32;
    n.log(base as f32).ceil() as u32
}
You can easily adapt it for negative numbers. For floating point numbers this might be a bit (i.e. a lot) more tricky.
To take into account Daniel's comment about the pathological cases introduced by using f32, note that with nightly Rust, integers have a logarithm method. (In my opinion, these are implementation details, and you should focus more on understanding the algorithm than the implementation.):
#![feature(int_log)]

fn length(n: u32, base: u32) -> u32 {
    n.log(base) + 1
}
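The integer log has since been stabilized under the name ilog, so on current stable Rust the same idea can be checked like this (a minimal sketch; note that ilog panics when n is zero):
fn length(n: u32, base: u32) -> u32 {
    n.ilog(base) + 1
}

fn main() {
    assert_eq!(length(999, 10), 3);
    assert_eq!(length(1000, 10), 4); // exact powers are handled correctly
}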
I'm trying to solve my first-ever Project Euler problem, just to have fun with Rust, and got stuck on what seems to be an extremely long compute time.
Problem:
https://projecteuler.net/problem=757
I came up with the code below. It solves the base problem (up to 10^6) in ~245 ms and gets the expected result of 2,851.
use std::time::Instant;

fn factor(num: u64) -> Vec<u64> {
    let mut counter = 1;
    let mut factors = Vec::with_capacity(((num as f64).log(10.0) * 100.0) as _);
    while counter <= (num as f64).sqrt() as _ {
        let div = num / counter;
        let rem = num % counter;
        if rem == 0 {
            factors.push(counter);
            factors.push(div);
        }
        counter += 1;
    }
    factors.shrink_to_fit();
    factors
}
fn main() {
    let now = Instant::now();
    let max = 10u64.pow(6);
    let mut counter = 0;
    'a: for i in 1..max {
        // Optimization: All numbers in the pattern appear to be evenly divisible by 4
        let div4 = i / 4;
        let mod4 = i % 4;
        if mod4 != 0 { continue; }
        // Optimization: And the remainder of that divided by 3 is always 0 or 1
        if div4 % 3 > 1 { continue; }
        let mut factors = factor(i);
        if factors.len() >= 4 {
            // Optimization: The later found factors seem to be the most likely
            // to fit the pattern, so try them first
            factors.reverse();
            let pairs: Vec<_> = factors.chunks(2).collect();
            for paira in pairs.iter() {
                for pairb in pairs.iter() {
                    if pairb[0] + pairb[1] == paira[0] + paira[1] + 1 {
                        counter += 1;
                        continue 'a;
                    }
                }
            }
        }
    }
    println!("{}, {} ms", counter, now.elapsed().as_millis());
}
It looks like my code is spending most of its time on factoring, and in my search for a more efficient factoring algorithm than what I was able to come up with on my own, I couldn't find any existing Rust code (the code I did find was actually slower). But I ran a simulation to estimate how long it would take even with a perfect factoring algorithm: the non-factoring portions of this code alone would take 13 days to find all numbers up to 10^14. Probably not what the creator of this problem intends.
Given I'm relatively new to programming, is there some concept or programming method that I'm not aware of (like, say, using a hashmap for fast lookups) that can be used in this situation? Or is the solution going to involve spotting patterns in the numbers and making optimizations like the ones I have found so far?
If Vec::push is called when the vector is at its capacity, it will re-allocate its internal buffer (roughly doubling its size) and copy all its elements to the new allocation.
Vec::new() creates a vector with no space allocated, so it will keep doing this re-allocation as the vector grows.
You can use Vec::with_capacity((num/2) as usize) to avoid this and just allocate the max you might need.
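A small demonstration of the difference (a sketch; as_ptr lets us observe whether the buffer moved):
fn main() {
    let mut v: Vec<u64> = Vec::with_capacity(16);
    let before = v.as_ptr();
    for i in 0..16 {
        v.push(i);
    }
    // No reallocation happened because we never exceeded the reserved capacity
    assert_eq!(before, v.as_ptr());
}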
I am trying to find the sum of the digits of a given number. For example, 134 will give 8.
My plan is to convert the number into a string using .to_string() and then use .chars() to iterate over the digits as characters. Then I want to convert every char in the iteration into an integer and add it to a variable. I want to get the final value of this variable.
I tried using the code below to convert a char into an integer:
fn main() {
    let x = "123";
    for y in x.chars() {
        let z = y.parse::<i32>().unwrap();
        println!("{}", z + 1);
    }
}
(Playground)
But it results in this error:
error[E0599]: no method named `parse` found for type `char` in the current scope
--> src/main.rs:4:19
|
4 | let z = y.parse::<i32>().unwrap();
| ^^^^^
This code does exactly what I want, but first I have to convert each char into a string and then into an integer before incrementing sum by z.
fn main() {
    let mut sum = 0;
    let x = 123;
    let x = x.to_string();
    for y in x.chars() {
        // converting `y` to a string and then to an integer
        let z = (y.to_string()).parse::<i32>().unwrap();
        // incrementing `sum` by `z`
        sum += z;
    }
    println!("{}", sum);
}
(Playground)
The method you need is char::to_digit. It converts a char to the number it represents in the given radix.
You can also use Iterator::sum to calculate sum of a sequence conveniently:
fn main() {
    const RADIX: u32 = 10;
    let x = "134";
    println!("{}", x.chars().map(|c| c.to_digit(RADIX).unwrap()).sum::<u32>());
}
my_char as u32 - '0' as u32
Now, there's a lot more to unpack about this answer.
It works because the ASCII (and thus UTF-8) encodings have the Arabic numerals 0-9 ordered in ascending order. You can get the scalar values and subtract them.
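A minimal check of that claim:
fn main() {
    // '0' through '9' are code points 48 through 57, in order
    assert_eq!('7' as u32 - '0' as u32, 7);
}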
However, what should it do for values outside this range? What happens if you provide 'p'? It returns 64. What about '.'? This will panic. And '♥' will return 9781.
Strings are not just bags of bytes. They are UTF-8 encoded and you cannot just ignore that fact. Every char can hold any Unicode scalar value.
That's why strings are the wrong abstraction for the problem.
From an efficiency perspective, allocating a string seems inefficient. Rosetta Code has an example of using an iterator which only does numeric operations:
// Iterator yielding the digits of a number (field 0), least significant
// first, in the given base (field 1)
struct DigitIter(usize, usize);

impl Iterator for DigitIter {
    type Item = usize;
    fn next(&mut self) -> Option<Self::Item> {
        if self.0 == 0 {
            None
        } else {
            let ret = self.0 % self.1;
            self.0 /= self.1;
            Some(ret)
        }
    }
}

fn main() {
    println!("{}", DigitIter(1234, 10).sum::<usize>());
}
If c is your character you can just write:
c as i32 - 0x30;
Test with:
let c: char = '2';
let n: i32 = c as i32 - 0x30;
println!("{}", n);
output:
2
NB: 0x30 is '0' in the ASCII table, easy enough to remember!
Another way is to iterate over the characters of your string and convert and add them using fold (note that unwrap_or(0) silently treats any non-digit character as 0):
fn sum_of_string(s: &str) -> u32 {
    s.chars().fold(0, |acc, c| c.to_digit(10).unwrap_or(0) + acc)
}

fn main() {
    let x = "123";
    println!("{}", sum_of_string(x));
}
I've been trying to generate primes between m and n with the following function:
// The variable sieve is a list of primes between 1 and 32000.
// The primes up to 100 are definitely correct.
fn sieve_primes(sieve: &Vec<usize>, m: &usize, n: &usize) -> Vec<usize> {
    let size: usize = *n - *m + 1;
    let mut list: Vec<usize> = Vec::with_capacity(size);
    for i in *m..(*n + 1) {
        list.push(i);
    }
    for i in sieve {
        for j in (((*m as f32) / (*i as f32)).ceil() as usize)
            ..((((*n as f32) / (*i as f32)).floor() + 1.0) as usize)
        {
            println!("{} ", j);
            if j != 1 {
                list[i * j - *m] = 0;
            }
        }
    }
    let mut primes: Vec<usize> = Vec::new();
    for num in &list {
        if *num >= 2 {
            primes.push(*num);
        }
    }
    primes
}
This works for smaller (less than 1,000,000-ish) values of m and n, but it fails at runtime for numbers around the hundred-millions to billions.
The output for m = 99999999, n = 100000000 is:
33333334
thread 'main' panicked at 'index out of bounds: the len is 2 but the index is 3'
If you look at the numbers this doesn't make any sense. First of all, it seems to skip the number 2 in the list of primes. Second, when i = 3 the for statement should simplify to for j in 33333333..33333334, which for some reason starts j at 33333334.
f32 can only represent all 24-bit integers exactly, which corresponds to about 16 million (actually 16777216 = 2^24). Above that there are gaps: up to 33554432 (2^25), only even numbers can be represented. So in your example the quotient 33333333.33… cannot be represented as f32 and gets rounded, ending up as 33333334 after the ceil.
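A quick way to see the gaps (note that the literal 33333333 happens to tie-round down to 33333332, while the computed quotient 33333333.33… rounds up to 33333334):
fn main() {
    // 2^24 = 16777216 is the limit below which every integer fits in f32
    assert_eq!(16_777_216_f32 + 1.0, 16_777_216_f32); // 16777217 is not representable
    // Between 2^24 and 2^25 the spacing is 2, so odd integers get rounded away
    assert_eq!(33_333_333_f32, 33_333_332_f32);
}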
You don't need floats to round the result of an integer division; using integers directly is both faster and has no precision issues. For non-negative integers: adding b/2 before dividing rounds to nearest, and adding b - 1 rounds up (any nonzero remainder bumps the quotient):
fn main() {
    let a = 12;
    let b = 7;
    println!("rounded down: {}", a / b);         // 1
    println!("rounded: {}", (a + b / 2) / b);    // 2
    println!("rounded up: {}", (a + b - 1) / b); // 2
}
You are casting integers to f32, but f32 is not precise enough. Use f64 instead.
fn main() {
    println!("{}", 33333333.0f32); // prints 33333332
}