In Java, intValue() returns a truncated portion of a BigInteger instance. I wrote a similar program in Rust, but it appears not to truncate:
extern crate num;

use num::bigint::{BigInt, RandBigInt};
use num::ToPrimitive;

fn main() {
    println!("Hello, world!");
    truncate_num(
        BigInt::parse_bytes(b"423445324324324324234324", 10).unwrap(),
        BigInt::parse_bytes(b"22447", 10).unwrap(),
    );
}

fn truncate_num(num1: BigInt, num2: BigInt) -> i32 {
    println!("Truncation of {} is {:?}.", num1, num1.to_i32());
    println!("Truncation of {} is {:?}.", num2, num2.to_i32());
    return 0;
}
The output I get from this is
Hello, world!
Truncation of 423445324324324324234324 is None.
Truncation of 22447 is Some(22447).
How can I achieve this in Rust? Should I try a conversion to String and then truncate manually? This would be my last resort.
Java's intValue() returns the lowest 32 bits of the integer. This could be done with a bitwise AND operation, x & 0xffffffff. A BigInt in Rust doesn't support bitwise manipulation, but you could first convert it to a BigUint, which does support such operations.
fn truncate_biguint_to_u32(a: &BigUint) -> u32 {
    use std::u32;
    let mask = BigUint::from(u32::MAX);
    (a & mask).to_u32().unwrap()
}
Converting BigInt to BigUint will be successful only when it is not negative. If the BigInt is negative (-x), we could find the lowest 32 bits of its absolute value (x), then negate the result.
fn truncate_bigint_to_u32(a: &BigInt) -> u32 {
    use num_traits::Signed;
    let was_negative = a.is_negative();
    let abs = a.abs().to_biguint().unwrap();
    let truncated = truncate_biguint_to_u32(&abs);
    if was_negative {
        truncated.wrapping_neg()
    } else {
        truncated
    }
}
You may use truncate_bigint_to_u32(a) as i32 if you need a signed number.
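For example, a small demo (a sketch assuming the two functions above, along with their imports, are in the same file):

extern crate num;
extern crate num_traits;

use num::bigint::BigInt;

fn main() {
    let a = BigInt::parse_bytes(b"423445324324324324234324", 10).unwrap();
    // The lowest 32 bits, first as an unsigned and then as a signed number.
    println!("{}", truncate_bigint_to_u32(&a));
    println!("{}", truncate_bigint_to_u32(&a) as i32);
}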
There is also a to_signed_bytes_le() method with which you could extract the bytes and decode them into a primitive integer directly:
fn truncate_bigint_to_u32_slow(a: &BigInt) -> u32 {
    use num_traits::Signed;
    let mut bytes = a.to_signed_bytes_le();
    // Sign-extend to 4 bytes: a negative number must be padded with 0xff
    // bytes, a non-negative one with zeros.
    bytes.resize(4, if a.is_negative() { 0xff } else { 0 });
    bytes[0] as u32
        | (bytes[1] as u32) << 8
        | (bytes[2] as u32) << 16
        | (bytes[3] as u32) << 24
}
This method is extremely slow compared to the above methods and I don't recommend using it.
There's no natural truncation of a big integer into a smaller one. Either it fits or you have to decide what value you want.
You could do this:
println!("Truncation of {} is {:?}.", num1, num1.to_i32().unwrap_or(-1));
or
println!("Truncation of {} is {:?}.", num1, num1.to_i32().unwrap_or(std::i32::MAX));
but your application logic should probably dictate the desired behavior when the returned option contains no value.
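If neither fallback value fits your use case, a plain match makes the out-of-range case explicit (a sketch, not from the original answer):

match num1.to_i32() {
    Some(n) => println!("{} fits in an i32: {}", num1, n),
    None => println!("{} does not fit in an i32", num1),
}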
I am trying to find the sum of the digits of a given number. For example, 134 will give 8.
My plan is to convert the number into a string using .to_string() and then use .chars() to iterate over the digits as characters. Then I want to convert every char in the iteration into an integer and add it to a variable. I want to get the final value of this variable.
I tried using the code below to convert a char into an integer:
fn main() {
    let x = "123";
    for y in x.chars() {
        let z = y.parse::<i32>().unwrap();
        println!("{}", z + 1);
    }
}
But it results in this error:
error[E0599]: no method named `parse` found for type `char` in the current scope
--> src/main.rs:4:19
|
4 | let z = y.parse::<i32>().unwrap();
| ^^^^^
This code does exactly what I want, but it first converts each char into a String and then into an integer before incrementing sum by z.
fn main() {
    let mut sum = 0;
    let x = 123;
    let x = x.to_string();
    for y in x.chars() {
        // converting `y` to a string and then to an integer
        let z = (y.to_string()).parse::<i32>().unwrap();
        // incrementing `sum` by `z`
        sum += z;
    }
    println!("{}", sum);
}
The method you need is char::to_digit. It converts a char to the number it represents in the given radix.
You can also use Iterator::sum to compute the sum of a sequence conveniently:
fn main() {
    const RADIX: u32 = 10;
    let x = "134";
    println!("{}", x.chars().map(|c| c.to_digit(RADIX).unwrap()).sum::<u32>());
}
my_char as u32 - '0' as u32
Now, there's a lot more to unpack about this answer.
It works because the ASCII (and thus UTF-8) encodings place the Arabic numerals 0-9 in ascending, contiguous order. You can get the scalar values and subtract them.
However, what should it do for values outside this range? What happens if you provide 'p'? It returns 64. What about '.'? This will panic. And '♥' will return 9781.
Strings are not just bags of bytes. They are UTF-8 encoded and you cannot just ignore that fact. Every char can hold any Unicode scalar value.
That's why strings are the wrong abstraction for the problem.
Allocating a string just to inspect its characters is also inefficient. Rosetta Code has an example of using an iterator which only performs numeric operations:
struct DigitIter(usize, usize);

impl Iterator for DigitIter {
    type Item = usize;

    fn next(&mut self) -> Option<Self::Item> {
        if self.0 == 0 {
            None
        } else {
            let ret = self.0 % self.1;
            self.0 /= self.1;
            Some(ret)
        }
    }
}

fn main() {
    println!("{}", DigitIter(1234, 10).sum::<usize>());
}
If c is your character you can just write:
c as i32 - 0x30;
Test with:
let c: char = '2';
let n: i32 = c as i32 - 0x30;
println!("{}", n);
output:
2
NB: 0x30 is '0' in the ASCII table, easy enough to remember!
Another way is to iterate over the characters of your string and convert and add them using fold.
fn sum_of_string(s: &str) -> u32 {
    // unwrap_or(0) silently treats any non-digit character as 0
    s.chars().fold(0, |acc, c| c.to_digit(10).unwrap_or(0) + acc)
}

fn main() {
    let x = "123";
    println!("{}", sum_of_string(x));
}
I found a function to compute a mean and have been playing with it. The code snippet below runs, but if the data inside the input changes from a float to an int an error occurs. How do I get this to work with floats and integers?
use std::borrow::Borrow;

fn mean(arr: &mut [f64]) -> f64 {
    let mut i = 0.0;
    let mut mean = 0.0;
    for num in arr {
        i += 1.0;
        mean += (num.borrow() - mean) / i;
    }
    mean
}

fn main() {
    let val = mean(&mut vec![4.0, 5.0, 3.0, 2.0]);
    println!("The mean is {}", val);
}
The code in the question doesn't compile because f64 does not have a borrow() method. Also, the slice it accepts doesn't need to be mutable since we are not changing it. Here is a modified version that compiles and works:
fn mean(arr: &[f64]) -> f64 {
    let mut i = 0.0;
    let mut mean = 0.0;
    for &num in arr {
        i += 1.0;
        mean += (num - mean) / i;
    }
    mean
}
We destructure with &num when looping over arr, so that the type of num is f64 rather than a reference to f64. This snippet would compile either way, but omitting it would break the generic version below.
For the same function to accept floats and integers alike, its parameter needs to be generic. Ideally we'd like it to accept any type that can be converted into f64, including f32 or user-defined types that define such a conversion. Something like this:
fn mean<T>(arr: &[T]) -> f64 {
    let mut i = 0.0;
    let mut mean = 0.0;
    for &num in arr {
        i += 1.0;
        mean += (num as f64 - mean) / i;
    }
    mean
}
This doesn't compile because x as f64 is not defined for x of an arbitrary type. Instead, we need a trait bound on T that defines a way to convert T values to f64. This is exactly the purpose of the Into trait; every type T that implements Into<U> defines an into(self) -> U method. Specifying T: Into<f64> as the trait bound gives us the into() method that returns an f64.
We also need to request that T be Copy, so that destructuring with &num copies the value rather than attempting to move it out of the array. Since primitive numbers such as integers implement Copy, this is fine for us. Working code then looks like this:
fn mean<T: Into<f64> + Copy>(arr: &[T]) -> f64 {
    let mut i = 0.0;
    let mut mean = 0.0;
    for &num in arr {
        i += 1.0;
        mean += (num.into() - mean) / i;
    }
    mean
}

fn main() {
    let val1 = mean(&vec![4.0, 5.0, 3.0, 2.0]);
    let val2 = mean(&vec![4, 5, 3, 2]);
    println!("The means are {} and {}", val1, val2);
}
Note that this will only work for types that define lossless conversion to f64. Thus it will work for u32, i32 (as in the above example) and smaller integer types, but it won't accept for example a vector of i64 or u64, which cannot be losslessly converted to f64.
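For instance, this call would be rejected at compile time (a quick illustration, not from the original answer):

let bad = mean(&vec![4i64, 5, 3, 2]); // error: `i64` does not implement `Into<f64>`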
Also note that this problem lends itself nicely to functional programming idioms such as enumerate() and fold(). Although outside the scope of this already longish answer, writing out such an implementation is an exercise hard to resist.
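For the curious, here is one way it might look (a sketch under the same T: Into<f64> + Copy bound, not part of the original answer):

fn mean<T: Into<f64> + Copy>(arr: &[T]) -> f64 {
    // The same running-mean update, expressed as a fold over (index, value) pairs.
    arr.iter()
        .enumerate()
        .fold(0.0, |mean, (i, &num)| mean + (num.into() - mean) / (i + 1) as f64)
}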
I'm trying to iterate over all possible byte (u8) values. Unfortunately, the literals in 0..256 are inferred to be u8, and 256 overflows:
fn foo(byte: u8) {
    println!("{}", byte);
}

fn main() {
    for byte in 0..256 {
        foo(byte);
        println!("Never executed.");
    }
    for byte in 0..1 {
        foo(byte);
        println!("Executed once.");
    }
}
The above compiles with:
warning: literal out of range for u8
--> src/main.rs:6:20
|
6 | for byte in 0..256 {
| ^^^
|
= note: #[warn(overflowing_literals)] on by default
The first loop body is never executed at all.
My workaround is very ugly and feels brittle because of the cast:
for short in 0..256 {
    let _explicit_type: u16 = short;
    foo(short as u8);
}
Is there a better way?
As of Rust 1.26, inclusive ranges are stabilized using the syntax ..=, so you can write this as:
for byte in 0..=255 {
    foo(byte);
}
This is the issue "Unable to create a range with max value".
The gist of it is that byte is inferred to be u8, and therefore 0..256 is represented as a Range<u8>, but unfortunately 256 overflows as a u8.
The current work-around is to use a larger integral type and cast to u8 later on since 256 is actually never reached.
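For example (a sketch along those lines; the cast is lossless because 256 itself is never produced):

// assumes foo from the question
for byte in (0u16..256).map(|n| n as u8) {
    foo(byte);
}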
There is an RFC for inclusive ranges with ..., which has entered its final comment period; maybe in the future it'll be possible to write for byte in 0...255 or its alternative (0..255).inclusive().
So I'm trying to get a random number, but I'd rather not have it come back as uint instead of int... Not sure if this match is right, either, but the compiler doesn't get that far because it's never heard of this from_uint thing I'm trying to do:
fn get_random(max: &int) -> int {
    // Here we use * to dereference max; that is, we access the value at
    // the pointer location rather than trying to do math using the
    // actual pointer itself
    match int::from_uint(rand::random::<uint>() % *max + 1) {
        Some(n) => n,
        None => 0,
    }
}
from_uint is not in the namespace of std::int, but std::num: http://doc.rust-lang.org/std/num/fn.from_uint.html
Original answer:
Cast a u32 to int with as. If you cast uint or u64 to int, you risk overflowing into the negatives (assuming you are on 64 bit). From the docs:
The size of a uint is equivalent to the size of a pointer on the particular architecture in question.
This works:
use std::rand;

fn main() {
    let max = 42i;
    println!("{}", get_random(&max));
}

fn get_random(max: &int) -> int {
    (rand::random::<u32>() as int) % (*max + 1)
}
I'm just learning Rust, so this question probably has a trivial answer.
I want to access the individual digits of a Rust BigUint. This is for a Project Euler puzzle asking for the sum of these digits.
I did it like this:
let mut f = func(100);
let mut total: BigUint = Zero::zero();
while f > FromPrimitive::from_uint(0).unwrap() {
    let digit = f % FromPrimitive::from_uint(10).unwrap();
    f = f / FromPrimitive::from_uint(10).unwrap();
    total = total + digit;
}
println!("");
println!("Sum of digits of func(100) = {}", total);
It works, but it's quite frustrating, because I believe these digits are internally stored as an array; I just can't access them directly because the underlying data member is private.
Is there a more straightforward way to do this?
The internal storage of BigUint is a vector of BigDigit, which is an alias for u32, so it probably won't help you much in getting the sum of base-10 digits.
I doubt converting to String would be an efficient way to compute this, but you can make it a little more straightforward using the div_rem() method from the Integer trait. (After all, converting to String does this computation internally.)
extern crate num;

use std::num::Zero;
use num::bigint::BigUint;
use num::integer::Integer;

fn main() {
    let mut f = func(100);
    let mut total: BigUint = Zero::zero();
    let ten: BigUint = FromPrimitive::from_uint(10).unwrap();
    while f > Zero::zero() {
        let (quotient, remainder) = f.div_rem(&ten);
        f = quotient;
        total = total + remainder;
    }
    println!("");
    println!("Sum of digits of func(100) = {}", total);
}
I ended up doing it with to_string() as suggested by @C.Quilley. But as @Levans pointed out, the internal implementation merely performs a div_rem. I still prefer this version because I see it as more readable and straightforward.
extern crate num;

use num::bigint::BigUint;

// What this function does is irrelevant
fn func(n: uint) -> BigUint {
    let p: BigUint = FromPrimitive::from_uint(n).unwrap();
    p * p * p * p * p * p * p * p * p * p * p * p * p
}

fn main() {
    let n = 2014u;
    let f = func(n);
    let total: uint = f.to_string().as_slice().chars()
        .fold(0, |a, b| a + b as uint - '0' as uint);
    println!("Sum of digits of func({}) = {} : {}", n, f, total);
}