I'm implementing an algorithm to get the factorial of a certain number for a programming class.
fn factorial(number: u64) -> u64 {
    if number < 2 {
        1
    } else {
        number * factorial(number - 1)
    }
}
When I tried it with 100, or even with 25, I got the error "thread 'main' panicked at 'attempt to multiply with overflow'". So I tried wrapping multiplication, and the resulting function was:
fn factorial(number: u64) -> u64 {
    if number < 2 {
        1
    } else {
        number.wrapping_mul(factorial(number - 1))
    }
}
This way there is no panic, but the result is always zero. So I tried using f64, and the result was
100! = 93326215443944100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
instead of
100! = 93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000
Is there another way to store the result so the right value is returned?
100! is a really big number. In fact, the largest factorial that will fit in a u64 is just 20!. For numbers that don't fit in a u64, num::bigint::BigUint is an appropriate storage option.
The following code calculates a value for 100!. You can run it in your browser here.
extern crate num;

use num::BigUint;

fn factorial(number: BigUint) -> BigUint {
    let big_1 = 1u32.into();
    let big_2 = 2u32.into();

    if number < big_2 {
        big_1
    } else {
        let prev_factorial = factorial(number.clone() - big_1);
        number * prev_factorial
    }
}

fn main() {
    let number = 100u32.into();
    println!("{}", factorial(number));
}
To give some insight into why u64 doesn't work, you can call the bits method on the result. If you do so, you will find that the value of 100! requires 525 bits to store. That's more than eight u64s' worth of storage.
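For example, the main above can be adapted to print that bit count (a quick sketch; it reuses the factorial function and BigUint import from the code in this answer):

fn main() {
    let number: BigUint = 100u32.into();
    let result = factorial(number);
    // bits() reports how many bits the value occupies; for 100! this prints 525.
    println!("100! needs {} bits", result.bits());
}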
I wanted to complement @Jason Watkins' answer with an iterative solution using Iterator::fold:
extern crate num;

use num::{bigint::BigUint, One};

fn factorial(value: u32) -> BigUint {
    (2..=value).fold(BigUint::one(), |res, n| res * n)
}

fn main() {
    let result = factorial(10);
    assert_eq!(result, 3628800u32.into());
}
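For what it's worth, the same iterative idea can also be written with Iterator::product. This is a sketch that assumes num-bigint's From<u32> and iterator Product implementations for BigUint, both of which exist in current versions:

fn factorial(value: u32) -> BigUint {
    // map each factor into a BigUint and multiply them all together;
    // an empty range yields BigUint::one(), so 0! and 1! come out as 1
    (2..=value).map(BigUint::from).product()
}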
I need several parts of a program, in different modules, to have a unique integer.
For example:
pub fn foo() -> u64 {
    unique_integer!()
}

pub fn bar() -> u64 {
    unique_integer!()
}
(foo() should never return the same value as bar(), but the values themselves are meaningless and do not need to be stable across builds. All invocations of foo() must return the same value, as must all invocations of bar(). It is preferred, but not essential, that the values are contiguous.)
Is there a way of using a macro to do this?
You could compute a compile-time hash using the module path (which contains the crate and modules leading up to the file), the file name, column and line number of the macro invocation like this:
pub const fn hash(module_path: &'static str, file: &'static str, line: u32, column: u32) -> u64 {
    let mut hash = 0xcbf29ce484222325;
    let prime = 0x00000100000001B3;

    let mut bytes = module_path.as_bytes();
    let mut i = 0;
    while i < bytes.len() {
        hash ^= bytes[i] as u64;
        hash = hash.wrapping_mul(prime);
        i += 1;
    }

    bytes = file.as_bytes();
    i = 0;
    while i < bytes.len() {
        hash ^= bytes[i] as u64;
        hash = hash.wrapping_mul(prime);
        i += 1;
    }

    hash ^= line as u64;
    hash = hash.wrapping_mul(prime);
    hash ^= column as u64;
    hash = hash.wrapping_mul(prime);
    hash
}

macro_rules! unique_number {
    () => {{
        const UNIQ: u64 = crate::hash(module_path!(), file!(), line!(), column!());
        UNIQ
    }};
}

fn foo() -> u64 {
    unique_number!()
}

fn bar() -> u64 {
    unique_number!()
}

fn main() {
    println!("{} {}", foo(), bar()); // 2098219922142993841 2094402417770602149 on the playground
}
(playground)
This has the benefit of consistent results compared to the top answer, which can return different values depending on the order of invocation. It is also computed entirely at compile time, which removes the runtime overhead of maintaining a counter.
The only downside is that you could get hash collisions, but the chance is low. If you want, you could try an algorithm that computes perfect hash values. The example shown uses the FNV algorithm, which should be decent but is not collision-free.
Not exactly a macro, but here's another proposal:
#[repr(u64)]
enum Unique {
    Foo,
    Bar,
}

pub fn foo() -> u64 {
    Unique::Foo as u64
}

pub fn bar() -> u64 {
    Unique::Bar as u64
}
The compiler should warn you if you don't use a variant.
No, you cannot use a regular macro for this. However, you might be able to find a procedural macro crate that provides this functionality.
That being said...
This does not count as safe rust, but if we are okay with throwing safety out the window then this should do the trick.
macro_rules! unique_u64 {
    () => {{
        struct PlaceHolder;
        let id = ::std::any::TypeId::of::<PlaceHolder>();
        unsafe { ::std::mem::transmute::<_, u64>(id) }
    }};
}
This is probably undefined behavior, but since we know that every type has a unique TypeId, it would have the desired effect. The only reason I know this is even possible is that I have looked at the structure of TypeId and know it contains a single u64 to distinguish types. However, there are currently plans to change TypeId from a u64 to something more stable and less prone to this kind of unsafe code. There are no guarantees about what the contents of TypeId might change to, and when it does change, this code might fail silently if the new representation happens to have the same size as a u64.
Alternatively,
We can achieve a similar result in safe Rust by hashing the TypeId. This slightly breaks the rules, since we have no guarantee that it will always produce a unique result. However, it seems highly unlikely that two different TypeIds would hash to the same value. Plus, this stays within safe Rust and is unlikely to break in future releases of Rust.
macro_rules! unique_u64 {
    () => {{
        use ::std::hash::{Hash, Hasher};

        struct PlaceHolder;
        let id = ::std::any::TypeId::of::<PlaceHolder>();
        let mut hasher = ::std::collections::hash_map::DefaultHasher::new();
        id.hash(&mut hasher);
        hasher.finish()
    }};
}
It's possible to do something like this with once_cell, using a static atomic variable as a counter:
use core::sync::atomic::{Ordering, AtomicU64};
use once_cell::sync::Lazy;

static COUNTER: AtomicU64 = AtomicU64::new(0);

fn foo() -> u64 {
    static LOCAL_COUNTER: Lazy<u64> = Lazy::new(|| COUNTER.fetch_add(1, Ordering::Relaxed));
    *LOCAL_COUNTER
}

fn bar() -> u64 {
    static LOCAL_COUNTER: Lazy<u64> = Lazy::new(|| COUNTER.fetch_add(1, Ordering::Relaxed));
    *LOCAL_COUNTER
}

fn main() {
    dbg!(foo()); // 0
    dbg!(foo()); // still 0
    dbg!(bar()); // 1
    dbg!(foo()); // unchanged - 0
    dbg!(bar()); // unchanged - 1
}
Playground
And, yes, the repeated code can, as usual, be wrapped in a macro:
macro_rules! unique_integer {
    () => {{
        static LOCAL_COUNTER: Lazy<u64> = Lazy::new(|| COUNTER.fetch_add(1, Ordering::Relaxed));
        *LOCAL_COUNTER
    }};
}

fn foo() -> u64 {
    unique_integer!()
}

fn bar() -> u64 {
    unique_integer!()
}
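If you'd rather avoid the extra dependency: since Rust 1.70 the same pattern can be written with std::sync::OnceLock instead of once_cell::sync::Lazy. A sketch (not from the original answer), mirroring the foo above:

use std::sync::OnceLock;
use std::sync::atomic::{AtomicU64, Ordering};

static COUNTER: AtomicU64 = AtomicU64::new(0);

fn foo() -> u64 {
    // initialized exactly once, on the first call
    static LOCAL: OnceLock<u64> = OnceLock::new();
    *LOCAL.get_or_init(|| COUNTER.fetch_add(1, Ordering::Relaxed))
}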
The problem I recently met requires doing integer operations that are clamped to the bounds of the integer type (based on its bit width).
For example, using i32 integer to do add operation, here's a piece of pseudo code to present the idea:
sum = a + b
max(min(sum, 2147483647), -2147483648)
// if the sum is larger than 2147483647, then return 2147483647.
// if the sum is smaller than -2147483648, then return -2147483648.
To achieve this, I naively wrote the following ugly code:
fn i32_add_handling_by_casting(a: i32, b: i32) -> i32 {
    let sum: i32;
    if (a as i64 + b as i64) > 2147483647 as i64 {
        sum = 2147483647;
    } else if (a as i64 + b as i64) < -2147483648 as i64 {
        sum = -2147483648;
    } else {
        sum = a + b;
    }
    sum
}

fn main() {
    println!("{:?}", i32_add_handling_by_casting(2147483647, 1));
    println!("{:?}", i32_add_handling_by_casting(-2147483648, -1));
}
The code works well, but my sixth sense tells me that using type casting is problematic. So I tried to use traditional panic (exception) handling to deal with this... but I got stuck with the code below (the panic result can't distinguish underflow from overflow):
use std::panic;

fn i32_add_handling_by_panic(a: i32, b: i32) -> i32 {
    let sum: i32;
    let result = panic::catch_unwind(|| { a + b }).ok();
    match result {
        Some(result) => { sum = result },
        None => { sum = ? }
    }
    sum
}

fn main() {
    println!("{:?}", i32_add_handling_by_panic(2147483647, 1));
    println!("{:?}", i32_add_handling_by_panic(-2147483648, -1));
}
To sum up, I have 3 questions:
Is my type-casting solution valid for a strongly typed language? (If possible, I'd like an explanation of why it is or isn't valid.)
Is there a better way to deal with this problem?
Can panic handling treat different exceptions (e.g., overflow vs. underflow) separately?
In this case, the Rust standard library has a method called saturating_add, which supports your use case:
assert_eq!(10_i32.saturating_add(20), 30);
assert_eq!(i32::MIN.saturating_add(-1), i32::MIN);
assert_eq!(i32::MAX.saturating_add(1), i32::MAX);
Internally, it is implemented as a compiler intrinsic.
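With that, the function from the question collapses to a one-liner. A quick sketch (the function name here is just illustrative):

fn i32_add_handling(a: i32, b: i32) -> i32 {
    // saturating_add clamps to i32::MIN / i32::MAX instead of overflowing
    a.saturating_add(b)
}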
In general, such problems are not meant to be solved with panics and unwinding, which are intended for cleanup in exceptional cases only. A hand-written version might involve type casting, but should calculate a as i64 + b as i64 only once (see the sketch after the checked_add version below). Alternatively, here's a version using checked_add, which returns None rather than panicking on overflow:
fn saturating_add(a: i32, b: i32) -> i32 {
    if let Some(sum) = a.checked_add(b) {
        sum
    } else if a < 0 {
        i32::MIN
    } else {
        i32::MAX
    }
}
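And, for completeness, the cast-based variant mentioned above, computing the i64 sum only once (a sketch with an illustrative name; clamp on integers is stable since Rust 1.50):

fn saturating_add_by_casting(a: i32, b: i32) -> i32 {
    // the sum of two i32 values always fits in an i64, so this addition cannot overflow
    (a as i64 + b as i64).clamp(i32::MIN as i64, i32::MAX as i64) as i32
}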
I need to calculate 21 factorial in my project.
fn factorial(num: u64) -> u64 {
    match num {
        0 => 1,
        1 => 1,
        _ => factorial(num - 1) * num,
    }
}

fn main() {
    let x = factorial(21);
    println!("The value of 21 factorial is {} ", x);
}
When running this code, I get an error:
thread 'main' panicked at 'attempt to multiply with overflow', src\main.rs:5:18
A u64 can’t hold 21! (it’s between 2^65 and 2^66), but a u128 can.
A possible implementation could be
pub fn factorial(num: u128) -> u128 {
    (1..=num).product()
}

#[test]
fn factorial_of_21() {
    assert_eq!(51090942171709440000, factorial(21));
}

#[test]
fn factorial_of_0() {
    assert_eq!(1, factorial(0));
}
21! doesn't fit in a 64-bit integer. You need an arbitrary-precision arithmetic (bigint) library, to implement your own, or to use 128-bit integers or floating point.
According to this list, you could consider using ramp.
I think the implementation should look like this
pub fn factorial(num: u128) -> u128 {
    (1..=num).product()
}
I used the num::BigUint type to avoid integer overflows when calculating the factorial of a number.
However, I had to resort to using .clone() to pass rustc's borrow checker.
How can I refactor the factorial function to avoid cloning what could be large numbers many times?
use num::{BigUint, FromPrimitive, One};

fn main() {
    for n in -2..33 {
        let bign: Option<BigUint> = FromPrimitive::from_isize(n);
        match bign {
            Some(n) => println!("{}! = {}", n, factorial(n.clone())),
            None => println!("Number must be non-negative: {}", n),
        }
    }
}

fn factorial(number: BigUint) -> BigUint {
    if number < FromPrimitive::from_usize(2).unwrap() {
        number
    } else {
        number.clone() * factorial(number - BigUint::one())
    }
}
I tried to use a reference to BigUint in the function definition but got errors saying that BigUint does not support references.
The first clone is easy to remove. You are trying to use n twice in the same expression, so don't use just one expression:
print!("{}! = ", n);
println!("{}", factorial(n));
is equivalent to println!("{}! = {}", n, factorial(n.clone())) but does not try to move n and use a reference to it at the same time.
The second clone can be removed by changing factorial not to be recursive:
fn factorial(mut number: BigUint) -> BigUint {
    let mut result = BigUint::one();
    let one = BigUint::one();

    while number > one {
        result *= &number;
        number -= &one;
    }

    result
}
This might seem unidiomatic, however. There is a range function that you could use with for; however, it uses clone internally, defeating the point.
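For reference, the range-based form mentioned above might look like this. It is only a sketch, assuming num::range_inclusive (re-exported from num-iter) and num-bigint's MulAssign for &BigUint; note the iterator clones internally, which is exactly the overhead we were trying to avoid:

use num::{range_inclusive, BigUint, One};

fn factorial(number: BigUint) -> BigUint {
    let mut result = BigUint::one();
    // iterates 2..=number, cloning BigUint values as it goes
    for i in range_inclusive(BigUint::from(2u32), number) {
        result *= &i;
    }
    result
}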
I don't think taking a BigUint as a parameter makes sense for a factorial; a u32 should be enough:
use num::{BigUint, One};

fn main() {
    for n in 0..42 {
        println!("{}! = {}", n, factorial(n));
    }
}

fn factorial_aux(accu: BigUint, i: u32) -> BigUint {
    if i > 1 {
        factorial_aux(accu * i, i - 1)
    } else {
        accu
    }
}

fn factorial(n: u32) -> BigUint {
    factorial_aux(BigUint::one(), n)
}
Or if you really want to keep BigUint:
use num::{BigUint, FromPrimitive, One, Zero};

fn main() {
    for i in (0..42).flat_map(|i| FromPrimitive::from_i32(i)) {
        print!("{}! = ", i);
        println!("{}", factorial(i));
    }
}

fn factorial_aux(accu: BigUint, i: BigUint) -> BigUint {
    if !i.is_one() {
        factorial_aux(accu * &i, i - 1u32)
    } else {
        accu
    }
}

fn factorial(n: BigUint) -> BigUint {
    if !n.is_zero() {
        factorial_aux(BigUint::one(), n)
    } else {
        BigUint::one()
    }
}
Neither version does any cloning.
If you use ibig::UBig instead of BigUint, those clones will be free, because ibig is optimized not to allocate memory from the heap for numbers this small.
The num crate in Rust provides a way of representing zeros and ones via T::zero() and T::one(). Is there a way of representing other integers, such as two, three, etc.?
Consider the following (artificial) example:
extern crate num;

trait IsTwo {
    fn is_two(self) -> bool;
}

impl<T: num::Integer> IsTwo for T {
    fn is_two(self) -> bool {
        self == (T::one() + T::one())
    }
}
Is there a better way of representing T::one() + T::one() as 2?
One way of representing arbitrary integers in generic code is to use the num::NumCast trait:
impl<T: num::Integer + num::NumCast> IsTwo for T {
    fn is_two(self) -> bool {
        self == T::from(2).unwrap()
    }
}
A related way is to use the num::FromPrimitive trait:
impl<T: num::Integer + num::FromPrimitive> IsTwo for T {
    fn is_two(self) -> bool {
        self == T::from_i32(2).unwrap()
    }
}
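Either way, usage looks the same. A quick sketch, assuming one of the blanket impls above and the IsTwo trait are in scope:

fn main() {
    // works for any integer type covered by the blanket impl
    assert!(2u8.is_two());
    assert!(!3i64.is_two());
}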
Related questions and answers: [1, 2].
You can write a function:
fn two<T>() -> T
where
    T: num::Integer,
{
    let mut v = T::zero();
    for _ in 0..2 {
        v = v + T::one();
    }
    v
}
I've chosen this form because it's easily made into a macro, which can be reused for any set of values:
num_constant!(two, 2);
num_constant!(forty_two, 42);
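For concreteness, one possible definition of that macro is sketched below (an assumption on my part, reusing the same num::Integer bound as the function above):

macro_rules! num_constant {
    ($name:ident, $value:expr) => {
        // expands to a small generic function named $name that builds $value from zero and one
        fn $name<T: num::Integer>() -> T {
            let mut v = T::zero();
            for _ in 0..$value {
                v = v + T::one();
            }
            v
        }
    };
}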
I hear the concerns now... "but that's a loop and inefficient!". That's what optimizing compilers are for. Here's the LLVM IR for two when compiled in release mode:
; Function Attrs: noinline readnone uwtable
define internal fastcc i32 @_ZN10playground3two17hbef99995c3606e93E() unnamed_addr #3 personality i32 (i32, i32, i64, %"unwind::libunwind::_Unwind_Exception"*, %"unwind::libunwind::_Unwind_Context"*)* @rust_eh_personality {
bb3:
  br label %bb8

bb8:                                              ; preds = %bb3
  ret i32 2
}
That's right - it's been optimized to the value 2. No loops.
It's relatively simple to forge any number from 0 and 1:
you need to create 2, which is hardly difficult
you then convert your number to base 2, which takes O(log2(N)) operations
The algorithm is dead simple:
use num::Integer;

fn convert<T: Integer + Clone>(mut n: usize) -> T {
    let two = T::one() + T::one();
    let mut acc = T::one();
    let mut result = T::zero();

    while n > 0 {
        // add the current power of two whenever the corresponding bit of n is set
        if n % 2 != 0 {
            result = result + acc.clone();
        }
        acc = acc * two.clone();
        n /= 2;
    }

    result
}
And will be efficient both in Debug (O(log2(N)) iterations) and Release (the compiler optimizes it out completely).
For those who wish to see it in action, here on the playground we can see that convert::<i32>(12345) is optimized to 12345 as expected.
As an exercise for the reader, implement a generic version of convert that takes any Integer parameter; there aren't many operations required on n, after all.