Is it possible to ask the compiler to optimize the code if I know that a certain parameter will likely take one of a few select values?
For example:
// x will be within 1..=10
fn foo(x: u32, y: u32) -> u32 {
    // some logic
}
The above function should be compiled into:
fn foo(x: u32, y: u32) -> u32 {
    match x {
        1 => foo1(y),  // foo1 is generated by the compiler from foo, optimized for when x == 1
        2 => foo2(y),  // foo2 is generated by the compiler from foo, optimized for when x == 2
        // ...
        10 => foo10(y),
        _ => foo_default(x, y), // unoptimized foo logic
    }
}
I would like the compiler to generate the above rewrite based on some hint.
You can put the logic in an #[inline(always)] foo_impl(), then call it with the values you expect:
// x will be within 1..=10
#[inline(always)]
fn foo_impl(x: u32, y: u32) -> u32 {
    // some logic
}

fn foo(x: u32, y: u32) -> u32 {
    match x {
        1 => foo_impl(1, y),
        2 => foo_impl(2, y),
        // ...
        10 => foo_impl(10, y),
        _ => foo_impl(x, y),
    }
}
Because of the #[inline(always)], the compiler will inline all the foo_impl() calls and then use the constant arguments to optimize each arm. Nothing is guaranteed, but it should be pretty reliable (I haven't tested it, though).
Make sure to benchmark: this can actually be a regression due to code bloat.
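If you want a quick harness for that comparison, here is a minimal Criterion sketch (a hedged example: it assumes the criterion dev-dependency, and foo is just a stand-in for whichever version you are measuring):

use criterion::{criterion_group, criterion_main, Criterion};
use std::hint::black_box;

// Stand-in for the function under test.
fn foo(x: u32, y: u32) -> u32 {
    x * y
}

fn bench_foo(c: &mut Criterion) {
    // black_box keeps the inputs opaque so the call is not constant-folded away.
    c.bench_function("foo, x mostly 2", |b| {
        b.iter(|| foo(black_box(2), black_box(40)))
    });
}

criterion_group!(benches, bench_foo);
criterion_main!(benches);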
Let's use this toy example:
fn foo(x: u32, y: u32) -> u32 {
    x * y
}

This compiles to:
movl %edi, %eax
imull %esi, %eax
retq
But in your application, you know that x is very likely to be 2 every time. We can communicate that to the compiler with std::intrinsics::likely:
#![feature(core_intrinsics)]

fn foo(x: u32, y: u32) -> u32 {
    // Both branches make the same call; only the hint differs. In the
    // likely branch the compiler knows x == 2 and can const-propagate it.
    if std::intrinsics::likely(x == 2) {
        foo_impl(x, y)
    } else {
        foo_impl(x, y)
    }
}

fn foo_impl(x: u32, y: u32) -> u32 {
    x * y
}

This produces:
leal (%rsi,%rsi), %eax
imull %edi, %esi
cmpl $2, %edi
cmovnel %esi, %eax
retq
DISCLAIMER: I'm not experienced enough to know if this is a good optimization or not, just that the hint changed the output.
Unfortunately, while I think this is the clearest syntax, std::intrinsics are not stabilized. Fortunately, we can get the same behavior using the #[cold] attribute, which is available on stable and can convey your intent to the compiler:
fn foo(x: u32, y: u32) -> u32 {
    if x == 2 {
        foo_impl(x, y)
    } else {
        foo_impl_unlikely(x, y)
    }
}

fn foo_impl(x: u32, y: u32) -> u32 {
    x * y
}

#[cold]
fn foo_impl_unlikely(x: u32, y: u32) -> u32 {
    foo_impl(x, y)
}

This produces the same assembly as with the likely hint:
leal (%rsi,%rsi), %eax
imull %edi, %esi
cmpl $2, %edi
cmovnel %esi, %eax
retq
I'm skeptical whether applying this to your use case will actually yield the transformation you propose. I'd think there'd have to be a significant impact on const-propagation, and even a willingness from the compiler to optimise a check that x is in 1..=10 into a branch of ten constants, but using the hints above will let it decide what is best.
But sometimes you know better than the compiler, and you can force the const-propagation by applying the transformation manually: as you've done in your original example, or in a different way as in Chayim Friedman's answer.
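For example, a small macro can apply that manual transformation without repeating yourself. This is only a sketch of the idea: dispatch! is a hypothetical name, and foo_impl stands in for the #[inline(always)] helper from above:

// Expands one match arm per listed value, so each arm calls foo_impl
// with a literal argument that const-propagation can exploit.
macro_rules! dispatch {
    ($x:expr, $y:expr, $($v:literal),+ $(,)?) => {
        match $x {
            $($v => foo_impl($v, $y),)+
            other => foo_impl(other, $y),
        }
    };
}

#[inline(always)]
fn foo_impl(x: u32, y: u32) -> u32 {
    x * y // placeholder logic
}

fn foo(x: u32, y: u32) -> u32 {
    dispatch!(x, y, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
}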
I would like to iterate over the elements of a vector contained as a member in a struct called Test. The idea is to mutate Test independently in each iteration and signify success if some external logic on each mutated Test is successful. For simplicity, the mutation just changes each vector element to 123u8. The problem I have is not being able to change the elements inside a loop. I have two solutions which I thought would give the same result:
#[derive(Debug)]
struct Test {
    vec: Vec<u8>,
}

impl Test {
    fn working_solution(&mut self, number: u8) -> bool {
        self.vec[0] = number;
        self.vec[1] = number;
        self.vec[2] = number;
        true
    }

    fn non_working_solution(&mut self, number: u8) -> bool {
        self.vec.iter().all(|mut x| {
            x = &number; // mutation
            true // external logic
        })
    }
}
fn main() {
    let vec = vec![0u8, 1u8, 2u8];
    let mut test = Test { vec };
    println!("Original: {:?}", test);
    test.working_solution(123u8);
    println!("Altered: {:?}", test);

    let vec = vec![0u8, 1u8, 2u8];
    let mut test = Test { vec };
    println!("Original: {:?}", test);
    test.non_working_solution(123u8);
    println!("Altered: {:?}", test);
}
(playground)
Output:
Original: Test { vec: [0, 1, 2] }
Altered: Test { vec: [123, 123, 123] }
Original: Test { vec: [0, 1, 2] }
Altered: Test { vec: [0, 1, 2] }
Expected output:
Original: Test { vec: [0, 1, 2] }
Altered: Test { vec: [123, 123, 123] }
Original: Test { vec: [0, 1, 2] }
Altered: Test { vec: [123, 123, 123] }
How do I change a member of self when using an iterator?
As you can see in the documentation, iter takes a &self, that is, whatever you do, you cannot modify self (you can create a modified copy, but that is not the point of what you want to do here).
Instead, you can use the method iter_mut, which is more or less the same, but takes a &mut self, i.e., you can modify it.
Another side remark: you don't want to use all, which is used to check whether a property holds for all elements (hence the bool return value); instead, you want for_each, which applies a function to all elements.
fn non_working_solution(&mut self, number: u8) {
    self.vec.iter_mut().for_each(|x| {
        *x = number; // mutation
    })
}
(Playground)
As Stargateur mentioned in the comments, you can also use a for loop:
fn non_working_solution(&mut self, number: u8) {
    for x in self.vec.iter_mut() {
        *x = number;
    }
}
Since Rust 1.50, there is a dedicated method for filling a slice with a value — [_]::fill:
self.vec.fill(number)
In this case, fill seems to generate less code than a for loop or for_each:
(Compiler Explorer)
pub fn f(slice: &mut [u8], number: u8) {
    slice.fill(number);
}

pub fn g(slice: &mut [u8], number: u8) {
    for x in slice {
        *x = number;
    }
}

pub fn h(slice: &mut [u8], number: u8) {
    slice.iter_mut().for_each(|x| *x = number);
}
example::f:
    mov rax, rsi
    mov esi, edx
    mov rdx, rax
    jmp qword ptr [rip + memset@GOTPCREL]

example::g:
    test rsi, rsi
    je .LBB1_2
    push rax
    mov rax, rsi
    movzx esi, dl
    mov rdx, rax
    call qword ptr [rip + memset@GOTPCREL]
    add rsp, 8
.LBB1_2:
    ret

example::h:
    test rsi, rsi
    je .LBB2_1
    mov rax, rsi
    movzx esi, dl
    mov rdx, rax
    jmp qword ptr [rip + memset@GOTPCREL]
.LBB2_1:
    ret
The problem I recently met requires doing integer operations that saturate at the boundaries of the integer type.
For example, using an i32 integer to do an add operation, here's a piece of pseudo code to present the idea:
sum = a + b
max(min(sum, 2147483647), -2147483648)
// if the sum is larger than 2147483647, then return 2147483647.
// if the sum is smaller than -2147483648, then return -2147483648.
To achieve this, I naively wrote the following ugly code:
fn i32_add_handling_by_casting(a: i32, b: i32) -> i32 {
    let sum: i32;
    if (a as i64 + b as i64) > 2147483647 as i64 {
        sum = 2147483647;
    } else if (a as i64 + b as i64) < -2147483648 as i64 {
        sum = -2147483648;
    } else {
        sum = a + b;
    }
    sum
}

fn main() {
    println!("{:?}", i32_add_handling_by_casting(2147483647, 1));
    println!("{:?}", i32_add_handling_by_casting(-2147483648, -1));
}
The code works well, but my sixth sense tells me that using type casting is problematic. Thus, I tried to use traditional panic (exception) handling to deal with this... but I got stuck with the code below (the panic result can't distinguish underflow from overflow):
use std::panic;

fn i32_add_handling_by_panic(a: i32, b: i32) -> i32 {
    let sum: i32;
    let result = panic::catch_unwind(|| { a + b }).ok();
    match result {
        Some(result) => { sum = result },
        None => { sum = ? } // placeholder: what should go here?
    }
    sum
}

fn main() {
    println!("{:?}", i32_add_handling_by_panic(2147483647, 1));
    println!("{:?}", i32_add_handling_by_panic(-2147483648, -1));
}
To sum up, I have 3 questions:
Is my type casting solution valid in a strongly typed language? (If possible, I need an explanation of why it is or isn't valid.)
Is there other better way to deal with this problem?
Could panic handle different exception separately?
In this case, the Rust standard library has a method called saturating_add, which supports your use case:
assert_eq!(10_i32.saturating_add(20), 30);
assert_eq!(i32::MIN.saturating_add(-1), i32::MIN);
assert_eq!(i32::MAX.saturating_add(1), i32::MAX);
Internally, it is implemented as a compiler intrinsic.
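For reference, the sibling methods cover the other overflow-handling policies, all on stable:

// None on overflow:
assert_eq!(i32::MAX.checked_add(1), None);
// Two's-complement wraparound:
assert_eq!(i32::MAX.wrapping_add(1), i32::MIN);
// The wrapped value plus an overflow flag:
assert_eq!(i32::MAX.overflowing_add(1), (i32::MIN, true));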
In general, such problems are not intended to be solved with panics and unwinding, which are meant for cleanup in exceptional cases only. A hand-written version might involve type casting, but should calculate a as i64 + b as i64 only once. Alternatively, here's a version using checked_add, which returns None rather than panicking in case of overflow:
fn saturating_add(a: i32, b: i32) -> i32 {
    if let Some(sum) = a.checked_add(b) {
        sum
    } else if a < 0 {
        i32::MIN
    } else {
        i32::MAX
    }
}
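For comparison, here is a sketch of the hand-written casting version mentioned above, computing the i64 sum only once (the function name is mine; Ord::clamp has been stable since Rust 1.50):

fn saturating_add_via_i64(a: i32, b: i32) -> i32 {
    // An i64 can hold any sum of two i32 values, so compute it once and clamp.
    let sum = a as i64 + b as i64;
    sum.clamp(i32::MIN as i64, i32::MAX as i64) as i32
}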
I need to calculate 21 factorial in my project.
fn factorial(num: u64) -> u64 {
    match num {
        0 => 1,
        1 => 1,
        _ => factorial(num - 1) * num,
    }
}

fn main() {
    let x = factorial(21);
    println!("The value of 21 factorial is {} ", x);
}
When running this code, I get an error:
thread 'main' panicked at 'attempt to multiply with overflow', src\main.rs:5:18
A u64 can’t hold 21! (it’s between 2^65 and 2^66), but a u128 can.
A possible implementation could be
pub fn factorial(num: u128) -> u128 {
    (1..=num).product()
}

#[test]
fn factorial_of_21() {
    assert_eq!(51090942171709440000, factorial(21));
}

#[test]
fn factorial_of_0() {
    assert_eq!(1, factorial(0));
}
I need to calculate 21 factorial in my project.
21! doesn't fit in a 64-bit integer. You need an arbitrary-precision arithmetic (bigint) library, to implement your own, or to use 128-bit integers or some floating point.
According to this list, you could consider using ramp.
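If you would rather keep u64 and detect the overflow instead of avoiding it, here is a sketch using checked_mul (the name checked_factorial is mine):

fn checked_factorial(num: u64) -> Option<u64> {
    // checked_mul returns None on overflow, and try_fold propagates it.
    (1..=num).try_fold(1u64, |acc, n| acc.checked_mul(n))
}

assert_eq!(checked_factorial(20), Some(2_432_902_008_176_640_000));
assert_eq!(checked_factorial(21), None); // 21! overflows a u64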
I think the implementation should look like this
pub fn factorial(num: u128) -> u128 {
    (1..=num).product()
}
Currently I'm using the following code to return a number as a binary (base 2), octal (base 8), or hexadecimal (base 16) string.
fn convert(inp: u32, out: u32, numb: &String) -> Result<String, String> {
    match isize::from_str_radix(numb, inp) {
        Ok(a) => match out {
            2 => Ok(format!("{:b}", a)),
            8 => Ok(format!("{:o}", a)),
            16 => Ok(format!("{:x}", a)),
            10 => Ok(format!("{}", a)),
            0 | 1 => Err(format!("No base lower than 2!")),
            _ => Err(format!("printing in this base is not supported")),
        },
        Err(e) => Err(format!(
            "Could not convert {} to a number in base {}.\n{:?}\n",
            numb, inp, e
        )),
    }
}
Now I want to replace the inner match statement so I can return the number as an arbitrarily based string (e.g. base 3.) Are there any built-in functions to convert a number into any given radix, similar to JavaScript's Number.toString() method?
For now, you cannot do it using the standard library, but you can:
use my crate radix_fmt
or roll your own implementation:
fn format_radix(mut x: u32, radix: u32) -> String {
    let mut result = vec![];

    loop {
        let m = x % radix;
        x = x / radix;

        // will panic if you use a bad radix (< 2 or > 36).
        result.push(std::char::from_digit(m, radix).unwrap());
        if x == 0 {
            break;
        }
    }
    result.into_iter().rev().collect()
}

fn main() {
    assert_eq!(format_radix(1234, 10), "1234");
    assert_eq!(format_radix(1000, 10), "1000");
    assert_eq!(format_radix(0, 10), "0");
}
If you want to eke out a little more performance, you can create a struct and implement Display or Debug for it. This avoids allocating a String. For maximum over-engineering, you can also use a stack-allocated array instead of the Vec.
Here is Boiethios' answer with these changes applied:
struct Radix {
    x: i32,
    radix: u32,
}

impl Radix {
    fn new(x: i32, radix: u32) -> Result<Self, &'static str> {
        if radix < 2 || radix > 36 {
            Err("Unsupported radix")
        } else {
            Ok(Self { x, radix })
        }
    }
}

use std::fmt;

impl fmt::Display for Radix {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        // Good for binary formatting of `u128`s
        let mut result = ['\0'; 128];
        let mut used = 0;
        let negative = self.x < 0;
        // unsigned_abs avoids the overflow that negating i32::MIN would cause
        let mut x = self.x.unsigned_abs();
        loop {
            let m = x % self.radix;
            x /= self.radix;
            result[used] = std::char::from_digit(m, self.radix).unwrap();
            used += 1;
            if x == 0 {
                break;
            }
        }
        if negative {
            write!(f, "-")?;
        }
        for c in result[..used].iter().rev() {
            write!(f, "{}", c)?;
        }
        Ok(())
    }
}

fn main() {
    assert_eq!(Radix::new(1234, 10).unwrap().to_string(), "1234");
    assert_eq!(Radix::new(1000, 10).unwrap().to_string(), "1000");
    assert_eq!(Radix::new(0, 10).unwrap().to_string(), "0");
}
This could still be optimized by:
creating an ASCII array instead of a char array
not zero-initializing the array
Since these avenues require unsafe or an external crate like arraybuf, I have not included them. You can see sample code in internal implementation details of the standard library.
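That said, the ASCII-array idea can be approximated in safe code by writing the digits back-to-front as bytes and viewing the used suffix as UTF-8. A sketch under that assumption (RadixAscii is a hypothetical name; unsigned-only to keep it short):

use std::fmt;

struct RadixAscii {
    x: u32,
    radix: u32,
}

impl fmt::Display for RadixAscii {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        // Fill the buffer from the end: no reversal step is needed.
        let mut buf = [0u8; 128];
        let mut pos = buf.len();
        let mut x = self.x;
        loop {
            pos -= 1;
            buf[pos] = std::char::from_digit(x % self.radix, self.radix).unwrap() as u8;
            x /= self.radix;
            if x == 0 {
                break;
            }
        }
        // Every digit is ASCII, so this conversion cannot fail.
        f.write_str(std::str::from_utf8(&buf[pos..]).unwrap())
    }
}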
Here is an extended solution based on the first comment which does not bind the parameter x to be a u32:
fn format_radix(mut x: u128, radix: u32) -> String {
    let mut result = vec![];

    loop {
        let m = x % radix as u128;
        x = x / radix as u128;

        // will panic if you use a bad radix (< 2 or > 36).
        result.push(std::char::from_digit(m as u32, radix).unwrap());
        if x == 0 {
            break;
        }
    }
    result.into_iter().rev().collect()
}
This is faster than the other answer:
use std::char::from_digit;

fn encode(mut n: u32, r: u32) -> Option<String> {
    let mut s = String::new();

    loop {
        if let Some(c) = from_digit(n % r, r) {
            s.insert(0, c)
        } else {
            return None;
        }

        n /= r;
        if n == 0 {
            break;
        }
    }
    Some(s)
}
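A few quick checks of my own (not from the original answer):

assert_eq!(encode(255, 16).as_deref(), Some("ff"));
assert_eq!(encode(255, 2).as_deref(), Some("11111111"));
assert_eq!(encode(0, 8).as_deref(), Some("0"));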
Note I also tried these, but they were slower:
https://doc.rust-lang.org/std/collections/struct.VecDeque.html#method.push_front
https://doc.rust-lang.org/std/string/struct.String.html#method.push
https://doc.rust-lang.org/std/vec/struct.Vec.html#method.insert
The num crate in Rust provides a way of representing zeros and ones via T::zero() and T::one(). Is there a way of representing other integers, such as two, three, etc.?
Consider the following (artificial) example:
extern crate num;

trait IsTwo {
    fn is_two(self) -> bool;
}

impl<T: num::Integer> IsTwo for T {
    fn is_two(self) -> bool {
        self == (T::one() + T::one())
    }
}
Is there a better way of representing T::one() + T::one() as 2?
One way of representing arbitrary integers in generic code is to use the num::NumCast trait:
impl<T: num::Integer + num::NumCast> IsTwo for T {
    fn is_two(self) -> bool {
        self == T::from(2).unwrap()
    }
}
A related way is to use the num::FromPrimitive trait:
impl<T: num::Integer + num::FromPrimitive> IsTwo for T {
    fn is_two(self) -> bool {
        self == T::from_i32(2).unwrap()
    }
}
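For instance, with either of the impls above in scope (only one at a time, since both are blanket impls):

assert!(2u8.is_two());
assert!(2i64.is_two());
assert!(!3u32.is_two());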
Related questions and answers: [1, 2].
You can write a function:
fn two<T>() -> T
where
    T: num::Integer,
{
    let mut v = T::zero();
    for _ in 0..2 {
        v = v + T::one();
    }
    v
}
I've chosen this form because it's easily made into a macro, which can be reused for any set of values:
num_constant!(two, 2);
num_constant!(forty_two, 42);
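The macro definition isn't shown in the answer; here is a minimal sketch of an expansion that would make those invocations work (this definition is my assumption):

macro_rules! num_constant {
    ($name:ident, $value:expr) => {
        fn $name<T: num::Integer>() -> T {
            // The same loop as in two() above, repeated $value times.
            let mut v = T::zero();
            for _ in 0..$value {
                v = v + T::one();
            }
            v
        }
    };
}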
I hear the concerns now... "but that's a loop and inefficient!". That's what optimizing compilers are for. Here's the LLVM IR for two when compiled in release mode:
; Function Attrs: noinline readnone uwtable
define internal fastcc i32 @_ZN10playground3two17hbef99995c3606e93E() unnamed_addr #3 personality i32 (i32, i32, i64, %"unwind::libunwind::_Unwind_Exception"*, %"unwind::libunwind::_Unwind_Context"*)* @rust_eh_personality {
bb3:
  br label %bb8

bb8:                                              ; preds = %bb3
  ret i32 2
}
That's right - it's been optimized to the value 2. No loops.
It's relatively simple to forge any number from 0 and 1:
you need to create 2, which is hardly difficult
you then proceed in converting your number to base 2, which takes O(log2(N)) operations
The algorithm is dead simple:
fn convert<T: num::Integer + Clone>(n: usize) -> T {
    let two = T::one() + T::one();

    let mut n = n;
    let mut acc = T::one();
    let mut result = T::zero();

    while n > 0 {
        if n % 2 != 0 {
            // Add the current power of two to the result.
            result = result + acc.clone();
        }

        acc = acc * two.clone();
        n /= 2;
    }

    result
}
It will be efficient both in Debug (O(log2(N)) iterations) and Release (the compiler optimizes the loop away completely).
For those who wish to see it in action, here on the playground we can see that convert::<i32>(12345) is optimized to 12345 as expected.
As an exercise to the reader, implement a generic version of convert which takes any Integer parameter for n; there aren't many operations required on n, after all.