Computing Ceil of Log2 in Rust

I want to calculate the following in Rust:
Python equivalent:
math.ceil(math.log(a + 1, 2))
i.e. ceil(log2(a + 1))
I have tried:
a+1.log2()
(a+1).log2()
but I get the error use of unstable library feature 'int_log'. I don't want to use an unstable library feature. What is the easiest way to calculate log2, without any external crates if possible?

Rust is very particular about the difference between integers and floats. You can't write an integer and expect it to be cast to a float automatically; add a . at the end to make it a float literal.
It looks like you called the method on an integer. Try this:
(1.).log2().ceil()
or
(a as f32 + 1.).log2().ceil()
If you want the end result as an integer, you can add as i32 at the end. If you want double-precision floats, replace f32 with f64.
Reference:
https://doc.rust-lang.org/std/primitive.f32.html#method.ceil
https://doc.rust-lang.org/std/primitive.f32.html#method.log2
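Putting those pieces together, a complete hedged version might look like this (the function name is my own, and it assumes a is small enough that the f64 conversion is exact, roughly a < 2^52):

```rust
// Sketch: ceil(log2(a + 1)) via floating-point math.
fn ceil_log2_plus1(a: u64) -> i32 {
    (a as f64 + 1.).log2().ceil() as i32
}

fn main() {
    println!("{}", ceil_log2_plus1(8)); // ceil(log2(9)) -> prints 4
}
```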

If this is time-critical then you can use knowledge of the datatypes to make things fast.
If the input is a positive integer (let's say a u64), then floor(log2(x)) is the index of the highest set bit, which you can determine using u64::leading_zeros. Better still, ceil(log2(x + 1)) is exactly the bit length of x, i.e. u64::BITS - x.leading_zeros(). This gives: (playground)
pub fn main() {
    for a in 0..10u64 {
        let v = (a as f32 + 1.).log2().ceil();
        let w = u64::BITS - a.leading_zeros();
        println!("{} {} {}", a, v, w);
    }
}
outputs
0 0 0
1 1 1
2 2 2
3 2 2
4 3 3
5 3 3
6 3 3
7 3 3
8 4 4
9 4 4
A little more care is needed to handle the sign-bit for signed numbers like i64.
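For non-negative inputs, one hedged way to handle i64 (function name mine) is to check the sign and then cast to u64:

```rust
// Sketch: bit length of a non-negative i64, equal to ceil(log2(a + 1)).
// Assumes the caller never passes a negative value.
fn bits_needed(a: i64) -> u32 {
    assert!(a >= 0, "log2 is undefined for negative inputs");
    u64::BITS - (a as u64).leading_zeros()
}

fn main() {
    assert_eq!(bits_needed(0), 0);
    assert_eq!(bits_needed(8), 4);
    println!("ok");
}
```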
If your number is a floating-point value, there are still some tricks you can play to make things fast, especially if you are willing not to handle NaN, Inf and denormalised numbers.
In particular, for f64 the bit pattern for "normal" numbers is
52 bits of "fraction" (the mantissa), 11 bits of exponent and 1 sign bit. Calling the 11 exponent bits e, the value of the float is (sign)*(1+fraction/2^52)*2^(e-1023).
So log2(x) = log2(1+fraction/2^52) + (e-1023).
The e-1023 part is an integer, and the rest satisfies 0 <= log2(1+fraction/2^52) < 1, with equality to zero when fraction == 0. So ceil(log2(x)) = if fraction==0 {e-1023} else {e-1022}.
This gives us this:
pub fn main() {
    for a in -20..20i64 {
        let f = (a as f64) / 4.0 + 1.0;
        let v = f.log2().ceil();
        let b: u64 = f.to_bits();
        let s = (b >> 63) & 1;
        let e = (b >> 52) & ((1 << 11) - 1);
        let frac = b & ((1 << 52) - 1);
        let z = if frac == 0 { e as i64 - 1023 } else { e as i64 - 1022 };
        println!("{:0.2} {:064b} : {:01b} {:011b} {:052b} {} {}", f, b, s, e, frac, z, v);
    }
}
which outputs (after trimming out the bit representations)
...
-0.75 ... -2 -2
-0.50 ... -1 -1
-0.25 ... 0 -0
0.00 ... 0 0
0.25 ... 1 1
0.50 ... 1 1
0.75 ... 1 1
1.00 ... 1 1
1.25 ... 2 2
1.50 ... 2 2
1.75 ... 2 2
2.00 ... 2 2
...
Again, this only holds when the input is a normal number. You'll need to add conditionals for NaN, infinities, zeros and denormals if your inputs can include them (and you should, unless you're certain they can't).
Similar tricks can be used for u32 and f32 types.
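For f32, the same layout trick applies with 23 fraction bits, 8 exponent bits and a bias of 127. A sketch of my own, again only valid for normal, positive numbers:

```rust
// ceil(log2(x)) for a normal, positive f32 via its bit pattern.
fn ceil_log2_f32(x: f32) -> i32 {
    let b = x.to_bits();
    let e = ((b >> 23) & 0xff) as i32; // 8 exponent bits, bias 127
    let frac = b & ((1 << 23) - 1);    // 23 fraction bits
    if frac == 0 { e - 127 } else { e - 126 }
}

fn main() {
    assert_eq!(ceil_log2_f32(8.0), 3);
    assert_eq!(ceil_log2_f32(0.25), -2);
    assert_eq!(ceil_log2_f32(1.5), 1); // ceil(0.585...) == 1
    println!("ok");
}
```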

Related

Multiply numbers from two iterators in order and without duplicates

I have this code and I want every combination to be multiplied:
fn main() {
    let min = 1;
    let max = 9;
    for i in (min..=max).rev() {
        for j in (min..=max).rev() {
            println!("{}", i * j);
        }
    }
}
Result is something like:
81
72
[...]
9
72
65
[...]
8
6
4
2
9
8
7
6
5
4
3
2
1
Is there a clever way to produce the results in descending order (without collecting and sorting) and without duplicates?
Note that this answer provides a solution for this specific problem (multiplication table) but the title asks a more general question (any two iterators).
The naive solution of storing all elements in a vector and then sorting it uses O(n^2 log n) time and O(n^2) space (where n is the size of the multiplication table).
You can use a priority queue to reduce the memory to O(n):
use std::collections::BinaryHeap;

fn main() {
    let n = 9;
    let mut heap = BinaryHeap::new();
    for j in 1..=n {
        heap.push((n * j, j));
    }
    let mut last = n * n + 1;
    while let Some((val, j)) = heap.pop() {
        if val < last {
            println!("{val}");
            last = val;
        }
        if val > j {
            heap.push((val - j, j));
        }
    }
}
playground.
The conceptual idea behind the algorithm is to consider 9 separate sequences
9*9, 9*8, 9*7, .., 9*1
8*9, 8*8, 8*7, .., 8*1
...
1*9, 1*8, 1*7, .., 1*1
Since they are all decreasing, at a given moment, we only need to consider one element of each sequence (the largest one we haven't reached yet).
These are inserted into the priority queue which allows us to efficiently find the maximum one.
Once we have printed a given element we move onto the next one in the sequence and insert that into the priority queue.
By keeping track of the last element printed we can avoid duplicates.
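The same idea is not limited to the 9x9 table. A hedged sketch (function name and signature are my own) that merges the products of any two descending slices in one pass:

```rust
use std::collections::BinaryHeap;

// Yields the distinct products x * y for x in xs, y in ys, in descending
// order, assuming both input slices are sorted in descending order.
// Memory use is O(xs.len()) instead of O(xs.len() * ys.len()).
fn descending_products(xs: &[u64], ys: &[u64]) -> Vec<u64> {
    let mut heap = BinaryHeap::new();
    // One frontier entry per row: (product, row index, column index).
    for (row, &x) in xs.iter().enumerate() {
        if let Some(&y) = ys.first() {
            heap.push((x * y, row, 0usize));
        }
    }
    let mut out = Vec::new();
    while let Some((val, row, col)) = heap.pop() {
        // The output is non-increasing, so duplicates are always adjacent.
        if out.last() != Some(&val) {
            out.push(val);
        }
        if col + 1 < ys.len() {
            heap.push((xs[row] * ys[col + 1], row, col + 1));
        }
    }
    out
}

fn main() {
    assert_eq!(
        descending_products(&[3, 2, 1], &[3, 2, 1]),
        vec![9, 6, 4, 3, 2, 1]
    );
    println!("ok");
}
```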

How to linearly scale an integer into 100 parts, even if it is below 100?

How can I split a single integer into 100 linearly scaled parts?
The last (and largest) part should be the input value itself.
The code is what I have come up with so far:
fn main() {
    let max_value: i32 = 6543;
    let part = max_value.checked_div(100).unwrap();
    for no in 1..=100 {
        let num = if no == 100 { max_value } else { part * no };
        println!("{}", num);
    }
}
This works well if max_value is 100 or larger:
65
130
195
260
325
390
455
[…]
6240
6305
6370
6435
6543
But for max_value smaller than 100 it doesn't work at all:
let max_value: i32 = 90;
0
0
0
0
[…]
0
0
0
0
90
How can I do this properly?
Here is an approach with the following characteristics:
- Pure integer arithmetic.
- No overflow as long as 2 * d fits in the integer type you are using (here u32).
- O(1) integer divisions.
- Steps are as evenly spaced as possible - the result is the same as using n * i / d for i in 1..=d, but with only integer additions in each loop iteration, and without the problems of integer overflow.
fn divide_integer(n: u32, d: u32) -> impl Iterator<Item = u32> {
    let step = n / d;
    let rem_step = n % d;
    (0..d).scan((0, 0), move |(current, rem), _| {
        *current += step;
        *rem += rem_step;
        if *rem >= d {
            *rem -= d;
            *current += 1;
        }
        Some(*current)
    })
}
This increases the current value current by n / d in each iteration, but also keeps track of the remainder rem. The increment rem_step is always less than d, so rem stays below 2 * d between adjustments and cannot overflow as long as 2 * d fits in a u32. The value of current is less than or equal to n at all times, so current can never overflow.
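For example, applying it to the asker's failing max_value = 90 case (the function is repeated here so the snippet compiles on its own):

```rust
fn divide_integer(n: u32, d: u32) -> impl Iterator<Item = u32> {
    let step = n / d;
    let rem_step = n % d;
    (0..d).scan((0, 0), move |(current, rem), _| {
        *current += step;
        *rem += rem_step;
        if *rem >= d {
            *rem -= d;
            *current += 1;
        }
        Some(*current)
    })
}

fn main() {
    let parts: Vec<u32> = divide_integer(90, 100).collect();
    assert_eq!(parts.len(), 100);
    assert_eq!(parts.last(), Some(&90));
    // Each value equals n * i / d; e.g. the 50th part is 90 * 50 / 100 = 45.
    assert_eq!(parts[49], 45);
    println!("ok");
}
```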

Format number by rounding up

I have a number which I want to print with a fixed precision, rounded up. I know I can use {:.3} to truncate it.
assert_eq!("0.012", format!("{:.3}", 0.0123456))
Is there a simple way to "ceil" it instead?
assert_eq!("0.0124", format!("{:magic}", 0.012301))
assert_eq!("0.0124", format!("{:magic}", 0.012399))
assert_eq!("0.0124", format!("{:magic}", 0.0124))
I can do something like
let x = format!("{:.3}", (((y * 1000.0).ceil() + 0.5) as i64) as f64 / 1000.0)
which is pretty unreadable. It also gives me 3 digits after the decimal point, not three digits of precision, so I'd need to figure out how to scale the number first, probably with something like -log10(y) as i64.
In case it's not clear, I want a string to show the user, not an f64.
More examples
assert_eq!("1.24e-42", format!("{:magic}", 1.234e-42))
assert_eq!("1240", format!("{:magic}", 1234.5)) // "1240." also works
If the f64 representing 0.123 is slightly larger than the real number 0.123, displaying "0.124" is acceptable.
The two requirements are:
The string, when converted back to an f64, is greater than or equal to the original f64 (so 0.123 -> "0.124" is acceptable)
The string has 3 significant digits (although dropping trailing zeros is acceptable, so 0.5 -> "0.5" and 0.5 -> "0.500" both work)
In case it comes up, the input number will always be positive.
This is harder than it seems because there is no way to tell the formatting machinery to change the rounding strategy. Also, format precision works on the number of digits after the decimal point, not on the number of significant digits. (AFAIK there is no equivalent to the printf("%.3g", n), and even if there were, it wouldn't round up.)
You can use a decimal arithmetic crate such as rust_decimal to do the heavy lifting - something like:
use rust_decimal::prelude::*;

pub fn fmtup(n: f64, ndigits: u32) -> String {
    let d = Decimal::from_f64_retain(n).unwrap();
    d.round_sf_with_strategy(ndigits, RoundingStrategy::AwayFromZero)
        .unwrap()
        .normalize()
        .to_string()
}
EDIT: The answer originally included a manual implementation of the rounding due to issues in rust_decimal which have since been fixed. As of Oct 24 2021 the above snippet using rust_decimal is the recommended solution. The only exception is if you need to handle numbers that are very large or very close to zero (such as 1.234e-42 or 1.234e42), which are approximated to zero or rejected by rust_decimal.
To manually round to significant digits, one can scale the number until it has the desired number of digits before the decimal point, and then round it up. In case of 3 digits, scaling would multiply or divide it by 10 until it falls between 100 and 1000. After rounding the number, format the resulting whole number as string, and insert the . at the position determined by the amount of scaling done in the first step.
To avoid inexactness of floating-point division by ten, the number can be first converted to a fraction, and then all operations can proceed on the fraction. Here is an implementation that uses the ubiquitous num crate to provide fractions:
use num::{rational::BigRational, FromPrimitive};

/// Format `n` to `ndigits` significant digits, rounding away from zero.
pub fn fmtup(n: f64, ndigits: i32) -> String {
    // Pass 0 (which we can't scale), infinities and NaN to f64::to_string()
    if n == 0.0 || !n.is_finite() {
        return n.to_string();
    }
    // Handle negative numbers the easy way.
    if n < 0.0 {
        return format!("-{}", fmtup(-n, ndigits));
    }
    // Convert the input to a fraction. From this point onward, we are only
    // doing exact arithmetic.
    let mut n = BigRational::from_float(n).unwrap();
    // Scale N so its whole part is ndigits long, meaning truncating it will
    // result in an integer ndigits long. If ndigits is 3, we'd want N to be
    // in [100, 1000) range, so that e.g. 0.012345 would be scaled to 123.45,
    // and then rounded up to 124.
    let mut scale = 0i16;
    let ten = BigRational::from_u8(10).unwrap();
    let lower_bound = ten.pow(ndigits - 1);
    if n < lower_bound {
        while n < lower_bound {
            n *= &ten;
            scale -= 1;
        }
    } else {
        let upper_bound = lower_bound * &ten;
        while n >= upper_bound {
            n /= &ten;
            scale += 1;
        }
    }
    // Round N up.
    n = n.ceil();
    // Format the number as integer and place the decimal point at the right
    // position: multiply N by 10**scale, i.e. append zeros if SCALE is
    // positive, otherwise insert the point inside or before the number.
    let mut s = n.to_string();
    if scale > 0 {
        s.extend(std::iter::repeat('0').take(scale as _));
    } else if scale < 0 {
        // Find where to place the decimal point in the string.
        let point_pos = s.len() as i16 + scale;
        if point_pos <= 0 {
            // Negative position means before the beginning of the string, so
            // we have to pad with zeros. E.g. s == "123" and point_pos == -2
            // means we want "0.00123", and with point_pos == 0 we'd want
            // "0.123".
            let mut pad = "0.".to_string();
            pad.extend(std::iter::repeat('0').take(-point_pos as _));
            pad.push_str(&s);
            s = pad;
            // Trim trailing zeros after the decimal point. E.g. 0.25 gets
            // scaled to 250 and then ends up "0.250".
            s.truncate(s.trim_end_matches('0').len());
        } else {
            // Insert the decimal point in the middle of the string. E.g.
            // s == "123" and point_pos == 1 would result in "1.23".
            let point_pos = point_pos as usize;
            if s.as_bytes()[point_pos..].iter().all(|&digit| digit == b'0') {
                // If only zeros are after the decimal point, e.g. "10.000",
                // omit those digits instead of placing the decimal point.
                s.truncate(point_pos);
            } else {
                s.insert(point_pos, '.');
            }
        }
    }
    s
}
Playground
Here are some test cases:
fn main() {
    let fmt3up = |n| fmtup(n, 3);
    assert_eq!("12400", fmt3up(12301.));
    assert_eq!("1240", fmt3up(1234.5));
    assert_eq!("124", fmt3up(123.01));
    assert_eq!("1000", fmt3up(1000.));
    assert_eq!("999", fmt3up(999.));
    assert_eq!("1010", fmt3up(1001.));
    assert_eq!("100", fmt3up(100.));
    assert_eq!("10", fmt3up(10.));
    assert_eq!("99", fmt3up(99.));
    assert_eq!("101", fmt3up(101.));
    assert_eq!("0.25", fmt3up(0.25));
    assert_eq!("0.0124", fmt3up(0.0123)); // because 0.0123 is slightly above 123/10_000
    assert_eq!("0.0124", fmt3up(0.012301));
    assert_eq!("0.00124", fmt3up(0.0012301));
    assert_eq!("0.0124", fmt3up(0.012399));
    assert_eq!("0.0124", fmt3up(0.0124));
    assert_eq!("0.124", fmt3up(0.12301));
    assert_eq!("1.24", fmt3up(1.2301));
    assert_eq!("1.24", fmt3up(1.234));
}
Note that this will display 1.234e-42 as 0.00000000000000000000000000000000000000000124, but an improvement to switch to exponential notation for very small or very large numbers should be fairly straightforward.

Changing Rabin-Karp (without modulo operator) to Rabin-Karp algorithm (with modulo operator) for string matching

I am trying to solve a string matching problem using the Rabin-Karp algorithm.
I used Horner's method for calculating the hash function, but I forgot to use the modulo operator. It currently looks like this
(this is for the first pattern-length characters of the big string):
for (i = 0; i < l1; i++)
{
    unsigned long long int k2 = *(s1 + i);
    p1 += k2 * k1;
    k1 = (k1 * 31);
}
where s1 is a string containing characters, and the hash is
s1[0]*(k1^0) + s1[1]*(k1^1) and so on...
and I did the same for the pattern we need to find:
unsigned long long int j;
for (j = 0; j < l1; j++)
{
    unsigned long long int k3 = *(str + j);
    p2 += k3 * k4;
    k4 = (k4 * 31);
}
Now I am going through substrings of pattern length in the big string.
The code for that is:
long long int ll1 = strlen(s1), ll2 = strlen(str);
for (j = 1; j <= ll2; j++)
{
    printf("p1 and p2 are %llu and %llu\n", p1, p2);
    if (p2 == p1)
    {
        r1 = 1;
        break;
    }
    long int w1 = *(str + j - 1);
    p2 -= w1;
    p2 = p2 / 31;
    long int lp = *(str + j + l1 - 1);
    p2 += ((lp * vp));
}
if (r1 == 0)
{
    printf("n\n");
}
else
{
    printf("y\n");
}
where str is the big string,s1 is pattern string.
I tested multiple inputs and I am getting correct answers for all of them, but it's taking a lot of time. I then realized it's because of the huge numbers involved when the pattern string is long; if we use a modulo operator we can keep the numbers small. My question is: how do I incorporate the modulo operator into this code while searching for patterns?
My entire code:
http://ideone.com/81hOiU
Please help me out with this; I tried searching the net but could not find help.
Thanks in advance!
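The usual fix is to keep every hash value reduced modulo a large prime, and to roll the window from the high-order end, so that you multiply by the base instead of dividing by it (division does not play well with modular arithmetic). A hedged sketch, written in Rust to match the rest of the page, with my own names and constants rather than the asker's:

```rust
// Rabin-Karp with a modular rolling hash. All intermediate values stay
// well within u64 range because every result is reduced mod a large prime.
const MOD: u64 = 1_000_000_007;
const BASE: u64 = 31;

fn rabin_karp(text: &[u8], pat: &[u8]) -> Option<usize> {
    let (n, m) = (text.len(), pat.len());
    if m == 0 || m > n {
        return None;
    }
    // BASE^(m-1) mod MOD: the weight of the window's leading character.
    let mut high = 1u64;
    for _ in 1..m {
        high = high * BASE % MOD;
    }
    // Horner's method, high-order character first.
    let hash = |s: &[u8]| s.iter().fold(0u64, |h, &c| (h * BASE + c as u64) % MOD);
    let p_hash = hash(pat);
    let mut t_hash = hash(&text[..m]);
    for i in 0..=n - m {
        // On a hash match, compare the actual bytes to rule out collisions.
        if t_hash == p_hash && &text[i..i + m] == pat {
            return Some(i);
        }
        if i + m < n {
            // Roll: drop text[i] (leading char), shift left, add text[i + m].
            t_hash = ((t_hash + MOD - text[i] as u64 * high % MOD) * BASE
                + text[i + m] as u64) % MOD;
        }
    }
    None
}

fn main() {
    assert_eq!(rabin_karp(b"hello world", b"world"), Some(6));
    assert_eq!(rabin_karp(b"aaaa", b"ab"), None);
    println!("ok");
}
```

Note this orients the hash the opposite way from the asker's low-order-first version; both work, but only the high-order-first form can be rolled with pure multiplications under a modulus.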

What's the probability that X *consecutive* bits in an array of N bits are set to 1?

I'm trying to code a simple, sufficiently accurate filter for validating a piece of hardware in an RTL simulation. We're simulating the randomness inherent in a chip's flip-flops, by randomly initializing all the flip-flops in the design to either 0 or 1. This corresponds to the chip's flip-flops getting some random value during power-up. We're also randomizing the flops in the reset tree ( where reset tree has no feedback loops ), which means that you can get false glitching on your reset lines.
e.g.
|||
VVV Nth reset-tree flop
+----+ +----+ +----+ / / +----+
reset_in | | 0 | | 1 | | 0 / / | | reset_out
-------->D Q>----->D Q>----->D Q>---- / ... / -->D Q>----
| | | | | | \ \ | |
| | | | | | \ \ | |
+^---+ +^---+ +^---+ / / +^---+
| | | / / |
clk ------+------------+------------+---------/ / ---+
You'll see a 0->1->0 which looks like a reset, but is really a glitch.
I want to build a filter that looks for a certain number of consecutive 1 values to determine whether the reset I just saw was the reset coming from the reset controller or a spurious reset.
I know this is statistics and maybe related to the Poisson distribution, but how do I determine the probability that any X consecutive bits in a set of N bits are 1?
P.S. Yes. I am aware of 4-val RTL simulation. We're doing that also, but some Verilog constructs don't have sufficient pessimism when propagating X's and Z's.
EDIT: The below doesn't answer the question, sorry... A comment clarified that the real problem is about the probability of x consecutive 1s out of n bits, not the simpler thing I assumed.
Had a quick look at this: http://www.mathhelpforum.com/math-help/probability-statistics/64519-probability-consecutive-wins.html which may be what you are looking for - it seems to deal with working out the probability of a run of coin tosses out of a larger population of coin tosses, so it sounds similar. But it's late and I am tired so I haven't decoded the math :)
OBSOLETE:
It sounds like you are basically dealing with binomial probability - see http://en.wikipedia.org/wiki/Binomial_probability.
I have to admit I haven't done the calculations for about 20 years, so I'm somewhat rusty...
Basically, binomial probability allows you to "add together" the probability of an event occurring multiple times, where there are only two possible outcomes each time.
Order is significant in your case, so it should be as simple as multiplying the probabilities:
For 1 bit it is 50%
For 2 bits it is 50%^2 = 25%
For 3 bits it is 50%^3 = 12.5%
Look at it another way:
1 bit only has 2 possible combinations, one of which is all 1s = 50%
2 bits have 4 possible combinations (10, 01, 11, 00), only one of which is all 1s - so 25%
3 bits have 2^3 = 8 possible combinations, only one of which is all 1s, so 1/8 = 12.5%
So... probability of n bits all being 1 = 1/(2^n).
If you want a quick test to see if a sequence of bits is random based on the longest streak of 1's, you can use the fact that the expected longest streak of 1's in N bits is Θ(log(N)).
Furthermore, the probability that the longest streak exceeds r*log₂(N) bits is at most 1/N^(r-1), and similarly the probability that the longest streak is less than log₂(N)/r bits is at most 1/N^(r-1).
These results are derived in the section on "Streaks" in the chapter on "Counting and Probability" in Introduction to Algorithms
OK, here's what I found:
P = 1 - Q(X)
where
Q(X) = [1 - 1/2(Z)]/[(X + 1 - XZ) x 1/2 x Z^(X+1)]
where
Z = 1 + (1/2)(1/2)^X + (X+1)[(1/2)(1/2)^X]^2 + ...
The link with some of the math is here:
Math Forum
You can do it with a recursive program (Python); prob(x, n) gives your desired result:
def prob(x, n, i=0):
    # i = length of the current run of 1s seen so far
    if i == x:
        return 1
    if x - i > n:
        return 0  # not enough bits left to complete the run
    # next bit is 1 (extends the run) or 0 (resets it), each with p = .5
    return .5 * prob(x, n - 1, i + 1) + .5 * prob(x, n - 1, 0)
My approach to this would be to define an FSA that accepts bit patterns of the correct type, and then simulate it for each number of bits, i.e.:
State state_map[] = {
    0 => { 0 -> 0; 1 -> 1; accepts = false },
    1 => { 0 -> 0; 1 -> 2; accepts = false },
    2 => { 0 -> 0; 1 -> 3; accepts = false },
    3 => { 0 -> 3; 1 -> 3; accepts = true }
};

state[t: 0, s: 0] = 1.0;
state[t: 0, s: 1] = 0.0;
state[t: 0, s: 2] = 0.0;
state[t: 0, s: 3] = 0.0;

for (t = 0; t < N; t++)
    for (s = 0; s < NUM_STATES; s++)
        state[t: t+1, s: state_map[s].0] += state[t, s] * .5
        state[t: t+1, s: state_map[s].1] += state[t, s] * .5

print "Probability: {0}", state[t: N, s: 3]
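A runnable version of that automaton simulation, sketched in Rust to match the rest of the page (the state is just the current run length, saturating at x, and function names are my own):

```rust
// Probability that n random bits contain a run of at least x consecutive 1s,
// by evolving the state distribution of the automaton above (state = current
// run length, saturating at x; state x is accepting and absorbing).
fn prob_run(x: usize, n: usize) -> f64 {
    let mut p = vec![0.0f64; x + 1];
    p[0] = 1.0;
    for _ in 0..n {
        let mut next = vec![0.0f64; x + 1];
        for s in 0..=x {
            if s == x {
                next[x] += p[s]; // already accepted, stay accepted
            } else {
                next[0] += 0.5 * p[s];     // bit = 0 resets the run
                next[s + 1] += 0.5 * p[s]; // bit = 1 extends the run
            }
        }
        p = next;
    }
    p[x]
}

fn main() {
    // 3 random bits contain "11" in 3 of the 8 patterns: 011, 110, 111.
    assert!((prob_run(2, 3) - 0.375).abs() < 1e-12);
    println!("ok");
}
```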
