So I'm trying to get a random number, but I'd rather have it come back as int than as uint. I'm not sure this match is right either, but the compiler never gets that far, because it has never heard of this from_uint thing I'm trying to use:
fn get_random(max: &int) -> int {
    // Here we use * to dereference max, i.e. we access the value at the
    // pointer location rather than doing math on the pointer itself
    match int::from_uint(rand::random::<uint>() % *max + 1) {
        Some(n) => n,
        None => 0,
    }
}
from_uint is not in the std::int namespace, but in std::num: http://doc.rust-lang.org/std/num/fn.from_uint.html
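For reference, a rough sketch of what that call might look like on the pre-1.0 Rust the question targets. This is untested against that era's toolchain; it assumes from_uint keeps the Option-returning signature from the linked docs, and it adds a *max as uint cast so the modulo type-checks:
fn get_random(max: &int) -> int {
    // std::num::from_uint, not int::from_uint (assumed pre-1.0 API per the linked docs)
    match std::num::from_uint(rand::random::<uint>() % (*max as uint) + 1) {
        Some(n) => n,
        None => 0,
    }
}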
Original answer:
Cast a u32 to int with as. If you cast uint or u64 to int, you risk overflowing into the negatives (assuming you are on 64 bit). From the docs:
The size of a uint is equivalent to the size of a pointer on the particular architecture in question.
This works:
use std::rand;
fn main() {
let max = 42i;
println!("{}" , get_random(&max));
}
fn get_random(max: &int) -> int {
(rand::random::<u32>() as int) % (*max + 1)
}
Related
I have a u16 which I use to hold a 9-bit bitmask and I want to find out how many 1s it contains.
I found this algorithm, but I have no clue how or why it works:
/* count number of 1's in 9-bit argument (Schroeppel) */
unsigned count_ones(unsigned36 a) {
    return ((a * 01001001001)  /* 4 adjacent copies */
            & 042104210421)    /* every 4th bit */
           % 15;               /* casting out 15.'s in hexadecimal */
}
How can I turn this into a Rust function? This is what I've tried, but it doesn't work:
fn main() {
    let a: u16 = 0b101_100_000;
    println!("Ones in {:b}: {}", a, num_of_ones(a));
}

fn num_of_ones(quantity: u16) -> u8 {
    (((quantity as u64 * 01_001_001_001) & 042_104_210_421) % 15) as u8
}
A leading zero in C denotes an octal literal. Rust octals start with 0o, like the 0b you already use:
(((quantity as u64 * 0o01_001_001_001) & 0o042_104_210_421) % 15) as u8
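Dropped back into the question's program, the fix really is just the two 0o prefixes; a quick sketch, which should print 3 for 0b101_100_000:
fn main() {
    let a: u16 = 0b101_100_000;
    println!("Ones in {:b}: {}", a, num_of_ones(a));
}

// Same HAKMEM trick as the C version, written with Rust octal literals
fn num_of_ones(quantity: u16) -> u8 {
    (((quantity as u64 * 0o01_001_001_001) & 0o042_104_210_421) % 15) as u8
}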
However, there's no need for any of this, because counting ones is built in as u16::count_ones:
println!("Ones in {:b}: {}", a, a.count_ones());
See also:
How do you set, clear and toggle a single bit in Rust?
Let's say I have the value 1025 as a byte array and the value 1030 as usize. How would I go about comparing whether the byte array is bigger, lesser or equal without deserializing it?
I'm completely stuck. I assume the easiest way is to get the biggest byte of the byte array and its position, then bit-shift the usize and see if any bits in that byte are set; if not, the byte array is bigger.
In short, I want to write some functions to be able to decide if a > b, a < b and a == b.
To use a code example:
fn is_greater(a: &[u8], b: usize) -> bool {
    // a is LE, so reverse and get the largest bytes
    let c = a.iter()
        .enumerate()
        .rev()
        .filter_map(|(i, byte)| if *byte != 0 { Some((i, *byte)) } else { None })
        .collect::<Vec<(usize, u8)>>();
    for (i, be) in c {
        let k = (b >> (i * 8)) & 255;
        println!("{}, {}", be, k);
        return be as usize > k;
    }
    false
}
EDIT: I should have clarified: the byte array can hold any integer, unsigned integer or float; simply any number bincode::serialize can serialize.
I also had in mind to avoid converting the byte array; the comparison is supposed to run on hundreds of thousands of byte arrays, so I assume bit operations are the preferred way.
No need for all those extra steps. The basic problem is to know whether the integer encoded in the byte array is little endian, big endian or native endian. Knowing that, you can use usize::from_??_bytes to convert a fixed-size array to an integer, and the TryFrom trait to get the fixed-size array from the slice.
fn is_greater(b: &[u8], v: usize) -> Result<bool, std::array::TryFromSliceError> {
    use std::convert::TryFrom;
    Ok(usize::from_le_bytes(<[u8; 8]>::try_from(b)?) > v)
}
This function will return an error if the byte slice is not exactly 8 bytes long; if it is shorter, there is simply no way to fill a usize from it. You can also convert to u32 or even u16, upcast that to usize and then do the comparison. Also notice that this example uses from_le_bytes, assuming the byte slice contains an integer encoded as little endian.
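If the serialized integers can be narrower than 8 bytes, one possible fallback is to dispatch on the slice length, upcast to usize, and then compare. The sketch below assumes little-endian encoding, a 64-bit target (as in the code above), and that only 2-, 4- or 8-byte integers occur; the function name is made up for illustration:
use std::convert::TryFrom;

// Sketch: pick the integer width from the slice length, upcast, then compare.
// Returns None for slice lengths this sketch does not handle.
fn is_greater_any_width(b: &[u8], v: usize) -> Option<bool> {
    let n = match b.len() {
        2 => u16::from_le_bytes(<[u8; 2]>::try_from(b).ok()?) as usize,
        4 => u32::from_le_bytes(<[u8; 4]>::try_from(b).ok()?) as usize,
        8 => usize::from_le_bytes(<[u8; 8]>::try_from(b).ok()?),
        _ => return None,
    };
    Some(n > v)
}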
I'd like to define a function that can return a number whose type is specified when the function is called. The function takes a buffer (Vec<u8>) and returns numeric value, e.g.
let byte = buf_to_num::<u8>(&buf);
let integer = buf_to_num::<u32>(&buf);
The buffer contains an ASCII string that represents a number, e.g. b"827", where each byte is the ASCII code of a digit.
This is my non-working code:
extern crate num;

use num::Integer;
use std::ops::{MulAssign, AddAssign};

fn buf_to_num<T: Integer + MulAssign + AddAssign>(buf: &Vec<u8>) -> T {
    let mut result: T;
    for byte in buf {
        result *= 10;
        result += (byte - b'0');
    }
    result
}
I get mismatched type errors for both the addition and the multiplication lines (expected type T, found u32). So I guess my problem is how to tell the type system that T can be expressed in terms of a literal 10 or in terms of the result of (byte - b'0')?
Welcome to the joys of having to specify every single operation you're using as a generic bound. It's a pain, but it is worth it.
You have two problems:
result *= 10; has no usable conversion for the literal. When you write 10, there is no way for the compiler to know what 10 means as a T: it only knows primitive types, plus any conversions you have defined by implementing From<_> traits.
You're mixing up two operations: the conversion from a vector of ASCII characters to an integer, and your arithmetic itself.
We need to make two assumptions for this:
We will require From<u32>, so T can be built from u32 values (which limits T to types at least as wide as u32)
We will also clarify your logic and convert each u8 to char so we can use to_digit() to convert that to u32, before making use of From<u32> to get a T.
use std::ops::{MulAssign, AddAssign};

fn parse_to_i<T: From<u32> + MulAssign + AddAssign>(buf: &[u8]) -> T {
    let mut buffer: T = (0 as u32).into();
    for o in buf {
        buffer *= 10.into();
        buffer += (*o as char).to_digit(10).unwrap_or(0).into();
    }
    buffer
}
You can convince yourself of its behavior on the playground
The multiplication is resolved by converting the constant with .into(), which relies on our From<u32> requirement on T and lets the Rust compiler know we're not doing silly stuff.
The final change is to initialize buffer to 0 (via (0 as u32).into()) instead of leaving it uninitialized.
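For example, a call could look like the sketch below, which builds on the parse_to_i above; note that the From<u32> bound means the target type has to be at least as wide as u32, so the u8 case from the question would not compile:
fn main() {
    let buf = b"827".to_vec();
    // An annotated binding or a turbofish picks the output type
    let as_u32: u32 = parse_to_i(&buf);
    let as_u64 = parse_to_i::<u64>(&buf);
    println!("{} {}", as_u32, as_u64); // 827 827
}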
Let me know if this makes sense to you (or if it doesn't), and I'll be glad to elaborate further if there is a problem :-)
In Java, intValue() gives back a truncated portion of the BigInteger instance. I wrote a similar program in Rust but it appears not to truncate:
extern crate num;

use num::bigint::{BigInt, RandBigInt};
use num::ToPrimitive;

fn main() {
    println!("Hello, world!");
    truncate_num(
        BigInt::parse_bytes(b"423445324324324324234324", 10).unwrap(),
        BigInt::parse_bytes(b"22447", 10).unwrap(),
    );
}

fn truncate_num(num1: BigInt, num2: BigInt) -> i32 {
    println!("Truncation of {} is {:?}.", num1, num1.to_i32());
    println!("Truncation of {} is {:?}.", num2, num2.to_i32());
    return 0;
}
The output I get from this is
Hello, world!
Truncation of 423445324324324324234324 is None.
Truncation of 22447 is Some(22447).
How can I achieve this in Rust? Should I try a conversion to String and then truncate manually? This would be my last resort.
Java's intValue() returns the lowest 32 bits of the integer. This could be done by a bitwise-AND operation x & 0xffffffff. A BigInt in Rust doesn't support bitwise manipulation, but you could first convert it to a BigUint which supports such operations.
fn truncate_biguint_to_u32(a: &BigUint) -> u32 {
    use std::u32;
    let mask = BigUint::from(u32::MAX);
    (a & mask).to_u32().unwrap()
}
Converting BigInt to BigUint will be successful only when it is not negative. If the BigInt is negative (-x), we could find the lowest 32 bits of its absolute value (x), then negate the result.
fn truncate_bigint_to_u32(a: &BigInt) -> u32 {
    use num_traits::Signed;
    let was_negative = a.is_negative();
    let abs = a.abs().to_biguint().unwrap();
    let truncated = truncate_biguint_to_u32(&abs);
    if was_negative {
        truncated.wrapping_neg()
    } else {
        truncated
    }
}
Demo
You may use truncate_bigint_to_u32(a) as i32 if you need a signed number.
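That is, a tiny wrapper along these lines (a sketch reusing truncate_bigint_to_u32 from above):
fn truncate_bigint_to_i32(a: &BigInt) -> i32 {
    // Reinterpret the low 32 bits as signed, like Java's intValue()
    truncate_bigint_to_u32(a) as i32
}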
There is also a to_signed_bytes_le() method with which you could extract the bytes and decode that into a primitive integer directly:
fn truncate_bigint_to_u32_slow(a: &BigInt) -> u32 {
    let mut bytes = a.to_signed_bytes_le();
    bytes.resize(4, 0);
    bytes[0] as u32 | (bytes[1] as u32) << 8 | (bytes[2] as u32) << 16 | (bytes[3] as u32) << 24
}
This method is extremely slow compared to the above methods and I don't recommend using it.
There's no natural truncation of a big integer into a smaller one. Either it fits or you have to decide what value you want.
You could do this:
println!("Truncation of {} is {:?}.", num1, num1.to_i32().unwrap_or(-1));
or
println!("Truncation of {} is {:?}.", num1, num1.to_i32().unwrap_or(std::i32::MAX));
but your application logic should probably dictate what's the desired behavior when the returned option contains no value.
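If neither fallback constant is right for your case, matching on the Option spells the two outcomes out explicitly; a small sketch using num1 from the question:
match num1.to_i32() {
    Some(v) => println!("{} fits in an i32: {}", num1, v),
    None => println!("{} does not fit in an i32", num1),
}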
I am trying to find the sum of the digits of a given number. For example, 134 will give 8.
My plan is to convert the number into a string using .to_string() and then use .chars() to iterate over the digits as characters. Then I want to convert every char in the iteration into an integer and add it to a variable. I want to get the final value of this variable.
I tried using the code below to convert a char into an integer:
fn main() {
    let x = "123";
    for y in x.chars() {
        let z = y.parse::<i32>().unwrap();
        println!("{}", z + 1);
    }
}
(Playground)
But it results in this error:
error[E0599]: no method named `parse` found for type `char` in the current scope
 --> src/main.rs:4:19
  |
4 |         let z = y.parse::<i32>().unwrap();
  |                   ^^^^^
This code does exactly what I want to do, but first I have to convert each char into a string and then into an integer before incrementing sum by z.
fn main() {
    let mut sum = 0;
    let x = 123;
    let x = x.to_string();
    for y in x.chars() {
        // converting `y` to a string and then to an integer
        let z = (y.to_string()).parse::<i32>().unwrap();
        // incrementing `sum` by `z`
        sum += z;
    }
    println!("{}", sum);
}
(Playground)
The method you need is char::to_digit. It converts a char to the number it represents in the given radix.
You can also use Iterator::sum to calculate sum of a sequence conveniently:
fn main() {
    const RADIX: u32 = 10;
    let x = "134";
    println!("{}", x.chars().map(|c| c.to_digit(RADIX).unwrap()).sum::<u32>());
}
my_char as u32 - '0' as u32
Now, there's a lot more to unpack about this answer.
It works because ASCII (and thus UTF-8) encodes the Arabic numerals 0-9 consecutively in ascending order, so you can take the scalar values and subtract them.
However, what should it do for values outside this range? What happens if you provide 'p'? It returns 64. What about '.'? This will panic. And '♥' will return 9781.
Strings are not just bags of bytes. They are UTF-8 encoded and you cannot just ignore that fact. Every char can hold any Unicode scalar value.
That's why strings are the wrong abstraction for the problem.
From an efficiency perspective, allocating a string seems inefficient. Rosetta Code has an example of using an iterator which only does numeric operations:
struct DigitIter(usize, usize);

impl Iterator for DigitIter {
    type Item = usize;
    fn next(&mut self) -> Option<Self::Item> {
        if self.0 == 0 {
            None
        } else {
            let ret = self.0 % self.1;
            self.0 /= self.1;
            Some(ret)
        }
    }
}

fn main() {
    println!("{}", DigitIter(1234, 10).sum::<usize>());
}
If c is your character, you can just write:
c as i32 - 0x30;
Test with:
let c: char = '2';
let n: i32 = c as i32 - 0x30;
println!("{}", n);
output:
2
NB: 0x30 is '0' in the ASCII table, which is easy enough to remember!
Another way is to iterate over the characters of your string and convert and add them using fold.
fn sum_of_string(s: &str) -> u32 {
    s.chars().fold(0, |acc, c| c.to_digit(10).unwrap_or(0) + acc)
}

fn main() {
    let x = "123";
    println!("{}", sum_of_string(x));
}