How to properly cast to a negative number in Rust?

In Rust, this code is valid:
let signedInt: i32 = 23 * -1;
However, this is not:
let unsignedInt: u16 = 2;
let signedInt: i32 = unsignedInt * -1;
Which makes sense, as Rust tries to interpret -1 as if it had the same type as unsignedInt.
So casting is needed. However, that cast becomes quite ugly once more types are involved:
-((unsignedInt * 320) as f32)
Doing this is necessary, as -(unsignedInt * 320) is an invalid expression. But the code above is barely readable, and I was wondering about the best way to make it both valid Rust and human-readable.

Rust requires explicit casts because implicit conversion is a common source of bugs in other languages like C. Generally you should avoid as, and use from or into instead if possible, otherwise try_from/try_into. The main exception is lossy numeric conversions (for example float to int), which are only possible with as at the moment.
Because all numbers in u16 can be represented in i32, your second example can be written as:
let unsignedInt: u16 = 2;
let signedInt = i32::from(unsignedInt) * -1;
For your third example you can keep the as cast, but you can leave out the variable type:
let unsignedInt: u16 = 2;
let float = -((unsignedInt * 320) as f32);
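That said, f32 also implements From<u16>, so as long as the intermediate result fits in u16 (here 2 * 320 = 640 does), the conversion can be spelled with from as well, which some find more readable. A sketch:

fn main() {
    let unsigned_int: u16 = 2;
    // f32 implements From<u16>, so this conversion is lossless and explicit.
    // Note that unsigned_int * 320 must not overflow u16.
    let float = -f32::from(unsigned_int * 320);
    println!("{}", float); // -640
}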

Related

Find what std::convert::try_into converts to

I am learning Rust and here is the sample from the book
use std::convert::TryInto;

fn main() {
    let a: i32 = 10;
    let b: u16 = 100;

    let b_ = b.try_into().unwrap();

    if a < b_ {
        println!("Ten is less than one hundred.");
    }
}
The author says b.try_into() converts b to i32. But where do we specify this in the code? b_ is not given an explicit type, so why would a u16 get converted to an i32 and not to a u32 or something else?
Thanks.
Rust has quite a smart compiler: it can look at nearby code to determine what type a variable should get. This is called type inference.
If you explicitly want to set the type that the .try_into() function should convert, you can put the type in the usual position.
let b_: i32 = b.try_into().unwrap();
Also keep in mind that you cannot convert to arbitrary types this way: try_into only works for pairs of types whose conversion is implemented in the Rust standard library.
My guess is that the compiler looks at the if statement below and infers that b_ should be an i32 (so that it can perform the comparison with a, which it already knows is an i32).
I also tested that reversing the condition, i.e. if b_ > a, causes a compile error. I guess that is because the compiler wants to know the type of b_ before it gets to a.
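For the record, another way to pin down the target type is to convert from the destination type's side with TryFrom, which the standard library implements alongside TryInto. A sketch equivalent to the book's example:

use std::convert::TryFrom;

fn main() {
    let a: i32 = 10;
    let b: u16 = 100;

    // Naming the destination type makes the conversion target explicit:
    let b_ = i32::try_from(b).unwrap();

    if a < b_ {
        println!("Ten is less than one hundred.");
    }
}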

Is there a way to `f64::from(0.23_f32)` and get 0.23_f64?

I'm trying to tie together two pieces of software: one that gives me a f32, and one that expects f64 values. In my code, I use f64::from(my_f32), but in my test, I compare the outcome and the value that I'm comparing has not been converted as expected: the f64 value has a bunch of extra, more precise, digits, such that the values aren't equal.
In my case, the value is 0.23. Is there a way to convert the 0.23_f32 to f64 such that I end up with 0.23_f64 instead of 0.23000000417232513?
fn main() {
    let x = 0.23_f32;
    println!("{}", x);
    println!("{}", f64::from(x));
    println!("---");

    let x = 0.23_f64;
    println!("{}", x);
    println!("{}", f64::from(x));
}
Playground
Edit: I understand that floating-point numbers are stored differently; in fact, I use this handy visualizer on occasion to view the differences in representations between 32-bit and 64-bit floats. I was looking to see if there's some clever way to get around this.
Edit 2: A "clever" example that I just conjured up would be my_f32.to_string().parse::<f64>(), which gets me 0.23_f64 but (obviously) requires string parsing. I'd like to think there might be something at least slightly more numbers-related (for lack of a better term).
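A runnable sketch of that round trip; it works because Rust's Display implementation prints the shortest decimal string that parses back to the same float:

fn main() {
    let x = 0.23_f32;
    // Display prints "0.23" (the shortest string that round-trips to x),
    // and parsing "0.23" as an f64 yields the f64 closest to 0.23.
    let y: f64 = x.to_string().parse().unwrap();
    assert_eq!(y, 0.23_f64);
}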
Comments have already pointed out why this is happening. This answer exists to give you ways to circumvent this.
The first (and most obvious) is to use arbitrary-precision libraries. A solid example of this in rust is rug. This allows you to express pretty much any number exactly, but it causes some problems across FFI boundaries (amongst other cases).
The second is to do what most people do around floating point numbers, and bracket your equalities. Since you know that most floats will not be stored exactly, and you know your input type, you can use constants such as std::f32::MIN to bracket your type, like so (playground):
use std::cmp::PartialOrd;
use std::ops::{Add, Div, Sub};

fn bracketed_eq<
    I,
    E: From<I> + From<f32> + Clone + PartialOrd + Div<Output = E> + Sub<Output = E> + Add<Output = E>,
>(
    input: E,
    target: I,
    value: I,
) -> bool {
    let target: E = target.into();
    let value: E = value.into();
    // Note: in the test below, value is std::f32::MIN, which is negative,
    // so bracket_lhs ends up above target and bracket_rhs below it; that is
    // why the comparisons are inverted relative to the usual lhs <= x <= rhs.
    let bracket_lhs: E = target.clone() - (value.clone() / (2.0).into());
    let bracket_rhs: E = target.clone() + (value.clone() / (2.0).into());
    bracket_lhs >= input && bracket_rhs <= input
}

#[test]
fn test() {
    let u: f32 = 0.23_f32;
    assert!(bracketed_eq(f64::from(u), 0.23, std::f32::MIN));
}
A large amount of this is boilerplate and a lot of it gets completely optimized away by the compiler; it is also possible to drop the Clone requirement by restricting some trait choices. Add, Sub, Div are there for the operations, From<I> to realize the conversion, From<f32> for the constant 2.0.
The right way to compare floating-point values is to bracket them. The question is how to determine the bracketing interval? In your case, since you have a representation of the target value as f32, you have two solutions:
The obvious solution is to do the comparison between f32s, so convert your f64 result to f32 to get rid of the extra digits, and compare that to the expected result. Of course, this may still fail if accumulated rounding errors cause the result to be slightly different.
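A sketch of that first approach, assuming accumulated errors stayed small enough to round away:

let result: f64 = 0.23_f64; // stand-in for the computed value
let expect: f32 = 0.23;
// Rounding the result back to f32 discards the extra digits:
assert_eq!(result as f32, expect);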
The right solution would have been to use the next_after function to get the smallest bracketing interval around your target:
let result: f64 = 0.23f64;
let expect: f32 = 0.23;
assert_ne!(result, expect.into());
assert!(expect.next_after(0.0).into() < result && result < expect.next_after(1.0).into());
but unfortunately this was never stabilized (see #27752).
So you will have to determine the precision that is acceptable to you, possibly as a function of f32::EPSILON:
let result: f64 = 0.23f64;
let expect: f32 = 0.23;
assert_ne!(result, expect.into());
assert!(f64::from(expect) - f64::from(std::f32::EPSILON) < result && result < f64::from(expect) + f64::from(std::f32::EPSILON));
If you don't want to compare the value, but instead want to round it to a given precision before passing it on to some computation, then the function to use is f64::round:
const PRECISION: f64 = 100.0;

let from_db: f32 = 0.23;
let truncated = (f64::from(from_db) * PRECISION).round() / PRECISION;
println!("f32 : {:.32}", from_db);
println!("f64 : {:.32}", 0.23f64);
println!("output: {:.32}", truncated);
prints:
f32 : 0.23000000417232513427734375000000
f64 : 0.23000000000000000999200722162641
output: 0.23000000000000000999200722162641
A couple of notes:
The result is still not equal to 0.23 since that number cannot be represented as an f64 (or as an f32 for that matter), but it is as close as you can get.
If there are legal implications as you implied, then you probably shouldn't be using floating point numbers in the first place but you should use either some kind of fixed-point with the legally mandated precision, or some arbitrary precision library.
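For example, a minimal fixed-point sketch that stores hundredths as integers (assuming a mandated precision of two decimal places):

// 0.23 is represented exactly as 23 hundredths.
let from_db_hundredths: i64 = 23;
let expected_hundredths: i64 = 23;
// Integer comparison is exact; no floating-point rounding is involved.
assert_eq!(from_db_hundredths, expected_hundredths);
println!("{}.{:02}", from_db_hundredths / 100, from_db_hundredths % 100); // 0.23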

How can I define a generic function that can return a given integer type?

I'd like to define a function that can return a number whose type is specified when the function is called. The function takes a buffer (Vec<u8>) and returns a numeric value, e.g.
let byte = buf_to_num::<u8>(&buf);
let integer = buf_to_num::<u32>(&buf);
The buffer contains an ASCII string that represents a number, e.g. b"827", where each byte is the ASCII code of a digit.
This is my non-working code:
extern crate num;
use num::Integer;
use std::ops::{MulAssign, AddAssign};
fn buf_to_num<T: Integer + MulAssign + AddAssign>(buf: &Vec<u8>) -> T {
    let mut result: T;
    for byte in buf {
        result *= 10;
        result += (byte - b'0');
    }
    result
}
I get mismatched type errors for both the addition and the multiplication lines (expected type T, found u32). So I guess my problem is how to tell the type system that T can be expressed in terms of a literal 10 or in terms of the result of (byte - b'0')?
Welcome to the joys of having to spell out every single operation you use as a generic bound. It's a pain, but it is worth it.
You have two problems:
result *= 10; has no corresponding From<_> conversion. When you write 10, there is no way for the compiler to know what 10 as a T means: it only knows primitive types, plus any conversion you defined by implementing From<_> traits.
You're mixing up two operations: the conversion from a vector of characters to an integer, and your arithmetic.
We need to make two assumptions for this:
We will require From<u32> so we can cap our numbers to u32
We will also clarify your logic and convert each u8 to char so we can use to_digit() to convert that to u32, before making use of From<u32> to get a T.
use std::ops::{MulAssign, AddAssign};

fn parse_to_i<T: From<u32> + MulAssign + AddAssign>(buf: &[u8]) -> T {
    let mut buffer: T = (0 as u32).into();
    for o in buf {
        buffer *= 10.into();
        buffer += (*o as char).to_digit(10).unwrap_or(0).into();
    }
    buffer
}
You can convince yourself of its behavior on the playground
The multiplication is resolved by converting the constant 10 with .into(), which makes it benefit from our requirement of From<u32> for T and allows the Rust compiler to know we're not doing silly stuff.
The final change is to initialize buffer with a starting value of 0, obtained through the same From<u32> conversion.
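For instance, a call could look like this; the annotation on the binding picks the output type:

fn main() {
    let n: u32 = parse_to_i(b"827");
    assert_eq!(n, 827);
    let m: u64 = parse_to_i(b"827");
    assert_eq!(m, 827);
}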
Let me know if this makes sense to you (or if it doesn't), and I'll be glad to elaborate further if there is a problem :-)

How could Rust multiply &i32 with i32?

Consider this example:
fn main() {
    let v: Vec<i32> = vec![1, 2, 3, 4, 5];
    let b: i32 = (&v[2]) * 4.0;
    println!("product of third value with 4 is {}", b);
}
This fails as expected, since a float can't be multiplied with an &i32.
error[E0277]: cannot multiply `{float}` to `&i32`
--> src\main.rs:3:23
|
3 | let b: i32 = (&v[2]) * 4.0;
| ^ no implementation for `&i32 * {float}`
|
= help: the trait `std::ops::Mul<{float}>` is not implemented for `&i32`
But when I change the float to int, it works fine.
fn main() {
    let v: Vec<i32> = vec![1, 2, 3, 4, 5];
    let b: i32 = (&v[2]) * 4;
    println!("product of third value with 4 is {}", b);
}
Did the compiler implement the operation between &i32 and i32?
If yes, how is this operation justified in such a type safe language?
Did the compiler implement the operation between &i32 and i32?
Yes. Well, not the compiler, but rather the standard library. You can see the impl in the documentation.
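Concretely, the standard library provides Mul impls for every combination of i32 and &i32, so all of the following compile. A small sketch:

let x = 3_i32;
let r = &x;
assert_eq!(x * 4, 12); // i32 * i32
assert_eq!(r * 4, 12); // &i32 * i32 (the impl the question hits)
assert_eq!(4 * r, 12); // i32 * &i32
assert_eq!(r * r, 9);  // &i32 * &i32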
If yes, how is this operation justified in such a type safe language?
"Type safe" is not a Boolean property, but rather a spectrum. Most C++ programmers would say that C++ is type safe. Yet, C++ has many features that automatically cast between types (constructors, operator T, taking references of values, ...). When designing a programming language, one has to balance the risk of bugs (when introducing convenient type conversions) with the inconvenience (when not having them).
As an extreme example: consider if Option<T> would deref to T and panic if it was None. That's the behavior of most languages that have null. I think it's pretty clear that this "feature" has led to numerous bugs in the real world (search term "billion dollar mistake"). On the other hand, let's consider what bugs could be caused by having &i32 * i32 compile. I honestly can't think of any. Maaaybe someone wanted to multiply the raw pointer of one value with an integer? Rather unlikely in Rust. So since the chance of introducing bugs with this feature is very low, but it is convenient, it was decided to be implemented.
This is always something language designers have to balance. Different languages sit at different points on this spectrum. Rust would likely be considered "more type safe" than C++, but no doubt there are even "more type safe" languages than Rust out there. In this context, "more type safe" just means: decisions leaned more towards "inconvenience instead of potential bugs".
I think you may be confusing &i32 from Rust with &var from C.
In C,
int var = 5;
int newvar = &var * 4; /* bad code: this tries to multiply an address
                          by an integer. (In fact C rejects this too;
                          pointer arithmetic only allows + and -.) */
the '&' operator returns the address of the variable 'var'.
However, in Rust, the '&' operator borrows the variable var.
In Rust,
let var: i32 = 5;
assert_eq!(&var * 8, 40);
This works because &var refers to the value 5, not to a raw address to be multiplied. Note that in C, & is purely an operator. In Rust, & is an operator in expressions and also appears in types; hence the type of &var is &i32.
This is very confusing. If there were more characters left on a standard keyboard, I am sure the designers would have used a different one.
Please see the book and carefully follow the diagrams. The examples in the book use String, which is allocated on the heap. Primitives like i32 are normally allocated on the stack and may be completely optimized away by the compiler. Also, primitives are frequently copied even when reference notation is used, which gets confusing. Still, I think it is easier to look at the heap examples using String first and then consider how this applies to primitives. The logic is the same, but the actual storage and optimization may be different.
It's very simple actually: Rust automatically dereferences references for you in most situations, and the standard library additionally implements the arithmetic operators for references to numbers. It's not like C, where you have to dereference a pointer yourself. Rust references are very similar to C++ references in this regard.

How does Rust solve mutability for Hindley-Milner?

I've read that Rust has very good type inference using Hindley-Milner. Rust also has mutable variables and AFAIK there must be some constraints when a HM algorithm works with mutability because it could over-generalize. The following code:
let mut a;
a = 3;
a = 2.5;
Does not compile, because on the second line an integer type was inferred, and a floating-point value cannot be assigned to an integer variable. So I'm guessing that for simple variables, as soon as a non-generic type is inferred, the variable becomes a mono-type and cannot be generalized anymore.
But what about a template, like Vec? For example this code:
let mut v;
v = Vec::new();
v.push(3);
v.push(2.3);
This fails again, but only on the last line. That means that the second row inferred the type partially (Vec<_>) and the third one inferred the element type.
What's the rule? Is there something like the value restriction that I don't know about? Or am I over-complicating things and Rust has much tighter rules (like no generalization at all)?
It is considered an issue (as far as diagnostic quality goes) that rustc is slightly too eager in its type inference.
If we check your first example:
let mut a = 3;
a = 2.5;
Then the first line leads to inferring that a has the {generic integer} type, and the second line leads to a diagnostic that 2.5 cannot be assigned to a because it is not a generic integer type.
It is expected that a better algorithm would instead register the conflict, and then point at the lines from which each type came. Maybe we'll get that with Chalk.
Note: the generic integer type is a trick of Rust to make integer literals "polymorphic", if there is no other hint at what specific integer type it should be, it will default to i32.
The second example occurs in basically the same way.
let mut v = Vec::new();
v.push(3);
In detail:
v is assigned type $T
Vec::new() produces type Vec<$U>
3 produces type {integer}
So, on the first line, we get $T == Vec<$U> and on the second line we get $U == {integer}, so v is deduced to have type Vec<{integer}>.
If there is no other source to learn the exact integer type, it falls back to i32 by default.
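One way to observe the fallback without adding a constraint of your own is std::any::type_name (stable since Rust 1.38). A sketch:

fn type_of<T>(_: &T) -> &'static str {
    std::any::type_name::<T>()
}

fn main() {
    let mut v = Vec::new(); // v: Vec<$U>, $U not yet known
    v.push(3);              // $U == {integer}
    // Nothing else constrains $U, so it falls back to i32:
    println!("{}", type_of(&v)); // typically prints "alloc::vec::Vec<i32>"
}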
I would like to note that mutability does not actually impact inference here; from the point of view of type inference, or type unification, the following code samples are equivalent:
// With mutability:
let mut a = 1;
a = 2.5;
// Without mutability:
let a = if <condition> { 1 } else { 2.5 };
There are much worse issues in Rust with regard to HM, Deref and sub-typing come as much more challenging.
If I'm not wrong, it does this:
let mut a;
a = 3;   // here a is already inferred as int
a = 2.5; // fails because int != float
For the vec snippet:
let mut v;
v = Vec::new(); // now v's type is Vec<something>
v.push(3);      // v's type is now Vec<int>
v.push(2.3);    // fails here because Vec<int> != Vec<float>
Notice I did not use actual Rust types; this is just to give a general idea.
