Power expression in a constant [duplicate] - rust

let arr0 = [0u8; 15];
let arr1 = [0u8; arr0.len()]; // this fails
I think the compiler should be able to determine the length of arr0 as a compile-time constant, no? Still, this is flagged as an error saying that a variable was found instead of a constant integer.
Why?
Is there constexpr (C++) function in Rust?
Version:
rustc 1.0.0-nightly (ecf8c64e1 2015-03-21) (built 2015-03-22)

Because it hasn't been implemented yet. Extending the subset of Rust that counts as constant expressions can be done backwards-compatibly, so there's no rush to do so before 1.0, and it's not even settled how it should be done (how much should be allowed, whether there should be a constexpr mechanism and how powerful it should be, etc).
In the meantime, macros and syntax extensions cover many of the same use cases (and the latter are strictly more powerful than constexpr ever will be).
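For readers on modern Rust: constant evaluation has since been extended considerably. const fn was stabilized in Rust 1.31, and slice len() became callable in const context around Rust 1.39, so the original goal works once the array itself is a const (a sketch; version numbers are approximate):

```rust
// `len()` is a const fn on modern Rust, so it can be evaluated at
// compile time when called on a `const` array.
const ARR0: [u8; 15] = [0u8; 15];

fn main() {
    // Compiles: the array length is now a constant expression.
    let arr1 = [0u8; ARR0.len()];
    assert_eq!(arr1.len(), 15);
}
```

Note that the array must be a const (or static) item: a plain let binding is still a runtime variable, so the original two-line snippet still fails as written.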

Related

Why isn't there implicit type conversion (coercion) between primitive types in Rust

I am reading through Rust by Example, and I am curious about why we cannot coerce a decimal to a u8, like in the following snippet:
let decimal = 65.4321_f32;
// Error! No implicit conversion
let integer: u8 = decimal;
But explicit casting is allowed, so I don't understand why we can't have it implicitly too.
Is this a language design decision? What advantages does this bring?
Safety is a big part of the design of Rust and its standard library. A lot of the focus is on memory safety but Rust also tries to help prevent common bugs by forcing you to make decisions where data could be lost or where your program could panic.
A good example of this is that it uses the Option type instead of null. If you are given an Option<T> you are now forced to decide what to do with it. You could decide to unwrap it and panic, or you could use unwrap_or to provide a sensible default. Your decision, but you have to make it.
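A minimal sketch of that forced decision (the parse call and the default value here are just for illustration):

```rust
fn main() {
    // parse() returns a Result; ok() turns it into an Option.
    let maybe: Option<i32> = "42".parse().ok();

    // You are forced to handle the None case somehow:
    let value = maybe.unwrap_or(0); // sensible default instead of a panic
    assert_eq!(value, 42);

    let bad: Option<i32> = "not a number".parse().ok();
    assert_eq!(bad.unwrap_or(0), 0); // the default kicks in
}
```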
To convert an f32 (or f64) to a u8 you can use the as operator. It doesn't happen automatically because Rust can't decide for you what you want to happen when the number is too big or too small. Or maybe you want to do something with the fractional part? Do you want to round it up, down, or to the nearest integer?
Even the as operator is considered by some[1] to be an early design mistake, since you can easily lose data unintentionally - especially when your code evolves over time and the types are less visible because of type inference.
[1] https://github.com/rust-lang/rfcs/issues/2784#issuecomment-543180066
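To make that decision explicit, a sketch of the options (the value is changed to 65.7321 here so that truncating and rounding give visibly different results):

```rust
fn main() {
    let decimal = 65.7321_f32;

    let truncated = decimal as u8;       // `as` truncates toward zero: 65
    let rounded = decimal.round() as u8; // nearest integer: 66
    let floored = decimal.floor() as u8; // down: 65
    let ceiled = decimal.ceil() as u8;   // up: 66

    println!("{} {} {} {}", truncated, rounded, floored, ceiled);
}
```

Each variant states in the code which behavior you chose, which is exactly the decision an implicit coercion would hide.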

What happens when casting a big float to a int?

I was wondering what would happen when I cast a very large float value to an integer. This is an example I wrote:
fn main() {
let x = 82747650246702476024762_f32;//-1_i16;
let y = x as u8;
let z = x as i32;
println!("{} {} {}", x, y, z);
}
and the output is:
$ ./casts
82747650000000000000000 0 -2147483648
Obviously the float won't fit in any of these integer types, but since Rust so strongly advertises that it is safe, I would have expected an error of some kind. These operations use the LLVM fptosi and fptoui instructions, which produce a so-called poison value if the value doesn't fit within the type it has been cast to. This may produce undefined behavior, which is very bad, especially in a language that advertises safety.
How can I be sure my float to int casts don't result in undefined behavior in Rust? And why would Rust even allow this (as it is known for creating safe code)?
In Rust 1.44 and earlier, if you use as to cast a floating-point number to an integer type and the floating-point number does not fit¹ in the target type, the result is an undefined value², and most things that you can do with it cause undefined behavior.
This serious issue (#10184) was fixed in Rust 1.45. Since that release, float-to-integer casts saturate instead (that is, values that are too large or small are converted to T::MAX or T::MIN, respectively; NaN is converted to 0).
In older versions of Rust, you can enable the new, safe behavior with the -Z saturating-float-casts flag. Note that saturating casts may be slightly slower since they have to check the type bounds first. If you really need to avoid the check, the standard library provides to_int_unchecked. Since the behavior is undefined when the number is out of range, you must use unsafe.
(There used to be a similar issue for certain integer-to-float casts, but it was resolved by making such casts always saturating. This change was not considered a performance regression and there is no way to opt in to the old behavior.)
Related questions
Can casting in safe Rust ever lead to a runtime error?
¹ "Fit" here means either NaN, or a number of such large magnitude that it cannot be approximated by the smaller type. 8.7654321_f64 will still be truncated to 8 by an as u8 cast, even though the value cannot be represented exactly by the destination type -- loss of precision does not cause undefined behavior, only being out of range does.
² A "poison" value in LLVM, as you correctly note in the question, but Rust itself does not distinguish between undef and poison values.
Part of your problem is that as performs lossy casts: you can cast a u8 to a u16 without a problem, but a u16 can't always fit in a u8, so the value is truncated. You can read about the behavior of as here.
Rust is designed to be memory safe, which means you can't access memory you shouldn't or have data races (unless you use unsafe), but you can still have memory leaks and other undesirable behavior.
What you described is unexpected behavior, but it is still well defined; these are very different things. Two different Rust compilers would compile this to code with the same result; if it were undefined behavior, a compiler implementer could have it compile to whatever they wanted, or not compile at all.
Edit: As pointed out in the comments and other answers, casting a float to an int using as currently causes undefined behavior; this is due to a known bug in the compiler.

Rust no_std split_at_mut function "no method named `split_mut_at` found for type" compilation error in rust stable

I am trying to extract bits of code from an embedded rust example that does not compile. A lot of these old embedded examples don't compile because they use nightly and they quickly become broken and neglected.
let mut buffer : [u8; 2048] = [0;2048];
// some code to fill the buffer here
// say we want to split the buffer at position 300
let (request_buffer, response_buffer) = buffer.split_mut_at(300);
This example uses #![no_std], so there is no standard library to link to, yet it must have compiled at some point, so the function split_mut_at must have worked at some point. I am using both IntelliJ Rust and Visual Studio Code as IDEs, but neither can point me to the definition of the split_mut_at function. There is a minefield of crates and use statements in the example, and there is no clear way to pinpoint where some function comes from without a huge trial-and-error effort.
btw, split_at_mut can usually be found in std::string::String
Is there a rust command that tells you what crate a function belongs to in your project? It always takes so long to update rust-docs when doing a rust update. Surely that can help!
You're looking for slice::split_at_mut (note the mut at the end). It is listed in the nightly docs here and the stable docs here. It is also indeed available with #![no_std]. It is defined in libcore here.
As a general rule of thumb, when a function x from core or std has mutable and immutable variants, the function taking an immutable reference is named x and the function taking a mutable reference is named x_mut.
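Putting the corrected name into the original snippet (the split index 300 is taken from the question):

```rust
// split_at_mut is defined on slices in core, so it is available
// even with #![no_std].
fn main() {
    let mut buffer: [u8; 2048] = [0; 2048];
    // Correct name: split_at_mut, not split_mut_at.
    let (request_buffer, response_buffer) = buffer.split_at_mut(300);
    assert_eq!(request_buffer.len(), 300);
    assert_eq!(response_buffer.len(), 1748);
}
```

The two returned slices borrow disjoint halves of the buffer, which is why this one method exists instead of calling split_at twice with &mut.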

Rustc only warns when value that overflows is assigned

I am finding what I think is a very strange behaviour. The compiled program panics when a variable overflows at runtime; this makes sense to me. However, the compiler only raises a warning when a value that overflows is assigned at compile time. Shouldn't that be a compile-time error? Otherwise, the two behaviours seem inconsistent.
I expect a compile time error:
fn main() {
let b: i32 = 3_000_000_000;
println!("{}", b);
}
Produces:
<anon>:2:18: 2:31 warning: literal out of range for i32, #[warn(overflowing_literals)] on by default
<anon>:2 let b: i32 = 3_000_000_000;
Playground 1
This makes sense to me:
fn main() {
let b: i32 = 30_000;
let c: i32 = 100_000;
let d = b * c;
println!("{}", d);
}
Produces:
thread '<main>' panicked at 'arithmetic operation overflowed', <anon>:4
playpen: application terminated with error code 101
Playground 2
Edit:
Given the comment by FrancisGagné, and my discovery that Rust provides operators that check for overflow during the operation, for example checked_mul, I see that one needs to implement overflow checks oneself. This makes sense, because release builds should be optimized, and constantly checking for overflow could get expensive. So I no longer see the "inconsistency". However, I am still surprised that assigning a value that would overflow does not lead to a compile-time error. In Go it would: Go Playground
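A sketch of those explicit-overflow operators applied to the values from the snippet above (30_000 * 100_000 = 3_000_000_000, which exceeds i32::MAX):

```rust
fn main() {
    let b: i32 = 30_000;
    let c: i32 = 100_000;

    // checked_mul returns None on overflow instead of panicking:
    assert_eq!(b.checked_mul(c), None);

    // wrapping_mul makes two's-complement wrapping explicit:
    assert_eq!(b.wrapping_mul(c), -1_294_967_296);

    // saturating_mul clamps to the type's bounds:
    assert_eq!(b.saturating_mul(c), i32::MAX);

    // overflowing_mul returns the wrapped value plus an overflow flag:
    assert_eq!(b.overflowing_mul(c), (-1_294_967_296, true));
}
```

Each variant names the behavior you want in the code itself, instead of leaving it to the Debug/Release distinction.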
Actually, your comments are not consistent with the behavior you observe:
in your first example: you get a compile-time warning, which you ignore, and thus the compiler deduces that you want wrapping behavior
in your second example: you get a run-time error
The Go example is similar to the first Rust example (except that Go, by design, does not have warnings).
In Rust, an underflow or overflow results in an unspecified value, which can be ⊥ ("bottom" in computer-science terms), a special value indicating that control flow diverges, which in general means either abortion or an exception.
This specification allows:
instrumenting the Debug mode to catch all overflows at the very point at which they occur
not instrumenting1 the Release mode (and using wrapping arithmetic there)
and yet have both modes be consistent with the specification.
1 Not instrumented by default; if you choose, and for a relatively modest performance cost outside of heavy numeric code, you can activate overflow checks in Release with a simple flag (-C overflow-checks=on).
On the cost of overflow checks: the current Rust/LLVM situation is helpful for debugging but has not really been optimized. Thus, in this framework, overflow checks cost. If the situation improves, then rustc might decide, one day, to activate overflow checking by default even in Release.
In Midori (a Microsoft experimental OS developed in a language similar to C#), overflow check was turned on even in Release builds:
In Midori, we compiled with overflow checking on by default. This is different from stock C#, where you must explicitly pass the /checked flag for this behavior. In our experience, the number of surprising overflows that were caught, and unintended, was well worth the inconvenience and cost. But it did mean that our compiler needed to get really good at understanding how to eliminate unnecessary ones.
Apparently, they improved their compiler so that:
it would reason about the ranges of variables, and statically eliminate bounds checks and overflow checks when possible
it would aggregate checks as much as possible (a single check for multiple potentially overflowing operations)
The latter is only to be done in Release (you lose precision) but reduces the number of branches.
So, what costs remain?
Potentially different arithmetic rules that get in the way of optimizations:
in regular arithmetic, 64 + x - 128 can be optimized to x - 64; with overflow checks activated the compiler might not be able to perform this optimization
vectorization can be hampered too, if the compiler does not have overflow checking vector built-ins
...
Still, unless the code is heavily numeric (scientific simulations or graphics, for example), the impact should be modest.
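The 64 + x - 128 point can be illustrated with a value for which the intermediate result overflows even though the folded form would not (x here is just a demonstration value):

```rust
fn main() {
    let x = i32::MAX - 10;

    // With wrapping (unchecked) arithmetic, 64 + x - 128 equals x - 64,
    // so a compiler is free to fold the expression:
    assert_eq!(64i32.wrapping_add(x).wrapping_sub(128), x - 64);

    // With overflow checks, the intermediate 64 + x already overflows,
    // so folding to x - 64 would change observable behavior:
    assert_eq!(64i32.checked_add(x), None);
    assert_eq!(x.checked_sub(64), Some(i32::MAX - 74));
}
```

This is why the checked and unchecked modes admit different sets of algebraic rewrites, and why checks can get in the way of optimization.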

Why can't I use a function returning a compile-time constant as a constant?


Resources