Should I use i32 or i64 on a 64-bit machine? - rust

main.rs
#![feature(core_intrinsics)]
fn print_type_of<T>(_: &T) {
    println!("{}", unsafe { std::intrinsics::type_name::<T>() });
}

fn main() {
    let x = 93;
    let y = 93.1;
    print_type_of(&x);
    print_type_of(&y);
}
If I compile this with "rustc +nightly ./main.rs" and run it, I get this output:
$ ./main
i32
f64
I run an x86_64 Linux machine. Floating-point variables are double precision by default, which is good.
Why are integers only 4 bytes? Which should I use? If I don't need i64, should I use i32? Is i32 better for performance?

Is i32 better for performance?
That's actually kind of a subtle thing. If we look up some recent instruction-level benchmarks, for example for Skylake-X, there is for the most part a very clear lack of difference between 64-bit and 32-bit instructions. An exception to that is division: 64-bit division is slower than 32-bit division, even when dividing the same values (division is one of the few variable-latency instructions whose timing depends on the values of its inputs).
Using i64 for data also makes auto-vectorization less effective; this is also one of the rare places where data smaller than 32 bits has a use beyond data-size optimization. Of course data size also matters for the i32 vs i64 question: working with sizable arrays of i64s can easily be slower just because they are bigger, costing more space in the caches and (if applicable) more bandwidth. So if the question is [i32] vs [i64], then it matters.
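The data-size half of that argument is easy to verify directly; a trivial sketch:

use std::mem::size_of;

fn main() {
    // The same element count takes twice the cache space and bandwidth as i64:
    assert_eq!(size_of::<[i32; 1024]>(), 4 * 1024);
    assert_eq!(size_of::<[i64; 1024]>(), 8 * 1024);
}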
Even more subtle is the fact that using 64-bit operations means the code will contain more REX prefixes on average, making it slightly less dense, meaning less of it will fit in the L1 code cache at once. This is a small effect though; just having some 64-bit variables in the code is not a problem.
Despite all that, definitely don't overuse i32, especially in places where you should really have a usize. For example, do not do this:
// don't do this
for i in 0i32 .. data.len() as i32 {
    sum += data[i as usize];
}
This causes a large performance regression: not only is there a pointless sign extension in the loop now, it also defeats bounds-check elimination and auto-vectorization. But of course there is no reason to write code like that in the first place; it's unnatural and harder than doing it right.
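For comparison, a minimal sketch of the natural version (assuming data: &[i32]):

fn sum(data: &[i32]) -> i32 {
    let mut sum = 0;
    // Iterating directly keeps index types out of the picture entirely:
    for &value in data {
        sum += value;
    }
    sum // equivalently: data.iter().sum()
}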

The Rust Programming Language says:
[...] integer types default to i32: this type is generally the fastest, even on 64-bit systems.
And (in the next section):
The default type is f64 because on modern CPUs it’s roughly the same speed as f32 but is capable of more precision.
However, this is fairly simplified. What integer type you should use depends a lot on your program. Don't think about speed when initially writing the program, unless you already know that speed will be a problem. In the vast majority of code, speed doesn't matter: even in performance critical applications, most code is cold code. In contrast, correctness always matters.
Also note that only unconstrained numeric variables default to i32/f64. As soon as you use the variable in a context where a specific numeric type is needed, the compiler uses that type.
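A minimal sketch of that inference behaviour (takes_u8 is just a hypothetical helper):

fn takes_u8(_: u8) {}

fn main() {
    let a = 93; // unconstrained: defaults to i32
    let b = 93; // passed to takes_u8 below, so inferred as u8
    takes_u8(b);
    println!("{}", a);
}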

First of all, you should design your application for your needs/requirements. I.e., if you need "large" integers, use large types; if you don't need them, use small types.
Only if you encounter performance issues (and only then!) should you adjust types to something your requirements alone wouldn't call for.

Related

What are the advantages of using smaller integer types in Rust?

I am learning Rust, and in the official tutorial, the author assigned the value 5 to a variable like so:
let x: i32 = 5;
I thought this was weird, as one could use u8 as the type and the program would run fine. This got me thinking: are there any advantages to using a smaller bit width? Is it faster?
The main advantage is that they use less memory. A Vec<i32> with 1 billion elements will use 4 GB, while a Vec<u8> will use 1 GB. This can be a significant advantage regardless of speed.
Arithmetic on smaller integer types on modern CPUs is not faster in general. There are some issues with using only part of a register, but optimizers will almost certainly resolve these performance problems for you.
When you have a lot of integers and the optimizer can make use of vectorization (for example adding your 1 billion integers in the vector) then smaller types will typically yield better performance, because more of them fit in a SIMD register.
If you use them just as one scalar stack variable, like in your example, I highly doubt there will be a difference in 99% of cases. Here, other considerations are more important:
A bigger type will make overflows less likely, maybe you did calculate your maximal possible value wrong.
For public interfaces bigger types are more future proof.
It's better to cast from i8 to i32 than the other way round (see the sketch below).
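A minimal sketch of why: widening is lossless and gets an infallible From impl, while narrowing is only offered as the fallible TryFrom:

use std::convert::TryFrom;

fn main() {
    let small: i8 = 100;
    // Widening is lossless, so From is implemented:
    let wide: i32 = i32::from(small);
    // Narrowing can lose information, so only TryFrom exists:
    let back: i8 = i8::try_from(wide).expect("out of range for i8");
    println!("{} {}", wide, back);
}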

How to avoid comparing two different integer types?

It seems like I am in a catch-22 situation here.
I have this code:
const MAX: u32 = 10;
let vec: Vec<String> = vec![String::from("test")];
let exceeds = vec.len() > MAX;
I get this error:
let exceeds = vec.len() > MAX;
                          ^^^ expected `usize`, found `u32`
help: you can convert a `u32` to `usize` and panic if
the converted value wouldn't fit
let exceeds = vec.len() > MAX.try_into().unwrap();
                          ^^^^^^^^^^^^^^^^^^^^^^^
As far as I understand from the Rust documentation, if the problem of comparing two different integer types occurs, it has a bad smell.
That shouldn't happen.
However, on the one hand, u32 is the recommended integer type if one is not sure which one to take, because the compiler can handle it most efficiently. I use MAX in many places in the code, where I use it for comparisons with other u32 variables and constants. Therefore MAX has to be of type u32.
On the other hand Rust's Vector instance returns usize, which is, again as far as I understand from the documentation, an integer type depending on the underlying architecture.
In this context the help hint:
help: you can convert a `u32` to `usize` and panic if
the converted value wouldn't fit
looks rather misleading to me. A situation where the difference between two integer types causes a panic should IMHO be avoided by using the proper integer types up front.
The Rust help message should come up with a hint for how to resolve the integer type collision, not with a suggestion which might in the worst case halt program execution.
And, to get to the point, how can I resolve the integer type collision in a better way?
However, on the one hand, u32 is the recommended integer type if one is not sure which one to take, because the compiler can handle it most efficiently.
The compiler doesn't really care; it's more a detail of the architecture. u32 is generally a pretty good default because it's handled efficiently on both 32-bit and 64-bit architectures, and when sufficient it avoids "wasting" memory on a u64 (or, god forbid, a u128).
On the other hand Rust's Vector instance returns usize, which is, again as far as I understand from the documentation, an integer type depending on the underlying architecture.
It is, specifically, an integer large enough to hold a pointer. So strictly speaking it's ABI-related rather than architecture-related: though the two are usually identical, Linux's x32 ABI uses 32-bit pointers on a 64-bit architecture.
x32 makes some sense because 32-bit values are handled efficiently on 64-bit architectures (so there's no loss there) and it saves memory when lots of values are pointers (lower stack use, smaller structures, better cache locality, ...).
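The "pointer-sized" property is easy to check directly; a quick sketch:

use std::mem::size_of;

fn main() {
    // usize is defined to be the same size as a pointer on the target:
    assert_eq!(size_of::<usize>(), size_of::<*const u8>());
}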
The Rust help message should come up with a hint for how to resolve the integer type collision, not with a suggestion which might in the worst case halt program execution.
And, to get to the point, how can I resolve the integer type collision in a better way?
Just don't put in the unwrap call?
try_into is not magic, it's just a fallible conversion: it returns Ok(result) if the conversion succeeds and Err(...) if it fails (which would require that the platform's pointers be smaller than 32 bits and the specific value not fit in a pointer, which seems unlikely).
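A sketch of handling the failure case explicitly instead of unwrapping (reusing MAX from the question):

use std::convert::TryFrom;

const MAX: u32 = 10;

fn exceeds(vec: &[String]) -> bool {
    match usize::try_from(MAX) {
        Ok(max) => vec.len() > max,
        // If MAX doesn't even fit in usize, no slice length can exceed it:
        Err(_) => false,
    }
}

fn main() {
    let vec = vec![String::from("test")];
    println!("{}", exceeds(&vec));
}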
But I don't really see the point of performing a runtime conversion here; just provide a usize version of MAX.
Rust doesn't have untyped constants, an as cast is less safe than desirable, and using separate literals risks them drifting apart, so I'd suggest using a trivial macro expanding to the literal (I guess you could even use the macro instead of a constant in the first place, but that's a bit meh doc-wise), e.g.
macro_rules! MAX {
    () => { 500 }
}

const MAX: u32 = MAX!();
const MAX_USIZE: usize = MAX!();
const MAX8: u8 = MAX!();
This will properly trigger a compilation error on the third definition (500 does not fit in a u8), whereas const MAX8: u8 = MAX as u8; would not.
Or you could perform the conversion with as and ignore the issue altogether; given the magnitude of your constant's value the risk is basically non-existent (though it would exist if MAX could ever grow larger than... 2^16, probably).
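That version is a one-liner (assuming MAX comfortably fits in every target's usize, which holds for a value like 10):

const MAX_USIZE: usize = MAX as usize;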

Is it better to return by value for small types for getters in traits?

For most data types, I follow the convention in https://stackoverflow.com/a/35391084/11963778 and have getters returning references:
trait HasName {
    fn name(&self) -> &String;
    fn name_mut(&mut self) -> &mut String;
}
However, for data types that have copy semantics and are smaller than (or around the size of) a pointer, should I have a getter method returning the value instead? It would look something like this:
trait HasNum {
    fn num_v(&self) -> i32;
    fn num(&self) -> &i32;
    fn num_mut(&mut self) -> &mut i32;
}
Is it good practice to have a getter that returns the value instead? If so, then up to what size should I do this for small data types?
As a rule of thumb, you can copy values that fit in a single cache line instead of using references. While cache lines are typically 64 bytes on x86, Intel recommends limiting data to 16 bytes to reduce the chance of a misaligned value straddling two cache lines.
So in other words, it's probably fine to just copy anything around the size of [i32; 4] or less.
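For instance, a hypothetical implementor of the HasNum trait above could simply copy the value out (Meter is a made-up example type):

struct Meter {
    num: i32,
}

impl HasNum for Meter {
    // Copying the i32 out is as cheap as handing out a reference:
    fn num_v(&self) -> i32 { self.num }
    fn num(&self) -> &i32 { &self.num }
    fn num_mut(&mut self) -> &mut i32 { &mut self.num }
}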
Note: while there is some reasoning behind it, I just made this rule up based on what I know about performance so far. If enough people look at this post, I'm sure someone else will have a better answer. That being said, even if my reasoning is a bit off, I think it will likely still hold up in most cases, as long as you aren't trying to optimize an extremely time-critical piece of code or for a specific CPU.
In the time I spent writing this answer I also found a few more interesting points in this question.
https://stackoverflow.com/a/40185996/5987669
It is common, for example, for a machine to have an architecture (machine registers, memory architecture, etc) which result in a "sweet spot" - copying variables of some size is most "efficient", but copying larger OR SMALLER variables is less so. Larger variables will cost more to copy, because there may be a need to do multiple copies of smaller chunks. Smaller ones may also cost more, because the compiler needs to copy the smaller value into a larger variable (or register), do the operations on it, then copy the value back.
https://stackoverflow.com/a/49523201/5987669
This answer is specific to C, but I wouldn't be surprised if it applied to Rust as well
There is a certain GCC optimization called IPA SRA, that replaces "pass by reference" with "pass by value" automatically: https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html (-fipa-sra)
...
So with this optimization enabled, using references for small types should be as fast as passing them by value.
On the other hand, passing (for example) std::string by value could not be optimized to by-reference speed, as custom copy semantics are involved.

Is `u32`/`i32` suggested even in the limited-range number case?

Should we use u32/i32 or their lower variants (u8/i8, u16/i16) when dealing with limited-range numbers, like "days in a month", which ranges from 1-30, or "score of a subject", which ranges from 0 to 100? Or why shouldn't we?
Is there any optimization or benefit to the lower variants (i.e., are they more memory efficient)?
Summary
Correctness should be prioritized over performance, and correctness-wise (for ranges like 1–100) all solutions (u8, u32, ...) are equally bad. The best solution would be to create a new type to benefit from strong typing.
The rest of my answer tries to justify this claim and discusses different ways of creating the new type.
More explanation
Let's take a look at the "score of subject" example: the only legal values are 0–100. I'd argue that correctness-wise, using u8 and u32 is equally bad: in both cases, your variable can hold values that are not legal in your semantic context; that's bad!
And arguing that u8 is better because there are fewer illegal values is like arguing that wrestling a bear is better than walking through New York, because you only have one possibility of dying (blood loss from a bear attack) as opposed to the many possibilities of death (car accident, knife attack, drowning, ...) in New York.
So what we want is a type that guarantees to hold only legal values. We want to create a new type that does exactly this. However, there are multiple ways to proceed; each with different advantages and disadvantages.
(A) Make the inner value public
struct ScoreOfSubject(pub u8);
Advantage: at least APIs are easier to understand, because the parameter is already explained by the type. Which is easier to understand:
add_record("peter", 75, 47) or
add_record("peter", StudentId(75), ScoreOfSubject(47))?
I'd say the latter one ;-)
Disadvantage: we don't actually do any range checking and illegal values can still occur; bad!
(B) Make inner value private and supply a range checking constructor
struct ScoreOfSubject(u8);

impl ScoreOfSubject {
    pub fn new(value: u8) -> Self {
        assert!(value <= 100);
        ScoreOfSubject(value)
    }

    pub fn get(&self) -> u8 { self.0 }
}
Advantage: we enforce legal values with very little code, yeah :)
Disadvantage: working with the type can be annoying. Pretty much every operation requires the programmer to pack & unpack the value.
(C) Add a bunch of implementations (in addition to (B))
(the code would impl Add<_>, impl Display and so on)
Advantage: the programmer can use the type and do all useful operations on it directly -- with range checking! This is pretty optimal.
Please take a look at Matthieu M.'s comment:
[...] generally multiplying scores together, or dividing them, does not produce a score! Strong typing not only enforces valid values, it also enforces valid operations, so that you don't actually divide two scores together to get another score.
I think this is a very important point I failed to make clear before. Strong typing prevents the programmer from executing illegal operations on values (operations that don't make any sense). A good example is the crate cgmath which distinguishes between point and direction vectors, because both support different operations on them. You can find additional explanation here.
Disadvantage: a lot of code :(
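To give an idea of the per-operation boilerplate, a minimal sketch of one checked operation (reusing ScoreOfSubject from (B); whether adding two scores even yields a score depends on your domain):

use std::ops::Add;

impl Add for ScoreOfSubject {
    type Output = ScoreOfSubject;

    fn add(self, other: ScoreOfSubject) -> ScoreOfSubject {
        // Route the result through the checking constructor
        // so the 0-100 invariant still holds:
        ScoreOfSubject::new(self.get() + other.get())
    }
}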
Luckily the disadvantage can be reduced by the Rust macro/compiler plugin system. There are crates like newtype_derive or bounded_integer that do this kind of code generation for you (disclaimer: I never worked with them).
But now you say: "you can't be serious? Am I supposed to spend my time writing new types?".
Not necessarily, but if you are working on production code (== at least somewhat important), then my answer is: yes, you should.
A no-answer answer: I doubt you would see any difference in benchmarks, unless you do A LOT of arithmetic or process HUGE arrays of numbers.
You should probably just go with the type which makes more sense (no reason to use negatives or to have an upper bound in the millions for a day of the month) and provides the methods you need (e.g. you can't call abs() directly on an unsigned integer).
There could be major benefits to using smaller types, but you would have to benchmark your application on your target platform to be sure.
The first and most easily realized benefit from the lower memory footprint is better caching. Not only is your data more likely to fit into the cache, but it is also less likely to discard other data in the cache, potentially improving a completely different part of your application. Whether or not this is triggered depends on what memory your application touches and in what order. Do the benchmarks!
Network data transfers have an obvious benefit from using smaller types.
Smaller data allows "larger" instructions. A 128-bit SIMD unit can handle 4 32-bit values OR 16 8-bit values, making certain operations 4 times faster. In benchmarks I've made, these instructions do indeed execute 4 times faster, BUT the whole application improved by less than 1%, and the code became more of a mess. Shaping your program to make better use of SIMD can be tricky.
As for the signed/unsigned discussion, unsigned has slightly better properties which a compiler may or may not take advantage of.

Where are data types for Rust (e.g. float) defined?

Rust is obviously a new language (0.8). It looks interesting, and I'm starting to look at it. I came across a mention of some software where float was changed to f64, so I wanted to find where the data types, including float, were defined. I couldn't find anything very specific. I did find:
There are three floating-point types: float, f32, and f64. Floating-point numbers are written 0.0, 1e6, or 2.1e-4. Like integers, floating-point literals are inferred to the correct type. Suffixes f, f32, and f64.
I guess "float" or "f" is 16-bit.
On a more general note (I'm no computer scientist), is it really worth messing around with all these small data types like int, int32, int64, f, f32, f64 (to name a few)? I can understand some languages having e.g. a byte type, because string is a fairly complex type. For numeric types, I think it just creates unnecessary complexity. Why not just have i64 and f64 and call them int and float (or keep the names i64 and f64 to cater for future changes, or have float and int default to these)?
Perhaps there are some low-level programs that require smaller values, but why not reserve their usage for the programs that need them and leave them out of the core? I find it an unnecessary chore to convert from e.g. int to i64, and what does it really achieve? Alternatively, leave them in the core, but default to the 64-bit types. The 64-bit types are obviously necessary, whereas the rest are only necessary for specific cases (IMO).
float, f32 and f64 are defined in the Rust manual: http://static.rust-lang.org/doc/0.8/rust.html#primitive-types
Specifically, for float:
The Rust type float is a machine-specific type equal to one of the supported Rust floating-point machine types (f32 or f64). It is the largest floating-point type that is directly supported by hardware on the target machine, or, if the target machine has no floating-point hardware support, the largest floating-point type supported by the software floating-point library used to support the other floating-point machine types.
So float is not 16 bits; it's an alias for either f32 or f64, depending on the hardware.
To answer the second part of your question: in a low-level language like Rust, one cannot simply postulate that a float is 64 bits, because if the hardware does not support that kind of float natively, there is a significant performance penalty. Nor can one have a single float type with an unspecified representation, because for a lot of use cases one needs guarantees on the precision of the manipulated numbers.
In general, you would use float in the general case, and f32 or f64 when you have specific needs.
Edit: float has now been removed from the language, and there are only the f32 and f64 float types now. The point being that all current architectures support 64-bit floats, so float was always f64, and no use case was found for a machine-specific type.
An answer to your "more general note": Rust exposes this variety of numeric types for two inter-related reasons:
Rust is a systems language.
The compilation target (LLVM, or eventually concrete machine code, such as amd64) makes these distinctions to represent different hardware capabilities. Choosing different data types for different platforms impacts runtime performance, memory, and other resource usage. This flexibility is exposed to the programmer to allow them to fine-tune their software for particular hardware.
Rust prioritizes interoperating with C.
C may have made the same distinctions for the same justification, or it may have made the distinction because C is simpler when it provides fewer abstractions and delegates more to the underlying assembler.
Either way, in order to interoperate between Rust and C without costly generic abstraction layers between them, the primitive types in each language correspond directly to each other.
Some advice:
If you don't care about performance, simply use the largest integer type, i64 or u64, and the largest float type, f64. The resulting code will have easy-to-predict wraparound and precision behavior, but it will perform differently on different architectures, and it wastes space or time in some situations.
This contrasts with the conventional C wisdom (and maybe the Rust community would disagree with me), because if you use the "natural type for the architecture", the same code will perform well on multiple architectures. I'm of the opinion, however, that unexpected differences in wraparound or float precision are a worse problem than performance. (My bias comes from dealing with security.)
If you want to avoid wraparound completely with integers, you can use bigints, which are costly software abstractions over hardware primitives.
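Even with the primitive types, Rust makes overflow behaviour explicit at the call site; a small sketch:

fn main() {
    let a: u8 = 255;
    // A plain `a + 1` would panic in a debug build; wrapping must be requested:
    assert_eq!(a.wrapping_add(1), 0);
    // checked_add reports the overflow instead of wrapping:
    assert_eq!(a.checked_add(1), None);
}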
As an aside, I appreciate explicitly seeing f64 as the type to remind me of precision errors. For example, JavaScript numbers are f64, but it may be easy to forget that and get surprised by JS code like var f = Math.pow(2, 53); f + 1 === f which evaluates to true. The same is true in Rust, but since I notice the type is f64 I'm more likely to remember.
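The Rust equivalent of that JavaScript surprise, as a sketch:

fn main() {
    let f = 2f64.powi(53);
    // 2^53 + 1 is not representable with f64's 53-bit significand,
    // so the addition rounds straight back down to 2^53:
    assert_eq!(f + 1.0, f);
}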
