I have an array, let's say:
let arr: Vec<f64> = vec![1.5, 1.1, -1.6, 0.9, -0.5];
I want the absolute maximum of this array:
let abs_max = arr."some functions";
println!("{}", abs_max); // Gives 1.6 (or -1.6, but not 1.5).
Is there a smart, maybe almost inline, way of doing this? Or do I have to write my own function that iterates through all the values with a for loop and compares them?
I've tried the following suggestion made in this post:
let abs_max = arr.iter().max_by(|a, b| a.abs().total_cmp(b.abs()))
And this code gives the following error:
error[E0599]: no method named `abs` found for reference `&&{float}` in the current scope
  --> src\main.rs:51:44
   |
51 |     let abs_max = arr.iter().max_by(|a, b| a.abs().total_cmp(b.abs()));
   |                                              ^^^ method not found in `&&{float}`

error[E0599]: no method named `abs` found for reference `&&{float}` in the current scope
  --> src\main.rs:51:62
   |
51 |     let abs_max = arr.iter().max_by(|a, b| a.abs().total_cmp(b.abs()));
   |                                                                ^^^ method not found in `&&{float}`
In Rust, we usually operate on iterators rather than collections like vectors directly, so all of the juicy "operate on a collection" functions are going to be in std::iter::Iterator.
Now, Iterator has a max function, which would work for integers, but it won't work for floats, because floats are not totally ordered (NaN, as usual, is a problem).
If you had a collection of integers or anything else that implemented the Ord trait, you could write
arr.iter().max()
which would return an Option<T> containing the maximum value, or None if the collection was empty.
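For example, with integers this works out of the box (a minimal sketch):

```rust
fn main() {
    // i32 implements Ord, so Iterator::max applies directly
    let ints = vec![3, 7, 2];
    assert_eq!(ints.iter().max(), Some(&7));

    // an empty collection yields None
    let empty: Vec<i32> = vec![];
    assert_eq!(empty.iter().max(), None);
}
```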
However, f64 and the other floating-point types don't implement Ord. Fortunately, the wonderful writers of the Rust documentation for Iterator::max thought of this.
Note that f32/f64 doesn’t implement Ord due to NaN being incomparable. You can work around this by using Iterator::reduce:
assert_eq!(
    [2.4, f32::NAN, 1.3]
        .into_iter()
        .reduce(f32::max)
        .unwrap(),
    2.4
);
So we can apply reduce to f64::max (which is like Ord::max except that it works for the non-total ordering of f64). Adapting a bit to your use case, we get
arr.iter().copied().reduce(f64::max)
Again, this returns an Option<f64> which is empty if the initial collection is empty. You can unwrap if you know the initial collection to be nonempty. Also, if you're never using the array again after this point (i.e. you can pass ownership of it to the iterator), you can use arr.into_iter() rather than arr.iter().copied() to save yourself a copy of each element.
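Applied to the array from the question, a minimal sketch:

```rust
fn main() {
    let arr: Vec<f64> = vec![1.5, 1.1, -1.6, 0.9, -0.5];
    // reduce with f64::max gives the plain (not absolute) maximum
    let max = arr.iter().copied().reduce(f64::max);
    assert_eq!(max, Some(1.5));
}
```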
This compares using f64::max, the "usual" ordering of f64. But it sounds like you want a custom ordering, namely the maximum by absolute value. We can use max_by to get this custom ordering.
arr.iter().max_by(|a, b| a.abs().total_cmp(&b.abs()))
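With the original input, this picks the element largest in magnitude (a small sketch):

```rust
fn main() {
    let arr: Vec<f64> = vec![1.5, 1.1, -1.6, 0.9, -0.5];
    // compare by absolute value; total_cmp gives a total order on f64
    let abs_max = arr.iter().max_by(|a, b| a.abs().total_cmp(&b.abs()));
    assert_eq!(abs_max, Some(&-1.6)); // -1.6, not 1.5: its absolute value is the largest
}
```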
Finally, if this is a large project and you don't mind pulling in external dependencies, I recommend OrderedFloat for dealing with all of the awkwardness around NaN and floating point types. It provides an Ord instance for a float-like type which sorts NaN as part of the total ordering, rather than following the (frankly bizarre) IEEE rules for ordering floats. With that library, your maximum becomes
arr.iter().max_by_key(|x| OrderedFloat(x.abs()))
When working on a training problem for Rust, I needed to take all the items in a vector, square each of them, and then sum them. I realize this is not good code and that changing it would be faster than asking Stack Overflow; I will be changing how this works, but right now I'm just trying to learn how to use map, and no examples seem to help me with this problem. This is for understanding, but if you have a more idiomatic way to write it, I would also love to see that. Here is the line of code:
let thing1 = divs.into_iter().map(|n| -> n*n).collect::<Vec<u64>>.iter().sum();
The important bit being:
divs.into_iter().map(|n| -> n*n)
Here is the error:
error: expected `{`, found `*`
  --> src/lib.rs:10:51
   |
10 |     let thing1 = divs.into_iter().map(|n| -> n*n).collect::<Vec<u64>>.iter().sum();
   |                                               ^ expected `{`
   |
help: try placing this code inside a block
   |
10 |     let thing1 = divs.into_iter().map(|n| -> n{ *n }).collect::<Vec<u64>>.iter().sum();
   |                                               +    +

error: could not compile `challenge` due to previous error
This error persists regardless of what operation I do on n (n+1, etc.). I tried doing what the compiler wanted, and it thought I was trying to dereference n. I don't understand why map would act this way - none of the examples I've seen use blocks in map.
You only use `->` in a closure to denote its return type. `n*n` is not a type, so the compiler guesses that you meant `n` as the return type and `*n` as the closure body, which could be valid syntax if braces were added.
Simply remove the `->`.
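With the `->` removed, the whole thing can also skip the intermediate Vec - a minimal sketch:

```rust
fn main() {
    let divs: Vec<u64> = vec![1, 2, 3];
    // the closure body is just `n * n`; summing the mapped
    // iterator directly avoids collect() entirely
    let thing1: u64 = divs.into_iter().map(|n| n * n).sum();
    assert_eq!(thing1, 14); // 1 + 4 + 9
}
```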
I am trying to create a concurrent hash map that stores Strings as keys and i32s as values. Essentially, I am parsing a large corpus of text, and every time the same String is encountered, I add 1 to the value that String maps to. Using HashMap I can do this just fine, but it is not concurrent, and I'd like to parallelize this.
The problem I seem to be encountering is that I need to read the value, then update that value. Since CHashMap locks the value (as it should), this is not possible. However, if values cannot be updated then CHashMap is useless, which implies that I must be doing something wrong. Here is the relevant code. Note that the function "token_split" works and the string parsing in general works just fine. This algorithm has been tested thoroughly in a single-threaded environment using HashMap. What you see here is that same single-threaded code, but this time using CHashMap instead of HashMap. The text corpus is inside of a big vector.
let mut d1: CHashMap<String, i32> = CHashMap::new();
let sep = Regex::new(r"([ ]+)").unwrap();
for i in 0..la.len() {
    let strs: Vec<String> = token_split(&la[i], &sep);
    if let Some(count) = d1.get_mut(&strs[0]) {
        d1.insert(strs[0].clone(), count + 1);
    } else {
        d1.insert(strs[0].clone(), 1);
    }
}
When compiled (or when using cargo check), the error is:
error[E0369]: cannot add `{integer}` to `WriteGuard<'_, String, i32>`
  --> src/main.rs:61:35
   |
61 |         d1.insert(strs[0].clone(), count + 1);
   |                                    -------^-- {integer}
   |                                    |
   |                                    WriteGuard<'_, String, i32>
To access the value guarded by the WriteGuard dereference it:
d1.insert(strs[0].clone(), *count + 1);
However, this is still incorrect as it may deadlock. You already have a guard for the entry, just modify it:
*count += 1;
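For comparison, the same modify-in-place idea in the single-threaded case is what std's HashMap entry API expresses - a sketch with made-up sample input, using std's HashMap rather than CHashMap:

```rust
use std::collections::HashMap;

fn main() {
    let words = ["the", "cat", "the"];
    let mut d1: HashMap<String, i32> = HashMap::new();
    for w in words {
        // update the counter through a mutable reference instead of
        // reading it out and inserting a fresh value
        *d1.entry(w.to_string()).or_insert(0) += 1;
    }
    assert_eq!(d1["the"], 2);
    assert_eq!(d1["cat"], 1);
}
```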
All I want to do is swap the first and the last element of a Vec.
So I wrote this:
// getting a vector of integers from stdin:
let mut ranks = std::io::stdin()
    .lock()
    .lines()
    .next()
    .unwrap()
    .unwrap()
    .trim()
    .split_whitespace()
    .map(|s| s.parse::<usize>().unwrap())
    .collect::<Vec<usize>>();
// ...
// pushing something to ranks
// ...
ranks.swap(0, ranks.len() - 1); // <------ TRYING TO SWAP
And of course, it doesn't work, because
error[E0502]: cannot borrow `ranks` as immutable because it is also borrowed as mutable
 --> src\main.rs:4:19
  |
4 |     ranks.swap(0, ranks.len() - 1);
  |     --------------^^^^^^^^^^^-----
  |     |             |
  |     |             immutable borrow occurs here
  |     mutable borrow later used by call
  |     mutable borrow occurs here
  |
help: try adding a local storing this argument...
 --> src\main.rs:4:19
  |
4 |     ranks.swap(0, ranks.len() - 1);
  |                   ^^^^^^^^^^^
help: ...and then using that local as the argument to this call
By using this piece of advice the compiler gave me, I came up with the following:
let last = ranks.len() - 1;
ranks.swap(0, last);
...which looks terrible.
So the thing is: why do I need to create a local variable to store the vector's length just to pass it to the method? Well, of course, because of the borrowing rules: ranks is borrowed mutably when I call swap, so I can't use ranks.len() anymore.
But isn't it reasonable to suppose that the parameters will be evaluated before the method starts to do anything, and before it can somehow change the contents of the vector or its length?
Basically, function arguments in Rust are evaluated in straight (left-to-right) order, including the receiver, which creates many obstacles to writing clean code for me. So I would like to ask: why was the straight order chosen? If it were the reverse, things would be much easier to express. For instance, the above piece of code could simply stay as:
ranks.swap(0, ranks.len() - 1);
...which (I think you would agree) is much more readable and cleaner.
Also, it is strange that code which is similar in terms of borrowing compiles successfully:
let mut vec = vec![1, 2, 3];      // creating vector
vec.push(*vec.last().unwrap());   // DOUBLE BORROWING HERE!!!
//  ^        ^
//  |        |
//  |        ----- immutable
//  ----- mutable
println!("{:?}", vec);
So what the hell is going on? A double standard, isn't it?
I would like to know the answer. Thank you for the explanation.
but isn't it reasonable to suppose that the values of parameters will be calculated before the method starts to do anything and before it somehow can change the content of vector?
...are you sure?
Evaluation order of operands
The following list of expressions all evaluate their operands the same way, as described after the list. Other expressions either don't take operands or evaluate them conditionally as described on their respective pages.
...
Call expression
Method call expression
...
The operands of these expressions are evaluated prior to applying the effects of the expression. Expressions taking multiple operands are evaluated left to right as written in the source code.
So, self is evaluated before ranks.len() - 1. And self is actually &mut self because of auto-ref. So you first take a mutable reference to the vector, and then call len() on it - which requires borrowing it - while it is already borrowed mutably!
Now, this sometimes works (for example, if you call push() instead of swap()). But that works by magic (called two-phase borrows, which splits borrowing into borrowing and activating the reference), and this magic doesn't work here. I think the reason is that swap() is defined on slices and not on Vec, so we need to go through Deref - but I'm not totally sure.
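Both cases side by side, sketched with a made-up vector:

```rust
fn main() {
    let mut ranks = vec![3, 1, 2];

    // compiles thanks to two-phase borrows: the &mut for push is only
    // "activated" after the argument has been evaluated
    ranks.push(*ranks.last().unwrap());

    // swap goes through Deref to [T], so the same trick fails there;
    // computing the index into a local first sidesteps the problem
    let last = ranks.len() - 1;
    ranks.swap(0, last);

    assert_eq!(ranks, vec![2, 1, 2, 3]);
}
```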
Given v = vec![1,2,3,4], why does v[4..] return an empty vector, but v[5..] panics, while both v[4] and v[5] panic? I suspect this has to do with the implementation of slicing without specifying either the start- or endpoint, but I couldn't find any information on this online.
This is simply because std::ops::RangeFrom is defined to be "bounded inclusively below".
A quick recap of all the plumbing: v[4..] desugars to std::ops::Index using 4.. (which parses as a std::ops::RangeFrom) as the parameter. std::ops::RangeFrom implements std::slice::SliceIndex and Vec has an implementation for std::ops::Index for any parameter that implements std::slice::SliceIndex. So what you are looking at is a RangeFrom being used to std::ops::Index the Vec.
std::ops::RangeFrom is defined to always be inclusive on the lower bound. For example [0..] will include the first element of the thing being indexed. If (in your case) the Vec is empty, then [0..] will be the empty slice. Notice: if the lower bound wasn't inclusive, there would be no way to slice an empty Vec at all without causing a panic, which would be cumbersome.
A simple way to think about it is "where the fence-post is put".
A v[0..] in a vec![0, 1, 2, 3] is

| 0 1 2 3 |
  ^
  |- You are slicing from here. This includes the
     entire `Vec` (even if it was empty).

In v[4..] it is

| 0 1 2 3 |
          ^
          |- You are slicing from here to the end of
             the Vector. Which results in, well, nothing.

while a v[5..] would be

| 0 1 2 3 |
            ^
            |- Slicing from here to infinity is definitely
               outside the `Vec` and, also, the
               caller's fault, so panic!

and a v[3..] is

| 0 1 2 3 |
        ^
        |- slicing from here to the end results in `&[3]`
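The fence-post picture can be verified directly, using the vector from the question (a small sketch):

```rust
fn main() {
    let v = vec![1, 2, 3, 4];
    assert_eq!(&v[3..], &[4][..]); // from the last element
    assert_eq!(&v[4..], &[][..]);  // one past the end: empty slice, no panic
    // &v[5..] would panic: "range start index 5 out of range for slice of length 4"
}
```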
While the other answer explains how to understand and remember the indexing behavior implemented in Rust standard library, the real reason why it is the way it is has nothing to do with technical limitations. It comes down to the design decision made by the authors of Rust standard library.
Given v = vec![1,2,3,4], why does v[4..] return an empty vector, but v[5..] panics [..] ?
Because it was decided so. The code below that handles slice indexing (full source) will panic if the start index is larger than the slice's length.
fn index(self, slice: &[T]) -> &[T] {
    if self.start > slice.len() {
        slice_start_index_len_fail(self.start, slice.len());
    }
    // SAFETY: `self` is checked to be valid and in bounds above.
    unsafe { &*self.get_unchecked(slice) }
}

fn slice_start_index_len_fail(index: usize, len: usize) -> ! {
    panic!("range start index {} out of range for slice of length {}", index, len);
}
How could it be implemented differently? I personally like how Python does it.
v = [1, 2, 3, 4]
a = v[4]   # Raises an exception - similar to Rust's behavior (panic)
b = v[5]   # Same, raises an exception - also similar to Rust's
w = v[4:]  # Returns an empty list - similar to Rust's (equivalent to Rust's v[4..])
x = v[5:]  # Also returns an empty list - different from Rust's v[5..], which panics
Python's approach is not necessarily better than Rust's, because there's always a trade-off. Python's approach is more convenient (there's no need to check if a start index is not greater than the length), but if there's a bug, it's harder to find because it doesn't fail early.
Although Rust could technically follow Python's approach, its designers decided to fail early by panicking so that bugs are found sooner, at the cost of some inconvenience (programmers need to ensure that a start index is not greater than the length).
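For cases where Python-style leniency is preferable, the standard library also offers the non-panicking slice::get, which accepts ranges and returns an Option - a small sketch:

```rust
fn main() {
    let v = vec![1, 2, 3, 4];
    // get never panics: an out-of-range start gives None instead
    assert_eq!(v.get(4..), Some(&[][..]));
    assert_eq!(v.get(5..), None);
}
```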
This is similar to How do I use a custom comparator function with BTreeSet? however in my case I won't know the sorting criteria until runtime. The possible criteria are extensive and can't be hard-coded (think something like sort by distance to target or sort by specific bytes in a payload or combination thereof). The sorting criteria won't change after the map/set is created.
The only alternatives I see are:
use a Vec, but O(log n) inserts and deletes are crucial
wrap each of the elements with the sorting criteria (directly or indirectly), but that seems wasteful
This is possible with standard C++ containers std::map/std::set but doesn't seem possible with Rust's BTreeMap/BTreeSet. Is there an alternative in the standard library or in another crate that can do this? Or will I have to implement this myself?
My use-case is a database-like system where elements in the set are defined by a schema, like:
Element {
    FIELD x: f32
    FIELD y: f32
    FIELD z: i64
    ORDERBY z
}
But since the schema is user-defined at runtime, the elements are stored in a set of bytes (BTreeSet<Vec<u8>>). Likewise the order of the elements is user-defined. So the comparator I would give to BTreeSet would look like |a, b| schema.cmp(a, b). Hard-coded, the above example may look something like:
fn cmp(&self, a: &Vec<u8>, b: &Vec<u8>) -> Ordering {
    let a_field = self.get_field(a, 2).as_i64();
    let b_field = self.get_field(b, 2).as_i64();
    a_field.cmp(&b_field)
}
Would it be possible to pass the comparator closure as an argument to each node operation that needs it? It would be owned by the tree wrapper instead of cloned in every node.
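For what it's worth, the "wrap each element with the sorting criteria" alternative listed above can be sketched like this; the Schema type, its cmp method, and the byte layout are all hypothetical placeholders standing in for the runtime-defined schema:

```rust
use std::cmp::Ordering;
use std::collections::BTreeSet;
use std::rc::Rc;

// Hypothetical runtime-defined schema; its cmp stands in for the
// user-defined comparison over raw element bytes.
struct Schema;

impl Schema {
    fn compare(&self, a: &[u8], b: &[u8]) -> Ordering {
        a.cmp(b) // placeholder: a real schema would decode fields here
    }
}

// Each element carries a shared handle to its schema so it can
// implement Ord; this is the "wasteful" wrapping option.
struct Keyed {
    bytes: Vec<u8>,
    schema: Rc<Schema>,
}

impl PartialEq for Keyed {
    fn eq(&self, other: &Self) -> bool {
        self.cmp(other) == Ordering::Equal
    }
}
impl Eq for Keyed {}
impl PartialOrd for Keyed {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}
impl Ord for Keyed {
    fn cmp(&self, other: &Self) -> Ordering {
        // delegate the ordering to the runtime comparator
        self.schema.compare(&self.bytes, &other.bytes)
    }
}

fn main() {
    let schema = Rc::new(Schema);
    let mut set = BTreeSet::new();
    for bytes in [vec![2], vec![1], vec![3]] {
        set.insert(Keyed { bytes, schema: Rc::clone(&schema) });
    }
    // iteration follows the schema-defined order
    let order: Vec<_> = set.iter().map(|k| k.bytes.clone()).collect();
    assert_eq!(order, vec![vec![1u8], vec![2], vec![3]]);
}
```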