I'd ideally like to have something like the following:
iter = if go_up {
(min .. limit)
} else {
(limit .. max).rev()
};
to create an iterator that either counts up or down to some limit, depending on the situation. However, because Range and Rev are different types, I can't do this. I can use the step_by feature, but because my limits are an unsigned data type, I then also have to cast everything. The best I have so far is:
#![feature(step_by)]
iter = if go_up {
(min as i64 .. limit as i64).step_by(1)
} else {
(limit as i64 .. max as i64).step_by(-1)
};
but this requires both an unstable feature and shoehorning my types. It seems like there should be a neater way to do this; does anyone know one?
The direct solution is to simply create an iterator that can either count upwards or downwards. Use an enum to choose between the types:
use std::ops::Range;
use std::iter::Rev;
enum Foo {
Upwards(Range<u8>),
Downwards(Rev<Range<u8>>),
}
impl Foo {
fn new(min: u8, limit: u8, max: u8, go_up: bool) -> Foo {
if go_up {
Foo::Upwards(min..limit)
} else {
Foo::Downwards((limit..max).rev())
}
}
}
impl Iterator for Foo {
type Item = u8;
fn next(&mut self) -> Option<Self::Item> {
match *self {
Foo::Upwards(ref mut i) => i.next(),
Foo::Downwards(ref mut i) => i.next(),
}
}
}
fn main() {
for i in Foo::new(1, 5, 10, true) {
println!("{}", i);
}
for i in Foo::new(1, 5, 10, false) {
println!("{}", i);
}
}
Another pragmatic solution that introduces a little bit of indirection is to Box the iterators:
fn thing(min: u8, limit: u8, max: u8, go_up: bool) -> Box<dyn Iterator<Item = u8>> {
if go_up {
Box::new(min..limit)
} else {
Box::new((limit..max).rev())
}
}
fn main() {
for i in thing(1, 5, 10, true) {
println!("{}", i);
}
for i in thing(1, 5, 10, false) {
println!("{}", i);
}
}
Personally, your solution
iter = if go_up {
(min as i64 .. limit as i64).step_by(1)
} else {
(limit as i64 .. max as i64).step_by(-1)
};
is a better option than Shepmaster's first example, since it's more complete (e.g. there's a size_hint), it's more likely to be correct by virtue of being a standard tool, and it's faster to write.
It's true that this is unstable, but there's nothing stopping you from just copying the source in the meantime. That gives you a nice upgrade path for when this eventually gets stabilized.
The enum wrapper technique is great in more complex cases, though; in this case I'd be tempted to KISS.
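For completeness: the enum-wrapper idea is also available off the shelf as the either crate (an extra dependency, so this is just a sketch of that option, and the function name is mine). Its Either type implements Iterator whenever both variants do, so it can play the role of the Foo enum above directly:
use either::Either;
fn up_or_down(min: u8, limit: u8, max: u8, go_up: bool) -> impl Iterator<Item = u8> {
    // Both branches have the same type, Either<Range<u8>, Rev<Range<u8>>>,
    // so this compiles even though the underlying iterators differ.
    if go_up {
        Either::Left(min..limit)
    } else {
        Either::Right((limit..max).rev())
    }
}
fn main() {
    for i in up_or_down(1, 5, 10, true) {
        println!("{}", i);
    }
}
Under the hood this is exactly the enum approach, just written for you.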
Related
I need a way to put different objects that all implement a certain trait integrate() in one enum. This enum should implement a method that calls its variant's integrate() method in a certain way, e.g. many times.
I tried to make a very simple example, but it is still not as short as I would want it to be.
Some more explanation: I want to write a solver that integrates certain differential equations, i.e. calculates how a physical system behaves over a certain time span. For each time step, the method integrate() is called. But when I execute the program, I want to be able to choose which physical system is used at runtime. My idea was to have an enum that holds the different physical systems, e.g. OscillatorA and OscillatorB (in reality this could be a double pendulum, or a vibrating string - it doesn't matter).
pub trait Integrate {
fn integrate(&mut self);
}
pub struct OscillatorA {
z: u32,
}
impl Integrate for OscillatorA {
fn integrate(&mut self) {
self.z += 1; // something happens here
}
}
#[derive(Debug)]
pub struct OscillatorB {
x: u32,
y: u32,
}
impl Integrate for OscillatorB {
fn integrate(&mut self) {
self.x += 1; // something different happens here
self.y += 2;
}
}
#[derive(Debug)]
pub enum Oscillator {
A(OscillatorA),
B(OscillatorB),
// ... many other physical systems come here
}
impl Oscillator {
pub fn new(num: &u64) -> Self {
match num {
0 => Self::A(OscillatorA { z: 1 }),
1.. => Self::B(OscillatorB { x: 1, y: 2 }),
}
}
}
impl Integrate for Oscillator {
fn integrate(&mut self) {
// this looks like it should be redundant:
match self {
Self::A(osc) => osc.integrate(),
Self::B(osc) => osc.integrate(),
}
}
}
pub fn integrate_n_times(object: &mut impl Integrate, n: u64) {
for _ in 0..n {
object.integrate();
}
}
fn main() {
let which = 0; // can be set via commandline arguments.
let mut s = Oscillator::new(&which);
integrate_n_times(&mut s, 10);
// ..
}
The function integrate_n_times(&mut self, n) will call the integrate() method required by the Integrate trait n times. But it somehow doesn't feel right, because it resolves a match statement on every iteration. I guess compiler optimizations might avoid that, but it still "feels" wrong, because that is certainly how it reads.
Is there a better design pattern I am missing? Should I require the method integrate_n_times through the trait as well? (But then I would rely on it being written correctly for every Oscillator struct.)
I somehow need one "main struct" that contains all the different physical systems and can call them depending on what arguments I pass to the program.
I would probably use dynamic dispatch here. While it's generally slower than static dispatch, I would imagine it's faster than a massive match statement. Plus, I think it's easier to work with, as long as we don't try to recover the original type with Any and downcasting.
impl Oscillator {
pub fn new(num: &u64) -> Box<dyn Integrate> {
match num {
0 => Box::new(OscillatorA { z: 1 }),
1.. => Box::new(OscillatorB { x: 1, y: 2 }),
}
}
}
pub fn integrate_n_times(object: &mut Box<dyn Integrate>, n: u64) {
for _ in 0..n {
object.integrate();
}
}
fn main() {
let which = 0; // can be set via commandline arguments.
let mut my_oscillator: Box<dyn Integrate> = Oscillator::new(&which);
integrate_n_times(&mut my_oscillator, 10);
}
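As for the follow-up question about requiring integrate_n_times through the trait: you can give the trait a provided (default) method, so every oscillator gets a correct implementation for free without writing it per type. A minimal sketch of that idea:
pub trait Integrate {
    fn integrate(&mut self);

    // Provided method: implementors only have to write integrate(),
    // and they all share this loop.
    fn integrate_n_times(&mut self, n: u64) {
        for _ in 0..n {
            self.integrate();
        }
    }
}
This also keeps working through a Box<dyn Integrate>, since the provided method only takes &mut self and a u64.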
Is there a Rust equivalent of the following C++ sample (that I've written for this question):
union example {
uint32_t fullValue;
struct {
unsigned sixteen1: 16;
unsigned sixteen2: 16;
};
struct {
unsigned three: 3;
unsigned twentynine: 29;
};
};
example e;
e.fullValue = 12345678;
std::cout << e.sixteen1 << ' ' << e.sixteen2 << ' ' << e.three << ' ' << e.twentynine;
For reference, I'm writing a CPU emulator & easily being able to split out binary parts of a variable like this & reference them by different names, makes the code much simpler. I know how to do this in C++ (as above), but am struggling to work out how to do the equivalent in Rust.
You could do this by creating a newtype struct and extracting the relevant bits using masking and/or shifts.
The code to do this is slightly longer (but not by much) and, importantly, avoids the undefined behavior you are triggering in C++.
#[derive(Debug, Clone, Copy)]
struct Example(pub u32);
impl Example {
pub fn sixteen1(self) -> u32 {
self.0 & 0xffff
}
pub fn sixteen2(self) -> u32 {
self.0 >> 16
}
pub fn three(self) -> u32 {
self.0 & 7
}
pub fn twentynine(self) -> u32 {
self.0 >> 3
}
}
pub fn main() {
let e = Example(12345678);
println!("{} {} {} {}", e.sixteen1(), e.sixteen2(), e.three(), e.twentynine());
}
Update
You can make some macros for extracting certain bits:
// Create a u32 mask that's all 0 except for one patch of 1's that
// begins at index `start` and continues for `len` bits.
macro_rules! mask {
($start:expr, $len:expr) => {
{
assert!($start >= 0);
assert!($len > 0);
assert!($start + $len <= 32);
if $len == 32 {
assert!($start == 0);
0xffffffffu32
} else {
((1u32 << $len) - 1) << $start
}
}
}
}
const _: () = assert!(mask!(3, 7) == 0b1111111000);
const _: () = assert!(mask!(0, 32) == 0xffffffff);
// Select `num_bits` bits from `value` starting at `start`.
// For example, select_bits!(0xabcd1234, 8, 12) == 0xd12
// because the created mask is 0x000fff00.
macro_rules! select_bits {
($value:expr, $start:expr, $num_bits:expr) => {
{
let mask = mask!($start, $num_bits);
($value & mask) >> mask.trailing_zeros()
}
}
}
const _: () = assert!(select_bits!(0xabcd1234, 8, 12) == 0xd12);
Then either use these directly on a u32 or make a struct to implement taking certain bits:
struct Example {
v: u32,
}
impl Example {
pub fn first_16(&self) -> u32 {
select_bits!(self.v, 0, 16)
}
pub fn last_16(&self) -> u32 {
select_bits!(self.v, 16, 16)
}
pub fn first_3(&self) -> u32 {
select_bits!(self.v, 0, 3)
}
pub fn last_29(&self) -> u32 {
select_bits!(self.v, 3, 29)
}
}
fn main() {
// Use hex for more easily checking the expected values.
let e = Example { v: 0x12345678 };
println!("{:x} {:x} {:x} {:x}", e.first_16(), e.last_16(), e.first_3(), e.last_29());
// Or use decimal for checking with the provided C code.
let e = Example { v: 12345678 };
println!("{} {} {} {}", e.first_16(), e.last_16(), e.first_3(), e.last_29());
}
Original Answer
While Rust does have unions, it may be better to use a struct for your use case and just get bits from the struct's single value.
// Create a u32 mask that's all 0 except for one patch of 1's that
// begins at index `start` and continues for `len` bits.
macro_rules! mask {
($start:expr, $len:expr) => {
{
assert!($start >= 0);
assert!($len > 0);
assert!($start + $len <= 32);
let mut mask = 0u32;
for i in 0..$len {
mask |= 1u32 << (i + $start);
}
mask
}
}
}
struct Example {
v: u32,
}
impl Example {
pub fn first_16(&self) -> u32 {
self.get_bits(mask!(0, 16))
}
pub fn last_16(&self) -> u32 {
self.get_bits(mask!(16, 16))
}
pub fn first_3(&self) -> u32 {
self.get_bits(mask!(0, 3))
}
pub fn last_29(&self) -> u32 {
self.get_bits(mask!(3, 29))
}
// Get the bits of `self.v` specified by `mask`.
// Example:
// self.v == 0xa9bf01f3
// mask == 0x00fff000
// The result is 0xbf0
fn get_bits(&self, mask: u32) -> u32 {
// Find how many trailing zeros `mask` (in binary) has.
// For example, the mask 0xa0 == 0b10100000 has 5.
let mut trailing_zeros_count_of_mask = 0;
while mask & (1u32 << trailing_zeros_count_of_mask) == 0 {
trailing_zeros_count_of_mask += 1;
}
(self.v & mask) >> trailing_zeros_count_of_mask
}
}
fn main() {
// Use hex for more easily checking the expected values.
let e = Example { v: 0x12345678 };
println!("{:x} {:x} {:x} {:x}", e.first_16(), e.last_16(), e.first_3(), e.last_29());
// Or use decimal for checking with the provided C code.
let e = Example { v: 12345678 };
println!("{} {} {} {}", e.first_16(), e.last_16(), e.first_3(), e.last_29());
}
This setup makes it easy to select any range of bits you want. For example, if you want to get the middle 16 bits of the u32, you just define:
pub fn middle_16(&self) -> u32 {
self.get_bits(mask!(8, 16))
}
And you don't even really need the struct. Instead of having get_bits() be a method, you could define it to take a u32 value and mask, and then define functions like
pub fn first_3(v: u32) -> u32 {
get_bits(v, mask!(0, 3))
}
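The standalone get_bits referred to here isn't shown above; one way to write it (a sketch that reuses the mask! macro defined above and uses u32::trailing_zeros instead of the counting loop from the method version):
// Get the bits of `v` specified by `mask`, shifted down to start at bit 0.
fn get_bits(v: u32, mask: u32) -> u32 {
    (v & mask) >> mask.trailing_zeros()
}
pub fn first_3(v: u32) -> u32 {
    get_bits(v, mask!(0, 3))
}
fn main() {
    assert_eq!(first_3(12345678), 6); // matches `three` in the C++ output
}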
Note
I think this Rust code works the same regardless of your machine's endianness, but I've only run it on my little-endian machine. You should double check it if it could be a problem for you.
You could use the bitfield crate.
This appears to approximate what you are looking for at least on a syntactic level.
For reference, your original C++ code prints:
24910 188 6 1543209
Now there is no built-in functionality in Rust for bitfields, but there is the bitfield crate.
It allows specifying a newtype struct and then generates setters/getters for parts of the wrapped value.
For example pub twentynine, set_twentynine: 31, 3; means that it should generate the setter set_twentynine() and getter twentynine() that sets/gets the bits 3 through 31, both included.
So, translating your C++ union into a Rust bitfield, this is what it could look like:
use bitfield::bitfield;
bitfield! {
pub struct Example (u32);
pub full_value, set_full_value: 31, 0;
pub sixteen1, set_sixteen1: 15, 0;
pub sixteen2, set_sixteen2: 31, 16;
pub three, set_three: 2, 0;
pub twentynine, set_twentynine: 31, 3;
}
fn main() {
let mut e = Example(0);
e.set_full_value(12345678);
println!(
"{} {} {} {}",
e.sixteen1(),
e.sixteen2(),
e.three(),
e.twentynine()
);
}
24910 188 6 1543209
Note that the generated setters/getters are small enough to have a very high chance of being inlined by the compiler, giving you zero overhead.
Of course if you want to avoid adding an additional dependency and instead want to implement the getters/setters by hand, look at #apilat's answer instead.
Alternative: the c2rust-bitfields crate:
use c2rust_bitfields::BitfieldStruct;
#[repr(C, align(1))]
#[derive(BitfieldStruct)]
struct Example {
#[bitfield(name = "full_value", ty = "u32", bits = "0..=31")]
#[bitfield(name = "sixteen1", ty = "u16", bits = "0..=15")]
#[bitfield(name = "sixteen2", ty = "u16", bits = "16..=31")]
#[bitfield(name = "three", ty = "u8", bits = "0..=2")]
#[bitfield(name = "twentynine", ty = "u32", bits = "3..=31")]
data: [u8; 4],
}
fn main() {
let mut e = Example { data: [0; 4] };
e.set_full_value(12345678);
println!(
"{} {} {} {}",
e.sixteen1(),
e.sixteen2(),
e.three(),
e.twentynine()
);
}
24910 188 6 1543209
The advantage of this one is that you can specify the types of the union parts yourself; the first one used u32 for all of them.
I'm unsure, however, how endianness plays into this one. It might yield different results on a system with different endianness. Might require further research to be sure.
I would like to find the first element which is greater than a limit from an ordered collection. While iteration over it is always an option, I need a faster one. Currently, I came up with a solution like this but it feels a little hacky:
use std::cmp::Ordering;
use std::collections::BTreeMap;
use std::ops::Bound::{Included, Unbounded};
#[derive(Debug, Clone, Copy)] // Clone/Copy let the benchmark below copy keys out of the map
struct FloatWrapper(f32);
impl Eq for FloatWrapper {}
impl PartialEq for FloatWrapper {
fn eq(&self, other: &Self) -> bool {
(self.0 - other.0).abs() < 1.17549435e-36f32
}
}
impl Ord for FloatWrapper {
fn cmp(&self, other: &Self) -> Ordering {
if (self.0 - other.0).abs() < 1.17549435e-36f32 {
Ordering::Equal
} else if self.0 - other.0 > 0.0 {
Ordering::Greater
} else if self.0 - other.0 < 0.0 {
Ordering::Less
} else {
Ordering::Equal
}
}
}
impl PartialOrd for FloatWrapper {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
Some(self.cmp(other))
}
}
The wrapper around the float is not nice, even though I am sure there will be no NaNs.
The Range is also unnecessary since I want a single element.
Is there a better way of achieving a similar result using only Rust's standard library? I know that there are plenty of tree implementations but it feels like overkill.
After the suggestions in the answers to use an iterator, I did a little benchmark with the following code:
use rand::{thread_rng, Rng}; // the two-argument gen_range below is the pre-0.8 rand API
use std::time::Instant;
fn main() {
let measure = vec![
10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160, 170, 180, 190,
200,
];
let mut measured_binary = Vec::new();
let mut measured_iter = Vec::new();
let mut measured_vec = Vec::new();
for size in measure {
let mut ww = BTreeMap::new();
let mut what_found = Vec::new();
for _ in 0..size {
let now: f32 = thread_rng().gen_range(0.0, 1.0);
ww.insert(FloatWrapper(now), now);
}
let what_to_search: Vec<FloatWrapper> = (0..10000)
.map(|_| thread_rng().gen_range(0.0, 0.8))
.map(|x| FloatWrapper(x))
.collect();
let mut rez = 0;
for current in &what_to_search {
let now = Instant::now();
let m = find_one(&ww, current);
rez += now.elapsed().as_nanos();
what_found.push(m);
}
measured_binary.push(rez);
rez = 0;
for current in &what_to_search {
let now = Instant::now();
let m = find_two(&ww, current);
rez += now.elapsed().as_nanos();
what_found.push(m);
}
measured_iter.push(rez);
let ww_in_vec: Vec<(FloatWrapper, f32)> =
ww.iter().map(|(&key, &value)| (key, value)).collect();
rez = 0;
for current in &what_to_search {
let now = Instant::now();
let m = find_three(&ww_in_vec, current);
rez += now.elapsed().as_nanos();
what_found.push(m);
}
measured_vec.push(rez);
println!("{:?}", what_found);
}
println!("binary :{:?}", measured_binary);
println!("iter_map :{:?}", measured_iter);
println!("iter_vec :{:?}", measured_vec);
}
fn find_one(from_what: &BTreeMap<FloatWrapper, f32>, what: &FloatWrapper) -> f32 {
let v: Vec<f32> = from_what
.range((Included(what), (Unbounded)))
.take(1)
.map(|(_, &v)| v)
.collect();
*v.get(0).expect("we are in trouble")
}
fn find_two(from_what: &BTreeMap<FloatWrapper, f32>, what: &FloatWrapper) -> f32 {
from_what
.iter()
.skip_while(|(i, _)| *i < what) // Skipping all elements before it
.take(1) // Reducing the iterator to 1 element
.map(|(_, &v)| v) // Getting its value, dereferenced
.next()
.expect("we are in truble") // Our
}
fn find_three(from_what: &Vec<(FloatWrapper, f32)>, what: &FloatWrapper) -> f32 {
*from_what
.iter()
.skip_while(|(i, _)| i < what) // Skipping all elements before it
.take(1) // Reducing the iterator to 1 element
.map(|(_, v)| v) // Getting its value, dereferenced
.next()
.expect("we are in truble") // Our
}
The key takeaway for me is that it is worth using binary search after ~50 elements. In my case, with 30000 elements, that means a 200x speedup (at least based on this microbenchmark).
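For completeness, the sorted-Vec variant can also be searched in logarithmic time with slice::partition_point (stable since Rust 1.52) instead of the linear skip_while; this is just a sketch in the same panicking style as the other find_* helpers:
fn find_four(from_what: &[(FloatWrapper, f32)], what: &FloatWrapper) -> f32 {
    // Index of the first entry whose key is not less than `what`.
    let idx = from_what.partition_point(|(k, _)| k < what);
    from_what[idx].1 // panics, like the expect() calls above, if nothing qualifies
}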
You said you wanted a std-only solution, but this is a common enough problem, so here's a solution using the crate ordered-float:
Cargo.toml
[dependencies]
ordered-float = "1.0"
main.rs
use ordered_float::OrderedFloat; // 1.0.2
use std::collections::BTreeMap;
fn main() {
let mut ww = BTreeMap::new();
ww.insert(OrderedFloat(1.0), "one");
ww.insert(OrderedFloat(2.0), "two");
ww.insert(OrderedFloat(3.0), "three");
ww.insert(OrderedFloat(4.0), "three");
let rez = ww.range(OrderedFloat(1.5)..).next().map(|(_, &v)| v);
println!("{:?}", rez);
}
prints
Some("two")
Now, isn't that nice and clean? If you want a less verbose syntax, I suggest wrapping the BTreeMap itself, so you can give it appropriately named methods that make sense for your application.
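For instance, such a wrapper might look like this (a rough sketch; the type and method names are made up):
use ordered_float::OrderedFloat;
use std::collections::BTreeMap;
struct FloatMap<V>(BTreeMap<OrderedFloat<f32>, V>);
impl<V> FloatMap<V> {
    fn new() -> Self {
        FloatMap(BTreeMap::new())
    }
    fn insert(&mut self, key: f32, value: V) -> Option<V> {
        self.0.insert(OrderedFloat(key), value)
    }
    // First value whose key is >= `limit`, if any.
    fn first_at_or_above(&self, limit: f32) -> Option<&V> {
        self.0.range(OrderedFloat(limit)..).next().map(|(_, v)| v)
    }
}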
NaN behavior
Be aware that OrderedFloat may not behave the way you expect in the presence of NaNs:
NaN is sorted as greater than all other values and equal to itself, in contradiction with the IEEE standard.
Now that we've gone over and clarified the requirements a bit, there are a couple of pieces of bad news for you:
You're not getting away from the requirement to have a wrapping type. As I'm sure you've discovered, this is because no floating-point type implements Ord.
You're also not getting away from a combinator of some sort.
First, we're going to clean up your impls, as they both have shortfalls, described in the comments. In the future, it may make sense to use the wrapper traits in eq-float, as they already implement all of this. The implementations at fault are PartialEq and Ord, and they both break down on a few points. The new implementations:
impl Ord for FloatWrapper {
fn cmp(&self, other: &Self) -> Ordering {
self.0.partial_cmp(&other.0).unwrap_or_else(|| {
if self.0.is_nan() && !other.0.is_nan() {
Ordering::Less
} else if !self.0.is_nan() && other.0.is_nan() {
Ordering::Greater
} else {
Ordering::Equal
}
})
}
}
impl PartialEq for FloatWrapper {
fn eq(&self, other: &Self) -> bool {
if self.0.is_nan() && other.0.is_nan() {
true
} else {
self.0 == other.0
}
}
}
Nothing surprising: we're just abusing the fact that f32 implements PartialOrd to implement Ord, and surfacing all the other implementations on FloatWrapper itself.
Now, for the combinator. Your current combinator will force a range of elements to be stored temporarily in memory, to then discard one. We can do better by abusing the fact that iter() is a sorted iterator. So, we can skip while we search, and then take the first:
let mut first_element = ww.iter()
.skip_while(|(i, _)| *i < &FloatWrapper(1.5)) // Skipping all elements before it
.take(1) // Reducing the iterator to 1 element
.map(|(_, &v)| v) // Getting its value, dereferenced
.next(); // Our result
This yields a 10% speedup in low-element-count situations over your first implementation.
I used the num::BigUint type to avoid integer overflows when calculating the factorial of a number.
However, I had to resort to using .clone() to pass rustc's borrow checker.
How can I refactor the factorial function to avoid cloning what could be large numbers many times?
use num::{BigUint, FromPrimitive, One};
fn main() {
for n in -2..33 {
let bign: Option<BigUint> = FromPrimitive::from_isize(n);
match bign {
Some(n) => println!("{}! = {}", n, factorial(n.clone())),
None => println!("Number must be non-negative: {}", n),
}
}
}
fn factorial(number: BigUint) -> BigUint {
if number < FromPrimitive::from_usize(2).unwrap() {
number
} else {
number.clone() * factorial(number - BigUint::one())
}
}
I tried to use a reference to BigUint in the function definition but got some errors saying that BigUint did not support references.
The first clone is easy to remove. You are trying to use n twice in the same expression, so don't use just one expression:
print!("{}! = ", n);
println!("{}", factorial(n));
is equivalent to println!("{}! = {}", n, factorial(n.clone())) but does not try to move n and use a reference to it at the same time.
The second clone can be removed by changing factorial not to be recursive:
fn factorial(mut number: BigUint) -> BigUint {
let mut result = BigUint::one();
let one = BigUint::one();
while number > one {
result *= &number;
number -= &one;
}
result
}
This might seem unidiomatic, however. There is a range function that you could use with for; however, it uses clone internally, defeating the point.
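For illustration, that range-based version would look something like this (a sketch, assuming num re-exports range_inclusive from num-iter; as noted, the counter is cloned on every iteration):
use num::{range_inclusive, BigUint, One};
fn factorial(number: BigUint) -> BigUint {
    let mut result = BigUint::one();
    // range_inclusive clones its counter on each step, which is the
    // cloning "defeating the point" mentioned above.
    for i in range_inclusive(BigUint::one(), number) {
        result *= i;
    }
    result
}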
I don't think taking a BigUint as a parameter makes sense for a factorial; u32 should be enough:
use num::{BigUint, One};
fn main() {
for n in 0..42 {
println!("{}! = {}", n, factorial(n));
}
}
fn factorial_aux(accu: BigUint, i: u32) -> BigUint {
if i > 1 {
factorial_aux(accu * i, i - 1)
}
else {
accu
}
}
fn factorial(n: u32) -> BigUint {
factorial_aux(BigUint::one(), n)
}
Or if you really want to keep BigUint:
use num::{BigUint, FromPrimitive, One, Zero};
fn main() {
for i in (0..42).flat_map(|i| FromPrimitive::from_i32(i)) {
print!("{}! = ", i);
println!("{}", factorial(i));
}
}
fn factorial_aux(accu: BigUint, i: BigUint) -> BigUint {
if !i.is_one() {
factorial_aux(accu * &i, i - 1u32)
} else {
accu
}
}
fn factorial(n: BigUint) -> BigUint {
if !n.is_zero() {
factorial_aux(BigUint::one(), n)
} else {
BigUint::one()
}
}
Neither version does any cloning.
If you use ibig::UBig instead of BigUint, those clones will be free, because ibig is optimized not to allocate memory from the heap for numbers this small.
How can I replace a value if it fails a predicate?
To illustrate:
assert_eq!((3-5).but_if(|v| v < 0).then(0), 0)
I thought there would be something on Option or Result to allow this, but I cannot find it.
I thought there would be something on Option or Result
But neither of these types appears here. Subtracting two numbers yields another number.
It appears you just want a traditional if-else statement:
fn main() {
let a = 3 - 5;
assert_eq!(if a < 0 { 0 } else { a }, 0);
}
Since you have two values that can be compared, you may also be interested in max:
use std::cmp::max;
fn main() {
assert_eq!(max(0, 3 - 5), 0);
}
You can make your proposed syntax work, but I'm not sure it's worth it. Presented without further comment...
fn main() {
assert_eq!((3 - 5).but_if(|&v| v < 0).then(0), 0)
}
trait ButIf: Sized {
fn but_if<F>(self, f: F) -> ButIfTail<Self>
where F: FnOnce(&Self) -> bool;
}
// or `impl<T> ButIf for T {` for maximum flexibility
impl ButIf for i32 {
fn but_if<F>(self, f: F) -> ButIfTail<Self>
where F: FnOnce(&Self) -> bool,
{
ButIfTail(f(&self), self)
}
}
struct ButIfTail<T>(bool, T);
impl<T> ButIfTail<T> {
fn then(self, alt: T) -> T {
if self.0 {
alt
} else {
self.1
}
}
}
Update: This got a bit nicer since Rust 1.27, when Option::filter was added:
assert_eq!(Some(3 - 5).filter(|&v| v >= 0).unwrap_or(0), 0);
Prior to Rust 1.27, you would have needed an iterator in order to write a single, chained expression without lots of additional custom machinery:
assert_eq!(Some(3 - 5).into_iter().filter(|&v| v >= 0).next().unwrap_or(0), 0);