How to define mutual recursion with closures?

I can do something like this:
fn func() -> (Vec<i32>, Vec<i32>) {
    let mut u = vec![0;5];
    let mut v = vec![0;5];
    fn foo(u: &mut [i32], v: &mut [i32], i: usize, j: usize) {
        for k in i+1..u.len() {
            u[k] += 1;
            bar(u, v, k, j);
        }
    }
    fn bar(u: &mut [i32], v: &mut [i32], i: usize, j: usize) {
        for k in j+1..v.len() {
            v[k] += 1;
            foo(u, v, i, k);
        }
    }
    foo(&mut u, &mut v, 0, 0);
    (u, v)
}
fn main() {
    let (u, v) = func();
    println!("{:?}", u);
    println!("{:?}", v);
}
but I would prefer to do something like this:
fn func() -> (Vec<i32>, Vec<i32>) {
    let mut u = vec![0;5];
    let mut v = vec![0;5];
    let foo = |i, j| {
        for k in i+1..u.len() {
            u[k] += 1;
            bar(k, j);
        }
    };
    let bar = |i, j| {
        for k in j+1..v.len() {
            v[k] += 1;
            foo(i, k);
        }
    };
    foo(0, 0);
    (u, v)
}
fn main() {
    let (u, v) = func();
    println!("{:?}", u);
    println!("{:?}", v);
}
The second example doesn't compile with the error: unresolved name bar.
In my actual task I could do this with a single recursive function, but it would not look as clear.
Does anyone have any other suggestions?

I have a solution for mutually recursive closures, but it doesn't work with multiple mutable borrows, so I couldn't extend it to your example.
There is a way to define mutually recursive closures, using an approach similar to how this answer does single recursion. You can put the closures together into a struct, where each of them takes a borrow of that struct as an extra argument.
fn func(n: u32) -> bool {
    struct EvenOdd<'a> {
        even: &'a dyn Fn(u32, &EvenOdd<'a>) -> bool,
        odd: &'a dyn Fn(u32, &EvenOdd<'a>) -> bool,
    }
    let evenodd = EvenOdd {
        even: &|n, evenodd| {
            if n == 0 {
                true
            } else {
                (evenodd.odd)(n - 1, evenodd)
            }
        },
        odd: &|n, evenodd| {
            if n == 0 {
                false
            } else {
                (evenodd.even)(n - 1, evenodd)
            }
        },
    };
    (evenodd.even)(n, &evenodd)
}
fn main() {
    println!("{}", func(5));
    println!("{}", func(6));
}

While defining mutually recursive closures works in some cases, as demonstrated in the answer by Alex Knauth, I don't think that's an approach you should usually take. It is kind of opaque, has some limitations pointed out in the other answer, and it also has a performance overhead since it uses trait objects and dynamic dispatch at runtime.
Closures in Rust can be thought of as functions with associated structs storing the data you closed over. So a more general solution is to define your own struct storing the data you want to close over, and define methods on that struct instead of closures. For this case, the code could look like this:
pub struct FooBar {
    pub u: Vec<i32>,
    pub v: Vec<i32>,
}
impl FooBar {
    fn new(u: Vec<i32>, v: Vec<i32>) -> Self {
        Self { u, v }
    }
    fn foo(&mut self, i: usize, j: usize) {
        for k in i+1..self.u.len() {
            self.u[k] += 1;
            self.bar(k, j);
        }
    }
    fn bar(&mut self, i: usize, j: usize) {
        for k in j+1..self.v.len() {
            self.v[k] += 1;
            self.foo(i, k);
        }
    }
}
fn main() {
    let mut x = FooBar::new(vec![0;5], vec![0;5]);
    x.foo(0, 0);
    println!("{:?}", x.u);
    println!("{:?}", x.v);
}
(Playground)
While this can get slightly more verbose than closures, and requires a few more explicit type annotations, it's more flexible and easier to read, so I would generally prefer this approach.

Related

How to traverse and consume a vector in given order?

For example, I have a Vec<String> and an array storing indexes.
let src = vec!["a".to_string(), "b".to_string(), "c".to_string()];
let idx_arr = [2_usize, 0, 1];
The indexes stored in idx_arr come from the range 0..src.len(), without repetition or omission.
I want to move the elements in src to another container in the given order, until the vector is completely consumed. For example,
let iter = into_iter_in_order(src, &idx_arr);
for s in iter {
    // s: String
}
// or
consume_vec_in_order(src, &idx_arr, |s| {
    // s: String
});
If the type of src could be changed to Vec<Option<String>>, this would be much easier: just use src[i].take(). However, it cannot.
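For reference, here is a minimal sketch of that ruled-out Option-based approach (the helper name consume_options_in_order is made up for illustration):
// Illustration only: if the element type could be Vec<Option<String>>,
// Option::take would move each element out in the requested order.
fn consume_options_in_order(mut src: Vec<Option<String>>, order: &[usize], mut cb: impl FnMut(String)) {
    for &i in order {
        cb(src[i].take().unwrap()); // panics if an index repeats or is out of range
    }
}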
Edit:
"Another container" refers to any container, such as a queue or hash set. Reordering in place is not the answer to the problem. It introduces the extra time cost of O(n). The ideal method should be 0-cost.
Not sure if my algorithm satisfies your requirements, but here is an algorithm that consumes the provided vector in order without allocating a new temporary vector, which is more memory-efficient.
fn main() {
    let src = &mut vec!["a".to_string(), "b".to_string(), "c".to_string(), "d".to_string()];
    let idx_arr = [2_usize, 3, 1, 0];
    consume_vector_in_order(src, idx_arr.to_vec());
    println!("{:?}", src); // d , c , a , b
}
// In-place consume vector in order
fn consume_vector_in_order<T>(v: &mut Vec<T>, inds: Vec<usize>) -> &mut Vec<T> {
    let mut i: usize = 0;
    let mut temp_inds = inds.to_vec();
    while i < inds.len() {
        let s_index = temp_inds[i];
        if s_index != i {
            let new_index = temp_inds[s_index];
            temp_inds.swap(s_index, new_index);
            v.swap(s_index, new_index);
        } else {
            i += 1;
        }
    }
    v
}
You can use the technique found in How to sort a Vec by indices? (using my answer in particular), since that can reorder the data in place from the indices, and then it's just simple iteration:
fn consume_vec_in_order<T>(mut vec: Vec<T>, order: &[usize], mut cb: impl FnMut(T)) {
    sort_by_indices(&mut vec, order.to_owned());
    for elem in vec {
        cb(elem);
    }
}
Full example available on the playground.
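Since the full example lives on the playground, here is a rough sketch of the sort_by_indices helper referenced above (an approximation of the cycle-following, in-place reorder from the linked answer, not a verbatim copy):
// Reorders `data` in place so that data[i] ends up holding the element that
// was originally at position indices[i], following each permutation cycle.
fn sort_by_indices<T>(data: &mut [T], mut indices: Vec<usize>) {
    for idx in 0..data.len() {
        if indices[idx] != idx {
            let mut current_idx = idx;
            loop {
                let target_idx = indices[current_idx];
                indices[current_idx] = current_idx;
                if indices[target_idx] == target_idx {
                    break;
                }
                data.swap(current_idx, target_idx);
                current_idx = target_idx;
            }
        }
    }
}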
Edit:
An ideal method, but it needs unstable features and internal items (such as RawVec) that the standard library does not expose.
use std::alloc::{Allocator, RawVec};
use std::marker::PhantomData;
use std::mem::{self, ManuallyDrop};
use std::ptr::{self, NonNull};
#[inline]
unsafe fn into_iter_in_order<'a, T, A: Allocator>(
    vec: Vec<T, A>,
    order: &'a [usize],
) -> IntoIter<'a, T, A> {
    unsafe {
        let mut vec = ManuallyDrop::new(vec);
        let cap = vec.capacity();
        let alloc = ManuallyDrop::new(ptr::read(vec.allocator()));
        let ptr = order.as_ptr();
        let end = ptr.add(order.len());
        IntoIter {
            buf: NonNull::new_unchecked(vec.as_mut_ptr()),
            _marker_1: PhantomData,
            cap,
            alloc,
            ptr,
            end,
            _marker_2: PhantomData,
        }
    }
}
struct IntoIter<'a, T, A: Allocator> {
    buf: NonNull<T>,
    _marker_1: PhantomData<T>,
    cap: usize,
    alloc: ManuallyDrop<A>,
    ptr: *const usize,
    end: *const usize,
    _marker_2: PhantomData<&'a usize>,
}
impl<T, A: Allocator> Iterator for IntoIter<'_, T, A> {
    type Item = T;
    #[inline]
    fn next(&mut self) -> Option<T> {
        if self.ptr == self.end {
            None
        } else {
            let idx = unsafe { *self.ptr };
            self.ptr = unsafe { self.ptr.add(1) };
            if T::IS_ZST {
                Some(unsafe { mem::zeroed() })
            } else {
                Some(unsafe { ptr::read(self.buf.as_ptr().add(idx)) })
            }
        }
    }
}
unsafe impl<#[may_dangle] T, A: Allocator> Drop for IntoIter<'_, T, A> {
    fn drop(&mut self) {
        struct DropGuard<'a, 'b, T, A: Allocator>(&'a mut IntoIter<'b, T, A>);
        impl<T, A: Allocator> Drop for DropGuard<'_, '_, T, A> {
            fn drop(&mut self) {
                unsafe {
                    // `IntoIter::alloc` is not used anymore after this and will be dropped by RawVec
                    let alloc = ManuallyDrop::take(&mut self.0.alloc);
                    // RawVec handles deallocation
                    let _ = RawVec::from_raw_parts_in(self.0.buf.as_ptr(), self.0.cap, alloc);
                }
            }
        }
        let guard = DropGuard(self);
        // destroy the remaining elements (go through the guard's borrow, which stays alive until the end of drop)
        unsafe {
            while guard.0.ptr != guard.0.end {
                let idx = *guard.0.ptr;
                guard.0.ptr = guard.0.ptr.add(1);
                let p = if T::IS_ZST {
                    guard.0.buf.as_ptr().wrapping_byte_add(idx)
                } else {
                    guard.0.buf.as_ptr().add(idx)
                };
                ptr::drop_in_place(p);
            }
        }
        // now `guard` will be dropped and do the rest
    }
}
Example:
let src = vec![
    "0".to_string(),
    "1".to_string(),
    "2".to_string(),
    "3".to_string(),
    "4".to_string(),
];
let mut dst = vec![];
let iter = unsafe { into_iter_in_order(src, &[2, 1, 3, 0, 4]) };
for s in iter {
    dst.push(s);
}
assert_eq!(dst, vec!["2", "1", "3", "0", "4"]);
My previous answer:
use std::ptr;
pub unsafe fn consume_vec_in_order<T>(mut vec: Vec<T>, order: &[usize], mut cb: impl FnMut(T)) {
    // Check whether `order` contains all numbers in 0..len without repetition
    // or omission.
    if cfg!(debug_assertions) {
        use std::collections::HashSet;
        let n = order.len();
        if n != vec.len() {
            panic!("The length of `order` is not equal to that of `vec`.");
        }
        let mut set = HashSet::<usize>::new();
        for &idx in order {
            if idx >= n {
                panic!("`{idx}` in the `order` is out of range (0..{n}).");
            } else if set.contains(&idx) {
                panic!("`order` contains the repeated element `{idx}`");
            } else {
                set.insert(idx);
            }
        }
    }
    unsafe {
        for &idx in order {
            let s = ptr::read(vec.get_unchecked(idx));
            cb(s);
        }
        vec.set_len(0);
    }
}
Example:
let src = vec![
    "0".to_string(),
    "1".to_string(),
    "2".to_string(),
    "3".to_string(),
    "4".to_string(),
];
let mut dst = vec![];
unsafe {
    consume_vec_in_order(src, &[2, 1, 3, 0, 4], |elem| dst.push(elem));
}
assert_eq!(dst, vec!["2", "1", "3", "0", "4"]);

How can I intersperse a rust iterator with a value every n items?

I have an iterator of characters, and I want to add a newline every N characters:
let iter = "abcdefghijklmnopqrstuvwxyz".chars();
let iter_with_newlines = todo!();
let string: String = iter_with_newlines.collect();
assert_eq!("abcdefghij\nklmnopqrst\nuvwxyz", string);
So basically, I want to intersperse the iterator with a newline every n characters. How can I do this?
Some Ideas I had
It would be great if I could do something like this, where chunks would be a method to turn an Iterator<T> into an Iterator<Iterator<T>>: iter.chunks(10).intersperse('\n').flatten()
It would also be cool if I could do something like this: iter.chunks.intersperseEvery(10, '\n'), where intersperseEvery is a method that would only intersperse the value every n items.
You can do it without temporary allocation using enumerate and flat_map:
use either::Either;
fn main() {
    let iter = "abcdefghijklmnopqrstuvwxyz".chars();
    let iter_with_newlines = iter
        .enumerate()
        .flat_map(|(i, c)| {
            if i % 10 == 0 {
                Either::Left(['\n', c].into_iter())
            } else {
                Either::Right(std::iter::once(c))
            }
        })
        .skip(1); // The above code adds a newline in first position -> skip it
    let string: String = iter_with_newlines.collect();
    assert_eq!("abcdefghij\nklmnopqrst\nuvwxyz", string);
}
Playground
Here's what I ended up doing:
// src/intersperse_sparse.rs
use core::iter::Peekable;
/// An iterator adaptor to insert a particular value
/// every n elements of the adapted iterator.
///
/// Iterator element type is `I::Item`
pub struct IntersperseSparse<I>
where
    I: Iterator,
    I::Item: Clone,
{
    iter: Peekable<I>,
    step_length: usize,
    index: usize,
    separator: I::Item,
}
impl<I> IntersperseSparse<I>
where
    I: Iterator,
    I::Item: Clone,
{
    #[allow(unused)] // Although this function isn't explicitly exported, it is called in the default implementation of the IntersperseSparseAdapter, which is exported.
    fn new(iter: I, step_length: usize, separator: I::Item) -> Self {
        if step_length == 0 {
            panic!("Chunk size cannot be 0!")
        }
        Self {
            iter: iter.peekable(),
            step_length,
            separator,
            index: 0,
        }
    }
}
impl<I> Iterator for IntersperseSparse<I>
where
    I: Iterator,
    I::Item: Clone,
{
    type Item = I::Item;
    fn next(&mut self) -> Option<Self::Item> {
        if self.index == self.step_length && self.iter.peek().is_some() {
            self.index = 0;
            Some(self.separator.clone())
        } else {
            self.index += 1;
            self.iter.next()
        }
    }
}
/// An iterator adaptor to insert a particular value created by a function
/// every n elements of the adapted iterator.
///
/// Iterator element type is `I::Item`
pub struct IntersperseSparseWith<I, G>
where
    I: Iterator,
    G: FnMut() -> I::Item,
{
    iter: Peekable<I>,
    step_length: usize,
    index: usize,
    separator_closure: G,
}
impl<I, G> IntersperseSparseWith<I, G>
where
    I: Iterator,
    G: FnMut() -> I::Item,
{
    #[allow(unused)] // Although this function isn't explicitly exported, it is called in the default implementation of the IntersperseSparseAdapter, which is exported.
    fn new(iter: I, step_length: usize, separator_closure: G) -> Self {
        if step_length == 0 {
            panic!("Chunk size cannot be 0!")
        }
        Self {
            iter: iter.peekable(),
            step_length,
            separator_closure,
            index: 0,
        }
    }
}
impl<I, G> Iterator for IntersperseSparseWith<I, G>
where
    I: Iterator,
    G: FnMut() -> I::Item,
{
    type Item = I::Item;
    fn next(&mut self) -> Option<Self::Item> {
        if self.index == self.step_length && self.iter.peek().is_some() {
            self.index = 0;
            Some((self.separator_closure)())
        } else {
            self.index += 1;
            self.iter.next()
        }
    }
}
/// Import this trait to use `iter.intersperse_sparse(n, item)` and `iter.intersperse_sparse_with(n, || item)` on all iterators.
pub trait IntersperseSparseAdapter: Iterator {
    fn intersperse_sparse(self, chunk_size: usize, separator: Self::Item) -> IntersperseSparse<Self>
    where
        Self: Sized,
        Self::Item: Clone,
    {
        IntersperseSparse::new(self, chunk_size, separator)
    }
    fn intersperse_sparse_with<G>(
        self,
        chunk_size: usize,
        separator_closure: G,
    ) -> IntersperseSparseWith<Self, G>
    where
        Self: Sized,
        G: FnMut() -> Self::Item,
    {
        IntersperseSparseWith::new(self, chunk_size, separator_closure)
    }
}
impl<I> IntersperseSparseAdapter for I where I: Iterator {}
To use it:
// src/main.rs
mod intersperse_sparse;
use intersperse_sparse::IntersperseSparseAdapter;
fn main() {
    let string = "abcdefg";
    let new_string: String = string.chars().intersperse_sparse(3, '\n').collect();
    assert_eq!(new_string, "abc\ndef\ng");
}
If you don't particularly care about performance, you can use chunks from itertools, collect the chunks into Vecs, intersperse your separator as a single-element Vec, and finally flatten the whole thing.
use itertools::Itertools;
iter
    .chunks(3)
    .into_iter()
    .map(|chunk| chunk.collect::<Vec<_>>())
    .intersperse(vec![','])
    .flat_map(|chunk| chunk.into_iter())
    .collect::<String>();
Playground
Other than that, consider writing your own iterator extension trait, just as itertools does.
Build an Iterator with from_fn:
let mut iter = "abcdefghijklmnopqrstuvwxyz".chars().peekable();
let mut count = 0;
let iter_with_newlines = std::iter::from_fn(move || match iter.peek() {
Some(_) => {
if count < 10 {
count += 1;
iter.next()
} else {
count = 0;
Some('\n')
}
}
None => None,
});
assert_eq!(
"abcdefghij\nklmnopqrst\nuvwxyz",
iter_with_newlines.collect::<String>()
);
Playground

What is the idiomatic way to handle the actual data and not care about how the data is protected/wrapped?

This is valid code:
use std::rc::Rc;
use std::sync::{Arc, Mutex};
fn foo(n: i32) {
    println!("n = {}", n)
}
fn main() {
    let a = 1;
    let b = Rc::new(2);
    let c = Mutex::new(3);
    let d = Arc::new(Mutex::new(4));
    foo(a);
    foo(*b);
    foo(*(c.lock().unwrap()));
    foo(*((*d).lock().unwrap()));
}
Are there any traits (or anything else) that I can implement so that the function calls could simply become:
foo(a);
foo(b);
foo(c);
foo(d);
What is the Rust idiomatic way for handling the actual data and not caring about how the data is protected/wrapped?
As others have pointed out, it is a bad idea to ignore the fallibility of Mutex::lock. However, for the other cases, you can make your function accept owned values and references transparently by using Borrow:
use std::borrow::Borrow;
use std::rc::Rc;
fn foo(n: impl Borrow<i32>) {
    println!("n = {}", n.borrow())
}
fn main() {
    let a = 1;
    let b = Rc::new(2);
    let c = &3;
    foo(a);
    foo(b);
    foo(c);
}
Playground
Here is an extremely literal answer to your specific question. I wouldn't use it.
use std::{
    rc::Rc,
    sync::{Arc, Mutex},
};
fn foo(n: impl DontCare<Output = i32>) {
    let n = n.gimme_it();
    println!("n = {}", n)
}
fn main() {
    let a = 1;
    let b = Rc::new(2);
    let c = Mutex::new(3);
    let d = Arc::new(Mutex::new(4));
    foo(a);
    foo(b);
    foo(c);
    foo(d);
}
trait DontCare {
    type Output;
    fn gimme_it(self) -> Self::Output;
}
impl DontCare for i32 {
    type Output = Self;
    fn gimme_it(self) -> Self::Output {
        self
    }
}
impl<T: DontCare> DontCare for Mutex<T> {
    type Output = T::Output;
    fn gimme_it(self) -> Self::Output {
        self.into_inner()
            .expect("Lets first assume unwrap() will not panic")
            .gimme_it()
    }
}
impl<T: DontCare> DontCare for Rc<T> {
    type Output = T::Output;
    fn gimme_it(self) -> Self::Output {
        match Rc::try_unwrap(self) {
            Ok(v) => v.gimme_it(),
            _ => unreachable!("Lets first assume unwrap() will not panic"),
        }
    }
}
impl<T: DontCare> DontCare for Arc<T> {
    type Output = T::Output;
    fn gimme_it(self) -> Self::Output {
        match Arc::try_unwrap(self) {
            Ok(v) => v.gimme_it(),
            _ => unreachable!("Lets first assume unwrap() will not panic"),
        }
    }
}
The function signatures you've specified take ownership of the value. That will be highly painful, especially paired with any type that doesn't implement Copy.
There are a number of code paths that panic implicitly. I'm not a fan of baking in panics — I reserve that for algorithmic failures, not data-driven ones.
See also:
Why is it discouraged to accept a reference to a String (&String), Vec (&Vec), or Box (&Box) as a function argument?
My answer is similar to Shepmaster's answer, but maybe a little more practical since it doesn't consume the argument to foo2 and it gives you a reference to the target instead of taking it out of the container.
I never have any luck implementing traits based on other traits so I didn't try to do that here.
use std::ops::Deref;
use std::rc::Rc;
use std::sync::{Arc, Mutex, MutexGuard};
fn foo(n: i32) {
    println!("n = {}", n)
}
trait Holding<T> {
    type Holder: Deref<Target = T>;
    fn held(self) -> Self::Holder;
}
fn foo2<H: Holding<i32>>(x: H) {
    let h = x.held();
    println!("x = {}", *h);
}
impl<'a, T: 'a> Holding<T> for &'a T {
    type Holder = &'a T;
    fn held(self) -> Self::Holder {
        self
    }
}
impl<'a, T> Holding<T> for &'a Rc<T> {
    type Holder = &'a T;
    fn held(self) -> Self::Holder {
        &**self
    }
}
impl<'a, T> Holding<T> for &'a Mutex<T> {
    type Holder = MutexGuard<'a, T>;
    fn held(self) -> Self::Holder {
        // this can panic
        self.lock().unwrap()
    }
}
impl<'a, T> Holding<T> for &'a Arc<Mutex<T>> {
    type Holder = MutexGuard<'a, T>;
    fn held(self) -> Self::Holder {
        // this can panic
        (*self).lock().unwrap()
    }
}
fn main() {
    let a = 1;
    let b = Rc::new(2);
    let c = Mutex::new(3);
    let d = Arc::new(Mutex::new(4));
    foo(a);
    foo(*b);
    foo(*(c.lock().unwrap()));
    foo(*((*d).lock().unwrap()));
    foo2(&a);
    foo2(&b);
    foo2(&c);
    foo2(&d);
}

What is the right way to share a reference between closures if the value outlives the closures?

I want to share a reference between two closures; mutably in one closure. This is an artificial situation, but I find it interesting in the context of learning Rust.
In order to make it work, I had to make use of Rc, Weak, and RefCell. Is there a simpler way of achieving this?
use std::cell::RefCell;
use std::rc::Rc;
#[derive(Debug)]
struct Foo {
    i: i32,
}
impl Foo {
    fn get(&self) -> i32 {
        self.i
    }
    fn incr(&mut self) {
        self.i += 1
    }
}
fn retry<O, N>(mut operation: O, mut notify: N) -> i32
where
    O: FnMut() -> i32,
    N: FnMut() -> (),
{
    operation();
    notify();
    operation()
}
fn something(f: &mut Foo) {
    let f_rc = Rc::new(RefCell::new(f));
    let f_weak = Rc::downgrade(&f_rc);
    let operation = || {
        // f.get()
        let cell = f_weak.upgrade().unwrap();
        let f = cell.borrow();
        f.get()
    };
    let notify = || {
        // f.incr();
        let cell = f_weak.upgrade().unwrap();
        let mut f = cell.borrow_mut();
        f.incr();
    };
    retry(operation, notify);
    println!("{:?}", f_rc);
}
fn main() {
    let mut f = Foo { i: 1 };
    something(&mut f);
}
Reference counting is unnecessary here because the entity lives longer than any of the closures. You can get away with:
fn something(f: &mut Foo) {
    let f = RefCell::new(f);
    let operation = || f.borrow().get();
    let notify = || {
        f.borrow_mut().incr();
    };
    retry(operation, notify);
    println!("{:?}", f);
}
which is quite a bit simpler.
The use of RefCell, however, is necessary to move the enforcement of the aliasing-XOR-mutability rule from compile time to run time.
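To make that trade-off concrete, here is a small standalone sketch (my addition, not part of the original answer) showing that RefCell's borrow rules are enforced at run time rather than compile time:
use std::cell::RefCell;
fn main() {
    let cell = RefCell::new(1);
    let shared = cell.borrow(); // shared borrow, tracked at run time
    // Calling cell.borrow_mut() here would compile, but panic: already borrowed.
    drop(shared);
    *cell.borrow_mut() += 1; // fine once the shared borrow is released
    println!("{}", cell.borrow()); // prints 2
}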

How do I iterate over a range with a custom step?

How can I iterate over a range in Rust with a step other than 1? I'm coming from a C++ background so I'd like to do something like
for(auto i = 0; i <= n; i+=2) {
    //...
}
In Rust I need to use the range function, and it doesn't seem like there is a third argument available for having a custom step. How can I accomplish this?
range_step_inclusive and range_step are long gone.
As of Rust 1.28, Iterator::step_by is stable:
fn main() {
    for x in (1..10).step_by(2) {
        println!("{}", x);
    }
}
It seems to me that until the .step_by method is made stable, one can easily accomplish what you want with an Iterator (which is what Ranges really are anyway):
struct SimpleStepRange(isize, isize, isize); // start, end, and step
impl Iterator for SimpleStepRange {
    type Item = isize;
    #[inline]
    fn next(&mut self) -> Option<isize> {
        if self.0 < self.1 {
            let v = self.0;
            self.0 = v + self.2;
            Some(v)
        } else {
            None
        }
    }
}
fn main() {
    for i in SimpleStepRange(0, 10, 2) {
        println!("{}", i);
    }
}
If one needs to iterate multiple ranges of different types, the code can be made generic as follows:
use std::ops::Add;
struct StepRange<T>(T, T, T)
where
    for<'a> &'a T: Add<&'a T, Output = T>,
    T: PartialOrd,
    T: Clone;
impl<T> Iterator for StepRange<T>
where
    for<'a> &'a T: Add<&'a T, Output = T>,
    T: PartialOrd,
    T: Clone,
{
    type Item = T;
    #[inline]
    fn next(&mut self) -> Option<T> {
        if self.0 < self.1 {
            let v = self.0.clone();
            self.0 = &v + &self.2;
            Some(v)
        } else {
            None
        }
    }
}
fn main() {
    for i in StepRange(0u64, 10u64, 2u64) {
        println!("{}", i);
    }
}
I'll leave it to you to eliminate the upper bounds check to create an open-ended structure if an infinite loop is required...
Advantages of this approach are that it works with for sugaring and will continue to work even when unstable features become usable; also, unlike the de-sugared approach using the standard Ranges, it doesn't lose efficiency through multiple .next() calls. Disadvantages are that it takes a few lines of code to set up the iterator, so it may only be worth it for code that has a lot of loops.
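For reference, a sketch of the open-ended variant mentioned above (my addition, not part of the original answer): drop the end bound so next never returns None, and bound the loop at the call site instead.
struct OpenStepRange(isize, isize); // current value and step
impl Iterator for OpenStepRange {
    type Item = isize;
    #[inline]
    fn next(&mut self) -> Option<isize> {
        let v = self.0;
        self.0 = v + self.1;
        Some(v) // never None: the caller decides when to stop
    }
}
fn main() {
    for i in OpenStepRange(0, 2).take(5) {
        println!("{}", i); // 0, 2, 4, 6, 8
    }
}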
If you are stepping by something predefined and small, like 2, you may wish to advance the iterator manually, e.g.:
let mut iter = 1..10;
loop {
    match iter.next() {
        Some(x) => {
            println!("{}", x);
        },
        None => break,
    }
    iter.next();
}
You could even use this to step by an arbitrary amount (although this is definitely getting longer and harder to digest):
let mut iter = 1..10;
let step = 4;
loop {
    match iter.next() {
        Some(x) => {
            println!("{}", x);
        },
        None => break,
    }
    for _ in 0..step-1 {
        iter.next();
    }
}
Use the num crate with range_step
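A minimal sketch of that, assuming the num crate (which re-exports range_step from num-iter) has been added to Cargo.toml:
use num::range_step;
fn main() {
    // Prints 0, 2, 4, 6, 8; the range is half-open, so 10 is excluded.
    for i in range_step(0, 10, 2) {
        println!("{}", i);
    }
}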
You'd write your C++ code:
for (auto i = 0; i <= n; i += 2) {
    //...
}
...in Rust like so:
let mut i = 0;
while i <= n {
    // ...
    i += 2;
}
I think the Rust version is more readable too.
