Is there any built-in way to "combine" two Options?

In the following sample program, is there any way I could avoid having to define map2?
fn map2<T, U, V, F: Fn(T, U) -> V>(f: F, a: Option<T>, b: Option<U>) -> Option<V> {
    match a {
        Some(x) => match b {
            Some(y) => Some(f(x, y)),
            None => None,
        },
        None => None,
    }
}
fn main() {
    let a = Some(5);
    let b = Some(10);
    let f = |a, b| {
        a + b
    };
    let res = map2(f, a, b);
    println!("{:?}", res);
    // prints Some(15)
}
For people who also speak Haskell, I guess this question could also be phrased as "Is there any tool we can use instead of liftM2 in Rust?"

I don't believe there's a direct function equivalent to liftM2, but you can combine Option::and_then and Option::map like this:
fn main() {
    let a = Some(5);
    let b = Some(10);
    let f = |a, b| {
        a + b
    };
    println!("{:?}", a.and_then(|a| b.map(|b| f(a, b))));
}
Output:
Some(15)
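If you still want a reusable helper with the question's map2 signature, the same and_then/map combination works generically; a minimal sketch:
fn map2<T, U, V, F: Fn(T, U) -> V>(f: F, a: Option<T>, b: Option<U>) -> Option<V> {
    // Some(f(x, y)) only if both a and b are Some; otherwise None.
    a.and_then(|x| b.map(|y| f(x, y)))
}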

As of Rust 1.46.0, you can use Option::zip:
fn map2<T, U, V, F: Fn(T, U) -> V>(f: F, a: Option<T>, b: Option<U>) -> Option<V> {
    match a.zip(b) {
        Some((x, y)) => Some(f(x, y)),
        None => None,
    }
}
This can be combined with Option::map, as shown in other answers:
fn map2<T, U, V, F: Fn(T, U) -> V>(f: F, a: Option<T>, b: Option<U>) -> Option<V> {
    a.zip(b).map(|(x, y)| f(x, y))
}
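For reference, a quick check that this version behaves like the original map2 (a sketch reusing the question's example values):
fn main() {
    // Both values present: the closure is applied.
    let res = map2(|a, b| a + b, Some(5), Some(10));
    println!("{:?}", res); // Some(15)

    // One value missing: the whole result is None.
    let none = map2(|a, b| a + b, Some(5), None::<i32>);
    println!("{:?}", none); // None
}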

I don't know if you can get down to one line (Edit: oh the accepted answer gets it down to one line nicely), but you can avoid the nested match by matching on a tuple:
let a = Some(5);
let b = Some(10);
let f = |a, b| {
    a + b
};
let res = match (a, b) {
    (Some(a), Some(b)) => Some(f(a, b)),
    _ => None,
};
println!("{:?}", res);
// prints Some(15)

let num_maybe = Some(5);
let num_maybe2 = Some(10);
let f = |a, b| {
    a + b
};
Option 1
if let (Some(a), Some(b)) = (num_maybe, num_maybe2) {
    f(a, b);
}
Option 2
num_maybe.and_then(|a| num_maybe2.map(|b| f(a, b)))
Option 3
[num_maybe, num_maybe2].into_iter().flatten().fold(0, f)
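Note that these three forms are not equivalent: Option 1 is a statement and binds nothing outside the if let, Option 2 yields an Option<i32>, and Option 3 folds into a plain i32 (0 if both are None). A quick check of Option 2, which matches the question's semantics most closely, assuming the bindings above:
fn main() {
    let num_maybe = Some(5);
    let num_maybe2 = Some(10);
    let f = |a, b| a + b;
    // and_then + map yields Some(f(a, b)) only when both values are present.
    let res = num_maybe.and_then(|a| num_maybe2.map(|b| f(a, b)));
    println!("{:?}", res); // prints Some(15)
}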

You can use an immediately invoked function expression (IIFE) combined with the ? (try) operator:
fn main() {
    let a = Some(5);
    let b = Some(10);
    let f = |a, b| a + b;
    let res = (|| Some(f(a?, b?)))();
    println!("{:?}", res);
}
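The same trick also gives a one-line body for the question's map2 helper; a sketch:
fn map2<T, U, V, F: Fn(T, U) -> V>(f: F, a: Option<T>, b: Option<U>) -> Option<V> {
    // `?` returns None from the inner closure, which is invoked immediately.
    (|| Some(f(a?, b?)))()
}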
In the future, you can use try blocks:
#![feature(try_blocks)]

fn main() {
    let a = Some(5);
    let b = Some(10);
    let f = |a, b| a + b;
    let res: Option<_> = try { f(a?, b?) };
    println!("{:?}", res);
}
See also:
Is there a more ergonomic syntax for Either when using futures?
Is there a shortcut to unwrap or continue in a loop?

You can use the fact that Options can be iterated over. Iterate over both options, zip the iterators together, and map your function over the result.
fn main() {
    let a = Some(5);
    let b = Some(10);
    let f = |(a, b)| {
        a + b
    };
    let res = a.iter().zip(b.iter()).map(f).next();
    println!("{:?}", res);
    // prints Some(15)
}
This requires modifying f so that its two arguments are merged into a single tuple argument. It would be possible to leave f unchanged by mapping over |args| f.call(args) instead, but then you would have to specify the closure kind of f.
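A middle ground that avoids both unstable features and changing f is a tiny adapter closure that destructures the tuple; a rough sketch:
fn main() {
    let a = Some(5);
    let b = Some(10);
    let f = |a, b| a + b; // unchanged two-argument closure
    // The adapter |(&x, &y)| unpacks the zipped tuple of references.
    let res = a.iter().zip(b.iter()).map(|(&x, &y)| f(x, y)).next();
    println!("{:?}", res); // prints Some(15)
}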

I stumbled upon this thread and didn't find the most obvious and straightforward one-liner solution based on zip.
let one = Some(1);
let two = Some(2);
let sum = one.zip(two).map(|(a, b)| a + b);
assert_eq!(sum, Some(3));
let two: Option<i32> = None;
let sum = one.zip(two).map(|(a, b)| a + b);
assert_eq!(sum, None);
There's also the zip_with variant which is marked as unstable right now.
let sum = one.zip_with(two, |a, b| a + b);
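If I recall correctly, zip_with is gated behind the unstable option_zip feature, so trying it on nightly would look roughly like this:
#![feature(option_zip)]

fn main() {
    let one = Some(1);
    let two = Some(2);
    // zip_with applies the closure only when both Options are Some.
    assert_eq!(one.zip_with(two, |a, b| a + b), Some(3));
    assert_eq!(one.zip_with(None::<i32>, |a, b| a + b), None);
}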

Related

How to declare generic types for a function that computes k-shortest-paths using Yen's algorithm and petgraph?

I have implemented Yen's algorithm (Wikipedia) using petgraph in Rust.
In a main function, the code looks like this:
use std::collections::BinaryHeap;
use std::cmp::Reverse;
use std::collections::HashSet;
use petgraph::{Graph, Undirected};
use petgraph::graph::NodeIndex;
use petgraph::stable_graph::StableUnGraph;
use petgraph::algo::astar;
use petgraph::visit::NodeRef;

fn main() {
    let mut graph: Graph<String, u32, Undirected> = Graph::new_undirected();
    let c = graph.add_node(String::from("C"));
    let d = graph.add_node(String::from("D"));
    let e = graph.add_node(String::from("E"));
    let f = graph.add_node(String::from("F"));
    let g = graph.add_node(String::from("G"));
    let h = graph.add_node(String::from("H"));
    graph.add_edge(c, d, 3);
    graph.add_edge(c, e, 2);
    graph.add_edge(d, e, 1);
    graph.add_edge(d, f, 4);
    graph.add_edge(e, f, 2);
    graph.add_edge(e, g, 3);
    graph.add_edge(f, g, 2);
    graph.add_edge(f, h, 1);
    graph.add_edge(g, h, 2);
    let start = c;
    let goal = h;
    // start solving Yen's k-shortest-paths
    let (length, path) = match astar(&graph, start, |n| n == goal, |e| *e.weight(), |_| 0) {
        Some(x) => x,
        None => panic!("Testing!"),
    };
    println!("Initial path found\tlength: {}", length);
    for i in 0..(path.len() - 1) {
        println!("\t{:?}({:?}) -> {:?}({:?})", graph.node_weight(path[i].id()).unwrap(), path[i].id(), graph.node_weight(path[i + 1].id()).unwrap(), path[i + 1].id());
    }
    let k = 10;
    let mut visited = HashSet::new();
    let mut routes = vec![(length, path)];
    let mut k_routes = BinaryHeap::new();
    for ki in 0..(k - 1) {
        println!("Computing path {}", ki);
        if routes.len() <= ki {
            // We have no more routes to explore
            break;
        }
        let previous = routes[ki].1.clone();
        for i in 0..(previous.len() - 1) {
            let spur_node = previous[i].clone();
            let root_path = &previous[0..i];
            let mut graph_copy = StableUnGraph::<String, u32>::from(graph.clone());
            println!("\tComputing pass {}\tspur: {:?}\troot: {:?}", i, graph.node_weight(spur_node), root_path.iter().map(|n| graph.node_weight(*n).unwrap()));
            for (_, path) in &routes {
                if path.len() > i + 1 && &path[0..i] == root_path {
                    let ei = graph.find_edge_undirected(path[i], path[i + 1]);
                    if ei.is_some() {
                        let edge = ei.unwrap().0;
                        graph_copy.remove_edge(edge);
                        let edge_obj = graph.edge_endpoints(edge);
                        let ns = edge_obj.unwrap();
                        println!("\t\tRemoving edge {:?} from {:?} -> {:?}", edge, graph.node_weight(ns.0).unwrap(), graph.node_weight(ns.1).unwrap());
                    } else {
                        panic!("\t\tProblem finding edge");
                    }
                }
            }
            if let Some((_, spur_path)) =
                astar(&graph_copy, spur_node, |n| n == goal, |e| *e.weight(), |_| 0)
            {
                let nodes: Vec<NodeIndex> = root_path.iter().cloned().chain(spur_path).collect();
                let mut node_names = vec![];
                for ni in 0..nodes.len() {
                    node_names.push(graph.node_weight(nodes[ni]).unwrap());
                }
                // compute root_path length
                let mut path_length = 0;
                for i_rp in 0..(nodes.len() - 1) {
                    let ei = graph.find_edge_undirected(nodes[i_rp], nodes[i_rp + 1]);
                    if ei.is_some() {
                        let ew = graph.edge_weight(ei.unwrap().0);
                        if ew.is_some() {
                            path_length += ew.unwrap();
                        }
                    }
                }
                println!("\t\t\tfound path: {:?} with cost {}", node_names, path_length);
                if !visited.contains(&nodes) {
                    // Mark as visited
                    visited.insert(nodes.clone());
                    // Build a min-heap
                    k_routes.push(Reverse((path_length, nodes)));
                }
            }
        }
        if let Some(k_route) = k_routes.pop() {
            println!("\tselected route {:?}", k_route.0);
            routes.push(k_route.0);
        }
    }
}
Now I want to put this algorithm into a function that I can call from my code. I made an initial attempt with a signature like this:
pub fn yen_k_shortest_paths<G, E, Ty, Ix, F, K>(
    graph: Graph<String, u32, Undirected>,
    start: NodeIndex<u32>,
    goal: NodeIndex<u32>,
    mut edge_cost: F,
    k: usize,
) -> Result<Vec<(u32, Vec<NodeIndex<u32>>)>, Box<dyn std::error::Error>>
where
    G: IntoEdges + Visitable,
    Ty: EdgeType,
    Ix: IndexType,
    E: Default + Debug + std::ops::Add,
    F: FnMut(G::EdgeRef) -> K,
    K: Measure + Copy,
{
    // implementation here
}
However, when I try to call the function with:
let paths = yen::yen_k_shortest_paths(graph, start, goal, |e: EdgeReference<u32>| *e.weight(), 5);
the compiler complains:
error: type annotations needed: cannot satisfy `<_ as IntoEdgeReferences>::EdgeRef == petgraph::graph::EdgeReference<'_, u32>`
I already tried several alternatives without success. Do you have any suggestion on how to fix this issue?
The issue with the yen_k_shortest_paths() function signature as written is that the generic type parameters aren't used correctly. As an example, consider the first declared type parameter on yen_k_shortest_paths(): G, which is intended to represent the graph type. Declaring G like this means that the code that calls yen_k_shortest_paths() gets to pick the graph type G. But the graph argument is declared with the concrete type Graph<String, u32, Undirected>, so the caller has no choice. This contradiction is the problem with G. Similar reasoning applies to the other type parameters, except F and K. There are two ways to fix this kind of issue:
1. Keep the graph argument as Graph<String, u32, Undirected> and remove the G type parameter.
2. Change the graph argument to take a G.
Approach #1 is simpler, but your function won't be as general. Approach #2 can require adding extra bounds and making some changes in the function body for the code to compile.
In this case, the simplest approach doesn't need any type parameters at all:
fn yen_k_shortest_paths(
    graph: &Graph<String, u32, Undirected>,
    start: NodeIndex<u32>,
    goal: NodeIndex<u32>,
    edge_cost: fn(EdgeReference<u32>) -> u32,
    k: usize,
) -> Vec<(u32, Vec<NodeIndex<u32>>)> {...}
Here's the full code, which can be run:
use std::cmp::Reverse;
use std::collections::BinaryHeap;
use std::collections::HashSet;

use petgraph::algo::astar;
use petgraph::graph::{EdgeReference, NodeIndex};
use petgraph::stable_graph::StableUnGraph;
use petgraph::visit::NodeRef;
use petgraph::{Graph, Undirected};

fn main() {
    let mut graph: Graph<String, u32, Undirected> = Graph::new_undirected();
    let c = graph.add_node(String::from("C"));
    let d = graph.add_node(String::from("D"));
    let e = graph.add_node(String::from("E"));
    let f = graph.add_node(String::from("F"));
    let g = graph.add_node(String::from("G"));
    let h = graph.add_node(String::from("H"));
    graph.add_edge(c, d, 3);
    graph.add_edge(c, e, 2);
    graph.add_edge(d, e, 1);
    graph.add_edge(d, f, 4);
    graph.add_edge(e, f, 2);
    graph.add_edge(e, g, 3);
    graph.add_edge(f, g, 2);
    graph.add_edge(f, h, 1);
    graph.add_edge(g, h, 2);
    let start = c;
    let goal = h;
    let edge_cost = |e: EdgeReference<u32>| *e.weight();
    let k = 10;
    let _paths = yen_k_shortest_paths(&graph, start, goal, edge_cost, k);
}

fn yen_k_shortest_paths(
    graph: &Graph<String, u32, Undirected>,
    start: NodeIndex<u32>,
    goal: NodeIndex<u32>,
    edge_cost: fn(EdgeReference<u32>) -> u32,
    k: usize,
) -> Vec<(u32, Vec<NodeIndex<u32>>)> {
    let (length, path) = match astar(graph, start, |n| n == goal, edge_cost, |_| 0) {
        Some(x) => x,
        None => panic!("Testing!"),
    };
    println!("Initial path found\tlength: {}", length);
    for i in 0..(path.len() - 1) {
        println!(
            "\t{:?}({:?}) -> {:?}({:?})",
            graph.node_weight(path[i].id()).unwrap(),
            path[i].id(),
            graph.node_weight(path[i + 1].id()).unwrap(),
            path[i + 1].id()
        );
    }
    let mut visited = HashSet::new();
    let mut routes = vec![(length, path)];
    let mut k_routes = BinaryHeap::new();
    for ki in 0..(k - 1) {
        println!("Computing path {}", ki);
        if routes.len() <= ki {
            // We have no more routes to explore
            break;
        }
        let previous = routes[ki].1.clone();
        for i in 0..(previous.len() - 1) {
            let spur_node = previous[i];
            let root_path = &previous[0..i];
            let mut graph_copy = StableUnGraph::from(graph.clone());
            println!(
                "\tComputing pass {}\tspur: {:?}\troot: {:?}",
                i,
                graph.node_weight(spur_node),
                root_path
                    .iter()
                    .map(|n| graph.node_weight(*n).unwrap())
                    .collect::<Vec<_>>()
            );
            for (_, path) in &routes {
                if path.len() > i + 1 && &path[0..i] == root_path {
                    let ei = graph.find_edge_undirected(path[i], path[i + 1]);
                    if let Some(ei) = ei {
                        let edge = ei.0;
                        graph_copy.remove_edge(edge);
                        let edge_obj = graph.edge_endpoints(edge);
                        let ns = edge_obj.unwrap();
                        println!(
                            "\t\tRemoving edge {:?} from {:?} -> {:?}",
                            edge,
                            graph.node_weight(ns.0).unwrap(),
                            graph.node_weight(ns.1).unwrap()
                        );
                    } else {
                        panic!("\t\tProblem finding edge");
                    }
                }
            }
            if let Some((_, spur_path)) = astar(
                &graph_copy,
                spur_node,
                |n| n == goal,
                |e| *e.weight(),
                |_| 0,
            ) {
                let nodes: Vec<NodeIndex> = root_path.iter().cloned().chain(spur_path).collect();
                let mut node_names = vec![];
                for &node in &nodes {
                    node_names.push(graph.node_weight(node).unwrap());
                }
                // compute root_path length
                let mut path_length = 0;
                for i_rp in 0..(nodes.len() - 1) {
                    let ei = graph.find_edge_undirected(nodes[i_rp], nodes[i_rp + 1]);
                    if let Some(ei) = ei {
                        let ew = graph.edge_weight(ei.0);
                        if let Some(&ew) = ew {
                            path_length += ew;
                        }
                    }
                }
                println!(
                    "\t\t\tfound path: {:?} with cost {}",
                    node_names, path_length
                );
                if !visited.contains(&nodes) {
                    // Mark as visited
                    visited.insert(nodes.clone());
                    // Build a min-heap
                    k_routes.push(Reverse((path_length, nodes)));
                }
            }
        }
        if let Some(k_route) = k_routes.pop() {
            println!("\tselected route {:?}", k_route.0);
            routes.push(k_route.0);
        }
    }
    routes
}
As another example of a possible function signature, this one is generic over the node type N and the edge cost function F:
fn yen_k_shortest_paths<'a, N, F>(
    graph: &'a Graph<N, u32, Undirected>,
    start: NodeIndex<u32>,
    goal: NodeIndex<u32>,
    edge_cost: F,
    k: usize,
) -> Vec<(u32, Vec<NodeIndex<u32>>)>
where
    &'a Graph<N, u32, Undirected>:
        GraphBase<NodeId = NodeIndex<u32>> + IntoEdgeReferences<EdgeRef = EdgeReference<'a, u32>>,
    N: Debug + Clone,
    F: FnMut(EdgeReference<u32>) -> u32,
{...}
As you can see, these bounds can get pretty complicated. Figuring them out involved reading the error messages the compiler emitted, as well as reading the docs for the involved types/traits. (Although I think in this case the complicated bound &'a Graph<N, u32, Undirected>: GraphBase<NodeId = NodeIndex<u32>> + IntoEdgeReferences<EdgeRef = EdgeReference<'a, u32>> should be inferred, but currently isn't due to a compiler bug/limitation.)
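With that signature, the call from the question should work essentially unchanged; a sketch, assuming the same graph, start, and goal as in the full example above:
let paths = yen_k_shortest_paths(&graph, start, goal, |e: EdgeReference<u32>| *e.weight(), 5);
println!("{} route(s) found", paths.len());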

How to write a macro that splits a byte into a tuple of bits of user-specified count?

I would like to have a macro that splits one byte into a tuple of 2-8 u8 parts, using the bitreader crate.
I managed to achieve that with the following code:
use bitreader::BitReader;

trait Tupleprepend<T> {
    type ResultType;
    fn prepend(self, t: T) -> Self::ResultType;
}

macro_rules! impl_tuple_prepend {
    ( () ) => {};
    ( ( $t0:ident $(, $types:ident)* ) ) => {
        impl<$t0, $($types,)* T> Tupleprepend<T> for ($t0, $($types,)*) {
            type ResultType = (T, $t0, $($types,)*);
            fn prepend(self, t: T) -> Self::ResultType {
                let ($t0, $($types,)*) = self;
                (t, $t0, $($types,)*)
            }
        }
        impl_tuple_prepend! { ($($types),*) }
    };
}

impl_tuple_prepend! {
    (_1, _2, _3, _4, _5, _6, _7, _8)
}

macro_rules! split_byte (
    ($reader:ident, $bytes:expr, $count:expr) => {{
        ($reader.read_u8($count).unwrap(),)
    }};
    ($reader:ident, $bytes:expr, $count:expr, $($next_counts:expr),+) => {{
        let head = split_byte!($reader, $bytes, $count);
        let tail = split_byte!($reader, $bytes, $($next_counts),+);
        tail.prepend(head.0)
    }};
    ($bytes:expr $(, $count:expr)* ) => {{
        let mut reader = BitReader::new($bytes);
        split_byte!(reader, $bytes $(, $count)+)
    }};
);
Now I can use this code as I would like to:
let buf: &[u8] = &[0x72];
let (bit1, bit2, bits3to8) = split_byte!(&buf, 1, 1, 6);
Is there a way to avoid using Tupleprepend trait and create only 1 tuple instead of 8 in the worst scenario?
Because the number of bit widths directly corresponds to the number of returned values, I'd solve the problem using generics and arrays instead. The macro only exists to remove the typing of the [], which I don't really think is worth it.
fn split_byte<A>(b: u8, bit_widths: A) -> A
where
    A: Default + std::ops::IndexMut<usize, Output = u8>,
    for<'a> &'a A: IntoIterator<Item = &'a u8>,
{
    let mut result = A::default();
    let mut start = 0;
    for (idx, &width) in bit_widths.into_iter().enumerate() {
        let shifted = b >> (8 - width - start);
        let mask = (0..width).fold(0, |a, _| (a << 1) | 1);
        result[idx] = shifted & mask;
        start += width;
    }
    result
}

macro_rules! split_byte {
    ($b:expr, $($w:expr),+) => (split_byte($b, [$($w),+]));
}

fn main() {
    let [bit1, bit2, bits3_to_8] = split_byte!(0b1010_1010, 1, 1, 6);
    assert_eq!(bit1, 0b1);
    assert_eq!(bit2, 0b0);
    assert_eq!(bits3_to_8, 0b10_1010);
}
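For comparison, without the macro the call site is just the function plus an array literal, which is all the macro removes; a small sketch reusing the split_byte function above:
fn main() {
    // Same result as split_byte!(0b1010_1010, 1, 1, 6), with the [] written out.
    let [bit1, bit2, bits3_to_8] = split_byte(0b1010_1010, [1u8, 1, 6]);
    assert_eq!((bit1, bit2, bits3_to_8), (0b1, 0b0, 0b10_1010));
}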
See also:
How does for<> syntax differ from a regular lifetime bound?
How to write a trait bound for adding two references of a generic type?
How do I write the lifetimes for references in a type constraint when one of them is a local reference?
If it's ok to target nightly Rust, I'd use the unstable min_const_generics feature:
#![feature(min_const_generics)]

fn split_byte<const N: usize>(b: u8, bit_widths: [u8; N]) -> [u8; N] {
    let mut result = [0; N];
    let mut start = 0;
    for (idx, &width) in bit_widths.iter().enumerate() {
        let shifted = b >> (8 - width - start);
        let mask = (0..width).fold(0, |a, _| (a << 1) | 1);
        result[idx] = shifted & mask;
        start += width;
    }
    result
}

macro_rules! split_byte {
    ($b:expr, $($w:expr),+) => (split_byte($b, [$($w),+]));
}

fn main() {
    let [bit1, bit2, bits3_to_8] = split_byte!(0b1010_1010, 1, 1, 6);
    assert_eq!(bit1, 0b1);
    assert_eq!(bit2, 0b0);
    assert_eq!(bits3_to_8, 0b10_1010);
}
See also:
Is it possible to control the size of an array using the type parameter of a generic?

Alternatives for using the question mark operator inside a map function closure

In this function, parse can return an error, so I use .filter_map(Result::ok) to filter the errors out.
fn part1(input: &str) {
    let sum = input.lines()
        .map(|l| l.parse::<u32>())
        .filter_map(Result::ok)
        .map(|n| n as f32 / 3.0)
        .map(|f| f.round())
        .map(|f| f as u32 - 2)
        .sum::<u32>();
    // println!("{}", sum);
    println!("{:?}", sum);
}
However, I would like to return out of the part1 function when parse gives an error, kind of like using the question mark operator like this .map(|l| l.parse::<u32>()?). If this is done the compiler gives the error
error[E0277]: the `?` operator can only be used in a closure that returns `Result`
or `Option` (or another type that implements `std::ops::Try`)
--> src/main.rs:64:18
|
64 | .map(|l| l.parse::<u32>()?)
| ----^^^^^^^^^^^^^^^^^
| | |
| | cannot use the `?` operator in a closure that returns `u32`
| this function should return `Result` or `Option` to accept `?`
Is this because the question mark operator is used inside a closure so it returns out of the closure instead of the enclosing function? What are some idiomatic alternatives to using the question mark operator inside the closure so that I can return out of part1 if parse gives an error or unwrap the Ok if parse is successful? The result should be similar to .filter_map(Result::ok), except instead of filtering out the errors it will return out of the enclosing function when there is an error.
You can just keep passing the Result from parse further down the chain and let the final sum handle it, since Sum is implemented for Result. Then you can use ? on the final result of the chain.
An example would look like this:
fn part1(input: &str) -> Result<u32, std::num::ParseIntError> {
    let sum = input.lines()
        .map(|l| l.parse::<u32>())
        .map(|n| n.map(|n| n as f32 / 3.0))
        .map(|f| f.map(|f| f.round()))
        .map(|f| f.map(|f| f as u32 - 2))
        .sum::<Result<u32, _>>()?;
    Ok(sum)
}
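Worth noting: the Sum implementation for Result short-circuits, so the first Err stops the iteration and becomes the function's return value. A small standalone illustration of that behaviour:
fn main() {
    // All items parse: the Ok values are summed.
    let ok: Result<u32, std::num::ParseIntError> =
        ["1", "2", "3"].iter().map(|s| s.parse::<u32>()).sum();
    // One item fails: the whole sum becomes that Err.
    let err: Result<u32, std::num::ParseIntError> =
        ["1", "x", "3"].iter().map(|s| s.parse::<u32>()).sum();
    println!("{:?} {:?}", ok, err); // prints Ok(6) and the parse error
}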
If you're using nightly Rust, you can get rid of the nested closures using a try block:
#![feature(try_blocks)]

fn part1(input: &str) -> Result<u32, std::num::ParseIntError> {
    let sum = input.lines()
        .map(|l| try {
            let n = l.parse::<u32>()?;
            let f = n as f32 / 3.0;
            let f = f.round();
            f as u32 - 2
        })
        .sum::<Result<u32, _>>()?;
    Ok(sum)
}
If you are not using nightly you can extract the processing into a closure that returns a Result.
fn part1(input: &str) -> Result<u32, std::num::ParseIntError> {
    let process_line = |l: &str| -> Result<u32, std::num::ParseIntError> {
        let n = l.parse::<u32>()?;
        let f = n as f32 / 3.0;
        let f = f.round();
        Ok(f as u32 - 2)
    };
    let sum = input.lines().map(process_line).sum::<Result<u32, _>>()?;
    Ok(sum)
}
I'm also assuming that your real use case is somewhat more complicated than you've presented here. For something this simple I'd just use a for loop
fn part1(input: &str) -> Result<u32, std::num::ParseIntError> {
    let mut sum = 0;
    for line in input.lines() {
        let n = line.parse::<u32>()?;
        let f = n as f32 / 3.0;
        let f = f.round();
        sum += f as u32 - 2;
    }
    Ok(sum)
}
The multiple calls to map might make some solutions feel cluttered.
Instead, all your math could be performed in a single call to map, which is then used with sum:
fn part1(input: &str) -> Result<(), std::num::ParseIntError> {
    let sum = input.lines()
        .map(|l| {
            let n = l.parse::<u32>()?;
            let mut f = n as f32 / 3.0;
            f = f.round();
            Ok(f as u32 - 2)
        })
        .sum::<Result<u32, _>>()?;
    // println!("{}", sum);
    println!("{:?}", sum);
    Ok(())
}
But you could then go further by removing the ? and using map on the Result. If you do this along with returning a value from your function, you don't even need the explicit type parameter to sum:
fn part1(input: &str) -> Result<u32, std::num::ParseIntError> {
    input.lines()
        .map(|l| {
            l.parse::<u32>().map(|n| {
                let mut f = n as f32 / 3.0;
                f = f.round();
                f as u32 - 2
            })
        })
        .sum()
}
You would then have to call println outside of the function.
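For example, a possible caller might look like this (the input string is just a hypothetical example):
fn main() {
    // part1 now returns a Result, so the caller decides what to do with an Err.
    match part1("12\n14\n1969") {
        Ok(sum) => println!("{}", sum),
        Err(e) => eprintln!("bad input: {}", e),
    }
}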
If you don't like the nested closures, you can always extract the math to another function (with a better name):
fn part1(input: &str) -> Result<u32, std::num::ParseIntError> {
    input.lines()
        .map(|l| l.parse().map(math_part))
        .sum()
}

fn math_part(n: u32) -> u32 {
    let mut f = n as f32 / 3.0;
    f = f.round();
    f as u32 - 2
}

How to group 'Option' assignments in Rust?

I have a block of code where multiple optional variables need to be assigned at once. There is very little chance any of the values will be None, so individually handling each failed case isn't especially useful.
Currently I write the checks like this:
if let Some(a) = foo_a() {
    if let Some(b) = foo_b() {
        if let Some(c) = foo_c() {
            if let Some(d) = foo_d() {
                // code
            }
        }
    }
}
It would be convenient if it was possible to group assignments. Without this, adding a new variable indents the block one level, making for noisy diffs and unnecessarily deep indentation:
if let Some(a) = foo_a() &&
   let Some(b) = foo_b() &&
   let Some(c) = foo_c() &&
   let Some(d) = foo_d()
{
    // code
}
Is there a way to assign multiple Options in one if statement?
Some details worth noting:
The first function that fails should short-circuit and not call the others. Otherwise, it could be written like this:
if let (Some(a), Some(b), Some(c), Some(d)) = (foo_a(), foo_b(), foo_c(), foo_d()) {
    // Code
}
Deep indentation could be avoided using a function, but I would prefer not to do this since you may not want to have the body in a different scope...
fn my_function(a: Foo, b: Foo, c: Foo, d: Foo) {
    // code
}

if let Some(a) = foo_a() {
    if let Some(b) = foo_b() {
        if let Some(c) = foo_c() {
            if let Some(d) = foo_d() {
                my_function(a, b, c, d);
            }
        }
    }
}
As @SplittyDev said, you can create a macro to get the functionality you want. Here is an alternate macro-based solution which also retains the short-circuiting behaviour:
macro_rules! iflet {
    ([$p:pat = $e:expr] $($rest:tt)*) => {
        if let $p = $e {
            iflet!($($rest)*);
        }
    };
    ($b:block) => {
        $b
    };
}

fn main() {
    iflet!([Some(a) = foo_a()] [Some(b) = foo_b()] [Some(c) = foo_c()] {
        println!("{} {} {}", a, b, c);
    });
}
Playground
The standard library doesn't include that exact functionality, but the language allows you to create the desired behavior using a small macro.
Here's what I came up with:
macro_rules! all_or_nothing {
    ($($opt:expr),*) => {{
        if false $(|| $opt.is_none())* {
            None
        } else {
            Some(($($opt.unwrap(),)*))
        }
    }};
}
You can feed it all your options and get a tuple containing the unwrapped values if all of them are Some, or None if any of them is None.
The following is a brief example of how to use it:
fn main() {
    let foo = Some(0);
    let bar = Some(1);
    let baz = Some(2);
    if let Some((a, b, c)) = all_or_nothing!(foo, bar, baz) {
        println!("foo: {}; bar: {}; baz: {}", a, b, c);
    } else {
        panic!("Something was `None`!");
    }
}
Here's a full test-suite for the macro: Rust Playground
My first inclination was to do something similar to swizard's answer, but to wrap it up in a trait to make the chaining cleaner. It's also a bit simpler without the need for extra function invocations.
It does have the downside of increasing the nesting of the tuples.
fn foo_a() -> Option<u8> {
    println!("foo_a() invoked");
    Some(1)
}

fn foo_b() -> Option<u8> {
    println!("foo_b() invoked");
    None
}

fn foo_c() -> Option<u8> {
    println!("foo_c() invoked");
    Some(3)
}

trait Thing<T> {
    fn thing<F, U>(self, f: F) -> Option<(T, U)>
    where
        F: FnOnce() -> Option<U>;
}

impl<T> Thing<T> for Option<T> {
    fn thing<F, U>(self, f: F) -> Option<(T, U)>
    where
        F: FnOnce() -> Option<U>,
    {
        self.and_then(|a| f().map(|b| (a, b)))
    }
}

fn main() {
    let x = foo_a()
        .thing(foo_b)
        .thing(foo_c);
    match x {
        Some(((a, b), c)) => println!("matched: a = {}, b = {}, c = {}", a, b, c),
        None => println!("nothing matched"),
    }
}
Honestly, someone should point out that Option is an applicative functor :)
The code will be quite ugly without currying support in Rust, but it works and it shouldn't make a noisy diff:
fn foo_a() -> Option<isize> {
    println!("foo_a() invoked");
    Some(1)
}

fn foo_b() -> Option<isize> {
    println!("foo_b() invoked");
    Some(2)
}

fn foo_c() -> Option<isize> {
    println!("foo_c() invoked");
    Some(3)
}

let x = Some(|v| v)
    .and_then(|k| foo_a().map(|v| move |x| k((v, x))))
    .and_then(|k| foo_b().map(|v| move |x| k((v, x))))
    .and_then(|k| foo_c().map(|v| move |x| k((v, x))))
    .map(|k| k(()));

match x {
    Some((a, (b, (c, ())))) =>
        println!("matched: a = {}, b = {}, c = {}", a, b, c),
    None =>
        println!("nothing matched"),
}
You can group the values using the '?' operator to return an Option of a tuple with the required values. If one of them is None, the group_options function will return None.
fn foo_a() -> Option<u8> {
    println!("foo_a() invoked");
    Some(1)
}

fn foo_b() -> Option<u8> {
    println!("foo_b() invoked");
    None
}

fn foo_c() -> Option<u8> {
    println!("foo_c() invoked");
    Some(3)
}

fn group_options() -> Option<(u8, u8, u8)> {
    let a = foo_a()?;
    let b = foo_b()?;
    let c = foo_c()?;
    Some((a, b, c))
}

fn main() {
    if let Some((a, b, c)) = group_options() {
        println!("{}", a);
        println!("{}", b);
        println!("{}", c);
    }
}
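For completeness: the if let ... && let ... form shown in the question has since been stabilized as "let chains" (in Rust 1.88 with the 2024 edition, as far as I know), so on a recent toolchain a sketch like this compiles directly:
fn foo_a() -> Option<u8> { Some(1) }
fn foo_b() -> Option<u8> { Some(2) }

fn main() {
    // Each `let` is tried in order, so a failing one short-circuits the rest.
    if let Some(a) = foo_a()
        && let Some(b) = foo_b()
    {
        println!("{} {}", a, b);
    }
}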

Is it possible to reverse a linked list in-place without allocating any nodes?

I'm trying to understand how ownership in Rust works with regard to linked lists. I have this code:
struct Node {
    value: i32,
    next: Option<Box<Node>>
}

fn main() {
    let mut x = Box::new(Node { value: 1, next: None });
    let mut y = Box::new(Node { value: 2, next: Some(x) });
}
which creates a linked list y -> x -> null. Is it possible to switch this around in-place, so that we end up with x -> y -> null without allocating any new nodes?
Absolutely. Ownership in this case is pretty simple: the main function owns y, which owns x, and owners can mutate the things they own.
To swap two nodes a and b where a → b → …, you just need to do the following:
Disconnect b from a, so that you have a → ⊥ and b → ….
Remove everything following b; call this c…. You now have b → ⊥, and c → …. Note that c might be empty, or it might be a long list; we don't care.
a and b are now alone, and don't connect to anything else, so you can just exchange their contents, swapping them in place.
Attach c to the end of a, giving you a → c → ….
Attach a to the end of b, giving you b → a → ….
No new nodes need allocating, and this can be transcribed pretty much directly into Rust:
struct Node {
    value: i32,
    next: Option<Box<Node>>
}

impl Node {
    pub fn swap_with_next(&mut self) {
        use std::mem::swap;
        match self.next.take() {
            Some(mut next_node) => {
                let next_next = next_node.next.take();
                swap(self, &mut next_node);
                next_node.next = next_next;
                self.next = Some(next_node);
            },
            None => {
                // Uh-oh, there's nothing to swap *with*!
                panic!("cannot swap with nothing");
            }
        }
    }

    pub fn show(&self) {
        print!("{:?}", self.value);
        if let Some(next) = self.next.as_ref() {
            print!(" -> ");
            next.show();
        }
    }
}

fn main() {
    let mut w = Box::new(Node { value: 0, next: None });
    let mut x = Box::new(Node { value: 1, next: Some(w) });
    let mut y = Box::new(Node { value: 2, next: Some(x) });
    y.show();
    println!("");
    y.swap_with_next();
    y.show();
    println!("");
}
Finally, I'd be remiss if I didn't point you toward Learning Rust With Entirely Too Many Linked Lists.
