How do I serialize a Polars DataFrame row (a HashMap of `AnyValue`) into JSON?

I have a row of a Polars DataFrame, built by iterating over a Parquet file using the approach from "Iterate over rows polars rust".
I have constructed a HashMap that represents an individual row and I would like to now convert that row into JSON.
This is what my code looks like so far:
use polars::prelude::*;
use std::iter::zip;
use std::{fs::File, collections::HashMap};

fn main() -> anyhow::Result<()> {
    let file = File::open("0.parquet").unwrap();
    let mut df = ParquetReader::new(file).finish()?;
    dbg!(df.schema());
    let fields = df.fields();
    let columns: Vec<&String> = fields.iter().map(|x| x.name()).collect();
    df.as_single_chunk_par();
    let mut iters = df.iter().map(|s| s.iter()).collect::<Vec<_>>();

    for _ in 0..df.height() {
        let mut row = HashMap::new();
        for (column, iter) in zip(&columns, &mut iters) {
            let value = iter.next().expect("should have as many iterations as rows");
            row.insert(column, value);
        }
        dbg!(&row);
        let json = serde_json::to_string(&row).unwrap();
        dbg!(json);
        break;
    }
    Ok(())
}
And I have the following feature flags enabled: ["parquet", "serde", "dtype-u8", "dtype-i8", "dtype-date", "dtype-datetime"].
I am running into the following error at the serde_json::to_string(&row).unwrap() line:
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Error("the enum variant AnyValue::Datetime cannot be serialized", line: 0, column: 0)', src/main.rs:47:48
I am also unable to implement my own Serialize for AnyValue::Datetime, because only traits defined in the current crate can be implemented for types defined outside of the crate (the orphan rule).
What's the best way to serialize this row into JSON?

I was able to resolve this error by using a match statement over value to change it from a Datetime to an Int64.
let value = match value {
    AnyValue::Datetime(value, TimeUnit::Milliseconds, _) => AnyValue::Int64(value),
    x => x,
};
row.insert(column, value);
The root cause is that the impl Serialize block has no arm for the Datetime variant: https://docs.rs/polars-core/0.24.0/src/polars_core/datatypes/mod.rs.html#298
Although this code now works, it outputs data that looks like:
{"myintcolumn": {"Int64": 22342342343},
 "mylistoclumn": {"List": {"datatype": "Int32", "name": "", "values": []}},
 "mystrcolumn": {"Utf8": "lorem ipsum lorem ipsum"}}
So you will likely want to customize the serialization here regardless of the data type.
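One way to do that without hitting the orphan rule is a newtype wrapper: you own the wrapper type, so you may implement Serialize on it and delegate to polars' own impl for the variants that already work. A minimal sketch, assuming the `serde` feature flag is enabled; `CellValue` is a hypothetical name:

use polars::prelude::*;
use serde::{Serialize, Serializer};

// Hypothetical newtype: we own `CellValue`, so implementing Serialize for
// it is allowed even though AnyValue is defined in another crate.
struct CellValue<'a>(AnyValue<'a>);

impl Serialize for CellValue<'_> {
    fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
        match &self.0 {
            // Flatten the unsupported variant to its inner i64 timestamp.
            AnyValue::Datetime(v, _, _) => serializer.serialize_i64(*v),
            // Delegate everything else to polars' own Serialize impl.
            other => other.serialize(serializer),
        }
    }
}

With this, you would insert CellValue(value) into the row map instead of the raw AnyValue.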
Update: to get the JSON without all of the inner nesting, I had to write a gnarly match statement:
use polars::prelude::*;
use std::iter::zip;
use std::{fs::File, collections::HashMap};
use serde_json::json;

fn main() -> anyhow::Result<()> {
    let file = File::open("0.parquet").unwrap();
    let mut df = ParquetReader::new(file).finish()?;
    dbg!(df.schema());
    let fields = df.fields();
    let columns: Vec<&String> = fields.iter().map(|x| x.name()).collect();
    df.as_single_chunk_par();
    let mut iters = df.iter().map(|s| s.iter()).collect::<Vec<_>>();

    for _ in 0..df.height() {
        let mut row = HashMap::new();
        for (column, iter) in zip(&columns, &mut iters) {
            let value = iter.next().expect("should have as many iterations as rows");
            let value = match value {
                AnyValue::Null => json!(Option::<String>::None),
                AnyValue::Int64(val) => json!(val),
                AnyValue::Int32(val) => json!(val),
                AnyValue::Int8(val) => json!(val),
                AnyValue::Float32(val) => json!(val),
                AnyValue::Float64(val) => json!(val),
                AnyValue::Utf8(val) => json!(val),
                AnyValue::List(val) => match val.dtype() {
                    DataType::Int32 => {
                        let vec: Vec<Option<_>> = val.i32().unwrap().into_iter().collect();
                        json!(vec)
                    }
                    DataType::Float32 => {
                        let vec: Vec<Option<_>> = val.f32().unwrap().into_iter().collect();
                        json!(vec)
                    }
                    DataType::Utf8 => {
                        let vec: Vec<Option<_>> = val.utf8().unwrap().into_iter().collect();
                        json!(vec)
                    }
                    DataType::UInt8 => {
                        let vec: Vec<Option<_>> = val.u8().unwrap().into_iter().collect();
                        json!(vec)
                    }
                    x => panic!(
                        "unable to parse list column: {} with value: {} and type: {:?}",
                        column,
                        x,
                        x.inner_dtype()
                    ),
                },
                AnyValue::Datetime(val, TimeUnit::Milliseconds, _) => json!(val),
                x => panic!("unable to parse column: {} with value: {}", column, x),
            };
            row.insert(*column as &str, value);
        }
        let json = serde_json::to_string(&row).unwrap();
        dbg!(json);
        break;
    }
    Ok(())
}
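As an aside, depending on your polars version you may be able to avoid the per-cell handling entirely: with the `json` feature flag, polars ships a JsonWriter that writes newline-delimited JSON, one object per row. A minimal sketch, assuming a recent polars and that feature flag (the exact finish signature has shifted between versions):

use polars::prelude::*;

// A sketch assuming polars' `json` feature flag: serialize the whole
// DataFrame as newline-delimited JSON, one object per row.
fn df_to_json_lines(df: &mut DataFrame) -> Result<String, PolarsError> {
    let mut buf = Vec::new();
    JsonWriter::new(&mut buf).finish(df)?;
    Ok(String::from_utf8(buf).expect("polars writes valid UTF-8"))
}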

Related

Rust with Datafusion - Trying to Write DataFrame to Json

Repo w/ WIP code: https://github.com/jmelm93/rust-datafusion-csv-processing
Started programming with Rust 2 days ago, and have been trying to resolve this since ~3 hours into trying out Rust...
Any help would be appreciated.
My goal is to write a DataFrame from DataFusion to JSON (which will eventually be used to respond to HTTP requests in an API with the JSON string).
The DataFrame turns into a datafusion::arrow::record_batch::RecordBatch when you collect the data, and this data type is what I'm having trouble converting.
I've tried:
Using json::writer::record_batches_to_json_rows from Arrow, but it won't let me due to "struct datafusion::arrow::record_batch::RecordBatch and struct arrow::record_batch::RecordBatch have similar names, but are actually distinct types". I haven't been able to successfully convert the types to avoid this.
Turning the RecordBatch into a vec and pulling out the headers and the values individually. I was able to get the headers out, but haven't had success with the values.
let mut header = Vec::new();
// let mut rows = Vec::new();
for record_batch in record_batches {
    // get data
    println!("record_batch.columns: {:?}", record_batch.columns());
    for col in record_batch.columns() {
        for row in 0..col.len() {
            // println!("Col: {:?}", col);
            // println!("Row: {:?}", row);
            // let value = col.as_any().downcast_ref::<StringArray>().unwrap().value(row);
            // rows.push(value);
        }
    }
    // get headers
    for field in record_batch.schema().fields() {
        header.push(field.name().to_string());
    }
}
Anyone know how to accomplish this?
The full script is below:
// datafusion examples: https://github.com/apache/arrow-datafusion/tree/master/datafusion-examples/examples
// datafusion docs: https://arrow.apache.org/datafusion/
use datafusion::prelude::*;
use datafusion::arrow::datatypes::Schema;
use arrow::json;
// use serde::Deserialize;
use serde_json::to_string;
use std::sync::Arc;
use std::str;
use std::fs;
use std::ops::Deref;

type DFResult = Result<Arc<DataFrame>, datafusion::error::DataFusionError>;

struct FinalObject {
    schema: Schema,
    // columns: Vec<Column>,
    num_rows: usize,
    num_columns: usize,
}

// to allow debug logging for FinalObject
impl std::fmt::Debug for FinalObject {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        // write!(f, "FinalObject {{ schema: {:?}, columns: {:?}, num_rows: {:?}, num_columns: {:?} }}",
        write!(
            f,
            "FinalObject {{ schema: {:?}, num_rows: {:?}, num_columns: {:?} }}",
            // self.schema, self.columns, self.num_columns, self.num_rows)
            self.schema, self.num_columns, self.num_rows
        )
    }
}

fn create_or_delete_csv_file(path: String, content: Option<String>, operation: &str) {
    match operation {
        "create" => match content {
            Some(c) => fs::write(path, c.as_bytes()).expect("Problem with writing file!"),
            None => println!("The content is None, no file will be created"),
        },
        "delete" => {
            // Delete the csv file
            fs::remove_file(path).expect("Problem with deleting file!");
        }
        _ => println!("Invalid operation"),
    }
}

async fn read_csv_file_with_inferred_schema(file_name_string: String) -> DFResult {
    // create string csv data
    let csv_data_string = "heading,value\nbasic,1\ncsv,2\nhere,3".to_string();
    // Create a temporary file
    create_or_delete_csv_file(file_name_string.clone(), Some(csv_data_string), "create");
    // Create a session context
    let ctx = SessionContext::new();
    // Register a lazy DataFrame using the context
    let df = ctx
        .read_csv(file_name_string.clone(), CsvReadOptions::default())
        .await
        .expect("An error occurred while reading the CSV string");
    // return the dataframe
    Ok(Arc::new(df))
}

#[tokio::main]
async fn main() {
    let file_name_string = "temp_file.csv".to_string();
    let arc_csv_df = read_csv_file_with_inferred_schema(file_name_string.clone())
        .await
        .expect("An error occurred while reading the CSV string (funct: read_csv_file_with_inferred_schema)");
    // have to use ".clone()" each time I want to use this ref
    let deref_df = arc_csv_df.deref();
    // print to console
    deref_df
        .clone()
        .show()
        .await
        .expect("An error occurred while showing the CSV DataFrame");
    // collect to vec
    let record_batches = deref_df
        .clone()
        .collect()
        .await
        .expect("An error occurred while collecting the CSV DataFrame");
    // println!("Data: {:?}", data);
    // record_batches == Vec<RecordBatch>. Convert to RecordBatch
    let record_batch = record_batches[0].clone();
    // let json_string = to_string(&record_batch).unwrap();
    // let mut writer = datafusion::json::writer::RecordBatchJsonWriter::new(vec![]);
    // writer.write(&record_batch).unwrap();
    // let json_rows = writer.finish();
    let json_rows = json::writer::record_batches_to_json_rows(&[record_batch]);
    println!("JSON: {:?}", json_rows);
    // get final values from recordbatch
    // https://docs.rs/arrow/latest/arrow/record_batch/struct.RecordBatch.html
    // https://users.rust-lang.org/t/how-to-use-recordbatch-in-arrow-when-using-datafusion/70057/2
    // https://github.com/apache/arrow-rs/blob/6.5.0/arrow/src/util/pretty.rs
    // let record_batches_vec = record_batches.to_vec();
    let mut header = Vec::new();
    // let mut rows = Vec::new();
    for record_batch in record_batches {
        // get data
        println!("record_batch.columns: {:?}", record_batch.columns());
        for col in record_batch.columns() {
            for row in 0..col.len() {
                // println!("Col: {:?}", col);
                // println!("Row: {:?}", row);
                // let value = col.as_any().downcast_ref::<StringArray>().unwrap().value(row);
                // rows.push(value);
            }
        }
        // get headers
        for field in record_batch.schema().fields() {
            header.push(field.name().to_string());
        }
    }
    // println!("Header: {:?}", header);
    // Delete temp csv
    create_or_delete_csv_file(file_name_string.clone(), None, "delete");
}
I am not sure that DataFusion is the perfect place to convert a CSV string into a JSON string; however, here is a working version of your code:
#[tokio::main]
async fn main() {
    let file_name_string = "temp_file.csv".to_string();
    let csv_data_string = "heading,value\nbasic,1\ncsv,2\nhere,3".to_string();
    // Create a temporary file
    create_or_delete_csv_file(file_name_string.clone(), Some(csv_data_string), "create");
    // Create a session context
    let ctx = SessionContext::new();
    // Register the csv file
    ctx.register_csv("t1", &file_name_string, CsvReadOptions::new().has_header(false))
        .await
        .unwrap();
    let df = ctx.sql("SELECT * FROM t1").await.unwrap();
    // collect to vec
    let record_batches = df.collect().await.unwrap();
    // get json rows
    let json_rows =
        datafusion::arrow::json::writer::record_batches_to_json_rows(&record_batches[..]).unwrap();
    println!("JSON: {:?}", json_rows);
    // Delete temp csv
    create_or_delete_csv_file(file_name_string.clone(), None, "delete");
}
If you encounter arrow and datafusion struct conflicts, use datafusion::arrow instead of just the arrow library.
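Since the goal is an HTTP response body, note that record_batches_to_json_rows yields a Vec of serde_json Maps, which serializes straight to a single JSON string. A small follow-up sketch, assuming serde_json is already a dependency:

// json_rows is a Vec<serde_json::Map<String, serde_json::Value>>;
// serde_json can serialize it directly into one JSON array string.
let json_string = serde_json::to_string(&json_rows).unwrap();
println!("{}", json_string);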

Keep the LocatedSpan of an "outer" parser with nom in Rust

I am trying to write a parser using the nom crate (and nom_locate) that can parse strings such as u{12a}, i.e.:
u\{([0-9a-fA-F]{1,6})\}
I wrote the following parser combinator:
use nom::bytes::complete::take_while_m_n;
use nom::character::complete::char;
use nom::combinator::{map_opt, map_res};
use nom::sequence::{delimited, preceded};

pub type LocatedSpan<'a> = nom_locate::LocatedSpan<&'a str>;
pub type IResult<'a, T> = nom::IResult<LocatedSpan<'a>, T>;

#[derive(Clone, Debug)]
pub struct LexerError<'a>(LocatedSpan<'a>, String);

fn expect<'a, F, E, T>(
    mut parser: F,
    err_msg: E,
) -> impl FnMut(LocatedSpan<'a>) -> IResult<Option<T>>
where
    F: FnMut(LocatedSpan<'a>) -> IResult<T>,
    E: ToString,
{
    use nom::error::Error as NomError;
    move |input| match parser(input) {
        Ok((remaining, output)) => Ok((remaining, Some(output))),
        Err(nom::Err::Error(NomError { input, code: _ }))
        | Err(nom::Err::Failure(NomError { input, code: _ })) => {
            let err = LexerError(input, err_msg.to_string());
            // TODO Report error.
            println!("error: {:?}", err);
            Ok((input, None))
        }
        Err(err) => Err(err),
    }
}

fn lit_str_unicode_char(input: LocatedSpan) -> IResult<char> {
    let parse_hex = take_while_m_n(1, 6, |c: char| c.is_ascii_hexdigit());
    // FIXME Figure out a way to keep correct span here.
    let parse_delim_hex = preceded(
        char('u'),
        delimited(
            char('{'),
            expect(parse_hex, "expected 1-6 hex digits"),
            expect(char('}'), "expected closing brace"),
        ),
    );
    let parse_u32 = map_res(parse_delim_hex, move |hex| match hex {
        None => Err("cannot parse number"),
        Some(hex) => match u32::from_str_radix(hex.fragment(), 16) {
            Ok(val) => Ok(val),
            Err(_) => Err("invalid number"),
        },
    });
    map_opt(parse_u32, std::char::from_u32)(input)
}

fn main() {
    let raw = "u{61}";
    let span = LocatedSpan::new(raw);
    let result = lit_str_unicode_char(span);
    println!("{:#?}", result);
}
This works correctly: I am able to get the Unicode character out of the string. However, this approach does not keep the proper spans, i.e.:
u{123}
\..../ <--- the span I want
\/ <--- the span I get
I figured I could wrap the parse_delim_hex in a recognize, which would keep the span correctly, but then I couldn't use the following parsers to "understand" the digits.
How should I get around this issue?
I think you misunderstand the purpose of the first parameter of IResult.
Quote from the documentation:
The Ok side is a pair containing the remainder of the input (the part of the data that was not parsed) and the produced value.
The span you are looking at is not the data that was found, but instead the data that was left over afterwards.
I think what you were trying to achieve is something along those lines:
use nom::bytes::complete::take_while_m_n;
use nom::character::complete::char;
use nom::combinator::{map_opt, map_res};
use nom::{InputTake, Offset};
use nom::sequence::{delimited, preceded};

pub type LocatedSpan<'a> = nom_locate::LocatedSpan<&'a str>;
pub type IResult<'a, T> = nom::IResult<LocatedSpan<'a>, T>;

#[derive(Clone, Debug)]
pub struct LexerError<'a>(LocatedSpan<'a>, String);

fn expect<'a, F, E, T>(
    mut parser: F,
    err_msg: E,
) -> impl FnMut(LocatedSpan<'a>) -> IResult<Option<T>>
where
    F: FnMut(LocatedSpan<'a>) -> IResult<T>,
    E: ToString,
{
    use nom::error::Error as NomError;
    move |input| match parser(input) {
        Ok((remaining, output)) => Ok((remaining, Some(output))),
        Err(nom::Err::Error(NomError { input, code: _ }))
        | Err(nom::Err::Failure(NomError { input, code: _ })) => {
            let err = LexerError(input, err_msg.to_string());
            // TODO Report error.
            println!("error: {:?}", err);
            Ok((input, None))
        }
        Err(err) => Err(err),
    }
}

fn lit_str_unicode_char(input: LocatedSpan) -> IResult<(char, LocatedSpan)> {
    let parse_hex = take_while_m_n(1, 6, |c: char| c.is_ascii_hexdigit());
    let parse_delim_hex = preceded(
        char('u'),
        delimited(
            char('{'),
            expect(parse_hex, "expected 1-6 hex digits"),
            expect(char('}'), "expected closing brace"),
        ),
    );
    let parse_u32 = map_res(parse_delim_hex, |hex| match hex {
        None => Err("cannot parse number"),
        Some(hex) => match u32::from_str_radix(hex.fragment(), 16) {
            Ok(val) => Ok(val),
            Err(_) => Err("invalid number"),
        },
    });
    // Do the actual parsing
    let (s, ch) = map_opt(parse_u32, std::char::from_u32)(input)?;
    // The consumed span is the difference between the input we started
    // with and the remainder `s`.
    let span_offset = input.offset(&s);
    let span = input.take(span_offset);
    Ok((s, (ch, span)))
}

fn main() {
    let span = LocatedSpan::new("u{62} bbbb");
    let (rest, (ch, span)) = lit_str_unicode_char(span).unwrap();
    println!("Leftover: {:?}", rest);
    println!("Character: {:?}", ch);
    println!("Parsed Span: {:?}", span);
}
Leftover: LocatedSpan { offset: 5, line: 1, fragment: " bbbb", extra: () }
Character: 'b'
Parsed Span: LocatedSpan { offset: 0, line: 1, fragment: "u{62}", extra: () }
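For completeness: nom 6.2 and later also ship a consumed combinator that does this offset/take pairing for you. A minimal sketch, wrapping the question's original lit_str_unicode_char (the version returning IResult<char>):

use nom::combinator::consumed;

// `consumed` runs the inner parser and, on success, returns the exact
// input slice it consumed alongside the parser's output.
fn lit_str_unicode_char_spanned(input: LocatedSpan) -> IResult<(LocatedSpan, char)> {
    consumed(lit_str_unicode_char)(input)
}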

Convert Vec<String> to std::rc::Rc<Vec<String>>

I have a function that receives a list of ids and then selects them from a database. I'm passing in a Vec<String>, and I found this issue https://github.com/rusqlite/rusqlite/issues/430, which linked to https://github.com/rusqlite/rusqlite/blob/master/src/vtab/array.rs#L18, where it says // Note: A Rc<Vec<Value>> must be used as the parameter.
I cannot figure out how to convert this Vec<String> to Rc<Vec<String>> in a way that does not produce a compile error. I tried the following:
let values = std::rc::Rc::new(ids.into_iter().copied().map(String::from).collect::<Vec<String>>());
let values = std::rc::Rc::from(ids.into_iter().map(|item| item.to_string()).collect::<Vec<String>>());
let values = std::rc::Rc::from(&ids);
All 3 give the same error, with some variation of this part: Vec<Rc<Vec<std::string::String>>>
the trait bound `Vec<Rc<Vec<std::string::String>>>: ToSql` is not satisfied
the following implementations were found:
    <Vec<u8> as ToSql>
required for the cast to the object type `dyn ToSql`
How can I convert this so it comes out as Rc<Vec<String>> and not Vec<Rc<Vec<String>>>?
My code is here
fn table_data_to_table(ids: &Vec<String>) -> Vec<data::Item> {
    let db_connection = rusqlite::Connection::open("data.sqlite")
        .expect("Cannot connect to database.");
    let values = std::rc::Rc::new(ids.into_iter().copied().map(String::from).collect::<Vec<String>>());
    let mut statement = db_connection
        .prepare("select * from item where id in rarray(?);")
        .expect("Failed to prepare query.");
    let mut results = statement.query_map(rusqlite::params![vec![values]], |row| {
        Ok(database::ItemData {
            id: row.get(0)?,
            name: row.get(1)?,
            time_tp_prepare: row.get(2)?,
        })
    });
    match results {
        Ok(rows) => {
            let collection: rusqlite::Result<Vec<database::ItemData>> = rows.collect();
            match collection {
                Ok(items) => items
                    .iter()
                    .map(|item_data| data::Item {
                        id: item_data.id,
                        name: item_data.name,
                        time_to_prepare: item_data.time_tp_prepare,
                    })
                    .collect(),
                Err(_) => Vec::new(),
            }
        }
        Err(_) => Vec::new(),
    }
}
Looking at the example you linked, your error is in not passing the Rc directly:
.query_map(rusqlite::params![vec![values]], |row| {
vs
.query_map([values], |row| {
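Putting it together, a minimal sketch of the working shape, assuming rusqlite's `array` feature (which provides rarray() and implements ToSql for Rc<Vec<Value>>); the table and column names are placeholders:

use std::rc::Rc;
use rusqlite::types::Value;

fn names_for_ids(db: &rusqlite::Connection, ids: &[String]) -> rusqlite::Result<Vec<String>> {
    // rarray() is provided by a virtual table module that must be
    // registered on the connection first.
    rusqlite::vtab::array::load_module(db)?;
    // The vtab expects exactly Rc<Vec<Value>>, so convert the Strings.
    let values: Rc<Vec<Value>> = Rc::new(ids.iter().cloned().map(Value::from).collect());
    let mut stmt = db.prepare("select name from item where id in rarray(?);")?;
    // Pass the Rc itself as the single parameter (not wrapped in a Vec).
    let rows = stmt.query_map([values], |row| row.get(0))?;
    rows.collect()
}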

binary_search_by returns a strange index when using a custom struct

My function get_index should return the index of the asserted element:
pub const INVALID_INDEX: usize = <usize>::max_value();

pub fn get_index(&mut self, element: T) -> usize
where
    T: std::cmp::Ord,
{
    if self.data.is_empty() {
        return 0;
    }
    match self.data.binary_search_by(|probe| probe.cmp(&element)) {
        Ok(pos) => pos,
        Err(_pos) => INVALID_INDEX,
    }
}
To provide a test, I create a value and some mock data:
let mut list: List<FooModel> = List::new();
let my_foo_1 = FooModel { name: "John".to_string(), id_num: 10 };
let my_foo_2 = FooModel { name: "Bill".to_string(), id_num: 20 };
list.add(my_foo_1.clone());
list.add(my_foo_2.clone());
list.add(my_foo_3.clone());
list.add(my_foo_4.clone());
list.add(my_foo_5.clone());
The problem occurs when I try to get an index for the first element:
println!("Element is at index {:?}",list.get_index(my_foo_1.clone()));
I get the INVALID_INDEX value returned for my_foo_1; all other expressions return the correct index value.
If I create a list with some generic types:
let mut list_2: List<u32> = List::new();
list_2.add(1);
list_2.add(2);
list_2.add(3);
list_2.add(4);
I get the correct result for the call:
println!("Element is at index {:?}", list_2.get_index(1));
Here is an MCVE of your problem:
fn main() {
    let data = vec!["John", "Bill"];
    let v = data.binary_search_by(|probe| probe.cmp(&"John")).ok();
    println!("{:?}", v);
}
The first sentence of the documentation for binary_search_by states (emphasis mine):
Binary searches this sorted slice with a comparator function.
Your data is (most likely, because you have neglected to show the relevant code) not sorted.
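A minimal sketch of the fix, assuming the data can simply be sorted before searching:

fn main() {
    let mut data = vec!["John", "Bill"];
    // binary_search_by requires the slice to already be sorted by the
    // same ordering the comparator uses.
    data.sort();
    let v = data.binary_search_by(|probe| probe.cmp(&"John")).ok();
    println!("{:?}", v); // Some(1): "John" now sits after "Bill"
}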

Does Rust have an equivalent to Python's dictionary comprehension syntax?

How would one translate the following Python, in which several files are read and their contents are used as values in a dictionary (with the filename as key), to Rust?
countries = {region: open("{}.txt".format(region)).read() for region in ["canada", "usa", "mexico"]}
My attempt is shown below, but I was wondering if a one-line, idiomatic solution is possible.
use std::{
    fs::File,
    io::{prelude::*, BufReader},
    path::Path,
    collections::HashMap,
};

macro_rules! map(
    { $($key:expr => $value:expr),+ } => {
        {
            let mut m = HashMap::new();
            $(
                m.insert($key, $value);
            )+
            m
        }
    };
);

fn lines_from_file<P>(filename: P) -> Vec<String>
where
    P: AsRef<Path>,
{
    let file = File::open(filename).expect("no such file");
    let buf = BufReader::new(file);
    buf.lines()
        .map(|l| l.expect("Could not parse line"))
        .collect()
}

fn main() {
    let _countries = map! { "canada" => lines_from_file("canada.txt"),
        "usa" => lines_from_file("usa.txt"),
        "mexico" => lines_from_file("mexico.txt") };
}
Rust's iterators have map/filter/collect methods which are enough to do anything Python's comprehensions can. You can create a HashMap with collect on an iterator of pairs, but collect can return various types of collections, so you may have to specify the type you want.
For example,
use std::collections::HashMap;

fn main() {
    println!(
        "{:?}",
        (1..5).map(|i| (i + i, i * i)).collect::<HashMap<_, _>>()
    );
}
This is roughly equivalent to the Python
print({i+i: i*i for i in range(1, 5)})
But translated very literally, it's actually closer to
from builtins import dict

def main():
    print("{!r}".format(dict(map(lambda i: (i+i, i*i), range(1, 5)))))

if __name__ == "__main__":
    main()

not that you would ever say it that way in Python.
Python's comprehensions are just sugar for a for loop and an accumulator. Rust has macros, so you can make any sugar you want.
Take this simple Python example,
print({i+i: i*i for i in range(1, 5)})
You could easily re-write this as a loop and accumulator:
map = {}
for i in range(1, 5):
    map[i+i] = i*i
print(map)
You could do it basically the same way in Rust.
use std::collections::HashMap;

fn main() {
    let mut hm = HashMap::new();
    for i in 1..5 {
        hm.insert(i + i, i * i);
    }
    println!("{:?}", hm);
}
You can use a macro to do the rewriting to this form for you.
use std::collections::HashMap;

macro_rules! hashcomp {
    ($name:ident = $k:expr => $v:expr; for $i:ident in $itr:expr) => {
        let mut $name = HashMap::new();
        for $i in $itr {
            $name.insert($k, $v);
        }
    };
}
When you use it, the resulting code is much more compact. And this choice of separator tokens makes it resemble the Python.
fn main() {
    hashcomp!(hm = i+i => i*i; for i in 1..5);
    println!("{:?}", hm);
}
This is just a basic example that can handle a single loop. Python's comprehensions can also have filters and additional loops, but a more advanced macro could probably do that too; a sketch of a filtering variant follows.
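For instance, here is a hedged sketch of such a variant (hashcomp_if is a made-up name, mirroring Python's {k: v for i in it if cond}):

use std::collections::HashMap;

macro_rules! hashcomp_if {
    // Same shape as hashcomp above, with an extra `if` filter arm.
    ($name:ident = $k:expr => $v:expr; for $i:ident in $itr:expr; if $cond:expr) => {
        let mut $name = HashMap::new();
        for $i in $itr {
            if $cond {
                $name.insert($k, $v);
            }
        }
    };
}

fn main() {
    // Keeps only even i, yielding {4: 4, 8: 16, 12: 36, 16: 64}.
    hashcomp_if!(hm = i + i => i * i; for i in 1..10; if i % 2 == 0);
    println!("{:?}", hm);
}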
Without using your own macros I think the closest to
countries = {region: open("{}.txt".format(region)).read() for region in ["canada", "usa", "mexico"]}
in Rust would be
let countries: HashMap<_, _> = ["canada", "usa", "mexico"].iter().map(|&c| {(c,read_to_string(c.to_owned() + ".txt").expect("Error reading file"),)}).collect();
but running a formatter will make it more readable:
let countries: HashMap<_, _> = ["canada", "usa", "mexico"]
    .iter()
    .map(|&c| {
        (
            c,
            read_to_string(c.to_owned() + ".txt").expect("Error reading file"),
        )
    })
    .collect();
A few notes:
To map over an array, you first need to turn it into an iterator, hence .iter().map(...).
To transform an iterator back into a tangible data structure, e.g. a HashMap (dict), use .collect(). This is the advantage and pain of Rust, it is very strict with types, no unexpected conversions.
A complete test program:
use std::collections::HashMap;
use std::fs::{read_to_string, File};
use std::io::Write;
fn create_files() -> std::io::Result<()> {
let regios = [
("canada", "Ottawa"),
("usa", "Washington"),
("mexico", "Mexico city"),
];
for (country, capital) in regios {
let mut file = File::create(country.to_owned() + ".txt")?;
file.write_fmt(format_args!("The capital of {} is {}", country, capital))?;
}
Ok(())
}
fn create_hashmap() -> HashMap<&'static str, String> {
let countries = ["canada", "usa", "mexico"]
.iter()
.map(|&c| {
(
c,
read_to_string(c.to_owned() + ".txt").expect("Error reading file"),
)
})
.collect();
countries
}
fn main() -> std::io::Result<()> {
println!("Hello, world!");
create_files().expect("Failed to create files");
let countries = create_hashmap();
{
println!("{:#?}", countries);
}
std::io::Result::Ok(())
}
Note that specifying the type of countries is not needed here, because the return type of create_hashmap() is defined.
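One final aside: for the fixed-key case, which is closer to a dict literal than a comprehension, Rust 1.56+ can also build a HashMap straight from an array of pairs:

use std::collections::HashMap;

fn main() {
    // HashMap::from (Rust 1.56+) covers literal construction without a
    // macro, though it does not iterate the way a comprehension does.
    let squares = HashMap::from([(2, 4), (4, 16), (6, 36)]);
    println!("{:?}", squares);
}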
