How does one convert between DateTime<FixedOffset> and DateTime<Tz>, in order to subtract to get a duration, compare inequality, or reassign?
use chrono::DateTime;
use chrono_tz::America::New_York;
fn main() {
let mut a = DateTime::parse_from_rfc3339("2022-06-01T10:00:00").unwrap();
let b = a.with_timezone(&New_York);
a = b;
}
An attempt to do this directly yields the error:
error[E0308]: mismatched types
--> src/main.rs:13:9
|
11 | let mut a = DateTime::parse_from_rfc3339("2022-06-01T10:00:00").unwrap();
| ------------------------------------------------------------ expected due to this value
12 | let b = a.with_timezone(&New_York);
13 | a = b;
| ^ expected struct `FixedOffset`, found enum `Tz`
|
= note: expected struct `DateTime<FixedOffset>`
found struct `DateTime<Tz>`
Convert the timezone of b into the timezone of a before assigning it:
a = b.with_timezone(&a.timezone());
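For completeness, here is a minimal self-contained sketch of the whole round trip (note that parse_from_rfc3339 needs an explicit offset in the input string, so one is added here):
use chrono::DateTime;
use chrono_tz::America::New_York;
fn main() {
    // Parse into a DateTime<FixedOffset>.
    let mut a = DateTime::parse_from_rfc3339("2022-06-01T10:00:00-04:00").unwrap();
    // Convert into a DateTime<Tz>.
    let b = a.with_timezone(&New_York);
    // Subtracting to get a duration does not require matching timezone
    // types: signed_duration_since accepts any TimeZone.
    let duration = b.signed_duration_since(a);
    assert_eq!(duration.num_seconds(), 0);
    // Reassigning does require matching types: convert back first.
    a = b.with_timezone(&a.timezone());
    println!("{}", a);
}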
I'm trying to implement the sample from "Lazy join multiple DataFrames on a Categorical":
use polars::prelude::*;
fn lazy_example(mut df_a: LazyFrame, mut df_b: LazyFrame) -> Result<DataFrame> {
let q1 = df_a.with_columns(vec![
col("a").cast(DataType::Categorical),
]);
let q2 = df_b.with_columns(vec![
col("b").cast(DataType::Categorical)
]);
q1.inner_join(q2, col("a"), col("b"), None).collect()
}
getting an error:
error[E0308]: mismatched types
--> src\main.rs:6:23
|
6 | col("a").cast(DataType::Categorical),
| ---- ^^^^^^^^^^^^^^^^^^^^^ expected enum `polars::prelude::DataType`, found fn item
| |
| arguments to this function are incorrect
|
= note: expected enum `polars::prelude::DataType`
found fn item `fn(Option<Arc<RevMapping>>) -> polars::prelude::DataType {polars::prelude::DataType::Categorical}`
note: associated function defined here
--> C:\Users\rnio\.cargo\registry\src\github.com-1ecc6299db9ec823\polars-lazy-0.23.1\src\dsl\mod.rs:555:12
|
555 | pub fn cast(self, data_type: DataType) -> Self {
| ^^^^
help: use parentheses to instantiate this tuple variant
|
6 | col("a").cast(DataType::Categorical(_)),
| +++
I applied the suggested fix:
col("a").cast(DataType::Categorical()),
col("b").cast(DataType::Categorical()),
and got the next error:
error[E0061]: this enum variant takes 1 argument but 0 arguments were supplied
--> src\main.rs:7:23
|
7 | col("a").cast(DataType::Categorical()),
| ^^^^^^^^^^^^^^^^^^^^^-- an argument of type `Option<Arc<RevMapping>>` is missing
|
note: tuple variant defined here
--> C:\Users\rnio\.cargo\registry\src\github.com-1ecc6299db9ec823\polars-core-0.23.1\src\datatypes\mod.rs:707:5
|
707 | Categorical(Option<Arc<RevMapping>>),
| ^^^^^^^^^^^
help: provide the argument
|
7 | col("a").cast(DataType::Categorical(/* Option<Arc<RevMapping>> */)),
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
So it's missing an argument for Categorical(), even though that argument will not be used:
// The RevMapping has the internal state. This is ignored with casts, comparisons, hashing etc.
https://docs.rs/polars/latest/polars/datatypes/enum.RevMapping.html
Any idea how to fix this?
Thanks
Thanks to @Dogbert :)
Here is the working code:
fn lazy_example(mut df_a: LazyFrame, mut df_b: LazyFrame) -> Result<DataFrame> {
let q1 = df_a.with_columns(vec![
col("a").cast(DataType::Categorical(None)),
]);
let q2 = df_b.with_columns(vec![
col("b").cast(DataType::Categorical(None))
]);
q1.inner_join(q2, col("a"), col("b")).collect()
}
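(Passing None works because the cast builds the RevMapping, the categorical's internal string-to-index state, at runtime from the values in the column, so there is no pre-existing mapping to supply. Note also that the trailing None argument to inner_join from the original snippet is gone; the three-argument form is the one that compiles.)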
How can I return a default value from an Option<&String>?
This is my sample/minimal code:
fn main() {
let map = std::collections::HashMap::<String, String>::new();
let result = map.get("").or_else(|| Some("")).unwrap(); // <== I tried lots of combinations
println!("{}", result);
}
I know I could do something like this...
let value = match result {
Some(v) => v,
None => "",
};
... but I want to know if it is possible to implement it in a one-liner with or_else or unwrap_or_else?
(It is important to make the default value lazy, so it does not get computed if it is not used)
These are some of the compiler suggestions I tried (I can't post them all because SO won't allow me):
7 | let result = map.get("").or_else(|| Some("") ).unwrap();
| ^^ expected struct `String`, found `str`
.
7 | let result = map.get("").or_else(|| Some(&"".to_string()) ).unwrap();
| ^^^^^^--------------^
| | |
| | temporary value created here
| returns a value referencing data owned by the current function
.
7 | let result = map.get("").or_else(|| Some(String::new()) ).unwrap();
| ^^^^^^^^^^^^^
| |
| expected `&String`, found struct `String`
|
help: consider borrowing here: `&String::new()`
.
7 | let result = map.get("").or_else(|| Some(&String::new()) ).unwrap();
| ^^^^^^-------------^
| | |
| | temporary value created here
| returns a value referencing data owned by the current function
.
and also
6 | let result = map.get("").unwrap_or_else(|| ""); // I tried lots
| ^^ expected struct `String`, found `str`
|
= note: expected reference `&String`
found reference `&'static str`
If you really need a &String as the result, you may create a String for the default value with a lifetime that's long enough.
fn main() {
let map = std::collections::HashMap::<String, String>::new();
let default_value = "default_value".to_string();
let result = map.get("").unwrap_or(&default_value);
println!("{}", result);
}
If the default value is a compile-time fixed value, the allocation of default_value can be avoided by using &str instead.
fn main() {
let map = std::collections::HashMap::<String, String>::new();
let result = map.get("")
.map(String::as_str)
.unwrap_or("default_value");
println!("{}", result);
}
How can I return a default value from an Option<&String>?
It's not trivial because, as you've discovered, ownership gets in the way: you need an actual String to create an &String. The cheap and easy solution is to just have a static empty String:
static DEFAULT: String = String::new();
fn main() {
let map = std::collections::HashMap::<String, String>::new();
let result = map.get("").unwrap_or(&DEFAULT); // <== I tried lots of combinations
println!("{}", result);
}
String::new is const since 1.39.0, and does not allocate, so this works fine. If you want a non-empty string as default value it's not as good a solution though.
The cleaner and more regular alternative is to "downgrade" (or upgrade, depending on the POV) the &String to an &str:
let result = map.get("").map(String::as_str).unwrap_or("");
or
let result = map.get("").map(|s| &**s).unwrap_or("");
You're really not losing anything here, as &String is not much more capable than &str. It does offer a few more things (e.g. String::capacity), but it mostly exists on genericity grounds: HashMap::<K, V>::get returns an &V, so if you store a String you get an &String, even though that's not always quite the thing you want most.
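For completeness, the map + unwrap_or pair can also be folded into a single map_or call (a stylistic variant of the same approach):
let result = map.get("").map_or("", String::as_str);
or, keeping the default lazy in case it is expensive to compute:
let result = map.get("").map_or_else(|| "", String::as_str);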
I wanted to implement a function computing the number of digits within any generic type of integer. Here is the code I came up with:
extern crate num;
use num::Integer;
fn int_length<T: Integer>(mut x: T) -> u8 {
if x == 0 {
return 1;
}
let mut length = 0u8;
if x < 0 {
length += 1;
x = -x;
}
while x > 0 {
x /= 10;
length += 1;
}
length
}
fn main() {
println!("{}", int_length(45));
println!("{}", int_length(-45));
}
And here is the compiler output
error[E0308]: mismatched types
--> src/main.rs:5:13
|
5 | if x == 0 {
| ^ expected type parameter, found integral variable
|
= note: expected type `T`
found type `{integer}`
error[E0308]: mismatched types
--> src/main.rs:10:12
|
10 | if x < 0 {
| ^ expected type parameter, found integral variable
|
= note: expected type `T`
found type `{integer}`
error: cannot apply unary operator `-` to type `T`
--> src/main.rs:12:13
|
12 | x = -x;
| ^^
error[E0308]: mismatched types
--> src/main.rs:15:15
|
15 | while x > 0 {
| ^ expected type parameter, found integral variable
|
= note: expected type `T`
found type `{integer}`
error[E0368]: binary assignment operation `/=` cannot be applied to type `T`
--> src/main.rs:16:9
|
16 | x /= 10;
| ^ cannot use `/=` on type `T`
I understand that the problem comes from my use of constants within the function, but I don't understand why the trait specification as Integer doesn't solve this.
The documentation for Integer says it implements the PartialOrd, etc. traits with Self (which I assume refers to Integer). By using integer constants which also implement the Integer trait, aren't the operations defined, and shouldn't the compiler compile without errors?
I tried suffixing my constants with i32, but the error messages are the same, with `{integer}` replaced by `i32`.
Many things are going wrong here:
As Shepmaster says, 0 and 1 cannot be converted to everything implementing Integer. Use Zero::zero and One::one instead.
10 can definitely not be converted to anything implementing Integer, you need to use NumCast for that
a /= b is not sugar for a = a / b but a separate trait (DivAssign) that Integer does not require.
-x is an unary operation which is not part of Integer but requires the Neg trait (since it only makes sense for signed types).
Here's an implementation. Note that you need a bound on Neg to make sure that negation results in the same type as T:
extern crate num;
use num::{Integer, NumCast};
use std::ops::Neg;
fn int_length<T>(mut x: T) -> u8
where
T: Integer + Neg<Output = T> + NumCast,
{
if x == T::zero() {
return 1;
}
let mut length = 0;
if x < T::zero() {
length += 1;
x = -x;
}
while x > T::zero() {
// NumCast::from(10) converts the literal 10 into T (unwrap panics only if the cast fails).
x = x / NumCast::from(10).unwrap();
length += 1;
}
length
}
fn main() {
println!("{}", int_length(45));
println!("{}", int_length(-45));
}
The problem is that the Integer trait can be implemented by anything. For example, you could choose to implement it on your own struct! There wouldn't be a way to convert the literal 0 or 1 to your struct. I'm too lazy to show an example of implementing it, because there's 10 or so methods. ^_^
num::Zero and num::One
This is why Zero::zero and One::one exist. You can (very annoyingly) create all the other constants from repeated calls to those.
use num::{One, Zero}; // 0.4.0
fn three<T>() -> T
where
T: Zero + One,
{
let mut three = Zero::zero();
for _ in 0..3 {
three = three + One::one();
}
three
}
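A quick usage sketch of that helper (the concrete types here are just for illustration):
fn main() {
    // Build the constant 3 for two different numeric types.
    let a: i32 = three();
    let b: u64 = three();
    assert_eq!((a, b), (3, 3));
}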
From and Into
You can also use the From and Into traits to convert to your generic type:
use num::Integer; // 0.4.0
use std::ops::{DivAssign, Neg};
fn int_length<T>(mut x: T) -> u8
where
T: Integer + Neg<Output = T> + DivAssign,
u8: Into<T>,
{
let zero = 0.into();
if x == zero {
return 1;
}
let mut length = 0u8;
if x < zero {
length += 1;
x = -x;
}
while x > zero {
x /= 10.into();
length += 1;
}
length
}
fn main() {
println!("{}", int_length(45));
println!("{}", int_length(-45));
}
See also:
How do I use floating point number literals when using generic types?
This question already has answers here: Converting number primitives (i32, f64, etc) to byte representations
I'm new to Rust and a bit lost.
I would like to add keys and values to a data store that has a put function that takes two byte string literals:
batch.put(b"foxi", b"maxi");
I generate a bunch of these k-v pairs:
let mut rng = rand::thread_rng(); // setup implied by the snippet; requires use rand::Rng;
let mut vec: Vec<Vec<u8>> = Vec::new();
for _ in 1..1000000 {
let mut ivec = Vec::new();
let s1: u8 = rng.gen();
let s2: u8 = rng.gen();
ivec.push(s1);
ivec.push(s2);
debug!("Adding key: {} and value {}", s1, s2);
vec.push(ivec);
}
let _ = database::write(db, vec);
I have a fn that tries to add them:
pub fn write(db: DB, vec: Vec<Vec<u8>>) {
let batch = WriteBatch::new();
for v in vec {
batch.put(v[0], v[1]);
}
db.write(&batch).unwrap();
}
When I try to compile this I get:
error[E0308]: mismatched types
--> src/database.rs:17:19
|
17 | batch.put(v[0], v[1]);
| ^^^^ expected &[u8], found u8
|
= note: expected type `&[u8]`
found type `u8`
error[E0308]: mismatched types
--> src/database.rs:17:25
|
17 | batch.put(v[0], v[1]);
| ^^^^ expected &[u8], found u8
|
= note: expected type `&[u8]`
found type `u8`
I've been ping-ponging with the borrow checker for a while but could not get it working. What is the best way to make byte-string slices (&[u8]) from u8s?
The following works, by wrapping each u8 in a one-element array, which coerces to the &[u8] the method expects:
batch.put(&[v[0]], &[v[1]]);
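A minimal, self-contained illustration of the coercion, independent of any database API:
fn main() {
    let v: Vec<u8> = vec![1, 2];
    // A reference to a one-element array coerces to a byte slice.
    let key: &[u8] = &[v[0]];
    // Slicing the Vec borrows a &[u8] directly, without copying.
    let val: &[u8] = &v[1..2];
    assert_eq!(key, &[1u8][..]);
    assert_eq!(val, &[2u8][..]);
}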
I'm trying to read the values from a vector and use the values as indexes to perform an addition:
fn main() {
let objetive = 3126.59;
// 27 values
let values: Vec<f64> = vec![
2817.42, 2162.17, 3756.57, 2817.42, -2817.42, 946.9, 2817.42, 964.42, 795.43, 3756.57,
139.34, 903.58, -3756.57, 939.14, 828.04, 1120.04, 604.03, 3354.74, 2748.06, 1470.8,
4695.71, 71.11, 2391.48, 331.29, 1214.69, 863.52, 7810.01,
];
let values_number = values.len();
let values_index_max = values_number - 1;
let mut additions: Vec<usize> = vec![0];
println!("{:?}", values_number);
while additions.len() > 0 {
let mut addition: f64 = 0.0;
let mut saltar: i32 = 0;
// Sum the values at the indexes in additions
for element_index in additions {
let addition_aux = values[element_index];
addition = addition_aux + addition;
}
}
}
I get the following error. How can I solve it?
error[E0382]: use of moved value: `additions`
--> src/main.rs:18:11
|
18 | while additions.len() > 0 {
| ^^^^^^^^^ value used here after move
...
23 | for element_index in additions {
| --------- value moved here
|
= note: move occurs because `additions` has type `std::vec::Vec<usize>`, which does not implement the `Copy` trait
error[E0382]: use of moved value: `additions`
--> src/main.rs:23:30
|
23 | for element_index in additions {
| ^^^^^^^^^ value moved here in previous iteration of loop
|
= note: move occurs because `additions` has type `std::vec::Vec<usize>`, which does not implement the `Copy` trait
The fix for this particular problem is to borrow the Vec you're iterating over instead of moving it:
for element_index in &additions {
let addition_aux = values[*element_index];
addition = addition_aux + addition;
}
but your code has other problems. You never change additions by adding or removing elements, so your while additions.len() > 0 will never terminate. I hope this is because you haven't finished and wanted to work out how to fix the immediate problem before writing the rest of the function.
For now, you might benefit from re-reading the chapter of the Rust Book about ownership, moves, and borrowing.
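A minimal runnable sketch of just the borrow fix, with the endless while replaced by a single pass so the program terminates:
fn main() {
    let values: Vec<f64> = vec![2817.42, 2162.17, 3756.57];
    let additions: Vec<usize> = vec![0, 2];
    let mut addition: f64 = 0.0;
    // Iterating over &additions borrows the Vec instead of moving it.
    for element_index in &additions {
        addition += values[*element_index];
    }
    // additions is still owned and usable here, because the loop only borrowed it.
    println!("summed {} indexes: {}", additions.len(), addition);
}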