I'm looking for a way to count all the records found by a query. I can see there is a count function but I'm not entirely sure how to deal with the output to get a number type out.
I have something similar to
entity::table_name::Entity::find().count(&db);
which returns a Pin<Box<dyn Future<Output = Result<usize, DbErr>> + Send>> rather than a plain number
I'm just looking to get a number out. Am I on the right track here? What would be the simplest way to get the count?
There is another way to do it:
#[derive(Copy, Clone, Debug, EnumIter, DeriveColumn)]
enum Counter {
    Count,
}
entity::table_name::Entity::find()
    .select_only()
    .column_as(Expr::col(entity::table_name::Column::Id).count(), "count")
    .into_values::<_, Counter>()
    .one(&db)
    .await
This looks a bit complicated, however.
Getting the count as a usize is as simple as
entity::entity_name::Entity::find().count(&db).await.unwrap();
Where entity_name is the name of your table/entity.
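For context, a minimal sketch of how that call might sit inside an async function. It assumes SeaORM's PaginatorTrait is in scope (count is defined on it) and a SeaORM version where count returns u64; in older versions the return type is usize.

use sea_orm::{DatabaseConnection, DbErr, EntityTrait, PaginatorTrait};

// `count` comes from PaginatorTrait, which must be imported.
// Depending on the SeaORM version, the count is a u64 or a usize.
async fn count_rows(db: &DatabaseConnection) -> Result<u64, DbErr> {
    entity::table_name::Entity::find().count(db).await
}

Returning the Result lets the caller decide between ? and unwrap().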
I notice when reading the docs that they often use assert when explaining expected behaviour of simple code blocks.
In production-level code, would it be considered an anti-pattern to do the same? While reading Rust by Example I only saw asserts being used in tests, but in the instances where you do expect variables or values to be a specific thing, is assert the correct approach?
The example I came across in my own code is a scenario similar to the following...
fn foo(values: Vec<String>, my_num: usize) {
    assert_eq!(values.len(), my_num);
    // run this code after
}
I expect the vector passed in to have a length equal to another value in the function, and the code wouldn't work if that weren't the case. Would asserting that these two values are equal be the correct practice?
What are some other best practices or ways of handling error behaviour like this?
Assertions are OK in unsafe code, for example if you need to ensure that a pointer is non-null. Otherwise, it's better to use Result, Option and ordinary conditionals.
struct FooError;

fn foo(values: Vec<String>, my_num: usize) -> Result<(), FooError> {
    if values.len() != my_num {
        return Err(FooError);
    }
    // …
    Ok(())
}
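For completeness, a self-contained, runnable version of that sketch with a hypothetical caller, showing that a length mismatch becomes a recoverable error rather than a panic:

struct FooError;

fn foo(values: Vec<String>, my_num: usize) -> Result<(), FooError> {
    if values.len() != my_num {
        return Err(FooError);
    }
    // run the rest of the code here
    Ok(())
}

fn main() {
    let values = vec!["a".to_string(), "b".to_string()];
    // The caller decides how to react to the mismatch.
    match foo(values, 3) {
        Ok(()) => println!("lengths matched"),
        Err(FooError) => eprintln!("length mismatch, skipping"),
    }
}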
I am currently working on a way to compress data (structs I made) when serializing with serde. Everything works fine except for the specific case of Vec<u8>. I'd like to know if some of you have already run into this problem, or if you have any thoughts to share :)
My goal is to provide a simple way of compressing any part of a struct by adding the #[serde(with="crate::compress")] attribute. No matter what the underlying data structure is, I want it to be compressed with my custom serialize function.
For instance, I want this struct to be serializable with compression:
#[derive(Serialize)]
struct MyCustomStruct {
    data: String,
    #[serde(with = "crate::compress")]
    data2: SomeOtherStruct,
    #[serde(with = "crate::compress")]
    data3: Vec<u8>,
}
For now, everything works fine and calls my custom module:
// in the compress module
pub fn serialize<T, S>(data: T, serializer: S) -> Result<S::Ok, S::Error>
where
    T: Serialize,
    S: Serializer,
{
    // Simplified functioning below:
    let serialized_data: Vec<u8> = some_function(data);
    let compressed_data: Vec<u8> = some_other_function(serialized_data);
    Ok(chosen_serializer::serialize(compressed_data, serializer)?)
}
However, I do have a problem when it comes to compressing Vec<u8> elements (like data3 in the struct above).
Since the data is already a Vec<u8>, I don't need to serialize it and can pass it directly to my compression function. Worse: if I serialize it, I may not be using serde_bytes, so the Vec<u8> is serialized as a sequence of individual integers and the compression attribute ends up increasing the size!
I also don't want to have two different functions (one for Vec<u8>, one for everything else), since it would be up to the user to choose which one to use, and using the wrong one with Vec<u8> would still work but would increase the size instead of decreasing it.
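For context, a minimal sketch of the serde_bytes behaviour referred to above (assuming the serde and serde_bytes crates): without the attribute a Vec<u8> goes through the generic sequence path, with it the field is serialized via serialize_bytes as one contiguous byte string.

use serde::Serialize;

#[derive(Serialize)]
struct Payload {
    // Default behaviour: serialized as a sequence of individual u8 elements.
    as_seq: Vec<u8>,
    // With serde_bytes: serialized via serialize_bytes, i.e. as one
    // contiguous byte string in formats that support it.
    #[serde(with = "serde_bytes")]
    as_bytes: Vec<u8>,
}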
I already thought about / tried a few things, but none of them can work:
1) Macro:
Very complicated, since it would need to rewrite another macro. It would be a macro that writes #[serde(with="crate::compress")] or #[serde(with="crate::compress_vec_u8")] depending on the type. I don't even know if this is possible, a meta-macro? :')
2) Trait implementation
It would be something like this:
trait CompressAndSerialize {
    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
    where
        S: Serializer;
}

impl CompressAndSerialize for Vec<u8> { ... }
impl<T> CompressAndSerialize for T where T: Serialize { ... }
but then I get an error (conflicting implementations of trait CompressAndSerialize for Vec<u8>), which seems normal since there are indeed two implementations covering Vec<u8> :/
3) Worst solution, but the one I'm heading for: using TypeId::of::<T>()
and skipping serialization if the data is already a Vec<u8>. I would still have to wrap it in an enum so that deserialization knows whether the data is a Vec<u8> or something else...
Edit: this isn't possible because the type must have a 'static lifetime, which is almost never the case (and not in my case anyway)
Sorry, this is a bit long and quite specific, but I hope one of you will have suggestions on how to deal with this problem :D
Edit: the serde documentation (https://serde.rs/impl-serialize.html#other-special-cases) links to an issue in rust-lang/rust (https://github.com/rust-lang/rust/issues/31844); once that issue is resolved I won't have any problem with the serialization of Vec<u8>, since serde_bytes won't be needed anymore. Too bad the issue has been open since Feb 2016 :'(
Maybe the question is not entirely clear, but it is simple. Let's say I have a Rust struct.
struct Person {
    _id: u32,
    name: String,
}
And a vector of such structs.
let people: Vec<Person> = foo;
Is there a nicer way of getting a vector with, let's say, all the Persons' _id? My first instinct was just to write a for loop.
let mut people_ids: Vec<u32> = vec![];
for person in people {
    people_ids.push(person._id);
}
people_ids
And of course, it works. But if you are working with structs with multiple fields and want to get multiple vectors of different fields, writing a for loop for each field becomes very repetitive. I wonder what a better and more general way to do it is?
First of all, you shouldn't start structure fields with an underscore. That said, this works:
let people_ids: Vec<_> = people.iter().map(|p| p._id).collect();
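A minimal, self-contained sketch of that pattern (assuming the field is renamed to id as suggested above, and some made-up data); the same one-liner is repeated per field, so no hand-written loops are needed:

struct Person {
    id: u32,
    name: String,
}

fn main() {
    let people = vec![
        Person { id: 1, name: "Ada".to_string() },
        Person { id: 2, name: "Grace".to_string() },
    ];

    // One iterator chain per field instead of one for loop per field.
    let people_ids: Vec<u32> = people.iter().map(|p| p.id).collect();
    let people_names: Vec<String> = people.iter().map(|p| p.name.clone()).collect();

    assert_eq!(people_ids, vec![1, 2]);
    assert_eq!(people_names, vec!["Ada", "Grace"]);
}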
This is similar to "How do I use a custom comparator function with BTreeSet?", however in my case I won't know the sorting criteria until runtime. The possible criteria are extensive and can't be hard-coded (think something like sorting by distance to a target, or by specific bytes in a payload, or a combination thereof). The sorting criteria won't change after the map/set is created.
The only alternatives I see are:
use a sorted Vec, but inserts and deletes are then O(n), and log(n) inserts and deletes are crucial
wrap each of the elements with the sorting criteria (directly or indirectly), but that seems wasteful (see the sketch after the hard-coded example below)
This is possible with standard C++ containers std::map/std::set but doesn't seem possible with Rust's BTreeMap/BTreeSet. Is there an alternative in the standard library or in another crate that can do this? Or will I have to implement this myself?
My use-case is a database-like system where elements in the set are defined by a schema, like:
Element {
FIELD x: f32
FIELD y: f32
FIELD z: i64
ORDERBY z
}
But since the schema is user-defined at runtime, the elements are stored as raw byte buffers in a set (BTreeSet<Vec<u8>>). Likewise, the order of the elements is user-defined. So the comparator I would give to BTreeSet would look like |a, b| schema.cmp(a, b). Hard-coded, the above example may look something like:
fn cmp(&self, a: &Vec<u8>, b: &Vec<u8>) -> Ordering {
    let a_field = self.get_field(a, 2).as_i64();
    let b_field = self.get_field(b, 2).as_i64();
    a_field.cmp(&b_field)
}
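For reference, a minimal sketch of the "wrap each element with the sorting criteria (indirectly)" alternative mentioned above, assuming a hypothetical Schema type; each element only carries an Rc to the shared schema, so the per-element overhead is one pointer:

use std::cmp::Ordering;
use std::collections::BTreeSet;
use std::rc::Rc;

// Hypothetical stand-in for the user-defined schema above.
struct Schema;

impl Schema {
    fn cmp(&self, a: &[u8], b: &[u8]) -> Ordering {
        // e.g. extract and compare the ORDERBY field from the raw bytes
        a.cmp(b)
    }
}

// Each element is wrapped together with a handle to the shared schema,
// so Ord can delegate to the runtime-defined comparator.
struct Row {
    schema: Rc<Schema>,
    bytes: Vec<u8>,
}

impl PartialEq for Row {
    fn eq(&self, other: &Self) -> bool {
        self.cmp(other) == Ordering::Equal
    }
}

impl Eq for Row {}

impl PartialOrd for Row {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

impl Ord for Row {
    fn cmp(&self, other: &Self) -> Ordering {
        self.schema.cmp(&self.bytes, &other.bytes)
    }
}

fn main() {
    let schema = Rc::new(Schema);
    let mut set: BTreeSet<Row> = BTreeSet::new();
    set.insert(Row { schema: Rc::clone(&schema), bytes: vec![2, 0] });
    set.insert(Row { schema: Rc::clone(&schema), bytes: vec![1, 0] });
    // Iteration follows the schema-defined order.
    assert_eq!(set.iter().next().unwrap().bytes, vec![1, 0]);
}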
Would it be possible to pass the comparator closure as an argument to each node operation that needs it? It would then be owned by the tree wrapper instead of being cloned into every node.
If I want to define a function like so:
fn f<T>(in_slice: &[T], out_slice: &mut [T]) {
}
Is there any way to guarantee at compile time that the two slices have the same length?
No, because the compiler does not know the lengths at compile time.
No.
Not yet. It's likely that we will at some point gain support for uints in generics, after which it should become possible, something like:
fn f<T, static N: uint>(in_slice: &[T, ..N], out_slice: &mut [T, ..N]) { ... }
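For reference, a sketch of how this looks with const generics as they later stabilized; using fixed-size array references makes the equal-length requirement a compile-time guarantee (assuming the caller actually has arrays rather than arbitrary slices):

// Both parameters share the same const length N, so a length mismatch
// is a type error at the call site.
fn f<T, const N: usize>(in_slice: &[T; N], out_slice: &mut [T; N]) {
    // the body can rely on in_slice.len() == out_slice.len() == N
}

fn main() {
    let input = [1, 2, 3];
    let mut output = [0; 3];
    f(&input, &mut output); // OK: both are [i32; 3]
    // let mut wrong = [0; 4];
    // f(&input, &mut wrong); // would not compile: mismatched lengths
}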