How to make diesel auto-generate the model - Rust

I am now using this command to generate schema in rust diesel:
diesel --database-url postgres://postgres:kZLxttcZSN@127.0.0.1:5432/rhythm \
migration run --config-file="${CURRENT_DIR}"/diesel-rhythm.toml
and this is the toml config:
[print_schema]
file = "src/model/diesel/rhythm/rhythm_schema.rs"
# This will cause only the favorites, songs and playlist tables to be output
filter = { only_tables = ["favorites", "songs", "playlist"] }
Is it possible to make diesel auto-generate the model entity? The entity may look like this:
#[derive(Serialize, Queryable, Deserialize, Default)]
pub struct Music {
pub id: i64,
pub name: String,
pub source_id: String
}
Right now I write the entity by hand. What should I do to make the diesel CLI generate it? I read the documentation and did not find any useful configuration for this.

You are looking for diesel_cli_ext
First install diesel_cli_ext:
cargo install diesel_cli_ext
Then you would have to generate your schema file the diesel way, if you haven't yet:
diesel print-schema > src/schema.rs
Finally you have to generate the models file:
diesel_ext --model > src/models.rs
The models for the tables in your schema file would then be generated in src/models.rs, e.g.:
#[derive(Queryable)]
pub struct Music {
pub id: i64,
pub name: String,
pub source_id: String
}
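For context, diesel_ext derives each struct from the corresponding table! entry in schema.rs, so the column types determine the Rust field types (for PostgreSQL, Int8 maps to i64 and Varchar/Text to String). A hypothetical entry that would produce a struct shaped like the one above (the table and column names here are illustrative, not taken from the question's database):
table! {
    // Hypothetical table definition as emitted by `diesel print-schema`;
    // diesel_ext turns each column into a struct field of the matching Rust type.
    songs (id) {
        id -> Int8,
        name -> Varchar,
        source_id -> Varchar,
    }
}
Note that the extra derives from the question (Serialize, Deserialize, Default) are not part of the default output, so you may still need to add them to the generated structs or check diesel_cli_ext's options for customizing the emitted derives.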

Related

Best way to populate a struct from a similar struct?

I am new to Rust, and am attempting to take a struct returned from a library (referred to as the source struct) and convert it into a protobuf message using prost. The goal is to take the source struct, map the source struct types to the protobuf message types (or rather, the appropriate types for the prost-generated struct, referred to as the message struct), and populate the message struct fields using fields from the source struct. The source struct fields are a subset of the message struct fields. For example:
pub struct Message {
pub id: i32,
pub email: String,
pub name: String,
}
pub struct Source {
pub email: Email,
pub name: Name,
}
So, I would like to take fields from Source, map the types to the corresponding types in Message, and populate the fields of Message using Source (the fields have the same names). Currently, I am manually assigning the values by creating a Message struct and doing something like message_struct.email = source_struct.email.to_string();. Except I have multiple Message structs based on protobuf, some having 20+ fields, so I'm really hoping to find a more efficient way of doing this.
If I understand you correctly, you want to define a new struct based on the fields of another. In that case you have to use macros.
https://doc.rust-lang.org/book/ch19-06-macros.html
Also this question (and answer) could be useful for you Is it possible to generate a struct with a macro?
To convert values from one struct to another, the best way is to implement the From<T> or Into<T> trait.
https://doc.rust-lang.org/std/convert/trait.From.html
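For example, a minimal sketch of the From approach for the Source/Message pair above. Since id is not part of Source it is passed in alongside it here, and Email and Name are assumed to support .to_string(), as in your current code:
impl From<(i32, Source)> for Message {
    fn from((id, source): (i32, Source)) -> Self {
        Message {
            id,
            email: source.email.to_string(),
            name: source.name.to_string(),
        }
    }
}

// Usage:
// let message = Message::from((42, source));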
Updating one struct from the fields of another with the ..foo syntax is called FRU (functional record update).
This currently works only between structs of the same type (or the same type modulo generic parameters).
RFC 2528 talks about a generalization that would make this work:
struct Foo {
field1: &'static str,
field2: i32,
}
struct Bar {
field1: f64,
field2: i32,
}
let foo = Foo { field1: "hi", field2: 1 };
let bar = Bar { field1: 3.14, ..foo };
Unfortunately, this has not yet been implemented.
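For reference, a minimal example of the record update syntax that does compile today, reusing the Foo struct from the snippet above (same type on both sides):
fn main() {
    let foo = Foo { field1: "hi", field2: 1 };
    // Same-type functional record update works on stable Rust.
    let foo2 = Foo { field1: "bye", ..foo };
    assert_eq!(foo2.field2, 1);
}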
You could make a method to create a Message from a Source like this.
impl Message {
pub fn from_source(source: &Source, id: i32) -> Self {
Message {
id,
email: source.email.to_string(),
name: source.name.to_string(),
}
}
}
And then,
let source = // todo
let id = // todo
let message = Message::from_source(&source, id);

Rust find in Vec without creating new iterator each time

I am processing a lot of data. I am working with the cargo dumps, in hopes of creating an interconnected graph of all crates. I get the data from a postgres dump and try to connect it, matching each version of a package with the latest possible version of each dependency. I have to go through each dependency, find the possible versions, parse the version, and see whether the version of the package matches the required version of the dependency.
I can say that the first semver match found is always the right one, since I get the data ordered DESC from the db. Now, to my problem. I programmed this logic, but then found out that executing my code is going to take a long time: around 3 hours on my hardware, and a lot more in production. This is not acceptable. I've pinpointed my problem: I always create a new iterator and then consume it with the find method. Is there any way to find a value inside a Vec without consuming the iterator?
I have the following structs:
#[derive(Debug, Clone, sqlx::FromRow)]
pub struct CargoCrateVersionRow {
pub id: i32,
pub crate_id: i32,
pub version_num: String,
pub created_at: sqlx::types::chrono::NaiveDateTime,
pub updated_at: sqlx::types::chrono::NaiveDateTime,
pub downloads: i32,
pub features: sqlx::types::Json<HashMap<String, Vec<String>>>,
pub yanked: bool,
pub license: Option<String>,
pub crate_size: Option<i32>,
pub published_by: Option<i32>,
}
#[derive(Debug, sqlx::FromRow)]
pub struct CargoDependenciesRow {
pub version_id: i32,
pub crate_id: i32,
pub req: String,
pub optional: bool,
pub features: Vec<String>,
pub kind: i32,
}
#[derive(Debug)]
pub struct CargoCrateVersionDependencyEdge {
pub version_id_from: i32,
pub version_id_to: i32,
pub optional: bool,
pub with_features: Vec<String>,
pub kind: i32,
}
SQL command for retrieving crate_dependencies:
select version_id, crate_id, req, optional, features, kind
from dependencies;
SQL command for retrieving crate_versions:
select
id, crate_id, num "version_num", created_at, updated_at,
downloads, features, yanked, license, crate_size, published_by
from
versions
order by
id desc;
My custom logic:
let mut cargo_crate_dependency_edges: Vec<CargoCrateVersionDependencyEdge> = vec![];
let mut i = 0;
for dep in crate_dependencies {
if i % 10_000 == 0 {
println!("done 10k");
}
let req = skip_fail!(VersionReq::parse(dep.req.as_str()));
// I do not wish to create new iter each time, as this includes millions of structs
let possible_crate_version = crate_versions.iter().find(|s| {
if s.crate_id != dep.crate_id {
return false;
}
// Here, I'm using semver crate
if let Ok(parsed_version) = Version::parse(&s.version_num) {
req.matches(&parsed_version)
} else {
false
}
});
if let Some(found_crate_version) = possible_crate_version {
cargo_crate_dependency_edges.push(
CargoCrateVersionDependencyEdge {
version_id_from: dep.version_id,
version_id_to: found_crate_version.id,
optional: dep.optional,
with_features: dep.features.clone(),
kind: dep.kind,
}
);
}
i += 1;
}
println!("{}", cargo_crate_dependency_edges.len());
After some time spent debugging, I've come to the conclusion that I need to somehow get rid of creating the iterator, because that's my bottleneck. I've benchmarked the library, pushing onto the Vec, basically everything.
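One common way to sidestep the repeated full scan, sketched here as an illustration rather than a tested solution (the names below are mine, the structs are the ones from the question), is to group crate_versions by crate_id once and then only search the matching bucket for each dependency:
use std::collections::HashMap;

// Build the index once before the dependency loop.
let mut versions_by_crate: HashMap<i32, Vec<&CargoCrateVersionRow>> = HashMap::new();
for v in &crate_versions {
    // `crate_versions` is ordered by id DESC, so each bucket keeps that order
    // and the first semver match is still the right one.
    versions_by_crate.entry(v.crate_id).or_default().push(v);
}

// Inside the dependency loop, the full scan becomes a bucket lookup:
// let possible_crate_version = versions_by_crate
//     .get(&dep.crate_id)
//     .and_then(|versions| versions.iter().find(|s| {
//         Version::parse(&s.version_num)
//             .map(|parsed| req.matches(&parsed))
//             .unwrap_or(false)
//     }));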

Rust Diesel Abstract update function

I am currently trying to implement an abstract function which will update a few meta fields for any table in the database, but I am running into problems with Identifiable.
I have a database where every table has meta-fields:
....
pub updu: Option<Uuid>, // ID of a user who changed it
pub updt: Option<NaiveDateTime>, //updated with current date/time on every change
pub ver: Option<i32>, //Version increases on every change
.....
I want to implement a function which performs this update for any entity. Currently I have this implementation:
pub fn update<Model>(
conn: &PgConnection,
old_model: Model,
mut updated_model: Model,
user_id: Uuid,
) -> Result<Model, diesel::result::Error>
where
Model: MetaFields + AsChangeset<Target = <Model as HasTable>::Table> + IntoUpdateTarget,
Update<Model, Model>: LoadQuery<PgConnection, Model>
{
updated_model.update_fields(user_id);
Ok(
diesel::update(old_model)
.set(updated_model)
.get_result(conn).unwrap()
)
}
When I try to call it, I get this error:
error[E0277]: the trait bound `Marking: Identifiable` is not satisfied
--> src/service/marking_service.rs:116:24
|
116 | web::block(move || common::dao::update(&conn2, real_marking1[0].clone(), marking2_to_update, jwt_token.user_id))
| ^^^^^^^^^^^^^^^^^^^ the trait `Identifiable` is not implemented for `Marking`
|
= help: the following implementations were found:
<&'ident Marking as Identifiable>
= note: required because of the requirements on the impl of `IntoUpdateTarget` for `Marking`
For more information about this error, try `rustc --explain E0277`.
error: could not compile `testapi` due to previous error
The entity I am trying to update in this example is:
use chrono::{NaiveDateTime, Utc};
use common::model::MetaFields;
use common::utils::constants::DEL_MARK_AVAILABLE;
use serde::{Deserialize, Serialize};
use serde_json;
use uuid::Uuid;
use crate::schema::marking;
#[derive(
Clone,
Serialize,
Deserialize,
Debug,
Queryable,
Insertable,
AsChangeset,
Identifiable,
QueryableByName,
Default,
)]
#[primary_key(uuid)]
#[table_name = "marking"]
pub struct Marking {
pub uuid: Uuid,
pub updt: Option<NaiveDateTime>,
pub ver: Option<i32>,
pub updu: Option<Uuid>,
pub comment: Option<String>,
pub status: Option<String>,
}
impl MetaFields for Marking {
fn update_fields(&mut self, user_id: Uuid) {
self.updu = Option::from(user_id);
self.ver = Option::from(self.ver.unwrap() + 1);
self.updt = Option::from(Utc::now().naive_local());
}
}
As you can see, Identifiable is derived for this entity, but for some reason update cannot see it. Could someone suggest what I am missing here?
Update, schema:
table! {
marking (uuid) {
uuid -> Uuid,
updt -> Nullable<Timestamp>,
ver -> Nullable<Int4>,
updu -> Nullable<Uuid>,
comment -> Nullable<Varchar>,
status -> Nullable<Varchar>,
}
}
diesel = { version = "1.4.6", features = ["postgres", "uuid", "chrono", "uuidv07", "serde_json"] }
r2d2 = "0.8"
r2d2-diesel = "1.0.0"
diesel_json = "0.1.0"
Your code is almost correct. The error message already mentions the issue:
#[derive(Identifiable)] basically generates the following impl: impl<'a> Identifiable for &'a Marking { /* … */ }, which means the trait is only implemented for a reference to the struct. Depending on your other trait setup you can try the following things:
Pass a reference as old_model to common::dao::update (see the sketch below).
Change the definition of common::dao::update to take a reference as the second argument. Then you can separate the trait bounds for Model so that IntoUpdateTarget is bound on &Model instead.
(It's hard to guess which one will be the better solution as your question is missing a lot of important context. Please try to provide a complete minimal example in the future.)
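As a concrete, non-generic illustration of the first option (a sketch only; the generic common::dao::update would need its bounds moved to &Model in the same way), using the Marking struct and MetaFields trait from the question:
use diesel::prelude::*;
use uuid::Uuid;

// Pass a *reference* to the old row: #[derive(Identifiable)] implements
// Identifiable (and therefore IntoUpdateTarget) for &Marking, not for Marking by value.
pub fn update_marking(
    conn: &PgConnection,
    old: &Marking,
    mut updated: Marking,
    user_id: Uuid,
) -> Result<Marking, diesel::result::Error> {
    updated.update_fields(user_id);
    diesel::update(old)  // &Marking: Identifiable + IntoUpdateTarget
        .set(updated)    // Marking: AsChangeset
        .get_result(conn)
}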

Eager loading of relations

I have three models with these relations:
Artist 1<->n Song 1<->n Play
I want to load Plays and eagerly load its Songs and its Artists.
I did this easily with Eloquent in PHP, and it takes only 3 DB requests (one for each model).
Is this somehow possible with diesel?
I think the Play model should look something like this:
#[derive(Queryable, Serialize, Deserialize)]
pub struct Play {
pub id: u64,
pub song_id: u64,
pub song: Song, // <-- ????
pub date: NaiveDateTime,
pub station_id: u64,
}
And loading something like this:
// loads one model only at the moment
// it should load the related models Song and Artist too
let items = plays.filter(date.between(date_from, date_to)).load(&*conn)?;
The resulting structs should be JSON-serialized to be used by the REST API. But I can't find a simple and efficient way to get the struct I need.
One way to achieve this is to join the two other tables. Unfortunately I did not figure out how to avoid enumerating all the table fields. This is not an elegant solution, but it should work.
fn select_plays_with_song_and_artist(
conn: &Conn,
date_from: NaiveDate,
date_to: NaiveDate,
) -> Result<Vec<(models::Play, models::Song, models::Artist)>> {
plays::table
.filter(plays::date.between(date_from, date_to))
.inner_join(songs::table.inner_join(artists::table))
.select((
(
plays::id,
plays::song_id,
plays::date,
plays::station_id,
),
(
songs::id,
// other relevant fields...
),
(
artists::id,
// other relevant fields...
),
))
.load::<(models::Play, models::Song, models::Artist)>(conn)
}
First of all: if you post a question about diesel online, please always include the relevant part of your schema, otherwise others need to guess what it looks like.
I will assume the following schema for your question:
table! {
plays (id) {
id -> BigInt,
song_id -> BigInt,
date -> Timestamp,
station_id -> BigInt,
}
}
table! {
songs(id) {
id -> BigInt,
name -> Text,
}
}
joinable!(plays -> songs (song_id));
Now there is an issue with your Play struct: u64 is not a type supported by diesel. Check out diesel's documentation on which Rust types are compatible with the BigInt SQL type.
That means I will assume the following structs implementing Queryable:
#[derive(Queryable, Serialize, Deserialize)]
pub struct Play {
pub id: i64,
// pub song_id: u64, leave this out as it is a duplicate with `song` below
pub song: Song,
pub date: NaiveDateTime,
pub station_id: i64,
}
#[derive(Queryable, Serialize, Deserialize)]
pub struct Song {
pub id: i64,
pub name: String,
}
Now to come to your core question: first of all, diesel does not have the concept of a single model as known from other ORMs. If you have ever used such a thing, don't try to apply that knowledge to diesel; it won't work. There is a set of traits describing the capabilities of each struct. As we are talking about querying here, I will concentrate on Queryable. Queryable is designed to map the result of a query (which could include zero, one or more tables) to a Rust data structure. By default (by using the derive) it does this by mapping fields structurally, which means your select clause needs to match the structure of the type deriving Queryable.
Now back to your concrete problem: it means we must construct a query that returns (i64, (i64, String), NaiveDateTime, i64) (or rather, fields with the corresponding SQL types).
For this case this is as easy as using a single join:
plays::table.inner_join(songs::table)
.select((plays::id, songs::all_columns, plays::date, plays::station_id))
The only important thing here for the mapping is the select clause. You can append/prepend whatever other QueryDsl methods you like, as in the sketch below.
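For completeness, a sketch of the full query from your example with that select clause, loading directly into Vec<Play> (the connection and the date variables are assumed to exist, and the ? needs a surrounding function returning a compatible Result):
let items: Vec<Play> = plays::table
    .inner_join(songs::table)
    .filter(plays::date.between(date_from, date_to))
    .select((plays::id, songs::all_columns, plays::date, plays::station_id))
    .load(&conn)?;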
I couldn't find an elegant way of doing this. Here's the snippet, if you want to do it in two steps.
First, get both models using a join on the two tables.
impl Comment {
pub fn list(conn: &PgConnection, page_id: i32) -> Result<Vec<(Comment, User)>, Error> {
use crate::schema::users;
comment::table
.inner_join(users::table)
.select((comment::all_columns, users::all_columns))
.filter(comment::page_id.eq(page_id))
.load(conn)
}
}
Now, iterate over the result to fit it into your own API response structure.
#[derive(Serialize)]
pub struct CommentListResponse {
#[serde(flatten)]
pub comment: Comment,
pub user: UserResponse
}
let result = Comment::list(&connection, page.id)?;
let mut comment_list = vec![];
for (comment, user) in result {
let cr = CommentListResponse {
comment,
user: UserResponse::from(user),
};
comment_list.push(cr);
}
Ok(HttpResponse::Ok().json(comment_list))

How to date format an SQL result using Diesel?

I am using Diesel to query a DB in PostgreSQL using JOINs:
let product_id = 1;
sales::table
.inner_join(product::table)
.select((
product::description,
sales::amount,
sales::date_sale
))
.filter(sales::product_id.eq(product_id))
.load(&diesel::PgConnection)
My model:
pub struct Sales {
pub id: i32,
pub product_id: Option<i32>,
pub amount: Option<BigDecimal>,
pub date_sale: Option<NaiveDateTime>
}
The result is as expected, but I need to apply a date format to the field sales::date_sale, which in pgAdmin I do with to_char(date_sale, 'dd/mm/YYYY').
Is it possible to use to_char in Diesel, or how else can I modify the data that Diesel brings back?
In addition to the answers provided by harmic there are two more possibilities to solve this problem.
sql_function!
Diesel provides an interface to easily define query AST nodes for SQL functions that are not provided by diesel itself. Users are encouraged to use this functionality to define missing functions on their own. (In fact, diesel internally uses the same method to define query AST nodes for the SQL functions it provides out of the box.) The defined query AST node is usable in every context where the types of its expressions are valid, so it can be used in select and where clauses.
(This is basically the type-safe version of the raw SQL solution in harmic's answer below.)
For the given question something like this should work:
#[derive(Queryable)]
pub struct Sales {
pub id: i32,
pub product_id: Option<i32>,
pub amount: Option<BigDecimal>,
pub date_sale: Option<String>,
}
use diesel::sql_types::{Nullable, Text, Timestamp};

sql_function! {
fn to_char(timestamp: Nullable<Timestamp>, format: Text) -> Nullable<Text>;
}
let product_id = 1;
sales::table
.inner_join(product::table)
.select((
product::description,
sales::amount,
to_char(sales::date_sale, "dd/mm/YYYY")
))
.filter(sales::product_id.eq(product_id))
.load(&diesel::PgConnection);
#[derive(Queryable)] + #[diesel(deserialize_as = "Type")]
Diesel's Queryable derive provides a way to apply certain type manipulations at load time via a custom attribute. In combination with the chrono solution provided by harmic, this gives the following variant:
#[derive(Queryable)]
pub struct Sales {
pub id: i32,
pub product_id: Option<i32>,
pub amount: Option<BigDecimal>,
#[diesel(deserialize_as = "MyChronoTypeLoader")]
pub date_sale: Option<String>
}
struct MyChronoTypeLoader(Option<String>);
impl Into<Option<String>> for MyChronoTypeLoader {
fn into(self) -> Option<String> {
self.0
}
}
impl<DB, ST> Queryable<ST, DB> for MyChronoTypeLoader
where
DB: Backend,
Option<NaiveDateTime>: Queryable<ST, DB>,
{
type Row = <Option<NaiveDateTime> as Queryable<ST, DB>>::Row;
fn build(row: Self::Row) -> Self {
MyChronoTypeLoader(Option::<NaiveDateTime>::build(row).map(|d| d.format("%d/%m/%Y").to_string()))
}
}
When using an ORM, data is extracted from the database in the representation most suitable for interpretation by the ORM, after which you manipulate it in the target domain (Rust in this case).
Since date_sale is an Option<NaiveDateTime>, you can use the formatting options provided by chrono:
sale.date_sale.unwrap().format("%d/%m/%Y").to_string()
(Your real code would not use unwrap(), of course!)
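For instance, a small sketch of formatting without unwrap(), assuming sale is a Sales value as defined above:
// Keep None as None, format Some(date) as dd/mm/YYYY.
let formatted: Option<String> = sale.date_sale.map(|d| d.format("%d/%m/%Y").to_string());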
Alternatively, if you really need the database to do the formatting, you could use the sql function to insert some raw SQL into your query:
let results = sales::table.select((sales::id, sql("to_char(date_sale, 'dd/mm/YYYY')")))
.load::<(i32, String)>(&conn);
This would be more useful if you were trying to implement some logic that would be easier or more efficient to do in SQL.
An example would be a filtering condition: suppose you wanted to include rows from your table where the week number is even. While you could load the whole table and then filter it in the Rust domain, that would not be very efficient. Instead you could do this:
let results = sales::table
.filter(sql("extract(week from date_sale)::smallint % 2=0"))
.load::<Sales>(&conn);
