I have a Rust project using Diesel, and it generated a schema.rs file which contains all of my tables:
table! {
    users (id) {
        id -> Uuid,
        name -> Varchar,
    }
}
table! {
    items (id) {
        id -> Uuid,
        name -> Varchar,
    }
}
How could I pass any table as an argument to my utility function?
For instance,
pub trait Search {
    fn internal_get_by_id(
        diesel_table: diesel::table, // this argument should accept any Diesel table
        table_id: diesel::table::id, // this argument should accept the table's Uuid id column
        conn: &Conn,
        id: Uuid,
    ) -> Fallible<Option<Self>>
    where
        Self: Sized,
    {
        diesel_table
            .filter(table_id.eq(id))
            .first(conn.raw())
            .optional()
            .map_err(Error::from)
    }

    fn get_by_id(conn: &Conn, id: Uuid) -> Fallible<Option<Self>>
    where
        Self: Sized;
}
impl Search for User {
    fn get_by_id(conn: &Conn, id: Uuid) -> Fallible<Option<User>> {
        Self::internal_get_by_id(users::table, users::id, conn, id)
    }
}

impl Search for Item {
    fn get_by_id(conn: &Conn, id: Uuid) -> Fallible<Option<Item>> {
        Self::internal_get_by_id(items::table, items::id, conn, id)
    }
}
First of all: it is generally not a good idea to write code that is generic over multiple tables/columns using Diesel, especially if you are new to the language and don't yet have a really good understanding of trait bounds and where clauses.
You need to list all the trait bounds required to build this generic query, so that everything can be checked at compile time. The following implementation should solve this (not tested, so hopefully I did not miss a trait bound):
fn internal_get_by_id<T, C>(
    diesel_table: T,
    table_id: C,
    conn: &Conn,
    id: Uuid,
) -> Fallible<Option<Self>>
where
    Self: Sized,
    T: Table + FilterDsl<dsl::Eq<C, Uuid>>,
    C: Column + Expression<SqlType = diesel::sql_types::Uuid>,
    dsl::Filter<T, dsl::Eq<C, Uuid>>: LimitDsl,
    dsl::Limit<dsl::Filter<T, dsl::Eq<C, Uuid>>>: LoadQuery<Conn, Self>,
    Self: Queryable<dsl::SqlTypeOf<dsl::Limit<dsl::Filter<T, dsl::Eq<C, Uuid>>>>, Conn::Backend>,
{
    diesel_table
        .filter(table_id.eq(id))
        .first(conn.raw())
        .optional()
        .map_err(Error::from)
}
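These bounds pull in a handful of traits and type aliases that need to be in scope. Roughly the following imports should cover it (paths as in Diesel 1.x; some paths differ in Diesel 2.x, where LoadQuery also gains a lifetime parameter):
use diesel::dsl;
use diesel::expression::Expression;
use diesel::prelude::*; // Queryable, ExpressionMethods, etc.
use diesel::query_dsl::methods::{FilterDsl, LimitDsl};
use diesel::query_dsl::LoadQuery;
use diesel::query_source::{Column, Table};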
I'm trying to access an API where I can specify what kinds of fields I want included in the result (for example "basic", "advanced", "irrelevant").
The Rust struct to represent that would look something like:
Values {
    a: Option<String>,
    b: Option<String>,
    c: Option<String>,
    d: Option<String>,
}
or probably better:
Values {
    a: Option<Basic>,      // With field a
    b: Option<Advanced>,   // With fields b, c
    c: Option<Irrelevant>, // With field d
}
Using this is possible, but I'd love to reduce the handling of Option for the caller.
Is it possible to leverage the type system to simplify the usage? (Or any other way I'm not realizing?)
My idea was something in this direction, but I think that might not be possible with Rust (at least without macros):
https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=093bdf1853978af61443d547082576ca
struct Values {
    a: Option<&'static str>,
    b: Option<&'static str>,
    c: Option<&'static str>,
}

trait ValueTraits {}
impl ValueTraits for dyn Basic {}
impl ValueTraits for dyn Advanced {}
impl ValueTraits for Values {}

trait Basic {
    fn a(&self) -> &'static str;
}

trait Advanced {
    fn b(&self) -> &'static str;
    fn c(&self) -> &'static str;
}

impl Basic for Values {
    fn a(&self) -> &'static str {
        self.a.unwrap()
    }
}

impl Advanced for Values {
    fn b(&self) -> &'static str {
        self.b.unwrap()
    }
    fn c(&self) -> &'static str {
        self.c.unwrap()
    }
}

// Something like this is probably not possible, as far as I understand Rust
fn get_values<T1, T2>() -> T1 + T2 {
    Values {
        a: "A",
        b: "B",
        c: "C",
    }
}

fn main() {
    let values = get_values::<Basic, Advanced>();
    println!("{}, {}, {}", values.a(), values.b(), values.c());
}
Clarifications (Edit)
The Values struct contains deserialized JSON data from the API I called. I can request groups of fields to be included in the response (1-n requested field groups); the fields are of different types.
If I knew beforehand which of those fields are returned, I wouldn't need them to be Option, but since the caller decides which fields are returned, the fields need to be Option (either directly, or grouped by the field groups).
There are too many possible combinations to create a struct for each of those.
I fully realize that this cannot work, it was just "pseudo-Rust":
fn get_values<T1, T2>() -> T1 + T2 {
    Values {
        a: "A",
        b: "B",
        c: "C",
    }
}
But my thought process was:
In theory, I could request the field groups via generics, so I could create a "dynamic" type that implements these traits, because I know which traits are requested.
The traits are supposed to act like a "view" into the actual struct: if they are requested beforehand, I know I should request the corresponding field groups from the API so they are included in the struct.
My knowledge of generics and traits isn't enough to confidently say "this isn't possible at all", and I couldn't find a conclusive answer before I asked here.
Sorry for the initial question not being clear about what the actual issue was; I hope the clarification helps.
From the question, I can't quite gauge whether or not you want to be able to request and return fields of multiple different types. But if all the information being returned is of a single type, you could try using a HashMap:
use std::collections::HashMap;

fn get_values(fields: &[&'static str]) -> HashMap<&'static str, &'static str> {
    let mut selected = HashMap::new();
    for field in fields {
        let val = match *field {
            "a" => "Value of a",
            "b" => "Value of b",
            "c" => "Value of c",
            // Skip requested fields that don't exist.
            _ => continue,
        };
        selected.insert(*field, val);
    }
    selected
}

fn main() {
    let fields = ["a", "c"];
    let values = get_values(&fields);
    for (field, value) in values.iter() {
        println!("`{}` = `{}`", field, value);
    }
}
Additionally, you've given me the impression that you haven't quite been able to form a relationship between generics and traits yet. I highly recommend reading over the book's "Generic Types, Traits, and Lifetimes" section.
The gist of it is that generics exist to generalize a function, struct, enum, or even a trait over any type, while traits are used to assign behaviour to a type. Traits cannot be passed as generic parameters, because traits are not types; they are behaviours. That is why get_values::<Basic, Advanced>() doesn't work: Basic and Advanced are both traits, not types.
If you want practice with generics, try generalizing get_values so that it can accept any type which can be converted into an iterator that yields &'static strs.
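For reference, one way that generalization could look (just a sketch, kept close to the version above):
use std::collections::HashMap;

// The function now accepts anything that can be turned into an iterator of
// &'static str: an array, a Vec<&'static str>, or any iterator yielding &'static str.
fn get_values<I>(fields: I) -> HashMap<&'static str, &'static str>
where
    I: IntoIterator<Item = &'static str>,
{
    let mut selected = HashMap::new();
    for field in fields {
        let val = match field {
            "a" => "Value of a",
            "b" => "Value of b",
            "c" => "Value of c",
            // Skip requested fields that don't exist.
            _ => continue,
        };
        selected.insert(field, val);
    }
    selected
}

fn main() {
    // Arrays can be passed by value; so could vec!["a", "c"] or an iterator.
    let values = get_values(["a", "c"]);
    for (field, value) in &values {
        println!("`{}` = `{}`", field, value);
    }
}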
Edit:
The clarification is appreciated. The approach you have in mind is possible, but I wouldn't recommend it, because implementing it is extremely verbose and it will panic the moment the format of the JSON you're parsing changes. Though if you really need to use traits for some reason, you could try something like this:
// One possible construct returned to you.
struct All {
    a: Option<i32>,
    b: Option<i32>,
    c: Option<i32>,
}

// A variation that only returned b and c.
struct Bc {
    b: Option<i32>,
    c: Option<i32>,
}

// impl Advanced + Basic + Default for All {...}
// impl Advanced + Default for Bc {...}

fn get_bc<T: Advanced + Default>() -> T {
    // Here you would set the fields on T.
    Default::default()
}

fn get_all<T: Basic + Advanced + Default>() -> T {
    Default::default()
}

fn main() {
    // This isn't really useful unless you want to create multiple structs that
    // are supposed to hold b and c but otherwise have different fields.
    let bc = get_bc::<Bc>();
    let all = get_all::<All>();

    // Could also do something like:
    let bc = get_bc::<All>();
    // but that could get confusing.
}
I think the above is how you're looking to solve your problem. Though if you can, I would still recommend using a HashMap with a trait object like this:
use std::collections::HashMap;
use std::fmt::Debug;

// Here define the behaviour you need to interact with the data. In this case
// it's just the ability to print to the console.
trait Value: Debug {}

impl Value for &'static str {}
impl Value for i32 {}
impl<T: Debug> Value for Vec<T> {}

fn get_values(fields: &[&'static str]) -> HashMap<&'static str, Box<dyn Value>> {
    let mut selected = HashMap::new();
    for field in fields {
        let val = match *field {
            "a" => Box::new("Value of a") as Box<dyn Value>,
            "b" => Box::new(2) as Box<dyn Value>,
            "c" => Box::new(vec![1, 3, 5, 7]) as Box<dyn Value>,
            // Skip requested fields that don't exist.
            _ => continue,
        };
        selected.insert(*field, val);
    }
    selected
}

fn main() {
    let fields = ["a", "c"];
    let values = get_values(&fields);
    for (field, value) in values.iter() {
        println!("`{}` = `{:?}`", field, value);
    }
}
I have a very data-driven program which contains different types of entities with very similar structures, differing only in specific places.
For example, every entity has a name which can be changed. Here are two example methods to demonstrate how similar the methods are:
pub fn rename_blueprint(
    &mut self,
    ctx: &mut Context,
    db_handle: &Transaction,
    blueprint_id: Uuid,
    new_name: &str,
) -> Result<(), DataError> {
    ctx.debug(format!(
        "Renaming blueprint {} to {}",
        blueprint_id, new_name
    ));
    self.assert_blueprint_exists(ctx, db_handle, blueprint_id)?;
    let mut stmt = db_handle
        .prepare("UPDATE `blueprints` SET `name` = ? WHERE `id` == ?")
        .on_err(|_| ctx.err("Unable to prepare update statement"))?;
    let changed_rows = stmt
        .execute(params![new_name.to_string(), blueprint_id])
        .on_err(|_| ctx.err("Unable to update name in database"))?;
    if changed_rows != 1 {
        ctx.err(format!("Invalid amount of rows changed: {}", changed_rows));
        return Err(DataError::InvalidChangeCount {
            changes: changed_rows,
            expected_changes: 1,
        });
    }
    ctx.blueprint_renamed(blueprint_id, new_name);
    Ok(())
}
pub fn rename_attribute(
    &mut self,
    ctx: &mut Context,
    db_handle: &Transaction,
    attribute_id: Uuid,
    new_name: &str,
) -> Result<(), DataError> {
    ctx.debug(format!(
        "Renaming attribute {} to {}",
        attribute_id, new_name
    ));
    self.assert_attribute_exists(ctx, db_handle, attribute_id)?;
    let mut stmt = db_handle
        .prepare("UPDATE `attributes` SET `name` = ? WHERE `id` == ?")
        .on_err(|_| ctx.err("Unable to prepare update statement"))?;
    let changed_rows = stmt
        .execute(params![new_name.to_string(), attribute_id])
        .on_err(|_| ctx.err("Unable to update name in database"))?;
    if changed_rows != 1 {
        ctx.err(format!("Invalid amount of rows changed: {}", changed_rows));
        return Err(DataError::InvalidChangeCount {
            changes: changed_rows,
            expected_changes: 1,
        });
    }
    ctx.attribute_renamed(attribute_id, new_name);
    Ok(())
}
The same method with almost identical code now needs to exist for 5-11 more types of entities. I can usually just replace Blueprint with the name of the other entity type, and it will all work. However, that seems like quite a silly solution.
Likewise, writing a helper method which accepts all the relevant strings, methods, and so on as parameters seems similarly silly.
I don't believe I could even avoid this by passing in some "strategy" or other indirection helper (EntityRenamer or something similar), given that the logic would need to be coded there anyway. It'd just be moving the problem one step up.
It should be mentioned that this is one of the shorter methods. Entities can also be moved, deleted, created, etc., all of which have similar code, sometimes 30+ lines long.
How to avoid code duplication of different structs with semantically equal fields/properties? does not solve my issue. That question basically asks "how to do inheritance, when no inheritance exists", whereas my code is struggling with consolidating very similar logic into the lowest common denominator. Traits or common implementations won't solve my problem, as the code would still exist; it would only be moved someplace else.
How would you go about deduplicating this code?
I'm more looking for guidelines than someone writing my code for me. A few possible solutions could be:
use macros and then just use something like entity_rename_impl!(args)
use a helper method with a different parameter for each specific thing that can differ from function to function
don't try to abstract the entire method, and instead focus on writing helper functions for smaller things, so that the methods might be duplicating, but it's very little code that is abstracted elsewhere
An MCVE (playground):
#![allow(unused)]

pub struct Transaction {}

impl Transaction {
    pub fn execute_sql(&self, sql: &str) -> i32 {
        // .. do something in the database
        0
    }
    pub fn bind_id(&self, id: Uuid) {}
}

#[derive(Clone, Copy)]
pub struct Uuid {}

impl std::fmt::Display for Uuid {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "mockup")
    }
}

pub fn assert_blueprint_exists(blueprint_id: Uuid) {}
pub fn track_blueprint_rename(id: Uuid, new_name: String) {}
pub fn assert_attribute_exists(blueprint_id: Uuid) {}
pub fn track_attribute_rename(id: Uuid, new_name: String) {}

pub fn rename_blueprint(
    db_handle: &Transaction,
    blueprint_id: Uuid,
    new_name: &str,
) -> Result<(), String> {
    println!("Renaming blueprint {} to {}", blueprint_id, new_name);
    assert_blueprint_exists(blueprint_id);
    db_handle.bind_id(blueprint_id);
    let changed_rows = db_handle.execute_sql("UPDATE `blueprints` SET `name` = ? WHERE `id` == ?");
    if changed_rows != 1 {
        println!("Invalid amount of rows changed: {}", changed_rows);
        return Err("Invalid change count in blueprint rename".to_string());
    }
    track_blueprint_rename(blueprint_id, new_name.to_string());
    Ok(())
}

pub fn rename_attribute(
    db_handle: &Transaction,
    attribute_id: Uuid,
    new_name: &str,
) -> Result<(), String> {
    println!("Renaming attribute {} to {}", attribute_id, new_name);
    assert_attribute_exists(attribute_id);
    db_handle.bind_id(attribute_id);
    let changed_rows = db_handle.execute_sql("UPDATE `attributes` SET `name` = ? WHERE `id` == ?");
    if changed_rows != 1 {
        println!("Invalid amount of rows changed: {}", changed_rows);
        return Err("Invalid change count in attribute rename".to_string());
    }
    track_attribute_rename(attribute_id, new_name.to_string());
    Ok(())
}
The way I generally solve this kind of problem is by using generics. Let the caller choose the appropriate type.
trait Entity {
    fn desc(&self) -> String;
}

impl Entity for Blueprint {
    // ...
}

pub fn rename<T>(/* ... */)
where
    T: Entity,
{
    // ...
}
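For instance, against the MCVE types above it could be fleshed out roughly like this. The Blueprint marker type, the associated constants, and the exact trait surface are just one way to slice it (illustrative, not tested): the trait carries everything that actually differs between entity kinds, and the shared logic is written exactly once.
pub trait Entity {
    const KIND: &'static str;        // "blueprint", "attribute", ...
    const RENAME_SQL: &'static str;  // per-table UPDATE statement

    fn assert_exists(id: Uuid);
    fn track_rename(id: Uuid, new_name: String);
}

pub struct Blueprint;

impl Entity for Blueprint {
    const KIND: &'static str = "blueprint";
    const RENAME_SQL: &'static str = "UPDATE `blueprints` SET `name` = ? WHERE `id` == ?";

    fn assert_exists(id: Uuid) {
        assert_blueprint_exists(id);
    }
    fn track_rename(id: Uuid, new_name: String) {
        track_blueprint_rename(id, new_name);
    }
}

// The shared rename logic, written once for all entity kinds.
pub fn rename<T: Entity>(
    db_handle: &Transaction,
    id: Uuid,
    new_name: &str,
) -> Result<(), String> {
    println!("Renaming {} {} to {}", T::KIND, id, new_name);
    T::assert_exists(id);
    db_handle.bind_id(id);
    let changed_rows = db_handle.execute_sql(T::RENAME_SQL);
    if changed_rows != 1 {
        println!("Invalid amount of rows changed: {}", changed_rows);
        return Err(format!("Invalid change count in {} rename", T::KIND));
    }
    T::track_rename(id, new_name.to_string());
    Ok(())
}

// Called as: rename::<Blueprint>(&db_handle, id, "new name")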
I have a trait like the one below to describe the structure of a type. If I know the type at compile time, I can of course inspect all of its associated constants, associated types, and static member functions. But the point is, there are hundreds (or even countless) types defined in a module, all of which satisfy this TypeModule trait, and I only get a TypeID value at run time. How can I identify which is the right type and inspect all of its associated constants, associated types, and static member functions?
pub trait Value: std::fmt::Debug {
    const SERIALIZED_SIZE_HINT: usize; // estimated serialized size of this NBT data type
    // Must be implemented correctly and return the correct type_id; it is not allowed to fail:
    // whatever `src` contains, it must return a valid DynamicValue.
    fn deserialize_from(src: &[u8]) -> DynamicValue;
    // Not allowed to fail, because a DynamicValue in memory is always in a valid state.
    fn serialize_into(dynamic_value: &DynamicValue) -> Vec<u8>;
}

/// One type is written per ID, implementing this trait, for static dispatch.
pub trait TypeModule {
    const TYPE_ID: TypeID<'static>;
    const TAG_LIST: &'static [Tag];
    type BlockValue: Value;
    type EntityValue: Value;
    type ItemValue: Value;

    fn get_type_info() -> TypeInfo {
        let block = if !is_same_type::<Self::BlockValue, ()>() {
            Some(SerializeDeserializeFunctions {
                deserialize_from: Self::BlockValue::deserialize_from,
                serialize_into: Self::BlockValue::serialize_into,
                serialize_size_hint: Self::BlockValue::SERIALIZED_SIZE_HINT,
            })
        } else {
            None
        };
        let entity = if !is_same_type::<Self::EntityValue, ()>() {
            Some(SerializeDeserializeFunctions {
                deserialize_from: Self::EntityValue::deserialize_from,
                serialize_into: Self::EntityValue::serialize_into,
                serialize_size_hint: Self::EntityValue::SERIALIZED_SIZE_HINT,
            })
        } else {
            None
        };
        let item = if !is_same_type::<Self::ItemValue, ()>() {
            Some(SerializeDeserializeFunctions {
                deserialize_from: Self::ItemValue::deserialize_from,
                serialize_into: Self::ItemValue::serialize_into,
                serialize_size_hint: Self::ItemValue::SERIALIZED_SIZE_HINT,
            })
        } else {
            None
        };
        TypeInfo {
            type_id: Self::TYPE_ID,
            tags: Self::TAG_LIST,
            block_functions: block,
            entity_functions: entity,
            item_functions: item,
        }
    }
}
Is it possible to write code that iterates over all the types in a module, so I can compare their associated TYPE_ID with the value given at run time and get the correct TAG_LIST, SERIALIZED_SIZE_HINT, and all the member function pointers?
The closest I have come is adding a function to TypeModule that turns a type into a TypeInfo value:
/// Helper function
struct SameType<T1, T2> {}

impl<T1, T2> SameType<T1, T2> {
    const VALUE: bool = false;
}

impl<T> SameType<T, T> {
    const VALUE: bool = true;
}

fn is_same_type<T1, T2>() -> bool {
    SameType::<T1, T2>::VALUE
}

#[derive(PartialEq, Eq, Copy, Clone)]
pub enum Tag {}

pub struct SerializeDeserializeFunctions {
    deserialize_from: fn(&[u8]) -> DynamicValue,
    serialize_into: fn(&DynamicValue) -> Vec<u8>,
    serialize_size_hint: usize,
}

/// Write an instance of this type for every ID, for dynamic dispatching.
pub struct TypeInfo {
    type_id: TypeID<'static>,
    tags: &'static [Tag],
    block_functions: Option<SerializeDeserializeFunctions>,
    entity_functions: Option<SerializeDeserializeFunctions>,
    item_functions: Option<SerializeDeserializeFunctions>,
}
I can turn a known type into a TypeInfo, but I don't know how to do this check for hundreds of types in a module.
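(The SameType helper above is only a sketch; stable Rust rejects the two overlapping impls, so a compiling version of the check would have to look something like this, assuming the compared types are 'static:)
use std::any::TypeId;

// A compiling stand-in for SameType: compare TypeIds instead. This requires
// both types to be 'static, which the concrete Value types here presumably are.
fn is_same_type<T1: 'static, T2: 'static>() -> bool {
    TypeId::of::<T1>() == TypeId::of::<T2>()
}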
I have this struct:
#[table_name = "clients"]
#[derive(Serialize, Deserialize, Queryable, Insertable, Identifiable, Associations)]
pub struct Client {
    pub id: Option<i64>,
    pub name: String,
    pub rank: Option<i64>,
}
and the following implementation:
impl Client {
    pub fn get(name: String, connection: &PgConnection) -> Option<Self> {
        match clients::table
            .filter(clients::name.eq(&name))
            .limit(1)
            .load::<Client>(connection)
        {
            Ok(clients) => Some(clients[0]),
            Err(_) => None,
        }
    }
}
which gives me the following error:
.load::<Client>(connection) {
^^^^ the trait `diesel::Queryable<diesel::sql_types::BigInt, _>` is not implemented for `std::option::Option<i64>`
Your error message says that you cannot query a BigInt (a 64-bit int) into an Option<i64>. That is because you forgot to say that id is nullable in your table declaration. It must look like:
table! {
    clients {
        id -> Nullable<BigInt>,
        name -> Text,
        rank -> Nullable<BigInt>,
    }
}
You can see the implementation of Queryable you are looking for in the documentation.
For me, the issue was not realising that Diesel maps fields based entirely on the order of the fields in the struct. It completely ignores field names.
If your struct has its fields defined in a different order than the schema.rs file, Diesel will map them incorrectly and cause type errors.
https://docs.diesel.rs/diesel/deserialize/trait.Queryable.html#deriving
When this trait is derived, it will assume that the order of fields on your struct match the order of the fields in the query. This means that field order is significant if you are using #[derive(Queryable)]. Field name has no effect.
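As an illustration using the clients table from above: the struct below maps correctly only because its fields are declared in the same order as the columns (id, name, rank). Renaming a field is harmless; reordering the fields is not.
table! {
    clients {
        id -> Nullable<BigInt>,
        name -> Text,
        rank -> Nullable<BigInt>,
    }
}

// The derive matches struct fields to query columns by position only,
// so this struct must list id, name, rank in exactly that order.
#[derive(Queryable)]
pub struct Client {
    pub id: Option<i64>,
    pub name: String,
    pub rank: Option<i64>,
}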
I want to write a function that will insert a type into a database where the database connection parameter is generic, so that it can work on multiple backends.
I came up with the following function to insert an object using a generic connection:
pub fn create_label<C>(connection: &C, label: &model::Label)
where
    C: Connection,
    C::Backend: diesel::backend::Backend,
    C::Backend: diesel::backend::SupportsDefaultKeyword,
{
    diesel::insert(&label)
        .into(schema::label::table)
        .execute(connection);
}
If I don't include the SupportsDefaultKeyword constraint, the function will not compile. When calling it with a SqliteConnection as the connection parameter, I get the following error:
database::create_label(&db_conn, &label);
^^^^^^^^^^^^^^^^^^^^^^ the trait
'diesel::backend::SupportsDefaultKeyword' is not implemented for
'diesel::sqlite::Sqlite'
This would imply that inserting data with a SqliteConnection does not work. That's obviously not the case, and furthermore changing create_label such that it takes a SqliteConnection directly works just fine.
pub fn create_label(connection: &SqliteConnection, label: &model::Label) {
    diesel::insert(&label)
        .into(schema::label::table)
        .execute(connection);
}
Why is it that the generic function requires the SupportsDefaultKeyword constraint and the function taking SqliteConnection does not?
Here is a minimal example illustrating the problem. As the comments in main indicate, the call to create_label will not compile, failing with the error from above, whereas the call to create_label_sqlite does compile:
#[macro_use]
extern crate diesel;
#[macro_use]
extern crate diesel_codegen;

mod schema {
    table! {
        labels {
            id -> Integer,
            name -> VarChar,
        }
    }
}

mod model {
    use schema::labels;

    #[derive(Debug, Identifiable, Insertable)]
    #[table_name = "labels"]
    pub struct Label {
        pub id: i32,
        pub name: String,
    }
}

use diesel::ExecuteDsl;
use diesel::Connection;
use diesel::prelude::*;
use diesel::sqlite::SqliteConnection;

pub fn create_label<C>(connection: &C, label: &model::Label)
where
    C: Connection,
    C::Backend: diesel::backend::Backend,
    C::Backend: diesel::backend::SupportsDefaultKeyword,
{
    diesel::insert(label)
        .into(schema::labels::table)
        .execute(connection)
        .expect("nope");
}

pub fn create_label_sqlite(connection: &SqliteConnection, label: &model::Label) {
    diesel::insert(label)
        .into(schema::labels::table)
        .execute(connection)
        .expect("nope");
}

pub fn establish_connection() -> SqliteConnection {
    let url = "test.db";
    SqliteConnection::establish(&url).expect(&format!("Error connecting to {}", url))
}

fn main() {
    let label = model::Label {
        id: 1,
        name: String::from("test"),
    };
    let conn = establish_connection();
    create_label(&conn, &label);        /* Does not compile */
    create_label_sqlite(&conn, &label); /* Compiles */
}
[dependencies]
diesel = { version = "0.16.0", features = ["sqlite"] }
diesel_codegen = "0.16.0"
The Diesel function execute has multiple concrete implementations. The two that are relevant here are:
impl<'a, T, U, Op, Ret, Conn, DB> ExecuteDsl<Conn, DB> for BatchInsertStatement<T, &'a [U], Op, Ret>
where
    Conn: Connection<Backend = DB>,
    DB: Backend + SupportsDefaultKeyword,
    InsertStatement<T, &'a [U], Op, Ret>: ExecuteDsl<Conn>,

impl<'a, T, U, Op, Ret> ExecuteDsl<SqliteConnection> for BatchInsertStatement<T, &'a [U], Op, Ret>
where
    InsertStatement<T, &'a U, Op, Ret>: ExecuteDsl<SqliteConnection>,
    T: Copy,
    Op: Copy,
    Ret: Copy,
As you can see from these two, the implementation for SQLite is special-cased. I don't know enough about the details of Diesel to know why, but I'd guess that SQLite is missing the default keyword.
You can instead reformulate the requirements for any connection that works with that particular statement:
use diesel::query_builder::insert_statement::InsertStatement;

pub fn create_label<C>(connection: &C, label: &model::Label)
where
    C: Connection,
    for<'a> InsertStatement<schema::labels::table, &'a model::Label>: ExecuteDsl<C>,
{
    diesel::insert(label)
        .into(schema::labels::table)
        .execute(connection)
        .expect("nope");
}
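With that bound in place, the call from the original main should compile unchanged for the SQLite connection (and for any other connection whose ExecuteDsl impl covers this exact statement):
fn main() {
    let label = model::Label {
        id: 1,
        name: String::from("test"),
    };
    let conn = establish_connection();
    create_label(&conn, &label); // now accepted for SqliteConnection as well
}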