Using clap-derive with two groups of arguments - rust

Given this Clap argument struct, I would like to allow users either to supply the config param, or any of the other params flattened in from the sub-structs. In the latter case the connection param is required, while the other flattened params remain optional. Note that the watch param is allowed in both cases. How can this be done in clap-derive v4+?
#[derive(Parser, Debug)]
#[command(about, version)]
pub struct Args {
    connection: Vec<String>,
    #[arg(short)]
    watch: bool,
    #[arg(short)]
    config: Option<PathBuf>,
    #[command(flatten)]
    srv: SrvArgs,
    #[command(flatten)]
    pg: PgArgs,
}
#[derive(clap::Args, Debug)]
#[command(about, version)]
pub struct SrvArgs {
    #[arg(short)]
    pub keep: Option<usize>,
}

#[derive(clap::Args, Debug)]
#[command(about, version)]
pub struct PgArgs {
    #[arg(short)]
    pub pool: Option<i32>,
}
Allowed usages:
-c filename [-w]
[-p 10] [-k 5] [-w] connection [...connection]
I tried to do this by moving all fields except config and watch into another struct and putting #[arg(group = "cfg")] on both, but it doesn't work when the field also has the #[command(flatten)] attribute.
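One direction that might express this without a named group (a sketch, not verified against clap v4's handling of flattened structs): make config conflict with the flattened options and the positional connections, and make connection required only when config is absent. The argument IDs used below are the field names, which is clap-derive's default.

#[derive(Parser, Debug)]
#[command(about, version)]
pub struct Args {
    // Positional connections, required only when no config file is given
    #[arg(required_unless_present = "config")]
    connection: Vec<String>,

    // Allowed in both modes
    #[arg(short)]
    watch: bool,

    // When a config file is supplied, the other mode's arguments are rejected
    #[arg(short, conflicts_with_all = ["connection", "keep", "pool"])]
    config: Option<PathBuf>,

    #[command(flatten)]
    srv: SrvArgs,

    #[command(flatten)]
    pg: PgArgs,
}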

Related

How do I choose a module for serialization and deserialization based on platform?

Is there any way to add a platform based condition on #[serde(with = "path_handling")]?
Basically, I want to use this custom Serde method only on Unix; on Windows I want to use the default behaviour.
pub struct Res {
    pub last: bool,
    #[serde(with = "path_handling")] // ignore this line on Windows, as path_handling contains Unix-specific logic
    pub path: PathBuf,
}
Use cfg_attr with target_family:
pub struct Res {
    pub last: bool,
    #[cfg_attr(target_family = "unix", serde(with = "path_handling"))]
    pub path: PathBuf,
}
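Depending on what path_handling contains, the module itself may also need to be compiled only on Unix, so its platform-specific code never reaches the Windows build (a sketch; the module name is from the question, the gating is an assumption):

#[cfg(target_family = "unix")]
mod path_handling {
    // Unix-only (de)serialization helpers live here, so this module is
    // compiled out entirely on Windows.
}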

Rust find in Vec without creating new iterator each time

I am processing a lot of data from the cargo dumps, in hopes of building an interconnected graph of all crates. I get the data from a Postgres dump and try to connect it: each version of a package should be linked to the latest possible version of each of its dependencies. For every dependency I have to find the candidate versions, parse each one, and check whether it satisfies the dependency's version requirement.
I can say that the first semver match found is always the right one, since I get the data ordered DESC from the db. Now, to my problem: I programmed this logic, but then found out that executing my code is going to take a long time, around 3 hours on my hardware and a lot more in production. This is not acceptable. I've pinpointed my problem: I always create a new iterator and then consume it with the find method. Is there any way to find a value inside a Vec without consuming the iterator?
I have the following structs:
#[derive(Debug, Clone, sqlx::FromRow)]
pub struct CargoCrateVersionRow {
    pub id: i32,
    pub crate_id: i32,
    pub version_num: String,
    pub created_at: sqlx::types::chrono::NaiveDateTime,
    pub updated_at: sqlx::types::chrono::NaiveDateTime,
    pub downloads: i32,
    pub features: sqlx::types::Json<HashMap<String, Vec<String>>>,
    pub yanked: bool,
    pub license: Option<String>,
    pub crate_size: Option<i32>,
    pub published_by: Option<i32>,
}

#[derive(Debug, sqlx::FromRow)]
pub struct CargoDependenciesRow {
    pub version_id: i32,
    pub crate_id: i32,
    pub req: String,
    pub optional: bool,
    pub features: Vec<String>,
    pub kind: i32,
}

#[derive(Debug)]
pub struct CargoCrateVersionDependencyEdge {
    pub version_id_from: i32,
    pub version_id_to: i32,
    pub optional: bool,
    pub with_features: Vec<String>,
    pub kind: i32,
}
SQL command for retrieving crate_dependencies:
select version_id, crate_id, req, optional, features, kind
from dependencies;
SQL command for retrieving crate_versions:
select
    id, crate_id, num "version_num", created_at, updated_at,
    downloads, features, yanked, license, crate_size, published_by
from
    versions
order by
    id desc;
My custom logic:
let mut cargo_crate_dependency_edges: Vec<CargoCrateVersionDependencyEdge> = vec![];
let mut i = 0;
for dep in crate_dependencies {
    if i % 10_000 == 0 {
        println!("done 10k");
    }
    let req = skip_fail!(VersionReq::parse(dep.req.as_str()));
    // I do not wish to create a new iter each time, as this includes millions of structs
    let possible_crate_version = crate_versions.iter().find(|s| {
        if s.crate_id != dep.crate_id {
            return false;
        }
        // Here, I'm using the semver crate
        if let Ok(parsed_version) = Version::parse(&s.version_num) {
            req.matches(&parsed_version)
        } else {
            false
        }
    });
    if let Some(found_crate_version) = possible_crate_version {
        cargo_crate_dependency_edges.push(CargoCrateVersionDependencyEdge {
            version_id_from: dep.version_id,
            version_id_to: found_crate_version.id,
            optional: dep.optional,
            with_features: dep.features.clone(),
            kind: dep.kind,
        });
    }
    i += 1;
}
println!("{}", cargo_crate_dependency_edges.len());
After some time debugging, I've come to the conclusion that I need to somehow get rid of making the iterator, because that's my bottleneck. I've benchmarked the library, pushing onto the vec, basically everything.
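One way to attack this (a sketch built on the structs above, not the asker's code): the expensive part is the full scan of crate_versions for every dependency, so grouping the versions by crate_id once lets each lookup touch only that crate's versions while keeping the DESC-by-id order, meaning the first semver match is still the newest.

use std::collections::HashMap;

// Build the index once: crate_id -> versions, preserving the DESC order from the query.
let mut versions_by_crate: HashMap<i32, Vec<&CargoCrateVersionRow>> = HashMap::new();
for v in &crate_versions {
    versions_by_crate.entry(v.crate_id).or_default().push(v);
}

for dep in &crate_dependencies {
    let req = skip_fail!(VersionReq::parse(dep.req.as_str()));
    // Only this crate's versions are scanned, newest first.
    let possible_crate_version = versions_by_crate.get(&dep.crate_id).and_then(|versions| {
        versions.iter().find(|s| {
            Version::parse(&s.version_num)
                .map(|v| req.matches(&v))
                .unwrap_or(false)
        })
    });
    if let Some(found_crate_version) = possible_crate_version {
        // push the edge exactly as in the original loop
    }
}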

Rust converting a struct to key-value pairs where the field is not None

I have a struct generated by prost (a protobuf implementation that generates Rust code from .proto files).
pub struct Data {
    #[prost(string, tag = "1")]
    pub field1: ::prost::alloc::string::String,
    #[prost(message, optional, tag = "2")]
    pub struct_2: ::core::option::Option<Struct2>,
    #[prost(message, optional, tag = "3")]
    pub struct_3: ::core::option::Option<Struct3>,
    #[prost(string, tag = "4")]
    pub test_param: ::prost::alloc::string::String,
}
I am able to decode the protobuf data into the struct above.
The catch is that some fields of the struct come back as None; only some fields are filled at any given time.
For example:
Data { field1: "testfield", struct_2: None, struct_3: None, test_param: "test" }
I want to be able to:
select only the non-None values from the struct, and
iterate over them, inserting each as a key-value pair.
For example:
let mut keyvalue = HashMap::new();
for (k, v) in buffered_data.to_vector().iter() {
    if v.is_some() {
        keyvalue.insert(k, v);
    }
}
expected output would be something like:
keyvalue:
{
"field1":"test_field",
"test_param": "test"
}
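There is no built-in reflection over struct fields, so one option is a hand-written conversion (a sketch assuming the Data struct above; to_key_value is a hypothetical helper, and since prost strings are plain Strings rather than Options, "non-empty" stands in for "set" here):

use std::collections::HashMap;

// Hypothetical helper: collect the fields that carry a value into a map.
fn to_key_value(data: &Data) -> HashMap<&'static str, String> {
    let mut map = HashMap::new();
    if !data.field1.is_empty() {
        map.insert("field1", data.field1.clone());
    }
    if !data.test_param.is_empty() {
        map.insert("test_param", data.test_param.clone());
    }
    // Optional sub-messages would need their own representation, e.g.
    // if let Some(s2) = &data.struct_2 { map.insert("struct_2", format!("{:?}", s2)); }
    map
}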

Rust Get request with parameters

I have the following asynchronous function within an #[async_trait] implementation:
pub async fn get_field_value(field: web::Path<String>, value: web::Path<String>) -> HttpResponse {
    let json_message = json!({
        "field": field.0,
        "value": value.0
    });
    HttpResponse::Ok().json(json_message)
}
Now I have my router function:
pub fn init_router(config: &mut ServiceConfig) {
    config.service(web::resource("/get/field-value/{field}/{value}").route(web::get().to(get_field_value)));
}
Then, when running the web application and requesting localhost:3000/get/field-value/name/James,
I don't get the JSON; instead I get the following error:
wrong number of parameters: 2 expected 1
I don't think I should get this error, because I pass the values for both parameters correctly.
Nor does #[async_trait] allow me to use #[get("/get/field-value/{field}/{value}")].
I think the route handler is given one argument that contains all values in the route pattern, rather than two separate String arguments. You can use a tuple to get the two values:
pub async fn get_field_value(field: web::Path<(String, String)>) -> HttpResponse
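A possible body for that variant (a sketch, assuming actix-web 4, where Path::into_inner yields the inner tuple):

pub async fn get_field_value(path: web::Path<(String, String)>) -> HttpResponse {
    // into_inner() moves the (field, value) tuple out of the Path extractor
    let (field, value) = path.into_inner();
    HttpResponse::Ok().json(json!({ "field": field, "value": value }))
}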
Or you can use serde to deserialize the fields from the route pattern for you:
#[derive(Serialize, Deserialize)]
pub struct Field {
    pub field: String,
    pub value: String,
}

pub async fn get_field_value(data: web::Path<Field>) -> HttpResponse {
    let json_message = json!({
        "field": data.field,
        "value": data.value
    });
    HttpResponse::Ok().json(json_message)
}

Understanding trait bound error in Diesel

I want to write a function that will insert a type into a database where the database connection parameter is generic, so that it can work on multiple backends.
I came up with the following function to insert an object using a generic connection:
pub fn create_label<C>(connection: &C, label: &model::Label)
where
    C: Connection,
    C::Backend: diesel::backend::Backend,
    C::Backend: diesel::backend::SupportsDefaultKeyword,
{
    diesel::insert(&label)
        .into(schema::label::table)
        .execute(connection);
}
If I don't include the SupportsDefaultKeyword constraint, the function will not compile. When calling it with a SqliteConnection as the connection parameter, I get the following error:
database::create_label(&db_conn, &label);
^^^^^^^^^^^^^^^^^^^^^^ the trait 'diesel::backend::SupportsDefaultKeyword' is not implemented for 'diesel::sqlite::Sqlite'
This would imply that inserting data with a SqliteConnection does not work. That's obviously not the case, and furthermore changing create_label such that it takes a SqliteConnection directly works just fine.
pub fn create_label(connection: &SqliteConnection, label: &model::Label) {
    diesel::insert(&label)
        .into(schema::label::table)
        .execute(connection);
}
Why is it that the generic function requires the SupportsDefaultKeyword constraint and the function taking SqliteConnection does not?
Here is a minimal example illustrating the problem. As per the comments in main(), the create_label call will not compile with the error from above, whereas the create_label_sqlite call does compile:
#[macro_use]
extern crate diesel;
#[macro_use]
extern crate diesel_codegen;

mod schema {
    table! {
        labels {
            id -> Integer,
            name -> VarChar,
        }
    }
}

mod model {
    use schema::labels;

    #[derive(Debug, Identifiable, Insertable)]
    #[table_name = "labels"]
    pub struct Label {
        pub id: i32,
        pub name: String,
    }
}

use diesel::ExecuteDsl;
use diesel::Connection;
use diesel::prelude::*;
use diesel::sqlite::SqliteConnection;

pub fn create_label<C>(connection: &C, label: &model::Label)
where
    C: Connection,
    C::Backend: diesel::backend::Backend,
    C::Backend: diesel::backend::SupportsDefaultKeyword,
{
    diesel::insert(label)
        .into(schema::labels::table)
        .execute(connection)
        .expect("nope");
}

pub fn create_label_sqlite(connection: &SqliteConnection, label: &model::Label) {
    diesel::insert(label)
        .into(schema::labels::table)
        .execute(connection)
        .expect("nope");
}

pub fn establish_connection() -> SqliteConnection {
    let url = "test.db";
    SqliteConnection::establish(&url).expect(&format!("Error connecting to {}", url))
}

fn main() {
    let label = model::Label {
        id: 1,
        name: String::from("test"),
    };
    let conn = establish_connection();
    create_label(&conn, &label); /* Does not compile */
    create_label_sqlite(&conn, &label); /* Compiles */
}
[dependencies]
diesel = { version = "0.16.0", features = ["sqlite"] }
diesel_codegen = "0.16.0"
The Diesel function execute has multiple concrete implementations. The two that are relevant here are:
impl<'a, T, U, Op, Ret, Conn, DB> ExecuteDsl<Conn, DB> for BatchInsertStatement<T, &'a [U], Op, Ret>
where
    Conn: Connection<Backend = DB>,
    DB: Backend + SupportsDefaultKeyword,
    InsertStatement<T, &'a [U], Op, Ret>: ExecuteDsl<Conn>,

impl<'a, T, U, Op, Ret> ExecuteDsl<SqliteConnection> for BatchInsertStatement<T, &'a [U], Op, Ret>
where
    InsertStatement<T, &'a U, Op, Ret>: ExecuteDsl<SqliteConnection>,
    T: Copy,
    Op: Copy,
    Ret: Copy,
As you can see from these two, the implementation for SQLite is special-cased. I don't know enough about the details of Diesel to know why, but I'd guess that SQLite is missing the default keyword.
You can instead reformulate the requirements for any connection that works with that particular statement:
use diesel::query_builder::insert_statement::InsertStatement;

pub fn create_label<C>(connection: &C, label: &model::Label)
where
    C: Connection,
    for<'a> InsertStatement<schema::labels::table, &'a model::Label>: ExecuteDsl<C>,
{
    diesel::insert(label)
        .into(schema::labels::table)
        .execute(connection)
        .expect("nope");
}
