Using HMAC as a generic - rust

I am having trouble updating the hmac and digest crates. I have a function that takes a generic HMAC type and computes the HMAC over a given input. It works with hmac 0.7 and digest 0.8, but I'm stuck trying to get the same logic running on the latest versions, hmac 0.10 and digest 0.9.
On my machine, I use rustc 1.48.0 (7eac88abb 2020-11-16).
The working example has the following Cargo.toml dependencies:
[dependencies]
hmac = "0.7"
sha2 = "0.8"
digest = "0.8"
The minimal working example is the following:
use sha2::Sha256;
use hmac::{Mac, Hmac};
use digest::generic_array::typenum::U32;

type HmacSha256 = Hmac<Sha256>;

pub struct Key([u8; 2]);

impl Key {
    pub fn print_hmac<D>(&self, message: &[u8])
    where
        D: Mac<OutputSize = U32>,
    {
        let mut mac = D::new_varkey(self.0.as_ref()).unwrap();
        mac.input(message);
        let result = mac.result();
        let code_bytes = result.code();
        println!("{:?}", code_bytes)
    }
}

pub fn main() {
    let verif_key = Key([12u8, 33u8]);
    verif_key.print_hmac::<HmacSha256>(&[83u8, 123u8]);
}
The above code compiles and works well. However, when I try to upgrade the dependencies to the latest versions, everything breaks.
Updated Cargo.toml:
[dependencies]
hmac = "0.10"
sha2 = "0.9"
digest = "0.9"
With the updates, we have some changes in the nomenclature:
.input() -> .update()
.result() -> .finalize()
.code() -> .into_bytes()
When I try to build it, I get the following error:
no function or associated item named 'new_varkey' found for type parameter 'D' in the current scope
So I tried adding the NewMac trait (which requires changing the second line to use hmac::{Mac, Hmac, NewMac};). However, now the errors are on the .update() and .finalize() calls.
I've also tried to pass the Digest generic type, rather than the Hmac, as follows:
pub fn print_hmac<D>(&self, message: &[u8])
where
    D: Digest,
{
    let mut mac = Hmac::<D>::new_varkey(self.0.as_ref()).unwrap();
    mac.update(message);
    let result = mac.finalize();
    let code_bytes = result.into_bytes();
    println!("{:?}", code_bytes)
}
But it still doesn't work.
How should I handle the generic Hmac function for the updated crates?
Sorry for the long post, and I hope I made my problem clear. Thanks community!

It's helpful to look at example code that is kept updated across versions, and luckily hmac has some tests in its repository.
Those tests use the new_test macro defined here in the crypto-mac crate. In particular, there is one line similar to yours...
let mut mac = <$mac as NewMac>::new_varkey(key).unwrap();
...which suggests that D should be cast to a NewMac implementor in your code as well.
After implementing the nomenclature updates you've already identified, your code works with the additional as NewMac cast from above, as well as the corresponding `+ NewMac` trait bound on D:
use sha2::Sha256;
use hmac::{NewMac, Mac, Hmac};
use digest::generic_array::typenum::U32;

type HmacSha256 = Hmac<Sha256>;

pub struct Key([u8; 2]);

impl Key {
    pub fn print_hmac<D>(&self, message: &[u8])
    where
        D: Mac<OutputSize = U32> + NewMac, // `+ NewMac` trait bound
    {
        let mut mac = <D as NewMac>::new_varkey(self.0.as_ref()).unwrap(); // `as NewMac` cast
        mac.update(message);
        let result = mac.finalize();
        let code_bytes = result.into_bytes();
        println!("{:?}", code_bytes)
    }
}

pub fn main() {
    let verif_key = Key([12u8, 33u8]);
    verif_key.print_hmac::<HmacSha256>(&[83u8, 123u8]);
}

Related

HMAC-SHA512 in Rust, can't get expected result

I'm trying to get HMAC-SHA512 working in Rust. The test case is taken from the Kraken API, but I just can't get it working, for a few days now.
Can anybody spot what I am missing?
I tried different HMAC libraries, and they all seem to yield the same result, so it seems it's something about how I concatenate/combine strings before feeding them to the HMAC implementation.
Cargo.toml:
[dependencies]
urlencoding = "2.1.0"
base64 = "0.13.0"
ring = "0.16.20"
sha256 = "1.0.3"
use ring::hmac;
use sha256;
use urlencoding::encode;

pub fn api_sign(
    private_key: Option<String>,
    nonse: u64,
    params: Option<String>,
    uri: String,
) -> hmac::Tag {
    let private_key = match private_key {
        Some(p) => p,
        None => panic!("Private key is not provided"),
    };
    let encoded_params = match params {
        Some(p) => encode(&p[..]).into_owned(),
        // Some(p) => p, <= tried this one too
        None => "".to_string(),
    };
    let nonse = nonse.to_string();
    let hmac_data = [nonse, encoded_params].concat();
    let hmac_data = sha256::digest(hmac_data);
    let hmac_data = [uri, hmac_data].concat();
    let key = base64::decode(private_key).unwrap();
    let key = hmac::Key::new(hmac::HMAC_SHA512, &key);
    let mut s_ctx = hmac::Context::with_key(&key);
    s_ctx.update(hmac_data.as_bytes());
    s_ctx.sign()
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_api_sign_0() {
        assert_eq!(
            base64::encode(api_sign(
                Some("kQH5HW/8p1uGOVjbgWA7FunAmGO8lsSUXNsu3eow76sz84Q18fWxnyRzBHCd3pd5nE9qa99HAZtuZuj6F1huXg==".to_string()),
                1616492376594,
                Some("nonce=1616492376594&ordertype=limit&pair=XBTUSD&price=37500&type=buy&volume=1.25".to_string()),
                "/0/private/AddOrder".to_string()
            ).as_ref()),
            "4/dpxb3iT4tp/ZCVEwSnEsLxx0bqyhLpdfOpc6fn7OR8+UClSV5n9E6aSS8MPtnRfp32bAb0nmbRn6H8ndwLUQ=="
        )
    }
}
The primary issue is that the APIs you're using are not equivalent. I'll be comparing to the Python implementation as that's what I'm most fluent in.
misunderstanding of the urlencoding API
Your Rust code takes a string of params and urlencodes it, but that's not what the Python code does: urllib.parse.urlencode takes a map of params and creates the query string. The value of postdata in the Python code is
nonce=1616492376594&ordertype=limit&pair=XBTUSD&price=37500&type=buy&volume=1.25
but the value of encoded_params in your code is:
nonce%3D1616492376594%26ordertype%3Dlimit%26pair%3DXBTUSD%26price%3D37500%26type%3Dbuy%26volume%3D1.25
It's double-urlencoded. That is because you start from the pre-built querystring and urlencode it, while the Python code starts from the params and creates the querystring (properly encoded).
I think serde_urlencoded would be a better pick as a dependency: it is used a lot more, by pretty big projects (e.g. reqwest and pretty much every high-level web framework), and it can encode a data struct (thanks to serde), which better matches the Python behaviour.
different sha256 API
sha256::digest and hashlib.sha256 have completely different behaviour:
sha256::digest(hmac_data)
returns a hex-encoded string, while
hashlib.sha256(encoded).digest()
returns the "raw" binary hash value: https://docs.python.org/3/library/hashlib.html#hashlib.hash.digest
That's why the Python code encodes the urlpath before the concatenation, message is bytes, not str.
It seems this sha256 crate outputs only hex strings, and it seems pretty low-use, so I'd say you're also using the wrong crate here; you likely want Rust Crypto's sha2.
Recommendation: porting
For this sort of situation, where there are available reference implementations, I would suggest:
dumping the intermediate values of your implementation and the reference implementation to check that they match; that would have made both the postdata and the digest issues obvious at first glance
sticking to the "reference" implementation as closely as you can until you have something working; once your version works you can make it more rustic or fix the edge cases (e.g. make parameters optional, fix the API, use API conveniences, ...)
Here's a relatively direct conversion of the Python version. I kept your return type of an hmac::Tag, but I used ring::hmac's other shortcuts to simplify that bit:
use ring::hmac;
use serde::Serialize;
use sha2::Digest;

pub fn api_sign(uri: String, data: Data, secret: String) -> hmac::Tag {
    let postdata = serde_urlencoded::to_string(&data).unwrap();
    let encoded = (data.nonce + &postdata).into_bytes();
    let mut message = uri.into_bytes();
    message.extend(sha2::Sha256::digest(encoded));
    let key = hmac::Key::new(hmac::HMAC_SHA512, &base64::decode(secret).unwrap());
    hmac::sign(&key, &message)
}

#[derive(Serialize)]
pub struct Data {
    nonce: String,
    ordertype: String,
    pair: String,
    price: u32,
    r#type: String,
    volume: f32,
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_api_sign_0() {
        let sig = api_sign(
            "/0/private/AddOrder".to_string(),
            Data {
                nonce: "1616492376594".into(),
                ordertype: "limit".into(),
                pair: "XBTUSD".into(),
                price: 37500,
                r#type: "buy".into(),
                volume: 1.25,
            },
            "kQH5HW/8p1uGOVjbgWA7FunAmGO8lsSUXNsu3eow76sz84Q18fWxnyRzBHCd3pd5nE9qa99HAZtuZuj6F1huXg==".into(),
        );
        assert_eq!(
            base64::encode(&sig),
            "4/dpxb3iT4tp/ZCVEwSnEsLxx0bqyhLpdfOpc6fn7OR8+UClSV5n9E6aSS8MPtnRfp32bAb0nmbRn6H8ndwLUQ==",
        )
    }
}
You can follow and match the signing function against the Python code pretty much line by line.

Testing with Rocket and Diesel

I am trying to build a web app with Diesel and Rocket by following the Rocket guide, but I have not been able to figure out how to test this app.
// in main.rs
#[database("my_db")]
struct PgDbConn(diesel::PgConnection);

#[post("/users", format = "application/json", data = "<user_info>")]
fn create_new_user(conn: PgDbConn, user_info: Json<NewUser>) {
    use schema::users;
    diesel::insert_into(users::table).values(&*user_info).execute(&*conn).unwrap();
}

fn main() {
    rocket::ignite()
        .attach(PgDbConn::fairing())
        .mount("/", routes![create_new_user])
        .launch();
}
// in test.rs
use crate::PgDbConn;

#[test]
fn test_user_creation() {
    let rocket = rocket::ignite().attach(PgDbConn::fairing());
    let client = Client::new(rocket).unwrap();
    let response = client
        .post("/users")
        .header(ContentType::JSON)
        .body(r#"{"username": "xyz", "email": "temp#abc.com"}"#)
        .dispatch();
    assert_eq!(response.status(), Status::Ok);
}
But this modifies the database. How can I make sure that the test does not alter the database?
I tried to create two databases and use them in the following way (I am not sure if this is recommended):
#[cfg(test)]
#[database("test_db")]
struct PgDbConn(diesel::PgConnection);

#[cfg(not(test))]
#[database("live_db")]
struct PgDbConn(diesel::PgConnection);
Now I thought I could use the test_transaction method of the diesel::connection::Connection trait in the following way:
use crate::PgDbConn;

#[test]
fn test_user_creation() {
    // !! This statement is wrong, as PgDbConn is an Fn object instead of a struct.
    // !! I am not sure how it works, but it seems this Fn object is resolved
    // !! into the struct only when used as a request guard.
    let conn = PgDbConn;
    // The Deref trait is implemented for PgDbConn, so I thought dereferencing
    // it would return a diesel::PgConnection.
    (*conn).test_transaction::<_, (), _>(|| {
        let rocket = rocket::ignite().attach(PgDbConn::fairing());
        let client = Client::new(rocket).unwrap();
        let response = client
            .post("/users")
            .header(ContentType::JSON)
            .body(r#"{"username": "Tushar", "email": "temp#abc.com"}"#)
            .dispatch();
        assert_eq!(response.status(), Status::Ok);
        Ok(())
    });
}
The above code obviously fails to compile. Is there a way to resolve this Fn object into the struct and obtain the PgConnection in it? And I am not even sure if this is the right way to do things.
Is there a recommended way to do testing while using both Rocket and Diesel?
This will fundamentally not work as you imagine, because conn will be a different connection than whatever Rocket hands your route. The test_transaction pattern assumes that you use the same connection for everything.

Cannot trigger Outcome::Failure in FromRequest implementation

While getting started developing an API with Rocket, I was implementing a request guard that is supposed to check authorization headers. When the check fails it should result in a Failure, but that is where I cannot get it to work. Outcome::Success works perfectly fine and returns the correct object, but when triggering an Outcome::Failure I always run into the issue that I cannot get it to compile:
error[E0282]: type annotations needed
--> src/main.rs:43:21
|
43 | Outcome::Failure((Status::BadRequest, RequestError::ParseError));
| ^^^^^^^^^^^^^^^^ cannot infer type for type parameter S declared on the enum Outcome
To Reproduce
main.rs
#[macro_use] extern crate rocket;

use rocket::Request;
use rocket::request::{FromRequest, Outcome};
use rocket::http::Status;
use regex::Regex;

#[get("/")]
fn index() -> &'static str {
    "Hello, world!"
}

#[get("/test")]
fn test(device: Device) -> &'static str {
    "Hello test"
}

#[derive(Debug)]
enum RequestError {
    InvalidCredentials,
    ParseError,
}

struct Device {
    id: i32
}

#[rocket::async_trait]
impl<'r> FromRequest<'r> for Device {
    type Error = RequestError;

    async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
        Outcome::Failure((Status::BadRequest, RequestError::ParseError));
        // TEST
        let d1 = Device {
            id: 123
        };
        Outcome::Success(d1)
    }
}

#[launch]
fn rocket() -> _ {
    rocket::build().mount("/", routes![index, test])
}
Cargo.toml
[package]
name = "api-sandbox"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
rocket = "0.5.0-rc.1"
regex = "1"
Expected Behavior
The type parameter S should not need to be declared, as I am not using Success(S) but Failure(E). According to the docs I can return either an error or a tuple with Status and Error, but instead the error message pops up. I have double-checked with the available resources and blogs, but was not able to correctly trigger an outcome with a failure status.
Environment:
VERSION="20.04.3 LTS (Focal Fossa)"
5.10.60.1-microsoft-standard-WSL2
Rocket 0.5.0-rc.1
I am quite new to this topic, so please let me know if I need to supply more information. Thanks for your assistance!
The type of Outcome::Failure(_) cannot be deduced. Even if not used in this particular construction, the type parameter S must be known for it to be a complete type. There is no default type or any context available that can help infer S.
The same is true of Outcome::Success(_): by itself, the type parameter E is not known. However, this compiles because there is context that helps the compiler infer it: it is used as the return value, so it must match the return type, and therefore E must be Self::Error.
This will also work for Outcome::Failure(_) if it were used as a return value:
async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
    Outcome::Failure((Status::BadRequest, RequestError::ParseError))
}

"expected bound lifetime parameter, found concrete lifetime" on StreamExt .scan() method

I am using async-tungstenite to listen to a websocket, and async-std's StreamExt to operate on the resulting stream.
I want to use a HashMap to accumulate the latest Ticker values from the websocket. These Ticker values will be looked up later to be used in calculations. I'm using the symbol (String) value of the Ticker struct as the key for the HashMap. I'm using the .scan StreamExt method to perform the accumulation.
However, I get a compilation error related to lifetimes. Here's some stripped-down code:
let tickers = HashMap::new();
let mut stream = ws
    .scan(tickers, accumulate_tickers);

while let Some(msg) = stream.next().await {
    println!("{:?}", msg)
}
...and the accumulate_tickers function:
fn accumulate_tickers(tps: &mut HashMap<String, Ticker>, bt: Ticker) -> Option<&HashMap<String, Ticker>> {
    tps.insert((*bt.symbol).to_string(), bt);
    Some(tps)
}
The compilation error I receive is as follows:
error[E0271]: type mismatch resolving `for<'r> <for<'s> fn(&'s mut std::collections::HashMap<std::string::String, ws_async::model::websocket::Ticker>, ws_async::model::websocket::Ticker) -> std::option::Option<&'s std::collections::HashMap<std::string::String, ws_async::model::websocket::Ticker>> {accumulate_tickers} as std::ops::FnOnce<(&'r mut std::collections::HashMap<std::string::String, ws_async::model::websocket::Ticker>, ws_async::model::websocket::Ticker)>>::Output == std::option::Option<_>`
--> examples/async_std-ws.rs:64:4
|
64 | .scan(tickers, accumulate_tickers);
| ^^^^ expected bound lifetime parameter, found concrete lifetime
I'm unaware of a way to provide a lifetime parameter to the scan method.
I wonder whether the issue may be related to the fact that I modify the HashMap and then try to return it (is it a move issue?). How could I resolve this, or at least narrow down the cause?
I was able to get this working. Working my way through the compiler errors, I ended up with this signature for the accumulate_tickers function:
fn accumulate_tickers<'a>(tps: &'a mut &'static HashMap<String, Ticker>, bt: Ticker) -> Option<&'static HashMap<String, Ticker>>
I do want the accumulator HashMap to have a static lifetime, so that makes sense. tps: &'a mut &'static HashMap... does look a bit strange, but it works.
Then the issue was that tickers also had to have a static lifetime (it's the initial value for the accumulator). I tried declaring it as static outside main, but it wouldn't let me initialize it with the result of a function call, HashMap::new().
I then turned to lazy_static, which lets me create a static value that does just that:
lazy_static! {
    static ref tickers: HashMap<String, Ticker> = HashMap::new();
}
This gave me a HashMap accumulator that had a static lifetime. However, like normal static values declared in the root scope, it was immutable. To fix that, I read some hints from the lazy_static team and then found https://pastebin.com/YES8dsHH. This showed me how to make my static accumulator mutable by wrapping it in Arc<Mutex<_>>.
lazy_static! {
    // from https://pastebin.com/YES8dsHH
    static ref tickers: Arc<Mutex<HashMap<String, Ticker>>> = {
        let mut ts = HashMap::new();
        Arc::new(Mutex::new(ts))
    };
}
This does mean that I have to retrieve the accumulator from the Mutex (and lock it) before reading or modifying it but, again, it works.
Putting it all together, the stripped-down code now looks like this:
#[macro_use]
extern crate lazy_static;

lazy_static! {
    // from https://pastebin.com/YES8dsHH
    static ref tickers: Arc<Mutex<HashMap<String, Ticker>>> = {
        let mut ts = HashMap::new();
        Arc::new(Mutex::new(ts))
    };
}

// SNIP

// Inside main()
let mut ticks = ws
    .scan(&tickers, accumulate_tickers);

while let Some(msg) = ticks.next().await {
    println!("{:?}", msg.lock().unwrap());
}

// SNIP

fn accumulate_tickers<'a>(tps: &'a mut &'static tickers, bt: Ticker) -> Option<&'static tickers> {
    tps.lock().unwrap().insert((*bt.symbol).to_string(), bt);
    Some(tps)
}
I'd be happy to hear suggestions for ways in which this could be made simpler or more elegant.

How can I create hygienic identifiers in code generated by procedural macros?

When writing a declarative (macro_rules!) macro, we automatically get macro hygiene. In this example, I declare a variable named f in the macro and pass in an identifier f which becomes a local variable:
macro_rules! decl_example {
    ($tname:ident, $mname:ident, ($($fstr:tt),*)) => {
        impl std::fmt::Display for $tname {
            fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
                let Self { $mname } = self;
                write!(f, $($fstr),*)
            }
        }
    }
}

struct Foo {
    f: String,
}

decl_example!(Foo, f, ("I am a Foo: {}", f));

fn main() {
    let f = Foo {
        f: "with a member named `f`".into(),
    };
    println!("{}", f);
}
This code compiles, but if you look at the partially-expanded code, you can see that there's an apparent conflict:
impl std::fmt::Display for Foo {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        let Self { f } = self;
        write!(f, "I am a Foo: {}", f)
    }
}
I am writing the equivalent of this declarative macro as a procedural macro, but do not know how to avoid potential name conflicts between the user-provided identifiers and identifiers created by my macro. As far as I can see, the generated code has no notion of hygiene and is just a string:
src/main.rs
use my_derive::MyDerive;

#[derive(MyDerive)]
#[my_derive(f)]
struct Foo {
    f: String,
}

fn main() {
    let f = Foo {
        f: "with a member named `f`".into(),
    };
    println!("{}", f);
}
Cargo.toml
[package]
name = "example"
version = "0.1.0"
edition = "2018"
[dependencies]
my_derive = { path = "my_derive" }
my_derive/src/lib.rs
extern crate proc_macro;

use proc_macro::TokenStream;
use quote::quote;
use syn::{parse_macro_input, DeriveInput, Meta, NestedMeta};

#[proc_macro_derive(MyDerive, attributes(my_derive))]
pub fn my_macro(input: TokenStream) -> TokenStream {
    let input = parse_macro_input!(input as DeriveInput);
    let name = input.ident;
    let attr = input
        .attrs
        .into_iter()
        .filter(|a| a.path.is_ident("my_derive"))
        .next()
        .expect("No name passed");
    let meta = attr.parse_meta().expect("Unknown attribute format");
    let meta = match meta {
        Meta::List(ml) => ml,
        _ => panic!("Invalid attribute format"),
    };
    let meta = meta.nested.first().expect("Must have one path");
    let meta = match meta {
        NestedMeta::Meta(Meta::Path(p)) => p,
        _ => panic!("Invalid nested attribute format"),
    };
    let field_name = meta.get_ident().expect("Not an ident");
    let expanded = quote! {
        impl std::fmt::Display for #name {
            fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
                let Self { #field_name } = self;
                write!(f, "I am a Foo: {}", #field_name)
            }
        }
    };
    TokenStream::from(expanded)
}
my_derive/Cargo.toml
[package]
name = "my_derive"
version = "0.1.0"
edition = "2018"
[lib]
proc-macro = true
[dependencies]
syn = "1.0.13"
quote = "1.0.2"
proc-macro2 = "1.0.7"
With Rust 1.40, this produces the compiler error:
error[E0599]: no method named `write_fmt` found for type `&std::string::String` in the current scope
--> src/main.rs:3:10
|
3 | #[derive(MyDerive)]
| ^^^^^^^^ method not found in `&std::string::String`
|
= help: items from traits can only be used if the trait is in scope
= note: this error originates in a macro outside of the current crate (in Nightly builds, run with -Z external-macro-backtrace for more info)
help: the following trait is implemented but not in scope; perhaps add a `use` for it:
|
1 | use std::fmt::Write;
|
What techniques exist to namespace my identifiers from identifiers outside of my control?
Summary: you can't yet use hygienic identifiers with proc macros on stable Rust. Your best bet is to use a particularly ugly name such as __your_crate_your_name.
You are creating identifiers (in particular, f) by using quote!. This is certainly convenient, but it's just a helper around the actual proc macro API the compiler offers. So let's take a look at that API to see how we can create identifiers! In the end we need a TokenStream, as that's what our proc macro returns. How can we construct such a token stream?
We can parse it from a string, e.g. "let f = 3;".parse::<TokenStream>(). But this was basically an early solution and is discouraged now. In any case, all identifiers created this way behave in a non-hygienic manner, so this won't solve your problem.
The second way (which quote! uses under the hood) is to create a TokenStream manually by creating a bunch of TokenTrees. One kind of TokenTree is an Ident (identifier). We can create an Ident via new:
fn new(string: &str, span: Span) -> Ident
The string parameter is self explanatory, but the span parameter is the interesting part! A Span stores the location of something in the source code and is usually used for error reporting (in order for rustc to point to the misspelled variable name, for example). But in the Rust compiler, spans carry more than location information: the kind of hygiene! We can see two constructor functions for Span:
fn call_site() -> Span: creates a span with call site hygiene. This is what you call "unhygienic" and is equivalent to "copy and pasting". If two identifiers have the same string, they will collide or shadow each other.
fn def_site() -> Span: this is what you are after. Technically called definition site hygiene, this is what you call "hygienic". The identifiers you define and the ones of your user live in different universes and won't ever collide. As you can see in the docs, this method is still unstable and thus only usable on a nightly compiler. Bummer!
There are no really great workarounds. The obvious one is to use a really ugly name like __your_crate_some_variable. To make it a bit easier for you, you can create that identifier once and use it within quote! (slightly better solution here):
let ugly_name = quote! { __your_crate_some_variable };

quote! {
    let #ugly_name = 3;
    println!("{}", #ugly_name);
}
Sometimes you can even search through all the identifiers of the user that could collide with yours and then simply choose, algorithmically, an identifier that does not collide. This is actually what we did for auto_impl, with a super ugly name as fallback. This was mainly to keep super ugly names out of the generated documentation.
Apart from that, I'm afraid you cannot really do anything.
You can, thanks to a UUID:
use proc_macro2::{Ident, Span};

fn generate_unique_ident(prefix: &str) -> Ident {
    let uuid = uuid::Uuid::new_v4();
    let ident = format!("{}_{}", prefix, uuid).replace('-', "_");
    Ident::new(&ident, Span::call_site())
}
