How to share a transaction between repositories? - rust

I use sqlx to communicate with my Postgres database. I'm trying to abstract the database communication behind a Repository pattern. Also, with this abstraction I would like to share the database transaction between repositories using the Unit of Work pattern. The only problem is that I don't know how to share the sqlx transaction between those repositories without explicitly passing the transaction as a save argument (e.g. repository_x.save(entity, transaction)). I would like to create and share transactions in the uow (unit of work) abstraction.
I want to achieve something like this:
struct CommandHandler {
    unit_of_work: UnitOfWork,
}

impl CommandHandler {
    pub fn handle(&self, command: &Command) {
        let repository_a = self.unit_of_work.repository_a();
        let repository_b = self.unit_of_work.repository_b();
        let entity_a = repository_a.get_by_id(command.id);
        let entity_b = repository_b.get_by_id(entity_a.id);
        self.unit_of_work.start_transaction();
        repository_a.save(entity_a);
        repository_b.save(entity_b);
        self.unit_of_work.commit_transaction();
    }
}
Does anyone know what the implementation of the unit_of_work struct would look like to execute the above example?
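One possible shape, sketched below under explicit assumptions (sqlx 0.7 on a Tokio runtime; EntityA, RepositoryA and the table name are made up for illustration), is to let the unit of work own the pool and keep the open transaction in a shared handle that it hands to every repository it creates:

use std::sync::Arc;

use sqlx::{Pool, Postgres, Transaction};
use tokio::sync::Mutex;

// Shared handle to the (optional) open transaction.
type SharedTx = Arc<Mutex<Option<Transaction<'static, Postgres>>>>;

struct UnitOfWork {
    pool: Pool<Postgres>,
    tx: SharedTx,
}

impl UnitOfWork {
    fn new(pool: Pool<Postgres>) -> Self {
        Self { pool, tx: Arc::new(Mutex::new(None)) }
    }

    // Each repository receives a clone of the shared transaction handle.
    fn repository_a(&self) -> RepositoryA {
        RepositoryA { tx: Arc::clone(&self.tx) }
    }

    async fn start_transaction(&self) -> Result<(), sqlx::Error> {
        *self.tx.lock().await = Some(self.pool.begin().await?);
        Ok(())
    }

    async fn commit_transaction(&self) -> Result<(), sqlx::Error> {
        if let Some(tx) = self.tx.lock().await.take() {
            tx.commit().await?;
        }
        Ok(())
    }
}

struct EntityA { id: i64 }

struct RepositoryA {
    tx: SharedTx,
}

impl RepositoryA {
    async fn save(&self, entity: &EntityA) -> Result<(), sqlx::Error> {
        let mut guard = self.tx.lock().await;
        let tx = guard.as_mut().expect("start_transaction was not called");
        sqlx::query("INSERT INTO entity_a (id) VALUES ($1)")
            .bind(entity.id)
            .execute(&mut **tx)
            .await?;
        Ok(())
    }
}

With this shape, handle would mirror the pseudocode above with each call awaited, and dropping the unit of work without committing rolls the transaction back.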

How to fix Bevy ECS queries conflicting even with filters

I am trying to execute the below two queries in a bevy system function.
fn move_player(
    mut player_query: Query<(&mut Velocity, &mut Transform, &SpriteSize, &Player), With<PlayerId>>,
    wall_query: Query<(&Transform, &SpriteSize), With<Barrier>>,
) {
    for (mut player_velocity, mut player_tf, player_size, player) in player_query.iter_mut() {
        for (wall_tf, wall_size) in wall_query.iter() {
        }
    }
}
I inserted the PlayerId component on the Player entity and the Barrier component on the Wall entities when spawning them. PlayerId is not inserted on the Wall entities, and Barrier is not inserted on the Player entity.
When I run the above function I get the error below:
thread 'main' panicked at 'error[B0001]: Query<(&mut
bevy_transform::components::transform::Transform,
&bevy_fantasy::SpriteSize),
bevy_ecs::query::filter::With<bevy_fantasy::Barrier>> in system
bevy_fantasy::player::move_player accesses component(s)
bevy_transform::components::transform::Transform in a way that
conflicts with a previous system parameter. Consider using
Without<T> to create disjoint Queries or merging conflicting Queries
into a ParamSet.
Why are the 2 queries conflicting when I filter them using unique components?
You're using
mut player_query: Query<(&mut Velocity, &mut Transform, &SpriteSize, &Player), With<PlayerId>>,
wall_query: Query<(&Transform, &SpriteSize), With<Barrier>>,
Any entity that has all of the Transform, Velocity, SpriteSize, Player, PlayerId, and Barrier components would be matched by both queries.
There is no way for Rust or Bevy to tell that there aren't any such entities.
Getting mutable references to Transform would therefore be undefined behaviour.
To fix it, just follow one of the suggestions.
I have had this problem myself. As @cafce25 says, you need to follow the suggestion in the error message. If you add a Without filter to one of the queries, it excludes the entities matched by the other query.
Here I've modified your snippet to use Without<Barrier> in the player_query.
fn move_player(
    mut player_query: Query<(&mut Velocity, &mut Transform, &SpriteSize, &Player), (With<PlayerId>, Without<Barrier>)>,
    wall_query: Query<(&Transform, &SpriteSize), With<Barrier>>,
) {
    for (mut player_velocity, mut player_tf, player_size, player) in player_query.iter_mut() {
        for (wall_tf, wall_size) in wall_query.iter() {
        }
    }
}
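The other suggestion from the error message, merging the conflicting queries into a ParamSet, would also work. Here is a rough sketch under the assumption that SpriteSize implements Clone (not shown in the question); only one member of the set can be borrowed at a time, so the wall data is copied out first:

fn move_player(
    mut set: ParamSet<(
        Query<(&mut Velocity, &mut Transform, &SpriteSize, &Player), With<PlayerId>>,
        Query<(&Transform, &SpriteSize), With<Barrier>>,
    )>,
) {
    // Collect the wall data before touching the player query, because the
    // two ParamSet members cannot be borrowed at the same time.
    let walls: Vec<(Transform, SpriteSize)> = set
        .p1()
        .iter()
        .map(|(tf, size)| (*tf, size.clone()))
        .collect();

    for (_velocity, _player_tf, _player_size, _player) in set.p0().iter_mut() {
        for (_wall_tf, _wall_size) in &walls {
            // collision handling goes here
        }
    }
}

Without<Barrier> is the simpler fix here; ParamSet is mostly useful when the queries genuinely have to overlap.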

How to avoid clones when using postgres_types::Json?

I'm currently writing a Rust app which uses tokio-postgres, and I need to make a SQL request to fetch some data based on a jsonb column. The problem is that tokio-postgres uses a particular type (postgres_types::Json) which can be used like this: &Json::<Struct>(struct_var).
The struct var can't be a reference, so Json takes ownership, which raises a problem because I need to use one of the struct's fields afterwards.
I could solve the problem with a clone, but I wanted to know first whether there is another solution that would not hurt performance.
Here is the function:
pub async fn user_exists_ipv4(
    pool: &Pool,
    ip: IpAddr,
    device: &Device,
) -> Result<Option<Uuid>, String> {
    // Get a connection from the pool
    let conn = get_connection(pool).await?;
    let country = &device.country[..];
    // Get the user id from the database
    let result = conn
        .query(
            FETCH_USER_QUERY_FOR_V4,
            &[
                &ip.to_string(),
                &Json::<Device>(device.clone()),
                &country.to_string(),
            ],
        )
        .await?
    ...
You can use references with Json; it is simply a wrapper that implements ToSql for any type that is Serialize-able, and that includes &T where T: Serialize. So you can use it with device directly as it is:
&Json::<&Device>(device)
You also don't need to annotate the type of Json explicitly, since it can be inferred from what you pass to it. The code above can be written more succinctly as:
&Json(device)
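Applied to the function from the question (same names and query as above), the call then borrows device instead of cloning it:

let result = conn
    .query(
        FETCH_USER_QUERY_FOR_V4,
        &[
            &ip.to_string(),
            &Json(device), // Json<&Device>: borrows, no clone needed
            &country.to_string(),
        ],
    )
    .await?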

NEP-141 implementation

While trying to implement the NEP-141 fungible token standard, I am using this trait implementation:
impl FungibleTokenCore for FungibleToken {
    fn ft_transfer(&mut self, receiver_id: ValidAccountId, amount: U128, memo: Option<String>) {
        assert_one_yocto();
        let sender_id = env::predecessor_account_id();
        let amount: Balance = amount.into();
        self.internal_transfer(&sender_id, receiver_id.as_ref(), amount, memo);
    }
}
But the problem is that the function ft_transfer is not accessible from the contract. It gives the error:
"Contract method is not found".
export TOKEN=dev-1618119753426-1904392
near call $TOKEN ft_transfer '{"receiver_id":"avrit.testnet", "amount": 10, "memo":""}' --accountId=amiyatulu.testnet
Your method must be public. See the near-sdk-rs docs README for a few examples.
https://github.com/near/near-sdk-rs
You probably need a pub before that fn. See Best Practices.
Also see FT example. You can use the near-contract-standards library to simplify your efforts.
As you mentioned in a comment, the problem is that you need to:
Use #[near_bindgen] for the impl definition:
#[near_bindgen]
impl FungibleTokenCore for FungibleToken { ... }
Use a public method:
pub fn ft_transfer(&mut self, ...)
There is already an implementation of this token in near-contract-standards
I am unable to add custom logic to the token using the library, so I'm not using it.
Regarding how to extend the behaviour of the token, I suggest you take a look at how the Rainbow Bridge does it for ERC20 tokens bridged to NEAR: BridgeToken. We also needed to extend its functionality, and to that end we used the Token as an internal field and then changed the exposed public functions a bit.
There is also a useful macro to derive a base implementation of the common functions.
Like burning some tokens during transfer.
For that, you can follow the previous approach without using the macro to expose all the functions, and instead implement ft_transfer properly for your use case, while still making calls to the inner field: token: FungibleToken.
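As a rough sketch of that wrapping approach (this is not from the original answer; it assumes the FungibleToken type and its public internal_withdraw/internal_transfer helpers from near-contract-standards, and exact attribute and type names vary between near-sdk versions), the contract keeps the standard token as a field and re-exposes ft_transfer with the extra logic:

use near_contract_standards::fungible_token::FungibleToken;
use near_sdk::borsh::{self, BorshDeserialize, BorshSerialize};
use near_sdk::json_types::U128;
use near_sdk::{assert_one_yocto, env, near_bindgen, AccountId, PanicOnDefault};

#[near_bindgen]
#[derive(BorshDeserialize, BorshSerialize, PanicOnDefault)]
pub struct Contract {
    token: FungibleToken, // the standard implementation lives in an inner field
}

// (an #[init] constructor that sets up the inner token is omitted here)
#[near_bindgen]
impl Contract {
    // Public, exported method that adds custom behaviour before delegating
    // to the inner standard token (here: burn 1% of every transfer).
    #[payable]
    pub fn ft_transfer(&mut self, receiver_id: AccountId, amount: U128, memo: Option<String>) {
        assert_one_yocto();
        let sender_id = env::predecessor_account_id();
        let burn = amount.0 / 100;
        self.token.internal_withdraw(&sender_id, burn);
        self.token.internal_transfer(&sender_id, &receiver_id, amount.0 - burn, memo);
    }
}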

How can I make this Rust code more idiomatic

Recently I started to learn Rust, and one of my main struggles is converting years of object-oriented thinking into procedural code.
I'm trying to parse an XML file that has tags which are processed by a specific handler that can deal with the data it gets from the children.
Furthermore, I have some field members that are common between the handlers, and I would prefer not to have to write the same fields in all of them.
I tried my hand at it and my code came out like this:
use roxmltree::Node; // roxmltree = "0.14.0"

fn get_data_from(node: &Node) -> String {
    let tag_name = get_node_name(node);
    let tag_handler: Box<dyn XMLTagHandler> = match tag_name {
        "name" => Box::new(NameHandler::new()),
        "phone" => Box::new(PhoneHandler::new()),
        _ => Box::new(DefaultHandler::new()),
    };
    if tag_handler.is_recursive() {
        for child in node.children() {
            let child_value = get_data_from(&child);
            // do something with child value
        }
    }
    let value: String = tag_handler.value();
    value
}
// consider that the handlers are in my project and can be adapted to my needs, and that XMLTagHandler is the trait they share in common.
My main issues with this are:
This feels like an object-oriented approach;
is_recursive needs to be reimplemented for each struct because traits cannot have field members, and I will have to add more fields later, which means more boilerplate for each new field;
I could use one type for a Handler and pass it a function pointer, but this approach seems dirty, e.g. Handler::new(my_other_params, phone_handler_func)
This feels like an object-oriented approach
Actually, I don't think so. This code is in clear violation of the Tell-Don't-Ask principle, which falls out from the central idea of object-oriented programming: the encapsulation of data and related behavior into objects. The objects (NameHandler, PhoneHandler, etc.) don't have enough knowledge about what they are to do things on their own, so get_data_from has to query them for information and decide what to do, rather than simply sending a message and letting the object figure out how to deal with it.
So let's start by moving the knowledge about what to do with each kind of tag into the handler itself:
trait XmlTagHandler {
    fn foreach_child<F: FnMut(&Node)>(&self, node: &Node, callback: F);
}

impl XmlTagHandler for NameHandler {
    fn foreach_child<F: FnMut(&Node)>(&self, _node: &Node, _callback: F) {
        // "name" is not a recursive tag, so do nothing
    }
}

impl XmlTagHandler for DefaultHandler {
    fn foreach_child<F: FnMut(&Node)>(&self, node: &Node, mut callback: F) {
        // all other tags may be recursive
        for child in node.children() {
            callback(&child);
        }
    }
}
This way you call foreach_child on every kind of Handler, and let the handler itself decide whether the right action is to recurse or not. After all, that's why they have different types -- right?
To get rid of the dyn part, which is unnecessary, let's write a little generic helper function that uses XmlTagHandler to handle one specific kind of tag, and modify get_data_from so it just dispatches to the correct parameterized version of it. (I'll suppose that XmlTagHandler also has a new function so that you can create one generically.)
fn handle_tag<H: XmlTagHandler>(node: &Node) -> String {
    let handler = H::new();
    handler.foreach_child(node, |child| {
        // do something with child value
    });
    handler.value()
}

fn get_data_from(node: &Node) -> String {
    let tag_name = get_node_name(node);
    match tag_name {
        "name" => handle_tag::<NameHandler>(node),
        "phone" => handle_tag::<PhoneHandler>(node),
        _ => handle_tag::<DefaultHandler>(node),
    }
}
If you don't like handle_tag::<SomeHandler>(node), also consider making handle_tag a provided method of XmlTagHandler, so you can instead write SomeHandler::handle(node).
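A rough sketch of that variant, assuming the trait also declares the new and value functions used above:

trait XmlTagHandler: Sized {
    fn new() -> Self;
    fn value(&self) -> String;
    fn foreach_child<F: FnMut(&Node)>(&self, node: &Node, callback: F);

    // Provided method: lets you write SomeHandler::handle(node)
    // instead of handle_tag::<SomeHandler>(node).
    fn handle(node: &Node) -> String {
        let handler = Self::new();
        handler.foreach_child(node, |_child| {
            // do something with child value
        });
        handler.value()
    }
}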
Note that I have not really changed any of the data structures. Your presumption of an XmlTagHandler trait and various Handler implementors is a pretty normal way to organize code. However, in this case, it doesn't offer any real improvement over just writing three separate functions:
fn get_data_from(node: &Node) -> String {
    let tag_name = get_node_name(node);
    match tag_name {
        "name" => get_name_from(node),
        "phone" => get_phone_from(node),
        _ => get_other_from(node),
    }
}
In some languages, such as Java, all code has to be part of some class – so you can find yourself writing classes that don't exist for any other reason than to group related things together. In Rust you don't need to do this, so make sure that any added complication such as XmlTagHandler is actually pulling its weight.
is_recursive needs to be reimplemented for each struct because traits cannot have field members, and I will have to add more fields later, which means more boilerplate for each new field
Without more information about the fields, it's impossible to really understand what problem you're facing here; however, in general, if there is a family of structs that have some data in common, you may want to make a generic struct instead of a trait. See the answers to How to reuse codes for Binary Search Tree, Red-Black Tree, and AVL Tree? for more suggestions.
I could use one type for a Handler and pass it a function pointer, but this approach seems dirty
Elegance is sometimes a useful thing, but it is subjective. I would recommend closures rather than function pointers, but this suggestion doesn't seem "dirty" to me. Making closures and putting them in data structures is a very normal way to write Rust code. If you can elaborate on what you don't like about it, perhaps someone could point out ways to improve it.
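For instance, a single generic Handler struct can hold the shared fields once and take the per-tag behaviour as a closure; this is only a sketch with made-up field names, reusing roxmltree's Node from the question:

struct Handler<F: Fn(&Node) -> String> {
    common_field: String, // fields shared by every handler live here exactly once
    recursive: bool,
    extract: F,           // per-tag behaviour supplied as a closure
}

impl<F: Fn(&Node) -> String> Handler<F> {
    fn value(&self, node: &Node) -> String {
        if self.recursive {
            // concatenate the values extracted from the children
            node.children().map(|child| (self.extract)(&child)).collect()
        } else {
            (self.extract)(node)
        }
    }
}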

How can I transfer some values into a Rust generator at each step?

I use generators as long-lived asynchronous threads (see How to implement a lightweight long-lived thread based on a generator or asynchronous function in Rust?) in a user-interaction scenario. I need to pass user input into the generator at each step. I think I can do it with a RefCell, but it is not clear how to pass the reference to the RefCell into the generator when creating its instance.
fn user_scenario() -> impl Generator<Yield = String, Return = String> {
    || {
        yield format!("what is your name?");
        yield format!("{}, how are you feeling?", "anon");
        return format!("{}, bye !", "anon");
    }
}
The UserData structure contains the user input; the second structure contains a user session consisting of the UserData and the generator instance. Sessions are collected in a HashMap.
struct UserData {
    sid: String,
    msg_in: String,
    msg_out: String,
}

struct UserSession {
    udata_cell: RefCell<UserData>,
    scenario: Pin<Box<dyn Generator<Yield = String, Return = String>>>,
}

type UserSessions = HashMap<String, UserSession>;

let mut sessions: UserSessions = HashMap::new();
UserData is created at the moment user input is received. At this point I need to pass a reference to the UserData into the generator, wrapping it in a RefCell, but I don't know how to do that, since the generator has a 'static lifetime and the RefCell lives for less time!
let mut udata: UserData = read_udata(&mut stream);
let mut session: UserSession;

if udata.sid == "" { // new session
    let sid = rnd.gen::<u64>().to_string();
    udata.sid = sid.clone();
    sessions.insert(
        sid.clone(),
        UserSession {
            udata_cell: RefCell::new(udata),
            scenario: Box::pin(user_scenario()),
        },
    );
    session = sessions.get_mut(&sid).unwrap();
}
The full code is here, but the generator here does not see user input.
Disclaimer: resumption arguments are a planned extension for generators, so at some point in the future it will be possible to resume the generator with a &UserData argument.
For now, I will recommend sharing ownership. The cost is fairly minor (one memory allocation, one indirection) and will save you a lot of trouble:
struct UserSession {
    user_data: Rc<RefCell<UserData>>,
    scenario: ..,
}
Which is built with:
let user_data = Rc::new(RefCell::new(udata));

UserSession {
    user_data: user_data.clone(),
    scenario: Box::pin(user_scenario(user_data)),
}
Then, both the session and the generator have access to the UserData each on their turn, and everything is fine.
There is one little wrinkle: be careful of scopes. If you keep a .borrow() alive across a yield point, which is possible, then you will have a run-time error when trying to write to it outside the generator.
A more involved solution would be using a queue of messages; which would also involve memory allocation, etc... I would consider your UserData structure to be a degenerate form of a pair of queues: it's two queues with capacity for one message. You could make it more explicit with a regular queue, but that would not buy you much.
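For completeness, here is a rough sketch of what user_scenario might look like once it takes the shared cell, under the assumptions of the question (nightly generators) and that msg_in holds the latest user input whenever the generator is resumed:

fn user_scenario(
    udata: Rc<RefCell<UserData>>,
) -> impl Generator<Yield = String, Return = String> {
    move || {
        yield format!("what is your name?");
        // The borrow ends within this statement, before the next yield point.
        let name = udata.borrow().msg_in.clone();
        yield format!("{}, how are you feeling?", name);
        format!("{}, bye !", udata.borrow().msg_in)
    }
}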
