I wrote code to commit and roll back using a Sequelize transaction, but I do not know whether I need to declare it every time by calling sequelize.transaction() at the top of every piece of code where a transaction should be applied.
I want to find a more structured and reusable method. For example, should I use middleware or some design pattern? Please let me know.
--- update ---
Sorry, my previous question was wrong.
What I actually wanted to ask is what to do in Sequelize when resource access is restricted by the transaction isolation level.
At a high isolation level, if a resource is accessed concurrently, an error is thrown. Is there a way to retry the operation, or some other correct way to handle it?
In some cases the lock is even at the table level, and I end up accessing the database directly to release the lock.
Please take a look at the zb-sequelize npm package.
It greatly simplifies transaction management by adding two decorators.
import { Transactional, Tx } from 'zb-sequelize';
@Transactional
function fooBar(@Tx transaction) {
    foo(transaction);
    bar(transaction);
}

@Transactional
function foo(@Tx transaction) {
}

@Transactional
function bar(@Tx transaction) {
}
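The decorators above take over the begin/commit/rollback boilerplate. For the retry part of the question (a high isolation level making concurrent access fail), one common approach, sketched here and not part of zb-sequelize, is to wrap Sequelize's managed transaction in a retry loop; the helper name and the error-code checks are assumptions you would adapt to your dialect.

import { Sequelize, Transaction } from 'sequelize';

// Hypothetical helper: re-runs a managed transaction when the database reports
// a serialization failure or a deadlock (the error codes below are examples).
async function transactionWithRetries(sequelize, work, maxAttempts = 3) {
    for (let attempt = 1; ; attempt++) {
        try {
            // Managed transaction: commits when `work` resolves, rolls back when it throws.
            return await sequelize.transaction(
                { isolationLevel: Transaction.ISOLATION_LEVELS.SERIALIZABLE },
                work
            );
        } catch (err) {
            const retryable =
                err?.parent?.code === '40001' ||           // Postgres serialization_failure
                err?.parent?.code === 'ER_LOCK_DEADLOCK';  // MySQL deadlock
            if (!retryable || attempt >= maxAttempts) throw err;
        }
    }
}

// Usage sketch (SomeModel is a placeholder for one of your models):
// await transactionWithRetries(sequelize, async (t) => {
//     await SomeModel.update({ status: 'done' }, { where: { id }, transaction: t });
// });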
I am getting n POST requests (one on each webhook trigger) from a webhook. The data is identical in all requests that come from the same trigger - they all have the same 'orderId'. I'm interested in saving only one of these requests, so on each endpoint hit I check whether this specific orderId already exists as a row in my database, and create it otherwise.
if (await orderIdExists === null) {
    // No row for this orderId yet, so create the order
    await Order.create({
        userId,
        status: PENDING,
        price,
        ...
    });
    // Attempt to wait out the other concurrent requests (see the p.s. below)
    await sleep(3000);
}
return res.status(HttpStatus.OK).send({ success: true });
} catch (error) {
    return res.status(HttpStatus.INTERNAL_SERVER_ERROR).send({ success: false });
}
} else {
    return res.status(HttpStatus.UNAUTHORIZED).send(responseBuilder(false, responseErrorCodes.INVALID_API_KEY, {}, req.t));
}
}

function sleep(ms) {
    return new Promise((resolve) => {
        setTimeout(resolve, ms);
    });
}
The problem is that before Sequelize manages to save the newly created order in the db (all of the n POST requests hit the endpoint within 1 second or less), I already get another endpoint hit from the other n POST requests while orderIdExists is still null, so it ends up creating more identical orders. One (not so good) solution is to make orderId unique in the db, which prevents the creation of an order with the same orderId, but the insert is still attempted anyway, which results in gaps in the auto-incremented id in the db. Any idea would be greatly appreciated.
P.S. As you can see, I tried adding a 'sleep' function, to no avail.
Your database is failing to complete its save operation before the next request arrives. The problem is similar to the Dogpile Effect or a "cache slam".
This requires some more thinking about how you are framing the problem: in other words the "solution" will be more philosophical and perhaps have less to do with code, so your results on StackOverflow may vary.
The "sleep" solution is no solution at all: there's no guarantee how long the database operation might take or how long you might wait before another duplicate request arrives. As a rule of thumb, any time "sleep" is deployed as a "solution" to problems of concurrency, it usually is the wrong choice.
Let me posit two possible ways of dealing with this:
Option 1: write-only: i.e. don't try to "solve" this by reading from the database before you write to it. Just keep the pipeline leading into the database as dumb as possible and keep writing. E.g. consider a "logging" table that just stores whatever the webhook throws at it -- don't try to read from it, just keep inserting (or upserting). If you get 100 ping-backs about a specific order, so be it: your table would log it all and if you end up with 100 rows for a single orderId, let some other downstream process worry about what to do with all that duplicated data. Presumably, Sequelize is smart enough (and your database supports whatever process locking) to queue up the operations and deal with write repetitions.
An upsert operation here would be helpful if you do want to have a unique constraint on the orderId (this seems sensible, but you may be aware of other considerations in your particular setup).
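As a rough sketch of that write-only idea with Sequelize (assuming the Order model from the question carries a unique constraint on orderId):

// With a unique constraint on orderId, upsert() either inserts the row or updates
// the existing one, so concurrent webhook deliveries cannot produce duplicate orders.
await Order.upsert({
    orderId,
    userId,
    status: PENDING,
    price,
});

findOrCreate() is another option; as far as I know it also relies on the unique constraint to resolve the race between the find and the create, so the constraint is doing the real work either way.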
Option 2: use a queue. This is decidedly more complex, so weigh carefully whether or not your use-case justifies the extra work. Instead of writing data immediately to the database, throw the webhook data into a queue (e.g. a first-in-first-out FIFO queue). Ideally, you would want to choose a queue that supports de-duplication so that the messages coming out of it are guaranteed to be unique, but that implies state, and that usually relies on a database of some sort, which is sort of the problem to begin with.
The most important thing a queue would do for you is it would serialize the messages so you can deal with them one at a time (instead of multiple database operations kicking off concurrently). You can upsert data into the database when you read a message out of the queue. If the webhook keeps firing and more messages enter the queue, that's fine because the queue forces them all to line up single-file and you can handle each insertion one at a time. You'll know that each database operation has completed before it moves on to the next message so you never "slam" the DB. In other words, putting a queue in front of the database will allow it to handle data when the database is ready instead of whenever a webhook comes calling.
The idea of a queue here is similar to what a semaphore accomplishes. Note that your database interface may already implement a kind of queue/pool under the hood, so weigh this option carefully: don't reinvent the wheel.
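As a very small illustration of that serializing effect (an in-process sketch only, assuming a single Node.js process; enqueue and the handler body are made-up names, and this is no substitute for a durable queue such as SQS or RabbitMQ):

// Hypothetical in-process serializer: chain every incoming webhook onto one
// promise so the database work runs strictly one request at a time.
let tail = Promise.resolve();

function enqueue(task) {
    // Run the task after the previous one finishes, whether it succeeded or failed.
    tail = tail.then(task, task);
    return tail;
}

// In the webhook handler:
// await enqueue(async () => {
//     const existing = await Order.findOne({ where: { orderId } });
//     if (!existing) {
//         await Order.create({ orderId, userId, status: PENDING, price });
//     }
// });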
Hope those ideas are useful.
You saved my time @Everett and @april-henig. I found that saving directly into the database led to duplicate records. Storing the requests in an object and dealing with one record at a time helped me a lot.
Maybe I should share my solution; perhaps someone will find it useful in the future.
Create an empty object to store the successful requests
export const queueAllSuccessCallBack = {};
Save the POST request in the object
if (status === 'success') { // only handle requests whose status is 'success'
    const findKeyTransaction = queueAllSuccessCallBack[client_reference_id];
    if (!findKeyTransaction) { // only add the id if it is not already present, to avoid duplicates
        queueAllSuccessCallBack[client_reference_id] = {
            transFound,
            body,
        }; // save the new request id as the key and whatever data you need as the value
    }
}
Access the object and save into the database
const keys = Object.keys(queueAllSuccessCallBack);
keys.forEach(async (key) => {
...
// Do extra checks if you want to do so,
// or save into the database directly
});
Is my application service, which obtains a lock using the JDBC LockRepository, supposed to run inside an @Transactional method?
We have a sample application service that updates a JDBCRepository, and since this application can run on multiple JVMs (headless), we need a global lock to serialize those updates.
I looked at your test and was hoping my use case would work too. ... JdbcLockRegistryDifferentClientTests
My config has a DefaultLockRepository and a JdbcLockRegistry.
I launched (java -jar boot.jar) my application in two terminals to simulate this. When I obtain a lock and issue a tryLock() without @Transactional on my application service, both of them get the lock (albeit one after the other, almost immediately). I expected one of them to NOT get it for at least 10 seconds (the default expiry).
Service (Instance -1) {
Obtain("KEY-1")
tryLock()
DoWork()
unlock();
close();
}
Service (Instance -2) {
Obtain("KEY-1")
tryLock() <-- Wait until the lock expires or the unlock happens
DoWork()
unlock();
close();
}
I also noticed here, in DefaultLockRepository, that the transaction scope (if not inherited) is only around the JDBC operation.
When I change my service to
@Transactional
Service (Instance -1) {
Obtain("KEY-1")
tryLock()
DoWork()
unlock();
close();
}
It works as expected.
I am quite sure I missed something, but I expect my lock operation to honor global locks (the fact that a lock exists in a JDBC store with an expiration) until an unlock or the expiration.
Is my understanding incorrect?
This works as designed. I didn't configure the DefaultLockRepository correctly, and the default TTL was shorter than my service's (artificial wait) lock duration. My apologies. :) Josh Long helped me figure this out :)
You have to use different client ids. The same id means the same client; that is for a special use case. Use different client ids, since they are different instances.
The behavior here is subtle (or obvious once you see how this is working) and the general lack of documentation unhelpful, so here's my experience.
I created a lock table by looking at the SQL in DefaultLockRepository, which appeared to imply a composite primary key of REGION, LOCK_KEY and CLIENT_ID - THIS WAS WRONG.
I subsequently found the SQL script in the spring-integration-jdbc JAR, where I could see that the composite primary key MUST BE on just REGION and LOCK_KEY, as @ArtemBilan says.
The reason is that the lock doesn't care about the client, obviously, so the primary key must be just the REGION and LOCK_KEY columns. These columns are used when acquiring a lock, and it is the key violation that occurs when another client attempts to obtain the same lock that keeps the other client IDs out.
This also implies that, again as @ArtemBilan says, each client instance must have a unique ID, which is the default behavior when no ID is specified at construction time.
According to the documentation of the beginWriteTransaction method:
Only one write transaction can be open at a time. Write transactions cannot be nested, and trying to begin a write transaction on a RLMRealm which is already in a write transaction will throw an exception. Calls to beginWriteTransaction from RLMRealm instances in other threads will block until the current write transaction completes.
but when I looked at the code I found the following:
void Realm::begin_transaction()
{
check_read_write(this);
verify_thread();
if (is_in_transaction()) {
throw InvalidTransactionException("The Realm is already in a write transaction");
}
// make sure we have a read transaction
read_group();
transaction::begin(*m_shared_group, m_binding_context.get());
}
Could you explain when this condition is met?
Calls to beginWriteTransaction from RLMRealm instances in other threads will block until the current write transaction completes.
The last call leads to a method that leaves this intermediate cross-platform C++ API level and goes one level deeper into our internal storage engine, where a mutex is used to coordinate exclusive access between processes and threads.
Once this mutex is acquired, it is held until the write transaction is committed or cancelled.
A lot of articles on CQRS imply that sagas have an internal state and must be saved to the event store. I don't see why this is necessary.
For example, say I have three aggregates: Order, Invoice and Shipment. When a customer places an order, the order process starts. However, the shipment cannot be sent until the invoice has been paid and the shipment has first been prepared.
1. A customer places an order with the PlaceOrder command.
2. The OrderCommandHandler calls OrderRepository::placeOrder().
3. The OrderRepository::placeOrder() method returns an OrderPlaced event, which is stored in the EventStore and sent along the EventBus.
4. The OrderPlaced event contains the orderId and pre-allocates an invoiceId and shipmentId.
5. The OrderProcess ("saga") receives the OrderPlaced event, creating the invoice and preparing the shipment if necessary (achieving idempotence in the event handler).
6a. At some point in time, the OrderProcess receives the InvoicePaid event. It checks whether the shipment has been prepared by looking up the shipment in the ShipmentRepository, and if so, sends the shipment.
6b. At some point in time, the OrderProcess receives the ShipmentPrepared event. It checks whether the invoice has been paid by looking up the invoice in the InvoiceRepository, and if so, sends the shipment.
To all the experienced DDD/CQRS/ES gurus out there, can you please tell me what concept I'm missing and why this design of a "stateless saga" will not work?
class OrderCommandHandler {
public function handle(PlaceOrder $command) {
$event = $this->orderRepository->placeOrder($command->orderId, $command->customerId, ...);
$this->eventStore->store($event);
$this->eventBus->emit($event);
}
}
class OrderRepository {
public function placeOrder($orderId, $customerId, ...) {
$invoiceId = randomString();
$shipmentId = randomString();
return new OrderPlaced($orderId, $customerId, $invoiceId, $shipmentId);
}
}
class InvoiceRepository {
public function createInvoice($invoiceId, $customerId, ...) {
// Etc.
return new InvoiceCreated($invoiceId, $customerId, ...);
}
}
class ShipmentRepository {
public function prepareShipment($shipmentId, $customerId, ...) {
// Etc.
return new ShipmentPrepared($shipmentId, $customerId, ...);
}
}
class OrderProcess {
public function onOrderPlaced(OrderPlaced $event) {
if (!$this->invoiceRepository->hasInvoice($event->invoiceId)) {
$invoiceEvent = $this->invoiceRepository->createInvoice($event->invoiceId, $event->customerId, ...);
$this->eventStore->store($invoiceEvent);
$this->eventBus->emit($invoiceEvent);
}
if (!$this->shipmentRepository->hasShipment($event->shipmentId)) {
$shipmentEvent = $this->shipmentRepository->prepareShipment($event->shipmentId, $event->customerId, ...);
$this->eventStore->store($shipmentEvent);
$this->eventBus->emit($shipmentEvent);
}
}
public function onInvoicePaid(InvoicePaid $event) {
$order = $this->orderRepository->getOrders($event->orderId);
$shipment = $this->shipmentRepository->getShipment($order->shipmentId);
if ($shipment && $shipment->isPrepared()) {
$this->sendShipment($shipment);
}
}
public function onShipmentPrepared(ShipmentPrepared $event) {
$order = $this->orderRepository->getOrders($event->orderId);
$invoice = $this->invoiceRepository->getInvoice($order->invoiceId);
if ($invoice && $invoice->isPaid()) {
$this->sendShipment($this->shipmentRepository->getShipment($order->shipmentId));
}
}
private function sendShipment(Shipment $shipment) {
$shipmentEvent = $shipment->send();
$this->eventStore->store($shipmentEvent);
$this->eventBus->emit($shipmentEvent);
}
}
Commands can fail.
That's the primary problem; the entire reason we have aggregates in the first place is so that they can protect the business from invalid state changes. So what happens in onOrderPlaced() if the createInvoice command fails?
Furthermore (though somewhat related) you are lost in time. Process managers handle events; events are things that have already happened in the past. Ergo -- process managers are running in the past. In a very real sense, they can't even talk to anyone that has seen a more recent event than the one that they are processing right now (in fact, they might be the first handler to see this event, meaning everybody else is a step in the past).
This is why you can't run commands synchronously; your event handler is in the past, and the aggregate can't protect its invariant unless it is running in the present. You need the asynchronous dispatch to get the command running against the correct version of the aggregate.
Next problem: when you dispatch the command asynchronously, you can't directly observe the result. It might fail, or get lost en route, and the event handler won't know. The only way that it can determine that the command succeeded is by observing a generated event.
A consequence is that the process manager cannot distinguish a command that failed from a command that succeeded (but whose event hasn't become visible yet). To support a finite SLA, you need a timing service that wakes up the process manager from time to time to check on things.
When the process manager wakes up, it needs state to know whether it has already finished the work.
With state, everything is much simpler to manage. The process manager can re-issue possibly lost commands to be sure that they get through, without also flooding the domain with commands that have already succeeded. You can model the clock without throwing clock events into the domain itself.
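To make that concrete, here is a deliberately small sketch (JavaScript purely for illustration; stateStore, commandBus and every other name in it are made up) of a process manager that persists its own state and uses it when a timing service wakes it up, so it can re-issue a possibly lost command without flooding the domain:

// Hypothetical process manager with persisted state.
class OrderProcessManager {
    constructor(stateStore, commandBus) {
        this.stateStore = stateStore;
        this.commandBus = commandBus;
    }

    async onOrderPlaced(event) {
        // Persist what we asked for, so a later wake-up knows what is still outstanding.
        await this.stateStore.save(event.orderId, {
            invoiceId: event.invoiceId,
            invoiceCreated: false,
            shipmentPrepared: false,
        });
        // Fire-and-forget dispatch: success is only observed via a later InvoiceCreated event.
        await this.commandBus.send({ type: 'CreateInvoice', invoiceId: event.invoiceId });
    }

    async onInvoiceCreated(event) {
        const state = await this.stateStore.load(event.orderId);
        state.invoiceCreated = true;
        await this.stateStore.save(event.orderId, state);
    }

    // Called by a timing service, e.g. once a minute for every open process instance.
    async onTimeout(orderId) {
        const state = await this.stateStore.load(orderId);
        if (!state.invoiceCreated) {
            // The command (or its resulting event) may have been lost; re-issue it.
            // This is only safe if CreateInvoice is handled idempotently downstream.
            await this.commandBus.send({ type: 'CreateInvoice', invoiceId: state.invoiceId });
        }
    }
}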
What you are referring to seems to be along the lines of orchestration (with a process manager) vs choreography.
Choreography works absolutely fine, but you will not have a process manager as a first-class citizen. Each command handler will determine what to do. Even my current project (December 2015) uses choreography quite a bit with a webMethods integration broker. Messages may even carry some of the state along with them. However, when anything needs to take place in parallel you are rather shafted.
A relevant service orchestration vs choreography question demonstrates these concepts quite nicely. One of the answers contains a nice pictorial representation and, as stated in the answer, more complex interactions typically require state for the process.
I find that you typically will require state when interacting with services and endpoints beyond your control. Human interaction, such as authorizations, also requires this type of state.
If you can get away with not having state specifically for a process manager it may be OK. However, later on you may run into issues. For example, some low-level/core/infrastructure service may span across various processes. This may cause issues in a choreography scenario.
I'm reading about the Accounting Pattern and am quite curious about implementing it in CQRS.
I think AccountingTransaction is an aggregate root as it protects the invariant:
No money leaks; money should be transferred from one account to another.
public class AccountingTransaction {
private String sequence;
private AccountId from;
private AccountId to;
private MonetaryAmount quantity;
private DateTime whenCharged;
public AccountingTransaction(...) {
raise(new AccountingEntryBookedEvent(sequence, from, quantity.negate(), ...));
raise(new AccountingEntryBookedEvent(sequence, to, quantity, ...));
}
}
When the AccountingTransaction is added to its repository, it publishes several AccountingEntryBookedEvents, which are used to update the balance of the corresponding accounts on the query side.
One aggregate root updated per db transaction, eventual consistency, so far so good.
But what if some accounts apply transfer constraints, such as not being able to transfer more than the current balance? I can use the query side to get the account's balance, but I'm worried that data from the query side is stale.
public class TransferApplication {
public void transfer(...) {
AccountReadModel from = accountQuery.findBy(fromId);
AccountReadModel to = accountQuery.findBy(toId);
if (from.balance() > quantity) {
//create txn
}
}
}
Should I model the account on the command side? I would have to update at least three aggregate roots per db transaction (the from/to accounts and the account txn).
public class TransferApplication {
public void transfer(...) {
Account from = accountRepository.findBy(fromId);
Account to = accountRepository.findBy(toId);
Transaction txn = new Transaction(from, to, quantity);
//unit or work locks and updates all three aggregates
}
}
public class AccountingTransaction {
public AccountingTransaction(...) {
if (from.permit(quantity)) {
from.debit(quantity);
to.credit(quantity);
raise(new TransactionCreatedEvent(sequence, from, to, quantity, ...));
}
}
}
There are some use cases that will not allow for eventual consistency. CQRS is fine but the data may need to be 100% consistent. CQRS does not imply/require eventual consistency.
However, the transactional/domain model store will be consistent and the balance will be consistent in that store as it represents the current state. In this case the transaction should fail anyway, irrespective of an inconsistent query side. This will be a somewhat weird user experience though so a 100% consistent approach may be better.
I remember bits of this; however, M. Fowler uses a different meaning of event compared to a domain event. He uses the 'wrong' term, as we can recognize a command in his 'event' definition. So basically he is speaking about commands, while a domain event is something that happened and it can never change.
It is possible that I didn't fully understand what Fowler was referring to, but I would model things differently, more precisely as close to the Domain as possible. We can't simply extract a pattern that can always be applied to any financial app; the minor details may change a concept's meaning.
In the OP's example, I'd say that we can have a non-explicit 'transaction': we need an account debited with an amount and another credited with the same amount. The easiest way, methinks, is to implement it via a saga.
Debit_Account_A -> Account_A_Debited -> Credit_Account_B -> Account_B_Credited = transaction completed.
This should happen in a few ms, at most a few seconds, and that would be enough to update a read model. Humans and browsers are slower than a few seconds, and a user knows to hit F5 or to wait a few minutes/hours. I wouldn't worry much about the read model's accuracy.
If the transaction is explicit, i.e. the Domain has a Transaction notion and the business really stores transactions, that's a whole different story. But even in that case, the Transaction would probably be defined by a number of account ids, some amounts and maybe a completed flag. However, at this point it is pointless to continue, because it really depends on the Domain's definition and use cases.
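Going back to the non-explicit case, here is a very rough sketch of that Debit_Account_A -> Account_A_Debited -> Credit_Account_B -> Account_B_Credited choreography (JavaScript purely for illustration; eventBus, accountRepository and the event names are all made up):

// Hypothetical choreography: each handler reacts to the event raised by the previous step.
eventBus.on('TransferRequested', async ({ fromAccountId, toAccountId, amount }) => {
    const from = await accountRepository.load(fromAccountId);
    from.debit(amount);                                                       // Debit_Account_A
    await accountRepository.save(from);
    eventBus.emit('AccountDebited', { fromAccountId, toAccountId, amount });  // Account_A_Debited
});

eventBus.on('AccountDebited', async ({ toAccountId, amount }) => {
    const to = await accountRepository.load(toAccountId);
    to.credit(amount);                                                        // Credit_Account_B
    await accountRepository.save(to);
    eventBus.emit('AccountCredited', { toAccountId, amount });                // Account_B_Credited, transfer completed
});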
Finally, my solution was to have Transaction as a domain model.
I project transactions to AccountBalance, but I implemented a special projection which makes sure all the data is consistent before publishing the actual event.
Just two words: "Event Sourcing" with the Reservation Pattern.
And maybe, but not always, you may need the "Sagas" pattern also.
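For readers who haven't met the Reservation Pattern, one way to read it (a JavaScript sketch, all names made up, not a definitive implementation) is that the source account first reserves the amount against its own invariant, and the transfer is only confirmed, or the reservation released, afterwards, so no multi-aggregate transaction is needed:

// Hypothetical Account aggregate using reservations instead of a direct debit.
class Account {
    constructor(id, balance) {
        this.id = id;
        this.balance = balance;
        this.reservations = new Map(); // reservationId -> amount
    }

    reserve(reservationId, amount) {
        const reserved = [...this.reservations.values()].reduce((a, b) => a + b, 0);
        if (this.balance - reserved < amount) {
            throw new Error('Insufficient funds'); // invariant enforced inside one aggregate
        }
        this.reservations.set(reservationId, amount);
        // raise an AmountReserved event here...
    }

    confirmReservation(reservationId) {
        const amount = this.reservations.get(reservationId);
        this.reservations.delete(reservationId);
        this.balance -= amount;
        // raise a ReservationConfirmed / AccountDebited event here...
    }

    releaseReservation(reservationId) {
        this.reservations.delete(reservationId);
        // raise a ReservationReleased event here...
    }
}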