Using node.js I'd like to read a RabbitMQ queue and write the message to a MongoDB database. How do I wrap the read and the write within an ACID transaction so the whole thing either works or fails?
Mongo recommends emulating two-phase commit with the following pattern:
https://docs.mongodb.com/manual/core/write-operations-atomicity/
It is also already implemented here:
https://github.com/rystsov/mongodb-transaction-example
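Short of that pattern, a common practical alternative is to make the queue read and the database write effectively atomic by only acknowledging the RabbitMQ message after the MongoDB write succeeds. Below is a minimal sketch, assuming amqplib, the official mongodb driver, made-up queue/database/collection names, and a publisher that sets a messageId; this gives at-least-once processing rather than a true ACID transaction, so the write is made idempotent.

```js
const amqp = require('amqplib');
const { MongoClient } = require('mongodb');

async function run() {
  const mongo = await MongoClient.connect('mongodb://localhost:27017');
  const messages = mongo.db('app').collection('messages');

  const conn = await amqp.connect('amqp://localhost');
  const channel = await conn.createChannel();
  await channel.assertQueue('work', { durable: true });

  channel.consume('work', async (msg) => {
    try {
      // Idempotent upsert keyed by the message id, because a redelivered
      // message may be processed more than once.
      await messages.updateOne(
        { _id: msg.properties.messageId },
        { $set: { body: msg.content.toString() } },
        { upsert: true }
      );
      channel.ack(msg);               // remove from the queue only after the write succeeded
    } catch (err) {
      channel.nack(msg, false, true); // requeue so the whole unit is retried
    }
  });
}

run().catch(console.error);
```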
Related
I am new to Redis. I have an application in which I have multiple Redis commands which make up a transaction. If one of them fails, does Redis roll back the transaction like relational databases do? Is it the user's responsibility to roll back the transaction?
Redis does not roll back transactions the way relational databases do.
If you come from a relational database background, the fact that Redis commands can fail during a transaction, yet Redis will still execute the rest of the transaction instead of rolling back, may look odd to you.
However, there are good arguments for this behavior:
Redis commands can fail only if called with wrong syntax (and the problem is not detectable during command queuing), or against keys holding the wrong data type: this means that, in practical terms, a failing command is the result of a programming error, and a kind of error that is very likely to be detected during development and not in production.
Redis is internally simplified and faster because it does not need the ability to roll back.
Check out "Why Redis does not support rollback transactions" in the documentation and here.
Documentation here. Redis does not support rollback.
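To see this in practice, here is a minimal sketch using ioredis with made-up keys: the second queued command fails at runtime because of a wrong data type, but the first one is still applied, and any compensation is left to the application.

```js
const Redis = require('ioredis');

async function demo() {
  const redis = new Redis();              // localhost:6379

  await redis.set('counter', 10);
  await redis.set('name', 'alice');       // a plain string, not a list

  // Two commands queued in one MULTI/EXEC block.
  const results = await redis
    .multi()
    .incr('counter')                      // succeeds
    .lpush('name', 'x')                   // fails at runtime: wrong data type
    .exec();                              // resolves to [[error, result], ...] per command

  console.log(results);
  console.log(await redis.get('counter')); // "11" -- the INCR was NOT rolled back

  await redis.quit();
}

demo().catch(console.error);
```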
So here is my scenario:
I have a Firebase database
I also have a MongoDB Atlas database
There is a scenario where I have to write to a collection in the MongoDB Atlas database, then perform another write to a collection in the Firebase database, and finally a completion write back to the MongoDB Atlas database.
This is how I handle this:
I start a MongoDB transaction
I perform a write to MongoDB (in case this fails, I can just roll back, no issues)
I perform a write to Firebase (in case this fails, I can still abort the MongoDB transaction and roll back)
I perform another final write to MongoDB (ISSUE HERE)
I then commit the MongoDB transaction (ISSUE HERE)
As you can see, in points 4 and 5, if the operation fails, the writes to MongoDB can be rolled back but not the writes to Firebase. Obviously, this is because the two databases are not linked and are not under the same system. How does one approach this? I'm sure there are lots of systems out there with multiple databases.
I am using NodeJS and Express to handle this.
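In code, the flow looks roughly like this (a sketch with the official mongodb driver and firebase-admin; database, collection and field names are placeholders):

```js
const { MongoClient } = require('mongodb');
const admin = require('firebase-admin');

admin.initializeApp();   // assumes default Firebase credentials

async function createOrder(orderId) {
  const mongo = await MongoClient.connect(process.env.MONGO_URL);
  const orders = mongo.db('app').collection('orders');
  const session = mongo.startSession();
  try {
    session.startTransaction();

    // 1-2. start the transaction and perform the first MongoDB write
    await orders.insertOne({ _id: orderId, status: 'pending' }, { session });

    // 3. Firebase write; if it throws, the catch below still aborts the MongoDB transaction
    await admin.firestore().collection('orders').doc(String(orderId)).set({ status: 'pending' });

    // 4. final MongoDB write (ISSUE HERE)
    await orders.updateOne({ _id: orderId }, { $set: { status: 'complete' } }, { session });

    // 5. commit (ISSUE HERE): if step 4 or 5 fails, MongoDB rolls back but the Firebase write stays
    await session.commitTransaction();
  } catch (err) {
    await session.abortTransaction();
    throw err;
  } finally {
    await session.endSession();
  }
}
```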
There are many strategies:
Accept the changes in the non-transactional database even if the transaction fails. Accept that the non-transactional database may have incorrect data. For example, depending on how you view notifications here on SO the number of notifications in the top nav bar can be wrong.
Have a janitor process that periodically goes through the transactional database and updates the non-transactional database to match.
Same as 2 but trigger the janitor when a transaction is aborted, when you know some changes would need to be made on the non-transactional database.
Perform another write to the non-transactional database after the transaction completes (a sketch of this follows the list). This way you'll miss data from some completed transactions in the non-transactional database, but you won't have data from aborted transactions there.
When reading, read from the transactional database first, before reading from the non-transactional database. If the data isn't present in the transactional database, skip the non-transactional read.
Expire data from non-transactional database to reduce the time when the data there is incorrect.
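A minimal sketch of option 4, reusing the mongodb driver and firebase-admin from the question (names are placeholders): commit the MongoDB transaction first and only then write to Firebase, so an aborted transaction never leaves data in the non-transactional database.

```js
const admin = require('firebase-admin');   // assumed to be initialized elsewhere

// mongo: a connected MongoClient instance
async function createOrder(mongo, orderId) {
  const orders = mongo.db('app').collection('orders');
  const session = mongo.startSession();
  try {
    // All MongoDB writes happen inside the transaction.
    await session.withTransaction(async () => {
      await orders.insertOne({ _id: orderId, status: 'pending' }, { session });
      await orders.updateOne({ _id: orderId }, { $set: { status: 'complete' } }, { session });
    });
  } finally {
    await session.endSession();
  }

  // Reached only if the transaction committed. If this write fails, Firebase
  // merely misses data from a completed transaction, never data from an
  // aborted one; a janitor or expiry (options 2, 3 and 6) can patch it up later.
  await admin.firestore().collection('orders').doc(String(orderId)).set({ status: 'complete' });
}
```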
I am using Knex to connect to Postgres in my application. I am getting the following error when I run
knex migrate:latest
TimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Timeout._onTimeout
Referring to some threads, I understand that I have to add a transacting call, but do I need to add it to all the SQL calls of my app?
The documentation does not give me details about when to add this, or why it is a must. My queries are mostly of the "GET" type, hence I am not sure whether those queries need transacting.
It is probably a library bug.
Generally speaking, any operation, including SELECT, also needs a transaction with read locking. The database organizes the resource-locking sequence according to the transaction isolation level setting, and READ COMMITTED is usually the default. Rows in a table cannot be deleted while a user is reading them, until the read finishes. A DELETE (exclusive lock) waits until the SELECT (shared read lock) releases it, even if we didn't explicitly begin a transaction.
For this reason, most database connection libraries support an "auto commit" option, like this, this and this, to automatically wrap statements in a transaction by default when no explicit transaction is started (or this is supported natively by a DBMS session option), so all requests run in a transaction block.
Knex does not seem to have this option explicitly, and from what I can find it may differ between DBMS types (e.g. the Oracle dialect). While reading the code, I found that the Oracle implementation has it here, but the PostgreSQL implementation here does not have auto commit. It looks incomplete to me.
The documentation also says you can run a select query without a transacting call. If it leaks many open sessions, then it's obviously a bug. Please file a bug report with sample code to reproduce this issue.
Or you could inspect which queries are pending from the database side. All modern database systems can list sessions and their locking status. I suppose you have mixed naive select calls with transacting() calls, and the naive select calls may then be appended to an uncommitted open transaction. You can watch what is happening from a DB admin feature like this.
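For completeness, this is roughly what attaching queries to a transaction looks like in Knex (a sketch only, with a hypothetical comments table and a pg connection): plain standalone selects do not need .transacting(trx), but once you open a transaction, every query that belongs to it must be attached to it, and the transaction must be committed or rolled back, otherwise its connection stays checked out of the pool and you eventually hit the timeout above.

```js
const knex = require('knex')({
  client: 'pg',
  connection: process.env.DATABASE_URL,
  pool: { min: 0, max: 10 },
});

async function reassignComment(commentId, newOwnerId) {
  const trx = await knex.transaction();
  try {
    // Both queries run on the single connection held by the transaction.
    const comment = await knex('comments')
      .where({ id: commentId })
      .first()
      .transacting(trx);

    await knex('comments')
      .where({ id: commentId })
      .update({ owner_id: newOwnerId })
      .transacting(trx);

    await trx.commit();   // releases the connection back to the pool
    return comment;
  } catch (err) {
    await trx.rollback(); // also releases the connection
    throw err;
  }
}
```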
I'm working on a module which consumes some HTTP resources, writes to a Postgres database, and finally pushes a message to the message bus (RabbitMQ).
I would like to figure out how to deal with transactions inside the module: how do I encapsulate my Postgres operation and the push to RabbitMQ (i.e. in case the message cannot be pushed to RabbitMQ, my DB operation should be rolled back)?
Thanks.
There are several techniques to wrap parts of a Spring Integration flow in a transaction; see this answer for some examples.
You must, of course, use direct channels throughout.
I have a requirement: when a new comment is posted, I want to get all the previous comments' owner IDs and send a notification.
The problem here is how I will know that a new comment was added to the Cassandra table. What would the solution for this kind of requirement be?
If you want to use only Cassandra, without changes, it's impossible.
With changes, you have three options:
You can use Cassandra as an embedded service in Java. Here is a simple and short how-to: http://prettyprint.me/prettyprint.me/2010/02/14/running-cassandra-as-an-embedded-service/index.html
You can create a wrapper for your Cassandra connection: an application which handles the Cassandra connection and is available via an API to your other applications.
Cassandra has trigger functionality. (I have never used it and never heard of anyone using it.)
I prefer the second solution. Here are the reasons why:
It's simpler to create.
You can handle all your views in this application.
You can validate the input, resolve relations, log data, etc.
You can simply push the newly added comment to Kafka or another message queue.
This could be a setup:
Create a new comment -> call a backend API -> call the Cassandra database interface -> push a new message to Kafka -> send the data to all Kafka consumers
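A minimal sketch of that setup in Node.js, assuming the cassandra-driver and kafkajs packages, a comments table keyed by post_id, and a hypothetical new-comments topic; a separate consumer of that topic can then look up the previous commenters' owner IDs and send the notifications.

```js
const cassandra = require('cassandra-driver');
const { Kafka } = require('kafkajs');

const db = new cassandra.Client({
  contactPoints: ['127.0.0.1'],
  localDataCenter: 'datacenter1',
  keyspace: 'app',
});
const kafka = new Kafka({ clientId: 'comment-api', brokers: ['localhost:9092'] });
const producer = kafka.producer();

// Called once at startup by the backend API.
async function start() {
  await db.connect();
  await producer.connect();
}

// The backend API handler: write the comment to Cassandra, then publish it.
async function addComment(postId, ownerId, text) {
  await db.execute(
    'INSERT INTO comments (post_id, comment_id, owner_id, text) VALUES (?, now(), ?, ?)',
    [postId, ownerId, text],
    { prepare: true }
  );
  await producer.send({
    topic: 'new-comments',
    messages: [{ key: String(postId), value: JSON.stringify({ postId, ownerId, text }) }],
  });
}
```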