I know the function's name seems to be self-explanatory; however, after researching for quite a while I can't find a transaction number anywhere within a clientSession.
Is it an internal number? Is it possible to get it?
Transaction numbers are used by MongoDB to keep track of operations (reads/writes) per transaction per session. Sessions can be started either explicitly by calling startSession() or implicitly whenever you create a MongoDB connection to the database server.
How incrementTransactionNumber() works with sessions (explicit)
When you start a session by calling the client.startSession() method, it will create a new ClientSession. This takes in the already created server session pool as one of its constructor parameters. (See) These server sessions have a property called txnNumber, which is initialized to 0. (Init) So whenever you start a transaction by calling startTransaction(), the client session object calls incrementTransactionNumber() internally to increment the txnNumber in the server session. All the successive operations will use the same txnNumber until you call the commitTransaction() or abortTransaction() methods. The reason you can't find it anywhere within clientSession is that it is a property of serverSession, not clientSession.
ServerSession
class ServerSession {
  constructor() {
    this.id = { id: new Binary(uuidV4(), Binary.SUBTYPE_UUID) };
    this.lastUse = now();
    this.txnNumber = 0;
    this.isDirty = false;
  }
}
So whenever you send a command (read/write) to the database, this txnNumber is sent along with it. (Assign transaction number to command)
This is to keep track of the database operations that belong to a given transaction per session. (A transaction operation history that uniquely identifies each transaction per session.)
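For illustration, here is roughly what a write command carrying these fields looks like (the collection name and document are made up; lsid and txnNumber are the actual field names MongoDB commands use):

{
  insert: "users",                  // the write command itself
  documents: [ { name: "alice" } ],
  lsid: { id: <session UUID> },     // identifies the session
  txnNumber: NumberLong(1)          // identifies the transaction within that session
}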
How incrementTransactionNumber() works with sessions (implicit)
In this case it is called every time a new command is issued to the database, if that command does not belong to a transaction and it is a write operation with retryWrites enabled. So each new write operation gets a new transaction number, as long as it does not belong to a transaction explicitly started with startTransaction(). But in this case as well, a txnNumber is sent along with each command.
execute_operation.
const willRetryWrite =
  topology.s.options.retryWrites === true &&
  session &&
  !inTransaction &&
  supportsRetryableWrites(server) &&
  operation.canRetryWrite;

if (
  operation.hasAspect(Aspect.RETRYABLE) &&
  ((operation.hasAspect(Aspect.READ_OPERATION) && willRetryRead) ||
    (operation.hasAspect(Aspect.WRITE_OPERATION) && willRetryWrite))
) {
  if (operation.hasAspect(Aspect.WRITE_OPERATION) && willRetryWrite) {
    operation.options.willRetryWrite = true;
    session.incrementTransactionNumber();
  }
  operation.execute(server, callbackWithRetry);
  return;
}

operation.execute(server, callback);
Also read this article. And yes, if you need it, you can get the transaction number for any session through the txnNumber property: clientSession.serverSession.txnNumber.
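For example, a minimal sketch with the Node.js driver (the connection string is an assumption):

const { MongoClient } = require('mongodb');

async function main() {
  const client = new MongoClient('mongodb://localhost:27017'); // assumed URI
  await client.connect();

  const session = client.startSession();
  console.log(session.serverSession.txnNumber); // 0: no transaction started yet

  session.startTransaction(); // internally calls incrementTransactionNumber()
  console.log(session.serverSession.txnNumber); // 1

  await session.abortTransaction();
  session.endSession();
  await client.close();
}

main();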
In our NestJS application we are using TypeORM as the ORM to work with database tables, together with the typeorm-transactional-cls-hooked library.
Now we have a problem with synchronizing requests that read and modify the database at the same time.
Sample:
@Transactional()
async doMagicAndIncreaseCount (id) {
  const { currentCount } = await this.fooRepository.findOne(id)
  // do some stuff where I receive a new count which I need to add to the current one, for instance 10
  const newCount = currentCount + 10
  await this.fooRepository.update(id, { currentCount: newCount })
}
When we execute this operation from the frontend multiple times concurrently, the final count is wrong: the first transaction reads currentCount and starts its computation; during that computation the second transaction starts and also reads currentCount; the first transaction finishes and saves the new currentCount; then the second transaction finishes and overwrites the result of the first.
Our goal is to execute this operation on the foo table only once at a time; other requests should wait until it finishes.
I tried setting the SERIALIZABLE isolation level like this:
@Transactional({ isolationLevel: IsolationLevel.SERIALIZABLE })
which ensures that only one request is executed at a time, but the other requests fail with an error. Can you please give me some advice on how to solve this?
I have never used TypeORM, and you haven't said which DB engine you are using.
Anyway, to achieve this you need write locks.
The doMagicAndIncreaseCount pseudocode should be something like
BEGIN TRANSACTION
ACQUIRE WRITE LOCK ON id
READ id RECORD
do computation
SAVE RECORD
CLOSE TRANSACTION
Alternatively you have to use some operation which is natively atomic on the DB engine, e.g. the INCR operation in Redis.
Edit:
Reading the TypeORM find documentation, I can suggest something like:
this.fooRepository.findOne({
  where: { id },
  lock: { mode: "pessimistic_write" },
})
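Putting it together with your method, a rough sketch (assuming typeorm-transactional-cls-hooked propagates the transaction to the repository calls):

@Transactional()
async doMagicAndIncreaseCount (id) {
  // The pessimistic write lock makes concurrent transactions queue up
  // on this row until the current transaction commits or rolls back.
  const { currentCount } = await this.fooRepository.findOne({
    where: { id },
    lock: { mode: "pessimistic_write" },
  })
  const newCount = currentCount + 10
  await this.fooRepository.update(id, { currentCount: newCount })
}

With SERIALIZABLE, conflicting requests fail and must be retried; with a pessimistic row lock, the competing transactions simply wait their turn.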
P.S. Looking at the tags of the question, I would guess the DB engine in use is PostgreSQL.
I am trying to clear a timeout set using the setTimeout method by the Node process, in a subsequent request (using Express). Basically, I set the timeout when our live stream event starts (we get notified by a webhook), aiming to stop the stream for guest users after one hour. The one hour is calculated via setTimeout, which works fine so far. However, if the event is stopped before one hour, I need to clear the timeout. I am trying to use clearTimeout, but it just can't find the same variable.
// Event starts
var setTimeoutIds = {};
var val = req.body.eventId;

setTimeoutIds[val] = setTimeout(function() {
  req.app.io.emit('disable_for_guest', req.body);
  live_events.update({ event_id: req.body.eventId }, { guest_visibility: false }, function(err, data) {
    // All ok
  });
}, disable_after_milliseconds);

console.log(setTimeoutIds);
req.app.io.emit('session_started', req.body);
When event ends:
try {
  var event_id = req.body.eventId;
  clearTimeout(setTimeoutIds[event_id]);
  delete setTimeoutIds[event_id];
} catch (e) {
  console.log('Event ID could not be removed: ' + e);
}
req.app.io.emit('event_ended', req.body);
Output: (screenshot)
You are defining setTimeoutIds in the scope of the handler. You must define it at module level:
var setTimeoutIds = {};
router.post('/webhook', function(req, res) {
...
That makes the variable available until the next restart of the server.
Note: this approach only works as long as you only have a single server with a single node process serving your application. Once you go multi-process and/or multi-server, you need a completely different approach.
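For concreteness, a minimal sketch of that layout (the second route path and the responses are assumptions, adjust to your actual endpoints):

var setTimeoutIds = {}; // module level: shared by all requests until the process restarts

router.post('/webhook', function(req, res) {
  var val = req.body.eventId;
  setTimeoutIds[val] = setTimeout(function() {
    req.app.io.emit('disable_for_guest', req.body);
  }, disable_after_milliseconds);
  req.app.io.emit('session_started', req.body);
  res.sendStatus(200);
});

router.post('/event-ended', function(req, res) { // hypothetical route name
  clearTimeout(setTimeoutIds[req.body.eventId]);
  delete setTimeoutIds[req.body.eventId];
  req.app.io.emit('event_ended', req.body);
  res.sendStatus(200);
});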
I have a Meteor application where I use RiveScript, a chatbot module for Node. The module can save some aspects of user input. My issue is that when I run the module on the server, the state is not saved per user but shared across all users. How would I go about creating a state for each user?
One method would be to create a new bot for each user like so:
let RiveScript = require('rivescript');
let users = {};

Meteor.onConnection(connection => {
  users[connection.id] = new RiveScript({ utf8: true });
  users[connection.id].loadDirectory('directory/', () => {
    users[connection.id].sortReplies();
  });

  connection.onClose(() => {
    delete users[connection.id];
  });
});
However, in terms of memory management this can cause an issue. Are there any commonly used patterns in this regard?
I am not familiar with either Meteor or RiveScript, so I hope I'm answering your question.
If you want to keep a certain state for each conversation, e.g.
Chat #1 : the bot is waiting for the user to answer a particular question
Chat #2 : the bot is waiting for a command
...
Why don't you use a simple map (a plain object) from a conversation identifier (or, in your case, connection.id, I'm guessing) to its current state?
Example:
// Just to simulate an enumeration
var States = { WAITING: 0, WAITING_XXX_CONFIRMATION: 1, ... };

var state = {};

Meteor.onConnection(connection => {
  state[connection.id] = States.WAITING;
});

// Somewhere else, e.g. in response to a command
if (state[connection.id] === States.WAITING_XXX_CONFIRMATION && input === 'yes') {
  // do something
  // reset the state
  state[connection.id] = States.WAITING;
}
You could then keep track of the state of every conversation and act accordingly. You could also wrap the state management inside an object to make it more reusable and nicer to use (e.g. a StateManager, so you can make calls such as StateManager.of(chatId).set(newState)).
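For instance, a minimal sketch of such a wrapper (the API shape is just an illustration):

var states = {};

var StateManager = {
  of: function(chatId) {
    return {
      get: function() { return states[chatId]; },
      set: function(newState) {
        states[chatId] = newState;
        return this; // allow chaining
      }
    };
  }
};

// Usage
StateManager.of(connection.id).set(States.WAITING);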
Hope this helps.
I am facing an issue using SQLite in the following scenario.
There are two threads working on the database, and both threads have to insert messages inside transactions.
Say one thread commits after inserting 20k rows while the other thread has not committed yet.
In the output I see that all of the data inserted by thread 2 up to the moment thread 1 committed has been committed as well.
Sample function:
/// <summary>
/// Inserts list of messages in message table
/// </summary>
/// <param name="listMessages"></param>
/// <param name="commitTransaction"></param>
/// <returns></returns>
public bool InsertMessages(IList<MessageBase> listMessages, bool commitTransaction)
{
    bool success = false;
    if (listMessages == null || listMessages.Count == 0)
        return success;

    DatabaseHelper.BeginTransaction(_sqlLiteConnection);
    foreach (MessageBase message in listMessages)
    {
        using (var statement = _sqlLiteConnection.Prepare(_insertMessageQuery))
        {
            BindMessageData(message, statement);
            SQLiteResult result = statement.Step();
            success = result == SQLiteResult.DONE;
            if (success)
            {
                Debug.WriteLine("Message inserted successfully, messageId:{0}, message:{1}", message.Id, message.Message);
            }
            else
            {
                Debug.WriteLine("Message failed, Result:{0}, message:{1}", result, message.Message);
            }
        }
    }

    if (commitTransaction)
    {
        Debug.WriteLine("Data committed");
        DatabaseHelper.CommitTransaction(_sqlLiteConnection);
    }
    else
    {
        Debug.WriteLine("Data not committed");
    }
    return success;
}
Is there any way to prevent thread 2's inserts from being committed together with thread 1's transaction?
In short, it's not possible on a single database.
A single SQLite database cannot have multiple simultaneous writers with separate transaction contexts. SQLite also does not provide separate transaction contexts within a single connection; to get a separate context, you would need to open a new connection. However, as soon as one connection starts the initial write with INSERT (or UPDATE/DELETE), the transaction needs a RESERVED lock on the database (readers allowed, no other writers), which means parallel writes are impossible. I thought you might be able to fake it with SAVEPOINT and RELEASE, but these are also serialized on the connection and do not create a separate context.
With that in mind, you may be able to use separate databases connected using ATTACH DATABASE, as long as the two threads are not writing to the same table. To do so, you would attach the additional database (containing the other tables) at runtime. However, you still need a separate connection for each parallel writer, because a commit on a connection still applies to all transactions open on that connection.
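A rough sketch of that setup, on the second thread's own connection (the file and table names are made up):

-- Thread 2's connection: attach a second database file for its table
ATTACH DATABASE 'other_messages.db' AS other;

BEGIN;
INSERT INTO other.messages (id, message) VALUES (1, 'hello');
-- ... more inserts ...
COMMIT;  -- a COMMIT still applies to every transaction open on this connection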
Otherwise, you can get separate transactions by opening an additional connection; the later connection and its transaction will simply have to wait until the RESERVED lock is released.
References:
SQLite Transactions
SQLite Locking
SQLite ATTACH DATABASE
Overview
I am attempting to understand how to ensure asynchronous safety when using an instance of a model in Node.js. Here I use the Mongoose ODM in the code samples, but the question applies to any case where a database is used with the asynchronous, event-driven I/O approach that Node.js employs.
Consider the following code (which uses Mongoose for MongoDB queries):
Snippet A
MyModel.findOne( { _id : <id #1> }, function( err, doc ) {
  MyOtherModel.findOne( { _id : someOtherId }, function( err, otherDoc ) {
    if (doc.field1 === otherDoc.otherField) {
      doc.field2 = 0; // assign some new value to a field on the model
    }
    doc.save( function() { console.log( 'success' ); } );
  });
});
In a separate part of the application, the document described by MyModel could be updated. Consider the following code:
Snippet B
MyModel.update( { _id : <id #1> }, { $set : { field1 : someValue } }, callback );
In Snippet A, a MongoDB query is issued with a registered callback to be fired once the document is ready. An instance of the document described by MyModel is retained in memory (in the "doc" object). The following sequence could occur:
Snippet A executes
A query is initiated for MyModel, registering a callback (callback A) for later use
<< The Node event loop runs >>
MyModel is retrieved from the database, executing the registered callback (callback A)
A query is initiated for MyOtherModel, registering a callback for later use (callback B)
<< The Node event loop runs >>
Snippet B executes
The document (id #1) is updated
<< The Node event loop runs >>
MyOtherModel is retrieved from the database, executing the registered callback (callback B)
The stale version of the document (id #1) is incorrectly used in a comparison.
Questions
Are there any guarantees that this type of race condition won't happen in Node.js/MongoDB?
What can I do to deterministically prevent this scenario from happening?
While Node runs code in a single-threaded manner, it seems to me that any time the event loop is allowed to run, the door is open to potentially stale data. Please correct me if this observation is wrong.
No, there are no guarantees that this type of race condition won't occur in node.js/MongoDB. It doesn't have anything to do with node.js though, and this is possible with any database that supports concurrent access, not just MongoDB.
The problem is, however, trickier to solve with MongoDB because it doesn't support transactions like your typical SQL database would. So you have to solve it in your application layer using a strategy like the one outlined in the MongoDB cookbook here.
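One such strategy is "update if current": include the value you read in the update's query, so the write only succeeds if nobody changed the document in the meantime. A minimal sketch (retry handling is left out, and the exact shape of the write result varies with the Mongoose version):

MyModel.findOne({ _id: someId }, function (err, doc) {
  var seenValue = doc.field1;

  MyModel.update(
    { _id: someId, field1: seenValue }, // match only if field1 is still what we read
    { $set: { field2: 0 } },
    function (err, res) {
      if (res.n === 0) {
        // Someone modified the document between our read and write:
        // re-read and retry instead of writing stale data.
      }
    }
  );
});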