Question about MongoDB capped collections + tailable cursors - multithreading

I'm building a queueing system that passes a message from one process to another via a queue implemented in MongoDB with capped collections and tailable cursors.
The receiving process loops infinitely, looking for new documents in the capped collection; when it finds one, it performs an operation.
My question is: if I run multiple receiving processes, is there a way to guarantee that a new document will only be read once, by one of the processes using a tailable cursor? The goal is to avoid the operation being performed twice when two receiving processes are looking for new messages in the queue. I'm relatively new to MongoDB programming, so I'm still getting a feel for all of its features.

The MongoDB documentation contains a thorough description of ways to achieve an atomic update. You cannot ensure that only one process receives the new document, but you can implement an atomic update after receiving it to ensure that only one process acts on it.
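One hedged way to apply that advice here, sketched with the Node.js MongoDB driver: each consumer that reads a document from the tailable cursor first records a claim for it in a separate, non-capped collection (documents in a capped collection generally cannot grow on update), and only the consumer whose insert succeeds performs the operation. The claims collection, field names, and workerId below are assumptions, not anything from the original question.

```js
// Minimal sketch of the "atomic update after receiving" idea (Node.js driver).
// The unique _id index on the claims collection makes the insert atomic, so
// exactly one consumer wins the race for a given message.
async function tryClaim(claims, message, workerId) {
  try {
    await claims.insertOne({ _id: message._id, owner: workerId, claimedAt: new Date() });
    return true;                           // this consumer should process the message
  } catch (err) {
    if (err.code === 11000) return false;  // duplicate key: another consumer got there first
    throw err;
  }
}
```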

I have recently been looking into this problem and I would be interested to know if there are other ways to have multiple readers (consumers) without relying on atomic updates.
This is what I have come up with: divide your logic into two "modules". The first module is responsible for fetching new documents from the tailable cursor. The second module is responsible for working on an arbitrary document. In this manner, you can have a single consumer (the first module) fetching documents, which then hands each document off to one of multiple document workers (the second module).
Both modules can be implemented in different processes and even in different languages. For example, a Node.js app could fetch the documents and send them to a pool of Python scripts ready to process documents concurrently.
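As a rough illustration of the first module, here is a minimal Node.js sketch of a single fetcher reading a tailable, awaitData cursor and handing documents to a dispatch() function. The connection string, database and collection names, and dispatch() itself are placeholders.

```js
const { MongoClient } = require('mongodb');

// Minimal sketch of the single "fetcher" module; dispatch() is a placeholder that
// could push each document to a pool of workers (child processes, a job queue,
// or the Python scripts mentioned above).
async function runFetcher(dispatch) {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const queue = client.db('queueing').collection('messages'); // must be a capped collection

  // Note: a tailable cursor closes if the initial query matches nothing, so the
  // collection is usually seeded with a dummy document before consumers start.
  const cursor = queue.find({}, { tailable: true, awaitData: true });
  for await (const doc of cursor) {
    dispatch(doc); // single consumer: hand each new document to exactly one worker
  }
}
```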

Related

How to manage concurrent writes to a large (5mb) MongoDB document with Node JS

I built an app that manages sports tournaments using MongoDB and Mongoose on Node.js. I'd like to know if I am using the best solution to handle multiple concurrent writes to a large document (5 MB) in rapid succession.
Each "Event" (tournament) is a single document that contains a list of teams. There is a maximum number of teams that can register to each Event. So normally, when a team registers, my Node JS server will load the event, check if the max number of teams has not been reached, add the team to sub-documents and save the Event.
The problem is that some tournaments make players frantic to get a spot and you can have 60 teams complete their registration in the opening seconds which would cause concurrency errors.
For example, if 2 teams click on "save" at the same time, 2 threads (requests) will open on the NodeJS server, both threads will load identical copies of the event, modify them and save two different versions of the document over one another. Obviously, you will get a version error for one of the two threads. Now imagine 60 teams registering within the same second.
The second problem is that the Event document is quite large. Let's be dramatic and say it's 5Mb in size (rare but possible). If I have to load, modify, write 5 megs per registration, the registration system is going to grind to a halt (since my MongoDB is on a different server.)
So I need to know if I built the right solution and if you guys foresee problems with this.
On my Node server, I built a Singleton class (accessible to all requests) to manage access to documents. So if a request comes along and asks for Document X, the singleton returns a Promise to the request, which will be resolved once this document becomes available to edit. The singleton then turns around, loads the document and grants access to the first request by resolving its promise. When the request is done editing this document, it tells the singleton that it's done. The singleton then checks if there is a queue of other requests waiting to edit this document (other teams that want to register). If so, it does NOT save the document but rather resolves the next promise, allowing the next request to edit the document.
When the last request has finished editing the document and there are no more requests in the queue, the singleton saves the document and clears it from memory.
So in short, the singleton allows the system to load the document once, allow modifications from multiple requests, and then save the document at the end of the rush. This is especially useful since the document is rather large (up to 5 MB) and it minimizes the number of reads/writes to the MongoDB server. The other benefit is that if we're accepting 50 teams and we get 55 requests wanting to append their teams, the last 5 requests in the queue will see that the live document has reached its team limit and return a "sorry, we're full" response.
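For reference, here is a minimal sketch of the per-document promise queue described above. The class name, the loadDoc/saveDoc callbacks, and the mutate function are all hypothetical; this is only a skeleton of the idea, not the actual implementation.

```js
// Sketch of the singleton's core idea: one promise chain per document _id.
// Requests are appended to the chain, run one at a time against a single cached
// copy of the document, and the document is saved once when the queue drains.
class DocumentGate {
  constructor(loadDoc, saveDoc) {
    this.loadDoc = loadDoc;   // async (id) => document
    this.saveDoc = saveDoc;   // async (doc) => void
    this.entries = new Map(); // id -> { tail, pending, doc }
  }

  // mutate(doc) edits the cached document; it may throw (e.g. "sorry, we're full")
  // to reject this particular request without blocking the rest of the queue.
  edit(id, mutate) {
    let entry = this.entries.get(id);
    if (!entry) {
      entry = { tail: Promise.resolve(), pending: 0, doc: null };
      this.entries.set(id, entry);
    }
    entry.pending += 1;

    const run = entry.tail.then(async () => {
      if (!entry.doc) entry.doc = await this.loadDoc(id); // load once per rush
      try {
        mutate(entry.doc);
      } finally {
        entry.pending -= 1;
        if (entry.pending === 0) {
          if (entry.doc) await this.saveDoc(entry.doc);       // persist once at the end of the rush
          if (entry.pending === 0) this.entries.delete(id);   // nobody arrived meanwhile
        }
      }
    });
    entry.tail = run.catch(() => {}); // keep the chain alive if one edit fails
    return run;                       // caller awaits success or a rejection
  }
}
```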
Is this the best way to manage concurrent writes to a large document?
MongoDB provides a multitude of update operators that you should use on the specific fields instead of modifying the entire document in your application. For example, to add to an array use $push: https://docs.mongodb.com/manual/reference/operator/update/push/.
This way you 1) only send the changed data on each write and 2) avoid racing yourself and clobbering your own changes.
This doesn't help with the time it takes the server to rewrite that 5 MB document each time it's modified; split the document up to fix that (if you find it to be an issue).
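As a hedged sketch of that advice applied to the registration case (the teamCount counter field is an assumption kept alongside the teams array, and the names are placeholders), the capacity check and the push can be folded into one atomic update:

```js
// Sketch: one atomic, capacity-checked registration per request (Node.js driver).
// Assumes the Event document keeps a teamCount counter next to its teams array.
async function registerTeam(events, eventId, team, maxTeams) {
  const res = await events.updateOne(
    { _id: eventId, teamCount: { $lt: maxTeams } },   // match only while there is room
    { $push: { teams: team }, $inc: { teamCount: 1 } }
  );
  return res.modifiedCount === 1;   // false => event is full (or does not exist)
}
```

Because the filter and the update execute as a single atomic operation on the server, 60 near-simultaneous registrations race safely: requests beyond the limit simply match nothing and can be answered with "sorry, we're full", and only the new team (not the whole 5 MB document) crosses the wire.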

Create atomic stored procedure / "stored JS" with MongoDB

We need to create an atomic routine in our MongoDB database.
We need to iterate through a collection, find the highest value of a given field across all documents in the collection, then increment it. We are working with some legacy data that we need to integrate; otherwise we'd have an atomic sequence already in place.
How can I create stored JS or a stored procedure in MongoDB that can run a whole routine atomically?
I am seeing some information but nothing is looking particularly clear to me:
Called a stored javascript function from Mongoose?
https://groups.google.com/forum/#!topic/mongoose-orm/sPN3wfDstX4
https://github.com/mongoosejs/mongoose-function
Where can I find good information on how to actually write an atomic/blocking stored procedure that runs in MongoDB, and on how to invoke the stored procedure from the application?
(summarizing the comments above)
At the moment, there is nothing in MongoDB that will allow you to run a piece of arbitrary logic (including, for example, multiple queries to gather data) atomically.
The best atomic primitive that MongoDB has to offer is findAndModify. Its atomicity is naturally restricted to a single document, and you have a fairly limited list of update operators (that is, you can't even use the fields of the document, same as with regular updates).
It is somewhat possible using an application-level lock: the application inserts or modifies a special lock document, which signals to other parts of the application "I'm using/updating this, please refrain from touching it". After the operation is completed, the application releases the lock, so it is free to be re-acquired by someone else. Of course, this relies entirely on all actors respecting the lock agreement, which is not very reliable, to put it mildly.
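A hedged sketch of that application-level lock pattern in Node.js (the locks collection, the lock name, and the routine passed in are all assumptions):

```js
// Sketch: a cooperative lock held as a document in a dedicated "locks" collection.
// insertOne on a fixed _id is atomic: exactly one caller acquires the lock, the
// rest get a duplicate-key error (code 11000) and must retry or give up.
async function withLock(locks, name, routine) {
  try {
    await locks.insertOne({ _id: name, acquiredAt: new Date() });
  } catch (err) {
    if (err.code === 11000) return false;  // someone else holds the lock
    throw err;
  }
  try {
    await routine();                       // e.g. find the highest value, then increment it
    return true;
  } finally {
    await locks.deleteOne({ _id: name });  // release; a TTL index or timestamp check
                                           // is needed to recover from crashed holders
  }
}
```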

QSQLite Error: Database is locked

I am new to Qt development, the way it handles threads (signals and slots) and databases (and SQLite at that). It has been four weeks since I started working with these technologies. This is the first time I'm posting a question on SO, and I feel I have done my research before coming to you all. This may look a little long and possibly a duplicate, but I request you all to read it thoroughly before dismissing it as a duplicate or tl;dr.
Context:
I am working on a Windows application that performs a certain operation X on a database. The application is developed in Qt and uses SQLite (through the QSQLITE driver) as the database engine. It's a single-threaded application, i.e., the tables are processed sequentially. However, as the DB size grows (in number of tables and records), this processing becomes slower. The result of this operation X is written to a separate results table in the same DB. The processing being done is immaterial to the problem, but in basic terms here's what it does:
Read a row from Table_X_1
Read a row from Table_X_2
Do some operations on the rows (only read)
Push the results in Table_X_Results table (this is the only write being performed on the DB)
Table_X_1 and Table_X_2 are identical in number and types of columns and number of rows, only the data may differ.
What I'm trying to do:
In order to improve performance, I am trying to make the application multi-threaded. Initially I am spawning two threads (using QtConcurrentRun). The tables can be categorized into two types, say A and B. Each thread will take care of the tables of one type. Processing within the threads remains the same, i.e., within each thread the tables are processed sequentially.
The function is such that it uses SELECT to fetch rows for processing and INSERT to insert result in results table. For inserting the results I am using transactions.
I am creating all the intermediate tables, result tables and indices before starting my actual operation. I am opening and closing connections every time. For the threads, I create and open a connection before entering the loop (one for each thread).
THE PROBLEM:
Inside my processing function, I get following (nasty, infamous, stubborn) error:
QSqlError(5, "Unable to fetch row", "database is locked")
I am getting this error when I'm trying to read a row from the DB (using SELECT). This is in the same function in which I'm performing my INSERTs into the results table. The SELECT and the INSERT are in the same transaction (begin and commit pair). For the INSERT I'm using a prepared statement (SQLiteStatement).
Reasons for seemingly peculiar things that I am doing:
I am using QtConcurrentRun to create the threads because it is straightforward to do! I have tried using QThread (not subclassing QThread, but the other method). That also leads to the same problem.
I am compiling with DSQLITE_THREADSAFE=0 to keep the application from crashing. If I use the default (DSQLITE_THREADSAFE=1), my application crashes at SQLiteStatement::recordSet->Reset(). Also, with the default option, SQLite's internal synchronization mechanism comes into play, which may not be reliable. If need be, I'll employ explicit synchronization.
I am making the application multi-threaded to improve performance rather than relying only on the usual SQLite optimizations, all of which I am already applying.
I am using QSqlDatabase::setConnectOptions with QSQLITE_BUSY_TIMEOUT=0. A link suggested that it would prevent the DB from getting locked immediately and hence might give my thread(s) an appropriate amount of time to "die peacefully". This failed: the DB got locked much more frequently than before.
Observations:
The database gets locked only when, and as soon as, one of the threads returns. This behavior is consistent.
When compiling with DSQLITE_THREADSAFE=1, the application crashes when one of the threads returns. The call stack points at SQLiteStatement::recordSet->Reset() in my function, and at winMutexEnter() (called from EnterCriticalSection()) in sqlite3.c. This is consistent as well.
The threads created using QtConcurrentRun do not die immediately.
If I use QThreads, I can't get them to return. That is to say, I feel the thread never returns even though I have connected the signals and the slots correctly. What is the correct way to wait for threads, and how long does it take them to die?
The thread that finishes execution never returns; it has locked the DB, hence the error.
I checked for SQLITE_BUSY and tried to make the thread sleep, but could not get it to work. What is the correct way to sleep in Qt (for threads created with QtConcurrentRun or QThreads)?
When I close my connections, I get this warning:
QSqlDatabasePrivate::removeDatabase: connection 'DB_CONN_CREATE_RESULTS' is still in use, all queries will cease to work.
Is this of any significance? Some links suggested that this warning arises because of using a local QSqlDatabase, and will not arise if the connection is made a class member. However, could it be the reason for my problem?
Further experiments:
I am thinking of creating another database which will only contain the results table (Table_X_Results). The rationale is that while the threads will read from one DB (the one that I have currently), they will get to write to another DB. However, I may still face the same problem. Moreover, I read on the forums and wikis that it IS possible to have two threads doing reads and writes on the same DB. So why can I not get this scenario to work?
I am currently using SQLite version 3.6.17. Could that be the problem? Will things be better if I used version 3.8.5?
I was trying to post the web resources that I have already explored, but I got a message saying "I'd need 10 reps to post more than 2 links". Any help/suggestions would be much appreciated.

Good approaches for queuing simultaneous NodeJS processes

I am building a simple application to download a set of XML files and parse them into a database using the async module (https://npmjs.org/package/node-async) for flow control. The overall flow is as follows:
Download list of datasets from API (single Request call)
Download metadata for each dataset to get link to XML file (async.each)
Download XML for each dataset (async.parallel)
Parse XML for each dataset into JSON objects (async.parallel)
Save each JSON object to a database (async.each)
In effect, for each dataset there is a parent process (2) which sets off a series of asynchronous child processes (3, 4, 5). The challenge that I am facing is that, because so many parent processes fire before all of the children of a particular process are complete, child processes seem to get queued up in the event loop, and it takes a long time for all of the child processes of a particular parent process to resolve and allow garbage collection to clean everything up. The result is that even though the program doesn't appear to have any memory leaks, memory usage is still too high, ultimately crashing the program.
One solution which worked was to make some of the child processes synchronous so that they can be grouped together in the event loop. However, I have also seen an alternative solution discussed here: https://groups.google.com/forum/#!topic/nodejs/Xp4htMTfvYY, which pushes parent processes into a queue and only allows a certain number to be running at once. My question, then, is: does anyone know of a more robust module for handling this type of queueing, or any other viable alternative for handling this kind of flow control? I have been searching but so far have had no luck.
Thanks.
I decided to post this as an answer:
Don't launch all of the processes at once. Let the callback of one request launch the next one. The overall work is still asynchronous, but each request gets run in series. You can then pool a certain number of connections to run simultaneously and maximize I/O throughput. Look at async.eachLimit and replace each of your async.each calls with it.
Your async.parallel calls may be causing issues as well.
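A hedged sketch of that suggestion; the limit of 5 and the downloadAndProcess function (standing in for steps 2-5 for a single dataset) are placeholders.

```js
var async = require('async');

// Run at most 5 datasets through the whole pipeline at a time instead of
// firing every parent process at once.
async.eachLimit(datasets, 5, function (dataset, done) {
  // downloadAndProcess stands in for steps 2-5 above for one dataset;
  // it must call done() on success or done(err) on failure.
  downloadAndProcess(dataset, done);
}, function (err) {
  if (err) console.error('Stopped early:', err);
  else console.log('All datasets processed');
});
```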

Returning LOTS of items from a MongoDB via Node.js

I'm returning A LOT of documents (500k+) from a MongoDB collection in Node.js. It's not for display on a website, but rather for some number crunching on the data. If I grab ALL of those documents, the system freezes. Is there a better way to grab them all?
I'm thinking pagination might work?
Edit: This is already outside the main node.js server event loop, so "the system freezes" does not mean "incoming requests are not being processed"
After learning more about your situation, I have some ideas:
Do as much as you can in a Map/Reduce function in Mongo - perhaps if you throw less data at Node that might be the solution.
Perhaps this much data is eating all the memory on your system. Your "freeze" could be V8 stopping the system to do a garbage collection (see this SO question). You could use the V8 flag --trace-gc to log GCs and test this hypothesis (thanks to another SO answer about V8 and garbage collection).
Pagination, like you suggested, may help. Perhaps even split your data further into worker queues (create one worker task with references to records 1-10, another with references to records 11-20, etc.), depending on your calculation.
Perhaps pre-process your data, i.e., somehow return much smaller data for each record. Or don't use an ORM for this particular calculation, if you're using one now. Making sure each record has only the data you need in it means less data to transfer and less memory your app needs.
I would put your big fetch+process task on a worker queue, background process, or forking mechanism (there are a lot of different options here).
That way you do your calculations outside of your main event loop and keep that free to process other requests. While you should be doing your Mongo lookup in a callback, the calculations themselves may take up time, thus "freezing" node - you're not giving it a break to process other requests.
Since you don't need them all at the same time (that's what I've deduced from you asking about pagination), perhaps it's better to separate those 500k documents into smaller chunks to be processed on nextTick?
You could also use something like Kue to queue the chunks and process them later (so that not everything runs at the same time).
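One hedged way to combine several of these suggestions with the Node.js driver (the collection, projection fields, batch size, and crunch() function are placeholders): stream the cursor in modest batches with a projection instead of materializing all 500k documents at once, and yield to the event loop between batches.

```js
// Sketch: stream the documents in batches with a projection so only the fields you
// need cross the wire, and yield between batches so Node can service other requests.
async function crunchAll(collection, crunch) {
  const cursor = collection
    .find({}, { projection: { value: 1, ts: 1 } })   // placeholder fields
    .batchSize(1000);

  let batch = [];
  for await (const doc of cursor) {
    batch.push(doc);
    if (batch.length === 1000) {
      crunch(batch);                                  // your number crunching
      batch = [];
      // setImmediate (rather than nextTick) lets pending I/O run between chunks
      await new Promise((resolve) => setImmediate(resolve));
    }
  }
  if (batch.length > 0) crunch(batch);
}
```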
