Node.js: How to handle MongoDB write errors?

How do I handle a MongoDB write error (due to the Mongo connection dropping)? Because I have to do updates on multiple documents, it basically needs to be transactional: a "nothing or all" approach. I thought I could catch the error and revert the inserted data if one of the updates failed. But if the MongoDB connection drops, the error goes straight to the application's "uncaughtException" handler. So how can I handle this scenario? All I need is "nothing or all" on a multi-document update.

Transactions don't exist in MongoDB. There are some workarounds, such as the one @Alex-Blex posted in the comments, where you do a two-stage commit to progressively perform your update. (The name is a bit of a misnomer; their example has seven db ops.)
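For a flavor of that pattern, here is a condensed sketch following the old MongoDB two-phase-commit recipe. The collection names, account ids, and amounts are illustrative only, and it assumes db is a connected native-driver database handle; note that it is indeed seven db ops:
// A condensed sketch of the two-phase commit pattern from the old MongoDB docs.
// Collection names, account ids and amounts below are illustrative only.
const txns = db.collection('transactions');
const accounts = db.collection('accounts');

// 1. Record the intent.
const {insertedId} = await txns.insertOne({state: 'initial', source: 'A', dest: 'B', value: 10});
// 2. Mark it pending, then apply both updates, tagging each doc with the txn id.
await txns.updateOne({_id: insertedId}, {$set: {state: 'pending'}});
await accounts.updateOne({_id: 'A'}, {$inc: {balance: -10}, $push: {pendingTransactions: insertedId}});
await accounts.updateOne({_id: 'B'}, {$inc: {balance: 10}, $push: {pendingTransactions: insertedId}});
// 3. Mark applied, detach the txn id from both docs, then mark done.
await txns.updateOne({_id: insertedId}, {$set: {state: 'applied'}});
await accounts.updateMany({_id: {$in: ['A', 'B']}}, {$pull: {pendingTransactions: insertedId}});
await txns.updateOne({_id: insertedId}, {$set: {state: 'done'}});
// On restart, a recovery job scans for txns stuck in 'pending' or 'applied'
// and either resumes them or rolls them back.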
"if the MongoDB connection dropped, it's directly caught by the 'uncaughtException' of application"
You can listen for this with the connection disconnected and error events:
mongoose.connection.on("disconnected", function () {
// Lost connection to database
});
mongoose.connection.on("reconnected", ...);
Usually those listeners are application-wide, although I suppose you could set one up for the lifetime of your "transaction": you'd have to wait for reconnected, then retry/resume your database op, along the lines of the sketch below. In any case, you're still relying on your application not crashing for any reason for the entire duration of your "transaction".
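A minimal illustration of that retry idea, assuming a hypothetical runUpdates function that wraps your multi-document writes and your own revert logic:
const mongoose = require('mongoose');

// Sketch only: run the whole "transaction"; if the connection dropped mid-way,
// wait for mongoose to reconnect and run it again from the top.
async function withReconnectRetry(runUpdates) {
    try {
        await runUpdates();
    } catch (err) {
        // readyState 1 means "connected"; anything else suggests a dropped link
        if (mongoose.connection.readyState !== 1) {
            await new Promise((resolve) => mongoose.connection.once('reconnected', resolve));
            await runUpdates(); // still not atomic: a crash here breaks "nothing or all"
        } else {
            throw err;
        }
    }
}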
If you need transactions to be reliable, you'll probably need to look at another database.

How to ensure mongoose.dropDatabase() can ONLY be called when connected to mongo-memory-server

We're using mongodb-memory-server for unit tests. After every unit test suite we execute:
await connection.dropDatabase();
await collection.deleteMany({});
To set up mongoose we have two different methods:
setupMongoose(); // Connects to our dev database in the cloud (Atlas)
setupMongooseWithMemoryServer(); // Connects mongoose to the memory server
We're a team of developers, and my worst fear is that someone uses setupMongoose() to set up unit tests by mistake some day. If that happens, dropDatabase() will be called on our "real" dev database. That would be a catastrophe.
So how can I ensure that dropDatabase() and maybe collection.deleteMany({}) can NEVER ever be called on our cloud database?
Some thoughts:
I have thought about setting up env variables and checking for them before calling the dangerous methods. I've also already made a run-time check:
checkForUnitTestEnv() {
    if (!this.init || process.env.JEST_WORKER_ID === undefined) {
        console.error('FATAL: TRIED TO DROP DATABASE WITHOUT JEST!');
        throw new Error('FATAL: TRIED TO DROP DATABASE WITHOUT JEST!');
    }
}
(this.init is only true if memory-server has been initialized).
But these methods are not foolproof. Errors can still happen if our developers are not careful. So I was hoping either to make these "illegal operations" with our database provider (Atlas) if possible, or to check the mongoose connection URI at run time before calling the dangerous methods (but I haven't found a good way to do this yet).
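One possible run-time check along those lines - a sketch only, and not foolproof either - is to refuse the dangerous calls unless the active mongoose connection points at localhost, which is where mongodb-memory-server runs:
const mongoose = require('mongoose');

function assertMemoryServer() {
    // mongodb-memory-server binds to 127.0.0.1; an Atlas cluster never will.
    const host = mongoose.connection.host || '';
    if (host !== '127.0.0.1' && host !== 'localhost') {
        throw new Error('Refusing to drop database: connected to "' + host + '", not a local memory server');
    }
}
Calling assertMemoryServer() right before dropDatabase() and deleteMany() would at least turn the mistake into a loud failure instead of a dropped dev database.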

Mongo doesn't save data to disk

In our project we often have a problem where mongo doesn't save its state to disk, and after rebooting the application we lose data. I could not determine when or why this happens - somehow and somewhen :). Does anybody know how to synchronize mongodb storage to disk with some API? We use the mongorito ODM. I'd be pleased to hear any suggestions.
Some details.
Mongo version 3.2.
The application is an Electron application. Under the hood it uses mongo as storage - we use mongo on the client side and install it as a Windows service. The application starts, makes different transactions, reads/writes data from/to the mongo db - nothing strange. When we close the application and reopen it the next time, we cannot find the last rows (documents) in some collections that were successfully (according to mongo's responses) saved. We get no errors.
Can anyone explain what the write concern is and how to set it up so it doesn't wait 60 seconds before flushing the data - maybe this is the reason?
Some code for the db connect/disconnect; app is the Electron application:
const {Database} = require('mongorito');

const db = new Database(__DBPATH__);
db.connect();
db.register(__MONGORITO_MODEL__);

app.on('window-all-closed', () => {
    db.disconnect();
});
I'd take a look at the write concern setting within your application and make sure it's set to the requirements of your business: https://docs.mongodb.com/manual/reference/write-concern/
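For illustration, with the native Node.js driver (not mongorito specifically), requesting a journaled write acknowledgement looks roughly like this; j: true tells the server not to acknowledge the write until it has hit the on-disk journal:
// Sketch with the native mongodb driver; mongorito may expose this differently.
collection.insertOne({name: 'example'}, {w: 1, j: true}, function (err, result) {
    // With j: true, this callback fires only after the write
    // has been committed to the journal on disk.
});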
Also, make sure you're running a replica set in your production environment 👍
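Also worth checking: the db.disconnect() in your window-all-closed handler is asynchronous, so the app may quit before the driver has flushed anything. A sketch of deferring the quit until the disconnect finishes, assuming mongorito's disconnect() returns a promise:
const {app} = require('electron');

app.on('window-all-closed', () => {
    // Defer quitting until the driver has actually closed the connection.
    db.disconnect().then(() => app.quit());
});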
Thanks to everybody, I've solved the problem. The reason was journaling: I turned journaling on for the mongodb service and the problem has gone:
mongod.exe --journal

Meteor MongoDB subscription delivering data in 10 second intervals instead of live

I believe this is more of a MongoDB question than a Meteor question, so don't get scared if you know a lot about mongo but nothing about meteor.
Running Meteor in development mode, but connecting it to an external Mongo instance instead of using Meteor's bundled one, results in the same problem. This leads me to believe this is a Mongo problem, not a Meteor problem.
The actual problem
I have a meteor project which continuously gets data added to the database and displays it live in the application. It works perfectly in development mode, but behaves strangely when built and deployed to production. It works as follows:
A tiny script running separately collects broadcast UDP packages and shoves them into a mongo collection
The Meteor application then publishes a subset of this collection so the client can use it
The client subscribes and live-updates its view
The problem here is that the subscription appears to only get data about every 10 seconds, while these UDP packages arrive and get shoved into the database several times per second. This makes the application behave oddly.
It is most noticeable on the collection of UDP messages, but not limited to it. It happens with every collection that is subscribed to, even those not populated by the external script.
Querying the database directly, either through the mongo shell or through the application, shows that the documents are indeed added and updated as they are supposed to be. The publication just fails to notice and appears to fall back to querying on a 10-second interval.
Meteor uses oplog tailing on MongoDB to find out when documents are added/updated/removed and updates the publications based on this.
Anyone with a bit more Mongo experience than me who might have a clue about what the problem is?
For reference, this is the dead-simple publication function:
/**
 * Publishes a custom part of the collection. See {@link https://docs.meteor.com/api/collections.html#Mongo-Collection-find} for args
 *
 * @returns {Mongo.Cursor} A cursor to the collection
 *
 * @private
 */
function custom(selector = {}, options = {}) {
    return udps.find(selector, options);
}
and the code subscribing to it:
Tracker.autorun(() => {
    // Params for the subscription
    const selector = {
        "receivedOn.port": port
    };
    const options = {
        limit,
        sort: {"receivedOn.date": -1},
        fields: {
            "receivedOn.port": 1,
            "receivedOn.date": 1
        }
    };
    // Make the subscription
    const subscription = Meteor.subscribe("udps", selector, options);
    // Get the messages
    const messages = udps.find(selector, options).fetch();
    doStuffWith(messages); // Not actual code. Just for demonstration
});
Versions:
Development:
node 8.9.3
mongo 3.2.15
Production:
node 8.6.0
mongo 3.4.10
Meteor uses two modes of operation to provide real time on top of mongodb, which doesn't have any built-in real-time features: poll-and-diff and oplog-tailing.
1 - Oplog-tailing
It works by reading the mongo database's replication log, which mongo uses to synchronize secondary databases (the 'oplog'). This allows Meteor to deliver realtime updates across multiple hosts and scale horizontally.
It's more complicated, but provides real-time updates across multiple servers.
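Note that oplog tailing only kicks in when Meteor can actually reach the oplog: the database must be a replica-set member, and in production you typically point Meteor at its local database via MONGO_OPLOG_URL when starting the built app, something like (database name illustrative):
MONGO_URL=mongodb://localhost:27017/myapp \
MONGO_OPLOG_URL=mongodb://localhost:27017/local \
node main.js
If MONGO_OPLOG_URL is missing, or the database is not a replica set, Meteor silently falls back to poll-and-diff, which matches the 10-second behaviour you describe.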
2 - Poll and diff
The poll-and-diff driver works by repeatedly running your query (polling) and computing the difference between new and old results (diffing). The server will re-run the query every time another client on the same server does a write that could affect the results. It will also re-run periodically to pick up changes from other servers or external processes modifying the database. Thus poll-and-diff can deliver realtime results for clients connected to the same server, but it introduces noticeable lag for external writes.
(The default is 10 seconds, and this is what you are experiencing.)
This may or may not be detrimental to the application UX, depending on the application (e.g. bad for chat, fine for todos).
This approach is simple and delivers easy-to-understand scaling characteristics. However, it does not scale well with lots of users and lots of data. Because each change causes all results to be refetched, CPU time and network bandwidth scale O(N²) with users. Meteor automatically de-duplicates identical queries, though, so if each user does the same query the results can be shared.
You can tune poll-and-diff by changing the values of pollingIntervalMs and pollingThrottleMs.
Use the disableOplog: true option to opt out of oplog tailing on a per-query basis:
Meteor.publish("udpsPub", function (selector) {
return udps.find(selector, {
disableOplog: true,
pollingThrottleMs: 10000,
pollingIntervalMs: 10000
});
});
Additional links:
https://medium.baqend.com/real-time-databases-explained-why-meteor-rethinkdb-parse-and-firebase-dont-scale-822ff87d2f87
https://blog.meteor.com/tuning-meteor-mongo-livedata-for-scalability-13fe9deb8908
How to use pollingThrottle and pollingInterval?
It's a DDP (WebSocket) heartbeat configuration.
Meteor's real-time communication and live updates are performed using DDP (a JSON-based protocol which Meteor implemented on top of SockJS).
Both the client and the server can change data and react to each other's changes.
The DDP (WebSocket) protocol implements so-called PING/PONG messages (heartbeats) to keep the websockets alive. The server sends a PING message to the client through the websocket, which then replies with PONG.
By default heartbeatInterval is configured at a little more than 17 seconds (17500 milliseconds).
Check here: https://github.com/meteor/meteor/blob/d6f0fdfb35989462dcc66b607aa00579fba387f6/packages/ddp-client/common/livedata_connection.js#L54
You can configure heartbeat time in milliseconds on server by using:
Meteor.server.options.heartbeatInterval = 30000;
Meteor.server.options.heartbeatTimeout = 30000;
Other Link:
https://github.com/meteor/meteor/blob/0963bda60ea5495790f8970cd520314fd9fcee05/packages/ddp/DDP.md#heartbeats

Unclean shutdown with MongoDB results in corrupted mongod.lock file

I have a MongoDB database storing data in a particular directory, and I am using a Node.js process to write to it. Sometimes my Node.js process experiences a forced shutdown (notice the passive voice) and I get this error message, which pretty much means I have to go in and simply delete the .lock file:
2015-06-16T11:09:19.004-0700 W - [initandlisten] Detected unclean shutdown - /Users/amills001c/mongodb_sc_admin_dev_data/mongod.lock is not empty.
2015-06-16T11:09:19.013-0700 I STORAGE [initandlisten] **************
old lock file: /Users/amills001c/mongodb_sc_admin_dev_data/mongod.lock. probably means unclean shutdown,
but there are no journal files to recover.
this is likely human error or filesystem corruption.
What is the best way to prevent this from happening so I don't have to go in and delete the mongod.lock file every time I start up the database?
Something like this?
process.on('SIGINT', function () {
    // mongoose.connection.close() takes an error-first callback, not a message
    mongoose.connection.close(function (err) {
        if (err) console.error(err);
        console.log('Mongoose connection closed on SIGINT');
        process.exit(0); // without this, the handler would keep the process alive
    });
});
Client connections to MongoDB do not affect the mongod.lock file. The error message relates to your DB server, i.e. the mongod process, shutting down uncleanly. The solution is to always shut down your DB process cleanly.
Unclean shutdown should be an exceptional situation that happens outside your control. If it is happening repeatedly, then something is wrong with the way you are running and managing your mongod process. Check why the unclean shutdown happened and fix that cause.
That said, the client code you have presented in the question is good practice too. Closing the connection to the DB when quitting is desirable, so please do that also. However, it is not related to the mongod.lock error message you are seeing - that is purely on the DB server side.
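For completeness, a clean shutdown of a standalone mongod can be requested from the mongo shell (on Linux there is also mongod --shutdown):
// From the mongo shell, connected as a user allowed to shut the server down:
use admin
db.shutdownServer()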

Multithreaded JDBC

Architecturally, what is the best way to handle JDBC with multiple threads? I have many threads concurrently accessing the database. With a single connection and statement I get the following error message:
org.postgresql.util.PSQLException: This ResultSet is closed.
Should I use multiple connections, multiple statements, or is there a better method? My preliminary thought was to use one statement per thread, which would guarantee a single result set per statement.
You should use one connection per task. If you use connection pooling, you can't use prepared statements created by another connection: all objects created by a connection (ResultSet, PreparedStatement) are invalid for use after the connection is returned to the pool.
So it looks something like this:
public void getSomeData() throws SQLException {
    Connection conn = datasource.getConnection();
    PreparedStatement st = null; // initialize so the finally block always compiles
    try {
        st = conn.prepareStatement(...);
        st.execute();
    } finally {
        close(st);   // null-safe close helper
        close(conn); // returns the connection to the pool
    }
}
So in this case all your DAO objects take not a Connection but a DataSource object (javax.sql.DataSource), which is in fact a poolable connection factory. In each method you first get a connection, do all your work, and close the connection; you should return the connection to the pool as fast as possible. After the connection is returned, it may not be physically closed, but rather reinitialized (all active transactions closed, all session variables destroyed, etc.).
Yes, use multiple connections with a connection pool. Open the connection for just long enough to do what you need, then close it as soon as you're done. Let the connection pool take care of the "physical" connection management for efficiency.
