Mongo doesn't save data to disk - node.js

In our project we often have a problem where mongo doesn't save its state to disk, and after restarting the application we lose data. I could not determine when and why this happens - somehow and somewhen :). Does anybody know how to synchronize mongodb storage to disk with some API? We use the mongorito ODM. I'd be pleased to hear any suggestions.
Some details.
Mongo version 3.2.
Application - it is an Electron application. Under the hood it uses mongo as storage - we use mongo on the client side and install it as a Windows service beforehand. The application starts, makes different transactions, and reads/writes data from/to the mongo db - nothing strange. When we close this application and reopen it the next time, we cannot find the last rows (documents) in some collections that were successfully (according to mongo's answers) saved. We have no errors.
Can anyone explain what the write concern is and how to set it up so that it doesn't wait 60 seconds before flushing the data - maybe this is the reason?
Some code for the db connect/disconnect. app refers to the Electron application:
const {Database} = require('mongorito');

// Open the connection and register the model on startup
const db = new Database(__DBPATH__);
db.connect();
db.register(__MONGORITO_MODEL__);

// Disconnect when the last window is closed
app.on('window-all-closed', () => {
  db.disconnect();
});
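For reference, MongoDB does expose a way to force a flush to disk: the fsync administrative command. A minimal sketch with the plain mongodb driver (not Mongorito's API; the connection string is a placeholder):

const { MongoClient } = require('mongodb');

// Hedged sketch: ask MongoDB to flush all pending writes to disk via the
// fsync administrative command (it must be run against the admin database).
async function flushToDisk(uri) {
  const client = await MongoClient.connect(uri);
  try {
    await client.db('admin').command({ fsync: 1 });
  } finally {
    await client.close();
  }
}

// Hypothetical usage before quitting the app:
// flushToDisk('mongodb://localhost:27017').catch(console.error);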

I'd take a look at the write concern setting within your application and make sure it's set to the requirements of your business - https://docs.mongodb.com/manual/reference/write-concern/
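For illustration, write concern can be requested right in the connection string, so every write on that connection waits for the acknowledgment level you choose. A hedged sketch with the plain mongodb driver (host and database name are placeholders):

const { MongoClient } = require('mongodb');

// w=1 waits for the primary's acknowledgment; journal=true additionally
// waits until the write has been committed to the on-disk journal.
const uri = 'mongodb://localhost:27017/mydb?w=1&journal=true'; // placeholder host/db

MongoClient.connect(uri)
  .then(client => {
    // ...writes made through this client use the write concern above
    return client.close();
  })
  .catch(console.error);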
Also, make sure you're running a replica set in your production environment šŸ‘

Thanks to everybody, I've solved the problem. The reason was journaling. I turned on journaling for the MongoDB service and the problem is gone.
mongod.exe --journal

Related

How to ensure mongoose.dropDatabase() can ONLY be called when connected to mongo-memory-server

We're using mongodb-memory-server for unit tests. After every unit test suite we execute:
await connection.dropDatabase();
await collection.deleteMany({});
To set up mongoose we have two different methods:
setupMongoose(); <--- Connects to our dev database in the cloud (Atlas)
setupMongooseWithMemoryServer(); <---- Connects mongoose to memory server.
We're a team of developers, and my worst fear is that someone uses "setupMongoose()" to set up unit tests by mistake some day. If that happens, dropDatabase() will be called on our "real" dev database. That would be a catastrophe.
So how can I ensure that dropDatabase() and maybe collection.deleteMany({}) can NEVER ever be called on our cloud database?
Some thoughts:
I have thought about setting up env variables and checking for them before calling the dangerous methods. I've also already made a runtime check:
checkForUnitTestEnv() {
  // Only allow destructive calls when running under Jest with the
  // in-memory server initialized.
  if (!this.init || process.env.JEST_WORKER_ID === undefined) {
    console.error('FATAL: TRIED TO DROP DATABASE WITHOUT JEST!');
    throw new Error('FATAL: TRIED TO DROP DATABASE WITHOUT JEST!');
  }
}
(this.init is only true if memory-server has been initialized).
But these methods are not foolproof. Errors can still happen if our developers are not careful. So I was hoping to either make these "illegal operations" with our database provider (Atlas) if possible, or check the mongoose connection URI at run time before calling the dangerous methods (but I haven't found a good way to do this yet).
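One possible shape for that runtime URI check, purely as a sketch (the host allowlist is an assumption - adjust it to whatever mongodb-memory-server reports in your setup):

const mongoose = require('mongoose');

// Hypothetical guard: only allow destructive operations when mongoose is
// connected to a local host (mongodb-memory-server binds to 127.0.0.1).
function assertLocalConnection() {
  const host = mongoose.connection.host; // e.g. '127.0.0.1'
  const allowed = ['127.0.0.1', 'localhost'];
  if (!allowed.includes(host)) {
    throw new Error(`Refusing to drop database on non-local host: ${host}`);
  }
}

// Usage in a test teardown:
// assertLocalConnection();
// await mongoose.connection.dropDatabase();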

Does SoftDelete work in TypeORM with dropSchema: true?

I'm implementing a test database which uses typeorm with dropSchema activated.
And I noticed that my delete routes, which call a soft-delete method, just remove the rows (until the database drops). But it works fine on a normal database (dropSchema off).
From the TypeORM Connection Options:
dropSchema - Drops the schema each time connection is being
established. Be careful with this option and don't use this in
production - otherwise you'll lose all production data. This option is
useful during debug and development.
So when you run with dropSchema: true, all the data gets deleted, including the soft-deleted rows.
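In other words, the schema (and every row, soft-deleted or not) is wiped each time the connection is established. A minimal sketch of what such a test configuration might look like (driver and connection details are placeholders, using the TypeORM 0.3 DataSource API):

const { DataSource } = require('typeorm');

// Hypothetical test data source: the schema, and all data including
// soft-deleted rows, is dropped every time this connection is established.
const testDataSource = new DataSource({
  type: 'sqlite',            // placeholder driver for tests
  database: ':memory:',
  entities: [/* your entities */],
  synchronize: true,         // recreate the schema after it is dropped
  dropSchema: true,          // <- wipes everything, soft-deleted rows too
});

// await testDataSource.initialize();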

Meteor MongoDB subscription delivering data in 10 second intervals instead of live

I believe this is more of a MongoDB question than a Meteor question, so don't get scared if you know a lot about mongo but nothing about meteor.
Running Meteor in development mode, but connecting it to an external Mongo instance instead of using Meteor's bundled one, results in the same problem. This leads me to believe this is a Mongo problem, not a Meteor problem.
The actual problem
I have a meteor project which continuously gets data added to the database and displays it live in the application. It works perfectly in development mode, but has strange behaviour when built and deployed to production. It works as follows:
A tiny script running separately collects broadcast UDP packages and shoves them into a mongo collection
The Meteor application then publishes a subset of this collection so the client can use it
The client subscribes and live-updates its view
The problem here is that the subscription appears to only get data about every 10 seconds, while these UDP packages arrive and get shoved into the database several times per second. This makes the application behave oddly.
It is most noticeable on the collection of UDP messages, but not limited to it. It happens with every collection which is subscribed to, even those not populated by the external script.
Querying the database directly, either through the mongo shell or through the application, shows that the documents are indeed added and updated as they are supposed to. The publication just fails to notice and appears to default to querying on a 10-second interval.
Meteor uses oplog tailing on the MongoDB to find out when documents are added/updated/removed, and updates the publications based on this.
Anyone with a bit more Mongo experience than me who might have a clue about what the problem is?
For reference, this is the dead simple publication function
/**
 * Publishes a custom part of the collection. See {@link https://docs.meteor.com/api/collections.html#Mongo-Collection-find} for args
 *
 * @returns {Mongo.Cursor} A cursor to the collection
 *
 * @private
 */
function custom(selector = {}, options = {}) {
  return udps.find(selector, options);
}
and the code subscribing to it:
Tracker.autorun(() => {
  // Params for the subscription
  const selector = {
    "receivedOn.port": port
  };
  const options = {
    limit,
    sort: {"receivedOn.date": -1},
    fields: {
      "receivedOn.port": 1,
      "receivedOn.date": 1
    }
  };
  // Make the subscription
  const subscription = Meteor.subscribe("udps", selector, options);
  // Get the messages
  const messages = udps.find(selector, options).fetch();
  doStuffWith(messages); // Not actual code. Just for demonstration
});
Versions:
Development:
node 8.9.3
mongo 3.2.15
Production:
node 8.6.0
mongo 3.4.10
Meteor uses two modes of operation to provide real-time behaviour on top of MongoDB, which doesn't have any built-in real-time features: poll-and-diff and oplog-tailing.
1 - Oplog-tailing
It works by reading the mongo databaseā€™s replication log that it uses to synchronize secondary databases (the ā€˜oplogā€™). This allows Meteor to deliver realtime updates across multiple hosts and scale horizontally.
It's more complicated, and provides real-time updates across multiple servers.
2 - Poll and diff
The poll-and-diff driver works by repeatedly running your query (polling) and computing the difference between new and old results (diffing). The server will re-run the query every time another client on the same server does a write that could affect the results. It will also re-run periodically to pick up changes from other servers or external processes modifying the database. Thus poll-and-diff can deliver realtime results for clients connected to the same server, but it introduces noticeable lag for external writes.
(The default is 10 seconds, and this is what you are experiencing.)
This may or may not be detrimental to the application UX, depending on the application (eg, bad for chat, fine for todos).
This approach is simple and delivers easy-to-understand scaling characteristics. However, it does not scale well with lots of users and lots of data. Because each change causes all results to be refetched, CPU time and network bandwidth scale O(NĀ²) with users. Meteor automatically de-duplicates identical queries, though, so if each user does the same query the results can be shared.
You can tune poll-and-diff by changing values of pollingIntervalMs and pollingThrottleMs.
You have to use the disableOplog: true option to opt out of oplog tailing on a per-query basis.
Meteor.publish("udpsPub", function (selector) {
return udps.find(selector, {
disableOplog: true,
pollingThrottleMs: 10000,
pollingIntervalMs: 10000
});
});
Additional links:
https://medium.baqend.com/real-time-databases-explained-why-meteor-rethinkdb-parse-and-firebase-dont-scale-822ff87d2f87
https://blog.meteor.com/tuning-meteor-mongo-livedata-for-scalability-13fe9deb8908
How to use pollingThrottle and pollingInterval?
It's a DDP (WebSocket) heartbeat configuration.
Meteor real-time communication and live updates are performed using DDP (a JSON-based protocol which Meteor implemented on top of SockJS).
Both client and server can change data and react to those changes.
The DDP (WebSocket) protocol implements so-called PING/PONG messages (heartbeats) to keep WebSockets alive. The server sends a PING message to the client through the WebSocket, which then replies with a PONG.
By default, heartbeatInterval is configured at a little more than 17 seconds (17500 milliseconds).
Check here: https://github.com/meteor/meteor/blob/d6f0fdfb35989462dcc66b607aa00579fba387f6/packages/ddp-client/common/livedata_connection.js#L54
You can configure heartbeat time in milliseconds on server by using:
Meteor.server.options.heartbeatInterval = 30000;
Meteor.server.options.heartbeatTimeout = 30000;
Other Link:
https://github.com/meteor/meteor/blob/0963bda60ea5495790f8970cd520314fd9fcee05/packages/ddp/DDP.md#heartbeats

Using memcached failover servers in nodejs app

I'm trying to set up a robust memcached configuration for a nodejs app with the node-memcached driver, but it does not seem to use the specified failover servers when one server dies.
My local experiment goes as follows:
shell
memcached -p 11212
node
MC = require('memcached')
c = new MC('localhost:11211',  // this process does not exist
           {failOverServers: ['localhost:11212']})
c.get('foo', console.log)  // this will eventually time out
c.get('foo', console.log)  // repeat 5 or 6 times to exceed the retries number
// wait until all the connection errors appear in the console
// at this point, the failover server should be in use
c.get('foo', console.log)  // this still times out :(
Any ideas of what we might be doing wrong?
It seems that the failover feature is somewhat buggy in node-memcached.
To enable failover you must set the remove option:
c = new MC('localhost:11211',  // this process does not exist
           {failOverServers: ['localhost:11212'],
            remove: true})
Unfortunately, this is not going to work because of the following error:
[depricated] HashRing#replaceServer is removed.
[depricated] the API has no replacement
That is, when trying to replace a dead server with a replacement from the failover list, node-memcached outputs a deprecation error from the HashRing library (which, in turn, is maintained by the same author of node-memcached). IMHO, feel free to open a bug :-)
This can happen when your Node.js server is not getting any session id from memcached.
Please check that memcached is configured properly in your php.ini file:
session.save_handler = memcache
session.save_path = 'tcp://localhost:11212'

How to unit test a method which connects to mongo, without actually connecting to mongo?

I'm trying to write a test for a method that connects to mongo, but I don't actually want to have mongo running and actually make a connection to it for my tests to pass successfully.
Here's my current test which is successful when my mongo daemon is running.
describe('with a valid mongo string parameter', function() {
  it('should return a fulfilled promise', function(done) {
    var con = mongoFactory.getConnection('mongodb://localhost:27017');
    expect(con).to.be.fulfilled;
    done();
  });
});
mongoFactory.getConnection code:
getConnection: function getConnection(connectionString) {
  // do stuff here
  // Initialize connection once
  MongoClient.connect(connectionString, function(err, database) {
    if (err) {
      def.reject(err);
      return;
    }
    def.resolve(database);
  });
  return def.promise;
}
There are a couple of SO answers related to unit testing code that uses MongoDB as a data store:
Mocking database in node.js?
Mock/Test Mongodb Database Node.js
Embedded MongoDB when running integration tests
Similar: Unit testing classes that have online functionality
I'll make an attempt at consolidating these solutions.
Preamble
First and foremost, you should want MongoDB to be running while performing your tests. MongoDB's query language is complex, so running legitimate queries against a stable MongoDB instance is required to ensure your queries are running as planned and that your application is responding properly to the results. With this in mind, however, you should never run your tests against a production system, but instead a peripheral system to your integration environment. This can be on the same machine as your CI software, or simply relatively close to it (in terms of process, not necessarily network or geographically speaking).
This ENV could be low-footprint and completely run in memory (resource 1) (resource 2), but would not necessarily require the same performance characteristics as your production ENV. (If you want to performance test, this should be handled in a separate environment from your CI anyway.)
Setup
Install a mongod service specifically for CI. If repl sets and/or sharding are of concern (e.g. write concern, no use of $isolated, etc.), it is possible to mimic a clustered environment by running multiple mongod instances (1 config, 2x2 data for shard+repl) and a mongos instance on the same machine with either some init.d scripts/tweaks or something like docker.
Use environment-specific configurations within your application (either embedded via .json files, or in some place like /etc, /home/user/.your-app or similar). Your application can load these based on a node environment variable like NODE_ENV=int. Within these configurations your db connection strings will differ. If you're not using env-specific configs, start doing this as a means to abstract the application runtime settings (i.e. "local", "dev", "int", "pre", "prod", etc.). I can provide a sample upon request.
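As an illustration only (the file layout and names here are assumptions), such env-specific loading might look like:

// config/index.js - hypothetical env-specific configuration loader.
// Files like config/local.json, config/int.json and config/prod.json
// each hold their own db connection string and other runtime settings.
const env = process.env.NODE_ENV || 'local';
const config = require(`./${env}.json`);

module.exports = config;

The application would then be started with something like NODE_ENV=int node app.js so the integration connection string gets picked up.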
Include test-oriented fixtures with your application/testing suite. As mentioned in one of the linked questions, there are helper libraries that work with MongoDB's Node.js driver: mongodb-fixtures and node-database-cleaner. Fixtures provide a working and consistent data set for testing: think of them as a bootstrap.
Builds/Tests
Clean the associated database using something like node-database-cleaner.
Populate your fixtures into the now-empty database with the help of mongodb-fixtures (a bare-driver sketch of these two steps follows after this list).
Perform your build and test.
Repeat.
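If you'd rather not depend on those helper libraries, the same clean-then-populate cycle can be sketched with the bare driver (collection names and fixture contents below are made up for illustration):

const { MongoClient } = require('mongodb');

// Hypothetical fixture data - replace with your own documents.
const fixtures = {
  users: [{ name: 'alice' }, { name: 'bob' }]
};

async function resetTestDb(uri) {
  const client = await MongoClient.connect(uri);
  const db = client.db(); // db name taken from the connection string

  // 1. Clean: empty every collection left over from previous runs.
  const collections = await db.collections();
  for (const col of collections) {
    await col.deleteMany({});
  }

  // 2. Populate: insert the fixtures.
  for (const [name, docs] of Object.entries(fixtures)) {
    await db.collection(name).insertMany(docs);
  }

  await client.close();
}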
On the other hand...
If you still decide that not running MongoDB is the correct approach (and you wouldn't be the only one), then abstracting your data store calls from the driver with an ORM is your best bet (for the entire application, not just testing). For example, something like model claims to be database agnostic, although I've never used it. Utilizing this approach, you would still require fixtures and env configurations, however you would not be required to install MongoDB. The caveat here is that you're at the mercy of the ORM you choose.
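A rough sketch of that kind of abstraction (module and function names here are invented): the rest of the application talks to a small repository module, and tests can hand it a fake db or stub the repository outright.

// userRepository.js - hypothetical data-access layer wrapping the driver.
module.exports = function createUserRepository(db) {
  return {
    findByName: (name) => db.collection('users').findOne({ name }),
    insert: (user) => db.collection('users').insertOne(user)
  };
};

// In tests, pass in a fake "db" object (or replace the repository with an
// in-memory stub) so no real MongoDB connection is needed.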
You could try tingodb.
TingoDB is an embedded JavaScript in-process filesystem or in-memory database upwards compatible with MongoDB at the API level.
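A minimal sketch of the tingodb API as described in its README (the on-disk path is a placeholder):

// tingodb exposes a MongoDB-compatible API without a server process.
const Db = require('tingodb')().Db;

const db = new Db('./test-data', {});  // placeholder data directory
const users = db.collection('users');

users.insert({ name: 'alice' }, function (err, result) {
  users.findOne({ name: 'alice' }, function (err, doc) {
    console.log(doc);  // the document read back from the embedded store
  });
});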

Resources