"MongoError: No such cmd: createIndexes" using OpenShift - node.js

I'm creating a node.js app that sends reminders using agenda.js. It works perfectly when I test it locally, but when I test it on OpenShift, I get the following error message:
MongoError: No such cmd: createIndexes
I only get this error when the information for a new reminder is sent to the server, i.e. only when agenda.js is used.
I've looked up createIndexes, and it seems that it was implemented in version 2.6 of MongoDB, and OpenShift currently only appears to support version 2.4.
My question is, is there a way around this? Perhaps a way to manually upgrade to the latest version of MongoDB, or not to use a cartridge at all (not sure what that actually is)?

Before 2.6, there wasn't an internal command called createIndexes. It was necessary to insert a document into the system.indexes collection directly.
In the mongo shell, there were two helpers for that, with different names:
db.collection.createIndex, which still exists today;
db.collection.ensureIndex, which was later deprecated in favor of createIndex.
I couldn't tell what exactly is issuing the createIndexes command. Is it your driver? Index creation is supposed to happen just once, not on every insert.
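For completeness, the pre-2.6 mechanism described above can be sketched as follows. buildIndexSpec is a hypothetical helper (not part of any driver) showing the shape of the document MongoDB 2.4 expects in system.indexes; the database and collection names are made up:

```javascript
// MongoDB 2.4 has no createIndexes command; an index is created by
// inserting a spec document into <db>.system.indexes. The ns/key/name
// fields below follow the server's naming convention (field_direction).
function buildIndexSpec(ns, keys) {
  var name = Object.keys(keys)
    .map(function (k) { return k + '_' + keys[k]; })
    .join('_');
  return { ns: ns, key: keys, name: name };
}

// e.g. buildIndexSpec('mydb.reminders', { nextRunAt: 1 })
//   → { ns: 'mydb.reminders', key: { nextRunAt: 1 }, name: 'nextRunAt_1' }
```

With the native driver, that document would then be inserted into the 'system.indexes' collection (sketch only, untested against a real 2.4 server).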

I had the same issue and, as a workaround, used Agenda 0.6.28 instead (it had worked in a previous project of mine under that version).
Note that Agenda does not emit a 'ready' event in this older version, so you should call the 'start' function directly:
agenda.define('delete old session logs', jobs.cleanSessionLogs);
agenda.on('fail', function(err, job) {
  errorLog.error({ err: err, job: job });
});
agenda.every('1 hour', 'delete old session logs');
agenda.start();
instead of:
agenda.define('delete old session logs', jobs.cleanSessionLogs);
agenda.on('fail', function(err, job) {
  errorLog.error({ err: err, job: job });
});
agenda.on('ready', function() {
  agenda.every('1 hour', 'delete old session logs');
  agenda.start();
});

Related

How to ensure mongoose.dropDatabase() can ONLY be called when connected to mongo-memory-server

We're using mongodb-memory-server for unit tests. After every unit test suite we execute:
await connection.dropDatabase();
await collection.deleteMany({});
To setup mongoose we have two different methods:
setupMongoose(); <--- Connects to our dev database in the cloud (Atlas)
setupMongooseWithMemoryServer(); <---- Connects mongoose to memory server.
We're a team of developers, and my worst fear is that someone uses "setupMongoose()" to set up unit tests by mistake some day. If that happens, dropDatabase() will be called on our "real" dev database. That would be a catastrophe.
So how can I ensure that dropDatabase() and maybe collection.deleteMany({}) can NEVER ever be called on our cloud database?
Some thoughts:
I have thought about setting up env variables and check for it before calling the dangerous methods. I've also already made a run time check:
checkForUnitTestEnv() {
  if (!this.init || process.env.JEST_WORKER_ID === undefined) {
    console.error('FATAL: TRIED TO DROP DATABASE WITHOUT JEST!');
    throw new Error('FATAL: TRIED TO DROP DATABASE WITHOUT JEST!');
  }
}
(this.init is only true if memory-server has been initialized).
But these methods are not foolproof. Errors can still happen if our developers are not careful. So I was hoping either to make these "illegal operations" with our database provider (Atlas), if possible, or to check the mongoose connection URI at run time before calling the dangerous methods (but I haven't found a good way to do this yet).
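One way to do the run-time URI check mentioned above is a guard that inspects the hostname of the connection string before any destructive call. This is only a sketch: the function name and allow-list are assumptions, and you would feed it the URI you passed to mongoose.connect:

```javascript
// Hypothetical guard (name and allow-list are assumptions): throw unless
// the connection string points at a local server, e.g. one started by
// mongodb-memory-server. Call it before dropDatabase()/deleteMany().
function assertLocalTestDb(uri) {
  var hostname = new URL(uri).hostname; // WHATWG URL parses mongodb:// URIs
  var allowed = ['127.0.0.1', 'localhost'];
  if (allowed.indexOf(hostname) === -1) {
    throw new Error('Refusing destructive operation: ' + hostname +
      ' is not a local test server');
  }
}

// assertLocalTestDb('mongodb://127.0.0.1:51234/jest');            // ok
// assertLocalTestDb('mongodb+srv://cluster0.x.mongodb.net/dev');  // throws
```

A side benefit: exotic URIs (e.g. multi-host replica-set strings) fail URL parsing outright, which for a guard like this fails safe.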

Node.js app crashes when creating indexes for mongodb

When I create indexes via the createIndex or ensureIndex methods, my node.js app crashes without providing any details of the error that occurred.
I have also noticed that all my code works very well with my localhost MongoDB, but not once I use a remote MongoDB Atlas replica set.
Node.js: 8.5.0
Mongodb: 3.6.4
Mongodb driver for Node.js: 3.0.8
Example code (I have tried different variants of the implementation; nothing helps):
db.collection('stars').ensureIndex({name: 1}, { unique: true });
I am already listening for uncaughtException and unhandledRejection events, but they don't fire on this crash.
What can I do to get details of this error, or to get rid of it?
Thanks for any help!
I updated my mongodb driver to version 3.1.0 and eventually got an error message describing the cause of the problem: "The field 'retryWrites' is not valid for an index specification". So I think the problem was this field, which I had added to my connection URI string. After removing it, the problem went away.
The problem might be connected with this issue: NODE-1641. I had a similar problem and found a few solutions:
Locking the driver at version 3.1.0 and adding useNewUrlParser: true to the connection options.
Adding the field retryWrites: null to the createIndex call, so in your case it will look like:
db.collection('stars').ensureIndex({name: 1}, { unique: true, retryWrites: null });
However, you need to check whether it turns off retryable writes for that collection.
The last option is to vote up that issue, hope that it gets resolved soon, and then update your driver.
Not sure if this will help, but I'm currently trying to work around this.
If you remove {unique: true} the index creation should succeed, but it will allow duplicate documents if you add the same document again.
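Since the root cause above was a retryWrites option leaking from the connection string into index commands, another defensive workaround is to strip that option from the URI before connecting. stripRetryWrites is a hypothetical helper and the URIs are made up for illustration:

```javascript
// Remove the retryWrites parameter from a MongoDB connection string so
// older drivers never forward it into index specifications.
function stripRetryWrites(uri) {
  var parts = uri.split('?');
  if (parts.length < 2) return uri; // no query string, nothing to strip
  var kept = parts[1].split('&').filter(function (p) {
    return !/^retryWrites=/i.test(p);
  });
  return kept.length ? parts[0] + '?' + kept.join('&') : parts[0];
}

// stripRetryWrites('mongodb://host/db?retryWrites=true&w=majority')
//   → 'mongodb://host/db?w=majority'
```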

MongoDB GET request returning nothing

I am running a MongoDB database using NodeJS + Forever on an Amazon EC2 Instance. (MongoDB and NodeJS code can be found here https://github.com/WyattMufson/MongoDB-AWS-EC2). I installed Mongo on the EC2 instance following this tutorial: https://docs.mongodb.com/manual/tutorial/install-mongodb-on-ubuntu/.
When I run:
curl -H "Content-Type: application/json" -X POST -d '{"test":"field"}' http://localhost:3000/data
The POST returns:
{
"test": "field",
"created_at": "2017-11-20T04:52:12.292Z",
"_id": "5a125f7cead7a00d5a2593ec"
}
But this GET:
curl -X GET http://localhost:3000/data
returns:
[]
My dbpath is set to /data/db/ and I have no permissions issues when running it.
Why is the POST request working, but not the GET?
There are a number of issues and potential issues with that Node app that you're using. I suppose that you could start down the path of fixing/updating those issues or find another sample MEAN application, depending on what you're trying to ultimately accomplish.
One of the glaring issues is that collectionDriver.js is not properly passing errors back to the callback, it's passing null back rather than the error. This appears in 6 different places, but in particular (based on your sample POST) here on lines 46 and 47:
the_collection.insert(obj, function() { //C
  callback(null, obj);
Should be (with some extra console logging for good measure):
the_collection.insert(obj, function(err, doc) { //C
  if (err) console.error("insert error: %s", err.message);
  callback(err, doc);
If you make those changes you will almost certainly see that your POST is actually returning a MongoError. And then you can move on to finding and fixing the next set of problems.
One of the errors/issues that you might find is that the project is using a really old version of the MongoDB Node.js driver, and you might find this error uncovered when you fix the error handling:
driver is incompatible with this server version
Fixing that will take some additional work, since there are API changes in the more recent 2.x driver that would be required to support more current versions of MongoDB (e.g. 3.2 or 3.4). See the Node.js Driver Compatibility page.
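The error-first callback convention that the fix above relies on can be shown in isolation. insertStub is a stand-in for the_collection.insert, invented here purely to illustrate the pattern:

```javascript
// Stand-in for a driver insert: the callback always receives the error
// (or null) first, and the result second. Here it "fails" on documents
// without a name field.
function insertStub(doc, callback) {
  if (!doc.name) {
    callback(new Error('name is required'), null);
  } else {
    callback(null, Object.assign({ _id: 'abc123' }, doc));
  }
}

// The caller can now actually see failures instead of a silent null:
insertStub({}, function (err, saved) {
  if (err) return console.error('insert error: %s', err.message);
  console.log('inserted', saved._id);
});
```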

Starting a scheduling service in sails.js with forever from within sails with access to all waterline models

I have a standalone scheduling service set to execute some logic every hour. I want to start this service with forever right after sails starts, and I am not sure what's the best way to do that.
// services/Scheduler.js
sails.load(function() {
setInterval( logicFn , config.schedulingInterval);
});
Sails can execute bootstrap logic in the config.bootstrap module, and I'll be using the forever-monitor node module:
var forever = require('forever-monitor'),
    scheduler = new (forever.Monitor)(schedulerPath, {
      max: 20,
      silent: true,
      args: []
    });
module.exports.bootstrap = function(cb) {
  scheduler.start();
  cb();
};
What if the service fails and restarts for whatever reason: would it have access to all Waterline models again? How can I ensure it works as intended every time?
As brittonjb said in the comments, a simple solution is to use the cron module for scheduling.
You can specify a function for it to call at whatever interval you wish. This function could be defined within /config/bootstrap.js, or it could be defined somewhere else (e.g. mail.dailyReminders() if you have a mail service with a dailyReminders method).
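If you'd rather not add the cron dependency at all, an hourly schedule can be approximated with plain timers inside config/bootstrap.js. Everything below is a sketch: msUntilNextHour is an invented helper, and mail.dailyReminders() is the hypothetical service method from the example above:

```javascript
// Pure helper: milliseconds from `now` (a Date.now() timestamp) until
// the next top of the hour, so the job fires on the hour rather than
// "one hour after whenever the app booted".
function msUntilNextHour(now) {
  var HOUR = 60 * 60 * 1000;
  return HOUR - (now % HOUR);
}

// In /config/bootstrap.js (sketch):
// module.exports.bootstrap = function (cb) {
//   setTimeout(function tick() {
//     mail.dailyReminders();              // hypothetical service method
//     setTimeout(tick, 60 * 60 * 1000);   // re-arm for the next hour
//   }, msUntilNextHour(Date.now()));
//   cb();                                 // don't block sails lifting
// };
```

Because the job runs in the bootstrap of the sails process itself, it has access to all Waterline models without any of the forever-monitor restart concerns from the question.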
Please please please, always share your sails.js version number! This is really important for people googling questions/answers!
There are many ways to go about doing this. However, for those that want the "sails.js" way, there are hooks for newer sails.js versions.
See this issue thread in github, specifically, after the issue gets closed some very helpful solutions get provided by some users. The latest is shared by "scott-wyatt", commented on Dec 28, 2014:
https://github.com/balderdashy/sails/issues/2092

Using memcached failover servers in nodejs app

I'm trying to set up a robust memcached configuration for a nodejs app with the node-memcached driver, but it does not seem to use the specified failover servers when one server dies.
My local experiment goes as follows:
shell
memcached -p 11212
node
MC = require('memcached')
c = new MC('localhost:11211', //this process does not exist
{failOverServers: ['localhost:11212']})
c.get('foo', console.log) //this will eventually time out
c.get('foo', console.log) //repeat 5 or 6 times to exceed the retries number
//wait until all the connection errors appear in the console
//at this point, the failover server should be in use
c.get('foo', console.log) //this still times out :(
Any ideas of what we might be doing wrong?
It seems that the failover feature is somewhat buggy in node-memcached.
To enable failover you must set the remove option:
c = new MC('localhost:11211', //this process does not exist
    {failOverServers: ['localhost:11212'],
     remove: true})
Unfortunately, this is not going to work because of the following error:
[depricated] HashRing#replaceServer is removed.
[depricated] the API has no replacement
That is, when trying to replace a dead server with one from the failover list, node-memcached outputs a deprecation error from the HashRing library (which, in turn, is maintained by the author of node-memcached). IMHO, feel free to open a bug :-)
This happens when your nodejs server is not getting any session id from memcached.
Check that memcache is configured properly in your php.ini file:
session.save_handler = 'memcache'
session.save_path = 'tcp://localhost:11212'