MongoDB GET request returning nothing - node.js

I am running a MongoDB database using NodeJS + Forever on an Amazon EC2 Instance. (MongoDB and NodeJS code can be found here https://github.com/WyattMufson/MongoDB-AWS-EC2). I installed Mongo on the EC2 instance following this tutorial: https://docs.mongodb.com/manual/tutorial/install-mongodb-on-ubuntu/.
When I run:
curl -H "Content-Type: application/json" -X POST -d '{"test":"field"}' http://localhost:3000/data
The POST returns:
{
"test": "field",
"created_at": "2017-11-20T04:52:12.292Z",
"_id": "5a125f7cead7a00d5a2593ec"
}
But this GET:
curl -X GET http://localhost:3000/data
returns:
[]
My dbpath is set to /data/db/ and I have no permissions issues when running it.
Why is the POST request working, but not the GET?

There are a number of issues and potential issues with that Node app you're using. You could start down the path of fixing those issues, or find another sample MEAN application, depending on what you're ultimately trying to accomplish.
One of the glaring issues is that collectionDriver.js is not properly passing errors back to the callback, it's passing null back rather than the error. This appears in 6 different places, but in particular (based on your sample POST) here on lines 46 and 47:
the_collection.insert(obj, function() { //C
    callback(null, obj);
Should be (with an error guard and some extra console logging for good measure):
the_collection.insert(obj, function(err, doc) { //C
    if (err) console.error("insert error: %s", err.message);
    callback(err, doc);
If you make those changes you will almost certainly see that your POST is actually returning a MongoError. And then you can move on to finding and fixing the next set of problems.
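The same error-first pattern, sketched in isolation (saveData and the stub collection shape below are illustrative names, not code from that repo):

```javascript
// Error-first callbacks: the first argument is reserved for the error,
// the second for the result. saveData is an illustrative name.
function saveData(collection, obj, callback) {
  collection.insert(obj, function (err, doc) {
    if (err) {
      console.error('insert error: %s', err.message);
      return callback(err); // propagate the error instead of swallowing it
    }
    callback(null, doc); // success: the error slot is null
  });
}
```

A GET handler wired the same way will surface the underlying MongoError instead of silently returning [].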
One of the errors/issues that you might find is that the project is using a really old version of the MongoDB Node.js driver, and you might find this error uncovered once you fix the error handling:
driver is incompatible with this server version
Fixing that will take some additional work, since there are API changes in the more recent 2.x driver that are required to support more current versions of MongoDB (e.g. 3.2 or 3.4). See Node.js Driver Compatibility.

Related

Not able to connect to mongodb server also not getting any error

The problem is exactly what the title says: I am not able to connect to the MongoDB server, and I don't know where the problem is, whether it's with starting the server or something else. According to the code I have written, I should get the output 'Connected Successfully', but I am not getting any output.
const mongodb = require('mongodb')
const MongoClient = mongodb.MongoClient
const connectionURL = 'mongodb://127.0.0.1:27017'
const databaseName = 'task-manager'

MongoClient.connect(connectionURL, { useNewUrlParser: true }, (error, client) => {
    if (error) {
        return console.log('Unable to connect to database')
    }
    console.log('Connected Successfully')
})
I have attached the related screenshots.
Please help and thank you in advance
I tried searching the documentation but could not find anything. I am expecting the console.log statement in my Node.js code to produce output.
I can see from the logs that the server is listening at 127.0.0.1:27017. However, are you sure that it is reachable?
Some other software might be blocking the access, as described in a similar question here.
You can confirm by running curl http://127.0.0.1:27017 from a console terminal if you are on Linux or Mac, or from a PowerShell window on Windows.
If you get a message like below, your server is fine...
[user@server ~]$ curl http://127.0.0.1:27017
It looks like you are trying to access MongoDB over HTTP on the native driver port.
[user@server ~]$
The second step would be to ensure that you have the right/updated driver. I see from your screenshot that your server runs MongoDB 6.0.2, which is pretty recent.
You need at least driver v4.8 as per the MongoDB/node driver compatibility list.
Then, I would suggest first trying a piece of code that is confirmed to work with the supported driver version.
e.g. you can try these examples, directly from the MongoDB code samples: https://mongodb.github.io/node-mongodb-native/api-generated/mongoclient.html

CouchDB gives no_majority when trying to update _security

After a CouchDB upgrade, it was no longer possible to create new databases or update the _security of old databases.
I've just run into this with CouchDB 2.1.1. My problem was that the security object I was attempting to pass in was malformed.
From https://issues.apache.org/jira/browse/COUCHDB-2326,
Attempts to write security objects where the "admins" or "members" values are malformed will result in an HTTP 500 response with the following body:
{"error":"error","reason":"no_majority"}
This should really be an HTTP 400 response with a "bad_request" error value and a different error reason.
To reproduce:
$ curl -X PUT http://localhost:15984/test
{"ok":true}
$ curl -X PUT http://localhost:15984/test/_security -d '{"admins":[]}'
{"error":"error","reason":"no_majority"}
Yet another reason could be that old nodes were lingering in _membership configuration.
I.e., _membership showed:
{
"all_nodes": [
"couchdb#localhost"
],
"cluster_nodes": [
"couchdb#127.0.0.1",
"couchdb#localhost"
]
}
when it should show
{
"all_nodes": [
"couchdb#localhost"
],
"cluster_nodes": [
"couchdb#localhost"
]
}
Doing a deletion of the bad cluster node as described in docs helped.
Note that _nodes might not be available on port 5984, but only on 5986.
For a PUT to /db/_security, if the user is not a db or server admin, the response is HTTP status 500 with {"error":"error","reason":"no_majority"}, but the server logs are more informative, including: {forbidden,<<"You are not a db or server admin.">>}
One reason could be that the couchdb process reached its maximum number of open files, which led to read errors and (wrongly) no_majority errors.
Another reason could be that the server switched from single node configuration to multiple node configuration (for example during an upgrade).
Changing the number of nodes back to 1 helped.

NodeJS/Express: ECONNRESET when doing multiples requests using Sequelize/Epilogue

I'm building a webapp using the following the architecture:
a postgresql database (called DB),
a NodeJS service (called DBService) using Sequelize to manipulate the DB and Epilogue to expose a REST interface via Express,
a NodeJS service called Backend serving as a backend and using DBService through REST calls
an AngularJS website called Frontend using Backend
Here are the version I'm using:
PostgreSQL 9.3
Sequelize 2.0.4
Epilogue 0.5.2
Express 4.13.3
My DB schema is quite complex, containing 36 tables, some of which contain a few hundred records. The DB is not meant to be written to very often; it is mostly read from.
But recently I created a script in Backend to perform a complete checkup of the data contained in the DB: basically, this script retrieves all data from all tables and performs some basic checks. Currently the script only reads from the database.
In order to achieve my script I had to remove the pagination limit of Epilogue by using the option pagination: false (see https://github.com/dchester/epilogue#pagination).
But now when I launch my script I randomly obtained that kind of error:
The request failed when trying to retrieve a uniquely associated objects with URL:http://localhost:3000/CallTypes/178/RendererThemes.
Code : -1
Message : Error: connect ECONNRESET 127.0.0.1:3000
The error appears randomly during the script execution: it's not always this URL that is returned, and not even always the same tables or relations. The error message before the code is a custom message returned by Backend.
The URL is a reference to the DBService but I don't see any error in it, even using logging: console.log in Sequelize and DEBUG=express:* to see what happens in Express.
I tried to put some setTimeout calls in my Backend script to slow it down, without any real change. I also tried to tweak different values such as the PostgreSQL max_connections limit (I set it to 1000 connections), and Sequelize's maxConcurrentQueries and pool values, but without success so far.
I did not find where I can customize the pool connection of Express, maybe it should do the trick.
I assume that the error comes from DBService, from the Express configuration or somewhere in the configuration of the DB (either in Sequelize/Epilogue or even in the postgreSQL server itself), but as I did not see any error in any log I'm not sure.
Any idea to help me solve it?
EDIT
After further investigation I may have found the answer, which is very similar to How to avoid a NodeJS ECONNRESET error?: I'm using my own RestClient object to make my HTTP requests, and this object was built as a singleton with this method:
var NodeRestClient: any = require('node-rest-client').Client;
...
static getClient() {
    if (RestClient.client == null) {
        RestClient.client = new NodeRestClient();
    }
    return RestClient.client;
}
I was always using the same object for all my requests, and when the process went too fast it created collisions... So I just removed the if(RestClient.client == null) test, and for now it seems to work.
If there is a better way to manage that, by closing request or managing a pool feel free to contribute :)

"MongoError: No such cmd: createIndexes" using OpenShift

I'm creating a node.js app that sends reminders using agenda.js. It works perfectly when I test it locally, but when I test it on OpenShift, I get the following error message:
MongoError: No such cmd: createIndexes
I only get this error when the information for a new reminder is sent to the server, i.e. only when agenda.js is used.
I've looked up createIndexes, and it seems that it was implemented in version 2.6 of MongoDB, and OpenShift currently only appears to support version 2.4.
My question is, is there a way around this? Perhaps a way to manually upgrade to the latest version of MongoDB, or not to use a cartridge at all (not sure what that actually is)?
Before 2.6, there wasn't an internal command called createIndexes. It was necessary to insert an object into the system.indexes collection directly.
In the mongo shell, there were 2 helpers for that, with different names:
db.collection.createIndex, which still exists nowadays;
db.collection.ensureIndex, which was later deprecated.
I couldn't understand what exactly is issuing the createIndexes command. Is it your SDK? It is supposed to be done just once, not on every insert.
I had the same issue, and, as a workaround, used Agenda 0.6.28 instead (as it worked in a previous project of mine under that version).
Mind that Agenda does not emit a 'ready' event in this older version, so you should call the 'start' function directly:
agenda.define('delete old session logs', jobs.cleanSessionLogs);
agenda.on('fail', function(err, job) {
    errorLog.error({ err: err, job: job });
});
agenda.every('1 hour', 'delete old session logs');
agenda.start();
instead of:
agenda.define('delete old session logs', jobs.cleanSessionLogs);
agenda.on('fail', function(err, job) {
    errorLog.error({ err: err, job: job });
});
agenda.on('ready', function() {
    agenda.every('1 hour', 'delete old session logs');
    agenda.start();
});

Requests and connections double on node 4.1.2

We're currently in the process of updating from node 0.10 to node 4.1.2 and we're seeing some weird patterns. The number of connections to our postgres database doubles, and we're seeing the same pattern with requests to external services. We are running a clustered app using the native cluster API, and the number of workers is the same for both versions.
I'm failing to understand why upgrading the runtime language would apparently change application behaviour by doubling requests to external services.
One of the interesting things I've noticed with 0.12 and 4.x is the change in garbage collection. I've not used the pg module before, so I don't know how it maintains its pools internally or whether it would be affected by memory or garbage collection. If you haven't defined default memory settings for node, you could try giving that a shot and see if you get different results.
node --max_old_space_size=<some sane value in MB>
I ran into something similar, but I was getting double file writes. I don't know your exact case, but I've seen a scenario where requests could almost exactly double.
In the update to 4.1.2, process.send and child.send have gone from synchronous to asynchronous.
I found an issue like this:
var child = fork('./request.js');
var test = null;

child.send(smallRequest);
child.send(largeRequest);

child.on('message', function (val) {
    console.log('small request came back: ' + val);
    test = val;
});

if (!test) {
    //retry request
} ...
So whereas previously the blocking sends allowed this code to work, the non-blocking version assumes an error has occurred and retries. No error actually occurred, so double the requests come in.
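The timing problem can be reproduced without fork at all; in this sketch fakeAsyncSend stands in for the now-asynchronous child.send:

```javascript
// Sketch of why "send, then check the flag synchronously" breaks once
// send becomes asynchronous. fakeAsyncSend is a stand-in, not a real API.
let reply = null;

function fakeAsyncSend(msg, onReply) {
  setImmediate(() => onReply('ack:' + msg)); // reply arrives on a later tick
}

fakeAsyncSend('small', (val) => { reply = val; });

// This check runs before the reply has arrived, so it wrongly concludes
// the request failed and would trigger a retry.
console.log(reply); // null

setImmediate(() => console.log(reply)); // 'ack:small'
```

With a truly synchronous send, reply would already be set by the time the check runs, which is why the original code only broke after the upgrade.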
