MongoDB is not responding - Node.js

I have been searching for why this has been happening. I recently reinstalled Ubuntu 16.04, copied a Node + Express project to a flash drive, and pasted it back to the exact same location (~/Programming/project/). Since then everything else works as I would expect, but Mongo is not responding when I make requests to it through Mongoose. I have no reason to believe that Mongoose itself is the cause of the failure.

I have a couple of routes that I know should work; the exact same code works on my friend's machine (same Ubuntu, version and everything). I have uninstalled and reinstalled everything (including Ubuntu) multiple times. The only call that returns is finding a document by a specific ID, and it returns if and only if the ID does not exist. Mongo won't return all the records or anything else; the page just spins endlessly (locally hosted on my machine). However, using Mongo from the terminal works fine: I can query and get results as if everything were normal.

Has this happened to anyone else, or does anyone have any ideas? I can include more code if needed.
This does not work:
Greeting.find({}, function(err, greetings) {
  res.status(200).json(greetings);
});
This does work:
Greeting.findById(req.params.id, function(err, greeting) {
  if (err) {
    // return here, otherwise a second response is sent below
    return res.status(404).json({"error": "Greeting with that ID does not exist"});
  }
  res.status(200).json(greeting);
});
EDIT:
Sorry, I am new to Stack Overflow, so I am still getting the hang of what should be added or not...
mongoose.connect(database.url);
mongoose.connection.on('error', function() {
  console.info("Could not run mongodb, did you forget to run mongod?");
});
The database.url is what it needs to be, and the connection is open as far as I can tell...
I should also mention that while installing Ubuntu I wiped my previous dual boot in favor of just having Ubuntu, and I opted in to the hard-drive encryption... Could that be preventing Mongo from working properly? If so, how would I fix it?
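One way to narrow a hang like this down (a diagnostic sketch, not a fix; `app` is assumed to be the Express app) is to log Mongoose's connection state, since Mongoose silently buffers queries while it waits for a connection that may never open:

mongoose.connection.on('connected', function() {
  // readyState: 0 = disconnected, 1 = connected, 2 = connecting, 3 = disconnecting
  console.info('Mongoose connected, readyState =', mongoose.connection.readyState);
});
mongoose.connection.on('disconnected', function() {
  console.info('Mongoose disconnected');
});

// Log the state as each request arrives; if it never reaches 1,
// the queries are buffering rather than being answered slowly.
app.use(function(req, res, next) {
  console.info('readyState at request time:', mongoose.connection.readyState);
  next();
});

If readyState stays at 2 while the terminal shell works, the server is up but this particular connection never completes, which would match the endless spinning.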

The issue was in fact the encrypted hard drive. I reinstalled Ubuntu and that fixed it. I'm still not sure how to make it work with an encrypted disk.
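For anyone who hits the same wall and would rather not reinstall: if the hang really comes from mongod's data files sitting on the encrypted volume (an assumption on my part; it would apply to Ubuntu's encrypted home directory setups rather than full-disk LUKS), one workaround might be pointing mongod's data directory at a location outside the encrypted filesystem:

# /etc/mongod.conf (YAML config format, MongoDB 2.6+)
# /data/db is a hypothetical location on an unencrypted filesystem;
# create it and chown it to the mongodb user before restarting mongod.
storage:
  dbPath: /data/db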

Related

PG (Node-Postgres) Pool Hangs on Connect ... But Only Inside Gatsby?

NOTE: This is mainly a question about the pg or Node-PostgreSQL module. It has details from Gatsby and Postgraphile, but I don't need expertise in all three, just pg.
I have a database that works great with a PostGraphile-using Express server. I can also access it via Node at the command line ...
const { Pool } = require("pg");
const pool = new Pool({ connectionString: myDbUrl });
pool.connect().then(() => console.log('connected'));
// logs 'connected' immediately
The exact same database also previously worked great with Gatsby/PostGraphile via the gatsby-source-pg plug-in ... but recently I changed dev machines, and when I try to build or run a dev server, Gatsby hangs on the "source and transform nodes" step. When I debug it, it's hanging on a call to pool.connect().
So I literally have two codebases both using PostGraphile, both with the same config, and one works and the other doesn't. Even stranger, if I edit the source code of the Gatsby plug-in in node_modules, to make it use the exact same code (which I can run at the command line successfully) ... it still hangs.
The only thing I can think of is that some other Gatsby plug-in is using up all the connections and not releasing them, but as far as I can tell (e.g. by grep-ing through node_modules) no other plug-in even uses pg.
So really I have two questions:
A) Can anyone help me understand why connect would hang? Bonus points if you can help me understand why it would do so with a known-good config and only inside Gatsby (after some environmental factor changed)?
B) Can anyone help me fix it? If it might be some sort of "previous code forgot to release connections" issue, is there any way I can test for that? If I could just log new Pool().areYouBroken() somehow that would be amazingly useful.
Try:
npm install pg@latest
This is what got my pool/connection to start working as expected.
Annoying answer: because of a bug (thank you @charmander). For further details see: https://github.com/brianc/node-postgres/issues/2300
P.S. I never did find any sort of new Pool().areYouBroken() function.
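For what it's worth, pg's Pool does expose counters that come close: totalCount, idleCount, and waitingCount are documented pool properties (whether they would have caught this particular bug is speculation on my part). A quick sketch:

const { Pool } = require("pg");
const pool = new Pool({ connectionString: myDbUrl });

// Rough "areYouBroken" check: if waitingCount keeps growing while
// idleCount stays at 0, callers are queueing for connections that
// are never released.
function logPoolState(label) {
  console.log(label, {
    total: pool.totalCount,     // clients created, checked out or idle
    idle: pool.idleCount,       // clients sitting unused in the pool
    waiting: pool.waitingCount  // callers queued for a client
  });
}

logPoolState("before connect");
pool.connect().then((client) => {
  logPoolState("after connect");
  client.release();
});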

MongoDB with Node.js is using high CPU on Docker

Hi, I've installed Rocket.Chat on an Ubuntu AWS micro instance. It runs with Nginx, MongoDB, and Node, where MongoDB runs from the Docker image mongo:3.0.
It ran smoothly on the day of installation, but after some time the server got slow. I examined the server with the top command: MongoDB was using around 70% CPU, and the next day it flickered above 90%.
I've reinstalled everything on the server but it is the same again, no luck.
(Screenshot of the top output not reproduced here.)
Please let me know if any other stats are needed for this.
How can I track down the main problem here, and how can I optimize it to make it work properly?
Thanks
I got to know why this issue arises. I started implementing my own custom chat platform with Meteor, and the cause of the problem turned out to be services.resume.loginTokens in the user object.
We were implementing the Rocket.Chat methods/API in a custom native Android application. Whenever the app called the login method, it added a new login token without deleting the previous ones (to support multi-system logins).
So if you delete the previous tokens with a date check, they won't create overhead on the user object:
Accounts.registerLoginHandler (loginRequest) ->
  # ... Do whatever you need to do to authenticate the user
  stampedToken = Accounts._generateStampedLoginToken()
  Meteor.users.update userId,
    $push: {'services.resume.loginTokens': stampedToken}
  # Delete old resume tokens so they don't clog up the db
  cutoff = +(new Date) - (24 * 60 * 60) * 1000
  Meteor.users.update userId, {
    $pull:
      'services.resume.loginTokens':
        'when': {$lt: cutoff}
  },
  {multi: true}
  return {
    id: userId
    token: stampedToken.token
  }
I got this solution from this SO question.

ERR_CONNECTION_REFUSED using meteor uploads

I am deploying a Meteor application to a DigitalOcean droplet with meteor up. Everything goes well: the application gets deployed, the database works, seeding of data works, etc. But there is one problem I can't seem to solve.
I use the meteor-uploads package (https://github.com/tomitrescak/meteor-uploads) for file uploads. Locally everything goes well: the file gets uploaded, the finished callback gets called, etc. But once I have deployed the application to the server, it keeps giving me one of these errors:
POST http://*ip*/upload net::ERR_CONNECTION_REFUSED
POST http://*ip*/upload net::ERR_EMPTY_RESPONSE
POST http://*ip*/upload net::ERR_CONNECTION_RESET
Any ideas are welcome; I have searched all over for a solution but none seems to fit my problem. I also installed to a fresh droplet, but that didn't help. It doesn't work in any of my browsers (Chrome, Safari & Firefox on Mac), and on my phone (Android 5.0) I get the same errors. I am using the newest Meteor version, 1.1.0.1.
On localhost you don't need to set the environment variables, but on a hosting service you should.
Check this tutorial to see how to set the environment variables.
The file upload also needs a server startup configuration, like this:
//file: /server/init.js
Meteor.startup(function () {
  UploadServer.init({
    tmpDir: process.env.PWD + '/.uploads/tmp',
    uploadDir: process.env.PWD + '/.uploads/',
    checkCreateDirectories: true //create the directories for you
  });
});
But I'm not sure if putting this in a startup file will work on DigitalOcean; like I say, once you enter it, run the app and check whether the /.uploads/ directory exists.
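To check that quickly, here is a small verification sketch (my addition; it assumes the same /server/init.js as above and just logs whether the directories exist on the droplet):

//file: /server/init.js (after UploadServer.init)
var fs = Npm.require('fs');

Meteor.startup(function () {
  var base = process.env.PWD;
  console.log('PWD on this host:', base);
  // checkCreateDirectories should have made these; if either log
  // prints false, the init config never ran or PWD differs on the server.
  console.log('.uploads exists:', fs.existsSync(base + '/.uploads/'));
  console.log('.uploads/tmp exists:', fs.existsSync(base + '/.uploads/tmp'));
});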

How to check if files exist on a different drive with Node.js

I am working on a Node.js-powered system that runs within a local network, and I need to check if files exist on a different local drive of the computer the Node.js app runs on.
I have tried using the fs.exists function, but that doesn't work.
Is this possible? I am guessing there are security risks involved, but because the system runs 100% on a local network, is there any workaround to achieve this?
The reason I need to check that the files exist is that the file name holds the version number, and I need to get the latest version (highest number).
This is what I tried:
// the example looks for example#1.wav in the V:\public folder
var fs = require('fs');
var filename = "example";
var versionCount = 1;
if (fs.existsSync("V:\public\"+filename+"#"+versionCount+".wav")) {
  console.log("V:\public\"+filename+"#"+versionCount+".wav Found!");
} else {
  console.log("V:\public\"+filename+"#"+versionCount+".wav does not exist");
}
I am running Node.js on Windows.
Any suggestions would be greatly appreciated! TIA!
Posting an answer in case anyone runs into the same problem in the future.
I resolved this problem by using forward slashes (/) instead of backslashes (\). In a JavaScript string literal a backslash starts an escape sequence, so paths written with single backslashes don't contain what you think they do.
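To round out the original goal of finding the highest version number, here is a sketch using forward slashes (latestVersion is a name I made up; the V:/public folder and #N.wav naming come from the question):

var fs = require('fs');

// Returns the highest N for which V:/public/<name>#N.wav exists,
// or 0 if no version exists. Forward slashes work fine on Windows.
function latestVersion(name) {
  var version = 0;
  while (fs.existsSync("V:/public/" + name + "#" + (version + 1) + ".wav")) {
    version++;
  }
  return version;
}

console.log("latest example version:", latestVersion("example"));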

Meteor server side remote debugging

Versions
I'm using Meteor 1.0.3 and node 0.10.35 on a small Ubuntu Server 14.04 LTS (HVM), SSD Volume Type - ami-3d50120d EC2 instance.
Context
I know how to do server side debugging on my development box, just $ meteor debug and open another browser pointing to the url it produces -- works great.
But now, I'm getting a server error on my EC2 instance that I'm not getting in development, so I'd like to set up a remote debug session server-side.
Also, I deployed to the EC2 instance using the Meteor-up package (mup).
EDIT
In an effort to provide more background (and context) around my issue I'm adding the following:
What I'm trying to do is, on my EC2 instance, create a new pdf in a location such as:
application-name/server/.files/user/user-name/pdf-file.pdf
On my OSX development box, the process works fine.
When I deploy to EC2, and try out this process, it doesn't work. The directory:
/user-name/
for the user is never created for some reason.
I'd like to debug in order to figure out why I can't create the directory.
The code to create the directory that works on my development box is like so:
server.js
Meteor.methods({
checkUserFileDir: function () {
var fs = Npm.require('fs');
var dir = process.env.PWD + '/server/.files/users/' + this.userId + '/';
try {
fs.mkdirSync(dir);
} catch (e) {
if (e.code != 'EEXIST') throw e;
}
}
});
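In the absence of a remote debugger, one low-tech option is to add logging to this method and read it back through mup's logs; here is a sketch of the same method with console.log lines added (the logging is my addition):

Meteor.methods({
  checkUserFileDir: function () {
    var fs = Npm.require('fs');
    var dir = process.env.PWD + '/server/.files/users/' + this.userId + '/';
    console.log('checkUserFileDir: PWD =', process.env.PWD);
    console.log('checkUserFileDir: creating', dir);
    try {
      fs.mkdirSync(dir);
    } catch (e) {
      // Log the full error; ENOENT here would mean a parent
      // directory in the path doesn't exist on the server.
      console.log('checkUserFileDir failed:', e.code, e.message);
      if (e.code != 'EEXIST') throw e;
    }
  }
});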
I ssh'd into the EC2 instance to make sure the path
/server/.files/user/
exists, because this portion of the path is necessary in order for the above code to work correctly. I checked the path after the code should have run, and the
/user-name/
portion of the path is not being created.
Question
How can I debug remote server-side code in an easy way on my EC2 instance, like I do on my local development box?
Kadira.io supports remote error/exception tracking. It allows you to see the stack trace of server-side exceptions in the context of your Meteor methods.
See https://kadira.io/error-tracking.html for more detail.
It seems in my case, since I'm using Meteor-up (mup), I cannot debug per se, but I can get at the remote EC2 instance's server console and errors by running $ mup logs -f on my development box.
This effectively solves my issue of being blind on the remote server-side instance.
It still falls short of actual remote debugging, which would speed up finding errors and performance bottlenecks, but it's all we have for now.
For anyone still searching:
@zodern added server-side debugging of Meteor apps to the great meteor-up tool:
https://github.com/zodern/meteor-up/pull/976
Run mup meteor debug in your deployment directory and you will be almost set; just follow the printed instructions.
