I am running an app on Heroku and I want to send a notification every morning at 8. I have about 200k users and it takes a long time and slows down my app, so I would like to separate the two and keep running my API on one instance, and have a separate instance just for sending the notification in the morning.
How can I have two Node.js servers use the same MongoDB database (and therefore the same models)?
I do not understand how to connect the two instances to the same database (I am using mLab on Heroku) without copying the model schema.
Because in that case, if I modify it on one instance, I would need to do the same on the other, and that doesn't make sense to me.
I hope it is clear enough.
Thank you
TGrif: thanks a lot, I just used the MongoDB native driver, and it works. I've always used Mongoose and didn't even think about looking for alternatives.
Documentation here: https://github.com/mongodb/node-mongodb-native
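For reference, a minimal sketch of what the notification instance could look like with the native driver. The `MONGODB_URI` variable, the `users` collection, and the `pushToken` field are assumptions for illustration; the point is that no Mongoose schema has to be duplicated on this second instance:

```javascript
// Hypothetical worker script for the second Heroku instance.
// It reads the shared database directly with the native driver,
// so no Mongoose models have to be copied over.
async function sendMorningNotifications(sendPush) {
  const { MongoClient } = require('mongodb'); // native driver
  const client = await MongoClient.connect(process.env.MONGODB_URI);
  try {
    const users = client.db().collection('users');
    // pushToken is an assumed field marking users who opted in
    const cursor = users.find({ pushToken: { $exists: true } });
    while (await cursor.hasNext()) {
      const user = await cursor.next();
      await sendPush(user.pushToken, 'Good morning!');
    }
  } finally {
    await client.close();
  }
}

module.exports = { sendMorningNotifications };
```

A scheduler add-on (e.g. Heroku Scheduler) can then run a script like this at 8 AM without touching the API instance.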
Thank you
Here is my situation. I have an extensive REST based API that connects to a MongoDB database using Mongoose. The API is written as a standard "MEAN" stack application.
Currently, when a developer queries the API they're always connecting to the live production database. What I want to do is have an exact duplicate database as a "staging" database, where new data will be added first, vetted over a period of time, and then moved to the live database. Then I want developers to be able to query either one simply by modifying their query.
I started looking into this with the Mongoose documentation, and it appears as though the models are tied to the DB connection, and if I want to have multiple connections I also have to have multiple models, one for each connection. This would be a nightmare of WET code and not the path I want to take.
What I want to do is not touch any of my code at all and simply have a switch that changes to the proper database for a given query. So my question is, how can I achieve this? Is it possible? The documentation seems to imply it is not.
Rather than trying to maintain connections to two environments in the same code base, have you considered setting up a stage version of your application? Which database it connects to could be set through an environment variable or some other configuration option.
The developers would still then only have to make a change to query one or the other and you could migrate data from the stage database to production/live database once you have finished your vetting process.
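As a sketch of that configuration switch (the `MONGO_URI_LIVE` and `MONGO_URI_STAGE` variable names are assumptions, not anything from the question), the selection can be a one-liner that the rest of the code base never sees:

```javascript
// Pick the connection string for the current environment.
// MONGO_URI_LIVE / MONGO_URI_STAGE are hypothetical variable names.
function getMongoUri(env) {
  const uris = {
    production: process.env.MONGO_URI_LIVE,
    staging: process.env.MONGO_URI_STAGE
  };
  return uris[env] || 'mongodb://localhost/myapp-dev';
}

// The models stay untouched; only the connection target changes:
// mongoose.connect(getMongoUri(process.env.NODE_ENV));
```

Because only the connection string differs, the same Mongoose models work unchanged against either database.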
I have a Node.js app that uses MongoDB as its database. I'm using the native Mongo driver (not Mongoose).
The application allows users to work on projects and share them, and the logic that decides which projects a user is allowed to see is built as a MongoDB criteria selector.
To test that, I found TingoDB, which looks like a great candidate for mocking MongoDB so I can run the real model code and check that it is working.
My question is: what is the best way to load the initial data? Keep it in a separate file? Keep it as another model?
Thank you,
Ido.
TingoDB actually stores its data in flat files, so if you want, you could just keep a copy of the database in a directory and load that.
However, if you're just testing with a small amount of data, you'd probably be better off keeping the test data in your testing scripts and inserting it through your application as part of the test. That way, you can easily compare the data in the application to the data you loaded in your assertions.
Finally, if you're running MongoDB in production, then you should probably use MongoDB in your tests. While they do have nearly identical APIs, they have very different performance, which should be something you're keeping track of in testing. Unless there's a need to use TingoDB during testing, I'd try to make it as similar to the production environment as possible.
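To illustrate the second option above (test data living in the test script itself), here is a hedged sketch; the fixture shape and the `loadFixtures` helper are invented for illustration, and the same callback-style insert works against either a TingoDB or a MongoDB collection, since their APIs mirror each other:

```javascript
// Fixtures kept right next to the test that uses them.
const fixtures = [
  { _id: 1, name: 'project-a', sharedWith: ['alice'] },
  { _id: 2, name: 'project-b', sharedWith: [] }
];

// Insert the docs before the test runs, then hand back control.
function loadFixtures(collection, docs, done) {
  collection.insert(docs, { w: 1 }, function (err) {
    if (err) return done(err);
    done(null, docs.length);
  });
}
```

In a test, you would call `loadFixtures(db.collection('projects'), fixtures, done)` in the setup step, then assert against the same in-memory `fixtures` array.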
I am developing an ASP.Net MVC 5 application that will be a SaaS for my clients. I want to use EF6 and I am currently using LocalDB. I am an Entity Framework beginner and I am having a hard time learning it. I have been searching the web for the last 2 days and found different approaches, but never found something clear enough to answer my questions.
I followed Scott Allen's tutorial on ASP.Net MVC 4 and 5, so currently I have 2 contexts, 'IdentityDbContext' and 'MyAppDbContext', both pointing to the DefaultConnection string, which uses a database called MyAppDb.mdf.
I want my customers to be able to log in on the website and connect to their own database, so I was planning on creating a new connection string (and database) for each of my clients and keeping one connection string for my clients' account information using my IdentityDbContext.
I have plenty of questions, but here are the 2 most important ones:
1) I am not sure how to do that and test it locally. Do I have to create new data connections for all my clients, and when a client connects, edit the connection string dynamically and pass it to 'MyAppDbContext'?
2) Even if I am able to do this, let's say I have 200 customers; that means I will have 201 databases: 1 account database (IdentityDbContext) and 200 client databases (MyAppDbContext). If I change my model in the future, does it mean I have to run Package Manager Console migration commands for each of the 200 databases? This seems brutal. There must be a way to propagate my model easily to every client's database, right?
Sorry for the long post and thank you very much in advance.
The answer to (1) is basically "yes", you need to do just that. The answer to (2) is that you'll have to run migrations against all the databases. I can't imagine how you would think there would be any other way to do it, you've got 200 separate databases that all need the same schema change. The only way to accomplish that is to run the same script (or migration) against each one of them. That's the downside of a single-tenant model like you've got.
A few things you should know since you're new to all of this. First, LocalDB is only for development. It's fine to use it while in development, but remember that you'll need a full SQL Server instance when it comes time to deploy. It's surprising how common a hangup this is, so I just want to make sure you know out of the gate.
Second, migrations, at least code-first migrations, are also for development. You should never run your code-first migrations against a production database. Not only would this require that you actually access the production database directly from Visual Studio, which is a pretty big no-no in and of itself, but nothing should ever happen on a production database unless you explicitly know what's changing, where. I have a write-up about how to migrate production databases that might be worth looking at.
For something like your 200 database scenario, though, it would probably be better to invest in something like this from Red Gate.
This question already has answers here:
How can Meteor apps work offline?
(4 answers)
Closed 8 years ago.
I am planning to create a web application using Node.js and the Meteor framework with MongoDB. This application will be critical for the business operation, so ideally it should be able to handle network failure. Is this possible? Or is my only option here to create a stand-alone application? The application will probably be run on either a PC or a tablet.
Are there any existing solution for this?
One idea I have: is it possible to have a local cache of the user's database on the machine? When the network is up, this cache might not be used but would be continually updated. When the network fails, the connection would be handed off to this database so operation can continue as usual. When the network is back up, this database would sync with our server and go back to normal mode.
In the case of a PC, we might be able to run a local server manually to get the webpage back up. I couldn't think of a solution for the tablet, though.
It sounds like you are looking for PouchDB. It works with CouchDB as a backend instead of Mongo, but I think these two are quite similar.
PouchDB is a local JavaScript-based DB on the client device. It syncs with the 'real' DB once the client is online again.
I am not affiliated with them, and I use Mongo daily as well, never actually tried CouchDB before, but might be worth having a look.
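A minimal sketch of that setup, assuming PouchDB is installed and a CouchDB endpoint exists at the URL you pass in (both are assumptions here):

```javascript
// Start a local DB on the device and live-sync it with the remote.
// While offline, reads and writes hit the local copy; PouchDB retries
// the sync automatically once the network is back.
function startOfflineSync(remoteUrl) {
  const PouchDB = require('pouchdb'); // assumed to be installed
  const local = new PouchDB('app-local');
  const sync = local.sync(remoteUrl, { live: true, retry: true })
    .on('paused', function () { console.log('caught up (or offline)'); })
    .on('error', function (err) { console.error('sync error', err); });
  return { local: local, sync: sync };
}
```

The app then talks only to `local`; the two-way replication keeps it and the server copy converging whenever connectivity allows.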
Meteor actually supports this out of the box. I guess I was searching with the wrong terms. Check out the link below for more information.
How can Meteor apps work offline?
I'm finishing up building my first site with node.js and I'm curious if there is a checkoff list for all the things that I should complete before I get it up. In development when certain values are not expected in my database calls, (using Mongoose), my site will just die (e.g. node segfaults).
I'll also be using this on a VPS of mine that already has Apache installed on it, so will I be able to run both, or do I need to look into something else for that?
Basically once it's up, I want to keep it up, and I'd like to know of any standard precautions I should know of before doing so.
Thanks!
I'm currently in a similar situation (about to deploy my first app on a private VPS), and here is the list I came up with:
1- Error logging: I used a simple WriteStream here, nothing too fancy.
var fs = require('fs');
// You might want to specify a path outside your app
var file = './log.log';
var logger = fs.createWriteStream(file);
app.configure(function(){
  //...
  app.use(express.logger({stream: logger}));
  //...
});
2- Use Forever to ensure that your script will run continuously. Yes, there are plenty of other solutions (using a daemon, for example), but I've been using Forever for a while now and never had any problems.
3- Consider setting up an admin interface. This was actually a requirement in my case, so I went ahead with smog, which looks very nice, especially for your client :).
4- If you use forever, you can monitor its state using Monit. Check out this blog post for a basic setup.
5- If you are using Mongo, consider developing a backup strategy of your data. This page is a very good starting point.
Note that this list does not contain any information regarding multi-app, multi-machine or multi-core support.
If multi-app support interests you, nginx seems to be a trusted solution. This (brilliant) SO answer will help you get set up.
If you have many spare machines to use, node-http-proxy was developed by nodejitsu, and allows you to expose only one machine and reverse-proxy the rest.
If you are looking for multi-core support, cluster comes bundled with node, so you can spawn N different processes (N being the number of cores you have) and have them listen to the shared port.
And, since we all love to hear a nice story, here are a few posts about Node.js/MongoDB use in production and the lessons learned:
1- Lessons learned from launching i.TV
2- Using MongoDB for 2+ billion documents on Craigslist
Given that Node.js is not a web server like Apache or IIS, there's no checklist of configuration settings to follow. Also, given that the modules and/or frameworks you use can vary widely based upon the project you are creating, checklists would always miss something...especially as the Node.js ecosystem continues to evolve and grow.
As such, I'd suggest reviewing the material here as they answer your questions and are generally useful no matter what you are doing with Node.js:
What should every programmer know about web development? - list you should run through to be sure you didn't forget anything generally relevant.
Node Express Mongoose Demo - example code that can show you how to handle errors gracefully, structure your code, use require statements to break up code, add environment-specific configuration, etc.
Node.js Best Practice Exception Handling - additional info on handling problems
Apache and Node.js on the Same Server - the most simple answer is "sure, just make sure you are using different ports". If you want both to run and answer on port 80, then things are more complicated.
I am concerned that your app dies "when certain values are not expected in my database calls".
Mongoose is a nice tool because it allows for custom data validations on individual fields, can filter out data that doesn't fit into the Schema you have defined (keeping your documents consistent), and with the right settings can throw errors when there is 'bad data' passed to it rather than send bad data to the database, and more...
I'm wondering what you are doing that lets an unhandled error make it past Mongoose and past any callback function, given that callbacks usually take the form function(err, data) and present the opportunity to deal with the error immediately.
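For concreteness, the err-first pattern that paragraph refers to looks like this (the `findUserSafely` wrapper is a made-up name; `model` stands in for any Mongoose model):

```javascript
// Every Mongoose-style callback receives (err, data) -- checking err on
// the first line is what keeps unexpected values from killing the app.
function findUserSafely(model, id, done) {
  model.findById(id, function (err, user) {
    if (err) return done(err);                           // driver/DB failure
    if (!user) return done(new Error('user not found')); // missing document
    done(null, user);
  });
}
```

Routing every failure through `done(err)` means the caller (e.g. an Express error handler) decides what to do, instead of the process dying on an unexpected value.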