Best way to implement a server-side cache in Node.js

I'm trying to implement a server-side cache in Node.js. I've read about express-redis-cache, but how would that solution work with load-balanced Node servers? I might use something like AWS's Redis service, but running Redis on some external server seems to defeat the purpose, as it increases latency. Can you suggest the best approach for this?
PS - I have some .md & .json files from which I generate .html files to return. Instead of regenerating them on every request, I want a cache that stores the generated .html files. I'll update the cached content only when my .md & .json files are updated.

I've read about express-redis-cache, but how would that solution work with load-balanced Node servers?
It wouldn't be a problem: all your load-balanced Node servers would connect to the same Redis host, so they share one cache.
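For illustration, a minimal sketch of that setup, assuming the node-redis (v4) client; the Redis URL, route, and the renderHtml() helper are hypothetical stand-ins for your own .md/.json-to-HTML step:

    // Every load-balanced instance runs this same code and talks to the
    // same Redis host, so a page rendered by one server is served from
    // cache by all of them. renderHtml() is a hypothetical placeholder.
    const express = require('express');
    const { createClient } = require('redis');

    const app = express();
    const redis = createClient({ url: 'redis://shared-redis-host:6379' });

    app.get('/pages/:slug', async (req, res) => {
      const key = `html:${req.params.slug}`;
      const cached = await redis.get(key);            // cache hit: no re-render
      if (cached) return res.type('html').send(cached);

      const html = await renderHtml(req.params.slug); // your .md/.json -> HTML step
      await redis.set(key, html);                     // stays cached until invalidated
      res.type('html').send(html);
    });

    redis.connect().then(() => app.listen(3000));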
I might use something like AWS's Redis service, but running Redis on some external server seems to defeat the purpose, as it increases latency
It depends on how you architect your app. If you are fully hosted on AWS, ElastiCache is designed for this; latency will be minimal because the connection stays within the VPC, which is fast. If you need to connect to ElastiCache from an on-premise client, you still have options: a VPN (not ideal) or Direct Connect, which would be much faster than a VPN.
Having said that, if you are looking to cache .html files, then CloudFront is probably a better fit than a bespoke caching solution built on Redis.
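If you do stick with the Redis approach, invalidation matching the PS above can be as simple as deleting the key when a source file changes. A rough sketch, reusing the redis client from the earlier example; the content directory and key scheme are illustrative:

    // Watch the source directory and drop the cached HTML whenever a
    // .md or .json file changes, so the next request re-renders it.
    const fs = require('fs');

    fs.watch('./content', (eventType, filename) => {
      if (filename && /\.(md|json)$/.test(filename)) {
        const slug = filename.replace(/\.(md|json)$/, '');
        redis.del(`html:${slug}`).catch(console.error);
      }
    });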

Related

How to run and start MongoDB from within Node.js

Basically I don't want to use an existing MongoDB hosting service like the official MongoDB Cloud or whatever -- how can I do what they do, but myself? Do I just include the database folder, along with the MongoDB executables, in my Node.js folder and call require("child_process").spawn("mongod.exe", /* insert params here */), or is there some way to do this in the mongo module?
Also, do I need my own virtual machine to be able to do this, or can this work on a standard Heroku Node.js application, for example?
Heroku's hosting solution offers only an ephemeral filesystem, so you can't use it to host a database. Any files you create are temporary and will be purged on a regular basis.
For example, when your application is idle, Heroku will de-provision that resource and clear out any data you've left there.
You can't use Heroku like this; you must use an external database service or one of their many add-on offerings.
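For the "how do I spawn it myself" part: on a host that does have a persistent disk (your own VM or a VPS, not Heroku), starting mongod from Node looks roughly like this. A sketch assuming mongod is on the PATH, with the data path and port illustrative:

    // Start a local mongod as a child process, pointed at your own data
    // directory. Only viable where the disk persists across restarts.
    const { spawn } = require('child_process');

    const mongod = spawn('mongod', ['--dbpath', './data/db', '--port', '27017']);

    mongod.stdout.on('data', (chunk) => process.stdout.write(chunk));
    mongod.on('exit', (code) => console.log(`mongod exited with code ${code}`));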

Node.js - Tips on SSH'ing to my MongoDB

I'm currently using Compose.io to host my MongoDB; however, it costs $31/month, my DB isn't that big, and I don't really use any of its specific features.
I've decided to create a droplet on DigitalOcean and then use their one-click install for MongoDB.
With Compose.io, I simply use a connection URL like mongodb://USERNAME:PASSWORD@aws-xxxx.com:xxx/myDB along with an SSL certificate.
However, with DigitalOcean, it looks like SSH'ing into the droplet and then connecting is the best approach (rather than creating an open-access bind_url).
So I want to ask:
Is this SSH process intensive/time-consuming? That is, would it SSH once and then remain connected until the Node app (website) is closed?
I'm thinking of using the tunnel-ssh npm package. Is this recommended?
Any tips/advice/security notes would be appreciated.
Thanks.
Compose definitely offers a lot of security features that would take quite a bit of configuration to replicate. If this is a production database, I would consider $31/month a good value. But to speak directly to your questions:
OpenSSH can be configured to keep the tunnel alive ("keep SSH session alive"); the relevant settings can be set in both the client and server configuration files, e.g. ServerAliveInterval on the client and ClientAliveInterval on the server.
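For example, the client side of the keep-alive, together with the tunnel itself, can live in ~/.ssh/config; a sketch, with the host, user, and ports illustrative:

    # ~/.ssh/config (client side): keeps the tunnel alive and forwards
    # local port 27017 to MongoDB on the droplet.
    Host mongo-droplet
        HostName droplet.example.com
        User deploy
        ServerAliveInterval 60
        ServerAliveCountMax 3
        LocalForward 27017 127.0.0.1:27017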
OpenSSH is very efficient and doesn't impose much overhead; resource-wise it's not a concern. An SSH2 implementation in pure JavaScript is not going to perform as well as the OpenSSH binary, so I wouldn't use tunnel-ssh without a convincing reason.
If you store your key with your application, then anybody who roots your application server will also have your key. So make sure the user you tunnel with has reduced privileges on the server: just what they need to access MongoDB and no more.
You might also consider just running your application and MongoDB on the same droplet, and not exposing MongoDB to the network at all. I wouldn't recommend this for production, but it's fine for low-stakes scenarios. Keep in mind that if someone roots your server or application, they will also have full access to the DB, so make sure you have a backup strategy.
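In either setup, the app then connects to localhost: either to the local end of the SSH tunnel or to a mongod on the same droplet. A minimal sketch using the official mongodb driver; the database name is illustrative:

    // Connect to MongoDB over the local end of the SSH tunnel (or a
    // local mongod); nothing is exposed on the public interface.
    const { MongoClient } = require('mongodb');

    async function main() {
      const client = await MongoClient.connect('mongodb://127.0.0.1:27017');
      const db = client.db('myDB');
      console.log(await db.listCollections().toArray());
      await client.close();
    }

    main().catch(console.error);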

Amazon ElastiCache for Redis with a Node.js server

I am using Redis in my Node.js application. I don't use it for caching and I don't want to: I want my data in Redis to be persistent at all times, and I want every call to Redis to be written to disk. Is it helpful to use Amazon ElastiCache in such a case? I ask because I understand that ElastiCache handles standby replication and automatic failover, which is very important to me. I am running my Node.js server on Amazon EC2. Any help or suggestions would be appreciated.
Currently, Amazon ElastiCache's only mechanism for keeping a persistent state is snapshotting: it uses the Backup and Restore feature to keep a copy in an S3 bucket, which you can use to reload your data after losing it or to warm up a new instance.
The Backup and Restore feature uses BGSAVE in the background; since that is a heavy operation on your instance, if it is set up to run periodically it is recommended to run it on a read replica.
So, to answer your question: I do not think Amazon ElastiCache is a solution for your problem. It was meant for applications looking for a cache layer to scale and speed up lookups against other storage engines.
Update: as a manually managed alternative (taken from the comments): if you are open to setting up your own Redis Cluster (redis.io/topics/cluster-spec), that will be your best bet. It takes care of automatic failover and replication, with persistence options enabled as an append-only file (AOF) or RDB snapshots.
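For reference, the "every write hits disk" requirement from the question maps onto a couple of standard redis.conf directives on a self-managed instance; a sketch, with the snapshot schedule illustrative:

    # redis.conf sketch: maximum durability at some throughput cost.
    appendonly yes        # enable the append-only file (AOF)
    appendfsync always    # fsync after every write command
    save 900 1            # optional RDB snapshot: every 900s if >= 1 change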

Redis deployment configuration - master-slave replication

Currently I have two servers on which I have deployed Node.js/Express-based web service APIs. I am using Redis for caching JSON strings.
What is the best option for deploying this setup to production? I've seen advice to use a dedicated server for Redis. OK, I'll take that and use a dedicated server to run the Redis master. Can I use the existing app servers as slave nodes? Note: these app servers are running a Node/Express application.
What other options do I have?
You can.
It all depends on the load those other servers are under; it's a problem of resource sharing. To be honest, my main issue with your architecture is not dedicated vs. non-dedicated servers; it's that you are placing a Redis server (master or not) on a host that will most likely be facing the internet (the Express app), meaning it's quite exposed.
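If you do make the app servers slaves, the replica-side setup is only a couple of redis.conf lines; a sketch, with the master's address illustrative:

    # On each app server's Redis instance: replicate from the dedicated
    # master and serve reads only.
    slaveof 10.0.0.5 6379
    slave-read-only yes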
If you can simulate HTTP load against your Node/Express servers, compare the results of running benchmarks on your dedicated server vs. the non-dedicated ones. On a running Redis server, type:
redis-benchmark -q -n 100000
If the app servers are being hammered and frequently using all cores, you should see a substantial difference in the benchmarks.
My suggestion is to go ahead with your first setup, add monitoring for Redis response times, and only act when you have to, which might be right away if the benchmarks show very poor results.
As a side note, consider not sharing hosts between services you expose to the internet and services that perform internal functions for your application.

Node.js webserver for production

A little update to a common question: as of the current version of Node.js, v0.6.5, is it safe to run it as a webserver in production? I really want to skip the step of putting nginx in front as a proxy, for example. I am going to use Express.js, NowJS, and gzippo. nginx doesn't support WebSockets yet, and it's a little hard to set up socket.io over SSL. Are there any benefits to nginx other than serving static files better?
Any advice on this matter? And if it's OK to run Node as the webserver, are there any other modules worth considering?
To be honest, aside from serving static files, I don't really see any important benefits (though nginx may have more server-specific extensions).
Also, you might want to use bouncy or node-http-proxy for proxying, and browserify to use your server-side modules on the frontend.
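As a minimal sketch of the node-http-proxy option, using the http-proxy module's createProxyServer API (ports illustrative):

    // Forward everything arriving on port 8000 to a Node app on port 3000.
    const httpProxy = require('http-proxy');

    httpProxy
      .createProxyServer({ target: 'http://127.0.0.1:3000' })
      .listen(8000);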
Edit: also, you would not be the first to run Node without nginx; as far as I know, Trello and other websites do the same.
Other benefits of nginx besides serving static files:
You can have it compress responses dynamically, or serve a pre-built .gz file even when the uncompressed version is requested.
You can have it cache generated content, saving a call back to Node.js.
You can have it route requests to a cluster of Node application servers (see the sketch after this list).
Lots of other neat stuff: http://wiki.nginx.org/Modules
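To make the routing point concrete, a sketch of an nginx config that load-balances across a few Node processes and serves static files itself; names, ports, and paths are illustrative:

    # nginx sketch: static files from disk, everything else proxied to Node.
    upstream node_app {
        server 127.0.0.1:3000;
        server 127.0.0.1:3001;
    }

    server {
        listen 80;

        location /static/ {
            root /var/www;    # nginx serves these directly
            gzip on;
        }

        location / {
            proxy_pass http://node_app;
        }
    }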
Using nginx isn't required, though; running Node with nothing in front of it is perfectly fine.
