Somewhat trivial question, but I feel it's crucial to get this answered. My question is about Redis and Node: how do I 'run' a Redis db and have Node interact with it?
I plan to use node_redis (https://github.com/mranney/node_redis). I am fairly comfortable saying I understand how to use this module to interact with the Redis db.
My question is one level higher: how and where is the Redis db 'running'? Do I have to install, create, and then run/turn on this db before I can use node_redis to manipulate it? Or does the act of requiring node_redis already guarantee that there will be a Redis db to interact with?
Asking because my app will run on a device (not a machine) that I know can execute Node, because it has Node installed, but I cannot install Redis on it (or at least I don't know how to) if Node will not be doing it for me.
WHEW I hope that was not too wordy. TIA!
Niko
Redis is a separate program. You have to download, install, and run it separately. If you accept the default settings (listen port), node_redis will then connect to it automatically since, by default, a Redis installation has no passphrase set.
You'd just need to call:
var client = require("redis").createClient();
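For instance, a minimal round trip against a locally running server might look like this (the key name is just for illustration):

    var redis = require("redis");
    var client = redis.createClient(); // defaults to 127.0.0.1:6379, no password

    // Without an error handler, a failed connection throws and kills the process
    client.on("error", function (err) {
        console.error("Redis error:", err);
    });

    client.set("greeting", "hello", function (err) {
        if (err) throw err;
        client.get("greeting", function (err, reply) {
            console.log(reply); // prints "hello"
            client.quit();      // close the connection so node can exit
        });
    });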
If your requirements are basic (and chances are they are, since you're running in a limited environment), you might actually use a different key-value store, like nStore, which is implemented in JS and uses simple files as storage. That would not require any program other than Node itself.
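A minimal sketch, assuming nStore's documented API (the file path and key are illustrative):

    var nStore = require('nstore');

    // Open (or create) a file-backed store; the callback fires once it's loaded
    var users = nStore.new('data/users.db', function () {
        users.save('niko', { platform: 'device' }, function (err) {
            if (err) throw err;
            users.get('niko', function (err, doc) {
                console.log(doc); // { platform: 'device' }
            });
        });
    });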
I'm currently using Compose.io to host my MongoDB. However, it costs $31/month, my DB isn't that big, and I don't really use any of its specific features.
I've decided to create a droplet on DigitalOcean and then use their one click install for MongoDB.
With Compose.io, I simply use a connection URL like mongodb://USERNAME:PASSWORD@aws-xxxx.com:xxx/myDB along with an SSL certificate.
However, with DigitalOcean, it looks like SSH'ing into the droplet and then connecting is the best approach (rather than creating an open-access bind URL).
So I want to ask:
Is this SSH process intensive/time-consuming? Would it SSH once and then remain connected until the node app (website) was closed?
I'm thinking of using tunnel-ssh (npm install tunnel-ssh). Is this recommended?
Any tips/advice/security notes would be appreciated.
Thanks.
Compose definitely offers a lot of security features that would take quite a bit of configuration to replicate. If this is a production database I would consider $31/month a good value. But speaking directly to your questions:
OpenSSH can be configured to keep the tunnel alive. The settings can be set in both the client and server configuration files.
Keep SSH session alive
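On the client side, for example, the keepalive settings look roughly like this (host name and user are placeholders):

    # ~/.ssh/config — send an application-level keepalive every 60s and
    # drop the connection after 3 missed replies
    Host mongo-droplet
        HostName droplet.example.com
        User tunneluser
        ServerAliveInterval 60
        ServerAliveCountMax 3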
OpenSSH is very efficient and doesn't impose much overhead; resource-wise it's not a concern. An SSH2 client implemented in pure JavaScript is not going to perform as well as the OpenSSH binary, so I wouldn't use tunnel-ssh without a convincing reason.
If you store your key with your application, then when somebody roots your application server they will also have your key. So make sure the user you tunnel with has reduced privileges on the server: just what they need to access MongoDB and no more.
You might also consider just running your application and MongoDB on the same droplet. Don't expose MongoDB to the network. I wouldn't recommend this for production, but it's fine for low-key scenarios. Keep in mind that if someone roots your server or application, they will also have full access to the DB. Make sure you have a backup strategy.
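A sketch of what "don't expose it to the network" looks like in the mongod config (YAML format used by MongoDB 2.6+):

    # /etc/mongod.conf — listen on loopback only, so the DB is unreachable
    # from the network and only local processes (your app, or a tunnel) connect
    net:
      bindIp: 127.0.0.1
      port: 27017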
Why are there dedicated web services just for MongoDB? Unlike with LAMP, I would just install everything on my EC2 instance. So now that I'm deploying a MEAN stack, should I separate MongoDB from my Node server? I'm confused. I don't see any limitation in mixing Node with mongod under one single instance, and I could use tools like MongoLab as well.
Ultimately it depends how much load you expect your application to have and whether or not you care about redundancy.
With Mongo and Node you can install everything on one instance. When you start scaling, the first separation is to split the application from the database. It's often easier to set everything up that way from the start, especially if you know you will have the load to require it.
I know that Ruby on Rails has this feature, and the railstutorial specifically encourages it. However, I have not found such a thing in Node.js. If I want to run SQLite3 on my machine so I can have easy database access, but Postgres in production on Heroku, how would I do this in Node.js? I can't seem to find any tutorials on it.
Thank you!
EDIT: I meant to include Node.JS + Express.
It's possible of course, but be aware that this is probably a bad idea: http://12factor.net/dev-prod-parity
If you don't want to go through the hassle of setting up postgres locally, you could instead use a free postgres plan on Heroku and connect to it from your local machine:
DATABASE_URL=url node server.js
A .env file can make this easier:
https://devcenter.heroku.com/articles/heroku-local#copy-heroku-config-vars-to-your-local-env-file
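A sketch of such a .env file (the value is a placeholder; the linked article shows how to pull the real one with heroku config:get DATABASE_URL -s >> .env):

    # .env — read automatically by `heroku local`; never commit this file
    DATABASE_URL=postgres://user:password@host.compute-1.amazonaws.com:5432/dbname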
To switch between the production and development DBs, you can use the port your application runs on. Heroku assigns your app its port at run time through the PORT environment variable, whereas locally you run on some other port that you choose yourself. This lets you figure out at run time whether your application is running locally or in production, and you can switch databases accordingly.
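A sketch of the check this answer describes (the heuristic is the answer's; the variable names and connection strings are mine):

    // Heroku supplies PORT in the environment; locally it is normally unset,
    // so we fall back to a fixed dev port.
    var isProduction = Boolean(process.env.PORT);
    var port = process.env.PORT || 3000;

    // Illustrative connection strings — not from the original answer
    var dbUrl = isProduction
        ? process.env.DATABASE_URL              // Heroku Postgres
        : "sqlite://db/development.sqlite3";    // local SQLite file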
You could use something like jugglingdb to do this:
JugglingDB(3) is cross-db ORM for nodejs, providing common interface to access most popular database formats. Currently supported are: mysql, sqlite3, postgres, couchdb, mongodb, redis, neo4j and js-memory-storage (yep, self-written engine for test-usage only). You can add your favorite database adapter, checkout one of the existing adapters to learn how, it's super-easy, I guarantee.
Jugglingdb also works on client-side (using WebService and Memory adapters), which allows to write rich client-side apps talking to server using JSON API.
I personally haven't used it, but having a common API for all your database instances would make it simple to use one locally and one in production; you could wire up some environment detection without too much trouble as well and have it automatically select the target DB depending on the environment it's running in.
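I haven't verified this, but based on jugglingdb's README the wiring could look something like the following (the env switch is my addition, and the adapter settings objects are a guess):

    var Schema = require('jugglingdb').Schema;

    // Pick the adapter from the environment: Postgres on Heroku,
    // SQLite locally (settings shown are illustrative)
    var schema = process.env.DATABASE_URL
        ? new Schema('postgres', { url: process.env.DATABASE_URL })
        : new Schema('sqlite3', { database: 'dev.sqlite3' });

    // The model definition stays the same regardless of the backend
    var User = schema.define('User', {
        name:  String,
        email: String
    });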
I already have MongoDB on my Mac (OS X Mavericks) because it comes packaged with Meteor. I'm learning some pure, non-Meteor Node.js right now. I'd like to work with MongoDB, but I'm afraid to change any of the configuration I've already got on my machine, as I don't want to screw up the Mongo that comes packaged with Meteor.
Is this something I should be concerned about? How do I protect my other mongo instance?
I assume that by "the MongoDB that comes with Meteor" you mean the database Meteor uses internally when you type "meteor", which resides in .meteor inside your app folder. In that case it's no problem adding a MongoDB installation to the OS; they won't conflict.
In fact, I recommend installing MongoDB separately for several reasons: when you are running a production app it's easier to scale, multiple apps can use the same database, etc.
First install MongoDB, for example with Homebrew. Then you just run your app with
MONGO_URL=mongodb://127.0.0.1/<db> meteor
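The full local sequence might look like this (formula name as it was on Homebrew at the time; db name and data path are illustrative):

    brew install mongodb
    mongod --dbpath ~/data/db          # start the standalone server
    MONGO_URL=mongodb://127.0.0.1/myapp meteor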
According to mongodb's documentation:
...In many cases running multiple instances of mongod on a single system is not recommended but for testing purposes of course possible.
I don't think that Meteor makes intensive changes to MongoDB's out-of-the-box configuration (unless, of course, you've already made configuration amendments yourself for special sharding, oplog tailing strategies, etc.).
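To illustrate the docs' point, a second, isolated mongod for testing only needs its own port and data directory (values are illustrative):

    mkdir -p ~/data/testdb
    mongod --port 27018 --dbpath ~/data/testdb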
I've recently updated my node.js Redis package. Now my data seems to be gone. Does updating remove all my data?
It is strange that updating a client library would destroy your data. I suggest looking at the following possible causes:
1. Redis is not configured to persist data, or it is configured to persist with RDB snapshots but not frequently enough, and you killed Redis the hard way instead of using the SHUTDOWN command.
2. The client library has some kind of unit test that, if run against an instance, does not detect that the instance is not empty and destroys the data. Did you run any tests?
3. Also make sure you don't have FLUSHALL / FLUSHDB commands in your code for some reason, and that your keys did not simply expire because of a time to live set with EXPIRE, SETEX, or the like. (You can check each of these from redis-cli; see the sketch after this list.)
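A quick sketch of those checks (the key name is a placeholder):

    redis-cli CONFIG GET save        # RDB snapshot schedule; empty means disabled
    redis-cli CONFIG GET appendonly  # whether AOF persistence is enabled
    redis-cli TTL some:key           # -1 = no expiry set, -2 = key missing/expired
    redis-cli DBSIZE                 # number of keys still in the current db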
I do not know much about the Redis client for Node, but I would bet that upgrading a DB client does not cause the clearing of the DB. That would be buggy behavior.
So either this was some kind of bug you ran into, or you did something wrong that cleared the DB independently of the upgrade of the Redis client you are using.