I'm new to Cassandra.
I've deployed a Cassandra 2.0 cluster and everything works as expected.
There's one thing I don't understand, though.
From within a web app that uses the database, which node should I connect to? I know they're all the same, but how do I know the node I pick isn't down?
I read that you're not supposed to use a load balancer, so I'm a little confused.
Any help appreciated. Thanks!
Depending on which driver you use to connect, you can typically provide more than one node as initial contact points, usually as a comma-separated list like "node1,node2" ("192.168.1.1,192.168.1.2"). Most drivers treat these only as entry points: once connected, they discover the rest of the ring and automatically route around nodes that are down, which is why an external load balancer isn't needed.
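For example, with the DataStax Node.js driver (a minimal sketch; the addresses, keyspace and query are placeholders, and newer driver versions also require a localDataCenter option):

// npm install cassandra-driver
const cassandra = require('cassandra-driver');

// The contact points are only used for the initial connection; after that the
// driver discovers every node in the cluster and routes around dead ones.
const client = new cassandra.Client({
  contactPoints: ['192.168.1.1', '192.168.1.2'],
  keyspace: 'mykeyspace' // placeholder
});

client.execute('SELECT * FROM mytable LIMIT 1', (err, result) => {
  if (err) console.error('query failed:', err);
  else console.log(result.rows);
});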
Related
I moved my Node.js backend to Docker (previously it was deployed on an EC2 instance).
My MongoDB is deployed on another EC2 instance, which I didn't move to Docker (and I want to keep it this way).
After dockerizing my backend (it's deployed on ECS), the APIs are taking longer on DB queries. I don't understand what went wrong. Or is it supposed to be like this? Any suggestions?
So, I found the issue here.
The containers and the database were in different availability zones, so every query made one extra network hop, and that hop was causing the extra time.
I have been trying to deploy a Node.js web app which uses mongoose and express on AWS for a week now.
I'm new to AWS and am not the best networker, so please have patience with my lack of networking understanding.
So far I have used AWS's Quickstart Guide to launch a new VPC with mongoDB.
Found here: http://docs.aws.amazon.com/quickstart/latest/mongodb/welcome.html
I verified that the Mongo database was working by SSHing into the private Mongo IPs through the NAT gateway (using the key pair). It appears to be working fine, and I have a username and password for admin-level access to the MongoDB setup.
I then launched an Elastic Beanstalk Node.js application within this VPC (or at least I think it was in there; the security rules include the subnets of the MongoDB instances), with a call in my code as follows:
mongoose.connect('mongodb://<MongoUsername>:<MongoPassword>@test.amazonaws.com:27017/admin')
where admin is the database name.
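A minimal connection sketch with error listeners (the hostname and credentials are placeholders) can at least surface why the connection fails instead of failing silently:

const mongoose = require('mongoose');

// Placeholders: substitute the real username, password and private hostname.
mongoose.connect('mongodb://user:pass@mongo-host.internal:27017/admin');

mongoose.connection.on('error', (err) => console.error('Mongo connection error:', err));
mongoose.connection.once('open', () => console.log('Connected to MongoDB'));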
When I try to launch this Node.js instance, though, it does not run.
I have, however, verified that the Node.js app runs independently of the VPC by launching a completely separate Elastic Beanstalk instance. It runs my code fine (but obviously doesn't connect to a DB, so forms do not work).
What am I missing here? Why can I not connect this cluster to my node app? I'm super confused and frustrated with the whole process and would really appreciate any advice. Thanks.
If you need any further info to help me debug this let me know.
Edit: To the person who wants this closed as too broad: what extra information do you want? I said in the question that I'm new at this and asked to be told what else is needed, so I find this classification without any clarification pretty rude and unhelpful. Cheers
So the issue here was that I was not applying the correct security group rules to my Elastic Beanstalk instance, and it therefore could not reach the Mongo IPs.
Thanks to those who tried to help. If anyone has a similar issue and needs a hand, feel free to message me.
I'm currently using Compose.io to host my MongoDB; however, it costs $31/month, my DB isn't that big, and I don't really use any of its specific features.
I've decided to create a droplet on DigitalOcean and then use their one-click install for MongoDB.
With Compose.io, I simply use a connection URL like mongodb://USERNAME:PASSWORD@aws-xxxx.com:xxx/myDB along with an SSL certificate.
However, with DigitalOcean, it looks like SSHing into the droplet and then connecting is the best approach (rather than creating open access via bind_ip).
So I want to ask:
Is this SSH process quite intensive or time-consuming? That is, would it simply SSH once and then remain connected until the Node app (website) was closed?
I'm thinking of using the tunnel-ssh npm package. Is this recommended?
Any tips/advice/security notes would be appreciated.
Thanks.
Compose definitely offers a lot of security features that would take quite a bit of configuration to replicate. If this is a production database I would consider $31/month a good value. But speaking directly to your questions:
OpenSSH can be configured to keep the tunnel alive. The settings can be set in both the client and server configuration files: ServerAliveInterval on the client, and ClientAliveInterval together with ClientAliveCountMax on the server, will keep the SSH session alive.
OpenSSH is very efficient and doesn't impose much overhead; resource-wise it's not a concern. SSH2 implemented in pure JavaScript is not going to perform as well as the OpenSSH binary, so I wouldn't use tunnel-ssh without a convincing reason.
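If you do want the tunnel managed from Node, one alternative is to spawn the OpenSSH binary itself from the app (a sketch; the host, user and key path are placeholders):

const { spawn } = require('child_process');

// Forward local port 27017 to MongoDB on the droplet through OpenSSH.
// -N means no remote command, just the tunnel. All names are placeholders.
const tunnel = spawn('ssh', [
  '-N',
  '-L', '27017:127.0.0.1:27017',
  '-i', '/home/app/.ssh/tunnel_key',
  'tunneluser@droplet.example.com'
]);

tunnel.stderr.on('data', (d) => console.error('ssh:', d.toString()));
tunnel.on('exit', (code) => console.error('ssh tunnel exited with code', code));

The app then connects to mongodb://127.0.0.1:27017 as if the database were local.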
If you store your key with your application, then anyone who roots your application server will also have your key. So make sure the user you tunnel with has reduced privileges on the server: just what they need to access MongoDB and no more.
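The same principle applies on the MongoDB side; a sketch of creating a least-privilege user in the mongo shell (the database name, user and password are placeholders):

// Run in the mongo shell as an admin user.
use myDB
db.createUser({
  user: 'appuser',
  pwd: 'CHANGE_ME',
  roles: [ { role: 'readWrite', db: 'myDB' } ] // access to this one DB only
})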
You might also consider just running your application and MongoDB on the same droplet, without exposing MongoDB to the network (bind it to 127.0.0.1 via bind_ip in the mongod configuration). I wouldn't recommend this for production, but it's fine for low-key scenarios. Keep in mind that if someone roots your server or application, they will also have full access to the DB, so make sure you have a backup strategy.
Why are there dedicated services just for MongoDB? With a LAMP stack I would just install everything on my EC2 instance. Now that I'm deploying a MEAN stack, should I separate MongoDB and my Node server? I'm confused. I don't see any limitation in mixing Node with mongod on one single instance, and I could use tools like MongoLab as well.
Ultimately it depends on how much load you expect your application to have and whether or not you care about redundancy.
With Mongo and Node you can install everything on one instance. When you start scaling, the first separation is to split the application from the database. It's often easier to set everything up that way from the start, especially if you know you will have the load to require it.
I'm an experienced dev, but new to the sysadmin side of things. I'm running a Node.js application that uses a Redis database, with nginx acting as a reverse proxy to serve the Node pages over HTTPS.
My concern is that one or all three will fall over under heavy load or on an error, and there's nothing to get them started back up. Any advice is greatly appreciated.
My server is Ubuntu 14.04 LTS.
Many thanks =)
One of the best options is to use Upstart.
The original documentation is pretty complicated, but take a look:
http://upstart.ubuntu.com/cookbook/
And here is exactly what you need, if I've understood your issue correctly:
http://blog.joshsoftware.com/2012/02/14/upstart-scripts-in-ubuntu/
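For reference, a minimal Upstart job for a Node app looks roughly like this (a sketch; the app name and paths are placeholders), saved as /etc/init/myapp.conf:

description "myapp node server"
start on filesystem and started networking
stop on shutdown

# Restart the process if it dies, but give up if it crashes
# 10 times within 5 seconds.
respawn
respawn limit 10 5

exec /usr/bin/node /srv/myapp/server.js >> /var/log/myapp.log 2>&1

You can then manage it with sudo start myapp and sudo stop myapp.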
I found the easiest setup and ongoing monitoring for a new sysadmin came from Monit (http://mmonit.com/monit/)
There is a newer solution for this called pm2; I use it and it's working fine (pm2 start app.js daemonizes the app and restarts it if it crashes).