I have been trying to connect to a Cassandra database on a Rackspace cloud server, with no success.
Can anyone shed any light on the last paragraph of this comment from http://wiki.apache.org/cassandra/StorageConfiguration
listen_address
Commenting out this property leaves it up to InetAddress.getLocalHost(). This will always do the Right Thing if the node is properly configured (hostname, name resolution, etc), and the Right Thing is to use the address associated with the hostname (it might not be: on cloud services you should ensure the private interface is used).
On Rackspace Cloud Servers you will most likely want to listen on eth1 (10.X.X.X), the ServiceNet IP. This is only accessible to other Cloud Servers inside the same datacenter. The max throughput of eth1 is twice that of eth0 and you are not charged for the bandwidth.
Please remember that the ServiceNet network is not private so you will still need to restrict access to the ports that you bind to via iptables.
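In practice that means finding the eth1 address, binding Cassandra to it, and then only letting hosts you trust reach those ports. A rough sketch (the 10.180.0.x addresses are placeholders for your own ServiceNet IPs, and 7000/9160 assume the default inter-node and Thrift ports):

    # find the ServiceNet (eth1) address
    ip addr show eth1

    # cassandra.yaml -- bind to the ServiceNet address instead of relying on getLocalHost()
    listen_address: 10.180.0.5
    rpc_address: 10.180.0.5

    # iptables -- allow only specific trusted ServiceNet hosts, then drop everything else
    iptables -A INPUT -p tcp -s 10.180.0.6 --dport 7000 -j ACCEPT   # another cluster node
    iptables -A INPUT -p tcp -s 10.180.0.6 --dport 9160 -j ACCEPT   # a trusted Thrift client
    iptables -A INPUT -p tcp --dport 7000 -j DROP
    iptables -A INPUT -p tcp --dport 9160 -j DROP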
Related
I want to connect to a VM in the Azure cloud from home, i.e. without a fixed IP. I have added the two security rules, on the network interface and on the NSG respectively, to accept inbound connections on SSH port 22 from the IPv4 address given by showip.net. This doesn't work and I get a connection timeout; I tried the IPv6 address as well. If I do the very same thing for another server (outside Azure), the very same procedure works. The native IP addresses of both my home computer and the virtual machine I use as an alternative are IPv6.
So the question is: does my connection from home fail because some sort of reverse lookup is failing, or what else could be the cause?
Thanks!
It sounds like the issue is most likely some odd NAT at your ISP; especially when IPv6 comes into play, it can often be a bit hard to find the actual external IP address that your requests are coming from. You can try different sites like whatsmyip.com etc. to see if you find another address that you can add.
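One quick way to check what source address a remote host actually sees for your connection (ifconfig.me is just one of many such services, used here instead of showip.net):

    curl -4 https://ifconfig.me   # IPv4 address your traffic appears to come from
    curl -6 https://ifconfig.me   # IPv6 address, if your ISP routes IPv6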
Apart from that, there are various things you could try:
Use SSH from the Azure Cloud Shell (https://shell.azure.com)
Use Azure Bastion to have a jump host in the same VNET
Use a point-to-site VPN from your PC into your VNET
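If you do track down the right source address, the NSG rule itself can be added or corrected from the Azure CLI as well. A rough sketch, where the resource group, NSG name, rule name and source address are all placeholders:

    az network nsg rule create \
      --resource-group my-rg \
      --nsg-name my-vm-nsg \
      --name allow-ssh-from-home \
      --priority 300 \
      --direction Inbound --access Allow --protocol Tcp \
      --destination-port-ranges 22 \
      --source-address-prefixes 203.0.113.45/32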
I am trying to connect my app, running on one EC2 instance, to MongoDB, running on another EC2 instance. I'm pretty sure the problem is in the security settings, but I'm not quite sure how to handle that.
First off, my app's instance is in an autoscaling group that sits behind an ELB. The inbound security settings for the instance and ELB allow access to port 80 from anywhere, as well as all traffic from its own security group.
The EC2 instance that runs Mongo is able to take connections if the security group for that instance accepts all inbound traffic from anywhere. Any other configuration that I've tried causes the app to say that it cannot make a connection with the remote address. I've set rules to accept inbound traffic from all security groups that I have, but it only seems to work when I allow all traffic from anywhere.
Also, my db instance is set up with an elastic ip. Should I have this instance behind an ELB as well?
So my questions are these:
1) How can I securely make connections to my EC2 instance running mongo?
2) In terms of architecture, does it make sense to run my database this way, or should I have this behind a load balancer as well?
This issue is tripping me up a lot more than I thought it would, so any help would be appreciated.
NOTE
I have also set bind_ip=0.0.0.0 in /etc/mongo.conf.
Your issue is that you are using the public elastic IP to connect to your database server from your other servers. This means that the connection is going out to the internet and back into your VPC, which presents the following issues:
Security issues due to the data transmission not being contained within your VPC
Network latency issues
Your database server's security group can't identify the security group of the inbound connections
Get rid of the elastic IP on the MongoDB server; there is no need for it unless you plan to connect to it from outside your VPC. Modify your servers to use the private internal IP address assigned to your database server when creating connections to it. Finally, lock your security group back down to only allow access to the DB from your other security group(s).
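For the lock-down step, the idea is to allow port 27017 only from the app tier's security group rather than from a CIDR block. A sketch with placeholder group IDs:

    # sg-11111111 = the MongoDB instance's security group (placeholder)
    # sg-22222222 = the app / ELB security group (placeholder)
    aws ec2 authorize-security-group-ingress \
      --group-id sg-11111111 \
      --protocol tcp --port 27017 \
      --source-group sg-22222222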
Optional: Create a private hosted zone in Route53, with an A record pointing to your database server's private IP address, then use that hostname instead of the internal IP address.
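If you take the Route53 option, the A record can be maintained from the CLI once the private hosted zone exists. A sketch; the hostname, zone ID and private IP below are placeholders:

    cat > change.json <<'EOF'
    {
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "mongo.internal.example.com",
          "Type": "A",
          "TTL": 300,
          "ResourceRecords": [{ "Value": "10.0.1.25" }]
        }
      }]
    }
    EOF

    aws route53 change-resource-record-sets \
      --hosted-zone-id Z123EXAMPLE \
      --change-batch file://change.json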
We're running MongoDB on Azure and are in the process of setting up a production replica set (no shards) and I'm looking at the recommendations here:
http://docs.mongodb.org/ecosystem/tutorial/install-mongodb-on-linux-in-azure/
And I see the replica set config is such that the members will talk to each other via external IP addresses - isn't this going to 1) incur additional Azure costs, since the replication traffic goes through the external IPs, and 2) incur replication latency for the same reason?
At least one of our applications that will talk to Mongo will be running outside of Azure.
AWS has a feature where external DNS names, when looked up from inside the VMs, resolve to internal IPs, and when resolved from outside, to the external IP, which makes things significantly easier :) In my previous job, I ran a fairly large sharded MongoDB in AWS...
I'm curious what you folks recommend. I had two ideas...
1) configure each mongo host with an external IP (not entirely sure how to do this in Azure but I'm sure it's possible...) and configure DNS to point to those IPs externally. Then configure each VM to have an /etc/hosts file that points those same names to internal IP addresses (a sketch of that hosts file is below). Run Mongo on port 27017 in all cases (or really whatever port). This means that the set does replication traffic over internal IPs but external clients can talk to it using the same DNS names.
2) similar to #1, but run mongo on 3 different ports with only one external IP address and point all three external DNS names to this external IP address. We achieve the same results but it's cleaner I think.
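To make option 1 concrete, the override on each VM might look roughly like this; the hostnames and 10.x addresses are made up for illustration:

    # /etc/hosts on every Azure VM: the same names the public DNS serves,
    # but resolving to the internal IPs so replication stays internal
    10.0.0.4   mongo1.example.com
    10.0.0.5   mongo2.example.com
    10.0.0.6   mongo3.example.com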
Thanks!
Jerry
There is no best way, but let me clarify a few of the "objective" points:
There is no charge for any traffic moving between services / VMs / storage in the same region. Even if you connect from one VM to the other using servicename.cloudapp.net:port. No charge.
Your choice whether you make the mongod instances externally accessible. If you do create external endpoints, you'll need to worry about securing those endpoints (e.g. Access Control Lists). Since your app is running outside of Azure, this is an option you'll need to consider. You'll also need to think about how to encrypt the database traffic (mongodb Enterprise edition supports SSL; otherwise you need to build mongod yourself).
Again, if you expose your mongod instances externally, you need to consider whether to place them within the same cloud service (sharing an IP address, getting separate ports per mongod instance) or in multiple cloud services (a unique IP address per cloud service). If the mongod instances are within the same cloud service, they can then be clustered into an availability set, which reduces downtime by avoiding simultaneous host OS updates across all the VMs and splits the VMs across multiple fault domains.
In the case where your app/web tier lives within Azure, you can use internal IP addresses, with both your app and MongoDB VMs within the same virtual network.
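For the shared-cloud-service variant (one IP, separate ports per mongod instance), the members would simply be started on distinct ports, one per VM, so each can get its own endpoint on the shared public IP. Ports and paths below are illustrative only:

    mongod --replSet rs0 --port 27017 --dbpath /data/db --fork --logpath /var/log/mongod.log   # VM 1
    mongod --replSet rs0 --port 27018 --dbpath /data/db --fork --logpath /var/log/mongod.log   # VM 2
    mongod --replSet rs0 --port 27019 --dbpath /data/db --fork --logpath /var/log/mongod.log   # VM 3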
I have created a Linux VM with a single node Cassandra cluster installed.
Cassandra.yaml has the following:
seeds:
listen_address:
rpc_address:
netstat -an shows that all required ports are up and listening (i.e. 9160, 9042).
I am trying to connect my application, which is outside of the Azure cloud, to the Cassandra cluster in the cloud. It looks like the connection from the outside host to the Azure Cassandra node is being blocked.
I wonder if there is a real restriction on accessing an Azure VM from outside the network. Is there a way to access this Cassandra node from outside?
If someone can answer my question, that would be very nice.
Thank you!
You need to go to the "Endpoints" section of your virtual machine.
At the bottom click on "Add", and add new endpoints for these ports.
Then you will need to manage ACL for each endpoint, defining the IP ranges of the allowed and blocked IP addresses.
Keep in mind that if the internal IP used by the virtual machine is different from the external (public) IP used by the client, then depending on the driver you may need to teach it how to do address translation. Otherwise the cluster will report only internal IPs in response to the discovery request, and those will obviously not be accessible from outside.
For this reason, and from a security perspective, I would recommend setting up the Cassandra cluster inside a virtual network and accessing it via VPN.
There is a comprehensive tutorial on how to do it here: http://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-nodejs-running-cassandra/
I am looking for some help setting up a multi-datacenter Cassandra cluster. My current setup is MySQL master / slave, where MySQL replicates to a disaster-recovery center over the public IP of the remote datacenter and communicates with slaves in the main datacenter via the private IPs. While VPN'd into the network I can connect to any database via the public IP, which is nice for testing / troubleshooting, and the app servers connect to the database via private IPs.
With Cassandra the disaster recovery center would become a second datacenter. I'm hoping to get advice on how each datacenter should be configured. I'm a bit confused about the listen_address and the broadcast_address. Currently, the seeds in the cassandra.yaml file are the public IPs and everything else is the private IP, and this works for the app servers. If the rpc_address is configured with the private IP or the public IP then connections are only accepted on that address. Connections are accepted on both if the rpc_address is 0.0.0.0, but in my testing that causes failover to fail. I have a Java program running that inserts a row and reads it back in a loop. If the rpc_address is a specific address, I can stop a node and the program continues (replication factor of 2 for testing). If the rpc_address is 0.0.0.0 then the program bombs because it cannot connect to the cluster, even though both node addresses are specified in the Cluster() constructor.
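For reference, this is roughly what the per-node split being described looks like in cassandra.yaml; the addresses are placeholders and this is only meant to illustrate the settings under discussion, not to prescribe them:

    listen_address: 10.1.0.11          # private IP: gossip inside this datacenter
    broadcast_address: 203.0.113.11    # public IP: what the node advertises to the other datacenter
    rpc_address: 10.1.0.11             # client connections; a concrete address here is also what the
                                       # node advertises to drivers, which 0.0.0.0 cannot do
    seeds: "203.0.113.11,203.0.113.21" # in the seed_provider section; public IPs, as you have now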
So, with that long lead in, my questions are:
What is the recommended configuration for the nodes to communicate with one another over the private IPs within a given datacenter but communicate with the other datacenter over the public IPs? It seems the EC2 multi-region snitch accomplishes this, but we're not on Amazon's cloud and that code seems specific to Amazon.
For development and debugging, can developers VPN into our network and connect via the public IP while the app servers connect on the private IP? This works fine with an rpc_address of 0.0.0.0; however, failover doesn't work, which was not unexpected given the warning comment in the cassandra.yaml file.
I really appreciate the help.
Thanks,
Greg