AWS MongoDB inbound rules for replica cluster - node.js

I am using AWS for my MEAN stack servers. I have three instances running Node.js and three instances running MongoDB as a replica set.
To secure MongoDB inbound requests, I want to configure the security group for the Mongo instances as below:
Inbound allowed from the Node.js instances, so that the Node instances can connect to the database
Inbound allowed from the MongoDB instances, so that the Mongo instances can talk to each other for replication
I am facing the following problems:
In the edit screen, an inbound rule only has a field for an IP address. Since I need to set custom IPs, how can I provide all five instances (3 + 2) in one text box? I can't even use a simple range, since the five IP addresses are all different and don't fall into a contiguous sequence.
What will happen if an IP address changes? Do I need to use Elastic IPs, or can I use the instances' hostnames?
Please advise.

Answer to the first question:
If the IPs of the instances follow a pattern (I'm pretty sure they will), then you should use the CIDR representation of the IP addresses.
So suppose you have 5 IPs from 192.168.1.14 to 192.168.1.18; together they can be covered by the following CIDR blocks:
192.168.1.14/31
192.168.1.16/31
192.168.1.18/32
Add one inbound rule per CIDR block, putting each value into the IP range field in the edit screen.
There are various online IP-range-to-CIDR converters out there, like this one; you can use any of them.
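As a rough illustration (not the only way to do it), here is a minimal Python/boto3 sketch that adds one inbound rule per CIDR block for the MongoDB port; the security group ID and region are hypothetical placeholders:

# Minimal sketch: add one inbound rule per CIDR block for MongoDB (port 27017).
# The group ID and region are placeholders; the CIDR blocks are the ones above.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

mongo_sg_id = "sg-0123456789abcdef0"  # hypothetical security group ID
cidr_blocks = ["192.168.1.14/31", "192.168.1.16/31", "192.168.1.18/32"]

ec2.authorize_security_group_ingress(
    GroupId=mongo_sg_id,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 27017,
            "ToPort": 27017,
            "IpRanges": [{"CidrIp": cidr, "Description": "MEAN stack node"} for cidr in cidr_blocks],
        }
    ],
)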
Answer to the second question:
The internal IPs of Amazon instances generally do not change as long as you stay within the same virtual network, so you can use the internal IPs of your network (as I did in the example above). If your IPs do fluctuate, then yes, you will have to use static/Elastic IPs instead.
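If you do end up needing Elastic IPs, a minimal boto3 sketch (with a hypothetical instance ID and region) would look roughly like this:

# Minimal sketch: allocate an Elastic IP and associate it with an instance,
# so its public address survives stop/start cycles.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

allocation = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",          # hypothetical instance ID
    AllocationId=allocation["AllocationId"],
)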

Related

How to configure 2 files in 2 dependent instances in cloudformation script?

I am doing a lift and shift of software from an on-premises architecture. There are two servers (main and auxiliary) that have to talk to one another over the network. I have tested and confirmed that I can manually add their hostnames and private IP addresses to the hosts file ("C:\Windows\System32\drivers\etc\hosts") and the software works fine.
For those that don't know, this file is used by Windows to map a network hostname like EC2AM-1A2B3C to an IP address. So if I added the hostname and IP address of the main server to the hosts file of the auxiliary server, then the auxiliary server could route to the main server (i.e. PS> ping EC2AM-1A2B3C would then work).
How could I pass the required information to both servers? They both have to know the other server's private IP address and hostname. If this is not possible at server spin-up time, how might the servers connect and pass this information? I would really like to automate this if possible.
Based on your description, I have some suggestions you can refer to.
If you want the two EC2 instances to be able to communicate with each other, you can do it by adding rules to their security groups.
(1) Create security groups for your instance 1 and instance 2 respectively.
(2) Add an inbound rule to the security group of instance 1, choose "ICMP - IPv4" as the type, and enter the security group ID of instance 2 as the source.
(3) Create the inbound rule for instance 2 following the same steps.
For more information on security group rules, you can refer to the official documentation.
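As a rough sketch of steps (1)-(3) above, done with Python/boto3 instead of the console (the group IDs are hypothetical placeholders):

# Minimal sketch: allow ICMP (ping) from instance 2's security group into
# instance 1's security group. Group IDs are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2")

sg_instance1 = "sg-11111111111111111"  # security group of instance 1
sg_instance2 = "sg-22222222222222222"  # security group of instance 2

ec2.authorize_security_group_ingress(
    GroupId=sg_instance1,
    IpPermissions=[
        {
            "IpProtocol": "icmp",
            "FromPort": -1,   # -1 means all ICMP types/codes
            "ToPort": -1,
            "UserIdGroupPairs": [{"GroupId": sg_instance2}],
        }
    ],
)
# Repeat with the group IDs swapped to create the rule for instance 2.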
You have tried adding the hostname and IP address of the primary server to the hosts file of the secondary server, so that each machine knows the IP address of the other. Amazon CloudFormation cannot handle the circular dependency between the two instances.
You can refer to the answer to this question for a way to make both instances aware of each other's IP address.
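One possible way to automate the hosts-file part (a sketch only, assuming the main server carries a known Name tag and the instance role allows ec2:DescribeInstances) is to have each server look up its peer at boot and append it to the hosts file:

# Sketch: on the auxiliary server, discover the main server's private IP via
# the EC2 API and append it to the Windows hosts file. The "Name" tag value
# and the hostname below are examples; adjust to your own naming scheme.
import boto3

ec2 = boto3.client("ec2")

resp = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Name", "Values": ["main-server"]},            # hypothetical tag
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)
main_private_ip = resp["Reservations"][0]["Instances"][0]["PrivateIpAddress"]
main_hostname = "EC2AM-1A2B3C"  # the main server's Windows hostname (example from the question)

with open(r"C:\Windows\System32\drivers\etc\hosts", "a") as hosts_file:
    hosts_file.write(f"\n{main_private_ip} {main_hostname}\n")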
Hope these suggestions are useful to you.

Azure load balancer with IPv6 and IPv4 frontend support

Currently my LB has an IPv4 frontend address and one backend pool with 5 VMs that have IPv4 private addresses.
We would like to add IPv6 support to our Service Fabric cluster. I found this article: https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-ipv6-overview and I see a lot of "Currently not supported" notes.
The IPv6 address is assigned to the LB, but I cannot make rules:
Failed to save load balancer rule 'rulename'. Error: Frontend ipConfiguration '/subscriptions/...' referring to PublicIp with PublicIpAddressVersion 'IPv6' does not match with PrivateIpAddressVersion 'IPv4' referenced by backend ipConfiguration '/subscriptions/...' for the load balancer rule '/subscriptions/...'.
When I try to add a new backend pool, I get this message:
One basic SKU load balancer can only be associated with one virtual machine scale set at any point of time
Questions:
When can we expect support for putting multiple LBs in front of one VMSS?
Is it possible to add IPv6 frontend without adding IPv6 to the backend (NAT64?)?
Is it possible to add IPv6 addresses to an existing VM scale set without recreating it?
I am not sure I understand you exactly; it seems those limitations are listed in that article.
For your questions:
I guess you mean mapping multiple LB frontends to one backend pool. If so, the same frontend protocol and port can be reused across multiple frontends, since each rule must produce a flow with a unique combination of destination IP address and destination port. You can get more details about multiple frontend configurations with Load Balancer.
It is not possible. The IP version of the frontend IP address must match the IP version of the target network IP configuration.
NAT64 (translation of IPv6 to IPv4) is not supported.
It is not possible. A VM Scale Set is essentially a group of load-balanced VMs. There are a few differences between a VM and a VMSS; you can refer to this. Also, if a network interface has a private IPv6 address assigned to it, you must add (attach) it to a VM when you create the VM. Read the network interface constraints:
You may not upgrade existing VMs to use IPv6 addresses. You must deploy new VMs.

Setting internally visible DNS entries on Google cloud

I would like to set DNS records that are visible from instances inside Google Cloud.
For example, if I query DNS from my PC I'll get one IP; however, if I query DNS from the instance I'll get another IP (an A record, to be exact).
Ideally I'd like to do this in the sanest/most convenient way possible. I could install a caching DNS server on every instance, set up authoritative results for my own records, and forward everything else (I guess bind9 can do that, I've never tried it before). But that is a configuration-sync mess, and it's not elegant. I assume there might be a better way.
One solution is to use totally different zones for different sets of machines and use the DNS search path to select.
So for example you could set up
server1.internal.yourdomain.com IN A 1.2.3.4
server1.external.yourdomain.com IN A 5.6.7.8
Then set up your machines with resolv.conf containing either
search internal.yourdomain.com
or
search external.yourdomain.com
Then when you look up server1 on such a machine, it will return the address from the appropriate zone. This scheme means you don't need to rely on complex routing or IP detection, and you will be immune to incidents where internal or external IPs leak into each other's results.
Of course, this does mean that you aren't keeping any IP addresses secret, so make sure you have other security layers in place (you probably shouldn't rely on secret IPs for security anyway).
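To check which zone a given machine resolves against, a quick sketch (assuming one of the resolv.conf search domains above is in place and the system resolver is doing the lookup):

# Sketch: resolve the short name "server1" and print what comes back.
# On a machine whose resolv.conf contains "search internal.yourdomain.com"
# this should return 1.2.3.4; with the external search domain, 5.6.7.8.
import socket

fqdn, aliases, addresses = socket.gethostbyname_ex("server1")
print(fqdn, addresses)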
Assuming you want your VM instances to be able to query other instances by name, and retrieve the desired instance’s private IP, this is already baked into GCP.
Google Cloud Platform (GCP) Virtual Private Cloud (VPC) networks have an internal DNS service that allows you to use instance names instead of instance IP addresses to refer to Compute Engine virtual machine (VM) instances.
Each instance has a metadata server that also acts as a DNS resolver for that instance. DNS lookups are performed for instance names. The metadata server itself stores all DNS information for the local network and queries Google's public DNS servers for any addresses outside of the local network.
[snip]
An internal fully qualified domain name (FQDN) for an instance looks like this:
hostName.c.[PROJECT_ID].internal
You can always connect from one instance to another using this FQDN.
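For example (a sketch with a hypothetical instance name and project ID), from another instance in the same VPC network:

# Sketch: resolve another instance's internal FQDN from inside the VPC.
# "my-db-1" and "my-project" are hypothetical placeholders.
import socket

internal_ip = socket.gethostbyname("my-db-1.c.my-project.internal")
print(internal_ip)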
Otherwise, if you want to serve up entirely arbitrary records to a set of machines, you’ll need to serve those records yourself (perhaps using Cloud DNS). In this case, you’d need to reconfigure the resolv.conf file on those instances appropriately (although you can’t just change the file as you see fit). Note that you can't restrict queries to only your own machines, but as David also mentioned, security through obscurity isn't security at all.
Google Cloud DNS private zones were just announced in beta and do exactly what you need.

Recommended replica set config in Azure

We're running MongoDB on Azure and are in the process of setting up a production replica set (no shards) and I'm looking at the recommendations here:
http://docs.mongodb.org/ecosystem/tutorial/install-mongodb-on-linux-in-azure/
And I see the replica set config is such that the members will talk to each other via external IP addresses - isn't this going to 1) incur additional Azure costs, since the replication traffic goes through the external IPs, and 2) incur replication latency for the same reason?
At least one of our applications that will talk to Mongo will be running outside of Azure.
AWS has a feature where external DNS names resolve to internal IPs when looked up from the VMs, and to the external IP when resolved from outside, which makes things significantly easier :) In my previous job, I ran a fairly large sharded MongoDB in AWS...
I'm curious what you folks' recommendations are. I had two ideas...
1) configure each mongo host with an external IP (not entirely sure how to do this in Azure but I'm sure it's possible...) and configure DNS to point to those IPs externally. Then configure each VM to have an /etc/hosts file that points those same names to internal IP addresses. Run Mongo on port 27017 in all cases (or really whatever port). This means that the set does replication traffic over internal IPs but external clients can talk to it using the same DNS names.
2) similar to #1, but run mongo on 3 different ports with only one external IP address and point all three external DNS names to this external IP address. We achieve the same results, but it's cleaner I think.
Thanks!
Jerry
There is no best way, but let me clarify a few of the "objective" points:
There is no charge for any traffic moving between services / VMs / storage in the same region. Even if you connect from one VM to the other using servicename.cloudapp.net:port. No charge.
It's your choice whether to make the mongod instances externally accessible. If you do create external endpoints, you'll need to worry about securing those endpoints (e.g. Access Control Lists). Since your app is running outside of Azure, this is an option you'll need to consider. You'll also need to think about how to encrypt the database traffic (the MongoDB Enterprise edition supports SSL; otherwise you need to build mongod yourself).
Again, if you expose your mongod instances externally, you need to consider whether to place them within the same cloud service (sharing an IP address, with a separate port per mongod instance) or in multiple cloud services (a unique IP address per cloud service). If the mongod instances are within the same cloud service, they can then be grouped into an availability set, which reduces downtime by avoiding simultaneous host OS updates across all VMs and splits the VMs across multiple fault domains.
In the case where your app/web tier lives within Azure, you can use internal IP addresses, with both your app and MongoDB VMs within the same virtual network.
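If you go with something like option 1 from the question, clients inside and outside Azure can use the same hostnames; a rough pymongo sketch with hypothetical hostnames and replica set name:

# Sketch: connect to the replica set using DNS names that resolve to the
# external IPs from outside Azure and (via /etc/hosts) to internal IPs from
# inside. Hostnames and replica set name are hypothetical placeholders.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://mongo1.example.com:27017,"
    "mongo2.example.com:27017,"
    "mongo3.example.com:27017/?replicaSet=rs0"
)
print(client.admin.command("ping"))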

How to create Azure Input Endpoint to VRRP Virtual IP Address

I'm setting up a test web farm in Azure. Consisting of:
Four Ubuntu Servers
Two balancers running HAProxy + Keepalived
Two web servers running Apache
Keepalived has been configured and everything has been working fine. HAProxy performs great.
My issue is that I want to enable the Keepalived failover clustering, but I can't seem to figure out how to create an Input Endpoint in Azure for the virtual IP address that the Keepalived VRRP is using.
In other words, I want to create an Input Endpoint for a virtual IP address in Azure, but not for an existing VM. So far, the only thing I've been able to do is create Input Endpoints for existing VMs (using their IP) for specific port numbers. I want to be able to configure:
Take TCP requests on port XX and map them to IP address YY.YY.YY.YY on port ZZ
Anyone know of a way to do this? I've looked on both portals (new and old), and the closest thing I see is that the Cloud Services page for my VNET has the Input Endpoints listed, but there is no add/edit button.
This is not currently possible in Azure. Azure IaaS VMs do not yet support multiple IPs per interface, so keepalived will not be able to move a VIP between the nodes. We tried to do the same thing and were told it's not currently available. However, it's supposed to be on the road-map and it is "coming", as is the ability to have multiple interfaces per machine.
Input endpoints exist to expose a service on a single VM (it's a NAT), and they are not attachable to an arbitrary interface. The only option I thought through was to use Azure's Traffic Manager to round-robin between the two HAProxy instances using two exposed endpoints, with a health check to fall back to a single HAProxy instance if one fails.
