Setting internally visible DNS entries on Google Cloud

I would like to set DNS records that are visible only from instances inside Google Cloud.
For example, if I query DNS from my PC I get one IP, but if I query the same name from an instance I get another IP (an A record, to be exact).
Ideally I'd like to do this in the sanest/most convenient way possible. I could install a caching DNS server on every instance, make it authoritative for my records, and forward everything else (I guess bind9 can do that, I've never tried it before), but that is a configuration-sync mess and not elegant. I assume there is a better way.

One solution is to use entirely different zones for different sets of machines and use the DNS search path to select between them.
So for example you could set up
server1.internal.yourdomain.com IN A 1.2.3.4
server1.external.yourdomain.com IN A 5.6.7.8
Then set up your machines with resolv.conf containing either
search internal.yourdomain.com
or
search external.yourdomain.com
And then when you look up server1 on such a machine, it will return the address from the appropriate zone. This scheme means you don't need to rely on complex routing or IP detection, and you will be immune to incidents where internal or external IPs leak into each other's results.
Of course, this does mean you aren't keeping any IP addresses secret, so make sure you have other security layers in place (you probably shouldn't rely on secret IPs for security anyway).
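For example, with the records above and search internal.yourdomain.com in resolv.conf, a plain lookup picks up the internal address (a sketch; the output format is that of the host utility):
host server1
server1.internal.yourdomain.com has address 1.2.3.4
The same command on a machine whose resolv.conf says search external.yourdomain.com would come back with 5.6.7.8 instead.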

Assuming you want your VM instances to be able to query other instances by name, and retrieve the desired instance’s private IP, this is already baked into GCP.
Google Cloud Platform (GCP) Virtual Private Cloud (VPC) networks have an internal DNS service that allows you to use instance names instead of instance IP addresses to refer to Compute Engine virtual machine (VM) instances.
Each instance has a metadata server that also acts as a DNS resolver for that instance. DNS lookups are performed for instance names. The metadata server itself stores all DNS information for the local network and queries Google's public DNS servers for any addresses outside of the local network.
[snip]
An internal fully qualified domain name (FQDN) for an instance looks like this:
hostName.c.[PROJECT_ID].internal
You can always connect from one instance to another using this FQDN.
Otherwise, if you want to serve entirely arbitrary records to a set of machines, you'll need to serve those records yourself (perhaps using Cloud DNS). In that case, you'd need to reconfigure the resolv.conf file on those instances appropriately (although you can't just change the file however you see fit). Note that you can't restrict queries to only your own machines, but as David also mentioned, security through obscurity isn't security at all.
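As a quick check of the built-in behaviour, from any other VM in the same network (the project ID my-project is a placeholder):
ping server1.c.my-project.internal
dig +short server1.c.my-project.internal
Both resolve to the instance's internal (private) IP via the metadata resolver, with no extra configuration.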

Google Cloud DNS private zones were just announced in beta and do exactly what you need.
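A minimal sketch with the gcloud CLI, assuming a VPC network named default and reusing the example names from above (the zone name, record name, and IP are placeholders):
gcloud dns managed-zones create internal-zone \
    --dns-name="internal.yourdomain.com." \
    --description="Records visible only inside the VPC" \
    --visibility=private \
    --networks=default
gcloud dns record-sets transaction start --zone=internal-zone
gcloud dns record-sets transaction add "10.128.0.5" \
    --name="server1.internal.yourdomain.com." --type=A --ttl=300 \
    --zone=internal-zone
gcloud dns record-sets transaction execute --zone=internal-zone
Instances in that network then resolve the name through the VPC's metadata resolver, with no per-instance resolv.conf changes; queries from outside the VPC never see these records.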

Related

How to configure 2 files in 2 dependent instances in cloudformation script?

I am doing a lift and shift with software from an on-premises architecture. There are two servers (main and auxiliary) that have to talk to one another over the network. I currently have tested and confirmed that I can manually add their hostnames and private IP address to the hosts file ("C:\Windows\System32\drivers\etc\hosts") and the software works fine.
For those who don't know, this file is used by Windows to map a network hostname like EC2AM-1A2B3C to an IP address. So if I added the hostname and IP address of the main server to the hosts file of the auxiliary server, the auxiliary server could reach the main server (i.e. PS> ping EC2AM-1A2B3C would then work).
How could I pass the required information to both servers? They both have to know the other server's private IP address and hostname. If this is not possible at server spin-up time, how might the servers connect and pass this information? I would really like to automate this if possible.
Based on your description, here are some suggestions you can refer to.
If you want two EC2 instances to be able to communicate with each other, you can do it by adding rules to their security groups:
(1) Create a security group for instance 1 and for instance 2, respectively.
(2) Add an inbound rule to the security group of instance 1, choose "ICMP - IPv4", and enter the security group ID of instance 2 as the source.
(3) Create the inbound rule for instance 2 in the same way.
For more information on security group rules, you can refer to the official documentation.
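As a rough AWS CLI equivalent of step (2) (the two group IDs are placeholders):
aws ec2 authorize-security-group-ingress \
    --group-id sg-instance1 \
    --protocol icmp --port -1 \
    --source-group sg-instance2
Run the same command with the two group IDs swapped for step (3).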
You have tried adding the hostname and IP address of the primary server to the hosts file of the secondary server so that each machine knows the other's IP address, but AWS CloudFormation cannot handle the circular dependency between the two instances.
You can refer to the answer to this question for a way to make both instances aware of each other's IP address.
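One common way to break that circular dependency (not necessarily what the linked answer does) is to tag each instance and have its partner look it up at boot. For example, if the main server carries a hypothetical tag Role=main, the auxiliary server's user data can run:
aws ec2 describe-instances \
    --filters "Name=tag:Role,Values=main" "Name=instance-state-name,Values=running" \
    --query "Reservations[0].Instances[0].PrivateIpAddress" --output text
and append the result together with the main server's hostname to its hosts file (and vice versa on the main server). The instance role needs ec2:DescribeInstances permission for this to work.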
Hope these suggestions are useful to you.

Microservices - how to find DNS IP?

In the world of microservices endpoints should not (must not) be hardcoded. One of the best ways to do this is to have a DNS and let each microservice register while starting. By doing this whenever microservice A wants to communicate with microservice B it just asks DNS for endpoints where B currently listens.
What I do not understand is: how do microservices know where the DNS lives?
Basically, DNS is just a 'special' service, and I can have one or multiple instances of it, right? So I should not hardcode its endpoint either, or should I? And let's say I do: what if the DNS instance is moved to a different location? Do I have to manually change its location in the configuration?
Does anyone happen to know how to design this? (Or can anyone point me to a document where this is explained? Although there is a lot of information about microservices and DNS, I cannot find this particular point anywhere; maybe it's just too trivial and I am the only one who does not get it.)
Manual setup of DNS is possible, as stated in the other answers, but I would recommend using an infrastructure that supports service discovery in all respects. For example, Kubernetes has built-in DNS support and makes it very easy to expose a service that can consist of any number of Pods.
An infrastructure technology like Kubernetes will also make many other aspects of the microservices architectural style easier to implement, including high availability and scalability.
Please see the official docs for some more information.
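As a small sketch (the deployment/service name service-b and port are made up): exposing a Deployment as a Service gives it a stable, cluster-internal DNS name, and Kubernetes writes the cluster DNS server into each Pod's /etc/resolv.conf automatically, so callers never need to know where that DNS server lives.
kubectl expose deployment service-b --port=8080 --name=service-b
From any Pod in the cluster, the service is then reachable as service-b.default.svc.cluster.local (or just service-b from within the same namespace).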
DHCP solves this problem. When a host boots it sends a broadcast DHCP message. The DHCP server responds with many values, one of which is the location of DNS servers.
In the case of microservices, the host OS (or container host) will be configured for DNS via DHCP. The microservice code then uses the OS's DNS functions to resolve addresses.
https://en.m.wikipedia.org/wiki/Dynamic_Host_Configuration_Protocol
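On a typical Linux host you can see the outcome of that DHCP exchange in /etc/resolv.conf (addresses here are illustrative):
$ cat /etc/resolv.conf
nameserver 10.0.0.2
search example.internal
The microservice just calls the normal resolver functions and never needs to know these values itself.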
You can use your local network to discover services, via DHCP and whatnot, but that requires that all services are already "registered" with that DNS server.
Microservices can find each other via service discovery, server-side or client-side. If you choose client-side service discovery, you can use tools like Consul, which provides a bunch of great features, one of which is a DNS endpoint that allows queries via SRV records with <serviceName>.service.consul domain names.
Consul has its own DNS endpoint; you can configure your services to use that (usually on port 8600 locally, as Consul agents run locally).
But you can also configure an actual DNS server to forward queries to Consul, so that you can easily mix service discovery driven by Consul with manually set-up services in a BIND instance or similar...
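For example, querying the local agent's DNS endpoint directly (the service name web is just an example):
dig @127.0.0.1 -p 8600 web.service.consul SRV
The SRV answer carries both the address and the port of each healthy instance of the service.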
Known-hostname solution: the fixed part would be the service domain name, for instance xservice.com. You can query this host using standard DNS tools (e.g. dig in your shell).
Then, in the DNS zone bound to xservice.com, you add an SRV record with further details.
An SRV record lists all the service details, including:
the symbolic service name;
the canonical hostname of the machine providing the service;
the TCP (or UDP) port on which the service is available.
There is plenty of other info as well; please see Wikipedia for the complete list.
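A sketch of what such a record looks like in a zone file (the service label, priority/weight, port, and target are illustrative):
_api._tcp.xservice.com.  3600 IN SRV 10 5 8443 node1.xservice.com.
Read as: the api service over TCP for xservice.com is served by node1.xservice.com on port 8443.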
Please keep in mind that this is a somewhat static solution. If you are looking for a more dynamic one, then Oswin's answer might be a better fit :-)

How to access a site on AWS EC2 without a domain name

I just created a new site in IIS on an Amazon EC2 instance and I was wondering if there is a way to access it publicly without assigning a domain.
In detail: I created a new site, dev.example.com, which is accessible when I am logged into my instance. Is there a way to access it from outside by doing, let's say, 54.xxx.xx.xxx:80:dev.example.com?
I don't know if that's even possible so any hints are appreciated
You can definitely do this, but here's what you'll need to do:
Make sure IIS is configured to route any incoming connection on a particular IP address to your site. This is distinct from IIS specifically listening for a particular hostname (e.g. mywebsite.com).
As an alternative to the above, you could also manually override DNS resolution on your local computer (e.g. in its hosts file) and then use your web browser to visit mywebsite.com. From IIS's perspective, the user will have requested mywebsite.com just as if public DNS were set (a sketch of this is at the end of this answer).
As far as the IP address you visit, your instance will either have an ephemeral Public IP Address which will be reset when the instance is stopped and started, or an Elastic IP Address, which persists across restarts.
As @Anthony Manzo mentioned, you'll need to make sure that the security group associated with this instance allows port 80. In addition, you may want to disable Windows Firewall completely, or check that it allows port 80 in all three "zones" (Windows Firewall has three different zones to manage).
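A sketch of the local-override alternative mentioned above: add a line like the following to the hosts file on your own PC (C:\Windows\System32\drivers\etc\hosts on Windows, /etc/hosts on Linux/macOS), replacing 203.0.113.10 with your instance's public IP:
203.0.113.10    dev.example.com
Your browser will then send a Host: dev.example.com header to that address, and IIS will match its host-header binding as if public DNS existed.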
AFAIK the IP addresses assigned to EC2 instances can change during their lifetime, so you should instead allocate an Elastic IP address (which will always point to your instance). That way you don't have to deal with DNS yourself and are still always able to connect to your instance.
Have a look at "Security Groups" on the left-hand side of your EC2 web console. You'll have to allow TCP 80 (and whatever else you need) in the security group (probably 'default') first.
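If you want to script both pieces, a rough AWS CLI sketch (the instance ID, allocation ID, and group name are placeholders; the allocation ID comes from the first command's output):
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0
aws ec2 authorize-security-group-ingress --group-name default --protocol tcp --port 80 --cidr 0.0.0.0/0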

Can you use a custom DNS server within EC2?

I need to set up a custom DNS server within EC2. I have one instance that acts as the DNS server, and N other instances that use this DNS server to connect to one another. Is this possible? Basically, I need to modify the DHCP settings for the N instances so that they use the DNS server. I can't find any good documentation on modifying the DHCP settings for an instance.
Note: I did find some documents, but they seem to only apply to Amazon VPC. Is there any way to do this without using VPC?
Short answer: no, you need a VPC. But once you have the VPC created, you can effectively do whatever you like with it.
Long answer: traditional AWS hosting gets an address directly from Amazon, which means you have no control whatsoever over the IP addresses.
New accounts, however, come with a VPC by default, which means you can install a machine to act as a DNS server (I've done this in the past using Windows Active Directory).
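Once the DNS instance is running in the VPC, you point the other instances at it with a DHCP options set instead of editing each machine, roughly like this (the DNS server IP, options-set ID, and VPC ID are placeholders):
aws ec2 create-dhcp-options \
    --dhcp-configurations "Key=domain-name-servers,Values=10.0.0.53"
aws ec2 associate-dhcp-options --dhcp-options-id dopt-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0
Instances pick up the new resolver when their DHCP lease renews or after a reboot.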

Configure DNS for Windows Service Bus Namespace DNSEntry

Windows Service Bus 1.0 supports DNS-registered namespaces using New-SbNamespace -AddressingScheme DNSRegistered.
New-SbNamespace command
My Scenario:
All machines on same domain (cromwell.local)
2 Compute Nodes
1 SQL Node on separate server
2 namespaces (NamespaceA & NamespaceB for example)
Should the DNS entries (I'm leaning toward CNAMEs; I'm not a DNS guru) each point to a compute node? That doesn't seem to square with the whole gateway/redirect situation.
It doesn't matter what kind of entry, as long as you have a host name that maps to an IP (or a set of IPs, if you're using a clustered install). The host name can be a simple name (e.g. my-server) or a fully qualified domain name (e.g. my-server.mydomain.com). What's important is that the name can be resolved by both parties, and that you pass the same host name to the server when creating the namespace.
One important thing to consider is that the hostname you use must match the CN of the server's SSL certificate, to avoid auth issues (due to CN name matching). If you're using a default install on a domain-joined machine, you should use a hostname with the same domain name (since, in default installs on a domain, the server uses a *.yourdomain certificate). For all other scenarios (workgroup machines, a hostname that doesn't match the domain) you'll need to provide your own certificate. This decision will affect which namespaces you can have on the server (since all of them will need to match the certificate CN somehow), so weigh your options well.
Based on the scenario you describe, I recommend you do the following:
Your DNS name should point to the IPs of both compute nodes (I'm assuming these are the machines running Service Bus Server). Besides the DNS redirection, this will also give you DNS-based load balancing.
You can only have one namespace per DNS name, so when creating Namespace A you need to pass the CNAME you created in the first step. If you need more namespaces, you'll need to create more CNAMEs (which could be a problem for your certificate, depending on the hostname/domain names you pick).
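Whether you use a CNAME onto an existing name or plain A records, the point is a single host name that resolves to both nodes. As a sketch with A records (the label sb and the IPs are made up), which is also what gives you the round-robin behaviour:
sb.cromwell.local.    IN A 10.0.0.11
sb.cromwell.local.    IN A 10.0.0.12
You would then pass sb.cromwell.local as the host name when creating the namespace.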
PS: Service Bus Server doesn't really support a 2-node configuration. You should either go with one node for simplicity, or three if you want a highly available server.
