Configure DNS for Windows Service Bus Namespace DNSEntry

Windows Service Bus 1.0 supports DNS-registered namespaces created with the New-SbNamespace command and its -AddressingScheme DNSRegistered option.
My Scenario:
All machines on same domain (cromwell.local)
2 Compute Nodes
1 SQL Node on separate server
2 namespaces (NamespaceA & NamespaceB for example)
Should the DNS entries (I'm leaning toward CNAME - not a DNS guru) each point to a compute node? That doesn't seem to square with the whole gateway/redirect situation.

It doesn't matter what kind of entry, as long as you have a host name that maps to an IP (or a set of IPs, if you're using a clustered install). The host name can be a simple name (e.g. my-server) or a fully qualified domain name (e.g. my-server.mydomain.com). What's important is that the name can be resolved by both parties, and that you pass the same host name to the server when creating the namespace.
One important thing to consider is that the hostname you use must match the CN of the server's SSL certificate, to avoid auth issues (due to CN name matching). If you're using a default install on a domain-joined machine, you should use a hostname with the same domain name (since default installs on a domain use a *.yourdomain cert for the server). For all other scenarios (workgroup machines, or a hostname that doesn't match the domain) you'll need to provide your own cert. This decision will affect which namespaces you can have on the server (since all of them will need to match the certificate CN somehow), so weigh your options carefully.
Based on the scenario you describe, I recommend you do the following:
Your DNS name should point to the IPs of both compute nodes (I'm assuming these are the machines running Service Bus Server). Besides the DNS redirection, this will also give you DNS-based load balancing.
You can only have one namespace per DNS name, so when creating NamespaceA you need to pass the CNAME you created in the first step. If you need more namespaces, you'll need to create more CNAMEs (which could be a problem with your certificate, depending on the hostnames / domain names you pick).
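As a minimal sketch of both steps (the name sb.cromwell.local, the addresses 10.0.0.11/10.0.0.12, and the account cromwell\sbadmin are all placeholder values; the Windows DnsServer cmdlets are assumed to be available, and two A records are used instead of a CNAME so one name can cover both nodes):
Add-DnsServerResourceRecordA -ZoneName 'cromwell.local' -Name 'sb' -IPv4Address '10.0.0.11'  # compute node 1
Add-DnsServerResourceRecordA -ZoneName 'cromwell.local' -Name 'sb' -IPv4Address '10.0.0.12'  # compute node 2, gives DNS round-robin
New-SbNamespace -Name 'NamespaceA' -AddressingScheme DNSRegistered -DNSEntry 'sb.cromwell.local' -ManageUsers 'cromwell\sbadmin'
Whichever record type you choose, the -DNSEntry value must be the host name that both clients and the server can resolve.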
P.S. Service Bus Server doesn't really support a 2-node configuration. You should either go with one node for simplicity, or three if you want a highly available server.

Related

How to configure 2 files in 2 dependent instances in a CloudFormation script?

I am doing a lift and shift with software from an on-premises architecture. There are two servers (main and auxiliary) that have to talk to one another over the network. I currently have tested and confirmed that I can manually add their hostnames and private IP address to the hosts file ("C:\Windows\System32\drivers\etc\hosts") and the software works fine.
For those that don't know, this file is used by Windows to map a network hostname like EC2AM-1A2B3C to an IP address. So if I added the hostname and IP address of the main server to the hosts file of the auxiliary server, then the auxiliary server could route to the main server (i.e. PS> ping EC2AM-1A2B3C would then work).
How could I pass the required information to both servers? They both have to know the other server's private IP address and hostname. If this is not possible at server spin-up time, how might the servers connect and pass this information? I would really like to automate this if possible.
Based on your description, I have some suggestions you can refer to.
If you want two EC2 instances to be able to communicate with each other, you can add rules to their security groups.
(1) Create a security group for instance 1 and another for instance 2.
(2) Add an inbound rule to the security group of instance 1: choose "All ICMP - IPv4" and enter the security group ID of instance 2 as the source.
(3) Create the inbound rule for instance 2 with the same steps.
For more information on security group rules you can refer to the official documentation.
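As a rough AWS CLI equivalent of those console steps (sg-aaaa1111 and sg-bbbb2222 are placeholder security group IDs):
aws ec2 authorize-security-group-ingress --group-id sg-aaaa1111 --protocol icmp --port=-1 --source-group sg-bbbb2222  # allow ICMP from instance 2
aws ec2 authorize-security-group-ingress --group-id sg-bbbb2222 --protocol icmp --port=-1 --source-group sg-aaaa1111  # and the reverse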
You have tried adding the hostname and IP address of the primary server to the hosts file of the secondary server, so that each machine knows the other's IP address. Amazon CloudFormation cannot handle a circular dependency between the two instances.
You can refer to the answer to this question for a way to let both instances learn each other's IP address.
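One way to sidestep the circular dependency entirely is to assign each instance a fixed private IP in the template, so each server's user data can write the other's hosts entry from known constants. A sketch of the auxiliary server's user data, where 10.0.0.10 and MAIN-HOSTNAME are placeholders fixed elsewhere in the same template:
<powershell>
# The main server's IP and hostname are template constants, so no runtime discovery is needed
Add-Content -Path "$env:SystemRoot\System32\drivers\etc\hosts" -Value "10.0.0.10 MAIN-HOSTNAME"
</powershell>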
Hope these suggestions are useful to you.

Azure subnets for cloned Dev, Test, Product with common web server

I have 3 VMs (app, content, DB) that are part of an application deployment. I need to clone multiple copies of this VM set. There is a common web server for all sets that proxies requests to the app server in each set.
Because hostnames are duplicated, I believe I can put each SET of 3 VMs into its own subnet to prevent cross-communication and hostname duplication.
The web server will be outside these subnets (I guess in its own subnet).
If you have multiple hosts with the same hostname in the same VNET, will they have the same internal DNS name? The fact that they are firewalled into separate subnets should prevent cross traffic?
The web server will proxy based on IP address, since hostname will not resolve easily.
An alternative is one web server per VM set, 4 servers per VNET. This will work, but means 25% more VMs to manage.
Can anyone suggest the "typical" way a network engineer would architect this? (Yes, this could be cross-posted to the networking group, but it depends on Azure specifics as well as general network architecture.)
Many thanks experts.
You cannot have multiple hosts with the same hostname and internal DNS name in the same VNET; these have to be unique. The alternative seems more favorable here.

Setting internally visible DNS entries on Google cloud

I would like to set DNS records that are visible from instances inside Google Cloud.
For example, if I query DNS from my PC I'll get one IP; if I query DNS from the instance I'll get another IP (an A record, to be exact).
Ideally I'd like to do this in the most sane/convenient way possible. I could install a caching DNS server on every instance, serve authoritative results for my own records, and forward everything else (I guess bind9 can do that, never tried it before). But that's a configuration-sync mess, and it's not elegant. I assume there's a better way.
One solution is to use totally different zones for different sets of machines and use the DNS search path to select.
So for example you could set up
server1.internal.yourdomain.com IN A 1.2.3.4
server1.external.yourdomain.com IN A 5.6.7.8
Then set up your machines with resolv.conf containing either
search internal.yourdomain.com
or
search external.yourdomain.com
And then when you look up server1 on such a machine, it will return the address from the appropriate zone. This scheme means you don't need to rely on complex routing or IP detection, and you're immune to incidents where internal or external IPs leak into each other's results.
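For example, on a machine whose resolv.conf contains search internal.yourdomain.com:
ping server1  # resolves via the search path to server1.internal.yourdomain.com (1.2.3.4)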
Of course this does mean that you aren't keeping any IP addresses secret, so make sure you have other security layers in place (you probably shouldn't rely on secret IPs for security anyway).
Assuming you want your VM instances to be able to query other instances by name, and retrieve the desired instance’s private IP, this is already baked into GCP.
Google Cloud Platform (GCP) Virtual Private Cloud (VPC) networks have an internal DNS service that allows you to use instance names instead of instance IP addresses to refer to Compute Engine virtual machine (VM) instances.
Each instance has a metadata server that also acts as a DNS resolver for that instance. DNS lookups are performed for instance names. The metadata server itself stores all DNS information for the local network and queries Google's public DNS servers for any addresses outside of the local network.
[snip]
An internal fully qualified domain name (FQDN) for an instance looks like this:
hostName.c.[PROJECT_ID].internal
You can always connect from one instance to another using this FQDN.
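For example, with a hypothetical instance named app-1 in project my-project, from any other instance on the same network:
ping app-1.c.my-project.internal  # the short name app-1 also resolves within the same network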
Otherwise, if you want to serve up entirely arbitrary records to a set of machines, you’ll need to serve those records yourself (perhaps using Cloud DNS). In this case, you’d need to reconfigure the resolv.conf file on those instances appropriately (although you can’t just change the file as you see fit). Note that you can't restrict queries to only your own machines, but as David also mentioned, security through obscurity isn't security at all.
Google Cloud DNS private DNS zones were just announced in beta and do exactly what you need.

Microservices - how to find DNS IP?

In the world of microservices endpoints should not (must not) be hardcoded. One of the best ways to do this is to have a DNS and let each microservice register while starting. By doing this whenever microservice A wants to communicate with microservice B it just asks DNS for endpoints where B currently listens.
What I do not understand is: How microservices know where the DNS lives?
Basically DNS is just a 'special' service, and I can have one or multiple instances of it, right? So I shouldn't hardcode its endpoint either, or should I? And let's say I do - what if the DNS instance is moved to a different location? Do I have to manually change its location in the configuration?
Does anyone know how to design this? (Or can anyone point me to a document where this is explained? There is plenty of information about microservices and DNS, but I can't find this particular point anywhere - maybe it's just too trivial and I'm the only one who doesn't get it.)
Manual setup of DNS is possible, as stated in the other answers, but I would recommend using an infrastructure that supports service discovery in all respects. For example, Kubernetes has built-in DNS support and makes it very easy to expose a service that can consist of any number of Pods.
An infrastructure technology like Kubernetes will also make many other aspects of the microservices architectural style easier to implement, including high availability and scalability.
Please see the official docs for some more information.
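As a minimal sketch, assuming a Deployment named service-b already exists in the default namespace:
kubectl expose deployment service-b --port 80 --target-port 8080
# other Pods can now reach it via the cluster DNS name
# service-b.default.svc.cluster.local (or just service-b from the same namespace)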
DHCP solves this problem. When a host boots it sends a broadcast DHCP message. The DHCP server responds with many values, one of which is the location of DNS servers.
In the case of microservices, the host OS (or container host) will be configured for DNS via DHCP. The microservice code uses the OS DNS functions to resolve addresses.
https://en.m.wikipedia.org/wiki/Dynamic_Host_Configuration_Protocol
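For instance, on a typical Linux host the DHCP client writes the DNS server it was handed into /etc/resolv.conf, so nothing about the resolver is hardcoded in the service itself:
cat /etc/resolv.conf
# nameserver 10.0.0.2   (supplied by DHCP)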
You can use your local network to discover services, via DHCP and whatnot. But that requires that all services are already "registered" within that DNS server.
Microservices can find each other via service discovery, server-side or client-side. If you choose client-side service discovery, you can use tools like Consul, which provides a bunch of great features. One of them is a DNS endpoint that allows queries via SRV records with <serviceName>.service.consul domain names.
Consul has its own DNS endpoint, and you can configure your services to use it (usually on port 8600 locally, as Consul agents run locally).
But you can also configure an actual DNS server to forward queries to Consul, so that you can easily mix service discovery driven by Consul with manually configured services in a BIND instance or similar...
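For example, assuming a Consul agent running locally with a registered service named web, an SRV lookup against Consul's DNS endpoint looks like this:
dig @127.0.0.1 -p 8600 web.service.consul SRV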
Another option is a known-hostname solution. The fixed part would be the service domain name, for instance xservice.com. You can query this host using standard DNS tools (e.g., dig in your shell).
Finally, in the DNS zone bound to xservice.com, you add an SRV record with further details.
An SRV record lists all the service details, including:
the symbolic service name;
the canonical hostname of the machine providing the service;
the TCP (or UDP) port on which the service is available.
There is other info as well; please see Wikipedia for the complete list.
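For illustration, an SRV record for a hypothetical API service on xservice.com might look like this in the zone (priority 10, weight 5, port 8443, then the target host):
_api._tcp.xservice.com. 3600 IN SRV 10 5 8443 host1.xservice.com.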
Please keep in mind this is a somewhat static solution. If you are looking for a more dynamic one, then Oswin's answer might be a better fit :-)

Load balancing / redundancy for Windows domain controllers FOR LDAP/LDAPS ONLY

Where I work there are many apps that query Active Directory using LDAP/LDAPS and which can only be configured with a single name to query.
Obviously if that name is a domain controller there's a single point of failure. What's the best way of achieving redundancy? I think I need something like a load balancer that knows whether a domain controller is up or down. The domain controllers must be in separate sites. A solution would also need to handle LDAPS.
We're currently trying a DNS alias ldap. that uses DNS round robin, i.e. it resolves to multiple domain controllers, combined with a BMC Patrol script that polls the domain controllers and deletes their ldap. record if they're offline. But in testing we're getting a peculiar (to me) result: an LDAP query to ldap. succeeds and the domain controller sends the answer, but it then sends a referral to a name LDAP://domaindnszones. and a couple of (unix) apps crack up at that point, trying to do the second query authenticated as "root", which fails.
I'd be grateful for any thoughts... thanks in advance.
Doing this with a load balancer is not uncommon if you have apps that just want to do simple binds. You'll want to load balance port 636 for LDAP/S if you can make that a requirement for your apps. If you have multiple domains in your forest, port 3269 is the global catalog LDAP/S port.
As far as certificates go, you have two options:
Put an SSL cert on each DC with just the DC's hostname, and then put a cert on the load balancer for the VIP (e.g. ldap.contoso.com). Have the load balancer do re-encryption.
Put an SSL cert on each DC that has the DC's hostname in it, and a subject alternate name (SAN) of ldap.contoso.com. Simply pass the traffic through the load balancer.
For #2, it's important to note that AD will only bind to a certificate that has the DC's hostname either in the subject name field or in the first subject alternative name field.
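As a lab-only sketch of option #2 (a production DC should use a CA-issued certificate; dc1.contoso.com and ldap.contoso.com are placeholder names), note the DC's own hostname comes first so it lands in the first SAN position:
New-SelfSignedCertificate -DnsName "dc1.contoso.com","ldap.contoso.com" -CertStoreLocation Cert:\LocalMachine\My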
