Docker Swarm across 2 sites or xCluster in YugabyteDB?

[Question posted by a user on YugabyteDB Community Slack]
I am wondering whether, for the attached (image) setup, I should set up xCluster/asynchronous replication between Site A and Site B as described in https://docs.yugabyte.com/preview/deploy/multi-dc/async-replication/, or simply create a single swarm of containers across both sites. Is there any reason that xCluster/async replication won't work?
Some notes:
Basically, there are two physical sites that each connect to a single telco.
Via dynamic DNS I can get a stable URL that maps to whatever the link's variable IP address is at the time.
Within each site, the machines natively talk to each other via their internal IP addresses, but they use the dynamic DNS name plus port forwarding to communicate with machines at the other site.
The goal at this stage is a yb-master and yb-tserver on each node.
Perhaps it's all handled by the magic of Docker networking and YugabyteDB works seamlessly with all that?

No, this will not work. The servers need to be able to connect to each other directly, exactly as they do within a site: think of A2 sending and receiving changes directly to and from B2, not through a port-forwarded dynamic DNS address.
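As a quick sanity check before going further, you can verify that kind of direct reachability from each node. A minimal Python sketch, assuming the default YugabyteDB RPC ports 7100 (yb-master) and 9100 (yb-tserver); the addresses are hypothetical placeholders:

```python
import socket

# Hypothetical node addresses; replace with your actual hosts.
SITE_B = ["10.0.2.1", "10.0.2.2"]
PORTS = [7100, 9100]  # default yb-master / yb-tserver RPC ports

def reachable(host, port, timeout=3):
    """Return True if a direct TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run this from a Site A node: every Site B node must be directly reachable.
for host in SITE_B:
    for port in PORTS:
        status = "OK" if reachable(host, port) else "UNREACHABLE"
        print(f"{host}:{port} {status}")
```

If any of these come back unreachable, neither xCluster nor a single stretched cluster will replicate across the sites.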

Related

How to create and manage floating IP for a local highly available cluster?

I currently have a highly available cluster for multiple services of my application. The cluster is working without any problem on AWS and now I want to replicate and adapt the whole structure within a local network.
I use Pacemaker/Corosync to share an AWS Elastic IP between two HAProxy instances, but I'm not sure whether the same flow is possible within my local network, since I don't know how to share a single local IP between two of the computers.
Is it possible to manage a single local IP as a floating IP within local network?
Have a look at the HAProxy with VRRP and Keepalived setup. I will have to do a test in my homelab if you need configs.
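If it helps in the meantime, here is a minimal Keepalived VRRP sketch for the balancer pair; the interface name, router ID, password, and floating address are all hypothetical placeholders:

```
# /etc/keepalived/keepalived.conf (primary balancer; hypothetical values)
vrrp_instance VI_1 {
    state MASTER              # BACKUP on the second balancer
    interface eth0
    virtual_router_id 51
    priority 100              # lower (e.g. 90) on the backup
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass s3cret
    }
    virtual_ipaddress {
        192.168.1.100/24      # the floating IP shared by both HAProxy nodes
    }
}
```

The same file goes on both balancers with the state and priority swapped; whichever node holds the higher priority owns the floating IP, and Keepalived moves it over on failure.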

How is billing registered for an Azure Cognitive Services container?

Azure Cognitive Services now has the option to run in containers.
To register billing, you have to supply your API key and the billing endpoint URL.
Even though I configured everything correctly and the service works locally, my calls are not registered against my quota.
PS: Don't try to run the container without an internet connection; it will block the calls then ;)
Willem,
Here's what I think is going on: the Linux container host picks an IP address range for the container network that includes the IP addresses of your local DNS servers. This makes it impossible for the container to resolve names, because requests for that range end up on the local container network and never reach the DNS servers.
The problem is described in this entry along with several solutions. The best solution seems to be at the very bottom, which is also described in the Docker documentation. The short version is to update the routing table on the host with the reserved IP range so that Docker won't pick it for the container network.
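As a sketch of that fix in config form (the routing-table variant achieves the same effect): pin Docker's bridge and address pools in /etc/docker/daemon.json to ranges that cannot collide with the subnet your DNS servers live on. The values below are hypothetical; pick ranges unused in your network:

```json
{
  "bip": "10.200.0.1/24",
  "default-address-pools": [
    { "base": "10.201.0.0/16", "size": 24 }
  ]
}
```

Restart the Docker daemon after changing this file so newly created container networks use the pinned ranges.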
Hope this helps,
Henrik

Microservices - how to find DNS IP?

In the world of microservices endpoints should not (must not) be hardcoded. One of the best ways to do this is to have a DNS and let each microservice register while starting. By doing this whenever microservice A wants to communicate with microservice B it just asks DNS for endpoints where B currently listens.
What I do not understand is: How microservices know where the DNS lives?
Basically, DNS is just a 'special' service, and I can have one or multiple instances of it, right? So I should not hardcode its endpoint either, or should I? And let's say I do: what if the DNS instance is moved to a different location? Do I have to change its location manually in configuration?
Does anyone know how to design this? (Or can anyone point me to a document where this is explained? Although there is plenty of information about microservices and DNS, I cannot find this particular answer anywhere; maybe it's just too trivial and I am the only one who does not get it.)
Manual setup of DNS is possible, as stated in the other answers, but I would recommend using an infrastructure that supports service discovery in all respects. For example, Kubernetes has built-in DNS support and makes it very easy to expose a service that can consist of any number of Pods.
An infrastructure technology like Kubernetes will also make many other aspects of the microservices architectural style easier to implement, including high availability and scalability.
Please see the official docs for some more information.
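To illustrate the built-in DNS: a plain Kubernetes Service gives a set of Pods a stable DNS name, so clients never need to know Pod IPs. A minimal sketch (all names hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-b        # resolvable as service-b.<namespace>.svc.cluster.local
spec:
  selector:
    app: service-b       # routes traffic to Pods carrying this label
  ports:
    - port: 80           # port clients connect to
      targetPort: 8080   # port the Pods actually listen on
```

Other Pods in the same namespace then simply call http://service-b, and the cluster DNS resolves that name to the Service's current endpoints, however often the Pods move.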
DHCP solves this problem. When a host boots it sends a broadcast DHCP message. The DHCP server responds with many values, one of which is the location of DNS servers.
In the case of microservices, the host OS (or container host) will be configured for DNS via DHCP. The microservice code uses the OS DNS functions to resolve addresses.
https://en.m.wikipedia.org/wiki/Dynamic_Host_Configuration_Protocol
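To make that concrete: the service never configures DNS itself; it just uses whatever resolvers the host picked up (typically via DHCP). A small Python sketch that reads them on a typical Linux host:

```python
# Read the DNS servers the host learned (e.g., via DHCP) from /etc/resolv.conf.
def dhcp_dns_servers(path="/etc/resolv.conf"):
    servers = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                servers.append(parts[1])
    return servers

print(dhcp_dns_servers())  # e.g. ['192.168.1.1']
```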
You can use your local network to discover services, via DHCP and the like. But that requires that all services are already "registered" with that DNS server.
Microservices can find each other via service discovery, server-side or client-side. If you choose client-side service discovery, you can use tools like Consul, which provides a bunch of great features. One of these is a DNS endpoint that allows queries via SRV records with <serviceName>.service.consul domain names.
Consul has its own DNS endpoint; you can configure your services to use that (usually on port 8600 locally, as Consul agents run locally).
But you can also configure an actual DNS server to forward queries to Consul, so that you can easily mix service discovery driven by Consul with manually set-up services within a BIND instance or similar...
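As an illustration, a minimal SRV lookup against a local Consul agent using the dnspython library; the service name "web" is a hypothetical example:

```python
import dns.resolver  # pip install dnspython

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["127.0.0.1"]
resolver.port = 8600  # the local Consul agent's DNS endpoint

# SRV records carry the port along with the target host.
for rr in resolver.resolve("web.service.consul", "SRV"):
    print(f"{rr.target}:{rr.port}")
```

Each healthy instance of the service comes back as one SRV record, so the client sees current endpoints without any hardcoded addresses.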
Known-hostname solution: the fixed part would be the service domain name, for instance xservice.com. You can query this host using standard DNS tools (e.g., dig in your shell).
Then, in the DNS zone for xservice.com, you add an SRV record with further details.
An SRV record lists the service details, including:
the symbolic service name;
the canonical hostname of the machine providing the service;
the TCP (or UDP) port on which the service is available.
There is other information as well; please see Wikipedia for the complete list.
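For example, a single SRV record in the xservice.com zone could look like this (all values hypothetical):

```
; _service._proto.name     TTL  class type priority weight port target
_api._tcp.xservice.com.   3600  IN   SRV  10       60     8080 node1.xservice.com.
```

A client queries the fixed name _api._tcp.xservice.com and gets back the host and port in a single lookup.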
Please keep in mind this is a somewhat static solution. If you are looking for a more dynamic one, then Oswin's answer might be a better fit :-)

Recommended replica set config in Azure

We're running MongoDB on Azure and are in the process of setting up a production replica set (no shards) and I'm looking at the recommendations here:
http://docs.mongodb.org/ecosystem/tutorial/install-mongodb-on-linux-in-azure/
And I see the replica set config is such that the members talk to each other via external IP addresses. Isn't this going to 1) incur additional Azure costs, since the replication traffic goes through the external IPs, and 2) incur replication latency for the same reason?
At least one of our applications that will talk to Mongo will be running outside of Azure.
AWS has a feature where external DNS names, when looked up from the VMs, resolve to internal IPs, and when resolved from outside, to the external IP, which makes things significantly easier :) In my previous job, I ran a fairly large sharded MongoDB deployment in AWS...
I'm curious what you folks recommend. I had two ideas:
1) Configure each mongo host with an external IP (not entirely sure how to do this in Azure, but I'm sure it's possible...) and configure DNS to point to those IPs externally. Then give each VM an /etc/hosts file that points those same names to internal IP addresses (see the /etc/hosts sketch after these options). Run Mongo on port 27017 in all cases (or really whatever port). This means the set does replication traffic over internal IPs, but external clients can talk to it using the same DNS names.
2) Similar to #1, but run mongo on 3 different ports with only one external IP address, and point all three external DNS names to this external IP address. We achieve the same results, but I think it's cleaner.
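For option 1, the per-VM override could look like this minimal /etc/hosts sketch (hostnames and internal addresses are hypothetical):

```
# /etc/hosts on each Azure VM: replica set member names resolve to internal
# IPs here, while external clients resolve the same names to public IPs via DNS.
10.0.0.4  mongo1.example.com
10.0.0.5  mongo2.example.com
10.0.0.6  mongo3.example.com
```

The replica set config then lists mongo1.example.com:27017 and so on, and each side resolves those names to whichever address is right for it.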
Thanks!
Jerry
There is no best way, but let me clarify a few of the "objective" points:
There is no charge for any traffic moving between services / VMs / storage in the same region. Even if you connect from one VM to the other using servicename.cloudapp.net:port. No charge.
It's your choice whether you make the mongod instances externally accessible. If you do create external endpoints, you'll need to worry about securing them (e.g., Access Control Lists). Since your app is running outside of Azure, this is an option you'll need to consider. You'll also need to think about how to encrypt the database traffic (MongoDB Enterprise edition supports SSL; otherwise you need to build mongod yourself).
Again, if you expose your mongod instances externally, you need to consider whether to place them within the same cloud service (sharing an IP address, with a separate port per mongod instance) or in multiple cloud services (a unique IP address per cloud service). If the mongod instances are within the same cloud service, they can be clustered into an availability set, which reduces downtime by avoiding simultaneous host OS updates across all VMs and splits the VMs across multiple fault domains.
In the case where your app/web tier lives within Azure, you can use internal IP addresses, with both your app and MongoDB VMs within the same virtual network.

How to create Azure Input Endpoint to VRRP Virtual IP Address

I'm setting up a test web farm in Azure. Consisting of:
Four Ubuntu Servers
Two balancers running HAProxy + Keepalived
Two web servers running Apache
Keepalived has been configured and everything has been working fine. HAProxy performs great.
My issue is that I want to enable the Keepalived failover clustering, but I can't seem to figure out how to create an Input Endpoint in Azure for the virtual IP address that the Keepalived VRRP is using.
In other words, I want to create an Input Endpoint for a virtual IP address in Azure, but not for an existing VM. So far, the only thing I've been able to do is create Input Endpoints for existing VMs (using their IP) for specific port numbers. I want to be able to configure:
Take TCP requests on port XX and map them to IP address YY.YY.YY.YY on port ZZ
Anyone know of a way to do this? I've looked in both portals (new and old), and the closest thing I see is that the Cloud Services page for my VNet lists the Input Endpoints, but there is no add/edit button.
This is not currently possible in Azure. Azure IaaS VMs do not yet support multiple IPs per interface, so Keepalived will not be able to move a VIP between the nodes. We tried to do the same thing and were told it's not currently available. However, it's supposed to be on the roadmap and is "coming", as is the ability to have multiple interfaces per machine.
Input endpoints exist to expose a service on a single VM (it's a NAT), and they are not attachable to an arbitrary interface. The only option I thought through was to use Azure Traffic Manager to round-robin between the two HAProxy instances using two exposed endpoints, with a health check to fail over to a single HAProxy instance if one fails.
