Application using fasthttp (Go) does not pick up a domain's new IP after the DNS TTL expires - dns

My team uses EKS + Istio, and here is the tricky situation.
A Golang application container is deployed alongside an istio-proxy container in the same EKS pod. The application uses fasthttp to communicate with an external service outside of EKS. This external service is reachable via a domain name registered in AWS Route 53. After I changed the A record of this domain (TTL of 300 seconds) and waited for the TTL to expire, I could resolve the new A record with nslookup from the application container, but the application process kept connecting to the old A record for the same domain. Only after re-deploying the application did it connect to the new A record.
Here is the question: is it possible that an HTTP connection pool or HTTP connection cache keeps using the old A record after the DNS TTL expires, even though the container itself already resolves the new A record?
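Established keep-alive connections never re-resolve DNS, so a connection pool can indeed keep talking to the old IP long after the TTL expires; fasthttp in particular pools connections per host, and its default dialer also caches lookups for a short period. Below is a minimal sketch of one way to work around this, assuming you construct the fasthttp.Client yourself; the durations, the custom Dial, and the URL are illustrative choices, not values from the original setup:

```go
package main

import (
	"fmt"
	"net"
	"time"

	"github.com/valyala/fasthttp"
)

// newClient builds a fasthttp.Client whose pooled connections are recycled
// periodically, so new connections (and therefore fresh DNS lookups) happen
// even for a busy host. The durations here are illustrative.
func newClient() *fasthttp.Client {
	return &fasthttp.Client{
		// Close keep-alive connections after this duration even if they
		// are still in use; the next request dials (and resolves) again.
		MaxConnDuration: 2 * time.Minute,
		// Drop idle connections sooner.
		MaxIdleConnDuration: 30 * time.Second,
		// A plain net.Dialer resolves the hostname on every new
		// connection, bypassing any dialer-level DNS cache.
		Dial: func(addr string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.Dial("tcp", addr)
		},
	}
}

func main() {
	c := newClient()
	// Hypothetical endpoint, only to show the client in use.
	status, body, err := c.Get(nil, "https://external-service.example.com/health")
	fmt.Println(status, len(body), err)
}
```

Note that because the istio-proxy sidecar intercepts outbound traffic, Envoy's own DNS refresh behavior for the external service (for example the resolution settings of a corresponding ServiceEntry) may also be a factor worth checking.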

Related

Use Azure DNS if local DNS not available

I have an Azure VM which is a DNS Server.
If this server goes down, is it possible for the other VMs in the same VNet to then switch over to Azure DNS?
In my opinion, if you are afraid the DNS server might become unavailable, rather configure it for high availability instead:
Create an availability set in Azure
Create multiple instances of your DNS server inside the availability set and set them to replicate each other
However, if you are running Windows, this seems to be something you can configure inside your VMs. To my ears, though, it sounds rather hackish to rely on in case of an outage; see the linked answer for more info. In short:
The DNS Client service queries the DNS servers in the following order:
The DNS Client service sends the name query to the first DNS server on the preferred adapter's list of DNS servers and waits one second for a response.
If the DNS Client service does not receive a response from the first DNS server within one second, it sends the name query to the first DNS servers on all adapters that are still under consideration and waits two seconds for a response.
If the DNS Client service does not receive a response from any DNS server within two seconds, it sends the query to all DNS servers on all adapters that are still under consideration and waits another two seconds for a response.
If the DNS Client service still does not receive a response from any DNS server, it sends the name query to all DNS servers on all adapters that are still under consideration and waits four seconds for a response.
If the DNS Client service still does not receive a response from any DNS server, it sends the query to all DNS servers on all adapters that are still under consideration and waits eight seconds for a response.
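For completeness, the same try-the-next-server-with-a-longer-timeout idea can also be expressed at the application level. The following is only a minimal Go sketch of that pattern, not anything Azure or Windows provides; the server addresses (a hypothetical in-VNet DNS server plus Azure's well-known 168.63.129.16 resolver) and the timeouts are assumptions for illustration:

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// lookupWithFallback queries a primary DNS server first and falls back to a
// secondary one (here Azure's well-known 168.63.129.16 resolver) if the
// primary fails or times out. Addresses and timeouts are illustrative.
func lookupWithFallback(host string) ([]net.IP, error) {
	servers := []string{"10.0.0.4:53", "168.63.129.16:53"}
	var lastErr error
	for _, server := range servers {
		server := server // capture for the Dial closure below
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, network, server)
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
		ips, err := r.LookupIP(ctx, "ip4", host)
		cancel()
		if err == nil {
			return ips, nil
		}
		lastErr = err
	}
	return nil, fmt.Errorf("all DNS servers failed: %w", lastErr)
}

func main() {
	ips, err := lookupWithFallback("example.com")
	fmt.Println(ips, err)
}
```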

Elixir Ecto - DNS Resolution

We have a Phoenix app that connects to an AWS Aurora RDS instance for the database. However, we are using the DNS name (e.g. company.cluster-sdfssfd.us-east-1.rds.amazonaws.com), which is dynamic. Last night we noticed that the underlying IPs were rotated by AWS, but our app did not pick up the change and kept trying to write to the old DNS-mapped host, which was now a read-only replica. How can we get Phoenix/Ecto to automatically refresh the DNS?

Access Kubernetes DNS server in node

I'm trying to access the Kubernetes internal DNS server from a node (not a pod).
Everything is working just fine for inter-pod communication, but now I have a use case where I need a non-Docker/non-k8s app to access a service in Kubernetes.
Since my app doesn't use the k8s internal DNS, I cannot use the service name to access it.
Is there a way to tell my node to use the Kubernetes DNS?
Kubernetes uses SkyDNS and kube2sky for its DNS server. kube2sky maintains the k8s-related DNS records such as service names, while SkyDNS reads these records from etcd. So you can add the k8s DNS nameserver and search domains to the node's system DNS configuration. For example, suppose your k8s DNS server is 10.16.42.197, your search domain is domeos.sohu, and your app is running on CentOS 7. Then you need to add nameserver 10.16.42.197 and search default.svc.domeos.sohu svc.domeos.sohu domeos.sohu to the /etc/resolv.conf file, as in the snippet below.
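With the example values from the answer above, the node's /etc/resolv.conf would gain lines like the following (any existing nameserver lines can stay in place as fallbacks):

```
# Kubernetes cluster DNS for this node (example values from the answer above)
nameserver 10.16.42.197
search default.svc.domeos.sohu svc.domeos.sohu domeos.sohu
```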

Azure VM fails to register in the DNS server external to Azure

We are trying to register the Azure VM with our own DNS server but are not able to do so.
We have already set up the VPC, virtual network, and gateway to connect to our DNS server.
We have also specified our DNS server within the virtual network.
From what I understand, you're looking to register your VMs' internal IPs in your DNS server. Is that correct?
If so, Windows clients do this automatically when domain-joined, and will send an unsecured dynamic DNS update when not domain-joined, but you need to create a DNS zone for the records and allow unsecured updates, which is not the default. Linux clients need a script added to the DHCP client to send the dynamic DNS updates. I'm in the process of creating a page on Azure.com for this and can share the commands in the meantime if you're using that setup.
Gareth
(Azure DNS)

Microsoft Azure Traffic Manager

I've created an Azure Traffic Manager profile which uses failover as the load balancing method. The primary endpoint is an on-premises website test.company.com. The other endpoint is an Azure Website App which has a custom domain name xxx.mysite.com. When I added the endpoint to Traffic Manager it points to mysite.azurewebsites.net.
I've created a CNAME record at the ISP to point xxx.mysite.com to mycompany.trafficmanager.net.
When I stop the primary website to simulate a failover to the second website I get Error 404 - Web App Not Found. If I go directly to mycompany.trafficmanager.net it works as expected and displays the xxx.mysite.com website.
What am I missing in the configuration so that when I failover it displays the xxx.mysite.com website?
Azure Traffic Manager is a DNS routing system, not a load balancer. Using DNS will always introduce latency when things change. By default, Traffic Manager uses a TTL of 300 seconds, which is 5 minutes.
This means clients (like web browsers) will only check for a new address every 5 minutes, and that's assuming they actually honor the TTL value and don't cache the DNS entry even longer. There are also lots of DNS proxies and caches (such as at your ISP) that can still hold the old DNS entry. Any update will take minutes at least before clients go to the failover site.
You can lower the TTL, although this will increase the number of queries (and the resulting cost) and might decrease performance. If you absolutely can't have any downtime, then you'll have to look into running an actual load balancer that handles the traffic directly and sends it to the right place.
As of 2020, Azure now has the Front Door service which is a global load balancer that will handle the requests and failover seamlessly. Try that instead. More info here: https://azure.microsoft.com/en-us/services/frontdoor/
Can you check whether the custom domain is also added to the web app? E.g. something.mysite.com is registered as a custom hostname on mysite.azurewebsites.net.
If that step isn't done, then when the request is routed to the azurewebsites app, it will fail because there is nothing in the configuration to indicate that something.mysite.com is really mysite.azurewebsites.net.
