I'm trying to use Google Cloud Platform's Cloud DNS to resolve internal IPs of Compute Engine instances by DNS from my local machine. I was able to set up an OpenVPN server on an instance by following this guide: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-openvpn-server-on-ubuntu-16-04
My VPN client successfully connects to the OpenVPN server and allows me to ping internal IPs of my GCE instances. The instance hosting my OpenVPN server is able to resolve and ping Cloud DNS entries, but my local client machine is unable to do the same.
Here's the content of my /etc/resolv.conf file after connecting to the VPN server:
search openvpn
nameserver 169.254.169.254
What additional configuration do I need to do to allow my local machine to resolve Cloud DNS addresses?
In Compute Engine, DNS resolution is performed against the metadata server, which always has the IP 169.254.169.254. The issue arises from the fact that this IP is link-local and non-routable, so queries to it will not work over VPN/IPsec.
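You can see this from both sides of the tunnel; a quick check, assuming a hypothetical instance name and project ID:
# From a GCE instance inside the VPC: resolves fine
dig +short my-instance.c.my-project.internal @169.254.169.254
# From the VPN client: the same query times out, because 169.254.169.254
# is link-local and is never routed across the tunnel
dig +short my-instance.c.my-project.internal @169.254.169.254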
There are a few solutions/workarounds for it:
You could map all internal GCE instance IPs in the hosts files of the servers in your private network; the drawback is that the process is manual and time-consuming, depending on how many instances you have.
The second option would be an internal GCE server (internal resolver) running a DNS server that can be reached across networks; a minimal sketch follows below. More information on this is available in this documentation.
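As a sketch of the second option, assuming dnsmasq runs on the OpenVPN server instance and the VPN tunnel uses OpenVPN's default 10.8.0.0/24 subnet (both assumptions, not from the original setup):
# /etc/dnsmasq.conf on the OpenVPN server instance:
server=/internal/169.254.169.254   # forward *.internal queries to the metadata server
listen-address=10.8.0.1            # the server's tunnel IP

# /etc/openvpn/server.conf: push the resolver and search domain to clients
push "dhcp-option DNS 10.8.0.1"
push "dhcp-option DOMAIN c.my-project.internal"
After reconnecting, the VPN client resolves instance names through 10.8.0.1, which relays the queries to the metadata server from inside the VPC.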
Related
I have created an Azure DNS zone that is acting as the public resolver for hostname resolution. For example, nslookup myhost.mydomain.com will resolve to xx.yy.zz.aa via Azure name servers when called by an external, non-Azure host.
The domain mydomain.com is registered with Google Domains, where I have delegated all 4 name servers over to the Azure servers. The Google Domains DNS record set is otherwise empty.
In Azure, the DNS zone includes an "A" record set that is an Azure alias to the public IP of the internal VM that is externally known as myhost.
While this works well for external hosts, the lookups (and other usages) fail when made from an internal host. For example, on myhost itself or on a peer host in the same internal subnet, the nslookups fail (don't resolve), and the nslookup mydomain.com request returns only the internal private IP on the virtual network (the 10.x one).
What am I failing to do in order to get internal hosts to resolve FQDNs like the external ones can?
In my own validation, an Azure host does resolve the name the same way external clients do.
Verify whether the DNS servers on the Azure virtual network are set to the default Azure-provided DNS (168.63.129.16) or to a custom DNS server. Once you change this setting, restart your Azure VM for it to take effect.
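For example, with the Azure CLI (the resource group, VNet, and VM names are placeholders):
# Point the VNet at the Azure-provided resolver explicitly
az network vnet update --resource-group myResourceGroup --name myVNet --dns-servers 168.63.129.16
# Restart the VM so it picks up the new DNS setting
az vm restart --resource-group myResourceGroup --name myVM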
Please let me know if you have any questions, or share the output of running nslookup myhost.mydomain.com on the internal hosts.
I am trying to connect an on-premises laptop with a dynamic external IP to our Azure SQL server. To do this, I created a virtual network gateway and connected the laptop to the gateway. I also added a private endpoint to the SQL server. After this, I can successfully connect to the SQL server's IP using telnet, and if I map the SQL server's FQDN in the hosts file, I can connect to the server via SSMS. But without the hosts file entry, the laptop always tries to connect to the SQL server via its public endpoint/address.
I found the following article: https://techcommunity.microsoft.com/t5/azure-database-support-blog/azure-sql-db-private-link-private-endpoint-connectivity/ba-p/1235573 The article is great. It recommends using your own DNS server to resolve the SQL server FQDN to the local IP. Unfortunately, the laptop does not have access to any custom DNS, so this solution does not fit.
There are two questions:
Is there any way to establish a connection between an on-premises computer with a dynamic IP and an Azure SQL server using a private endpoint, but without our own DNS server?
If the answer to the first question is "No", is there another way to connect an on-premises computer with a dynamic IP to an Azure SQL server using any other Azure application(s)?
First of all, you cannot use an FQDN without a DNS service. So you do need a custom DNS server if your clients are to use the server's FQDN in connection strings when connecting from an on-premises machine to the Azure SQL server.
Since you are using a laptop, the DNS servers your computer uses are most likely assigned by your ISP; you have no control over them, and asking your ISP to configure a DNS forwarder is not realistic. Otherwise, you would need to deploy a DNS server in your internal network. Currently, in this scenario, the best method is to use the HOSTS file on the local machine to override public DNS.
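As a sketch, the entry maps the server's FQDN to the private endpoint's IP (the server name and 10.1.0.5 are placeholders):
# C:\Windows\System32\drivers\etc\hosts (or /etc/hosts)
10.1.0.5    myserver.database.windows.net   # private endpoint IP in the VNet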
However, if you don't like using the HOSTS file, you can provision an Azure VM as the DNS server in the same Azure virtual network as the virtual network gateway.
Main steps:
Deploy an Azure VM, RDP into that VM, and run the following PowerShell commands to install the DNS Server role:
Install-WindowsFeature -Name DNS -IncludeManagementTools
Get-WindowsFeature *DNS*
Add Azure DNS (168.63.129.16) as a forwarder on your custom DNS server, per step 5 in this blog (the Add-DnsServerForwarder PowerShell cmdlet can also do this). If you do not want to use a forwarder, you can instead create a forward lookup zone and manually add a host record matching the FQDN. You could read On-premises workloads using a DNS forwarder for more details.
After you have configured the DNS server and set the DNS forwarder, change the DNS server of the Azure VNet to your Azure VM's private IP address.
Restart your Azure VM, re-download the VPN client package, and reconnect the VPN to pick up the networking update. On the local VPN client machine, set the DNS server to the custom DNS server in the TCP/IP settings. You will then be able to look up the private IP address via the default FQDN of the Azure service.
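A rough sketch of the last two steps with the Azure CLI, assuming the DNS VM's private IP is 10.1.0.4 and the server is named myserver (both placeholders):
# Point the VNet at the custom DNS VM
az network vnet update --resource-group myResourceGroup --name myVNet --dns-servers 10.1.0.4
# From the VPN-connected laptop, the FQDN should now resolve to the
# private endpoint's private IP instead of the public one
nslookup myserver.database.windows.net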
In my example, I am using an Azure storage account, but it works the same with an Azure SQL database when using a private endpoint on Azure and a P2S VPN connection. This approach requires that there are no VPN connections on the local machine other than the P2S VPN connection.
Then you could resolve the Azure SQL server FQDN to the private IP address of the private endpoint. Note, however, that connecting to Azure SQL Server over a VPN connection may not perform any better than connecting to it directly through the public Internet and public DNS service.
I'm using Spark on a Google Cloud Platform instance (HANA Express).
I installed Spark and ran spark-shell; the shell runs fine, but I can't access the Spark web UI.
I added firewall rules to the instance, but it still doesn't work.
I've added a screenshot.
Thank you.
Have a look at the console messages:
...
Spark context Web UI available at http://sap-hanaexpress-serverinclapps-1-vm.c.hana.271411.internal:4040
...
You're not able to reach the web UI running at http://sap-hanaexpress-serverinclapps-1-vm.c.hana.271411.internal:4040 from your remote PC. As @Lamanus mentioned, this record is for internal usage only. Have a look at the documentation Internal DNS:
Virtual Private Cloud networks on Google Cloud have an internal DNS service that lets instances in the same network access each other by using internal DNS names. Internal A records for virtual machine (VM) instances are created in a DNS zone for .internal. PTR records for VM instances are created in corresponding reverse zones. As you manage your instances, Google Cloud automatically creates, updates, and removes these DNS records.
and
The internal DNS name of a VM instance only resolves to its primary internal IP address. Internal DNS names cannot be used to connect to the external IP addresses of an instance.
To solve this issue, follow the steps below:
add SPARK_LOCAL_IP="<IP address>" to your configuration file, as suggested in the console messages, where <IP address> is the internal IP of your VM
add a network tag to your VM
create a firewall rule that allows incoming connections to your VM on port 4040 (see the gcloud sketch after this list)
check your firewall by running nmap -Pn EXTERNAL_IP_OF_YOUR_VM from your PC
check the web UI in a browser at http://EXTERNAL_IP_OF_YOUR_VM:4040
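A sketch of the tagging and firewall steps with gcloud (the tag name, zone, and source range are placeholders; restrict --source-ranges to your own IP rather than 0.0.0.0/0):
# Tag the VM so the firewall rule can target it
gcloud compute instances add-tags sap-hanaexpress-serverinclapps-1-vm --zone=YOUR_ZONE --tags=spark-ui

# Allow inbound TCP 4040 to instances carrying that tag
gcloud compute firewall-rules create allow-spark-ui --network=default --direction=INGRESS --action=ALLOW --rules=tcp:4040 --target-tags=spark-ui --source-ranges=YOUR_PUBLIC_IP/32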
We are trying to register our Azure VMs with our own DNS server but are not able to do so.
We have already set up the VPC, virtual network, and gateway to connect to our DNS server.
We have also specified our DNS server within the virtual network.
From what I understand, you're looking to register your VMs' internal IPs in your DNS server. Is that correct?
If so, Windows clients do this automatically when domain-joined, and will send an unsecured dynamic DNS update when not domain-joined, but you need to create a DNS zone for the records and allow unsecured updates, which is not the default. Linux clients need a script added to the DHCP client to send the dynamic DNS updates (a sketch follows below). I'm in the process of creating a page on Azure.com for this and can share the commands in the meantime if you're using that setup.
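A rough sketch of the Linux side, assuming ISC dhclient on Debian/Ubuntu and an unsecured zone named corp.example.com served at 10.0.0.4 (all placeholders; exact hook paths vary by distribution):
#!/bin/sh
# /etc/dhcp/dhclient-exit-hooks.d/ddns-update
# Sends an unsecured dynamic DNS update whenever DHCP assigns an address.
case "$reason" in
  BOUND|RENEW|REBIND)
    nsupdate <<EOF
server 10.0.0.4
zone corp.example.com
update delete $(hostname).corp.example.com. A
update add $(hostname).corp.example.com. 300 A $new_ip_address
send
EOF
    ;;
esac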
Gareth
(Azure DNS)
I have Hadoop running on Amazon EC2 in two different sites, but when the components start, they get internal IPs. I want the components in different sites to communicate with each other using internal IPs; I'm not discussing whether that's safe. My idea is to put up a DNS server that translates the internal IPs to external IPs without the components noticing, so when traffic is addressed to an internal IP, DNS relays it to the other site.
Is it possible? Any suggestions on how to run a DNS server in EC2?
Two options:
Use a VPC, in which case you have control over which internal IPs are assigned to your instances. There are some limitations, however.
Use Elastic IPs. Connecting to the public DNS name of an Elastic IP will resolve to the internal IP when queried from within the same AWS region (a quick check follows below).
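A quick way to see this behavior (the hostname uses a documentation IP as a placeholder):
# From an instance in the same region: returns the private IP
nslookup ec2-203-0-113-25.compute-1.amazonaws.com
# From outside AWS: the same name returns the public (elastic) IP
nslookup ec2-203-0-113-25.compute-1.amazonaws.com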