I would like to enable the DNS addon for my Container Engine cluster, as described here: http://kubernetes.io/docs/user-guide/services/#dns
How do I actually enable this cluster-wide so that I can autodiscover my services instead of manually specifying IP addresses every time I relaunch a service?
DNS is enabled by default in Google Container Engine. You should be able to use it exactly as specified in the docs.
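For example, here's a minimal sketch (assuming a Service named my-service in the default namespace and the default cluster DNS suffix cluster.local); any pod in the cluster can resolve the service by name instead of by IP:

    # Check that the Service exists (name and namespace are placeholders)
    kubectl get svc my-service

    # From a throwaway pod, resolve the service through the cluster DNS
    kubectl run dns-test --image=busybox --rm -it --restart=Never -- \
        nslookup my-service.default.svc.cluster.local

Your application can then reference my-service (or the fully qualified name) in its configuration rather than a hard-coded IP.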
I have a virtual Windows Server machine running on GCP.
On that machine I have IIS running with several web sites.
Until a recent Windows update and restart, in the Site Bindings window I was able to see the external IP from the forwarding rule, like this:
[Screenshot: example from another, working server]
But now the external IP is not listed, even though the forwarding rule still exists:
[Screenshot: the real state of the server in question]
And because of this my sites are not working.
I've tried deleting and recreating forwarding rules via Cloud Shell using gcloud compute forwarding-rules delete ... and gcloud compute forwarding-rules create ..., to no avail.
I tried with and without restarts after executing each of these commands, or after both in a row.
Thank you for any help.
The problem was resolved with GCP support.
There's a Windows service called GCEAgent (Google Compute Engine Agent).
It was stopped, even though its startup type is Automatic.
Starting the service brought all forwarded IP addresses back to IIS site binding window.
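For anyone hitting the same symptom, here's a quick sketch of checking and restarting the agent from an elevated PowerShell prompt (the service name is taken from the answer above):

    # Check the current state and startup type of the GCE agent service
    Get-Service GCEAgent | Format-List Name, Status, StartType

    # Start it now and make sure it starts automatically after future reboots
    Start-Service GCEAgent
    Set-Service GCEAgent -StartupType Automatic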
I have a Tomcat server on port 8080 running on a Google Cloud Platform VM instance, and I have enabled SSL for the server. I have deployed my web application there. When I enter my domain name in the browser, my application loads.
But the port 8443 is appended, so the address looks like hostname:8443. I understand I can hide the port by using load balancing in GCP, but I am new to GCP, so I don't know how to configure it. Even after configuring it, I get an error saying there is a problem with the backend service.
Can anyone help me resolve this?
If I understand correctly, you would like to know whether the DNS record should contain the VM instance's external IP or the load balancer's external IP address. In order to use the load balancer, you need to put the load balancer's external IP in your DNS A record.
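As a sketch only (placeholder domain, and the documentation address 203.0.113.10 standing in for your load balancer's external IP), the record would look something like this in zone-file form:

    ; Point the hostname at the load balancer's external IP, not the VM's
    www.example.com.    300    IN    A    203.0.113.10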
Regarding your "1 backend service is unhealthy" error, I would ask you to check the "Firewall rules" section of GCP's Creating Health Checks documentation. You need to create ingress firewall rules, applicable to all VMs being load balanced, that allow traffic from the health-check prober IP ranges. You didn't mention which load balancer you are using; you will find GCP's load balancer offerings at this link. Based on the load balancer you are using, you need to create the appropriate health-check firewall rule.
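As an illustration (assuming a backend serving on port 8443 in the default network; the two source ranges below are Google's documented health-check prober ranges, but verify them against the Health Checks documentation for your specific load balancer):

    # Allow Google Cloud health-check probers to reach the backends on port 8443
    gcloud compute firewall-rules create fw-allow-health-check \
        --network=default \
        --direction=INGRESS \
        --action=ALLOW \
        --rules=tcp:8443 \
        --source-ranges=130.211.0.0/22,35.191.0.0/16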
I would recommend posting this type of question on Server Fault, as Stack Overflow is a Q&A site for professional and enthusiast programmers.
I own a domain name with Cloudflare nameservers. I've set up an Azure Container Instance running a container hosted in the Docker Hub container registry. When I created the container instance I specified a dns-name-label in Azure's namespace, but I want to point my custom domain to this running container instead of Azure's name.
I've been searching the Azure docs for a way to point my custom domain name to this running container in ACI, but I didn't find any information about this configuration for Azure Container Instances.
I did find some information regarding custom domains for blob storage or cloud services, but none of it applies to ACI, as the custom domain setting doesn't appear in my ACI dashboard, nor in the Azure CLI help commands.
Any information will be appreciated. I hope there is a solution that doesn't involve switching my nameservers to Azure's, as Cloudflare is working just fine.
As far as I know, ACI exposes a DNS name using a dns-name-label in Azure's namespace. The FQDN, like customlabel.azureregion.azurecontainer.io, is provided by the Azure DNS service. Unfortunately, there is no way to directly set a custom domain for ACI, but you can create a CNAME record at your DNS provider to point a subdomain like www.example.com to this FQDN.
You can then access your ACI via the subdomain www.example.com.
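As a rough sketch (the label, region, and domain are placeholders; substitute your own dns-name-label and zone), the record at your DNS provider would simply be a CNAME such as:

    ; CNAME the subdomain to the FQDN Azure assigned to the container group
    www.example.com.    300    IN    CNAME    customlabel.azureregion.azurecontainer.io.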
You may consider creating a web app for the container. Then you can create a custom domain for the web app.
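If you go that route, a sketch with the Azure CLI might look like this (the resource group, app name, and hostname are placeholders, and the domain has to be verified per Azure's requirements first):

    # Map a custom hostname to an existing Web App
    az webapp config hostname add \
        --resource-group myResourceGroup \
        --webapp-name mywebapp \
        --hostname www.example.com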
A CNAME won't work, since Docker sees that the SSL cert is for *.azurecr.io and not example.example.com, and then proceeds to do nothing. It also doesn't work with Cloudflare SSL.
Edit: I just did more research and found something very promising:
https://github.com/Azure/acr/tree/main/docs/custom-domain
cheers,
Zakaria
I use the Google Compute Engine service and have configured a static IP for the instance.
The firewall uses the default rules, which open tcp:1-65535 and udp:1-65535.
But I can't ping the instance from my local machine.
It looks like a Google Compute Engine firewall setting issue, but I don't know how to change the settings.
In the Cloud Console's Networks panel, you will need to add "icmp" to the firewall rule's "Protocols & ports" field to make the instance pingable.
For example: tcp:80,443;udp:5000-6000;icmp
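If you prefer the command line, a sketch of an equivalent rule with gcloud (the rule name is a placeholder, and this assumes the default network):

    # Allow inbound ICMP (ping) from anywhere on the default network
    gcloud compute firewall-rules create allow-icmp \
        --network=default \
        --allow=icmp \
        --source-ranges=0.0.0.0/0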
I've just set up a Windows Azure VM and installed IIS on it.
When I remote desktop onto the box I can see the default IIS website fine, but I can't get it to serve on the web from the box's IP address.
I've opened up port 80 in the Windows firewall and also added an endpoint for port 80.
I've also tried to access it with the firewall completely turned off, but to no avail.
I can't work out if there is anything else I need to do to get this working.
1. Add endpoints for port 80 (http) and port 443 (https) to the VM in the Azure portal (tip: this can be automated with PowerShell or the Azure CLI).
2. Remote desktop to the machine. Open the Windows Firewall control panel and allow traffic to port 80 (http) and port 443 (https), or just turn it off; the firewall is ON by default (tip: this can also be scripted through the VM agent / PowerShell, as in the sketch after these steps).
3. Go to the Azure portal and find the cloudapp.net subdomain for the cloud service your VM is running under. Try accessing the site with that domain. If that doesn't work, try browsing to http://localhost on the server (over remote desktop) to make sure IIS works, and troubleshoot from there.
4. Modify the DNS records of your custom domain to use a CNAME to the .cloudapp.net domain. If you need A records, make sure to use the public IP of the cloud service (just ping the .cloudapp.net domain to find it, or look in the Azure portal).
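A sketch of step 2 scripted on the VM itself, run in an elevated PowerShell session (the rule names are placeholders):

    # Open HTTP and HTTPS in the Windows firewall on the VM
    New-NetFirewallRule -DisplayName "Allow HTTP"  -Direction Inbound -Protocol TCP -LocalPort 80  -Action Allow
    New-NetFirewallRule -DisplayName "Allow HTTPS" -Direction Inbound -Protocol TCP -LocalPort 443 -Action Allow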
You might want to look into Azure Websites or Azure Cloud Services (web roles). Those are a lot easier to manage and a lot cheaper. They still offer most of the functionality.
What fixed the issue for me was to go into the Azure Portal, browse to 'Network Security Groups', select the VM and then create an inbound rule to allow traffic to port 80.
Note: also make sure that an inbound rule for port 80 is added and enabled in the firewall on the VM itself.
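For reference, a sketch of the same NSG rule with the Azure CLI (the resource group, NSG name, and priority are placeholders):

    # Allow inbound TCP traffic on port 80 through the network security group
    az network nsg rule create \
        --resource-group myResourceGroup \
        --nsg-name myVmNsg \
        --name allow-http \
        --priority 100 \
        --direction Inbound \
        --access Allow \
        --protocol Tcp \
        --destination-port-ranges 80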
Well, I deleted the existing VM and cloud service and started again, and it all worked fine out of the box this time.
How annoying! The only thing I did notice was that before, my cloud service had the same name as my VM; this time they had different names, so that might have been what was causing the issue.
Cheers
For newer VMs and pre-configured setups (2015+), it's possible your setup is using an Azure resource called a "Public IP". If so, you can set a custom DNS name label on it, under "Configuration". Note that the resulting name includes the region used when creating the VM (e.g. my-site.brazilsouth.cloudapp.azure.com).
It's also worth remembering that for testing purposes, it's enough to use the public IP address that is randomly assigned to you.
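A sketch of setting that label from the CLI (the public IP resource name and label are placeholders; as far as I know this is the same setting that appears under "Configuration" in the portal):

    # Assign a DNS name label to an existing Public IP resource
    az network public-ip update \
        --resource-group myResourceGroup \
        --name myVmPublicIp \
        --dns-name my-site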
The VMs are actually accessed via a cloud service (well, they are for me). Azure created a cloud service automatically to act as the scaling engine/load balancer in front of the VM. I have to connect to the web site via that cloud service, not the VM directly.
It's possible you were using the internal IP rather than the external IP.
The sites have to use the internal IP address in the bindings section of IIS. However, in your DNS you will need to use the external IP. This is presumably because the "internal IP" is just a virtual one that Azure uses to map traffic from the external network to the VMs inside Azure.
You should find that both the internal and external IPs are visible on the VM's desktop.
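As an illustration of the binding side (the site name, internal IP, and hostname are placeholders), a binding can be added from an elevated PowerShell prompt on the VM:

    # Bind the site to the internal IP on port 80 for the public hostname
    Import-Module WebAdministration
    New-WebBinding -Name "Default Web Site" -Protocol http -IPAddress 10.0.0.4 -Port 80 -HostHeader www.example.com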
Switch off TLS 1.3 in the Registry Editor.
This is what worked for me as of writing this in Mar 2021.
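For completeness, a sketch of the usual SCHANNEL registry pattern for disabling a protocol; the exact TLS 1.3 key is an assumption on my part, so verify it against current Microsoft documentation and back up the registry before relying on it (run elevated, then reboot):

    # Disable TLS 1.3 on the server side of SCHANNEL (key path is an assumption; verify before use)
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.3\Server" /v Enabled /t REG_DWORD /d 0 /f
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.3\Server" /v DisabledByDefault /t REG_DWORD /d 1 /f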