I am new to Azure. I created an Azure ACS cluster with Kubernetes using the following config (part of the whole file).
- apiVersion: v1
  kind: Service
  metadata:
    name: my-web
  spec:
    type: LoadBalancer
    ports:
    - port: 80
      targetPort: 3000
    selector:
      app: my-web
The service can be reached, but only via its IP address. There is no secondary DNS name (like xxx.azurewebsite.com) generated. Currently, I use an A record pointing to the IP address. It works, but I am afraid the IP address will change and I will have to manually update the DNS A record. Is there a way to generate a stable DNS name for ACS services?
You can change the public IP address to static via the Azure portal; that way, restarting the service will not change the IP.
But if you delete the Kubernetes service, the public IP address will be reclaimed by the Azure platform and you will lose it. For now, Azure does not support keeping the public IP address for a deleted Kubernetes service.
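For reference, a minimal sketch of switching the existing IP to static with the Azure CLI (the resource group and IP name are placeholders; use the resource group that holds the cluster's load balancer public IPs):
# Switch the service's existing public IP to static allocation so a
# service restart does not change it (names are placeholders)
az network public-ip update \
  --resource-group <cluster-resource-group> \
  --name <public-ip-name> \
  --allocation-method Static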
Related
First of all, I am pretty new to Kubernetes and the containerized world.
My scenario is as follows:
I have an application deployed to AKS, and we are using AGIC as the ingress. The application consumes endpoints hosted outside the AKS cluster. The consumed application is publicly accessible but has IP whitelisting, so I am whitelisting the Application Gateway IP. I also created an ExternalName Service like this:
kind: Service
apiVersion: v1
metadata:
  name: service-endpoint
spec:
  type: ExternalName
  externalName: endpointname.something.com
  ports:
  - protocol: TCP
    port: 433
But it does not work.
Additionally, I tried calling the endpoint URL (https://endpointname.something.com) directly from the pod and I receive a 403.
Can someone advise on the correct steps to achieve this connectivity?
Please note that we fixed this issue by whitelisting the public IP of the AKS load balancer on the target system.
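For anyone else looking for that IP, here is a rough sketch of how the outbound public IP of an AKS cluster's load balancer can be looked up with the Azure CLI (cluster and resource group names are placeholders):
# List the IDs of the outbound public IPs used by the cluster's load balancer
az aks show --resource-group myResourceGroup --name myAKSCluster \
  --query "networkProfile.loadBalancerProfile.effectiveOutboundIPs[].id" -o tsv

# Resolve an ID to the actual address to whitelist on the target system
az network public-ip show --ids <public-ip-resource-id> --query ipAddress -o tsv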
I deployed an application in an Azure K8s cluster, using NGINX as the gateway, with a public static IP, based on AKS & PUBLIC-IP and on AKS & NGINX.
Now I need to deploy the application in an Azure private cluster, i.e., running in a private VNet (see CREATE PRIVATE AKS); attempting to assign a public static IP to NGINX does not work, which is expected since the load balancer expects a private IP, not a public one.
How can I provide inbound access to my app hosted in a private cluster, using NGINX and a public static IP?
Hi, you have two ways to achieve that, depending on your needs (and Azure costs):
1- Use Azure Application Gateway. I use Terraform myself, and here you can see the official documentation regarding the internal IP address.
Now you can use this one as your new Ingress (and get rid of NGINX) like:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: guestbook
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: frontend
          servicePort: 80
Or you could use NGINX internally as your ingress, as explained in option 2.
2- First you must have a public IP with a load balancer associated with it. The backend of that LB must be set up according to your needs.
But here is the trick: do not create NGINX with that public IP, but with an internal IP and an internal load balancer. You can see how to do that at the following URL:
https://learn.microsoft.com/en-us/azure/aks/ingress-internal-ip
And the important thing you must do is the NGINX override of the Helm parameters:
controller:
  service:
    loadBalancerIP: 10.240.0.42
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
Of course, the internal VNet must already exist and the load balancer IP must be a valid address within it.
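As a usage sketch, assuming the ingress-nginx Helm chart and that the overrides above are saved as internal-values.yaml:
# Install the controller behind an internal load balancer using the overrides
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx \
  --namespace ingress-basic --create-namespace \
  -f internal-values.yaml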
And the final trick, now that you have NGINX listening behind a private IP, is to make sure traffic arriving at the public IP is forwarded into that internal VNet. Of course, this depends on how the infrastructure behind the LB that holds the public IP is set up.
As stated in the comment above, you can do the same via Application Gateway in Azure. But if you are only going to use AKS, you might want to just use Application Gateway as your ingress controller, which is already created with the private cluster.
Please follow this to achieve the same https://microsoft.github.io/AzureTipsAndTricks/blog/tip256.html
Based on your description, I understand that you want ingress traffic to go through your NGINX ingress controller, which has a LoadBalancer service with a static IP. If your deployment is configured correctly, a LoadBalancer service with a public IP should be assigned to your NGINX ingress controller. Since I don't know your namespaces, deployment names, etc., try:
kubectl get services --all-namespaces | grep -i loadbalancer
You should find an NGINX LoadBalancer service with a public IP. Since NGINX is your ingress controller, you have a Layer 7 load balancer as ingress, so you need to create an ingress route to your application running in AKS. This is documented here from Azure (NGINX ingress) and also here (Ingress K8s).
I created AKS with an internal NGINX ingress. This comes up like below in the cluster.
Then I created an Azure Private DNS zone. In it, I created a record set like this:
Technically, I should be able to access the LoadBalancer external IP via promotion.mydomain.com (as an example). Instead, I'm getting a '502 Bad Gateway' error when I hit http://promotion.mydomain.com in the browser. Any advice?
I faced the same issue and have been able to solve it recently.
I created another Ingress, but in the desired namespace (mine was default), with the following definition (I have enabled TLS, but you can remove that part):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: my-custom-ingress
spec:
  tls:
  - hosts:
    - foo.mydomain.com
    secretName: my-tls-secret
  rules:
  - host: foo.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-foo-app-service-nodeport
          servicePort: 4444
First of all, find the EXTERNAL-IP of your NGINX ingress and keep it in mind:
kubectl get svc --namespace ingress-basic
Then, in the Azure DNS zone, you can attach the domain to an Azure resource:
1. Open the Azure portal.
2. Go into the MC_... resource group created by your AKS cluster.
3. Find the LoadBalancer resource and click it.
4. On the LoadBalancer, go into "Frontend IP Configuration". You'll see a list of public IPs, each with a related ResourceId (example: 11.22.33.44 (xxx-yyyy-bbb)).
5. Find the IP corresponding to the EXTERNAL-IP you found before step 1 and memorize the associated resource ID.
6. Open your Azure DNS zone and create a new record set (or edit one).
7. Set "Alias Record Set: Yes", then "Alias type: Resource".
8. Under "Azure Resource", find the resource that has the ResourceId you found in step 5 and select it.
9. Save.
Now it should work.
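If you prefer the CLI over the portal, the same alias record can be created roughly like this (zone, record name, and the public IP resource ID are placeholders):
# Create an alias A record that tracks the load balancer's public IP resource
az network dns record-set a create \
  --resource-group <dns-zone-resource-group> \
  --zone-name mydomain.com \
  --name foo \
  --target-resource <public-ip-resource-id>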
I see your purpose is to create AKS with an internal NGINX ingress and use a custom DNS name. I also see your ingress external IP is 10.240.0.42; it seems to be a private IP from the subnet your AKS nodes are in.
So I think you need to create an Azure Application Gateway or an Azure Load Balancer to route requests from the Internet to your internal NGINX ingress. The A record also needs to change: point it at the public IP of whichever you choose, the Application Gateway or the Load Balancer. You will also need to update that record in whichever DNS server hosts your custom domain.
When all of this is done, the request routing path will look like this (an Application Gateway CLI sketch follows the path):
Internet (your custom DNS)
Azure DNS server
Azure public IP of the Application Gateway or Load Balancer (this is what I think you missed)
10.240.0.42 (internal IP of the NGINX ingress)
AKS NGINX ingress
Service
Deployment or Pod
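As an illustration of the Application Gateway option (all names are placeholders, the gateway needs its own dedicated subnet in the VNet, and 10.240.0.42 is the internal ingress IP from above):
# Create an Application Gateway with a public frontend and the internal
# NGINX ingress IP as the only backend pool member
az network application-gateway create \
  --resource-group myResourceGroup \
  --name myAppGateway \
  --sku Standard_v2 \
  --capacity 2 \
  --vnet-name myVnet \
  --subnet appgw-subnet \
  --public-ip-address myAppGwPublicIP \
  --frontend-port 80 \
  --http-settings-port 80 \
  --priority 100 \
  --servers 10.240.0.42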
I have deployed multiple microservices on an AKS cluster and exposed them via an NGINX ingress controller. The ingress points to a static IP with the DNS name blabla.eastus.azure.com.
The applications are exposed on blabla.eastus.azure.com/application1/, blabla.eastus.azure.com/application2/, etc.
I have created a Traffic Manager profile, blabla.trafficmanager.net, in Azure. How should I configure the AKS ingress in Traffic Manager so that Traffic Manager routes requests to an application deployed behind the AKS ingress?
Ingress.yaml configuration used:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: ns
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: blabla.eastus.azure.com
    http:
      paths:
      - backend:
          serviceName: application1
          servicePort: 80
        path: /application1(/|$)(.*)
      - backend:
          serviceName: application2
          servicePort: 80
        path: /application2(/|$)(.*)
      - backend:
          serviceName: aks-helloworld
          servicePort: 80
        path: /(.*)
When I hit curl http://blabla.trafficmanager.net, the response is default backend - 404.
When I update the host in the ingress to blabla.trafficmanager.net, I am able to access the application through http://blabla.trafficmanager.net/application1.
The same is true for any custom CNAME created. I created a CNAME, custom.domain.com, and pointed it to blabla.eastus.azure.com. So unless I update the host in the ingress directly to custom.domain.com, I am not able to access it through the custom domain.
The actual request will never pass through Traffic Manager. Traffic Manager is a DNS-based load balancing solution offered by Azure.
When you browse to the Azure TM endpoint, it resolves and gives you an IP; your browser then requests that IP address.
In your case, your AKS cluster should have a public endpoint to which TM can resolve the DNS query. You also need to create a CNAME record mapping your custom domain to the TM FQDN. If this is not done, you will get a 404.
The custom header settings mentioned above are for the probes; the actual request is sent from the client browser to the endpoint/IP that TM resolves to.
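If the custom domain's zone is hosted in Azure DNS, a sketch of such a CNAME record with the CLI (names are placeholders except the Traffic Manager FQDN from the question):
# Map custom.domain.com to the Traffic Manager profile with a CNAME
az network dns record-set cname set-record \
  --resource-group <dns-zone-resource-group> \
  --zone-name domain.com \
  --record-set-name custom \
  --cname blabla.trafficmanager.net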
One approach is to strictly control the traffic between the public DNS and the ingress controller's public IP in each region, and delegate the flexibility of how you publish services to the HTTP/SNI layer:
To keep it simple, the Ingress Controller does not have any A DNS record assigned to its public IP.
So, we'll implement the architecture from right to left following the diagram.
The Traffic Manager will have two endpoints, one per region. The value of each endpoint will be the corresponding ingress public IP.
The DNS service will have a CNAME (alias) configured for app.mydomain.com pointing to mine-apps.trafficmanager.net.
In this way, a client connecting to app.mydomain.com will resolve the Traffic Manager (TM) service, which acts as a geo DNS and, based on the client's IP, returns the closer of the two target regions, A or B.
In the same way, you can use URL or path-based routing to expose services via the ingress and control how clients connect to them. Just make sure your DNS knows how to reach the Traffic Manager; the rest will be handled by TM and the Ingress object in Kubernetes.
Last but not least, once all the integrations are properly configured and satisfy your primary need, you can extend the existing architecture and adapt it to your real requirements, for example by getting rid of static IPs in the Traffic Manager's endpoints.
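A rough CLI sketch of the Traffic Manager part of this setup (profile name, regions, and endpoint IPs are illustrative):
# Create a performance-routed Traffic Manager profile
az network traffic-manager profile create \
  --resource-group myResourceGroup \
  --name mine-apps \
  --routing-method Performance \
  --unique-dns-name mine-apps

# Register each region's ingress public IP as an external endpoint
az network traffic-manager endpoint create \
  --resource-group myResourceGroup \
  --profile-name mine-apps \
  --name region-a \
  --type externalEndpoints \
  --target <ingress-public-ip-region-a> \
  --endpoint-location eastus
# ...and the same for region B with its own ingress public IP and location.
app.mydomain.com would then simply be a CNAME to mine-apps.trafficmanager.net at your DNS provider, as described above.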
I have deployed a Kubernetes cluster to a custom virtual network on Azure using acs-engine. There is an ASP.NET Core 2.0 Kestrel app running on the agent VMs and the app is accessed over VPN through a Service of the Azure internal load balancer type. Now I would like to enable HTTPS on the service. I have already obtained a domain name and a certificate but have no idea how to proceed. Apparently configuring Kestrel to use HTTPS and copying the certificate to each container is not the way to go.
I have checked out tutorials such as "ingress on k8s using acs" and "configure Nginx Ingress Controller for TLS termination on k8s on Azure", but both of them end up exposing a public external IP, and I want to keep the IP internal and not accessible from the internet. Is this possible? Can it be done without ingresses and their controllers?
While for some reason I still can't access the app through the ingress, I was able to create an internal ingress service with the IP I want, using the following configuration:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  name: nginx-ingress-svc
spec:
  type: LoadBalancer
  ports:
  - port: 443
    targetPort: 443
  loadBalancerIP: 130.10.1.9
  selector:
    k8s-app: nginx-ingress-controller
The tutorial you linked is a bit outdated; at least, the instructions have you go to an 'examples' folder in the GitHub repo they link, which doesn't exist. Anyhow, a normal NGINX ingress controller consists of several parts: the NGINX deployment, the service that exposes it, and the default backend parts. Look at the YAMLs they ask you to deploy, find the second part of what I listed, the ingress service, and change its type from LoadBalancer to ClusterIP (or delete type altogether, since ClusterIP is the default).
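For clarity, a sketch of what the controller's exposing Service could look like after that change (the name and ports are assumptions; keep whatever the tutorial's manifest uses, and note the selector matches the one from the example above):
# Sketch only: the tutorial's controller Service with type switched to ClusterIP
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
spec:
  type: ClusterIP   # was LoadBalancer; ClusterIP is also the default if omitted
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  selector:
    k8s-app: nginx-ingress-controller
With this in place, your own internal nginx-ingress-svc from the question remains the only way in, over the internal load balancer IP.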