I have deployed a Kubernetes cluster to a custom virtual network on Azure using acs-engine. There is an ASP.NET Core 2.0 Kestrel app running on the agent VMs and the app is accessed over VPN through a Service of the Azure internal load balancer type. Now I would like to enable HTTPS on the service. I have already obtained a domain name and a certificate but have no idea how to proceed. Apparently configuring Kestrel to use HTTPS and copying the certificate to each container is not the way to go.
I have checked out tutorials such as "ingress on k8s using acs" and "configure Nginx Ingress Controller for TLS termination on k8s on Azure", but both of them end up exposing a public external IP, and I want to keep the IP internal and not accessible from the internet. Is this possible? Can it be done without ingresses and their controllers?
While for some reason I still can't access the app through the ingress, I was able to create an internal ingress service with the IP I want, using the following configuration:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  name: nginx-ingress-svc
spec:
  type: LoadBalancer
  ports:
  - port: 443
    targetPort: 443
  loadBalancerIP: 130.10.1.9
  selector:
    k8s-app: nginx-ingress-controller
The tutorial you linked is a bit outdated; for one thing, the instructions have you go to an 'examples' folder in the GitHub repo they link, but that folder doesn't exist. Anyhow, a normal nginx ingress controller consists of several parts: the nginx deployment, the service that exposes it, and the default backend parts. Look at the YAMLs they ask you to deploy, find the second part of what I listed - the ingress service - and change its type from LoadBalancer to ClusterIP (or delete type altogether, since ClusterIP is the default).
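If you go the nginx ingress controller route, TLS termination for your certificate would typically happen at the Ingress resource rather than in Kestrel. The sketch below is only an illustration: the secret name my-tls-cert, the host myapp.example.com and the backend service myapp-service are placeholders, not values from your deployment. The secret itself can be created from your certificate and key files with kubectl create secret tls my-tls-cert --cert=<cert file> --key=<key file>.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress                  # hypothetical name
spec:
  tls:
  - hosts:
    - myapp.example.com                # placeholder for your domain name
    secretName: my-tls-cert            # placeholder TLS secret created from your cert/key
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myapp-service   # placeholder for the Service in front of Kestrel
          servicePort: 80
With TLS terminated at the ingress, Kestrel can keep serving plain HTTP inside the cluster, and the internal load balancer IP stays the only entry point.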
First of all, I am pretty new to Kubernetes and the containerized world.
My scenario is as follows:
I have an application which is deployed to AKS, and we are using AGIC as the ingress. The application consumes endpoints hosted outside the AKS cluster. The consumed application is publicly accessible, but it has IP whitelisting, so I am whitelisting the Application Gateway IP. I also created an ExternalName Service as follows.
kind: Service
apiVersion: v1
metadata:
  name: service-endpoint
spec:
  type: ExternalName
  externalName: endpointname.something.com
  ports:
  - protocol: TCP
    port: 433
But it does not work.
Additionally, I tried calling the endpoint URL (https://endpointname.something.com) directly from the pod, and I receive a 403.
Can someone advise what the correct steps would be in order to achieve this connectivity?
Please note that we fixed this issue by whitelisting the public IP of the AKS load balancer on the target system.
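As a side note, the Service in the question declares port 433; if the external endpoint is reached over HTTPS, the conventional port would be 443. A corrected sketch (only the port value is changed; whether this matters depends on how the service is consumed):
kind: Service
apiVersion: v1
metadata:
  name: service-endpoint
spec:
  type: ExternalName
  externalName: endpointname.something.com
  ports:
  - protocol: TCP
    port: 443   # assumption: standard HTTPS port rather than 433
Also keep in mind that an ExternalName Service only creates a DNS alias inside the cluster; outbound traffic still leaves through the cluster's egress IP, which is why whitelisting the AKS load balancer's public IP on the target system resolved the 403.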
I have set up an AKS cluster with a pod configured to run multiple Tomcat services. My Apache web server is outside the AKS cluster and hosted on a VM, but in the same subnet. The Apache server sends requests to Tomcat, which is inside the AKS cluster, via ajp://10.x.x.x:5009/dbp_webui. I am looking for options on how to expose the Tomcat service so that my Apache can make a successful connection.
You can use an ingress to expose your service. From version 0.18.0 it supports the AJP protocol.
https://github.com/kubernetes/ingress-nginx/blob/main/Changelog.md#0180. Intro into ingress: https://kubernetes.io/docs/concepts/services-networking/ingress/
You will probably need to set an additional annotation to describe the backend protocol: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#backend-protocol
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-name
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "AJP"
spec:
  ...
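The Ingress would then route to an ordinary ClusterIP Service in front of the Tomcat pods on the AJP port. A minimal sketch, where the Service name and the app: tomcat selector are assumptions and 5009 is the AJP port from the question:
apiVersion: v1
kind: Service
metadata:
  name: tomcat-ajp            # hypothetical name
spec:
  selector:
    app: tomcat               # assumption: label carried by the Tomcat pods
  ports:
  - name: ajp
    port: 5009
    targetPort: 5009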
As @CSharpRocks mentioned in the comments, AKS nodes don't have public IP addresses by default. This means that a better option is to use a LoadBalancer service type.
https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
It will deploy an LB that will route traffic to the Pod no matter which node it resides on. AFAIK, AKS has an option to install an Ingress out of the box, with an LB.
Edit
Scratch this
Easier way: use a NodePort type service:
https://kubernetes.io/docs/concepts/services-networking/service/#nodeport
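A minimal NodePort sketch along those lines (the Service name, selector and nodePort value are assumptions, not taken from the question):
apiVersion: v1
kind: Service
metadata:
  name: tomcat-nodeport       # hypothetical name
spec:
  type: NodePort
  selector:
    app: tomcat               # assumption: label carried by the Tomcat pods
  ports:
  - port: 5009
    targetPort: 5009
    nodePort: 30509           # assumption: any free port in the 30000-32767 range
Since the Apache VM sits in the same subnet, it could then reach the AJP connector at ajp://<node-private-ip>:30509/dbp_webui.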
I have the following setup deployed on an Azure Kubernetes Service (K8s version 1.18.14) cluster:
Nginx installed via helm chart and scaled down to a single instance. It is deployed in namespace "ingress".
A simple stateful application (App A) deployed in a separate namespace with 5 replicas. The "statefulness" of the application is represented by a single random int generated at startup. The application exposes one http end point that just returns the random int. It is deployed in namespace "test".
service A of type ClusterIP exposing the http port of App A and also deployed in namespace "test":
apiVersion: v1
kind: Service
metadata:
  name: stateful-service
  namespace: "test"
spec:
  selector:
    app: stateful
  ports:
  - name: http
    port: 80
    targetPort: 8080
  type: ClusterIP
service B of type "ExternalName" (proxy service) pointing to the cluster DNS name of Service A, deployed in namespace "ingress":
apiVersion: "v1"
kind: "Service"
metadata:
name: "stateful-proxy-service"
namespace: "ingress"
spec:
type: "ExternalName"
externalName: "stateful-service.test.svc.cluster.local"
ingress descriptor for the application with sticky sessions enabled:
apiVersion: extensions/v1beta1
kind: "Ingress"
metadata:
  annotations:
    kubernetes.io/ingress.class: internal
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
  name: "ingress-stateful"
  namespace: "ingress"
spec:
  rules:
  - host: stateful.foo.bar
    http:
      paths:
      - path: /
        backend:
          serviceName: "stateful-proxy-service"
          servicePort: 80
The issue is that sticky sessions are not working correctly with this setup. The "route" cookie is issued but does not guarantee "stickiness": requests are dispatched to different pods of the backend service although the same sticky session cookie is sent. To be precise, the pod changes every 100 requests, which seems to be the default round-robin setting - it is the same without sticky sessions enabled.
I was able to make sticky sessions work when everything is deployed in the same namespace and no "proxy" service is used. Then it is OK - requests carrying the same "route" cookie always land on the same pod.
However, my setup uses multiple namespaces, and using a proxy service is the recommended way of using an ingress with applications deployed in other namespaces.
Any ideas how to resolve this?
This is a community wiki answer. Feel free to expand it.
There are two ways to resolve this issue:
Common approach: Deploy your Ingress rules in the same namespace where the app that they configure resides (a minimal sketch of this is shown after the quoted warning below).
Potentially tricky approach: try to use the ExternalName type of Service. You can define an Ingress and a Service of ExternalName type in namespace A, while the ExternalName points to the DNS name of the service in namespace B. There are two well-written answers explaining this approach in more detail:
aurelius' way
garlicFrancium's way
Notice the official docs and bear in mind that:
Warning: You may have trouble using ExternalName for some common protocols, including HTTP and HTTPS. If you use ExternalName then the hostname used by clients inside your cluster is different from the name that the ExternalName references.
For protocols that use hostnames this difference may lead to errors or unexpected responses. HTTP requests will have a Host: header that the origin server does not recognize; TLS servers will not be able to provide a certificate matching the hostname that the client connected to.
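Under the first (common) approach, the existing Ingress would simply be moved into the "test" namespace and pointed straight at the ClusterIP Service, so the nginx controller resolves the pod endpoints itself and cookie affinity can take effect. A minimal sketch based on the manifests above (only the namespace and the backend service change):
apiVersion: extensions/v1beta1
kind: "Ingress"
metadata:
  annotations:
    kubernetes.io/ingress.class: internal
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
  name: "ingress-stateful"
  namespace: "test"                          # deployed next to the app instead of in "ingress"
spec:
  rules:
  - host: stateful.foo.bar
    http:
      paths:
      - path: /
        backend:
          serviceName: "stateful-service"    # the ClusterIP Service, not the ExternalName proxy
          servicePort: 80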
What is the best way to access a web app running in an AKS container from outside the cluster with a name that is already defined in an Azure DNS zone? And could an external DNS server be helpful for this?
I would set up an ingress that points to the service which exposes the web app.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: simple-fanout-example
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: your.web.app.address
    http:
      paths:
      - path: /
        backend:
          serviceName: service
          servicePort: 8080
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
internet
|
[ Ingress ]
--|-----|--
[ Services ]
An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name based virtual hosting. An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
I would recommend reading Create an ingress controller in Azure Kubernetes Service (AKS), or using Azure Application Gateway as an ingress; this is explained here, and you can find tutorials on GitHub.
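As for the external DNS part of the question: the external-dns project can manage records in an Azure DNS zone automatically, either from the host in an Ingress rule or from an annotation on a Service. A minimal sketch, assuming external-dns is already deployed and configured for your zone (the Service name and selector are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: webapp-service                     # hypothetical name
  annotations:
    external-dns.alpha.kubernetes.io/hostname: your.web.app.address
spec:
  type: LoadBalancer
  selector:
    app: webapp                            # assumption: label on the web app pods
  ports:
  - port: 80
    targetPort: 8080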
We have created a Kubernetes cluster on Azure VMs, with a Kube master and two nodes. We have deployed an application and created a service of type "NodePort", which works well. But when we try to use type: LoadBalancer, the service is created but the external IP stays in a pending state. Currently we are unable to create a service of type LoadBalancer, and because of this the "ingress" nginx controller ends up in the same state. So we are not sure how to set up load balancing in this case.
We have tried creating a Load Balancer in Azure and using its IP as shown below in the service.
kind: Service
apiVersion: v1
metadata:
  name: jira-service
  labels:
    app: jira-software
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  selector:
    app: jira-software
  type: LoadBalancer
  loadBalancerIP: xxx.xxx.xxx.xxx
  ports:
  - name: jira-http
    port: 8080
    targetPort: jira-http
Similarly, we have one more application running on this kube cluster, and we want to access the applications based on the context path:
if we invoke Jira, it should call the backend Jira server: http://dns-name/jira
if we invoke some other app like Bitbucket: http://dns-name/bitbucket
If I understand correctly, you used type LoadBalancer on a cluster running on plain virtual machines, which will not work - type LoadBalancer works only in managed Kubernetes services like GKE, AKS, etc.
You can find more information here.
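Once a working ingress controller is in place, the context-path routing described in the question could look roughly like the sketch below. Only jira-service appears in the question; the Bitbucket service name and port are assumptions:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: apps-ingress                       # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: dns-name                         # placeholder for your DNS name
    http:
      paths:
      - path: /jira
        backend:
          serviceName: jira-service
          servicePort: 8080
      - path: /bitbucket
        backend:
          serviceName: bitbucket-service   # hypothetical Service for Bitbucket
          servicePort: 7990                # assumption: default Bitbucket Server port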