AKS can't modify AGIC on ingress creation due to the policy - azure

I've just finished setting up AKS with AGIC and Azure CNI. I'm trying to deploy NGINX to test whether I set up AKS correctly, with the following configuration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    kubernetes.io/ingress.allow-http: "false"
    appgw.ingress.kubernetes.io/use-private-ip: "false"
    appgw.ingress.kubernetes.io/override-frontend-port: "443"
spec:
  tls:
    - hosts:
        - my.domain.com
      secretName: aks-ingress-tls
  rules:
    - host: my.domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service
                port:
                  number: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: nginx
  template:
    metadata:
      labels:
        component: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: ClusterIP
  selector:
    component: nginx
  ports:
    - port: 80
      protocol: TCP
There's no error or any other log message when applying the above configuration.
> k apply -f nginx-test.yml
deployment.apps/nginx-deployment created
service/nginx-service created
ingress.networking.k8s.io/nginx-ingress created
But after further investigation of the Application Gateway, I found these entries in the Activity log, which popped up at the same time I applied the configuration.
Further details from one of the entries are as follows:
Operation name: Create or Update Application Gateway
Error code: RequestDisallowedByPolicy
Message: Resource 'my-application-gateway' was disallowed by policy.
[
  {
    "policyAssignment": {
      "name": "Encryption In Transit",
      "id": "/providers/Microsoft.Management/managementGroups/***/providers/Microsoft.Authorization/policyAssignments/EncryptionInTransit"
    },
    "policyDefinition": {
      "name": "HTTPS protocol only on Application Gateway listeners",
      "id": "/providers/microsoft.management/managementgroups/***/providers/Microsoft.Authorization/policyDefinitions/HttpsOnly_App_Gateways"
    },
    "policySetDefinition": {
      "name": "Encryption In Transit",
      "id": "/providers/Microsoft.Management/managementgroups/***/providers/Microsoft.Authorization/policySetDefinitions/EncryptionInTransit"
    }
  }
]
My organization has a policy to enforce TLS, but I'm not sure what I did wrong in my configuration, as I have already configured the ingress to only use HTTPS and have a certificate (from the secret) installed.
I'm not sure where to look and hope someone can guide me in the right direction. Thanks!

As you said, your organization has a policy that enforces TLS so that communication is encrypted over HTTPS. When you create the NGINX deployment through the YAML you posted, the application is exposed to the Application Gateway Ingress Controller over port 80, which is reserved for plain-HTTP communication. Since you have also disallowed the use of private IPs with AGIC, the gateway ends up being asked to serve my.domain.com over port 80 instead of exclusively over the SSL/TLS-terminated port 443, and that is the change the policy rejects.
I would therefore suggest configuring the NGINX application so that port 443 is used as the frontend port for the ClusterIP service and ensuring that 'SSL redirection' is enabled; that way, when the NGINX application is deployed, it will not run into the policy restriction and fail. You can also compare the listeners created on the Application Gateway and the load balancer when AGIC is provisioned for an AKS cluster.
For more detailed information on deploying an NGINX application in an AKS cluster and exposing it on specific ports, refer to the documentation link below:
https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli
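As a rough sketch of what that suggestion could look like on the ingress itself (assuming your AGIC version supports the appgw.ingress.kubernetes.io/ssl-redirect annotation; the host, secret, and service names are the ones from the question):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    # Terminate TLS on the gateway and serve clients on frontend port 443.
    appgw.ingress.kubernetes.io/override-frontend-port: "443"
    # Redirect any plain-HTTP request to the HTTPS listener.
    appgw.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
    - hosts:
        - my.domain.com
      secretName: aks-ingress-tls
  rules:
    - host: my.domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service
                port:
                  number: 80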

Related

Expose Ingress with host - what is a host and where to get it?

I am using Azure Cloud and I set up a kubernetes cluster.
Now I want to expose one service in my cluster over gRPC, so I learned I need an Ingress.
I set up the ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    #nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
  name: fortune-ingress
  #namespace: default
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: apigateway-service
                port:
                  number: 80
Now the ingress is working, but it has host "*" and no external IP, so I don't know how to connect to it.
I know I have a domain on another service, but I don't want to use it right now for development reasons; I just want a test host or external IP to check that everything is working within my cluster and focus on that.
What should I do?

Is it safe to have an internal ClusterIP backend service using HTTP behind an Nginx Ingress controller accessible via HTTPS?

I have a Service configured to be accessible via HTTP.
kind: Service
apiVersion: v1
metadata:
  name: myservice
spec:
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 8080
  type: ClusterIP
And an Nginx Ingress configured to make that internal service accessible from a specific secure subdomain of my domain.
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: myservice-ingress
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/myservice-ingress
  annotations:
    certmanager.k8s.io/issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: HTTP
spec:
  tls:
    - hosts:
        - myservice.mydomain.com
      secretName: myservice-ingress-secret-tls
  rules:
    - host: myservice.mydomain.com
      http:
        paths:
          - path: /
            backend:
              serviceName: myservice
              servicePort: 80
status:
  loadBalancer:
    ingress:
      - {}
So when I reach https://myservice.mydomain.com, I can access my service through HTTPS.
Is it safe enough, or should I configure my service and pods to communicate only through HTTPS?
It's expected behaviour since you've set TLS in your Ingress.
Note that by default the controller redirects (308) to HTTPS if TLS is enabled for that ingress. If you want to disable this behavior globally, you can use ssl-redirect: "false" in the NGINX ConfigMap.
To configure this feature for specific ingress resources, you can use the nginx.ingress.kubernetes.io/ssl-redirect: "false" annotation in the particular resource.
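For reference, a minimal sketch of the global option (the ConfigMap name and namespace below are assumptions and depend on how ingress-nginx was installed; the per-resource annotation from the previous sentence simply goes under metadata.annotations of the Ingress):
apiVersion: v1
kind: ConfigMap
metadata:
  # Typical name/namespace for a standard ingress-nginx installation.
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  # Disable the automatic 308 redirect to HTTPS for all ingresses.
  ssl-redirect: "false"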
About your question "Is it safe enough..": that is an opinion-based question, so I can only say that I would rather use HTTPS than HTTP, but it's just my opinion. You can always look up the differences between HTTP and HTTPS.

dashboard not working with https - K8s Version- v1.19.6

I have deployed Kubernetes Dashboard with a command:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
and I've edited the Service as a NodePort and configured the Ingress object accordingly. I am able to log in to the dashboard over HTTP, but I get this issue while logging in to the same URL over HTTPS:
"TLS handshake error from 10.244.0.0:44950: remote error: tls: unknown certificate"
When I configured the ingress rule with SSL, it gives the error:
"Client sent an HTTP request to an HTTPS server."
I have a Jenkins application running on the same cluster with a real certificate, and I am able to log in to the Jenkins URL with HTTPS.
Cluster Information:
k8s cluster running on (Linux Server release 7.9)
kubernetes version (v1.19.6)
Could you please suggest how to fix this issue?
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kube-system-ingress
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: "haproxy"
    ingress.kubernetes.io/ssl-passthrough: "false"
spec:
  tls:
    - hosts:
        - console.qa.test.com
      secretName: qa-pss-dashboard
  rules:
    - host: console.qa.test.com
      http:
        paths:
          - path: /
            backend:
              serviceName: kubernetes-dashboard
              servicePort: 8443
I think you have to add the annotation
ingress.kubernetes.io/backend-protocol: "HTTPS"
Please note that the kubernetes-dashboard service is exposed on port 443, not 8443, which is the port used by the deployment (the pod port).
so:
backend:
  service:
    name: kubernetes-dashboard
    port:
      number: 443
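Putting the two changes together, a sketch of the adjusted manifest could look like this (written against the networking.k8s.io/v1 Ingress API, which is available on v1.19, and assuming your HAProxy ingress controller honours the backend-protocol annotation):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kube-system-ingress
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: "haproxy"
    ingress.kubernetes.io/ssl-passthrough: "false"
    # The dashboard backend only speaks TLS, so talk HTTPS to it.
    ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
    - hosts:
        - console.qa.test.com
      secretName: qa-pss-dashboard
  rules:
    - host: console.qa.test.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kubernetes-dashboard
                port:
                  number: 443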

Replacing ip address in kubernetes with custom name

I have created a sample Spring Boot app and did the following:
1. Created a Docker image.
2. Created an Azure Container Registry and did a docker push to it.
3. Created a cluster in Azure Kubernetes Service and deployed the app to it successfully. I have chosen the external endpoint option for this.
Kubernetes external end point
Say, for service-to-service calls I don't want to use an IP like http://20.37.134.68:80 but another custom name; how can I do that?
Also, if I chose internal, is there any way to replace the name?
I tried editing the YAML with an endpoint name property but failed. Any ideas?
I think you are mixing some concepts, so I'll try to explain and help you reach what you want.
When you deploy a container image in a Kubernetes cluster, in most cases you will use a Pod or Deployment spec, which is basically a YAML file with all of your deployment/pod configuration: name, image name, etc. Here is an example of a simple echo-server app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
        - name: echo
          image: mendhak/http-https-echo
          ports:
            - name: http
              containerPort: 80
Observe the name fields in the file. Here you can configure the name for your deployment and for your containers.
In order to expose your application, you will need to use a service. Services can be internal or external. Here you can find all the service types.
For an internal service, you need to use the service type ClusterIP (the default), which means only workloads inside your cluster can reach the pods. To reach your service from other pods, you can use the service DNS name of the form my-svc.my-namespace.svc.cluster-domain.example.
Here is an example of a service for the deployment above:
apiVersion: v1
kind: Service
metadata:
  name: echo-svc
spec:
  selector:
    app: echo
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
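As a quick way to verify this (assuming the default cluster domain cluster.local and that the service lives in the default namespace), you could call the service by its DNS name from a throwaway pod:
# Run a temporary pod and hit the echo service by its in-cluster DNS name.
kubectl run dns-test --rm -it --restart=Never --image=busybox -- \
  wget -qO- http://echo-svc.default.svc.cluster.local:80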
To expose your service externally, you have the option of using a service of type NodePort or LoadBalancer, or using an ingress.
You can configure your DNS name in the ingress rules and add path rules if you want, or even configure HTTPS for your application. There are a few ingress options in Kubernetes, and one of the most popular is nginx-ingress.
Here is an example of how to configure a simple ingress for our example service:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "false"
  name: echo-ingress
spec:
  rules:
    - host: myapp.mydomain.com
      http:
        paths:
          - path: "/"
            backend:
              serviceName: echo-svc
              servicePort: 80
In the example, I'm using the DNS name myapp.mydomain.com, which means you will only be able to reach your application by this name.
After creating the ingress, you can see the external IP with the command kubectl get ing, and you can then create an A record in your DNS server.

Loadbalancer IP and Ingress IP status is pending in kubernetes

I have created a Kubernetes cluster using two Azure Ubuntu VMs. I am able to deploy and access pods and deployments using the NodePort service type, and I have checked the pods' status in the kube-system namespace: all of them show as Running. But whenever I set the service type to LoadBalancer, the LoadBalancer IP is not created and its status always shows as pending. I have also created an ingress controller for the nginx service, but it is not getting an ingress address either. While initializing the Kubernetes master, I used the following command:
kubeadm init
Below are the Deployment, Service, and Ingress manifest files.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: nginx
    servicePort: 80
$ kubectl describe svc nginx
Name: nginx
Namespace: default
Labels: app=nginx
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"nginx"},"name":"nginx","namespace":"default"},"spec":{"p...
Selector: app=nginx
Type: ClusterIP
IP: 10.96.107.97
Port: http 80/TCP
TargetPort: 80/TCP
Endpoints: 10.44.0.4:80,10.44.0.5:80,10.44.0.6:80
Session Affinity: None
Events: <none>
$ kubectl describe ingress nginx
Name:             test-ingress
Namespace:        default
Address:
Default backend:  nginx:80 (10.44.0.4:80,10.44.0.5:80,10.44.0.6:80)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     *     nginx:80 (10.44.0.4:80,10.44.0.5:80,10.44.0.6:80)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"test-ingress","namespace":"default"},"spec":{"backend":{"serviceName":"nginx","servicePort":80}}}
Events:  <none>
Do we need to mention any IP ranges (private or public) of the VMs while running kubeadm init? Or do we need to change any network settings on the Azure Ubuntu VMs?
As you created your own Kubernetes cluster rather than using an AWS-, Azure-, or GCP-provided one, there is no integrated load balancer, which is why the IP status is stuck at pending.
But you can work around this by using an ingress controller or by exposing the service directly through a NodePort, for example:
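Here is a minimal sketch of exposing the nginx service from the question as a NodePort (the nodePort value 30080 is just an arbitrary example from the default 30000-32767 range):
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
  labels:
    app: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
      # Reachable on <any-node-IP>:30080; pick any free port in the NodePort range.
      nodePort: 30080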
However, I also noticed that your nginx service uses the annotation service.beta.kubernetes.io/aws-load-balancer-type: nlb. You said you are using Azure; those annotations are platform specific, and that one is AWS specific.
That said, if you would like to experiment directly with public IPs, you can give something like the following a try: define your service with externalIPs, provided you have a public IP allocated to a node and it allows ingress traffic.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 9376
  externalIPs:
    - 80.11.12.10
But a good approach to get this done is to use an ingress controller if you are planning to build your own Kubernetes cluster.
Hope this helps.
