I have a Kubernetes cluster (AKS) in Azure, with the Application Gateway Ingress Controller (AGIC).
I deployed a WordPress Helm release on it. I would like to allow ingress traffic only from the Application Gateway Ingress Controller pod, which runs in kube-system.
So my values.yml looks like this:
# I paste only NetworkPolicy part
networkPolicy:
  enabled: true
  ingress:
    enabled: true
    ingressRules:
      customRules:
        - from:
            - namespaceSelector:
                matchLabels:
                  kubernetes.io/metadata.name: kube-system
              podSelector:
                matchLabels:
                  app: ingress-appgw
However, when I deploy the release, WordPress itself works fine, but I cannot reach it via <Application_Gateway_Ingress_Controller_PublicIP>.
In the Azure portal, when I go to the Application Gateway resource, I get the following messages:
[Screenshots: Application Gateway backend health warnings]
But when I remove the NetworkPolicy part from values.yml, the AGIC becomes healthy!
Any help please?
Thank you in advance!
This error means you don't have any healthy backend pools. Try adding health probes as shown below; they continuously monitor the health status of your backend pools and report it to the Application Gateway, which sends traffic only to healthy backends.
After adding the health probes, refresh and wait a few minutes for the update to apply, then try accessing the Application Gateway again. If you then receive an "invalid hostname" error, change the setting as shown below under the advanced options.
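The probe settings were originally shown as portal screenshots. As a rough YAML equivalent (a hedged sketch: these annotations assume a reasonably recent AGIC version, and the probe path and status codes are just example values), you can also steer the generated probe from the Ingress itself:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wordpress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    # Path the Application Gateway health probe should request (example value).
    appgw.ingress.kubernetes.io/health-probe-path: "/"
    # Status codes to treat as healthy; WordPress may answer "/" with a redirect.
    appgw.ingress.kubernetes.io/health-probe-status-codes: "200-399"
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: wordpress
                port:
                  number: 80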
Check out this SO thread by Vladam for some pointers.
Alternatively, try a network policy that blocks all traffic first, so you can confirm the policy mechanics before opening things up. Create a file named backend-policy.yaml and use the manifest below to block all incoming traffic to the pod:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: backend-policy
  namespace: development
spec:
  podSelector:
    matchLabels:
      app: webapp
      role: backend
  ingress: []
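Conversely, the intent from the question can be written as a standalone manifest. A hedged sketch, assuming the AGIC pod really carries the app: ingress-appgw label in kube-system and that the WordPress pods are labelled app.kubernetes.io/name: wordpress (adjust both selectors to your release):

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-from-agic
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: wordpress
  ingress:
    - from:
        # A single list item holding both selectors: traffic must come from a
        # pod labelled app=ingress-appgw AND in the kube-system namespace.
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              app: ingress-appgw

One caveat worth testing: with AGIC, the AGIC pod only configures the gateway, while the data traffic and health probes come straight from the Application Gateway instances in their dedicated subnet to the pod IPs. A policy that admits only the AGIC pod can therefore still leave the backend unreachable from the gateway, which would match the symptoms above; adding an ipBlock rule for the gateway's subnet CIDR is worth trying.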
Reference:
Troubleshoot backend health issues in Azure Application Gateway | Microsoft Docs
Secure pod traffic with network policy - Azure Kubernetes Service | Microsoft Docs
Related
Hi guys,
Let's see if someone can help me.
I have configured Azure Kubernetes Service (AKS) version 1.13.
I am trying to create an ingress with a static IP, but I can't get it to work.
I run kubectl create -f static-ip-svc.yaml
# File: static-ip-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  loadBalancerIP: 40.121.219.126
  ports:
    - port: 80
      name: http
      targetPort: 80
    - port: 443
      name: https
      targetPort: 443
  selector:
    # Selects nginx-ingress-controller pods
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
and it shows this error:
Type     Reason                      Age               From                Message
----     ------                      ----              ----                -------
Normal   EnsuringLoadBalancer        8s (x4 over 43s)  service-controller  Ensuring load balancer
Warning  CreatingLoadBalancerFailed  7s (x4 over 43s)  service-controller  Error creating load balancer (will retry): failed to ensure load balancer for service default/nginx-ingress-lb: timed out waiting for the condition
To create an ingress with a static IP, there are two ways to achieve it. But first, you need to understand the resource groups behind an Azure Kubernetes Service cluster, explained in "Why are two resource groups created with AKS?": there is one resource group for the AKS service itself and another for its infrastructure. Hence the two ways:
1. Use a static IP created in the infrastructure group named MC_xxxx_xxxx_location.
2. Use a static IP created in a group other than MC_xxxx_xxxx_location. In this case, you need to grant the AKS service principal enough permission on that group, at least "Network Contributor".
You can find more details in "Use a static public IP address with the Azure Kubernetes Service (AKS) load balancer". I think you used the second way but did not assign enough permission, which is why you got the error. Check the steps and try again.
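As a hedged sketch of the second option (the resource-group name and IP here are placeholders, and the annotation needs a cloud-provider version recent enough to support it), the Service can reference an IP that lives outside the MC_ group:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb
  annotations:
    # Resource group that holds the pre-created static public IP
    # (placeholder name; only needed when the IP is outside MC_xxxx_xxxx_location).
    service.beta.kubernetes.io/azure-load-balancer-resource-group: my-ip-resource-group
spec:
  type: LoadBalancer
  loadBalancerIP: 40.121.219.126
  ports:
    - port: 80
      name: http
      targetPort: 80
  selector:
    app.kubernetes.io/name: ingress-nginx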
We have created a Kubernetes cluster on Azure VMs, with a master and two nodes. We have deployed an application and created a NodePort service, which works well. But when we try type: LoadBalancer, the service is created but its external IP stays in the pending state. Because we cannot create a LoadBalancer service, the nginx ingress controller ends up in the same state, so we are not sure how to set up load balancing in this case.
We have tried creating a load balancer in Azure and using its IP in the service, as shown below.
kind: Service
apiVersion: v1
metadata:
  name: jira-service
  labels:
    app: jira-software
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  selector:
    app: jira-software
  type: LoadBalancer
  loadBalancerIP: xxx.xxx.xxx.xxx
  ports:
    - name: jira-http
      port: 8080
      targetPort: jira-http
Similarly, we have one more application running on this cluster, and we want to access the applications based on the context path:
if we invoke Jira, it should call the Jira backend server: http://dns-name/jira
if we invoke some other app like Bitbucket: http://dns-name/bitbucket
If I understand correctly, you used type LoadBalancer on plain virtual machines, which will not work out of the box: type LoadBalancer needs a configured cloud provider integration, as you get in managed Kubernetes services like GKE, AKS, etc.
You can find more information here.
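For the context-path half of the question, here is a hedged sketch of path-based routing with an nginx ingress, using the current networking.k8s.io/v1 API (the Bitbucket service name and port are assumptions; jira-service and 8080 come from the manifest above):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps-by-path
  annotations:
    # Strip the matched prefix before forwarding to the backend.
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /jira
            pathType: Prefix
            backend:
              service:
                name: jira-service
                port:
                  number: 8080
          - path: /bitbucket
            pathType: Prefix
            backend:
              service:
                name: bitbucket-service   # assumed name
                port:
                  number: 7990            # assumed Bitbucket port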
I have to create a Kubernetes cluster in MS Azure manually, not using AKS. So:
I've created 2 VMs in one availability set: one for the k8s master and one for a k8s node.
I've created an external load balancer and added the 2 VMs to its backend pool.
I've created the k8s cluster using kubespray.
I've created a Deployment and a LoadBalancer Service:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: wrapper
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: wrapper
    spec:
      containers:
        - name: wrapper
          image: wrapper:latest
          ports:
            - containerPort: 8080
              name: wrapper
---
apiVersion: v1
kind: Service
metadata:
  name: wrapper
spec:
  loadBalancerIP: <azure_loadbalancer_public_ip>
  type: LoadBalancer
  ports:
    - port: 8080
  selector:
    app: wrapper
But the LoadBalancer service's external IP is always pending:
kubectl get services
NAME      TYPE           CLUSTER-IP    EXTERNAL-IP
wrapper   LoadBalancer   10.233.38.7   <pending>
Also, telnet azure_loadbalancer_public_ip doesn't work. I've tried NodePort instead of LoadBalancer, but then I have two endpoints for my service, one on the k8s master and one on the node.
What I want is a single entry point, azure_loadbalancer_public_ip, that balances traffic across all nodes in the cluster.
Could you please help me understand what I'm doing wrong? Is it possible to "bind" an Azure external load balancer to a LoadBalancer service in Kubernetes?
You don't have to do that; Kubernetes (when it's configured properly) handles it for you. All you have to do is give it the rights to create a load balancer in Azure.
Right now it can't talk to the Azure API to create a load balancer. You need to:
1. Add the option --cloud-provider=azure to your kube-apiserver, kube-controller-manager, and all the kubelets running on your nodes.
2. Make sure that your Azure VMs have access to the Azure API.
3. Restart all the components from step 1.
This is not needed if you have the cloud controller manager installed, which is beta in Kubernetes 1.12 as of this writing. Note that the --cloud-provider option will be deprecated at some point in favor of it.
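As a hedged sketch of what step 1 can look like on a static-pod control plane (the file path, image version, and azure.json contents are placeholders; kubespray has its own variables for this):

# Fragment of /etc/kubernetes/manifests/kube-controller-manager.yaml (path assumed)
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
    - name: kube-controller-manager
      image: k8s.gcr.io/kube-controller-manager:v1.12.0   # version assumed
      command:
        - kube-controller-manager
        - --cloud-provider=azure
        # JSON file holding tenantId, subscriptionId, aadClientId/aadClientSecret,
        # resourceGroup, vnetName, etc., so the manager can call the Azure API.
        - --cloud-config=/etc/kubernetes/azure.json
        # ...keep the rest of the existing flags unchanged

The same pair of flags goes on the kube-apiserver and the kubelets.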
I'm trying to create a load balancer for an Azure Kubernetes deployment. I'm using the following YAML file:
apiVersion: v1
kind: Service
metadata:
  name: test-api-lb
spec:
  type: LoadBalancer
  loadBalancerIP: XXX.XXX.XXX.XXX
  ports:
    - port: 8080
  selector:
    app: test-api
and run it with
kubectl apply -f
What I need is to create a balancer with source-IP affinity.
I found https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-distribution-mode, which explains how to configure this in Azure and which distribution modes the LB supports; a LoadBalancerDistribution attribute specifies the mode. Unfortunately, I didn't find any documentation on how to do this for a Kubernetes deployment.
Thanks in advance
Rather than creating session affinity from the Azure LB to a specific node, you should configure it on the Kubernetes service by setting sessionAffinity to ClientIP as described here.
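A minimal sketch of that setting applied to the service from the question (the timeout is an example; 10800 seconds is the default):

apiVersion: v1
kind: Service
metadata:
  name: test-api-lb
spec:
  type: LoadBalancer
  loadBalancerIP: XXX.XXX.XXX.XXX
  # Route all requests from a given client IP to the same backend pod.
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  ports:
    - port: 8080
  selector:
    app: test-api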
I have deployed a Kubernetes cluster to a custom virtual network on Azure using acs-engine. There is an ASP.NET Core 2.0 Kestrel app running on the agent VMs and the app is accessed over VPN through a Service of the Azure internal load balancer type. Now I would like to enable HTTPS on the service. I have already obtained a domain name and a certificate but have no idea how to proceed. Apparently configuring Kestrel to use HTTPS and copying the certificate to each container is not the way to go.
I have checked out tutorials such as "ingress on k8s using acs" and "configure Nginx Ingress Controller for TLS termination on k8s on Azure", but both of them end up exposing a public external IP, and I want to keep the IP internal and not accessible from the internet. Is this possible? Can it be done without ingresses and their controllers?
While for some reason I still can't access the app through the ingress, I was able to create an internal ingress service with the IP I want, using the following configuration:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  name: nginx-ingress-svc
spec:
  type: LoadBalancer
  ports:
    - port: 443
      targetPort: 443
  loadBalancerIP: 130.10.1.9
  selector:
    k8s-app: nginx-ingress-controller
The tutorial you linked is a bit outdated; for one, the instructions send you to an 'examples' folder in the GitHub repo they link, which no longer exists. Anyhow, a normal nginx ingress controller consists of several parts: the nginx deployment, the service that exposes it, and the default backend parts. Look at the YAMLs they ask you to deploy, find the second part of what I listed (the ingress service), and change its type from LoadBalancer to ClusterIP (or delete type altogether, since ClusterIP is the default).
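A minimal sketch of that change, based on the internal service above (type: ClusterIP is also the default, so the line could simply be removed):

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-svc
spec:
  # Reachable only inside the cluster; no Azure load balancer is created.
  type: ClusterIP
  ports:
    - port: 443
      targetPort: 443
  selector:
    k8s-app: nginx-ingress-controller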