I have a problem with headers not being forwarded to my services. I am not sure how the Ingress support was set up, but I have the following Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
    "nginx.org/proxy-pass-headers": "custom_header"
spec:
  rules:
  - host: myingress.westus.cloudapp.azure.com
    http:
      paths:
      - path: /service1
        backend:
          serviceName: service1
          servicePort: 8080
However, my custom_header is not forwarded. In plain nginx I would set underscores_in_headers:
underscores_in_headers on;
How can I add this configuration to my nginx ingress setup?
Thanks.
I just changed "on" to "true" for the nginx ingress controller, and it worked for me.
As mentioned here: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
data:
  enable-underscores-in-headers: "true"
kubectl apply -f configmap.yml
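If you want to confirm the setting was actually picked up, one way (a sketch; the pod name below is a placeholder) is to grep the generated nginx.conf inside the controller pod:
kubectl -n ingress-nginx get pods
kubectl -n ingress-nginx exec <controller-pod> -- grep underscores_in_headers /etc/nginx/nginx.conf
# expected output: underscores_in_headers on;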
According to the ingress ConfigMap spec you can set this option directly in the ConfigMap, e.g.:
apiVersion: v1
data:
  enable-underscores-in-headers: "true"
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
kubectl apply -f configmap.yml
Also, there is an example of setting custom headers.
Did you try that?
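For reference, the custom-headers approach from the ingress-nginx docs works roughly like this (a sketch; the ConfigMap names, namespace, and header value are illustrative): you point the main ConfigMap at a second ConfigMap that lists the headers to set on proxied requests:
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-headers
  namespace: ingress-nginx
data:
  X-Custom-Header: "my-value"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  proxy-set-headers: "ingress-nginx/custom-headers"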
Whenever you install Nginx Ingress in Kubernetes, whether from Helm or manually, it always creates controllers with it. Controllers are the primary containers that handle all the routing.
These controller pods are defined in a deployment that resides in the kube-system namespace.
This deployment is attached to a ConfigMap that also resides in kube-system.
Screenshot: the deployment containing the Nginx Ingress controller definition.
Screenshot: the default ConfigMap connected to the Ingress deployment.
Now all you have to do is add your configuration to this ConfigMap.
Screenshot: the altered/edited ConfigMap.
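Assuming a default manifest install where the ConfigMap is named nginx-configuration (adjust the name and namespace to whatever your deployment's --configmap flag points at), a minimal way to do the edit is:
# open the controller's ConfigMap in an editor
kubectl edit configmap nginx-configuration -n kube-system
# then add under data:
#   enable-underscores-in-headers: "true"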
I'm trying to access a simple ASP.NET Core application deployed on Azure AKS, but I'm doing something wrong.
This is the deployment .yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aspnetapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aspnet
  template:
    metadata:
      labels:
        app: aspnet
    spec:
      containers:
      - name: aspnetapp
        image: <my_image>
        resources:
          limits:
            cpu: "0.5"
            memory: 64Mi
        ports:
        - containerPort: 8080
and this is the service .yml:
apiVersion: v1
kind: Service
metadata:
  name: aspnet-loadbalancer
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  selector:
    name: aspnetapp
Everything seems to be deployed correctly.
Another check I did was to exec into the pod and run
curl http://localhost:80
and the application responds correctly, but if I try to access the application from the browser using http://20.103.147.69, a timeout is returned.
What else could be wrong?
It seems that you do not have an Ingress controller deployed on your AKS, as you have your application exposed directly. You will need one in order to get Ingress to work.
To verify that your application is working, you can use port-forward and then access http://localhost:8080:
kubectl port-forward deployment/aspnetapp 8080:8080
But you should definitely install an ingress controller: here is a walkthrough from Microsoft for installing ingress-nginx as the ingress controller on your cluster.
You will then expose only the ingress controller to the internet, and you can also specify the loadBalancerIP statically if you created the public IP in advance:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup # only needed if the LB is in another RG
  name: ingress-nginx-controller
spec:
  loadBalancerIP: <YOUR_STATIC_IP>
  type: LoadBalancer
The ingress controller will then route incoming traffic to your application according to an Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
spec:
  ingressClassName: nginx # ingress-nginx specific
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
PS: Never expose your application directly to the internet; always go through the ingress controller.
In your Deployment, you configured your container to listen on port 8080, so the Service's targetPort must be set to 8080. Note also that your Service selector (name: aspnetapp) does not match the Pod labels (app: aspnet); the selector has to match the Pod labels, or the Service will have no endpoints.
Documentation
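For completeness, a corrected Service for the manifests above might look like this (a sketch; it assumes the Pod label app: aspnet from the Deployment):
apiVersion: v1
kind: Service
metadata:
  name: aspnet-loadbalancer
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080 # must match the containerPort
  selector:
    app: aspnet # must match the Pod labels, not the Pod name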
Here I have explained the scenario: I can reach a ClusterIP service using the nginx ingress, but I can't reach the same service using the Azure Application Gateway Ingress. The annotation below is not helping me:
appgw.ingress.kubernetes.io/rewrite-target: /
Any idea?
Make sure you add the annotations below to example-ingress:
appgw.ingress.kubernetes.io/use-private-ip: "false"
kubernetes.io/ingress.class: azure/application-gateway
You can see the full list and examples here.
You were using the wrong annotation. I have updated your ingress with the correct annotation:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/backend-path-prefix: "/"
spec:
  rules:
  - http:
      paths:
      - path: /apple/*
        pathType: Prefix
        backend:
          service:
            name: apple-service
            port:
              number: 5678
Check out all the AGIC annotations here.
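To quickly verify that the annotations are actually on the resource after applying it, you can inspect the Ingress (example-ingress as in the manifest above):
kubectl get ingress example-ingress -o yaml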
I have an nginx ingress controller on AKS which I configured using the official guide. I also wanted to configure nginx to allow underscores in headers, so I wrote the following ConfigMap:
apiVersion: v1
kind: ConfigMap
data:
  enable-underscores-in-headers: "true"
metadata:
  name: nginx-configuration
Note that I am using the default namespace for nginx. However, after applying the ConfigMap nothing seems to happen; I see no events. What am I doing wrong here?
Name:         nginx-configuration
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","data":{"enable-underscores-in-headers":"true"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"nginx-configura...

Data
====
enable-underscores-in-headers:
----
true

Events:  <none>
The solution was to name the ConfigMap correctly. First I ran kubectl describe deploy nginx-ingress-controller, whose output contains the ConfigMap this deployment is looking for; in my case it was something like --configmap=default/nginx-ingress-controller. I renamed my ConfigMap to nginx-ingress-controller, and as soon as I did, the controller picked up the data from it and changed the configuration inside my nginx pod.
The nginx ingress controller deployment refers to a ConfigMap, which can be checked by describing the deployment:
args:
- /nginx-ingress-controller
- --configmap=$(POD_NAMESPACE)/nginx-configuration
You need to edit that ConfigMap and add the parameter there, rather than creating a new one:
kubectl edit cm nginx-configuration -n namespacename
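Alternatively, instead of opening an editor, you can merge the key into the existing ConfigMap in one step (the namespace name is a placeholder, as above):
kubectl patch configmap nginx-configuration -n namespacename \
  --type merge -p '{"data":{"enable-underscores-in-headers":"true"}}'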
I have created a Kubernetes cluster using two Azure Ubuntu VMs. I am able to deploy and access pods and deployments using the NodePort service type, and I have checked the pods' status in the kube-system namespace: all of them show as running. But whenever I set the service type to LoadBalancer, the LoadBalancer IP is not created and the status always shows as pending. I have also created an Ingress controller for the nginx service, but it does not get an Ingress address either. While initializing the Kubernetes master, I am using the following command:
kubeadm init
Below are the Deployment, Service, and Ingress manifest files.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: nginx
    servicePort: 80
$ kubectl describe svc nginx
Name:              nginx
Namespace:         default
Labels:            app=nginx
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"nginx"},"name":"nginx","namespace":"default"},"spec":{"p...
Selector:          app=nginx
Type:              ClusterIP
IP:                10.96.107.97
Port:              http  80/TCP
TargetPort:        80/TCP
Endpoints:         10.44.0.4:80,10.44.0.5:80,10.44.0.6:80
Session Affinity:  None
Events:            <none>
$ kubectl describe ingress test-ingress
Name:             test-ingress
Namespace:        default
Address:
Default backend:  nginx:80 (10.44.0.4:80,10.44.0.5:80,10.44.0.6:80)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     *     nginx:80 (10.44.0.4:80,10.44.0.5:80,10.44.0.6:80)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"test-ingress","namespace":"default"},"spec":{"backend":{"serviceName":"nginx","servicePort":80}}}
Events:  <none>
Do we need to specify any IP ranges (private or public) of the VMs when running kubeadm init, or do we need to change any network settings on the Azure Ubuntu VMs?
As you created your own Kubernetes cluster, rather than using one provided by AWS, Azure, or GCP, there is no cloud load balancer integration. That is why the external IP status stays pending.
But you can work around this problem by using an Ingress controller or by exposing the service directly through a NodePort, as sketched below.
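A minimal NodePort variant of your nginx Service, as a sketch (the Service name and nodePort value are illustrative; the nodePort must fall in the cluster's NodePort range, 30000-32767 by default):
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080 # then reach it at http://<node-ip>:30080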
However, I also noticed that your nginx service uses the annotation service.beta.kubernetes.io/aws-load-balancer-type: nlb, while you said you are using Azure. These service annotations are platform-specific, and that one is AWS-specific.
However, if you would like to experiment directly with public IPs, you can give something like this a try: define your Service with externalIPs, provided you have a public IP allocated to your node that allows ingress traffic.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 9376
  externalIPs:
  - 80.11.12.10
But if you are planning to build your own Kubernetes cluster, a better approach is to use an ingress controller.
Hope this helps.
I'm deploying Istio on Azure Kubernetes Service (AKS) and I have the following question:
Is it possible to deploy Istio using an internal load balancer? It looks like it is deployed on Azure with a public load balancer by default. What do I need to change to make it use an internal load balancer?
To answer the second question:
It is possible to add an AKS annotation for an internal load balancer, according to the AKS documentation:
To create an internal load balancer, create a service manifest named internal-lb.yaml with the service type LoadBalancer and the azure-load-balancer-internal annotation as shown in the following example:
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: internal-app
So you can set this annotation using helm with the following --set:
helm template install/kubernetes/helm/istio --name istio --namespace istio-system --set gateways.istio-ingressgateway.serviceAnnotations.'service\.beta\.kubernetes\.io/azure-load-balancer-internal'="true" > aks-istio.yaml
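Before applying it, you can sanity-check that the annotation landed in the rendered manifest:
grep -B 5 azure-load-balancer-internal aks-istio.yaml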
As mentioned in the comments, you should stick to one question per post, as advised here. So I suggest creating a second post with the other question.
Hope it helps.
Update:
For istioctl you can do the following:
Generate a manifest file for your Istio deployment; for this example I used the demo profile.
istioctl manifest generate --set profile=demo > istio.yaml
Modify istio.yaml and search for type: LoadBalancer.
---
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  annotations:
  labels:
    app: istio-ingressgateway
    release: istio
    istio: ingressgateway
spec:
  type: LoadBalancer
  selector:
    app: istio-ingressgateway
  ports:
Add the annotation for the internal load balancer like this:
---
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  labels:
    app: istio-ingressgateway
    release: istio
    istio: ingressgateway
spec:
  type: LoadBalancer
  selector:
    app: istio-ingressgateway
  ports:
After saving the changes, deploy the modified istio.yaml to your K8s cluster using:
kubectl apply -f istio.yaml
After that you can verify that the annotation is present on the istio-ingressgateway service:
$ kubectl get svc istio-ingressgateway -n istio-system -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"service.beta.kubernetes.io/azure-load-balancer-internal":"true"},"labels":{"app":"istio-ingressgateway","istio":"ingressgateway","release":"istio"},"name":"istio-ingressgateway","namespace":"istio-system"},"spec":{"ports":[{"name":"status-port","port":15020,"targetPort":15020},{"name":"http2","port":80,"targetPort":80},{"name":"https","port":443},{"name":"kiali","port":15029,"targetPort":15029},{"name":"prometheus","port":15030,"targetPort":15030},{"name":"grafana","port":15031,"targetPort":15031},{"name":"tracing","port":15032,"targetPort":15032},{"name":"tls","port":15443,"targetPort":15443}],"selector":{"app":"istio-ingressgateway"},"type":"LoadBalancer"}}
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  creationTimestamp: "2020-01-27T13:51:07Z"
Hope it helps.
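As a final check, the EXTERNAL-IP of the gateway Service should now be a private address from your AKS subnet rather than a public IP:
kubectl get svc istio-ingressgateway -n istio-system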