I have an nginx ingress controller on AKS, which I configured using the official guide. I also wanted to configure nginx to allow underscores in headers, so I wrote the following ConfigMap:
apiVersion: v1
kind: ConfigMap
data:
  enable-underscores-in-headers: "true"
metadata:
  name: nginx-configuration
Note that I am using the default namespace for nginx. However, after applying the ConfigMap nothing seems to happen and I see no events. What am I doing wrong here? Output of kubectl describe configmap nginx-configuration:
Name: nginx-configuration
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","data":{"enable-underscores-in-headers":"true"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"nginx-configura...
Data
====
enable-underscores-in-headers:
----
true
Events: <none>
The solution was to name the ConfigMap correctly. First I ran kubectl describe deploy nginx-ingress-controller, which showed the ConfigMap this deployment looks for; in my case it was something like --configmap=default/nginx-ingress-controller. I renamed my ConfigMap to nginx-ingress-controller, and as soon as I did that the controller picked up the data and changed the configuration inside my nginx pod.
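For reference, a minimal sketch of how to find out which ConfigMap the controller watches (assuming the deployment is called nginx-ingress-controller and lives in the default namespace):
kubectl describe deploy nginx-ingress-controller -n default | grep -- --configmap
# prints the container arg, e.g. --configmap=default/nginx-ingress-controller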
The nginx ingress controller deployment refers to a ConfigMap, which can be checked by describing the deployment:
args:
- /nginx-ingress-controller
- --configmap=$(POD_NAMESPACE)/nginx-configuration
You need to edit that ConfigMap and add the parameter there, rather than creating a new one:
kubectl edit cm nginx-configuration -n namespacename
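Alternatively, a sketch of doing the same non-interactively (assuming the same ConfigMap name and namespace):
kubectl patch configmap nginx-configuration -n namespacename \
  --type merge -p '{"data":{"enable-underscores-in-headers":"true"}}'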
As far as I know, a Kubernetes NetworkPolicy can only allow pods matching a label to do something. I do not want to:
Deny all traffic
Allow traffic for all pods except the ones matching my label
but instead:
Allow all traffic
Deny traffic for pods matching my label
How do I do that?
From kubectl explain NetworkPolicy.spec.ingress.from:
DESCRIPTION:
List of sources which should be able to access the pods selected for this
rule. Items in this list are combined using a logical OR operation. If this
field is empty or missing, this rule matches all sources (traffic not
restricted by source). If this field is present and contains at least one
item, this rule allows traffic only if the traffic matches at least one
item in the from list.
As far as I understand this, we can only allow, not deny.
As you mentioned in the comments, you are using the kind tool for running Kubernetes. Instead of the kindnet CNI plugin (kind's default), which does not support Kubernetes network policies, you can use the Calico CNI plugin, which supports Kubernetes network policies and also has its own, similar solution called Calico network policies.
Example: I will create a cluster with the default kind CNI plugin disabled and a NodePort mapping enabled for testing (assuming you already have the kind and kubectl tools installed).
kind-cluster-config.yaml file:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true # disable kindnet
  podSubnet: 192.168.0.0/16 # set to Calico's default subnet
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30000
    hostPort: 30000
    listenAddress: "0.0.0.0" # Optional, defaults to "0.0.0.0"
    protocol: tcp # Optional, defaults to tcp
Time to create a cluster using the above config:
kind create cluster --config kind-cluster-config.yaml
When the cluster is ready, I will install the Calico CNI plugin:
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
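If you want to block until Calico is up instead of checking manually, here is a sketch (assuming the k8s-app=calico-node label set by the manifest above):
kubectl wait pods -n kube-system \
  --selector k8s-app=calico-node \
  --for=condition=Ready \
  --timeout=120s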
Once all Calico pods are ready (kubectl get pods -n kube-system to check), I will create a sample nginx Deployment plus a Service of type NodePort to access it:
nginx-deploy-service.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30000
Let's apply it: kubectl apply -f nginx-deploy-service.yaml
So far so good. Now I will try to access nginx-service using the node IP (kubectl get nodes -o wide to check the node IP address):
curl 172.18.0.2:30000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
Okay, it's working.
Now it's time to install calicoctl and apply an example policy, based on this tutorial, that blocks ingress traffic only for pods with the label app: nginx:
calico-rule.yaml:
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: default-deny
spec:
  selector: app == "nginx"
  types:
  - Ingress
Apply it:
calicoctl apply -f calico-rule.yaml
Successfully applied 1 'GlobalNetworkPolicy' resource(s)
Now I can't reach the address 172.18.0.2:30000 which was working previously. The policy is working fine!
Read more about calico policies:
Get started with Calico network policy
Calico policy tutorial
Also check this GitHub topic for more information about NetworkPolicy support in Kind.
EDIT:
It seems the Calico plugin supports the native Kubernetes NetworkPolicy as well, so you can just install the Calico CNI plugin and apply the following policy:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
  - Ingress
I tested it and it seems to work fine as well.
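A quick way to confirm the policy is being enforced (a sketch, reusing the node IP and NodePort from the earlier test):
# this request succeeded before the policy was applied; now it should time out
curl --max-time 5 172.18.0.2:30000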
My requirement is as follows:
A developer creates a branch in Jenkins. Let's say the branch name is "mystory-101".
The developer then pushes code to this branch.
A Jenkins job starts as soon as a commit is pushed to the branch "mystory-101" and creates a new Docker image for this branch if one does not exist already.
My application is a Node.js app, so the Docker container starts Node.js and deploys the code from the branch "mystory-101".
Once the code is deployed and Node.js is running, I would also like the app to be accessible via the URL https://mystory-101.mycompany.com.
For this purpose I was reading this https://medium.com/swlh/ci-cd-pipeline-using-jenkins-dynamic-nodes-86ea854ff7a7
but I am not sure how to achieve step #5. Can you please advise how to achieve this automatically?
Reformatting the answers from the comments: given a Jenkins installation and a Kubernetes cluster, you may automate your deployments using a Jenkins plugin such as oc or kubernetes, or you may prefer to use the kubectl client directly, assuming your agents have that binary.
Without going through the RBAC specifics, you would probably need a ServiceAccount for Jenkins and use its token (found in a Secret named after your ServiceAccount). That ServiceAccount should have enough privileges to create resources in the namespaces you intend to deploy into -- usually the edit ClusterRole, bound with a namespace-scoped RoleBinding:
kubectl create sa jenkins -n my-namespace
kubectl create rolebinding jenkins-edit \
  --clusterrole=edit \
  --serviceaccount=my-namespace:jenkins \
  --namespace=my-namespace
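To get the token Jenkins will authenticate with, a sketch (assuming a cluster old enough, pre v1.24, to still auto-create a token Secret for the ServiceAccount):
SECRET=$(kubectl get sa jenkins -n my-namespace -o jsonpath='{.secrets[0].name}')
kubectl get secret "$SECRET" -n my-namespace -o jsonpath='{.data.token}' | base64 --decode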
Once Jenkins is done building your image, you would deploy it to Kubernetes, most likely creating a Deployment, a Service, and an Ingress, substituting resource names, namespaces, and the FQDN requested by your Ingress to match your requirements.
Prepare your deployment yaml, something like:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-BRANCH
spec:
  selector:
    matchLabels:
      name: app-BRANCH
  template:
    spec:
      containers:
      - image: my-registry/path/to/image:BRANCH
        [...]
---
apiVersion: v1
kind: Service
metadata:
  name: app-BRANCH
spec:
  selector:
    name: app-BRANCH
  ports:
  [...]
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-BRANCH
spec:
  rules:
  - host: app-BRANCH.my-base-domain.com
    http:
      paths:
      - backend:
          serviceName: app-BRANCH
Then, have your Jenkins agent apply that configuration, substituting values properly:
sed "s|BRANCH|$BRANCH|" deploy.yaml | kubectl apply -n my-namespace -f-
kubectl wait -n my-namespace deploy/app-$BRANCH --for=condition=Available
kubectl logs -n my-namespace deploy/app-$BRANCH --tail=200
I have created a sample Spring Boot app and did the following:
1. Created a Docker image.
2. Created an Azure Container Registry and pushed the image to it.
3. Created a cluster in Azure Kubernetes Service and deployed the app successfully. I chose the external endpoint option for this.
(Screenshot: Kubernetes external endpoint.)
Say, for service-to-service calls, I don't want to use an IP like http://20.37.134.68:80 but a custom name instead; how can I do it?
Also, if I choose internal, is there any way to replace the name?
I tried editing the YAML with an endpoint name property but failed. Any ideas?
I think you are mixing up some concepts, so I'll try to explain and help you reach what you want.
When you deploy a container image in a Kubernetes cluster, in most cases you will use a Pod or Deployment spec, which is basically a YAML file with all your deployment/pod configuration: name, image name, etc. Here is an example of a simple echo-server app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: mendhak/http-https-echo
        ports:
        - name: http
          containerPort: 80
Observe the name fields in the file. Here you can configure the name for your deployment and for your containers.
In order to expose your application, you will need to use a service. Services can be internal or external. Here you can find all service types.
For an internal service, you need to use the service type ClusterIP (the default), which means only workloads inside your cluster can reach the pods. To reach your service from other pods, you can use the DNS name of the form my-svc.my-namespace.svc.cluster-domain.example.
Here is an example of a service for the deployment above:
apiVersion: v1
kind: Service
metadata:
  name: echo-svc
spec:
  selector:
    app: echo
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
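From any other pod in the cluster, the service above is then reachable by its DNS name; a quick sketch (assuming it was created in the default namespace):
kubectl run curl-test --rm -it --restart=Never \
  --image=curlimages/curl --command -- \
  curl http://echo-svc.default.svc.cluster.local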
To expose your service externally, you have the option to use a Service of type NodePort or LoadBalancer, or to use an Ingress.
You can configure your DNS name in the ingress rules and add path rules if you want, or even configure HTTPS for your application. There are a few ingress controller options in Kubernetes, and one of the most popular is nginx-ingress.
Here is an example of how to configure a simple ingress for our example service:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "false"
  name: echo-ingress
spec:
  rules:
  - host: myapp.mydomain.com
    http:
      paths:
      - path: "/"
        backend:
          serviceName: echo-svc
          servicePort: 80
In the example, I'm using the DNS name myapp.mydomain.com, which means you will only be able to reach your application by this name.
After creating the ingress, you can see the external IP with the command kubectl get ing, and you can create an A record for it in your DNS server.
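Until that DNS record exists, you can test the ingress by passing the Host header explicitly; a sketch (replace <EXTERNAL-IP> with the address shown by kubectl get ing):
curl -H "Host: myapp.mydomain.com" http://<EXTERNAL-IP>/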
I'm deploying Istio in Azure Kubernetes Service (AKS) and I have the following question:
Is it possible to deploy Istio using an internal load balancer? It looks like it is deployed in Azure with a public load balancer by default. What do I need to change to make it use an internal load balancer?
To answer the second question:
It is possible to add the AKS annotation for an internal load balancer, according to the AKS documentation:
To create an internal load balancer, create a service manifest named internal-lb.yaml with the service type LoadBalancer and the azure-load-balancer-internal annotation as shown in the following example:
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: internal-app
So you can set this annotation using Helm with the following --set:
helm template install/kubernetes/helm/istio --name istio --namespace istio-system --set gateways.istio-ingressgateway.serviceAnnotations.'service\.beta\.kubernetes\.io/azure-load-balancer-internal'="true" > aks-istio.yaml
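The rendered manifest still has to be applied to the cluster afterwards, for example (a sketch, assuming the istio-system namespace does not exist yet):
kubectl create namespace istio-system
kubectl apply -f aks-istio.yaml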
As mentioned in the comments, you should stick to one question per post, as advised here. So I suggest creating a second post with the other question.
Hope it helps.
Update:
For istioctl you can do the following:
Generate the manifest file for your Istio deployment; for this example I used the demo profile.
istioctl manifest generate --set profile=demo > istio.yaml
Modify istio.yaml and search for type: LoadBalancer.
---
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  annotations:
  labels:
    app: istio-ingressgateway
    release: istio
    istio: ingressgateway
spec:
  type: LoadBalancer
  selector:
    app: istio-ingressgateway
  ports:
Add the annotation for the internal load balancer like this:
---
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  labels:
    app: istio-ingressgateway
    release: istio
    istio: ingressgateway
spec:
  type: LoadBalancer
  selector:
    app: istio-ingressgateway
  ports:
After saving the changes, deploy the modified istio.yaml to your K8s cluster using:
kubectl apply -f istio.yaml
After that you can verify that the annotation is present in the istio-ingressgateway service:
$ kubectl get svc istio-ingressgateway -n istio-system -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"service.beta.kubernetes.io/azure-load-balancer-internal":"true"},"labels":{"app":"istio-ingressgateway","istio":"ingressgateway","release":"istio"},"name":"istio-ingressgateway","namespace":"istio-system"},"spec":{"ports":[{"name":"status-port","port":15020,"targetPort":15020},{"name":"http2","port":80,"targetPort":80},{"name":"https","port":443},{"name":"kiali","port":15029,"targetPort":15029},{"name":"prometheus","port":15030,"targetPort":15030},{"name":"grafana","port":15031,"targetPort":15031},{"name":"tracing","port":15032,"targetPort":15032},{"name":"tls","port":15443,"targetPort":15443}],"selector":{"app":"istio-ingressgateway"},"type":"LoadBalancer"}}
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  creationTimestamp: "2020-01-27T13:51:07Z"
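You can also check the address that was assigned; with the internal annotation it should come from your AKS virtual network range rather than a public prefix (a sketch):
kubectl get svc istio-ingressgateway -n istio-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'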
Hope it helps.
I have a problem with headers not being forwarded to my services. I am not sure how support for Ingress was added, but I have the following Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
    "nginx.org/proxy-pass-headers": "custom_header"
spec:
  rules:
  - host: myingress.westus.cloudapp.azure.com
    http:
      paths:
      - path: /service1
        backend:
          serviceName: service1
          servicePort: 8080
However, my custom_header is not forwarded. In plain nginx I would set underscores_in_headers:
underscores_in_headers on;
How can I add this configuration to my nginx ingress?
Thanks.
I just used "true" instead of "on" for the nginx ingress controller, and it worked for me.
As mentioned here: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
data:
  enable-underscores-in-headers: "true"
kubectl apply -f configmap.yml
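Whether the option actually reached nginx can be checked inside the controller pod; a sketch (pod name and namespace depend on your installation):
kubectl exec -n ingress-nginx <controller-pod-name> -- \
  grep underscores_in_headers /etc/nginx/nginx.conf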
According to the ingress ConfigMap spec, you can set this option directly in the ConfigMap, e.g.:
apiVersion: v1
data:
  enable-underscores-in-headers: "on"
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
kubectl apply -f configmap.yml
There is also an example of setting custom headers.
Did you try that?
Whenever you install Nginx Ingress in Kubernetes, whether from Helm or manually, it always creates controllers with it. Controllers are the primary containers that handle all the routing.
These controller pods are defined in deployments that reside in the kube-system namespace.
This deployment is attached to a ConfigMap that also resides in kube-system.
(Screenshot: the Deployment containing the Nginx Ingress Controller definition.)
(Screenshot: the default ConfigMap connected to the ingress Deployment.)
Now all you have to do is add your configuration to this ConfigMap, as sketched below.
(Screenshot: the altered/edited ConfigMap.)
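A minimal sketch of that edit, assuming the ConfigMap referenced by the controller Deployment is called nginx-ingress-controller and lives in kube-system:
kubectl edit configmap nginx-ingress-controller -n kube-system
# under data:, add
#   enable-underscores-in-headers: "true"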