Kustomize: how to target a replacement in every array item

As per the examples at https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/replacements/
one can target specific array items like this: spec.template.spec.containers.[name=hello].env.[name=SECRET_TOKEN].value
But how do we target every array item?
I tried "-" as below, but it replaces only the last occurrence:
fieldPaths:
- spec.http.-.route.0.destination.host
Here is my kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ef
images:
- name: gateway-image
resources:
- configmap.yaml
- virtual-services.yaml
replacements:
- source:
    kind: ConfigMap
    fieldPath: data.HOST
  targets:
  - select:
      kind: VirtualService
    fieldPaths:
    - spec.http.-.route.0.destination.host
    options:
      create: true
Here is my target YAML, service-entries.yaml - I expect HOST to be replaced in both places.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: v-rules-1
spec:
  hosts:
  - EXTERNAL-HOST
  gateways:
  - istio-system/default-gateway
  - mesh
  http: # note these are ordered - first rule matching wins
  - match:
    - uri:
        prefix: /foo
    route:
    - destination:
        host: PLACEHOLDER1
  - match:
    - uri:
        prefix: /bar
    route:
    - destination:
        host: PLACEHOLDER2
But as you can see below, the result has only the last value replaced:
kustomize build .
produces this. example.com appears in the last route only - PLACEHOLDER1 is unaffected:
apiVersion: v1
data:
  HOST: example.com
kind: ConfigMap
metadata:
  name: gateway-configmap
  namespace: ef
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: v-rules-1
  namespace: ef
spec:
  gateways:
  - istio-system/default-gateway
  - mesh
  hosts:
  - EXTERNAL-HOST
  http:
  - match:
    - uri:
        prefix: /foo
    route:
    - destination:
        host: PLACEHOLDER1
  - match:
    - uri:
        prefix: /bar
    route:
    - destination:
        host: example.com
So the question is: how do we replace every array element? What is the syntax for the fieldPath?
My kustomize version is 4.2.0.

The answer is that you can't. However, the author seems to have opened a feature request to be able to do so.
https://github.com/kubernetes-sigs/kustomize/issues/4053
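Until that feature exists, a workaround is to enumerate the array indices explicitly, since fieldPaths also accepts numeric indices. A sketch against the kustomization.yaml above, assuming the VirtualService has exactly two http rules (the list has to be kept in sync by hand when rules are added or removed):

replacements:
- source:
    kind: ConfigMap
    fieldPath: data.HOST
  targets:
  - select:
      kind: VirtualService
    fieldPaths:
    # one entry per element of spec.http[]
    - spec.http.0.route.0.destination.host
    - spec.http.1.route.0.destination.host
    options:
      create: true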

Related

Add secret to projected volume list with kustomize

I am trying to use kustomize to patch an existing Deployment by adding environment secrets to the list of projected volume sources.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: microservice-1
  name: microservice-1
spec:
  selector:
    matchLabels:
      app: microservice-1
  template:
    metadata:
      labels:
        app: microservice-1
    spec:
      containers:
      - image: URL
        imagePullPolicy: Always
        name: microservice-1
        ports:
        - containerPort: 80
          name: http
        volumeMounts:
        - mountPath: /config/secrets
          name: files
          readOnly: true
      imagePullSecrets:
      - name: image-pull-secret
      restartPolicy: Always
      volumes:
      - name: files
        projected:
          sources:
          - secret:
              name: my-secret-1
          - secret:
              name: my-secret-2
patch.yaml
- op: add
  path: /spec/template/spec/volumes/0/projected/sources/0
  value:
    secret: "my-new-secret"
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patchesJson6902:
- target:
    version: v1
    kind: Deployment
    name: microservice-1
  path: patch.yaml
Error
Error: updating name reference in 'spec/template/spec/volumes/projected/sources/secret/name' field of 'Deployment.v1.apps/microservice-1.itc-microservices': considering field 'spec/template/spec/volumes/projected/sources/secret/name' of object Deployment.v1.apps/ms-pedigree.itc-microservices: visit traversal on path: [projected sources secret name]: visit traversal on path: [secret name]: expected sequence or mapping no
How can I add a new secret to the list, with key secret and field name:
- secret:
    name: "my-new-secret"
NB: I have tried a strategic merge patch, but then the whole list gets replaced.
I have found the solution:
- op: add
  path: /spec/template/spec/volumes/0/projected/sources/-
  value:
    secret:
      name: "my-new-secret"

Error converting YAML to JSON: mapping values are not allowed in this context (Azure context)

I did the following tutorial:
https://learn.microsoft.com/en-us/learn/modules/cloud-native-apps-orchestrate-containers/7-exercise-connect-container-to-web-app
I created the ingress.yaml file as follows:
#ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cna-express
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
  - host: cna-express.4d667ba6676144bbac81.westeurope.aksapp.io
      paths:
      - path: / # Which path is this rule referring to
        pathType: Prefix
        backend: # How the ingress will handle the requests
          service:
            name: cna-express # Which service the request will be forwarded to
            port:
              name: http # Which port in that service
However, I have the following output:
error: error parsing ./ingress.yaml: error converting YAML to JSON:
yaml: line 11: mapping values are not allowed in this context
Do you have any idea about what could be the issue?
With my best regards
You forgot to add the http: line. Try this one, please:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cna-express
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
  - host: cna-express.4d667ba6676144bbac81.westeurope.aksapp.io
    http:
      paths:
      - path: / # Which path is this rule referring to
        pathType: Prefix
        backend: # How the ingress will handle the requests
          service:
            name: cna-express # Which service the request will be forwarded to
            port:
              name: http # Which port in that service
You can also use this documentation for more information about Ingress:
https://kubernetes.io/docs/concepts/services-networking/ingress/
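A quick way to catch this class of mistake before deploying is a client-side dry run, which parses and validates the manifest without touching the cluster:

kubectl apply --dry-run=client -f ingress.yaml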

I am completely stuck when trying to run Skaffold on my project. It keeps throwing an error coming from the ingress-srv manifest

When I run Skaffold, this is the error I get. Skaffold generates tags, checks the cache, starts the deploy, then it cleans up.
- stderr: "error: error parsing C: ~\k8s\\ingress-srv.yaml: error converting YAML to JSON: yaml: line 20: mapping values are not allowed in this context
\n"
- cause: exit status 1
Docker creates a container for the server. Here is the ingress server yaml file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-srv
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: northernherpgeckosales.dev
    http:
      paths:
      - path: /api/users/?(.*)
        pathType: Prefix
        backend:
          service:
            name: auth-srv
            port:
              number: 3000
      - path: /?(.*)
          pathType: Prefix
          backend:
            service:
              name: front-end-srv
              port:
                number: 3000
For good measure here is the skaffold file:
apiVersion: skaffold/v2alpha3
kind: Config
deploy:
  kubectl:
    manifests:
    - ./infra/k8s/*
build:
  local:
    push: false
  artifacts:
  - image: giantgecko/auth
    context: auth
    docker:
      dockerfile: Dockerfile
    sync:
      manual:
      - src: 'src/**/*.ts'
        dest: .
  - image: giantgecko/front-end
    context: front-end
    docker:
      dockerfile: Dockerfile
    sync:
      manual:
      - src: '**/*.js'
        dest: .
Take a closer look at your Ingress definition file (starting from line 19):
- path: /?(.*)
    pathType: Prefix
    backend:
      service:
        name: front-end-srv
        port:
          number: 3000
You have unnecessary indents from line 20 (pathType: Prefix) to the end of the file. Just format your YAML file properly. For the previous path: /api/users/?(.*) everything is alright - no unnecessary indents.
Final YAML looks like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-srv
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: northernherpgeckosales.dev
    http:
      paths:
      - path: /api/users/?(.*)
        pathType: Prefix
        backend:
          service:
            name: auth-srv
            port:
              number: 3000
      - path: /?(.*)
        pathType: Prefix
        backend:
          service:
            name: front-end-srv
            port:
              number: 3000
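Indentation slips like this are also easy to catch with a YAML linter before Skaffold ever runs, for example (assuming yamllint is installed):

yamllint infra/k8s/ingress-srv.yaml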

Kubernetes ingress for Teamcity blank page

I have a problem with an Ingress. I'm using HAProxy, but after applying the YAML file(s) I'm not able to open the TeamCity site in a web browser: I get a blank page, and curl shows nothing.
A test echo (image: jmalloc/echo-server) is working just fine.
Of course kubernetes.local is added to the hosts file, so the DNS name resolves.
My config yaml files:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: teamcity
  name: teamcity
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      run: teamcity
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        run: teamcity
    spec:
      tolerations:
      - key: node.kubernetes.io/not-ready
        effect: NoExecute
        tolerationSeconds: 10
      - key: node.kubernetes.io/unreachable
        effect: NoExecute
        tolerationSeconds: 10
      containers:
      - image: jetbrains/teamcity-server
        imagePullPolicy: Always
        name: teamcity
        ports:
        - containerPort: 8111
        volumeMounts:
        - name: teamcity-pvc-data
          mountPath: "/data/teamcity_server/datadir"
        - name: teamcity-pvc-logs
          mountPath: "/opt/teamcity/logs"
      volumes:
      - name: teamcity-pvc-data
        persistentVolumeClaim:
          claimName: teamcity-pvc-data
      - name: teamcity-pvc-logs
        persistentVolumeClaim:
          claimName: teamcity-pvc-logs
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: teamcity
  name: teamcity
  namespace: default
  annotations:
    haproxy.org/check: "true"
    haproxy.org/forwarded-for: "true"
    haproxy.org/load-balance: "roundrobin"
spec:
  selector:
    run: teamcity
  ports:
  - name: port-tc
    port: 8111
    protocol: TCP
    targetPort: 8111
  externalIPs:
  - 192.168.22.152
  - 192.168.22.153
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: teamcity
  namespace: default
spec:
  rules:
  - host: kubernetes.local
    http:
      paths:
      - path: /teamcity
        pathType: Prefix
        backend:
          service:
            name: teamcity
            port:
              number: 8111
I would be grateful for every hint; I've been struggling with this for hours. A connection to http://192.168.22.152:8111 is working fine too. Just the Ingress is having troubles.
A subdomain fixes the problem: teamcity.kubernetes.local works, while kubernetes.local/teamcity does not. (Most likely TeamCity links to its assets with root-relative URLs, so serving it under a sub-path breaks unless the path is rewritten or the server's root URL is adjusted.)
Solution:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: teamcity-ingress
  namespace: default
spec:
  rules:
  - host: teamcity.kubernetes.local
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: teamcity
            port:
              number: 8111
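For the new subdomain to resolve, the hosts file needs an entry for it as well; a sketch, assuming one of the Service's external IPs from above:

192.168.22.152  teamcity.kubernetes.local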

NGINX Ingress external oauth with Azure Active Directory

I want to use Azure Active Directory as an external oauth2 provider to protect my services at the ingress level. In the past I used basic auth and everything worked as expected, but NGINX also supports this external oauth method, which sounds much more comfortable!
For that I created an SP:
$ az ad sp create-for-rbac --skip-assignment --name test -o table
AppId              DisplayName  Name         Password               Tenant
<AZURE_CLIENT_ID>  test         http://test  <AZURE_CLIENT_SECRET>  <TENANT_ID>
My ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: ingress-nginx
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/auth-url: "https://\$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://\$host/oauth2/start?rd=$escaped_request_uri"
    # nginx.ingress.kubernetes.io/auth-type: basic
    # nginx.ingress.kubernetes.io/auth-secret: basic-auth
    # nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required'
And the external oauth proxy:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: oauth2-proxy
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oauth2-proxy
  template:
    metadata:
      labels:
        app: oauth2-proxy
    spec:
      containers:
      - args:
        - --provider=azure
        - --email-domain=microsoft.com
        - --upstream=file:///dev/null
        - --http-address=0.0.0.0:4180
        - --azure-tenant=$AZURE_TENANT_ID
        env:
        - name: OAUTH2_PROXY_CLIENT_ID
          value: $API_CLIENT_ID
        - name: OAUTH2_PROXY_CLIENT_SECRET
          value: $API_CLIENT_SECRET
        - name: OAUTH2_PROXY_COOKIE_SECRET
          value: $API_COOKIE_SECRET
          # created by: docker run -ti --rm python:3-alpine python -c 'import secrets,base64; print(base64.b64encode(base64.b64encode(secrets.token_bytes(16))))'
        image: docker.io/colemickens/oauth2_proxy:latest
        imagePullPolicy: Always
        name: oauth2-proxy
        ports:
        - containerPort: 4180
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: oauth2-proxy
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 4180
    protocol: TCP
    targetPort: 4180
  selector:
    app: oauth2-proxy
It looks like something went wrong, but I have no idea what I missed.
When I try to open the page, it loads for up to a minute and ends in a '500 internal server error'.
The logs of the ingress controller show an infinite loop of the following:
10.244.2.1 - - [16/Jan/2020:15:32:30 +0000] "GET /oauth2/auth HTTP/1.1" 499 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/xxxxx Safari/xxxx" 727 0.003 [upstream-default-backend] [] - - - - <AZURE_CLIENT_ID>
So you need another ingress for the oAuth deployment as well. Here's what my setup looks like:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana-ingress-oauth
  namespace: grafana
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: xxx
    http:
      paths:
      - path: /oauth2
        backend:
          serviceName: oauth2-proxy
          servicePort: 4180
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana-ingress
  namespace: grafana
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"
    certmanager.k8s.io/cluster-issuer: letsencrypt-production
    ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
spec:
  rules:
  - host: xxx
    http:
      paths:
      - path: /
        backend:
          serviceName: grafana
          servicePort: 80
This way the second ingress redirects to the first one, which does the auth and redirects back: the auth-url annotation makes the NGINX ingress controller issue an internal subrequest to oauth2-proxy's /oauth2/auth endpoint for every incoming request, and auth-signin is where unauthenticated users are sent.
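To check which half is failing, you can probe the auth endpoint directly; a sketch, assuming oauth2-proxy's documented behaviour of answering 202 for a valid session and 401 otherwise:

curl -i https://<your-host>/oauth2/auth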
