Add secret to projected volume list with kustomize

I am trying to use kustomize to patch an existing Deployment by adding a secret to its list of projected volume sources.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: microservice-1
  name: microservice-1
spec:
  selector:
    matchLabels:
      app: microservice-1
  template:
    metadata:
      labels:
        app: microservice-1
    spec:
      containers:
      - image: URL
        imagePullPolicy: Always
        name: microservice-1
        ports:
        - containerPort: 80
          name: http
        volumeMounts:
        - mountPath: /config/secrets
          name: files
          readOnly: true
      imagePullSecrets:
      - name: image-pull-secret
      restartPolicy: Always
      volumes:
      - name: files
        projected:
          sources:
          - secret:
              name: my-secret-1
          - secret:
              name: my-secret-2
patch.yaml
- op: add
  path: /spec/template/spec/volumes/0/projected/sources/0
  value:
    secret: "my-new-secret"
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patchesJson6902:
- target:
    version: v1
    kind: Deployment
    name: microservice-1
  path: patch.yaml
Error
Error: updating name reference in 'spec/template/spec/volumes/projected/sources/secret/name' field of 'Deployment.v1.apps/microservice-1.itc-microservices': considering field 'spec/template/spec/volumes/projected/sources/secret/name' of object Deployment.v1.apps/ms-pedigree.itc-microservices: visit traversal on path: [projected sources secret name]: visit traversal on path: [secret name]: expected sequence or mapping no
How can I add a new secret to the list, as a secret entry with a name field:
- secret:
    name: "my-new-secret"
NB: I have tried a strategic merge patch, but then the whole list gets replaced.

I have found the solution: use the "-" index, which appends to the array, and make the value a mapping rather than a plain string:
- op: add
  path: /spec/template/spec/volumes/0/projected/sources/-
  value:
    secret:
      name: "my-new-secret"
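As a side note, the same append can be expressed without a separate patch file. This is a sketch only, assuming kustomize v4+, where the patches field accepts an inline JSON 6902 patch:

```yaml
# Sketch (assumes kustomize v4+, where `patches` accepts inline JSON 6902
# patches): the same append expressed inline in kustomization.yaml.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
patches:
- target:
    group: apps
    version: v1
    kind: Deployment
    name: microservice-1
  patch: |-
    - op: add
      path: /spec/template/spec/volumes/0/projected/sources/-
      value:
        secret:
          name: "my-new-secret"
```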

Related

phpMyAdmin on AKS (Kubernetes) to connect azure MariaDb failed with "No such file or directory"

I run a MariaDB PaaS on Azure with SSL and run phpMyAdmin on AKS. When trying to connect I get a very unclear message: Cannot log in to the MySQL server and mysqli::real_connect(): (HY000/2002): No such file or directory
At this point SSL is not the issue. I've tried the same without enforcing SSL on the DB side and configured phpMyAdmin without those SSL settings.
I also tested connectivity from the phpMyAdmin pod using curl -v telnet://my-database-12345.mariadb.database.azure.com:3306 successfully.
This is how I tried to get phpMyAdmin working with Azure MariaDB:
apiVersion: v1
kind: Namespace
metadata:
  name: pma
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: pma-cfg
  namespace: pma
  labels:
    app: phpmyadmin
data:
  config-user-inc: |
    <?php
    $i = 0;
    $i++;
    $cfg['Servers'][$i]['auth_type'] = 'cookie';
    $cfg['Servers'][$i]['host'] = 'my-database-12345.mariadb.database.azure.com';
    $cfg['Servers'][$i]['port'] = '3306';
    $cfg['Servers'][$i]['ssl'] = true;
    $cfg['Servers'][$i]['ssl_ca'] = 'ssl/BaltimoreCyberTrustRoot.crt.pem';
    $cfg['Servers'][$i]['ssl_verify'] = false;
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ssl-cert
  namespace: oneup
  labels:
    app: phpmyadmin
data:
  ssl-cert: |
    -----BEGIN CERTIFICATE-----
    # truncated BaltimoreCyberTrustRoot.crt
    -----END CERTIFICATE-----
---
apiVersion: v1
kind: Service
metadata:
  name: internal-pma
  namespace: pma
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  loadBalancerIP: 10.xxx.xxx.xxx
  ports:
  - port: 80
    targetPort: pma
  selector:
    app: pma
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pma
  namespace: pma
  labels:
    app: pma
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pma
  template:
    metadata:
      labels:
        app: pma
    spec:
      containers:
      - name: pma
        image: phpmyadmin/phpmyadmin
        ports:
        - containerPort: 80
          name: pma
        volumeMounts:
        - name: pma-cfg
          mountPath: /etc/phpmyadmin/
        - name: ssl-cert
          mountPath: /etc/phpmyadmin/ssl/
      volumes:
      - name: pma-cfg
        configMap:
          name: pma-cfg
          items:
          - key: config-user-inc
            path: config.user.inc.php
      - name: ssl-cert
        configMap:
          name: ssl-cert
          items:
          - key: ssl-cert
            path: BaltimoreCyberTrustRoot.crt.pem
Many thanks!
When you mount a custom configuration for phpMyAdmin without using any environment variables (which is what you have to do if you use SSL), the image does not generate a default config file.
E.g. if you start the pod like:
apiVersion: apps/v1
...
spec:
  containers:
  - name: pma
    image: phpmyadmin/phpmyadmin
    env:
    - name: PMA_HOST
      value: myhost.local
    ports:
    - containerPort: 80
      name: pma
a config.inc.php file will be generated in /etc/phpmyadmin.
By mounting a config.user.inc.php, no config.inc.php will be generated.
What I did was copy the content of /var/www/html/config.sample.inc.php into my ConfigMap and make the necessary changes for my Azure MariaDB:
ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: pma-cfg
  namespace: pma
  labels:
    app: pma
data:
  config-inc: |
    <?php
    declare(strict_types=1);
    $cfg['blowfish_secret'] = '*****'; /* YOU MUST FILL IN THIS FOR COOKIE AUTH! */
    $i = 0;
    $i++;
    /* Authentication type */
    $cfg['Servers'][$i]['auth_type'] = 'cookie';
    /* Server parameters */
    $cfg['Servers'][$i]['host'] = 'mydb123456.mariadb.database.azure.com';
    $cfg['Servers'][$i]['compress'] = false;
    $cfg['Servers'][$i]['AllowNoPassword'] = false;
    /* SSL */
    $cfg['Servers'][$i]['ssl'] = true;
    $cfg['Servers'][$i]['ssl_ca'] = '/etc/phpmyadmin/ssl/BaltimoreCyberTrustRoot.crt.pem';
    $cfg['Servers'][$i]['ssl_verify'] = true;
    /* Directories for saving/loading files from server */
    $cfg['UploadDir'] = '';
    $cfg['SaveDir'] = '';
  ssl-cert: |
    -----BEGIN CERTIFICATE-----
    # truncated BaltimoreCyberTrustRoot.crt
    -----END CERTIFICATE-----
Finally, mount the ConfigMap in the Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pma
  namespace: pma
  labels:
    app: pma
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pma
  template:
    metadata:
      labels:
        app: pma
    spec:
      containers:
      - name: pma
        image: phpmyadmin/phpmyadmin
        ports:
        - containerPort: 80
          name: pma
        volumeMounts:
        - name: pma-cfg
          mountPath: /etc/phpmyadmin/
      volumes:
      - name: pma-cfg
        configMap:
          name: pma-cfg
          items:
          - key: config-inc
            path: config.inc.php
          - key: ssl-cert
            path: ssl/BaltimoreCyberTrustRoot.crt.pem
Maybe it will help others too.
Cheers!
The error you are getting is a known issue that can be resolved by restarting the MySQL server, or by making the following change:
$cfg['Servers'][$i]['host'] = 'my-database-12345.mariadb.database.azure.com';
to
$cfg['Servers'][$i]['host'] = '127.0.0.1';
You can refer to this SO thread for more information and troubleshooting.

How to authenticate against AAD (Azure Active Directory) with oauth2_proxy and obtain Access Token

I'm trying to authenticate against AAD (Azure Active Directory) with oauth2_proxy used in Kubernetes, in order to obtain an Access Token.
First of all, I'm struggling to get the correct authentication flow to work.
Second, after being redirected to my application, the Access Token is not in the request headers specified in the oauth2_proxy documentation.
Here is some input on authenticating against Azure Active Directory (AAD) using oauth2_proxy in Kubernetes.
First you need to create an application in AAD and grant it the email, profile and User.Read permissions to Microsoft Graph.
The default behavior of the authentication flow is that after logging in against the Microsoft authentication server, you are redirected to the root of the website with the authentication code (e.g. https://exampler.com/). You would expect the Access Token to be visible there, but this is a faulty assumption. The URL that the Access Token is injected into is https://exampler.com/oauth2!
A successful configuration of oauth2_proxy that worked for me is below.
oauth2-proxy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: oa2p
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: oauth2-proxy
  template:
    metadata:
      labels:
        k8s-app: oauth2-proxy
    spec:
      containers:
      - args:
        - --provider=oidc
        - --azure-tenant=88888888-aaaa-bbbb-cccc-121212121212
        - --email-domain=example.com
        - --http-address=0.0.0.0:4180
        - --set-authorization-header=true
        - --set-xauthrequest=true
        - --pass-access-token=true
        - --pass-authorization-header=true
        - --pass-user-headers=true
        - --pass-host-header=true
        - --skip-jwt-bearer-tokens=true
        - --oidc-issuer-url=https://login.microsoftonline.com/88888888-aaaa-bbbb-cccc-121212121212/v2.0
        env:
        - name: OAUTH2_PROXY_CLIENT_ID
          valueFrom:
            secretKeyRef:
              name: oauth2-proxy-secret
              key: OAUTH2_PROXY_CLIENT_ID
        - name: OAUTH2_PROXY_CLIENT_SECRET
          valueFrom:
            secretKeyRef:
              name: oauth2-proxy-secret
              key: OAUTH2_PROXY_CLIENT_SECRET
        - name: OAUTH2_PROXY_COOKIE_SECRET
          valueFrom:
            secretKeyRef:
              name: oauth2-proxy-secret
              key: OAUTH2_PROXY_COOKIE_SECRET
        image: quay.io/oauth2-proxy/oauth2-proxy:v7.1.3
        imagePullPolicy: Always
        name: oauth2-proxy
        ports:
        - containerPort: 4180
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: oa2p
spec:
  ports:
  - name: http
    port: 4180
    protocol: TCP
    targetPort: 4180
  selector:
    k8s-app: oauth2-proxy
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: oa2p
  namespace: oa2p
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/limit-rps: "1"
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
    nginx.ingress.kubernetes.io/auth-response-headers: "X-Auth-Request-Email,X-Auth-Request-Preferred-Username"
spec:
  tls:
  - hosts:
    - oa2p.example.com
    secretName: oa2p-tls
  rules:
  - host: oa2p.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: oa2p
            port:
              number: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: oa2p-proxy
  namespace: oa2p
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/limit-rps: "1"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "8k"
spec:
  tls:
  - hosts:
    - oa2p.example.com
    secretName: oa2p-tls
  rules:
  - host: oa2p.example.com
    http:
      paths:
      - path: /oauth2
        pathType: Prefix
        backend:
          service:
            name: oauth2-proxy
            port:
              number: 4180
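To actually hand the Access Token to the backend application, the first Ingress needs to forward it. As a sketch, assuming the oauth2-proxy flags above (--set-xauthrequest and --pass-access-token make the proxy emit X-Auth-Request-Access-Token on the /oauth2/auth response, and --set-authorization-header populates Authorization), the annotation can be extended like this:

```yaml
# Sketch: extend the annotation on the first (application) Ingress so
# ingress-nginx copies the token headers from the /oauth2/auth response
# onto the proxied request. Assumes the oauth2-proxy flags shown above.
annotations:
  nginx.ingress.kubernetes.io/auth-response-headers: "X-Auth-Request-Email,X-Auth-Request-Preferred-Username,X-Auth-Request-Access-Token,Authorization"
```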

Azure Key Vault integration with AKS works for nginx tutorial Pod, but not actual project deployment

Per the title, I have the integration working by following the documentation.
I can deploy the nginx.yaml and after about 70 seconds I can print out secrets with:
kubectl exec -it nginx -- cat /mnt/secrets-store/secret1
Now I'm trying to apply it to a PostgreSQL deployment for testing and I get the following from the Pod description:
Warning FailedMount 3s kubelet MountVolume.SetUp failed for volume "secrets-store01-inline" : rpc error: code = Unknown desc = failed to mount secrets store objects for pod staging/postgres-deployment-staging-69965ff767-8hmww, err: rpc error: code = Unknown desc = failed to mount objects, error: failed to get keyvault client: failed to get key vault token: nmi response failed with status code: 404, err: <nil>
And from the nmi logs:
E0221 22:54:32.037357 1 server.go:234] failed to get identities, error: getting assigned identities for pod staging/postgres-deployment-staging-69965ff767-8hmww in CREATED state failed after 16 attempts, retry duration [5]s, error: <nil>. Check MIC pod logs for identity assignment errors
I0221 22:54:32.037409 1 server.go:192] status (404) took 80003389208 ns for req.method=GET reg.path=/host/token/ req.remote=127.0.0.1
Not sure why, since I basically copied the settings from nginx.yaml into postgres.yaml. Here they are:
# nginx.yaml
kind: Pod
apiVersion: v1
metadata:
  name: nginx
  namespace: staging
  labels:
    aadpodidbinding: aks-akv-identity-binding-selector
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: secrets-store01-inline
      mountPath: /mnt/secrets-store
      readOnly: true
  volumes:
  - name: secrets-store01-inline
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: aks-akv-secret-provider
# postgres.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment-staging
  namespace: staging
  labels:
    aadpodidbinding: aks-akv-identity-binding-selector
spec:
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:13-alpine
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: secrets-store01-inline
          mountPath: /mnt/secrets-store
          readOnly: true
        - name: postgres-storage-staging
          mountPath: /var/postgresql
      volumes:
      - name: secrets-store01-inline
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: aks-akv-secret-provider
      - name: postgres-storage-staging
        persistentVolumeClaim:
          claimName: postgres-storage-staging
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-cluster-ip-service-staging
  namespace: staging
spec:
  type: ClusterIP
  selector:
    component: postgres
  ports:
  - port: 5432
    targetPort: 5432
Suggestions for what the issue is here?
Oversight on my part... the aadpodidbinding label should be in the template, per:
https://azure.github.io/aad-pod-identity/docs/best-practices/#deploymenthttpskubernetesiodocsconceptsworkloadscontrollersdeployment
The resulting YAML should be:
# postgres.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment-production
  namespace: production
spec:
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
        aadpodidbinding: aks-akv-identity-binding-selector
    spec:
      containers:
      - name: postgres
        image: postgres:13-alpine
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_DB_FILE
          value: /mnt/secrets-store/DEV-PGDATABASE
        - name: POSTGRES_USER_FILE
          value: /mnt/secrets-store/DEV-PGUSER
        - name: POSTGRES_PASSWORD_FILE
          value: /mnt/secrets-store/DEV-PGPASSWORD
        - name: POSTGRES_INITDB_ARGS
          value: "-A md5"
        - name: PGDATA
          value: /var/postgresql/data
        volumeMounts:
        - name: secrets-store01-inline
          mountPath: /mnt/secrets-store
          readOnly: true
        - name: postgres-storage-production
          mountPath: /var/postgresql
      volumes:
      - name: secrets-store01-inline
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: aks-akv-secret-provider
      - name: postgres-storage-production
        persistentVolumeClaim:
          claimName: postgres-storage-production
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-cluster-ip-service-production
  namespace: production
spec:
  type: ClusterIP
  selector:
    component: postgres
  ports:
  - port: 5432
    targetPort: 5432
Adding the label under template will resolve the issue: use the label aadpodidbinding: <your azure pod identity selector> in the template labels section of the deployment.yaml file.
Sample deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        aadpodidbinding: azure-pod-identity-binding-selector
    spec:
      containers:
      - name: nginx
        image: nginx
        env:
        - name: SECRET
          valueFrom:
            secretKeyRef:
              name: test-secret
              key: key
        volumeMounts:
        - name: secrets-store-inline
          mountPath: "/mnt/secrets-store"
          readOnly: true
      volumes:
      - name: secrets-store-inline
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: dev-1spc
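For completeness, the secretProviderClass referenced by the CSI volume has to exist in the same namespace as the pod. Below is a hypothetical sketch of a SecretProviderClass matching the name used in the question; the key vault name, tenant ID, and object names are placeholders, not values taken from the question:

```yaml
# Hypothetical SecretProviderClass sketch; all Azure-specific values
# below are placeholders.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: aks-akv-secret-provider
  namespace: staging
spec:
  provider: azure
  parameters:
    usePodIdentity: "true"     # so the aadpodidbinding label is honored
    keyvaultName: my-keyvault  # placeholder
    tenantId: 00000000-0000-0000-0000-000000000000  # placeholder
    objects: |
      array:
        - |
          objectName: DEV-PGPASSWORD
          objectType: secret
```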

Kubernetes ingress for Teamcity blank page

I have a problem with an Ingress. I'm using HAProxy, but after applying the YAML files I'm not able to open the TeamCity site in a web browser. I get a blank page. If I use curl it shows nothing.
A test echo server (image: jmalloc/echo-server) is working just fine.
Of course kubernetes.local is added to the hosts file so the DNS name can be resolved.
My config yaml files:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: teamcity
  name: teamcity
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      run: teamcity
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        run: teamcity
    spec:
      tolerations:
      - key: node.kubernetes.io/not-ready
        effect: NoExecute
        tolerationSeconds: 10
      - key: node.kubernetes.io/unreachable
        effect: NoExecute
        tolerationSeconds: 10
      containers:
      - image: jetbrains/teamcity-server
        imagePullPolicy: Always
        name: teamcity
        ports:
        - containerPort: 8111
        volumeMounts:
        - name: teamcity-pvc-data
          mountPath: "/data/teamcity_server/datadir"
        - name: teamcity-pvc-logs
          mountPath: "/opt/teamcity/logs"
      volumes:
      - name: teamcity-pvc-data
        persistentVolumeClaim:
          claimName: teamcity-pvc-data
      - name: teamcity-pvc-logs
        persistentVolumeClaim:
          claimName: teamcity-pvc-logs
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: teamcity
  name: teamcity
  namespace: default
  annotations:
    haproxy.org/check: "true"
    haproxy.org/forwarded-for: "true"
    haproxy.org/load-balance: "roundrobin"
spec:
  selector:
    run: teamcity
  ports:
  - name: port-tc
    port: 8111
    protocol: TCP
    targetPort: 8111
  externalIPs:
  - 192.168.22.152
  - 192.168.22.153
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: teamcity
  namespace: default
spec:
  rules:
  - host: kubernetes.local
    http:
      paths:
      - path: /teamcity
        pathType: Prefix
        backend:
          service:
            name: teamcity
            port:
              number: 8111
I would be grateful for every hint; I've been struggling with this for hours. The connection to http://192.168.22.152:8111 works fine too. Just the Ingress is having trouble.
Using a subdomain fixes the problem: teamcity.kubernetes.local works, while kubernetes.local/teamcity does not.
Solution:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: teamcity-ingress
  namespace: default
spec:
  rules:
  - host: teamcity.kubernetes.local
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: teamcity
            port:
              number: 8111
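If the /teamcity subpath is really needed, a rewrite can be sketched with the HAProxy ingress controller's path-rewrite annotation (an assumption here: that the haproxytech kubernetes-ingress controller is in use). Note that TeamCity may still generate absolute links against the root path, which is why the subdomain approach above is the more robust fix:

```yaml
# Sketch only: strip the /teamcity prefix before proxying to the service.
# Assumes the haproxytech kubernetes-ingress controller, which supports
# the haproxy.org/path-rewrite annotation.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: teamcity-subpath
  namespace: default
  annotations:
    haproxy.org/path-rewrite: /teamcity/(.*) /\1
spec:
  rules:
  - host: kubernetes.local
    http:
      paths:
      - path: /teamcity
        pathType: Prefix
        backend:
          service:
            name: teamcity
            port:
              number: 8111
```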

Kubernetes volume mounting

I'm trying to mount a directory into my pods, but it always shows the error "no file or directory found".
This is my YAML file used for the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp1-deployment
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      volumes:
      - name: test-mount-1
        persistentVolumeClaim:
          claimName: task-pv-claim-1
      containers:
      - name: myapp
        image: 192.168.11.168:5002/dev:0.0.1-SNAPSHOT-6f4b1db
        command: ["java -jar /jar/myapp1-0.0.1-SNAPSHOT.jar --spring.config.location=file:/etc/application.properties"]
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: "/etc/application.properties"
          # subPath: application.properties
          name: test-mount-1
      # hostNetwork: true
      imagePullSecrets:
      - name: regcred
      # volumes:
      #   - name: test-mount
and this is the PersistentVolume config:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: test-mount-1
  labels:
    type: local
    app: myapp
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: "/mnt/share"
and this is the PersistentVolumeClaim config:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim-1
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
and this is the Service config used for the deployment:
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  externalIPs:
  - 192.168.11.145
  ports:
  - protocol: TCP
    port: 8080
    nodePort: 31000
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
If anyone can help, I will be grateful. Thanks.
You haven't included your storage class in your question, but I'm assuming you're attempting local storage on a node. It might be a simple thing to check, but does the directory exist on the node where your pod is running? And is it writable? Depending on how many worker nodes you have, your pod could be running on any node, and the PV isn't pinned to any particular node. You could use node affinity to ensure that your pod runs on the node that contains the directory referenced in your PV, if that's the issue.
Edit: if it's NFS, you need to change your PV to include:
nfs:
  path: /mnt/share
  server: <nfs server node ip/fqdn>
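Putting that together, here is a sketch of the PersistentVolume from the question rewritten for NFS; the server address is a placeholder, not a value taken from the question:

```yaml
# Sketch: the question's PV switched from hostPath to NFS.
# `server` is a placeholder and must point at a real NFS export.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: test-mount-1
  labels:
    app: myapp
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /mnt/share
    server: nfs.example.local   # placeholder NFS server
```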
