Replacement not working with overlay configmap values - kustomize

I'm trying to replace the configMapGenerator env values by creating an overlay.
The expected overlay values show up in the ConfigMap, but not in the ExternalSecret through the replacements.
I have the following base/kustomization.yaml where I'm doing some replacements:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ex-secret.yaml
configMapGenerator:
  - name: test
    envs:
      - env.properties
replacements:
  - source:
      name: test
      kind: ConfigMap
      fieldPath: data.KEY
    targets:
      - select:
          kind: ExternalSecret
          name: prime-manager
        fieldPaths:
          - "spec.data.[secretKey=PRIME_MANAGER_CONSOLE_DB_password].remoteRef.key"
  - source:
      name: test
      kind: ConfigMap
      fieldPath: data.KEY2
    targets:
      - select:
          kind: ExternalSecret
          name: prime-manager
        fieldPaths:
          - "spec.data.[secretKey=PRIME_MANAGER_WEB_DB_password].remoteRef.key"
  - source:
      name: test
      kind: ConfigMap
      fieldPath: data.KEY3
    targets:
      - select:
          kind: ExternalSecret
          name: prime-manager
        fieldPaths:
          - "spec.data.[secretKey=PRIME_MANAGER_PARAMS_admin_password].remoteRef.key"
The base/env.properties file contains the following properties and values:
KEY=test
KEY2=test2
KEY3=test3
base/ex-secret.yaml file to reproduce:
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: prime-manager
  labels:
    app.kubernetes.io/name: prime-manager
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: keyvault-secret-store
  data:
    - secretKey: PRIME_MANAGER_CONSOLE_DB_password
      remoteRef:
        key: prime-manager-console-postgres-password-staging
    - secretKey: PRIME_MANAGER_WEB_DB_password
      remoteRef:
        key: prime-manager-web-postgres-password-staging
    - secretKey: PRIME_MANAGER_PARAMS_admin_password
      remoteRef:
        key: prime-manager-ferrontest-admin-password-staging
Now I have created an overlay, overlay/kustomization.yaml, to overwrite the values:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base
configMapGenerator:
  - name: test
    behavior: replace
    envs:
      - env.properties
overlay/env.properties:
KEY=testen
KEY2=testen2
KEY3=testen3
File tree:
C:.
├───base
│       env.properties
│       ex-secret.yaml
│       kustomization.yaml
│
└───overlay
        env.properties
        kustomization.yaml
Expected output
I would expect testen, testen2 and testen3 to show up in the ExternalSecret output.
apiVersion: v1
data:
  KEY: testen
  KEY2: testen2
  KEY3: testen3
kind: ConfigMap
metadata:
  name: test-hbtgc5f7dm
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  labels:
    app.kubernetes.io/name: prime-manager
  name: prime-manager
spec:
  data:
  - remoteRef:
      key: testen
    secretKey: PRIME_MANAGER_CONSOLE_DB_password
  - remoteRef:
      key: testen2
    secretKey: PRIME_MANAGER_WEB_DB_password
  - remoteRef:
      key: testen3
    secretKey: PRIME_MANAGER_PARAMS_admin_password
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: keyvault-secret-store
Actual output
But I'm still getting the base values in the ExternalSecret output.
apiVersion: v1
data:
  KEY: testen
  KEY2: testen2
  KEY3: testen3
kind: ConfigMap
metadata:
  name: test-hbtgc5f7dm
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  labels:
    app.kubernetes.io/name: prime-manager
  name: prime-manager
spec:
  data:
  - remoteRef:
      key: test
    secretKey: PRIME_MANAGER_CONSOLE_DB_password
  - remoteRef:
      key: test2
    secretKey: PRIME_MANAGER_WEB_DB_password
  - remoteRef:
      key: test3
    secretKey: PRIME_MANAGER_PARAMS_admin_password
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: keyvault-secret-store
Kustomize version
v4.5.7
Platform
Windows
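
For anyone reproducing this: the behaviour is consistent with kustomize's build order, where the base (including its replacements) is fully built before the overlay's behavior: replace is applied, so the replacements only ever see the base values. A hedged sketch of one workaround, assuming the replacements block is moved out of the base and declared in the overlay instead (same field values as above, only the location changes):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base
configMapGenerator:
  - name: test
    behavior: replace
    envs:
      - env.properties
replacements:
  - source:
      name: test
      kind: ConfigMap
      fieldPath: data.KEY
    targets:
      - select:
          kind: ExternalSecret
          name: prime-manager
        fieldPaths:
          - "spec.data.[secretKey=PRIME_MANAGER_CONSOLE_DB_password].remoteRef.key"
  # ...repeat for data.KEY2 and data.KEY3 as in the base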

Related

Add secret to projected list volume kustomize

I am trying to use kustomize to patch an existing Deployment by adding secrets to the list of projected volume sources.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: microservice-1
  name: microservice-1
spec:
  selector:
    matchLabels:
      app: microservice-1
  template:
    metadata:
      labels:
        app: microservice-1
    spec:
      containers:
        - image: URL
          imagePullPolicy: Always
          name: microservice-1
          ports:
            - containerPort: 80
              name: http
          volumeMounts:
            - mountPath: /config/secrets
              name: files
              readOnly: true
      imagePullSecrets:
        - name: image-pull-secret
      restartPolicy: Always
      volumes:
        - name: files
          projected:
            sources:
              - secret:
                  name: my-secret-1
              - secret:
                  name: my-secret-2
patch.yaml
- op: add
  path: /spec/template/spec/volumes/0/projected/sources/0
  value:
    secret: "my-new-secret"
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patchesJson6902:
  - target:
      version: v1
      kind: Deployment
      name: microservice-1
    path: patch.yaml
Error
Error: updating name reference in 'spec/template/spec/volumes/projected/sources/secret/name' field of 'Deployment.v1.apps/microservice-1.itc-microservices': considering field 'spec/template/spec/volumes/projected/sources/secret/name' of object Deployment.v1.apps/ms-pedigree.itc-microservices: visit traversal on path: [projected sources secret name]: visit traversal on path: [secret name]: expected sequence or mapping no
How can I add a new secret to the list, with the key secret and the field name:
- secret:
    name: "my-new-secret"
NB: I have tried a strategic merge patch, but the whole list gets replaced.
I have found a working patch:
- op: add
  path: /spec/template/spec/volumes/0/projected/sources/-
  value:
    secret:
      name: "my-new-secret"

phpMyAdmin on AKS (Kubernetes) fails to connect to Azure MariaDB with "No such file or directory"

I run a MariaDB PaaS on Azure with SSL and run phpMyAdmin on AKS. When trying to connect I get a very unclear message: Cannot log in to the MySQL server and mysqli::real_connect(): (HY000/2002): No such file or directory
At this point SSL is not the issue: I've tried the same without enforcing SSL on the DB side and configured phpMyAdmin without the SSL settings.
I also successfully tested connectivity from the phpMyAdmin pod using curl -v telnet://my-database-12345.mariadb.database.azure.com:3306.
This is how I tried to get phpMyAdmin working with Azure MariaDB:
apiVersion: v1
kind: Namespace
metadata:
  name: pma
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: pma-cfg
  namespace: pma
  labels:
    app: phpmyadmin
data:
  config-user-inc: |
    <?php
    $i = 0;
    $i++;
    $cfg['Servers'][$i]['auth_type'] = 'cookie';
    $cfg['Servers'][$i]['host'] = 'my-database-12345.mariadb.database.azure.com';
    $cfg['Servers'][$i]['port'] = '3306';
    $cfg['Servers'][$i]['ssl'] = true;
    $cfg['Servers'][$i]['ssl_ca'] = 'ssl/BaltimoreCyberTrustRoot.crt.pem';
    $cfg['Servers'][$i]['ssl_verify'] = false;
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ssl-cert
  namespace: oneup
  labels:
    app: phpmyadmin
data:
  ssl-cert: |
    -----BEGIN CERTIFICATE-----
    # truncated BaltimoreCyberTrustRoot.crt
    -----END CERTIFICATE-----
---
apiVersion: v1
kind: Service
metadata:
  name: internal-pma
  namespace: pma
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  loadBalancerIP: 10.xxx.xxx.xxx
  ports:
    - port: 80
      targetPort: pma
  selector:
    app: pma
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pma
  namespace: pma
  labels:
    app: pma
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pma
  template:
    metadata:
      labels:
        app: pma
    spec:
      containers:
        - name: pma
          image: phpmyadmin/phpmyadmin
          ports:
            - containerPort: 80
              name: pma
          volumeMounts:
            - name: pma-cfg
              mountPath: /etc/phpmyadmin/
            - name: ssl-cert
              mountPath: /etc/phpmyadmin/ssl/
      volumes:
        - name: pma-cfg
          configMap:
            name: pma-cfg
            items:
              - key: config-user-inc
                path: config.user.inc.php
        - name: ssl-cert
          configMap:
            name: ssl-cert
            items:
              - key: ssl-cert
                path: BaltimoreCyberTrustRoot.crt.pem
Many thanks!
When you mount a custom configuration for phpMyAdmin without using any environment variables (which is required if you use SSL), the image does not generate a default config file.
E.g., if you start the pod like:
apiVersion: apps/v1
...
spec:
  containers:
    - name: pma
      image: phpmyadmin/phpmyadmin
      env:
        - name: PMA_HOST
          value: myhost.local
      ports:
        - containerPort: 80
          name: pma
a config.inc.php file will be generated in /etc/phpmyadmin.
By mounting a config.user.inc.php, no config.inc.php is generated.
What I did was copy the content of /var/www/html/config.sample.inc.php into my ConfigMap and make the necessary changes for my Azure MariaDB:
ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: pma-cfg
  namespace: pma
  labels:
    app: pma
data:
  config-inc: |
    <?php
    declare(strict_types=1);
    $cfg['blowfish_secret'] = '*****'; /* YOU MUST FILL IN THIS FOR COOKIE AUTH! */
    $i = 0;
    $i++;
    /* Authentication type */
    $cfg['Servers'][$i]['auth_type'] = 'cookie';
    /* Server parameters */
    $cfg['Servers'][$i]['host'] = 'mydb123456.mariadb.database.azure.com';
    $cfg['Servers'][$i]['compress'] = false;
    $cfg['Servers'][$i]['AllowNoPassword'] = false;
    /* SSL */
    $cfg['Servers'][$i]['ssl'] = true;
    $cfg['Servers'][$i]['ssl_ca'] = '/etc/phpmyadmin/ssl/BaltimoreCyberTrustRoot.crt.pem';
    $cfg['Servers'][$i]['ssl_verify'] = true;
    /* Directories for saving/loading files from server */
    $cfg['UploadDir'] = '';
    $cfg['SaveDir'] = '';
  ssl-cert: |
    -----BEGIN CERTIFICATE-----
    # Truncated BaltimoreCyberTrustRoot.crt
    -----END CERTIFICATE-----
Finally, mount the ConfigMap in the Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pma
  namespace: pma
  labels:
    app: pma
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pma
  template:
    metadata:
      labels:
        app: pma
    spec:
      containers:
        - name: pma
          image: phpmyadmin/phpmyadmin
          ports:
            - containerPort: 80
              name: pma
          volumeMounts:
            - name: pma-cfg
              mountPath: /etc/phpmyadmin/
      volumes:
        - name: pma-cfg
          configMap:
            name: pma-cfg
            items:
              - key: config-inc
                path: config.inc.php
              - key: ssl-cert
                path: ssl/BaltimoreCyberTrustRoot.crt.pem
Maybe it will help others too.
Cheers!
The error you are getting is a known issue that can be resolved by restarting the MySQL server, or by changing:
$cfg['Servers'][$i]['host'] = 'my-database-12345.mariadb.database.azure.com';
to
$cfg['Servers'][$i]['host'] = '127.0.0.1';
You can refer to this SO thread for more information and troubleshooting.

Kubernetes ingress for Teamcity blank page

I have a problem with an Ingress. I'm using HAProxy, but after applying the YAML file(s) I'm not able to open the TeamCity site in a web browser; I get a blank page, and curl returns nothing.
A test echo service (image: jmalloc/echo-server) is working just fine.
Of course kubernetes.local is added to the hosts file so the DNS name resolves.
My config YAML files:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: teamcity
  name: teamcity
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      run: teamcity
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        run: teamcity
    spec:
      tolerations:
        - key: node.kubernetes.io/not-ready
          effect: NoExecute
          tolerationSeconds: 10
        - key: node.kubernetes.io/unreachable
          effect: NoExecute
          tolerationSeconds: 10
      containers:
        - image: jetbrains/teamcity-server
          imagePullPolicy: Always
          name: teamcity
          ports:
            - containerPort: 8111
          volumeMounts:
            - name: teamcity-pvc-data
              mountPath: "/data/teamcity_server/datadir"
            - name: teamcity-pvc-logs
              mountPath: "/opt/teamcity/logs"
      volumes:
        - name: teamcity-pvc-data
          persistentVolumeClaim:
            claimName: teamcity-pvc-data
        - name: teamcity-pvc-logs
          persistentVolumeClaim:
            claimName: teamcity-pvc-logs
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: teamcity
  name: teamcity
  namespace: default
  annotations:
    haproxy.org/check: "true"
    haproxy.org/forwarded-for: "true"
    haproxy.org/load-balance: "roundrobin"
spec:
  selector:
    run: teamcity
  ports:
    - name: port-tc
      port: 8111
      protocol: TCP
      targetPort: 8111
  externalIPs:
    - 192.168.22.152
    - 192.168.22.153
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: teamcity
  namespace: default
spec:
  rules:
    - host: kubernetes.local
      http:
        paths:
          - path: /teamcity
            pathType: Prefix
            backend:
              service:
                name: teamcity
                port:
                  number: 8111
I would be grateful for every hint; I've been struggling with this for hours. The connection to http://192.168.22.152:8111 works fine too; only the Ingress is having trouble.
Using a subdomain fixes the problem: teamcity.kubernetes.local works, while kubernetes.local/teamcity does not.
Solution:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: teamcity-ingress
  namespace: default
spec:
  rules:
    - host: teamcity.kubernetes.local
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: teamcity
                port:
                  number: 8111

How to mount cassandra data location to azure file share using stateful set kubernetes

I am setting up a 3-node Cassandra cluster on Azure using a Kubernetes StatefulSet and am not able to mount the data location on an Azure file share.
I am able to do it using the default Kubernetes storage, but not with the Azure Files option.
I have tried the steps given below and am having difficulty with volumeClaimTemplates:
apiVersion: "apps/v1"
kind: StatefulSet
metadata:
name: cassandra
labels:
app: cassandra
spec:
serviceName: cassandra
replicas: 3
selector:
matchLabels:
app: cassandra
template:
metadata:
labels:
app: cassandra
spec:
containers:
- name: cassandra
image: cassandra
imagePullPolicy: Always
ports:
- containerPort: 7000
name: intra-node
- containerPort: 7001
name: tls-intra-node
- containerPort: 7199
name: jmx
- containerPort: 9042
name: cql
env:
- name: CASSANDRA_SEEDS
value: cassandra-0.cassandra.default.svc.cluster.local
- name: MAX_HEAP_SIZE
value: 256M
- name: HEAP_NEWSIZE
value: 100M
- name: CASSANDRA_CLUSTER_NAME
value: "Cassandra"
- name: CASSANDRA_DC
value: "DC1"
- name: CASSANDRA_RACK
value: "Rack1"
- name: CASSANDRA_ENDPOINT_SNITCH
value: GossipingPropertyFileSnitch
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
volumeMounts:
- mountPath: /var/lib/cassandra/data
name: pv002
volumeClaimTemplates:
- metadata:
name: pv002
spec:
storageClassName: default
accessModes:
- ReadWriteOnce
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
spec:
  accessModes:
    - ReadWriteOnce
  azureFile:
    secretName: storage-secret
    shareName: xxxxx
    readOnly: false
  claimRef:
    namespace: default
    name: az-files-02
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: az-files-02
spec:
  accessModes:
    - ReadWriteOnce
---
apiVersion: v1
kind: Secret
metadata:
  name: storage-secret
type: Opaque
data:
  azurestorageaccountname: xxxxx
  azurestorageaccountkey: jjbfjbsfljbafkljasfkl;jf;kjd;kjklsfdhjbsfkjbfkjbdhueueeknekneiononeojnjnjHBDEJKBJBSDJBDJ==
I should be able to mount the data folder of each Cassandra node on an Azure file share.
To use Azure Files in a StatefulSet, I think you could follow this example: https://github.com/andyzhangx/demo/blob/master/linux/azurefile/attach-stress-test/statefulset-azurefile1-2files.yaml
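
As a rough sketch of the dynamic-provisioning variant (assuming an AKS cluster that ships the built-in azurefile StorageClass; the class name and size here are assumptions), the claim template can request an Azure Files share per replica instead of relying on a pre-created PersistentVolume:

volumeClaimTemplates:
  - metadata:
      name: pv002                   # must match the volumeMounts name above
    spec:
      storageClassName: azurefile   # assumed: AKS built-in Azure Files class
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi             # assumed size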

How to use kubernetes secrets in nodejs application?

I have a Kubernetes cluster on GCP running my Express/Node.js application, which performs CRUD operations against MongoDB.
I created a secret containing the username and password for connecting to MongoDB, and specified that secret as environment variables in my Kubernetes YAML file.
Now my question is: how do I access that username and password in the Node.js application to connect to MongoDB?
I tried process.env.SECRET_USERNAME and process.env.SECRET_PASSWORD in the Node.js application, but they come back undefined.
Any ideas will be appreciated.
Secret.yaml
apiVersion: v1
data:
  password: pppppppppppp==
  username: uuuuuuuuuuuu==
kind: Secret
metadata:
  creationTimestamp: 2018-07-11T11:43:25Z
  name: test-mongodb-secret
  namespace: default
  resourceVersion: "00999"
  selfLink: /api-path-to/secrets/test-mongodb-secret
  uid: 0900909-9090saiaa00-9dasd0aisa-as0a0s-
type: Opaque
kubernetes.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "4"
  creationTimestamp: 2018-07-11T11:09:45Z
  generation: 5
  labels:
    name: test
  name: test
  namespace: default
  resourceVersion: "90909"
  selfLink: /api-path-to/default/deployments/test
  uid: htff50d-8gfhfa-11egfg-9gf1-42010gffgh0002a
spec:
  replicas: 1
  selector:
    matchLabels:
      name: test
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: test
    spec:
      containers:
      - env:
        - name: SECRET_USERNAME
          valueFrom:
            secretKeyRef:
              key: username
              name: test-mongodb-secret
        - name: SECRET_PASSWORD
          valueFrom:
            secretKeyRef:
              key: password
              name: test-mongodb-secret
        image: gcr-image/env-test_node:latest
        imagePullPolicy: Always
        name: env-test-node
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2018-07-11T11:10:18Z
    lastUpdateTime: 2018-07-11T11:10:18Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 5
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
Your kubernetes.yaml file specifies which environment variables your secret is exposed through, so it is accessible by apps in that namespace.
Using the kubectl secrets CLI interface you can upload your secret:
kubectl create secret generic -n node-app test-mongodb-secret --from-literal=username=a-username --from-literal=password=a-secret-password
(The namespace arg -n node-app is optional; otherwise it will upload to the default namespace.)
After running this command, you can check your kube dashboard to see that the secret has been saved.
Then from your Node app, access the environment variable process.env.SECRET_PASSWORD.
Perhaps in your case the secrets are created in the wrong namespace, hence the undefined in your application.
EDIT 1
Your indentation for the container env block seems to be wrong; it should look like this:
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
  - name: mycontainer
    image: redis
    env:
      - name: SECRET_USERNAME
        valueFrom:
          secretKeyRef:
            name: mysecret
            key: username
      - name: SECRET_PASSWORD
        valueFrom:
          secretKeyRef:
            name: mysecret
            key: password
  restartPolicy: Never
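
For completeness, a small sketch of an alternative (assuming the secret name test-mongodb-secret from above): envFrom exposes every key of the Secret as an environment variable, so the variables are named after the secret keys (username / password) rather than SECRET_USERNAME / SECRET_PASSWORD:

apiVersion: v1
kind: Pod
metadata:
  name: secret-envfrom-pod
spec:
  containers:
    - name: env-test-node
      image: gcr-image/env-test_node:latest
      envFrom:
        - secretRef:
            name: test-mongodb-secret   # keys become env vars: username, password
  restartPolicy: Never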
