Environmental variables returning undefined for Kubernetes deployment - node.js

I posted a similar question before and tried to implement what the answer to this question suggested: How to access Kubernetes container environment variables from Next.js application?
However, when I call my environment variables with process.env.USERNAME, I'm still getting undefined back... Am I doing something wrong in my deployment file? Here is a copy of my deployment.yaml:
metadata:
namespace: <namespace>
releaseName: <release name>
releaseVersion: 1.0.0
target: <target>
auth:
replicaCount: 1
image:
repository: '<name of repository is here>'
pullPolicy: <always>
container:
multiPorts:
- containerPort: 443
name: HTTPS
protocol: TCP
- containerPort: 80
name: HTTP
protocol: TCP
env:
- name: USERNAME
valueFrom:
secretKeyRef:
name: my-username
key: username
- name: PASSWORD
valueFrom:
secretKeyRef:
name: my-password
key: password
- name: HOST
valueFrom:
secretKeyRef:
name: my-host
key: host
volumeMounts:
- name: config
mountPath: "/configMap"
readOnly: true
volume:
- name: config
configMap:
name: environmental-variables
resources:
requests:
cpu: 0.25
memory: 256Mi
limits:
cpu: 1
memory: 1024Mi
variables:
- name: NODE_ENV
value: <node env value here>
ingress:
enabled: true
ingressType: <ingressType>
applicationType: <application type>
serviceEndpoint: <endpoint>
multiPaths:
- path: /
- HTTPS
tls:
enabled: true
secretName: <name>
autoscale:
enabled: false
minReplicas: 1
maxReplicas: 5
cpuAverageUtilization: 50
memoryUtilizationValue: 50
annotations:
ingress:
nginx.ingress.kubernetes.io/affinity: <affinity>
nginx.ingress.kubernetes.io/session-cookie-name: <cookie-name>
nginx.ingress.kubernetes.io/session-cookie-expires: <number>
nginx.ingress.kubernetes.io/session-cookie-max-age: <number>
I also created a configMap.yaml file, although I'm not sure if that's the right way to do this. Here is my configMap.yaml file:
apiVersion: v1
kind: ConfigMap
metadata:
name: environmental-variables
data:
.env: |
USERNAME: <username>
PASSWORD: <password>
HOST: <host>
Any help will be greatly appreciated! Also, I'm trying to store my environment variables as Secrets, since they contain sensitive information that I don't want to expose. I am doing this in a Node.js application using Express. Thank you!
EDIT: Here is what the Secrets part looks like in my yaml file:
secrets:
- name: environmental-variables
key: USERNAME
- name: environmental-variables
key: PASSWORD
And here is what my Secrets yaml file looks like:
kind: Secret
apiVersion: v1
metadata:
name: environmental-variables
namespace: tda-dev-duck-dev
data:
USERNAME: <username>
PASSWORD: <password>

After days of figuring out how to use Secrets as environment variables, I finally worked out how to reference them in my Node.js application!
Before, I was calling environment variables the normal way, process.env.VARIABLE_NAME, but that did not work for me once the values came from Secrets. To get the value of the variable, I had to use process.env.ENVIRONMENTAL_VARIABLES_USERNAME in my JavaScript file, where ENVIRONMENTAL_VARIABLES is the Secret name and USERNAME is the key. That worked for me!
Not sure if this will help anyone else, but this is how I managed to access my Secrets in my Node.js application!
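In case it helps anyone debugging the same thing, here is a small sketch (nothing in it is specific to my app) of how to check which variable names actually got injected into the container before reading them:
// Quick sanity check inside the Node.js app: list the injected env var names.
// Useful for spotting prefixed names such as ENVIRONMENTAL_VARIABLES_USERNAME.
const candidates = Object.keys(process.env)
  .filter((name) => name.includes('USERNAME') || name.includes('ENVIRONMENTAL'))
  .sort();
console.log('matching env vars:', candidates);

// Then read the value under whichever name was actually injected.
const username = process.env.ENVIRONMENTAL_VARIABLES_USERNAME;
console.log('username is defined:', typeof username !== 'undefined');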

You created a ConfigMap but are trying to get the values from a Secret. If you want to set the values from the ConfigMap, update the env section like the following:
env:
- name: USERNAME
valueFrom:
configMapKeyRef:
name: environmental-variables # this is ConfigMap Name
key: USERNAME # this is key in ConfigMap
- name: PASSWORD
valueFrom:
configMapKeyRef:
name: environmental-variables
key: PASSWORD
- name: HOST
valueFrom:
configMapKeyRef:
name: environmental-variables
key: HOST
and update the ConfigMap like the following:
apiVersion: v1
kind: ConfigMap
metadata:
name: environmental-variables
data:
USERNAME: <username>
PASSWORD: <password>
HOST: <host>
To learn how to define container environment variables using ConfigMap data, click here.
If you want to use Secrets as environment variables, check here.
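If you do want to keep the values in a Secret instead, another option is to load every key from the Secret at once with envFrom. This is only a sketch, and whether your Helm chart's values file passes envFrom through to the container spec is an assumption:
envFrom:
  - secretRef:
      name: environmental-variables   # Secret containing USERNAME, PASSWORD and HOST keys
With that, each key in the Secret becomes an environment variable of the same name, so process.env.USERNAME works without any prefix.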

Related

Add secret to projected list volume kustomize

I am trying to use kustomize to patch an existing Deployment by adding a secret to its list of projected volume sources.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: microservice-1
name: microservice-1
spec:
selector:
matchLabels:
app: microservice-1
template:
metadata:
labels:
app: microservice-1
spec:
containers:
image: URL
imagePullPolicy: Always
name: microservice-1
ports:
- containerPort: 80
name: http
volumeMounts:
- mountPath: /config/secrets
name: files
readOnly: true
imagePullSecrets:
- name: image-pull-secret
restartPolicy: Always
volumes:
- name: files
projected:
sources:
- secret:
name: my-secret-1
- secret:
name: my-secret-2
patch.yaml
- op: add
path: /spec/template/spec/volumes/0/projected/sources/0
value:
secret: "my-new-secret"
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patchesJson6902:
- target:
version: v1
kind: Deployment
name: microservice-1
path: patch.yaml
Error
Error: updating name reference in 'spec/template/spec/volumes/projected/sources/secret/name' field of 'Deployment.v1.apps/microservice-1.itc-microservices': considering field 'spec/template/spec/volumes/projected/sources/secret/name' of object Deployment.v1.apps/ms-pedigree.itc-microservices: visit traversal on path: [projected sources secret name]: visit traversal on path: [secret name]: expected sequence or mapping no
How can I add a new secret to the list, with key secret and field name, like this:
- secret:
name: "my-new-secret"
NB: I have tried a strategic merge patch, but the whole list gets replaced.
I found the solution: append to the sources list using the "-" (end-of-list) index:
- op: add
path: /spec/template/spec/volumes/0/projected/sources/-
value:
secret:
name: "my-new-secret"

Application running in Kubernetes cron job does not connect to database in same Kubernetes cluster

I have a Kubernetes cluster running a PostgreSQL database, a Grafana dashboard, and a Python single-run application (built as a Docker image) that runs hourly inside a Kubernetes CronJob (see manifests below). Additionally, this is all deployed using ArgoCD with Istio sidecar injection.
The issue I'm having (as the title indicates) is that my Python application cannot connect to the database in the cluster. This is very strange to me since the dashboard can, in fact, connect to the database, so I'm not sure what might be different for the Python app.
Following are my manifests (with a few things changed to remove identifiable information):
Contents of database.yaml:
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: database
name: database
spec:
replicas: 1
selector:
matchLabels:
app: database
strategy: {}
template:
metadata:
labels:
app: database
spec:
containers:
- image: postgres:12.5
imagePullPolicy: ""
name: database
ports:
- containerPort: 5432
env:
- name: POSTGRES_DB
valueFrom:
secretKeyRef:
name: postgres-secret
key: POSTGRES_DB
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: postgres-secret
key: POSTGRES_USER
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-secret
key: POSTGRES_PASSWORD
resources: {}
readinessProbe:
initialDelaySeconds: 30
tcpSocket:
port: 5432
restartPolicy: Always
serviceAccountName: ""
volumes: null
status: {}
---
apiVersion: v1
kind: Service
metadata:
labels:
app: database
name: database
spec:
ports:
- name: "5432"
port: 5432
targetPort: 5432
selector:
app: database
status:
loadBalancer: {}
Contents of dashboard.yaml:
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: dashboard
name: dashboard
spec:
replicas: 1
selector:
matchLabels:
app: dashboard
strategy: {}
template:
metadata:
labels:
app: dashboard
spec:
containers:
- image: grafana:7.3.3
imagePullPolicy: ""
name: dashboard
ports:
- containerPort: 3000
resources: {}
env:
- name: POSTGRES_DB
valueFrom:
secretKeyRef:
name: postgres-secret
key: POSTGRES_DB
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: postgres-secret
key: POSTGRES_USER
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-secret
key: POSTGRES_PASSWORD
volumeMounts:
- name: grafana-datasource
mountPath: /etc/grafana/provisioning/datasources
readinessProbe:
initialDelaySeconds: 30
httpGet:
path: /
port: 3000
restartPolicy: Always
serviceAccountName: ""
volumes:
- name: grafana-datasource
configMap:
defaultMode: 420
name: grafana-datasource
- name: grafana-dashboard-provision
status: {}
---
apiVersion: v1
kind: Service
metadata:
labels:
app: dashboard
name: dashboard
spec:
ports:
- name: "3000"
port: 3000
targetPort: 3000
selector:
app: dashboard
status:
loadBalancer: {}
Contents of cronjob.yaml:
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: python
spec:
concurrencyPolicy: Replace
# TODO: Go back to hourly when finished testing/troubleshooting
# schedule: "@hourly"
schedule: "*/15 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- image: python-tool:1.0.5
imagePullPolicy: ""
name: python
args: []
command:
- /bin/sh
- -c
- >-
echo "$(POSTGRES_USER)" > creds/db.creds;
echo "$(POSTGRES_PASSWORD)" >> creds/db.creds;
echo "$(SERVICE1_TOKEN)" > creds/service1.creds;
echo "$(SERVICE2_TOKEN)" > creds/service2.creds;
echo "$(SERVICE3_TOKEN)" > creds/service3.creds;
python3 -u main.py;
echo "Job finished with exit code $?";
env:
- name: POSTGRES_DB
valueFrom:
secretKeyRef:
name: postgres-secret
key: POSTGRES_DB
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: postgres-secret
key: POSTGRES_USER
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-secret
key: POSTGRES_PASSWORD
- name: SERVICE1_TOKEN
valueFrom:
secretKeyRef:
name: api-tokens-secret
key: SERVICE1_TOKEN
- name: SERVICE2_TOKEN
valueFrom:
secretKeyRef:
name: api-tokens-secret
key: SERVICE2_TOKEN
- name: SERVICE3_TOKEN
valueFrom:
secretKeyRef:
name: api-tokens-secret
key: SERVICE3_TOKEN
restartPolicy: OnFailure
serviceAccountName: ""
status: {}
Now, as I mentioned, Istio is also part of this picture, so I have a VirtualService for the dashboard since it should be accessible outside of the cluster, but that's it.
With all of that out of the way, here's what I've done to try to solve this myself:
Confirm the CronJob is using the correct connection settings (i.e. host, database name, username, and password) for connecting to the database.
For this, I added echo statements to the CronJob deployment showing the username and password (I know, I know) and they were the expected values. I also know those were the correct connection settings for the database because I used them verbatim to connect the dashboard to the database, which gave a successful connection.
The data source settings for the Grafana dashboard:
The error message from the Python application (shown in the ArgoCD logs for the container):
Thinking Istio might be causing this problem, I tried disabling Istio side-car injection for the CronJob resource (by adding this annotation to the metadata.annotations section: sidecar.istio.io/inject: false) but the annotation never actually showed up in the Argo logs and no change was observed when the CronJob was running.
I tried kubectl exec-ing into the CronJob container that was running the Python script to debug further, but was never able to, since the container exited as soon as the connection error occurred.
That said, I've been banging my head into a wall for long enough on this. Could anyone spot what I might be missing and point me in the right direction, please?
I think the problem is that your pod tries to connect to the database before the Istio sidecar is ready, and thus the connection can't be established.
Istio runs an init container that configures the pod's route table so that all traffic is routed through the sidecar. So if the sidecar isn't running yet when the pod tries to connect to the db, no connection can be established.
There are two solutions.
First, your job could wait for e.g. 30 seconds before calling main.py, using a sleep command.
Alternatively, you could enable holdApplicationUntilProxyStarts. With this, the main container will not start until the sidecar is running.
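For the second option, a minimal sketch (assuming Istio 1.7 or newer, and that a pod-level annotation is acceptable in your setup) is to add the proxy config annotation to the CronJob's pod template, so the job container is held until the proxy is up:
jobTemplate:
  spec:
    template:
      metadata:
        annotations:
          # Hold the application container until the Istio sidecar is ready (Istio 1.7+).
          proxy.istio.io/config: '{ "holdApplicationUntilProxyStarts": true }'
I believe the same option can also be enabled mesh-wide via meshConfig.defaultConfig.holdApplicationUntilProxyStarts, but the per-pod annotation is the smaller change.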

Managing multiple Pods with one Deploy

Good afternoon, I need help defining the structure of my production cluster. I want something like:
1 Deployment that controls the Pods
multiple Pods (one Pod per customer)
multiple Services (one Service per Pod)
But how do I build this structure when, for each Pod, I have env vars that connect to that customer's database, like this:
env:
- name: dbuser
value: "svc_iafox_test#***"
- name: dbpassword
value: "****"
- name: dbname
value: "ts-demo1"
- name: dbconnectstring
value: "jdbc:sqlserver://***-test.database.windows.net:1433;database=$(dbname);user=$(dbuser);password=$(dbpassword);encrypt=true;trustServerCertificate=true;hostNameInCertificate=*.database.windows.net;loginTimeout=30;"
So for each Pod I will have to change these env vars... Anyway, what is the best way for me to do this?
You could use a ConfigMap to achieve that:
apiVersion: v1
kind: Pod
metadata:
name: dapi-test-pod
spec:
containers:
- name: test-container
image: k8s.gcr.io/busybox
command: [ "/bin/sh", "-c", "echo $(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)" ]
env:
- name: SPECIAL_LEVEL_KEY
valueFrom:
configMapKeyRef:
name: special-config
key: SPECIAL_LEVEL
- name: SPECIAL_TYPE_KEY
valueFrom:
configMapKeyRef:
name: special-config
key: SPECIAL_TYPE
restartPolicy: Never
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#use-configmap-defined-environment-variables-in-pod-commands
P.S. I don't think one Deployment per Pod makes sense; one Deployment per customer does. I don't think you understand exactly what a Deployment does: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
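For the per-customer values themselves, a hedged sketch (all names below are placeholders, not from the question) would be one ConfigMap for the non-sensitive settings and one Secret for the password per customer, each referenced only by that customer's Deployment via configMapKeyRef/secretKeyRef as shown above:
apiVersion: v1
kind: ConfigMap
metadata:
  name: customer1-db-config        # placeholder name
data:
  dbname: ts-demo1
  dbuser: svc_iafox_test
---
apiVersion: v1
kind: Secret
metadata:
  name: customer1-db-secret        # placeholder name
type: Opaque
stringData:
  dbpassword: "****"               # stringData avoids manual base64 encoding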

How to use kubernetes secrets in nodejs application?

I have a Kubernetes cluster on GCP running my Express/Node.js application, which performs CRUD operations against MongoDB.
I created one Secret containing the username and password for connecting to MongoDB, and specified that Secret as environment variables in my Kubernetes yml file.
Now my question is: how do I access that username and password in the Node.js application to connect to MongoDB?
I tried process.env.SECRET_USERNAME and process.env.SECRET_PASSWORD in the Node.js application, and both come back undefined.
Any ideas will be appreciated.
Secret.yaml
apiVersion: v1
data:
password: pppppppppppp==
username: uuuuuuuuuuuu==
kind: Secret
metadata:
creationTimestamp: 2018-07-11T11:43:25Z
name: test-mongodb-secret
namespace: default
resourceVersion: "00999"
selfLink: /api-path-to/secrets/test-mongodb-secret
uid: 0900909-9090saiaa00-9dasd0aisa-as0a0s-
type: Opaque
kubernetes.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:deployment.kubernetes.io/
revision: "4"
creationTimestamp: 2018-07-11T11:09:45Z
generation: 5
labels:
name: test
name: test
namespace: default
resourceVersion: "90909"
selfLink: /api-path-to/default/deployments/test
uid: htff50d-8gfhfa-11egfg-9gf1-42010gffgh0002a
spec:
replicas: 1
selector:
matchLabels:
name: test
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
name: test
spec:
containers:
- env:
- name: SECRET_USERNAME
valueFrom:
secretKeyRef:
key: username
name: test-mongodb-secret
- name: SECRET_PASSWORD
valueFrom:
secretKeyRef:
key: password
name: test-mongodb-secret
image: gcr-image/env-test_node:latest
imagePullPolicy: Always
name: env-test-node
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 1
conditions:
- lastTransitionTime: 2018-07-11T11:10:18Z
lastUpdateTime: 2018-07-11T11:10:18Z
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
observedGeneration: 5
readyReplicas: 1
replicas: 1
updatedReplicas: 1
Your kubernetes.yaml file specifies which environment variables your secret values are stored in, so they are accessible to apps in that namespace.
Using the kubectl secrets CLI interface you can upload your secret:
kubectl create secret generic -n node-app test-mongodb-secret --from-literal=username=a-username --from-literal=password=a-secret-password
(The namespace arg -n node-app is optional; otherwise it will upload to the default namespace.)
After running this command, you can check your kube dashboard to see that the secret has been saved.
Then, from your Node app, access the environment variable with process.env.SECRET_PASSWORD.
Perhaps in your case the secrets are created in the wrong namespace, hence the undefined values in your application.
EDIT 1
Your indentation for container.env seems to be wrong; it should look like this:
apiVersion: v1
kind: Pod
metadata:
name: secret-env-pod
spec:
containers:
- name: mycontainer
image: redis
env:
- name: SECRET_USERNAME
valueFrom:
secretKeyRef:
name: mysecret
key: username
- name: SECRET_PASSWORD
valueFrom:
secretKeyRef:
name: mysecret
key: password
restartPolicy: Never
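Once the variables are injected correctly, reading them in the Node.js app is just process.env. A minimal sketch of the MongoDB connection (the mongodb driver, the mongodb-service hostname and the mydb database name are assumptions, not taken from the question):
// Read the Secret-backed env vars injected by the Deployment.
const { MongoClient } = require('mongodb');

const username = process.env.SECRET_USERNAME;
const password = process.env.SECRET_PASSWORD;

if (!username || !password) {
  // Undefined here means the variables were not injected into this pod/namespace.
  throw new Error('SECRET_USERNAME / SECRET_PASSWORD are not set');
}

// Hostname "mongodb-service" and database "mydb" are placeholders.
const uri = `mongodb://${encodeURIComponent(username)}:${encodeURIComponent(password)}@mongodb-service:27017/mydb`;

const client = new MongoClient(uri);
client.connect()
  .then(() => console.log('Connected to MongoDB'))
  .catch((err) => console.error('MongoDB connection failed:', err));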

Multi-broker Kafka on Kubernetes how to set KAFKA_ADVERTISED_HOST_NAME

My current Kafka deployment file with 3 Kafka brokers looks like this:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: kafka
spec:
selector:
matchLabels:
app: kafka
serviceName: kafka-headless
replicas: 3
updateStrategy:
type: RollingUpdate
podManagementPolicy: Parallel
template:
metadata:
labels:
app: kafka
spec:
containers:
- name: kafka-instance
image: wurstmeister/kafka
ports:
- containerPort: 9092
env:
- name: KAFKA_ADVERTISED_PORT
value: "9092"
- name: KAFKA_ADVERTISED_HOST_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: KAFKA_ZOOKEEPER_CONNECT
value: "zookeeper-0.zookeeper-headless.default.svc.cluster.local:2181,\
zookeeper-1.zookeeper-headless.default.svc.cluster.local:2181,\
zookeeper-2.zookeeper-headless.default.svc.cluster.local:2181"
- name: BROKER_ID_COMMAND
value: "hostname | awk -F '-' '{print $2}'"
- name: KAFKA_CREATE_TOPICS
value: hello:2:1
volumeMounts:
- name: data
mountPath: /var/lib/kafka/data
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 50Gi
This creates 3 Kafka brokers as a Stateful Set and connects to the Zookeeper cluster using the Kubedns service with FQDN (Fully Qualified Domain Names) such as:
zookeeper-0.zookeeper-headless.default.svc.cluster.local:2181
Broker IDs are generated based on the pod name:
- name: BROKER_ID_COMMAND
value: "hostname | awk -F '-' '{print $2}'"
Result:
kafka-0 = 0
kafka-1 = 1
kafka-2 = 2
However, in order to use the Kubedns names for the Kafka brokers:
kafka-0.kafka-headless.default.svc.cluster.local:9092
kafka-1.kafka-headless.default.svc.cluster.local:9092
kafka-2.kafka-headless.default.svc.cluster.local:9092
I need to be able to set the KAFKA_ADVERTISED_HOST_NAME variable to the above FQDN values based on the name of the pod.
Currently I have the variable set to the name of the pod:
- name: KAFKA_ADVERTISED_HOST_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
Result:
KAFKA_ADVERTISED_HOST_NAME=kafka-0
KAFKA_ADVERTISED_HOST_NAME=kafka-1
KAFKA_ADVERTISED_HOST_NAME=kafka-2
But somehow I would need to append the rest of the DNS name.
Is there a way I could set the DNS value directly?
Something like that:
- name: KAFKA_ADVERTISED_HOST_NAME
valueFrom:
fieldRef:
fieldPath: kubedns.name
I managed to solve the problem with a command field inside the pod definition:
command:
- sh
- -c
- "export KAFKA_ADVERTISED_HOST_NAME=$(hostname).kafka-headless.default.svc.cluster.local &&
start-kafka.sh"
This runs a shell command which exports the advertised hostname environment variable based on the hostname value.
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: KAFKA_ZOOKEEPER_CONNECT
value: zook-zookeeper.zook.svc.cluster.local:2181
- name: KAFKA_PORT_NUMBER
value: "9092"
- name: KAFKA_LISTENERS
value: SASL_SSL://:$(KAFKA_PORT_NUMBER)
- name: KAFKA_ADVERTISED_LISTENERS
value: SASL_SSL://$(MY_POD_NAME).kafka-kafka-headless.kafka.svc.cluster.local:$(KAFKA_PORT_NUMBER)
The above config would create your FQDN.
You should be able to see those names in your Kafka logs when Kafka server starts.
NOTE: Kubernetes allows you to reference environment variables using the syntax $(VARIABLE)
None of the above worked for me; my setup is wurstmeister/kafka:2.12-2.5.0 and wurstmeister/zookeeper:3.4.6 in a single pod on Kubernetes 1.16 (don't ask), with a ClusterIP Service on top which forwards 9092 to the Kafka container.
This set of environment variables works:
- name: KAFKA_LISTENERS
value: "INSIDE://:9094,OUTSIDE://:9092"
- name: KAFKA_ADVERTISED_LISTENERS
value: "INSIDE://:9094,OUTSIDE://my-service.my-namespace.svc.cluster.local:9092"
- name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
value: "INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT" # not production-ready!
- name: KAFKA_INTER_BROKER_LISTENER_NAME
value: INSIDE
- name: KAFKA_ZOOKEEPER_CONNECT
value: "localhost:2181" # since it's in the same pod
Sources: wurstmeister/kafka doc, Kafka doc
The inherent problem seems to be that Kafka itself needs an IP-ish address to bind to and to talk to itself over, while clients need a DNS-ish name to connect to from the outside. The latter can't contain the pod name for some reason. (Might be a separate configuration issue on my end.)

Resources