Access environment variables set by configMapRef in kubernetes pod - node.js

I have a set of environment variables in my deployment using envFrom and configMapRef. The environment variables held in these ConfigMaps were originally generated by kustomize from JSON files.
spec.template.spec.containers[0]:

envFrom:
- secretRef:
    name: eventstore-login
- configMapRef:
    name: environment
- configMapRef:
    name: eventstore-connection
- configMapRef:
    name: graylog-connection
- configMapRef:
    name: keycloak
- configMapRef:
    name: database
The issue is that it's not possible for me to access the specific environment variables directly.
Here is the result of running printenv in the pod:
...
eventstore-login={
    "EVENT_STORE_LOGIN": "admin",
    "EVENT_STORE_PASS": "changeit"
}
environment={
    "LOTUS_ENV":"dev",
    "DEV_ENV":"dev"
}
eventstore={
    "EVENT_STORE_HOST": "eventstore-cluster",
    "EVENT_STORE_PORT": "1113"
}
graylog={
    "GRAYLOG_HOST":"",
    "GRAYLOG_SERVICE_PORT_GELF_TCP":""
}
...
This means that from my Node.js app I need to do something like this:
> process.env.graylog
'{\n "GRAYLOG_HOST":"",\n "GRAYLOG_SERVICE_PORT_GELF_TCP":""\n}\n'
This only returns the JSON string corresponding to my original JSON file. But I want to be able to do something like this:
process.env.GRAYLOG_HOST
to retrieve my environment variables. But I don't want to have to modify my deployment to look like this:
env:
- name: NODE_ENV
  value: dev
- name: EVENT_STORE_HOST
  valueFrom:
    secretKeyRef:
      name: eventstore-secret
      key: EVENT_STORE_HOST
- name: EVENT_STORE_PORT
  valueFrom:
    secretKeyRef:
      name: eventstore-secret
      key: EVENT_STORE_PORT
- name: KEYCLOAK_REALM_PUBLIC_KEY
  valueFrom:
    configMapKeyRef:
      name: keycloak-local
      key: KEYCLOAK_REALM_PUBLIC_KEY
where every variable is explicitly declared. I could do this, but it is more of a pain to maintain.

Short answer:
You will need to define the variables explicitly, or change the ConfigMaps so that they have a 1 environment variable = 1 value structure; this way you will be able to refer to them using envFrom. E.g.:
"apiVersion": "v1",
"data": {
"EVENT_STORE_LOGIN": "admin",
"EVENT_STORE_PASS": "changeit"
},
"kind": "ConfigMap",
More details
ConfigMaps are key-value pairs; that means for one key there is only one value. A ConfigMap can hold a string as data, but it cannot hold a nested map.
I tried editing the ConfigMap manually to confirm the above, and got the following:
invalid type for io.k8s.api.core.v1.ConfigMap.data: got "map", expected "string"
This is the reason why environment comes up as one string instead of a structure.
For example, this is how a ConfigMap created from such a JSON file looks:
$ kubectl describe cm test2
Name:         test2
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
test.json:
----
environment={
    "LOTUS_ENV":"dev",
    "DEV_ENV":"dev"
}
And this is how it's stored in Kubernetes:
$ kubectl get cm test2 -o json
{
    "apiVersion": "v1",
    "data": {
        "test.json": "environment={\n \"LOTUS_ENV\":\"dev\",\n \"DEV_ENV\":\"dev\"\n}\n"
    },
    ...
}
In other words, the observed behaviour is expected.
Useful links:
ConfigMaps: https://kubernetes.io/docs/concepts/configuration/configmap/
Configure a Pod to Use a ConfigMap: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/

Related

How to iterate all the variables from a variables template file in azure pipeline?

test_env_template.yml
variables:
- name: DB_HOSTNAME
  value: 10.123.56.222
- name: DB_PORTNUMBER
  value: 1521
- name: USERNAME
  value: TEST
- name: PASSWORD
  value: TEST
- name: SCHEMANAME
  value: SCHEMA
- name: ACTIVEMQNAME
  value: 10.123.56.223
- name: ACTIVEMQPORT
  value: 8161
and many more variables in the list.
I want to iterate through all the variables in test_env_template.yml using a loop to replace the values in a file. Is there a way to do that rather than calling each value separately like ${{ variables.ACTIVEMQNAME }}, given that the number of variables in the template is dynamic?
In short, no. There is no easy way to get only the pipeline variables that came from a specific template. You can get environment variables, but there you will get regular environment variables plus the pipeline variables mapped to environment variables.
You can get them via env | sort, but I'm pretty sure that this is not what you want.
You can't display variables specific to a template, but you can get all pipeline variables this way:
steps:
- pwsh: |
    Write-Host "${{ convertToJson(variables) }}"
and then you will get:
{
  system: build,
  system.hosttype: build,
  system.servertype: Hosted,
  system.culture: en-US,
  system.collectionId: be1a2b52-5ed1-4713-8508-ed226307f634,
  system.collectionUri: https://dev.azure.com/thecodemanual/,
  system.teamFoundationCollectionUri: https://dev.azure.com/thecodemanual/,
  system.taskDefinitionsUri: https://dev.azure.com/thecodemanual/,
  system.pipelineStartTime: 2021-09-21 08:06:07+00:00,
  system.teamProject: DevOps Manual,
  system.teamProjectId: 4fa6b279-3db9-4cb0-aab8-e06c2ad550b2,
  system.definitionId: 275,
  build.definitionName: kmadof.devops-manual 123 ,
  build.definitionVersion: 1,
  build.queuedBy: Krzysztof Madej,
  build.queuedById: daec281a-9c41-4c66-91b0-8146285ccdcb,
  build.requestedFor: Krzysztof Madej,
  build.requestedForId: daec281a-9c41-4c66-91b0-8146285ccdcb,
  build.requestedForEmail: krzysztof.madej#hotmail.com,
  build.sourceVersion: 583a276cd9a0f5bf664b4b128f6ad45de1592b14,
  build.sourceBranch: refs/heads/master,
  build.sourceBranchName: master,
  build.reason: Manual,
  system.pullRequest.isFork: False,
  system.jobParallelismTag: Public,
  system.enableAccessToken: SecretVariable,
  DB_HOSTNAME: 10.123.56.222,
  DB_PORTNUMBER: 1521,
  USERNAME: TEST,
  PASSWORD: TEST,
  SCHEMANAME: SCHEMA,
  ACTIVEMQNAME: 10.123.56.223,
  ACTIVEMQPORT: 8161
}
If you prefix them, you can try to filter them using jq.
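For example, a minimal sketch of that filtering, assuming the template variables share a prefix such as DB_ (pipeline variables are exposed to script steps as environment variables):

steps:
- bash: |
    # Pipeline variables are mapped to environment variables,
    # so a shared prefix makes them easy to pick out.
    env | sort | grep '^DB_'
  displayName: List variables with the DB_ prefix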

kustomize with azure secret provider class

I have a SecretProviderClass resource defined for my Azure Kubernetes Service deployment, which allows me to create secrets from Azure Key Vault. I'd like to use Kustomize with it in order to unify my deployments across multiple environments. Here is my manifest:
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: azure-kvname
spec:
  provider: azure
  secretObjects:
  - data:
    - key: dbuser
      objectName: db-user
    - key: dbpassword
      objectName: db-pass
    - key: admin
      objectName: admin-user
    - key: adminpass
      objectName: admin-password
    secretName: secret
    type: Opaque
  parameters:
    usePodIdentity: "true"
    keyvaultName: "dev-keyvault"
    cloudName: ""
    objects: |
      array:
        - |
          objectName: db-user
          objectType: secret
          objectVersion: ""
        - |
          objectName: db-pass
          objectType: secret
          objectVersion: ""
        - |
          objectName: admin-user
          objectType: secret
          objectVersion: ""
        - |
          objectName: admin-password
          objectType: secret
          objectVersion: ""
    tenantId: "XXXXXXXXXXXX"
This is the manifest that I use as a base. I'd like to apply an overlay on top of it with values depending on the environment that I am deploying to. To be specific, I'd like to modify the objectName properties. I tried applying a JSON 6902 patch:
- op: replace
  path: /spec/parameters/objects/array/0/objectName
  value: "dev-db-user"
- op: replace
  path: /spec/parameters/objects/array/1/objectName
  value: "dev-db-password"
- op: replace
  path: /spec/parameters/objects/array/2/objectName
  value: "dev-admin-user"
- op: replace
  path: /spec/parameters/objects/array/3/objectName
  value: "dev-admin-password"
Unfortunately, it's not working and it is not replacing the values. Is it possible with Kustomize?
Unfortunately, the value that you're trying to access is not a nested YAML array: the pipe symbol at the end of a line in YAML signifies that any indented text that follows should be interpreted as a multi-line scalar (string) value.
With Kustomize you'd probably need to replace the whole /spec/parameters/objects value, as sketched below.
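A hedged sketch of that whole-value replacement as a JSON 6902 patch; note the overlay has to re-declare the entire string, not just the names:

- op: replace
  path: /spec/parameters/objects
  value: |
    array:
      - |
        objectName: dev-db-user
        objectType: secret
        objectVersion: ""
      - |
        objectName: dev-db-password
        objectType: secret
        objectVersion: ""
      - |
        objectName: dev-admin-user
        objectType: secret
        objectVersion: ""
      - |
        objectName: dev-admin-password
        objectType: secret
        objectVersion: ""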
If you haven't committed to Kustomize for good yet, you may instead consider a templating engine like Helm, which would allow you to replace values inside this string...
...or you can use a combination of Helm for templating and Kustomize for resource management, patches for specific configuration, and overlays.
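If you went the Helm route, a minimal sketch of templating that string, assuming a keyVaultSecrets list in the values file (that key name is an invention for this example):

# templates/secretproviderclass.yaml (fragment)
  parameters:
    objects: |
      array:
      {{- range .Values.keyVaultSecrets }}
        - |
          objectName: {{ . }}
          objectType: secret
          objectVersion: ""
      {{- end }}

# values-dev.yaml
keyVaultSecrets:
- dev-db-user
- dev-db-password
- dev-admin-user
- dev-admin-password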

Environmental variables returning undefined for Kubernetes deployment

I posted a question similar to this and tried to implement what the answer to it said: How to access Kubernetes container environment variables from Next.js application?
However, when I call my environment variables with process.env.USERNAME, I'm still getting undefined back... Am I doing something wrong in my deployment file? Here is a copy of my deployment.yaml:
metadata:
  namespace: <namespace>
  releaseName: <release name>
  releaseVersion: 1.0.0
  target: <target>
auth:
  replicaCount: 1
  image:
    repository: '<name of repository is here>'
    pullPolicy: <always>
  container:
    multiPorts:
    - containerPort: 443
      name: HTTPS
      protocol: TCP
    - containerPort: 80
      name: HTTP
      protocol: TCP
    env:
    - name: USERNAME
      valueFrom:
        secretKeyRef:
          name: my-username
          key: username
    - name: PASSWORD
      valueFrom:
        secretKeyRef:
          name: my-password
          key: password
    - name: HOST
      valueFrom:
        secretKeyRef:
          name: my-host
          key: host
    volumeMounts:
    - name: config
      mountPath: "/configMap"
      readOnly: true
  volume:
  - name: config
    configMap:
      name: environmental-variables
  resources:
    requests:
      cpu: 0.25
      memory: 256Mi
    limits:
      cpu: 1
      memory: 1024Mi
  variables:
  - name: NODE_ENV
    value: <node env value here>
ingress:
  enabled: true
  ingressType: <ingressType>
  applicationType: <application type>
  serviceEndpoint: <endpoint>
  multiPaths:
  - path: /
  - HTTPS
  tls:
    enabled: true
    secretName: <name>
autoscale:
  enabled: false
  minReplicas: 1
  maxReplicas: 5
  cpuAverageUtilization: 50
  memoryUtilizationValue: 50
annotations:
  ingress:
    nginx.ingress.kubernetes.io/affinity: <affinity>
    nginx.ingress.kubernetes.io/session-cookie-name: <cookie-name>
    nginx.ingress.kubernetes.io/session-cookie-expires: <number>
    nginx.ingress.kubernetes.io/session-cookie-max-age: <number>
I also created a configMap.yaml, although I'm not sure if that's the right way to do this. Here it is:
apiVersion: v1
kind: ConfigMap
metadata:
  name: environmental-variables
data:
  .env: |
    USERNAME: <username>
    PASSWORD: <password>
    HOST: <host>
Any help will be greatly appreciated! I'm also trying to keep my environment variables as Secrets, since they contain sensitive information I don't want to expose. I am doing this in a Node.js application using Express. Thank you!
EDIT: Here is how the Secrets part looks in my yaml file:
secrets:
- name: environmental-variables
  key: USERNAME
- name: environmental-variables
  key: PASSWORD
And here is how my Secrets yaml file looks:
kind: Secret
apiVersion: v1
metadata:
  name: environmental-variables
  namespace: tda-dev-duck-dev
data:
  USERNAME: <username>
  PASSWORD: <password>
After days of figuring out how to use Secrets as environment variables, I figured out how to reference them in my Node.js application!
Before, I was calling environment variables the normal way, process.env.VARIABLE_NAME, but that did not work for me when the values came from Secrets. In order to get the value of the variable, I had to use process.env.ENVIRONMENTAL_VARIABLES_USERNAME in my JavaScript file, and that worked for me! Here ENVIRONMENTAL_VARIABLES is the name and USERNAME is the key!
Not sure if this will help anyone else, but this is how I managed to access my Secrets in my Node.js application!
You created a ConfigMap but are trying to get the value from a Secret. If you want to set the values from the ConfigMap, update env like the following:
env:
- name: USERNAME
  valueFrom:
    configMapKeyRef:
      name: environmental-variables   # this is the ConfigMap name
      key: USERNAME                    # this is the key in the ConfigMap
- name: PASSWORD
  valueFrom:
    configMapKeyRef:
      name: environmental-variables
      key: PASSWORD
- name: HOST
  valueFrom:
    configMapKeyRef:
      name: environmental-variables
      key: HOST
and update the ConfigMap like the following:
apiVersion: v1
kind: ConfigMap
metadata:
  name: environmental-variables
data:
  USERNAME: <username>
  PASSWORD: <password>
  HOST: <host>
To learn how to define container environment variables using ConfigMap data, see https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-container-environment-variables-using-configmap-data
If you want to use Secrets as environment variables, see https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables
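If you want these values to stay Secrets rather than move to a ConfigMap, a hedged variant of the same idea: define the Secret with stringData (so you don't have to base64-encode the values yourself, as plain data would require) and reference it with secretKeyRef:

apiVersion: v1
kind: Secret
metadata:
  name: environmental-variables
type: Opaque
stringData:   # plain text here; Kubernetes stores it base64-encoded under data
  USERNAME: <username>
  PASSWORD: <password>
  HOST: <host>

env:
- name: USERNAME
  valueFrom:
    secretKeyRef:
      name: environmental-variables
      key: USERNAME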

Is it possible to overwrite or create a new version of a secret through ansible in azure?

I need to deploy my secrets to Azure's Key Vault through Ansible.
If the secret is a new one (i.e. it didn't exist before) it works perfectly; the secret is created properly.
The problem comes when I need to update the secret: it is never overwritten.
I tried to delete it and create it again, but that doesn't work either, since Azure performs a soft delete, so the secret cannot be re-created with the same name right away.
Here is what I have tried so far.
Secret creation (works fine the first time, but does not overwrite):
- name: "Create endpoint secret."
azure_rm_keyvaultsecret:
secret_name: mysecret
secret_value: "desiredvalue"
keyvault_uri: "https://{{ AZURE_KV_NAME }}.vault.azure.net/"
tags:
environment: "{{ ENV }}"
role: "endpointsecret"
Here is how I try to delete it first and then create it again:
- name: "Delete endpoint secret."
azure_rm_keyvaultsecret:
secret_name: mysecret
keyvault_uri: "https://{{ AZURE_KV_NAME }}.vault.azure.net/"
state: "absent"
- name: "Create endpoint secret."
azure_rm_keyvaultsecret:
secret_name: mysecret
secret_value: "desiredvalue"
keyvault_uri: "https://{{ AZURE_KV_NAME }}.vault.azure.net/"
tags:
environment: "{{ ENV }}"
role: "endpointsecret"
When trying this, the error is:
Secret mysecret is currently being deleted and cannot be re-created; retry later
Secret creation with state: present (it's not creating a new version either):
- name: "Create endpoint secret."
azure_rm_keyvaultsecret:
secret_name: mysecret
secret_value: "desiredvalue"
keyvault_uri: "https://{{ AZURE_KV_NAME }}.vault.azure.net/"
state: "present"
tags:
environment: "{{ ENV }}"
role: "endpointsecret"
Any idea how to overwrite (create a new version of) a secret, or at least perform a hard delete?
I found no way other than deploying it through an ARM template:
- name: "Create ingestion keyvault secrets."
azure_rm_deployment:
state: present
resource_group_name: "{{ AZURE_RG_NAME }}"
location: "{{ AZURE_RG_LOCATION }}"
template:
$schema: "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#"
contentVersion: "1.0.0.0"
parameters:
variables:
resources:
- apiVersion: "2018-02-14"
type: "Microsoft.KeyVault/vaults/secrets"
name: "{{AZURE_KV_NAME}}/{{item.name}}"
properties:
value: "{{item.secret}}"
contentType: "string"
loop: "{{ SECRETLIST }}"
register: publish_secrets
async: 300 # Maximum runtime in seconds.
poll: 0 # Fire and continue (never poll)
- name: Wait for the secret deployment task to finish
async_status:
jid: "{{ publish_secrets_item.ansible_job_id }}"
loop: "{{publish_secrets.results}}"
loop_control:
loop_var: "publish_secrets_item"
register: jobs_publish_secrets
until: jobs_publish_secrets.finished
retries: 5
delay: 2
And then, in another file, SECRETLIST is declared as a variable:
SECRETLIST:
- name: mysecret
  secret: "secretvalue"
- name: othersecret
  secret: "secretvalue2"
Hope this helps anyone with a similar problem.
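An alternative sketch, assuming the Azure CLI is installed and logged in on the Ansible host: az keyvault secret set always writes a new version when the secret already exists, which sidesteps the module's behaviour:

- name: "Create or update endpoint secret (new version on every run)."
  command: >
    az keyvault secret set
    --vault-name "{{ AZURE_KV_NAME }}"
    --name mysecret
    --value "desiredvalue"
  no_log: true   # don't leak the secret value into the job log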

Managing multiple Pods with one Deploy

Good afternoon, I need help defining the structure of my production cluster. I want something like:
- 1 Deployment that controls the Pods
- multiple Pods (one Pod per customer)
- multiple Services (one Service per Pod)
But how will I build this structure if each Pod has env vars that connect to the customer's database, like this:
env:
- name: dbuser
  value: "svc_iafox_test#***"
- name: dbpassword
  value: "****"
- name: dbname
  value: "ts-demo1"
- name: dbconnectstring
  value: "jdbc:sqlserver://***-test.database.windows.net:1433;database=$(dbname);user=$(dbuser);password=$(dbpassword);encrypt=true;trustServerCertificate=true;hostNameInCertificate=*.database.windows.net;loginTimeout=30;"
So for each Pod I would have to change these env vars... Anyway, what is the best way for me to do this?
You could use a ConfigMap to achieve that:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "echo $(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)" ]
    env:
    - name: SPECIAL_LEVEL_KEY
      valueFrom:
        configMapKeyRef:
          name: special-config
          key: SPECIAL_LEVEL
    - name: SPECIAL_TYPE_KEY
      valueFrom:
        configMapKeyRef:
          name: special-config
          key: SPECIAL_TYPE
  restartPolicy: Never
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#use-configmap-defined-environment-variables-in-pod-commands
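For completeness, the special-config ConfigMap referenced above (taken from the linked docs page) looks like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
data:
  SPECIAL_LEVEL: very
  SPECIAL_TYPE: charm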
P.S. I don't think 1 Deployment per Pod makes sense; 1 Deployment per customer does. I don't think you understand exactly what a Deployment does: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
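Applied to your case, a hedged sketch: one ConfigMap per customer holding the database values, with each customer's Deployment pulling in its own map via envFrom (the names are examples; in practice dbpassword belongs in a Secret rather than a ConfigMap):

apiVersion: v1
kind: ConfigMap
metadata:
  name: customer-a-db   # one per customer: customer-b-db, ...
data:
  dbname: ts-demo1
  dbuser: svc_iafox_test#***

# in customer A's Deployment pod template:
envFrom:
- configMapRef:
    name: customer-a-db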
