I have a Node application running in a container that works well when I run it locally with Docker.
When I try to run it in my Kubernetes cluster, I get the following error.
kubectl -n some-namespace logs --follow my-container-5d7dfbf876-86kv7
> code#1.0.0 my-container /src
> node src/app.js
Error: unable to get local issuer certificate
at TLSSocket.onConnectSecure (_tls_wrap.js:1486:34)
at TLSSocket.emit (events.js:315:20)
at TLSSocket._finishInit (_tls_wrap.js:921:8)
at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:695:12) {
code: 'UNABLE_TO_GET_ISSUER_CERT_LOCALLY'
}
This is strange, as the only command I run the container with is
command: ["npm", "run", "consumer"]
I have also tried adding to my Dockerfile
npm config set strict-ssl false
as per the recommendation in "npm install error - unable to get local issuer certificate", but it doesn't seem to help.
So it should be trying to authenticate this way.
I would appreciate any pointers on this.
Here is a copy of my .yaml file for completeness.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: label
  name: label
  namespace: some-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      name: label
  template:
    metadata:
      labels:
        name: label
    spec:
      containers:
        - name: label
          image: some-registry:latest
          resources:
            limits:
              memory: 7000Mi
              cpu: '3'
          ports:
            - containerPort: 80
          command: ["npm", "run", "application"]
          env:
            - name: "DATABASE_URL"
              valueFrom:
                secretKeyRef:
                  name: postgres
                  key: DBUri
            - name: "DEBUG"
              value: "*,-babel,-mongo:*,mongo:queries,-http-proxy-agent,-https-proxy-agent,-proxy-agent,-superagent,-superagent-proxy,-sinek*,-kafka*"
            - name: "ENV"
              value: "production"
            - name: "NODE_ENV"
              value: "production"
            - name: "SERVICE"
              value: "consumer"
          volumeMounts:
            - name: certs
              mountPath: /etc/secrets
              readOnly: true
      volumes:
        - name: certs
          secret:
            secretName: certs
            items:
              - key: certificate
                path: certificate
              - key: key
                path: key
It looks like the pod is not mounting the secrets in the right place. Make sure that .spec.volumeMounts.mountPath is pointing at the right path for the container image.
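To verify what actually ends up in the container, you can list the mount path. And, as an assumption on my part rather than something from your post, if the failing TLS connection is made by Node itself, you may also need to point Node at the mounted CA file via NODE_EXTRA_CA_CERTS:
kubectl -n some-namespace exec -it my-container-5d7dfbf876-86kv7 -- ls /etc/secrets
# hypothetical addition to the container's env, only if 'certificate' is the CA your endpoint is signed with
- name: NODE_EXTRA_CA_CERTS
  value: /etc/secrets/certificate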
The documentation is a bit confusing; there are two sets:
https://learn.microsoft.com/en-us/azure/key-vault/general/key-vault-integrate-kubernetes
https://azure.github.io/secrets-store-csi-driver-provider-azure/configurations/identity-access-modes/pod-identity-mode/
At any rate, I'm able to do the following to see that secrets are in the Pod:
kubectl exec -it nginx-secrets-store-inline -- ls /mnt/secrets-store/
kubectl exec -it nginx-secrets-store-inline -- cat /mnt/secrets-store/secret1
This is basically where the documentation and tutorials I've seen end.
Cool... but what needs to be done to get them into the environment variables in the application running in the Pod?
For example, this is how my API deployment is set up from when I was doing kubectl create secret generic app-secrets --from-literal=PGUSER=$pguser...:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment-dev
  namespace: production
spec:
  replicas: 3
  revisionHistoryLimit: 5
  selector:
    matchLabels:
      component: api
  template:
    metadata:
      labels:
        component: api
    spec:
      containers:
        - name: api
          image: api
          ports:
            - containerPort: 5000
          env:
            - name: PGDATABASE
              valueFrom:
                secretKeyRef:
                  name: k8stut-dev-secrets
                  key: PGDATABASE
            - name: PGHOST
              value: postgres-cluster-ip-service-dev
            - name: PGPORT
              valueFrom:
                secretKeyRef:
                  name: k8stut-dev-secrets
                  key: PGPORT
            - name: PGUSER
              valueFrom:
                secretKeyRef:
                  name: k8stut-dev-secrets
                  key: PGUSER
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: k8stut-dev-secrets
                  key: PGPASSWORD
          volumeMounts:
            - mountPath: /mnt/file-storage
              name: file-storage-dev
              subPath: file-storage
      volumes:
        - name: file-storage-dev
          persistentVolumeClaim:
            claimName: file-storage-dev
---
apiVersion: v1
kind: Service
metadata:
  name: api-cluster-ip-service-dev
  namespace: development
spec:
  type: ClusterIP
  selector:
    component: api
  ports:
    - port: 5000
      targetPort: 5000
What needs to be done now with all of these?
env:
  - name: PGDATABASE
    valueFrom:
      secretKeyRef:
        name: k8stut-dev-secrets
        key: PGDATABASE
The CSI secrets store driver is a Container Storage Interface driver, so it can only mount secrets as files.
For Postgres specifically, you can use the Docker image's secrets support: set an environment variable to the path where you're mounting the secret and it will read the value from that file instead. This works by appending _FILE to the variable name.
Per that document: Currently, this is only supported for POSTGRES_INITDB_ARGS, POSTGRES_PASSWORD, POSTGRES_USER, and POSTGRES_DB.
- name: POSTGRES_DB_FILE
  value: /mnt/secrets-store/db-secret
In the general case, if you need the secrets in environment variables, I would typically use a startup script in the container to read the CSI-mounted secrets and export them. If it's a custom container, this is usually easy enough to add; if it's a standard container, you may be able to override the command with a small set of shell commands that export the appropriate variables by reading the files before calling whatever the container's normal ENTRYPOINT would have been.
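For illustration, a minimal sketch of that wrapper approach (the file names under /mnt/secrets-store and the variable names are placeholders, not taken from your setup):
#!/bin/sh
# entrypoint-wrapper.sh: export CSI-mounted secret files as env vars, then start the app
# Placeholder file names; use whatever objects your SecretProviderClass actually mounts
export PGUSER="$(cat /mnt/secrets-store/pguser)"
export PGPASSWORD="$(cat /mnt/secrets-store/pgpassword)"
# Hand off to the container's original entrypoint/command
exec "$@"
You would then set this script as the container's command, passing the original entrypoint and its arguments after it.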
The answer above by Patrick helped, but is not fully correct. AKS also supports "syncing" Key Vault secrets into Kubernetes Secrets, which can then be used as environment variables.
See the Microsoft docs on how to set up syncing a secret into Kubernetes:
https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-driver#sync-mounted-content-with-a-kubernetes-secret
And this article shows how you can reference the secret in an environment variable:
https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-driver#set-an-environment-variable-to-reference-kubernetes-secrets
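As a rough sketch of what those two pages describe (names such as app-kv-secrets, db-password and DB_PASSWORD are placeholders): the SecretProviderClass declares a secretObjects section so the driver creates a regular Kubernetes Secret from the mounted content, and the pod then references it with an ordinary secretKeyRef. Note that the pod still has to mount the CSI volume, otherwise the sync never happens.
# SecretProviderClass excerpt (placeholder names)
spec:
  secretObjects:
    - secretName: app-kv-secrets   # Kubernetes Secret the driver will create and sync
      type: Opaque
      data:
        - objectName: db-password  # object name as listed in the Key Vault parameters
          key: DB_PASSWORD
# Container spec excerpt
env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: app-kv-secrets
        key: DB_PASSWORD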
So I am learning about Kubernetes with a guide, I am trying to deploy a MongoDB Pod with 1 replica. This is the deployment config file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-username
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-password
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
I also tried to deploy a Mongo-Express Pod with almost the same config file, but I keep getting CrashLoopBackOff for both Pods. From the little understanding I have, this is caused by the container failing and restarting in a cycle. I went through the events with kubectl get events and saw that a warning with the message Back-off restarting failed container keeps occurring. I also did a little digging around and came across a solution that says to add
command: ['sleep']
args: ['infinity']
That fixed the CrashLoopBackOff issue, but when I try to get the logs for the Pod, nothing is displayed in the terminal. I would appreciate some help and, if possible, an explanation of how the command and args seem to fix it, and how I can stop this crash from happening to my Pods. Thank you very much.
My advice is to deploy MongoDB as a StatefulSet on Kubernetes.
In a stateful application, the N replicas of master nodes manage several worker nodes within a cluster, so if any master node goes down, the other ordinal instances remain active to execute the workflow. In a StatefulSet, each instance is identified by a unique, stable ordinal number.
See more: mongodb-sts, mongodb-on-kubernetes.
Also use a headless Service to manage the domain of a Pod. With a headless Service there is no load balancer or single Service IP proxying traffic to the Pods; clients interact with the Pods directly, so the cluster IP is set to None.
In your case:
apiVersion: v1
kind: Service
metadata:
  name: mongodb
spec:
  clusterIP: None
  selector:
    app: mongodb
  ports:
    - port: 27017
The error:
uncaught exception: Error: couldn't add user: Error preflighting normalization: U_STRINGPREP_PROHIBITED_ERROR _getErrorWithCode@src/mongo/shell/utils.js:25:13
also indicates that the secret may be missing. Take a look: mongodb-initializating.
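A quick way to confirm the secret actually exists in the cluster and has the expected keys (plain kubectl, nothing tutorial-specific):
kubectl get secret mongodb-secret -o yaml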
In your case secret should look similar:
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret
type: Opaque
data:
  mongo-root-username: YWRtaW4=
  mongo-root-password: MWYyZDFlMmU2N2Rm
Remember to also configure a volume for your Pods; follow the tutorials I have linked above.
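For illustration only, a minimal volume wiring for the mongo container might look like this (mongo-pvc is a placeholder claim name; the linked tutorials show a more complete setup):
# Inside the container spec
volumeMounts:
  - name: mongo-data
    mountPath: /data/db
# At the Pod spec level
volumes:
  - name: mongo-data
    persistentVolumeClaim:
      claimName: mongo-pvc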
Deploy MongoDB with a StatefulSet, not as a Deployment.
Example:
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  labels:
    name: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongod
spec:
  serviceName: mongodb-service
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo
        environment: test
        replicaset: MainRepSet
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: replicaset
                      operator: In
                      values:
                        - MainRepSet
                topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 10
      volumes:
        - name: secrets-volume
          secret:
            secretName: shared-bootstrap-data
            defaultMode: 256
      containers:
        - name: mongod-container
          #image: pkdone/mongo-ent:3.4
          image: mongo
          command:
            - "numactl"
            - "--interleave=all"
            - "mongod"
            - "--wiredTigerCacheSizeGB"
            - "0.1"
            - "--bind_ip"
            - "0.0.0.0"
            - "--replSet"
            - "MainRepSet"
            - "--auth"
            - "--clusterAuthMode"
            - "keyFile"
            - "--keyFile"
            - "/etc/secrets-volume/internal-auth-mongodb-keyfile"
            - "--setParameter"
            - "authenticationMechanisms=SCRAM-SHA-1"
          resources:
            requests:
              cpu: 0.2
              memory: 200Mi
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: secrets-volume
              readOnly: true
              mountPath: /etc/secrets-volume
            - name: mongodb-persistent-storage-claim
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: mongodb-persistent-storage-claim
        annotations:
          volume.beta.kubernetes.io/storage-class: "standard"
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
I'm trying to build an Azure DevOps Linux build agent in Azure Kubernetes Service.
I created the YAML file and created the secrets to use inside of the file.
I applied the file and got "CreateContainerConfigError", with my pod in a "waiting" state.
I run the command
"kubectl get pod <pod name> -o yaml"
and it states the secret "vsts" could not be found.
I find this weird because I used "kubectl get secrets" and I see the secrets "vsts-account" and "vsts-token" listed.
You may check your Kubernetes configuration, which is supposed to look like the below:
apiVersion: v1
kind: ReplicationController
metadata:
  name: vsts-agent
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: vsts-agent
        version: "0.1"
    spec:
      containers:
        - name: vsts-agent
          image: microsoft/vsts-agent:ubuntu-16.04-docker-18.06.1-ce-standard
          env:
            - name: VSTS_ACCOUNT
              valueFrom:
                secretKeyRef:
                  name: vsts
                  key: VSTS_ACCOUNT
            - name: VSTS_TOKEN
              valueFrom:
                secretKeyRef:
                  name: vsts
                  key: VSTS_TOKEN
            - name: VSTS_POOL
              value: dockerized-vsts-agents
          volumeMounts:
            - mountPath: /var/run/docker.sock
              name: docker-volume
      volumes:
        - name: docker-volume
          hostPath:
            path: /var/run/docker.sock
You may follow the blog below to see whether it helps you:
https://mohitgoyal.co/2019/01/10/run-azure-devops-private-agents-in-kubernetes-clusters/
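Note that the manifest above reads both keys from a single secret named vsts, while your cluster appears to contain two separate secrets (vsts-account and vsts-token). One way to line this up, as a sketch with placeholder values, is to create the secret the manifest expects:
kubectl create secret generic vsts \
  --from-literal=VSTS_ACCOUNT=<your-account-name> \
  --from-literal=VSTS_TOKEN=<your-pat-token>
Alternatively, change the secretKeyRef names in the manifest to match the secrets you already have.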
I have a ReactJS app running in my pod, and I have mounted the source code from the host machine into the pod. It works fine, but when I change the code on the host machine the pod's source code also changes, yet when I run the site the change has no effect on the application. Here is my manifest; what am I doing wrong?
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 1
  minReadySeconds: 15
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: webapp
        tier: frontend
        phase: development
    spec:
      containers:
        - name: webapp
          image: xxxxxx
          command:
            - npm
          args:
            - run
            - dev
          env:
            - name: environment
              value: dev
            - name: AUTHOR
              value: webapp
          ports:
            - containerPort: 3000
          volumeMounts:
            - mountPath: /code
              name: code
      imagePullSecrets:
        - name: regcred
      volumes:
        - name: code
          hostPath:
            path: /hosthome/xxxx/development/react-app/src
And I know for a fact that npm is not watching my changes; how can I resolve this in the pods?
Basically, you need to reload your application every time you change your code, and your pods don't reload or restart when you change the code under the /code directory. You will have to re-create your pod. Since you are using a Deployment, you can either run:
kubectl delete <pod-where-your-app-is-running>
or
export PATCH='{"spec":{"template":{"metadata":{"annotations":{"timestamp":"'$(date)'"}}}}}'
kubectl patch deployment webapp -p "$PATCH"
Your pods should restart after that.
What Rico has mentioned is correct: you need to patch or rebuild with every change. But you can avoid that by running minikube without a VM driver (this only works on Linux); by doing so you can mount a host path into the pod. Here is the command to run minikube without a VM driver. Hope this will help.
sudo minikube start --bootstrapper=localkube --vm-driver=none --apiserver-ips 127.0.0.1 --apiserver-name localhost -v=1
I can't deploy a pod using a private image (ACR) with the CLI and a YAML file.
Deploying from the registry directly using either az container or kubectl run does work, however.
Pod status:
"containers": [
{
"count": 3,
"firstTimestamp": "2017-08-26T07:31:36+00:00",
"lastTimestamp": "2017-08-26T07:32:20+00:00",
"message": "Failed: Failed to pull image \"ucont01.azurecr.io/unreal-deb\": rpc error: code 2 desc Error: im age unreal-deb:latest not found",
"type": "Warning"
},
],
},
YAML file:
apiVersion: v1
kind: Pod
metadata:
  generateName: "game-"
  namespace: default
spec:
  nodeName: aci-connector
  dnsPolicy: ClusterFirst
  restartPolicy: Never
  containers:
    - name: unreal-dev-server
      image: ucont01.azurecr.io/unreal-deb
      imagePullPolicy: Always
      ports:
        - containerPort: 7777
          protocol: UDP
  imagePullSecrets:
    - name: registrykey
Unfortunately the aci-connector-k8s doesn't currently support images from private repositories. There is an issue open to add support but it's not currently implemented.
https://github.com/Azure/aci-connector-k8s/issues/35
According to your description, could you please check your repositories via the Azure portal, like this:
Using your YAML, it works for me:
apiVersion: v1
kind: Pod
metadata:
  generateName: "game-"
  namespace: default
spec:
  nodeName: k8s-agent-379980cb-0
  dnsPolicy: ClusterFirst
  restartPolicy: Never
  containers:
    - name: unreal-dev-server
      image: jasontest.azurecr.io/samples/nginx
      imagePullPolicy: Always
      ports:
        - containerPort: 7777
          protocol: TCP
  imagePullSecrets:
    - name: secret1
Here is the screenshot:
Here is my secret:
jason#k8s-master-379980CB-0:~$ kubectl get secret
NAME                  TYPE                                  DATA      AGE
default-token-865dj   kubernetes.io/service-account-token   3         1h
secret1               kubernetes.io/dockercfg               1         47m
If the credentials (corresponding to registrykey) are incorrect, you may get an 'image not found' error even though the image exists. You may want to verify the registrykey credentials again.
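For reference, the registry credentials secret can be recreated with kubectl; the server, username, password and email values below are placeholders:
kubectl create secret docker-registry registrykey \
  --docker-server=ucont01.azurecr.io \
  --docker-username=<acr-username> \
  --docker-password=<acr-password> \
  --docker-email=<any-email>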