I am reading the configuration details from the configmap as shown below
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: akv2k8s-test
  name: akv2k8s-test
data:
  Log_Level: Debug
  Alert_Email: Test#demo.com
Pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: akv2k8s-test
  namespace: akv2k8s-test
spec:
  containers:
    - name: akv2k8s-env-test
      image: kavija/python-env-variable:latest
      envFrom:
        - configMapRef:
            name: akv2k8s-test
and reading it with Python:
import os
loglevel = os.environ['Log_Level']
alertemail = os.environ['Alert_Email']
print("Running with Log Level: %s, Alert Email:%s" % (loglevel,alertemail))
I want to update the configuration values while deploying, but the command below fails:
kubectl apply -f deploy.yaml --env="Log_Level=error"
How do I pass the environment variable while deploying?
Since you want to update the ConfigMap while deploying, the easiest way would be to get the file content using cat, update the value with sed, and pipe the result to kubectl apply:
cat configmap.yaml |sed -e 's|Log_Level: Debug|Log_Level: error|' | kubectl apply -f -
If you want to update an existing ConfigMap instead, use the kubectl patch command:
kubectl patch configmap/akv2k8s-test -n akv2k8s-test --type merge -p '{"data":{"Log_Level":"error"}}'
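Note that environment variables injected through envFrom are only read when the container starts, so after patching the ConfigMap the pod has to be recreated before the new value shows up. A minimal sketch, reusing the pod and namespace names from the question:
# recreate the pod so envFrom re-reads the ConfigMap (plain pods are not restarted automatically)
kubectl delete pod akv2k8s-test -n akv2k8s-test
kubectl apply -f deploy.yaml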
My requirement is as follows:
A developer creates a branch in Jenkins. Let's say the branch name is "mystory-101"
The developer pushes code to this branch
A Jenkins job starts as soon as a commit is pushed to the branch "mystory-101" and creates a new Docker image for this branch if one doesn't exist already
My application is a Node.js based app, so the Docker container starts Node.js and deploys the code from the branch "mystory-101"
After the code is deployed and Node.js is running, I would also like this Node.js app to be accessible via the URL: https://mystory-101.mycompany.com
For this purpose I was reading https://medium.com/swlh/ci-cd-pipeline-using-jenkins-dynamic-nodes-86ea854ff7a7
but I am not sure how to achieve step #5. Can you please advise how to achieve this automatically?
Reformatting answers from the comments: given a Jenkins installation and a Kubernetes cluster, you can automate your deployments using a Jenkins plugin such as oc or kubernetes, or you may prefer to use the kubectl client directly, assuming your agents have that binary available.
Not going into the RBAC specifics, you would probably need a ServiceAccount for Jenkins, and use a token (which can be found in a Secret named after your ServiceAccount). That ServiceAccount should have enough privileges to create resources in the namespaces you intend to deploy into -- usually the edit ClusterRole, with a namespace-scoped RoleBinding:
kubectl create sa jenkins -n my-namespace
kubectl create rolebinding jenkins-edit \
  --clusterrole=edit \
  --serviceaccount=my-namespace:jenkins \
  --namespace=my-namespace
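To let Jenkins authenticate with that ServiceAccount, you can read its token out of the associated Secret and store it as a Jenkins credential. A sketch, assuming a cluster old enough (pre-1.24) that token Secrets are still auto-created for ServiceAccounts:
# name of the token Secret auto-created for the ServiceAccount
TOKEN_SECRET=$(kubectl get sa jenkins -n my-namespace -o jsonpath='{.secrets[0].name}')
# decode the bearer token; paste it into a Jenkins "secret text" credential
kubectl get secret "$TOKEN_SECRET" -n my-namespace -o jsonpath='{.data.token}' | base64 -d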
Once Jenkins is done building your image, you would deploy it to Kubernetes, most likely creating a Deployment, a Service, and an Ingress, substituting resource names, namespaces, and the requested Ingress FQDN to match your requirements.
Prepare your deployment YAML, something like:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-BRANCH
spec:
  selector:
    matchLabels:
      name: app-BRANCH
  template:
    metadata:
      labels:
        name: app-BRANCH
    spec:
      containers:
        - image: my-registry/path/to/image:BRANCH
          [...]
---
apiVersion: v1
kind: Service
metadata:
  name: app-BRANCH
spec:
  selector:
    name: app-BRANCH
  ports:
    [...]
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-BRANCH
spec:
  rules:
    - host: app-BRANCH.my-base-domain.com
      http:
        paths:
          - backend:
              serviceName: app-BRANCH
Then, have your Jenkins agent apply that configuration, substituting values properly:
sed "s|BRANCH|$BRANCH|" deploy.yaml | kubectl apply -n my-namespace -f-
kubectl wait -n my-namespace deploy/app-$BRANCH --for=condition=Available
kubectl logs -n my-namespace deploy/app-$BRANCH --tail=200
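To check that the branch environment is reachable at the URL from the requirement, you could inspect the generated Ingress and then hit the host (illustrative only; it assumes DNS for *.mycompany.com already points at your ingress controller):
kubectl get ingress -n my-namespace app-$BRANCH
curl -k https://mystory-101.mycompany.com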
I have a nodejs application which stores variables in environment variables.
I'm using the dotenv module, so I have a .env file that looks like :
VAR1=value1
VAR2=something_else
I'm currently setting up a BitBucket Pipeline to auto deploy this to a Kubernetes cluster.
I'm not very familiar with kubernetes secrets, though I'm reading up on them.
I'm wondering :
Is there an easy way to pass all of the environment variables I have defined in my .env file to the Docker container / Kubernetes deployment, so they are available in the pods my app is running in?
I'm hoping for an example secrets.yml file or similar which takes everything from .env and turns it into environment variables in the container. But it could also be done at the BitBucket pipeline level, or at the Docker container level... I'm not sure.
Step 1: Create a k8s secret with your .env file:
# kubectl create secret generic <secret-name> --from-env-file=<path-to-env-file>
$ kubectl create secret generic my-env-list --from-env-file=.env
secret/my-env-list created
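If you would rather apply a manifest from the pipeline instead of creating the Secret imperatively, the same command can render one. A sketch, assuming a kubectl recent enough to support --dry-run=client; the output filename is just an example:
$ kubectl create secret generic my-env-list --from-env-file=.env --dry-run=client -o yaml > my-env-list-secret.yaml
$ kubectl apply -f my-env-list-secret.yaml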
Step 2: Verify the secret:
$ kubectl get secret my-env-list -o yaml
apiVersion: v1
data:
  VAR1: dmFsdWUx
  VAR2: c29tZXRoaW5nX2Vsc2U=
kind: Secret
metadata:
  name: my-env-list
  namespace: default
type: Opaque
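The data values are only base64-encoded, so you can decode an entry to confirm it matches the .env file, for example:
$ kubectl get secret my-env-list -o jsonpath='{.data.VAR1}' | base64 -d
value1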
Step 3: Add env to your pod's container:
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: demo-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      envFrom:
        - secretRef:
            name: my-env-list # <---- here
  restartPolicy: Never
Step 4: Run the pod and check whether the env vars exist:
$ kubectl apply -f pod.yaml
pod/demo-pod created
$ kubectl logs -f demo-pod
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
HOSTNAME=demo-pod
SHLVL=1
HOME=/root
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
VAR1=value1 # <------------------------------------------------------here
VAR2=something_else # <-----------------------------------------------here
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
PWD=/
KUBERNETES_SERVICE_HOST=10.96.0.1
You can also use Kustomize (built into kubectl via apply -k) to create a Secret from the file as follows:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
metadata:
  name: kust-example
generatorOptions:
  # Prevents adding a hash at the end of the secret name
  disableNameSuffixHash: true
secretGenerator:
  - name: your-secret
    namespace: default
    envs:
      - path/secret.env
Then you just have to run kubectl apply -k <dir>, where <dir> is the directory containing the kustomization.yaml.
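For reference, a sketch of the expected layout (the directory and file names are placeholders matching the Kustomization above):
# <dir>/
#   kustomization.yaml      <- the Kustomization above
#   path/secret.env         <- your .env-style file (VAR1=value1, ...)
kubectl apply -k <dir>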
You can also use this tool to achieve the same result as the Kustomization, but with more control for automating the job:
https://github.com/juliosmelo/dotenv2k8s
I want to run a simple backup of my Postgres DB deployed in OpenShift. What are the best practices for running a cron job? Since systemd is not available in the containers and can only be enabled through a hack, I'd rather use a 'cleaner' approach. Besides cronie or systemd timer units, what options are there? It seems one could enable cron in earlier OpenShift versions, but OpenShift v4.x doesn't support this feature anymore and the docs only mention the Kubernetes CronJob objects.
Here is what I use:
A dedicated pod using the same image (so the DB dump client is available) with a PVC mounted for the backups
A ConfigMap with the backup script
A CronJob running that pod on a schedule
Here are some example manifests:
PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-bkp
  namespace: database
  annotations:
    volume.beta.kubernetes.io/storage-class: "storage-class-name"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
CM:
apiVersion: v1
kind: ConfigMap
metadata:
  name: psqldump
  namespace: database
  labels:
    job-name: db-backup
data:
  psqldump.sh: |
    #!/bin/bash
    DBS=$(psql -xl | awk '/^Name/{print $NF}')
    for DB in ${DBS}; do
      SCHEMAS=$(psql -d ${DB} -xc "\dn" | awk '/^Name/{print $NF}')
      for SCHEMA in ${SCHEMAS}; do
        echo "Dumping database '${DB}' from Schema '${SCHEMA}' into ${BACKUPDIR}/${PGHOST}_${SCHEMA}_${DB}_${ENVMNT}_$(date -I).sql"
        pg_dump -n "${SCHEMA}" ${DB} > ${BACKUPDIR}/${PGHOST}_${SCHEMA}_${DB}_${ENVMNT}_$(date -I).sql
      done
    done
    echo "Deleting dumps older than ${RETENTION} days"
    find ${BACKUPDIR} -name "*.sql" -mtime +${RETENTION} -exec rm -rf {} \;
CronJob:
apiVersion: v1
kind: Template
metadata:
  name: postgres-backup
  namespace: database
objects:
  - kind: CronJob
    apiVersion: batch/v1beta1
    metadata:
      name: postgres-backup
      namespace: database
    spec:
      schedule: "0 3 * * *"
      successfulJobsHistoryLimit: 1
      jobTemplate:
        spec:
          template:
            metadata:
              namespace: database
            spec:
              containers:
                - name: postgres-dbbackup
                  image: "postgres:11"
                  env:
                    - name: PGHOST
                      value: "${_PGHOST}"
                    - name: PGUSER
                      value: "${_PGUSER}"
                    - name: RETENTION
                      value: "${_RETENTION}"
                    - name: BACKUPDIR
                      value: "${_BACKUPDIR}"
                  command: ["/bin/bash", "-c", "/usr/local/bin/psqldump.sh"]
                  volumeMounts:
                    - mountPath: /usr/local/bin
                      name: psqldump-volume
                    - mountPath: /backup
                      name: backup-volume
              volumes:
                - name: psqldump-volume
                  configMap:
                    name: psqldump
                    defaultMode: 0755
                - name: backup-volume
                  persistentVolumeClaim:
                    claimName: database-bkp
              restartPolicy: Never
parameters:
  - name: _PGHOST
    value: postgres
  - name: _PGUSER
    value: postgres
  - name: _RETENTION
    value: "30"
  - name: _BACKUPDIR
    value: "/backup"
PGHOST is the pod (or service) name of your database. If you have a dedicated user and password for your backup, export the env vars PGUSER and PGPASSWORD accordingly.
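Since this is an OpenShift Template, it has to be processed into concrete objects before being applied. A sketch, assuming the template above was saved as postgres-backup-template.yaml (hypothetical filename):
oc process -f postgres-backup-template.yaml \
  -p _PGHOST=postgres -p _PGUSER=postgres \
  -p _RETENTION=30 -p _BACKUPDIR=/backup \
  | oc apply -n database -f -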
Running the cron job inside the same pod as your DB is not a good idea (the pod where the DB runs can be killed/respawned, etc.).
IMHO the best solution is to define a CronJob in the same project as the DB. The Job uses an official OpenShift base image with the oc CLI, and from there executes a script that connects to the pod where the DB runs (oc rsh ...) and performs the backup.
Or execute a script from outside OCP that connects to the cluster (with a system account) and then runs oc rsh <db pod name> <backup command>.
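For example, a one-off dump driven from outside the pod might look like this (illustrative only; the database name and user are placeholders):
oc exec <db pod name> -- pg_dump -U postgres <database> > backup_$(date -I).sql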
I'm following this Microsoft Tutorial to create a Windows Server container on an Azure Kubernetes Service (AKS) cluster using the Azure CLI. In the Run the Application section of this tutorial, I get the following error when running the command below to deploy the application using a YAML config file:
kubectl apply -f sample.yaml
error: error validating "sample.yaml": error validating data: apiVersion not set; if you choose to ignore these errors, turn validation off with --validate=false
Question: As shown in the following sample.yaml file, the apiVersion is already set. So what is this error about, and how can we fix the issue?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample
  labels:
    app: sample
spec:
  replicas: 1
  template:
    metadata:
      name: sample
      labels:
        app: sample
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": windows
      containers:
        - name: sample
          image: mcr.microsoft.com/dotnet/framework/samples:aspnetapp
          resources:
            limits:
              cpu: 1
              memory: 800M
            requests:
              cpu: .1
              memory: 300M
          ports:
            - containerPort: 80
  selector:
    matchLabels:
      app: sample
---
apiVersion: v1
kind: Service
metadata:
  name: sample
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
  selector:
    app: sample
Issue resolved. The issue was related to copy/pasting into the vi editor in Azure Cloud Shell. When you paste content whose first letter happens to be a, the following may happen:
If vi was opened in command mode, the leading a of the paste is interpreted as the command to enter insert mode, so that a never actually gets inserted into the editor. In my case the content was pasted as follows (I'm only showing the first few lines here for brevity); notice that the a is missing from the first line, which should read apiVersion: apps/v1:
sample.yaml file:
piVersion: apps/v1
kind: Deployment
metadata:
…..
...
This happens when you use an outdated kubectl. Can you try updating to 1.2.5 or 1.3.0 and running it again?
I fixed it in my case! For more context, feel free to visit here.
Summary:
If you are applying all the YAML configs at once, as follows:
kubectl apply -f .
then change that to the following:
kubectl apply -f namespace.yaml
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl apply -f ingress.yaml
Basically, apply configs separately with each file.
I'm trying to deploy Kubernetes Web UI as described here: https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
My system configuration is as follows:
$ uname -a
Linux debian 4.19.0-6-amd64 #1 SMP Debian 4.19.67-2+deb10u2 (2019-11-11) x86_64 GNU/Linux
$ /usr/bin/qemu-system-x86_64 --version
QEMU emulator version 3.1.0 (Debian 1:3.1+dfsg-8+deb10u3)
Copyright (c) 2003-2018 Fabrice Bellard and the QEMU Project developers
$ minikube version
minikube version: v1.5.2
commit: 792dbf92a1de583fcee76f8791cff12e0c9440ad-dirty
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:23:11Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:09:08Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
After starting the minikube cluster (minikube start) I created a ServiceAccount and ClusterRoleBinding as described here: https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md
$ nano dashboard-adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
$ kubectl apply -f dashboard-adminuser.yaml
$ nano dashboard-adminuser.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard
$ kubectl apply -f dashboard-adminuser.yaml
Now I execute:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta6/aio/deploy/recommended.yaml
or
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended.yaml
and get the following output:
namespace/kubernetes-dashboard configured
serviceaccount/kubernetes-dashboard configured
service/kubernetes-dashboard configured
secret/kubernetes-dashboard-certs configured
secret/kubernetes-dashboard-csrf configured
secret/kubernetes-dashboard-key-holder configured
configmap/kubernetes-dashboard-settings configured
role.rbac.authorization.k8s.io/kubernetes-dashboard configured
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard configured
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard configured
deployment.apps/kubernetes-dashboard configured
service/dashboard-metrics-scraper configured
deployment.apps/dashboard-metrics-scraper configured
The ClusterRoleBinding "kubernetes-dashboard" is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:"rbac.authorization.k8s.io", Kind:"ClusterRole", Name:"kubernetes-dashboard"}: cannot change roleRef
What happened and how to fix it?
The error "cannot change roleRef" was referring to the fact that the ClusterRoleBinding already existed.
Try deleting the existing ClusterRoleBinding kubernetes-dashboard
Run the command below to delete the existing one:
kubectl delete clusterrolebinding kubernetes-dashboard
After that try installing again. Let us know if that resolves the issue.
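In other words, the full recovery sequence is a delete followed by a re-apply of the same manifest URL as in the question:
kubectl delete clusterrolebinding kubernetes-dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta6/aio/deploy/recommended.yaml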
For me it worked by deleting the existing ClusterRoleBinding:
kubectl delete clusterrolebinding kubernetes-dashboard
The issue is that you missed this note:
NOTE: apiVersion of ClusterRoleBinding resource may differ between Kubernetes versions.
Prior to Kubernetes v1.8 the apiVersion was rbac.authorization.k8s.io/v1beta1.
This should solve this problem.
Edit1:
This issue talks about the same problem, specifically this comment, which points out that role bindings are immutable.
The cause here is that dashboard-adminuser.yaml sets a roleRef, and the YAML file you apply later has a roleRef in the same namespace:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
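To confirm the conflict before deleting anything, you can inspect the roleRef of the binding that already exists on the cluster, for example:
kubectl get clusterrolebinding kubernetes-dashboard -o jsonpath='{.roleRef}'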
Just reproduced.
1) Created a namespace, ServiceAccount and ClusterRoleBinding:
cat dashboard-adminuser.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard
2) Applied it:
kubectl apply -f dashboard-adminuser.yaml
namespace/kubernetes-dashboard created
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user unchanged
3) Installed the dashboard:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta6/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
I'm getting this error too; I solved it by running the dashboard through minikube:
minikube dashboard
Output:
🤔 Verifying dashboard health ...
🚀 Launching proxy ...
🤔 Verifying proxy health ...
🎉 Opening http://127.0.0.1:34653/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...
Run:
kubectl delete clusterrolebinding kubernetes-dashboard
...AFTER the apply -f command, not before.