Azure AKS: Create a Kubeconfig from service account [duplicate] - azure

I have a Kubernetes cluster on Azure and I created 2 namespaces and 2 service accounts because I have two teams deploying on the cluster.
I want to give each team their own kubeconfig file for the service account I created.
I am pretty new to Kubernetes and haven't been able to find clear instructions on the Kubernetes website. How do I create a kubeconfig file for a service account?
Hopefully someone can help me out :). I'd rather not give the default kubeconfig file to the teams.
With kind regards,
Bram

# your server name goes here
server=https://localhost:8443
# the name of the secret containing the service account token goes here
name=default-token-sg96k

ca=$(kubectl get secret/$name -o jsonpath='{.data.ca\.crt}')
token=$(kubectl get secret/$name -o jsonpath='{.data.token}' | base64 --decode)
namespace=$(kubectl get secret/$name -o jsonpath='{.data.namespace}' | base64 --decode)

echo "
apiVersion: v1
kind: Config
clusters:
- name: default-cluster
  cluster:
    certificate-authority-data: ${ca}
    server: ${server}
contexts:
- name: default-context
  context:
    cluster: default-cluster
    namespace: ${namespace}
    user: default-user
current-context: default-context
users:
- name: default-user
  user:
    token: ${token}
" > sa.kubeconfig

I cleaned up Jordan Liggitt's script a little.
Unfortunately I am not yet allowed to comment, so this is an extra answer:
Be aware that starting with Kubernetes 1.24 you will need to create the Secret containing the token yourself and reference it.
# The script returns a kubeconfig for the ServiceAccount given
# you need to have kubectl on PATH with the context set to the cluster you want to create the config for
# Cosmetics for the created config
clusterName='some-cluster'
# your server address goes here get it via `kubectl cluster-info`
server='https://157.90.17.72:6443'
# the Namespace and ServiceAccount name that is used for the config
namespace='kube-system'
serviceAccount='developer'
# The following automation does not work from Kubernetes 1.24 and up.
# You might need to
# define a Secret, reference the ServiceAccount there and set the secretName by hand!
# See https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#manually-create-a-long-lived-api-token-for-a-serviceaccount for details
secretName=$(kubectl --namespace="$namespace" get serviceAccount "$serviceAccount" -o=jsonpath='{.secrets[0].name}')
######################
# actual script starts
set -o errexit
ca=$(kubectl --namespace="$namespace" get secret/"$secretName" -o=jsonpath='{.data.ca\.crt}')
token=$(kubectl --namespace="$namespace" get secret/"$secretName" -o=jsonpath='{.data.token}' | base64 --decode)
echo "
---
apiVersion: v1
kind: Config
clusters:
- name: ${clusterName}
cluster:
certificate-authority-data: ${ca}
server: ${server}
contexts:
- name: ${serviceAccount}#${clusterName}
context:
cluster: ${clusterName}
namespace: ${namespace}
user: ${serviceAccount}
users:
- name: ${serviceAccount}
user:
token: ${token}
current-context: ${serviceAccount}#${clusterName}
"

Have a look at https://github.com/superbrothers/kubectl-view-serviceaccount-kubeconfig-plugin
This plugin helps to get a service account's kubeconfig via
kubectl view-serviceaccount-kubeconfig <service_account> -n <namespace>

kubectl can be initialized to use a cluster account. To do so, get the cluster URL, the cluster certificate and the account token.
KUBE_API_EP='URL+PORT'
KUBE_API_TOKEN='TOKEN'
KUBE_CERT='REDACTED'
echo "$KUBE_CERT" > deploy.crt
kubectl config set-cluster k8s --server=https://$KUBE_API_EP \
--certificate-authority=deploy.crt \
--embed-certs=true
kubectl config set-credentials gitlab-deployer --token=$KUBE_API_TOKEN
kubectl config set-context k8s --cluster k8s --user gitlab-deployer
kubectl config use-context k8s
The config file is stored under ~/.kube/config. Now the cluster can be accessed using:
kubectl --context=k8s get pods -n test-namespace
Add the --insecure-skip-tls-verify flag if you are using a self-signed certificate.
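
If the token and certificate come from a service-account token Secret, a rough sketch of how the values could be pulled out with kubectl (the secret name my-sa-token and the namespace are placeholders):

KUBE_API_EP='URL+PORT'   # e.g. taken from `kubectl cluster-info`
KUBE_API_TOKEN=$(kubectl -n test-namespace get secret my-sa-token -o jsonpath='{.data.token}' | base64 --decode)
KUBE_CERT=$(kubectl -n test-namespace get secret my-sa-token -o jsonpath='{.data.ca\.crt}' | base64 --decode)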

Revisiting this as I was looking for a way to create a service account from the command line instead of repetitive point-and-click tasks through the Lens IDE. I came across this thread, took the original author's ideas and expanded on the capabilities, as well as supporting service account creation for Kubernetes 1.24+.
#!/bin/sh
# This shell script is intended for Kubernetes clusters running 1.24+ as secrets are no longer auto-generated with serviceaccount creations
# The script does a few things: creates a serviceaccount, creates a secret for that serviceaccount (and annotates it accordingly), creates a clusterrolebinding or rolebinding,
# and prints the resulting kubeconfig to the screen as well as writing it to a file that can be referenced via KUBECONFIG
# Feed variables to kubectl commands (modify as needed). crb and rb cannot both be true
# ------------------------------------------- #
clustername=some_cluster
name=some_user
ns=some_ns # namespace
server=https://some.server.com:6443
crb=false # clusterrolebinding
crb_name=some_binding # clusterrolebindingname_name
rb=true # rolebinding
rb_name=some_binding # rolebinding_name
# ------------------------------------------- #
# Check for existing serviceaccount first
sa_precheck=$(kubectl get sa "$name" -o jsonpath='{.metadata.name}' -n "$ns" 2>/dev/null)

if [ -z "$sa_precheck" ]
then
    kubectl create serviceaccount "$name" -n "$ns"
else
    echo "serviceaccount/$sa_precheck already exists"
fi

sa_name=$(kubectl get sa "$name" -o jsonpath='{.metadata.name}' -n "$ns")
sa_uid=$(kubectl get sa "$name" -o jsonpath='{.metadata.uid}' -n "$ns")

# Check for existing secret/service-account-token, if one does not exist create one but do not output to external file
secret_precheck=$(kubectl get secret "$sa_name-token-$sa_uid" -o jsonpath='{.metadata.name}' -n "$ns" 2>/dev/null)

if [ -z "$secret_precheck" ]
then
    kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: $sa_name-token-$sa_uid
  namespace: $ns
  annotations:
    kubernetes.io/service-account.name: $sa_name
EOF
else
    echo "secret/$secret_precheck already exists"
fi

# Check for adding clusterrolebinding or rolebinding (both cannot be true)
if [ "$crb" = "true" ] && [ "$rb" = "true" ]
then
    echo "clusterrolebinding and rolebinding cannot both be true, please fix"
    exit 1
elif [ "$crb" = "true" ]
then
    crb_test=$(kubectl get clusterrolebinding "$crb_name" -o jsonpath='{.metadata.name}' 2>/dev/null)
    if [ "$crb_name" = "$crb_test" ]
    then
        kubectl patch clusterrolebinding "$crb_name" --type='json' -p='[{"op": "add", "path": "/subjects/-", "value": {"kind": "ServiceAccount", "name": "'"$sa_name"'", "namespace": "'"$ns"'" } }]'
    else
        echo "clusterrolebinding/$crb_name does not exist, please fix"
        exit 1
    fi
elif [ "$rb" = "true" ]
then
    rb_test=$(kubectl get rolebinding "$rb_name" -n "$ns" -o jsonpath='{.metadata.name}' 2>/dev/null)
    if [ "$rb_name" = "$rb_test" ]
    then
        kubectl patch rolebinding "$rb_name" -n "$ns" --type='json' -p='[{"op": "add", "path": "/subjects/-", "value": {"kind": "ServiceAccount", "name": "'"$sa_name"'", "namespace": "'"$ns"'" } }]'
    else
        echo "rolebinding/$rb_name does not exist in $ns namespace, please fix"
        exit 1
    fi
fi

# Create kubeconfig and output to config file
ca=$(kubectl get secret "$sa_name-token-$sa_uid" -o jsonpath='{.data.ca\.crt}' -n "$ns")
token=$(kubectl get secret "$sa_name-token-$sa_uid" -o jsonpath='{.data.token}' -n "$ns" | base64 --decode)

echo "
apiVersion: v1
kind: Config
clusters:
- name: ${clustername}
  cluster:
    certificate-authority-data: ${ca}
    server: ${server}
contexts:
- name: ${sa_name}#${clustername}
  context:
    cluster: ${clustername}
    namespace: ${ns}
    user: ${sa_name}
users:
- name: ${sa_name}
  user:
    token: ${token}
current-context: ${sa_name}#${clustername}
" | tee "$sa_name#${clustername}"

Related

Elastic Search upgrade to v8 on Kubernetes

I have an Elasticsearch deployment on a Microsoft Azure Kubernetes cluster that was deployed with a 7.x chart, and I changed the image to 8.x. The upgrade worked and both Elasticsearch and Kibana were accessible, but now I need to enable the new security features that are included in the basic license from now on. The security requirement originally came from the need to enable APM Server/Agents.
I have the following values:
- name: cluster.initial_master_nodes
  value: elasticsearch-master-0,
- name: discovery.seed_hosts
  value: elasticsearch-master-headless
- name: cluster.name
  value: elasticsearch
- name: network.host
  value: 0.0.0.0
- name: cluster.deprecation_indexing.enabled
  value: 'false'
- name: node.roles
  value: data,ingest,master,ml,remote_cluster_client
The Elasticsearch and Kibana pods are able to start, but I am unable to set up the APM integration due to security. So I am enabling security using the values below:
- name: xpack.security.enabled
  value: 'true'
Then I get an error log from the Elasticsearch pod: "Transport SSL must be enabled if security is enabled. Please set [xpack.security.transport.ssl.enabled] to [true] or disable security by setting [xpack.security.enabled] to [false]". So I am enabling SSL using the values below:
- name: xpack.security.transport.ssl.enabled
  value: 'true'
Then I get an error log from the Elasticsearch pod: "invalid SSL configuration for xpack.security.transport.ssl - server ssl configuration requires a key and certificate, but these have not been configured; you must set either [xpack.security.transport.ssl.keystore.path] (p12 file), or both [xpack.security.transport.ssl.key] (pem file) and [xpack.security.transport.ssl.certificate] (pem key file)".
I start with option 1: I create the keys using the commands below (no password: enter, enter / enter, enter, enter) and copy them to a persistent folder:
./bin/elasticsearch-certutil ca
./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
cp elastic-stack-ca.p12 data/elastic-stack-ca.p12
cp elastic-certificates.p12 data/elastic-certificates.p12
In addition I am also configuring the below values:
- name: xpack.security.transport.ssl.truststore.path
  value: '/usr/share/elasticsearch/data/elastic-certificates.p12'
- name: xpack.security.transport.ssl.keystore.path
  value: '/usr/share/elasticsearch/data/elastic-certificates.p12'
But the pod is still stuck initializing. If I generate the certificates with a password instead, I get an error log from the Elasticsearch pod: "cannot read configured [PKCS12] keystore (as a truststore) [/usr/share/elasticsearch/data/elastic-certificates.p12] - this is usually caused by an incorrect password; (no password was provided)"
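If the PKCS12 files are password-protected, Elasticsearch normally expects that password to be stored in its own keystore rather than passed as a plain setting; a sketch of the commands, run inside the Elasticsearch container or image before startup (the setting names come from the Elasticsearch security documentation):
./bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
./bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password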
Then I go to option 2: I create the keys using the commands below and copy them to a persistent folder:
./bin/elasticsearch-certutil ca --pem
unzip elastic-stack-ca.zip -d
cp ca.crt data/ca.crt
cp ca.key data/ca.key
In addition I am also configuring the below values:
- name: xpack.security.transport.ssl.key
  value: '/usr/share/elasticsearch/data/ca.key'
- name: xpack.security.transport.ssl.certificate
  value: '/usr/share/elasticsearch/data/ca.crt'
But the pod is still stuck in the initializing state without providing any logs; as far as I know, a pod in the initializing state does not produce any container logs. On the portal side the events look fine, except that the Elasticsearch pod is not in the ready state.
Finally, I posted the same issue to the Elastic community forum, without any response: https://discuss.elastic.co/t/elasticsearch-pods-are-not-ready-when-xpack-security-enabled-is-configured/281709?u=s19k15
Here is my StatefulSet:
status:
  observedGeneration: 169
  replicas: 1
  updatedReplicas: 1
  currentRevision: elasticsearch-master-7449d7bd69
  updateRevision: elasticsearch-master-7d8c7b6997
  collisionCount: 0
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch-master
  template:
    metadata:
      name: elasticsearch-master
      creationTimestamp: null
      labels:
        app: elasticsearch-master
        chart: elasticsearch
        release: platform
    spec:
      initContainers:
      - name: configure-sysctl
        image: docker.elastic.co/elasticsearch/elasticsearch:8.1.2
        command:
        - sysctl
        - '-w'
        - vm.max_map_count=262144
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
          runAsUser: 0
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:8.1.2
        ports:
        - name: http
          containerPort: 9200
          protocol: TCP
        - name: transport
          containerPort: 9300
          protocol: TCP
        env:
        - name: node.name
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: cluster.initial_master_nodes
          value: elasticsearch-master-0,
        - name: discovery.seed_hosts
          value: elasticsearch-master-headless
        - name: cluster.name
          value: elasticsearch
        - name: cluster.deprecation_indexing.enabled
          value: 'false'
        - name: ES_JAVA_OPTS
          value: '-Xmx512m -Xms512m'
        - name: node.roles
          value: data,ingest,master,ml,remote_cluster_client
        - name: xpack.license.self_generated.type
          value: basic
        - name: xpack.security.enabled
          value: 'true'
        - name: xpack.security.transport.ssl.enabled
          value: 'true'
        - name: xpack.security.transport.ssl.truststore.path
          value: /usr/share/elasticsearch/data/elastic-certificates.p12
        - name: xpack.security.transport.ssl.keystore.path
          value: /usr/share/elasticsearch/data/elastic-certificates.p12
        - name: xpack.security.http.ssl.enabled
          value: 'true'
        - name: xpack.security.http.ssl.truststore.path
          value: /usr/share/elasticsearch/data/elastic-certificates.p12
        - name: xpack.security.http.ssl.keystore.path
          value: /usr/share/elasticsearch/data/elastic-certificates.p12
        - name: logger.org.elasticsearch.discovery
          value: debug
        - name: path.logs
          value: /usr/share/elasticsearch/data
        - name: xpack.security.enrollment.enabled
          value: 'true'
        resources:
          limits:
            cpu: '1'
            memory: 2Gi
          requests:
            cpu: 100m
            memory: 512Mi
        volumeMounts:
        - name: elasticsearch-master
          mountPath: /usr/share/elasticsearch/data
        readinessProbe:
          exec:
            command:
            - bash
            - '-c'
            - >
              set -e

              # If the node is starting up wait for the cluster to be ready (request params: "wait_for_status=green&timeout=1s" )
              # Once it has started only check that the node itself is responding

              START_FILE=/tmp/.es_start_file

              # Disable nss cache to avoid filling dentry cache when calling curl
              # This is required with Elasticsearch Docker using nss < 3.52

              export NSS_SDB_USE_CACHE=no

              http () {
                local path="${1}"
                local args="${2}"
                set -- -XGET -s
                if [ "$args" != "" ]; then
                  set -- "$@" $args
                fi
                if [ -n "${ELASTIC_PASSWORD}" ]; then
                  set -- "$@" -u "elastic:${ELASTIC_PASSWORD}"
                fi
                curl --output /dev/null -k "$@" "http://127.0.0.1:9200${path}"
              }

              if [ -f "${START_FILE}" ]; then
                echo 'Elasticsearch is already running, lets check the node is healthy'
                HTTP_CODE=$(http "/" "-w %{http_code}")
                RC=$?
                if [[ ${RC} -ne 0 ]]; then
                  echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' \${BASIC_AUTH} http://127.0.0.1:9200/ failed with RC ${RC}"
                  exit ${RC}
                fi
                # ready if HTTP code 200, 503 is tolerable if ES version is 6.x
                if [[ ${HTTP_CODE} == "200" ]]; then
                  exit 0
                elif [[ ${HTTP_CODE} == "503" && "8" == "6" ]]; then
                  exit 0
                else
                  echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' \${BASIC_AUTH} http://127.0.0.1:9200/ failed with HTTP code ${HTTP_CODE}"
                  exit 1
                fi
              else
                echo 'Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )'
                if http "/_cluster/health?wait_for_status=green&timeout=1s" "--fail" ; then
                  touch ${START_FILE}
                  exit 0
                else
                  echo 'Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )'
                  exit 1
                fi
              fi
          initialDelaySeconds: 10
          timeoutSeconds: 5
          periodSeconds: 10
          successThreshold: 3
          failureThreshold: 3
        lifecycle:
          postStart:
            exec:
              command:
              - bash
              - '-c'
              - >
                #!/bin/bash
                # Create the dev.general.logcreation.elasticsearchlogobject.v1.json index

                ES_URL=http://localhost:9200

                while [[ "$(curl -s -o /dev/null -w '%{http_code}\n' $ES_URL)" != "200" ]]; do sleep 1; done

                curl --request PUT --header 'Content-Type: application/json' "$ES_URL/dev.general.logcreation.elasticsearchlogobject.v1.json/" --data '{"mappings":{"properties":{"Properties":{"properties":{"StatusCode":{"type":"text"}}}}},"settings":{"index":{"number_of_shards":"1","number_of_replicas":"0"}}}'
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        imagePullPolicy: IfNotPresent
        securityContext:
          capabilities:
            drop:
            - ALL
          runAsUser: 1000
          runAsNonRoot: true
      restartPolicy: Always
      terminationGracePeriodSeconds: 120
      dnsPolicy: ClusterFirst
      automountServiceAccountToken: true
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - elasticsearch-master
            topologyKey: kubernetes.io/hostname
      schedulerName: default-scheduler
      enableServiceLinks: true
  volumeClaimTemplates:
  - kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: elasticsearch-master
      creationTimestamp: null
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 4Gi
      volumeMode: Filesystem
    status:
      phase: Pending
  serviceName: elasticsearch-master-headless
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  revisionHistoryLimit: 10
Any ideas?
Finally found the answer; maybe it helps others who face something similar. When a pod is initializing endlessly, it is effectively sleeping. In my case, a piece of code inside my chart's StatefulSet started causing this issue once security was enabled.
while [[ "$(curl -s -o /dev/null -w '%{http_code}\n' $ES_URL)" != "200" ]]; do sleep 1; done
This will never return 200, because the HTTP endpoint now also expects a user and a password to authenticate, so the loop keeps sleeping.
So if your pods are stuck in the initializing state and remain there, make sure there is no such while/sleep loop.
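A minimal sketch of how the wait loop could be adapted once security is enabled, assuming the password is available to the container as an ELASTIC_PASSWORD environment variable (the variable name is an assumption):

ES_URL=http://localhost:9200
# With security enabled an unauthenticated request returns 401, so authenticate the health check instead
while [[ "$(curl -s -o /dev/null -w '%{http_code}' -u "elastic:${ELASTIC_PASSWORD}" $ES_URL)" != "200" ]]; do sleep 1; done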

kubernetes: Values from secret yaml are broken in node js container after gpg decryption

I am new to Kubernetes. I have a Kubernetes secret yaml file:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  API_KEY: 123409uhttt
  SECRET_KEY: yu676jfjehfuehfu02
that I have encoded using gpg encryption:
gpg -a --symmetric --cipher-algo AES256 -o "secrets.yaml.gpg" "secrets.yaml"
and I am decrypting it in a GitHub Actions workflow like this:
gpg -q --batch --yes --decrypt --passphrase=$GPG_SECRET my/location/to/secrets.yaml.gpg | kubectl apply -n $NAMESPACE -f -
When I run:
kubectl get secret my-secret -n my-namespace -o yaml
I get yaml showing correct values set for API_KEY and SECRET_KEY, like this:
apiVersion: v1
data:
  API_KEY: 123409uhttt
  SECRET_KEY: yu676jfjehfuehfu02
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"API_KEY":"123409uhttt","SECRET_KEY":"yu676jfjehfuehfu02"},"kind":"Secret","metadata":{"annotations":{},"name":"my-secret","namespace":"my-namespace"},"type":"Opaque"}
  creationTimestamp: "2021-07-12T23:28:56Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:API_KEY: {}
        f:SECRET_KEY: {}
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
      f:type: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2021-07-10T23:28:56Z"
  name: my-secret
  namespace: my-namespace
  resourceVersion: "29813715"
  uid: 89a34b6d-914eded509
type: Opaque
But when the application uses SECRET_KEY and API_KEY, the values show up with broken encoding. This is what gets printed when I log them:
Api_Key - ᶹ��4yַӭ�ӯu�ï¿8
Secret_Key - �V�s��Û[ï¶×¿zoï½9s��{�ï¿
When I don't take Api_Key and Secret_Key from secrets.yaml (but hardcode the values in the application instead), it works as expected.
I need help to access secret data (Api_Key and Secret_Key) with correct values in container running node js application.
It appears as though the values of your secrets are not base64 encoded.
Either change the data field to stringData, which does not need to be base64 encoded, or encode the values of your secrets first,
e.g. echo -n "$SECRET_KEY" | base64, and use that value in your secret.
The problem you describe happens because the values of the secret get injected base64-decoded into your pods.
However, when you try to decode the value you supplied with
echo "123409uhttt" | base64 -d
you get the following output: �m��ۡ��base64: invalid input
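
For example, a minimal sketch of the same Secret written with stringData, so the plain-text values from the question are accepted as-is and base64-encoded by the API server:

apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
stringData:
  API_KEY: 123409uhttt
  SECRET_KEY: yu676jfjehfuehfu02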

Is it possible to get k8s pod limits in my application?

I have an application running a Kafka consumer inside a pod with a 1.5GB memory limit.
As you probably know, we need to write some logic to stop the consumer from fetching messages when we are about to reach the memory limit.
I want to stop the consumer when the memory my application is using goes above 75% of the memory limit.
So my question is: is it possible to get the k8s memory limit at runtime? How can I stop my consumer based on how much free memory I have?
this.consumer.on('message', (message) => {
  checkApplicationMemoryUsage();
  executeSomethingWithMessage(message);
});

function checkApplicationMemoryUsage() {
  const appMemoryConsumption = process.memoryUsage().heapUsed;
  const appMemoryLimit = <?????>;
  if (appMemoryConsumption / appMemoryLimit > 0.75) this.consumer.pause();
  else this.consumer.resume();
}
The solution I was thinking of is to pass the limits as env vars to my pod in the deployment spec, but I wish there was a better way.
Yes, the Downward API provides a way to achieve what you need.
apiVersion: v1
kind: Pod
metadata:
  name: kubernetes-downwardapi-volume-example-2
spec:
  containers:
  - name: client-container
    image: k8s.gcr.io/busybox:1.24
    command: ["sh", "-c"]
    args:
    - while true; do
        echo -en '\n';
        if [[ -e /etc/podinfo/cpu_limit ]]; then
          echo -en '\n'; cat /etc/podinfo/cpu_limit; fi;
        if [[ -e /etc/podinfo/cpu_request ]]; then
          echo -en '\n'; cat /etc/podinfo/cpu_request; fi;
        if [[ -e /etc/podinfo/mem_limit ]]; then
          echo -en '\n'; cat /etc/podinfo/mem_limit; fi;
        if [[ -e /etc/podinfo/mem_request ]]; then
          echo -en '\n'; cat /etc/podinfo/mem_request; fi;
        sleep 5;
      done;
    resources:
      requests:
        memory: "32Mi"
        cpu: "125m"
      limits:
        memory: "64Mi"
        cpu: "250m"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "cpu_limit"
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m
      - path: "cpu_request"
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m
      - path: "mem_limit"
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
          divisor: 1Mi
      - path: "mem_request"
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
          divisor: 1Mi
See the above example taken from Store Container fields section in K8s docs.
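
If you prefer the environment-variable route mentioned in the question, the Downward API can also inject the limit directly, so it does not have to be hardcoded in the deployment spec; a minimal sketch (the variable name APP_MEMORY_LIMIT_BYTES is an assumption):

env:
- name: APP_MEMORY_LIMIT_BYTES
  valueFrom:
    resourceFieldRef:
      containerName: client-container
      resource: limits.memory
      divisor: "1"

The application can then compare process.memoryUsage().heapUsed against process.env.APP_MEMORY_LIMIT_BYTES.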

Create kubeconfig with restricted permission

I need to create a kubeconfig with restricted access. I want to be able to grant permission to update a ConfigMap in a specific namespace. How can I create a kubeconfig with the following permissions?
for the specific namespace (myns)
update only the configmap (mycm)
Is there a simple way to create it?
The tricky part is that some program needs access to cluster X and may modify only this ConfigMap. How would I do that from an outside process without providing the full kubeconfig file, which can be problematic for security reasons?
To make it clear: I own the cluster, I just want to give some program restricted permissions.
This is not straightforward, but still possible.
Create the namespace myns if it does not exist.
$ kubectl create ns myns
namespace/myns created
Create a service account cm-user in myns namespace. It'll create a secret token as well.
$ kubectl create sa cm-user -n myns
serviceaccount/cm-user created
$ kubectl get sa cm-user -n myns
NAME SECRETS AGE
cm-user 1 18s
$ kubectl get secrets -n myns
NAME TYPE DATA AGE
cm-user-token-kv5j5 kubernetes.io/service-account-token 3 63s
default-token-m7j9v kubernetes.io/service-account-token 3 96s
Get the token and ca.crt from cm-user-token-kv5j5 secret.
$ kubectl get secrets cm-user-token-kv5j5 -n myns -oyaml
Base64 decode the value of token from cm-user-token-kv5j5.
Now create a user using the decoded token.
$ kubectl config set-credentials cm-user --token=<decoded token value>
User "cm-user" set.
Now generate a kubeconfig file kubeconfig-cm.
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <ca.crt value from cm-user-token-kv5j5 secret>
    server: <kubernetes server>
  name: <cluster>
contexts:
- context:
    cluster: <cluster>
    namespace: myns
    user: cm-user
  name: cm-user
current-context: cm-user
users:
- name: cm-user
  user:
    token: <decoded token>
Now create a role and rolebinding for sa cm-user.
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: myns
  name: cm-user-role
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["update", "get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cm-user-rb
  namespace: myns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cm-user-role
subjects:
- namespace: myns
  kind: ServiceAccount
  name: cm-user
We are done. Now using this kubeconfig file you can update the mycm configmap. It doesn't have any other privileges.
$ kubectl get cm -n myns --kubeconfig kubeconfig-cm
NAME DATA AGE
mycm 0 8s
$ kubectl delete cm mycm -n myns --kubeconfig kubeconfig-cm
Error from server (Forbidden): configmaps "mycm" is forbidden: User "system:serviceaccount:myns:cm-user" cannot delete resource "configmaps" in API group "" in the namespace "myns"
You need to use RBAC: define a Role and then bind that Role to a user or service account using a RoleBinding.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: configmap-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["configmaps"]
  verbs: ["update", "get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
# This role binding allows "jane" to read config maps in the "default" namespace.
# You need to already have a Role named "configmap-reader" in that namespace.
kind: RoleBinding
metadata:
  name: read-configmap
  namespace: default
subjects:
# You can specify more than one "subject"
- kind: User
  name: jane # "name" is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  # "roleRef" specifies the binding to a Role / ClusterRole
  kind: Role # this must be Role or ClusterRole
  name: configmap-reader # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io
https://kubernetes.io/docs/reference/access-authn-authz/rbac/
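To verify what a bound subject can actually do, kubectl can impersonate the service account from the first answer, for example:
kubectl auth can-i update configmaps -n myns --as=system:serviceaccount:myns:cm-user
kubectl auth can-i delete configmaps -n myns --as=system:serviceaccount:myns:cm-user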

What is the correct way to mount a hostPath volume with gitlab runner?

I need to create a volume to expose the Maven .m2 folder so it can be reused in all my projects, but I haven't been able to do that at all.
My GitLab runner is running inside my Kubernetes cluster as a container.
The Deployment and ConfigMap follow:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: gitlab-runner
  namespace: default
spec:
  template:
    metadata:
      labels:
        name: gitlab-runner
    spec:
      serviceAccountName: gitlab-sa
      nodeName: 140.6.254.244
      containers:
      - name: gitlab-runner
        image: gitlab/gitlab-runner
        securityContext:
          privileged: true
        command: ["/bin/bash", "/scripts/entrypoint"]
        env:
        - name: KUBERNETES_NAMESPACE
          value: default
        - name: KUBERNETES_SERVICE_ACCOUNT
          value: gitlab-sa
        # This references the previously specified configmap and mounts it as a file
        volumeMounts:
        - mountPath: /scripts
          name: configmap
        livenessProbe:
          exec:
            command: ["/usr/bin/pgrep","gitlab.*runner"]
          initialDelaySeconds: 60
          timeoutSeconds: 1
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 3
        readinessProbe:
          exec:
            command: ["/usr/bin/pgrep","gitlab.*runner"]
          initialDelaySeconds: 10
          timeoutSeconds: 1
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 3
      volumes:
      - configMap:
          name: gitlab-runner-cm
        name: configmap
ConfigMap:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: gitlab-runner-cm
  namespace: default
data:
  entrypoint: |
    #!/bin/bash
    set -xe
    cp /scripts/config.toml /etc/gitlab-runner/
    # Register the runner
    /entrypoint register --non-interactive --registration-token ###### --url http://gitlab.######.net --clone-url http://gitlab.######.net --executor "kubernetes" --name "Kubernetes Runner" --config "/etc/gitlab-runner/config.toml"
    # Start the runner
    /entrypoint run --user=gitlab-runner \
      --working-directory=/home/gitlab-runner \
      --config "/etc/gitlab-runner/config.toml"
  config.toml: |
    concurrent = 50
    check_interval = 10
    [[runners]]
      name = "PC-CVO"
      url = "http://gitlab.######.net"
      token = "######"
      executor = "kubernetes"
      cache_dir = "/tmp/gitlab/cache"
      [runners.kubernetes]
        [runners.kubernetes.volumes]
          [[runners.kubernetes.volumes.host_path]]
            name = "maven"
            mount_path = "/.m2/"
            host_path = "/mnt/dados/volumes/maven-gitlab-ci"
            read_only = false
          [[runners.kubernetes.volumes.host_path]]
            name = "gitlab-cache"
            mount_path = "/tmp/gitlab/cache"
            host_path = "/mnt/dados/volumes/maven-gitlab-ci-cache"
            read_only = false
But even with [[runners.kubernetes.volumes.host_path]] configured as described in the documentation, my volume is not mounted on the host. I tried to use a PV and PVC, but nothing worked. Does anyone have a hint on how to expose this .m2 folder on the host so all my jobs can share it without caching?
After beating my head against name resolution issues with the internal DNS, volumes for my .m2 and using the Docker daemon instead of docker:dind, I finally got a configuration that solves my problem. Below are the final configuration files in case anyone runs into the same problem.
The main problem was that when the runner registered, the config.toml file was modified by the registration process and this overwrote my settings; to solve this I append the volume configuration with cat after the runner registration.
Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: gitlab-runner
  namespace: default
spec:
  template:
    metadata:
      labels:
        name: gitlab-runner
    spec:
      serviceAccountName: gitlab-sa
      nodeName: 140.6.254.244
      containers:
      - name: gitlab-runner
        image: gitlab/gitlab-runner
        securityContext:
          privileged: true
        command: ["/bin/bash", "/scripts/entrypoint"]
        env:
        - name: KUBERNETES_NAMESPACE
          value: default
        - name: KUBERNETES_SERVICE_ACCOUNT
          value: gitlab-sa
        # This references the previously specified configmap and mounts it as a file
        volumeMounts:
        - mountPath: /scripts
          name: configmap
        livenessProbe:
          exec:
            command: ["/usr/bin/pgrep","gitlab.*runner"]
          initialDelaySeconds: 60
          timeoutSeconds: 1
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 3
        readinessProbe:
          exec:
            command: ["/usr/bin/pgrep","gitlab.*runner"]
          initialDelaySeconds: 10
          timeoutSeconds: 1
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 3
      volumes:
      - configMap:
          name: gitlab-runner-cm
        name: configmap
Config Map (Here is the solution!)
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: gitlab-runner-cm
  namespace: default
data:
  entrypoint: |
    #!/bin/bash
    set -xe
    cp /scripts/config.toml /etc/gitlab-runner/
    # Register the runner
    /entrypoint register --non-interactive --registration-token ############ --url http://gitlab.######.net --clone-url http://gitlab.######.net --executor "kubernetes" --name "Kubernetes Runner" --config "/etc/gitlab-runner/config.toml"
    cat >> /etc/gitlab-runner/config.toml << EOF
        [[runners.kubernetes.volumes.host_path]]
          name = "docker"
          path = "/var/run/docker.sock"
          mount_path = "/var/run/docker.sock"
          read_only = false
        [[runners.kubernetes.volumes.host_path]]
          name = "maven"
          mount_path = "/.m2/"
          host_path = "/mnt/dados/volumes/maven-gitlab-ci"
          read_only = false
        [[runners.kubernetes.volumes.host_path]]
          name = "resolvedns"
          mount_path = "/etc/resolv.conf"
          read_only = true
          host_path = "/etc/resolv.conf"
    EOF
    # Start the runner
    /entrypoint run --user=gitlab-runner \
      --working-directory=/home/gitlab-runner \
      --config "/etc/gitlab-runner/config.toml"
  config.toml: |
    concurrent = 50
    check_interval = 10
    [[runners]]
      name = "PC-CVO"
      url = "http://gitlab.########.###"
      token = "##############"
      executor = "kubernetes"
      cache_dir = "/tmp/gitlab/cache"
      [runners.kubernetes]
Check if GitLab 15.6 (November 2022) can help:
Mount ConfigMap to volumes with the Auto Deploy chart
The default Auto Deploy Helm chart now supports the extraVolumes and extraVolumeMounts options.
In past releases, you could specify only Persistent Volumes for Kubernetes.
Among other use cases, you can now mount:
Secrets and ConfigMaps as files to Deployments, CronJobs, and Workers.
Existing or external Persistent Volumes Claims to Deployments, CronJobs, and Workers.
Private PKI CA certificates with hostPath mounts to achieve trust with the PKI.
Thanks to Maik Boltze for this community contribution.
See Documentation and Issue.
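For illustration, a hypothetical values snippet using those two options to mount a ConfigMap into the deployed pods (the maven-settings names are placeholders, not taken from the GitLab docs):

extraVolumes:
- name: maven-settings
  configMap:
    name: maven-settings
extraVolumeMounts:
- name: maven-settings
  mountPath: /root/.m2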
