Certificate and service token in GitLab pipeline for Kubernetes service - Azure

I am a neophyte, and I'm trying to configure my project on GitLab so I can integrate it with a Kubernetes cluster in an infrastructure pipeline.
While configuring, GitLab asked for a certificate and a token. Since Kubernetes is deployed on Azure, how can I create/retrieve the required certificate and token?
Also, which user/secret in the Kubernetes service does it refer to?

You can get the default values of the CA certificate using the steps below:
CA certificate:
The CA certificate is simply the Kubernetes certificate that we use in the kubeconfig file for authenticating to the cluster.
Connect to the AKS cluster: az aks get-credentials --resource-group <RG> --name <KubeName>
Run kubectl get secrets; the output will include a default token name, which you can copy.
Run kubectl get secret <secret name> -o jsonpath="{['data']['ca\.crt']}" | base64 --decode to get the certificate; copy the certificate and use it when setting up the runner.
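Alternatively (a sketch, assuming the AKS cluster entry is the first entry in the kubeconfig that az aks get-credentials just wrote), the same CA certificate can be read straight from the kubeconfig:
kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 --decode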
Token:
The token is that of a service account with cluster-admin permissions, which GitLab will use to access the AKS cluster, so you can create a new admin service account (if one was not created earlier) using the steps below:
Create a YAML file with the following contents:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gitlab-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: gitlab-admin
  namespace: kube-system
Run kubectl apply -f <filename>.yaml to create the service account and bind it to the cluster.
Run kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep gitlab-admin | awk '{print $1}') to get the token for the gitlab-admin account we created and bound in the previous step. You can copy the token value and use it in the runner settings.
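Note: on Kubernetes 1.24 and newer, long-lived token Secrets are no longer created automatically for service accounts, so the grep above may return nothing. In that case (a sketch, assuming kubectl 1.24 or newer) a token can be requested explicitly:
kubectl -n kube-system create token gitlab-admin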

Related

Getting the error "could not find tiller" while checking the Helm version

I am trying to install Helm in Kubernetes, and the installation itself succeeded.
When I check the Helm version it shows the error below:
helm version
Client: &version.Version{SemVer:"v2.X.X", GitCommit:"XXXXXXXXXXXXXXXXX", GitTreeState:"clean"}
Error: could not find tiller
When I run the init command it says Tiller is already installed in the cluster:
helm init --history-max 200 --service-account tiller
$HELM_HOME has been configured at /home/user/.helm
Warning: Tiller is already installed in the cluster
When I check the events for the pod I see the error below:
Type     Reason        Age                  From                   Message
Warning  FailedCreate  11m (x25 over 132m)  replicaset-controller  error creating: pod "tiller-deploy-xxxxx" is forbidden: error looking up service account: tiller not found
How can I resolve this issue? Any ideas?
I tried to reproduce the same issue in my environment and got the results below.
When I check the Helm version I get the same error.
When I run the init command it shows the same message saying Tiller already exists:
helm init --history-max 200 --service-account tiller
I am getting this error because the tiller service account does not exist.
To resolve this issue I created a YAML file with the service account binding as shown below. I took this script from the SO link and changed it as per my requirements:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"name":"tiller"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"cluster-admin"},"subjects":[{"kind":"ServiceAccount","name":"tiller","namespace":"kube-system"}]}
  creationTimestamp: "XXXXXXX"
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
I deployed this service account binding using the command below:
kubectl apply -f filename.yaml
Then I deleted the replica set so that a new one is created:
kubectl -n kube-system delete replicaset replica-name
After deleting the replica set, it is automatically recreated:
kubectl -n kube-system get replicaset
When I check the Helm version now, I can see output as shown below.
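(The output screenshot is not included here; for Helm 2, a successful check typically looks roughly like the following, with real values in place of the placeholders.)
Client: &version.Version{SemVer:"v2.x.x", GitCommit:"xxxxxxx", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.x.x", GitCommit:"xxxxxxx", GitTreeState:"clean"}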
Are you sure the tiller service account was created?
Try creating the service account and giving it the required permissions:
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
After that, initialize Helm again and see if the error goes away:
helm init --history-max 200 --service-account tiller
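To double-check afterwards, you could confirm that the service account exists and that the Tiller pod comes up (a sketch; the label selector is an assumption about the labels helm init applies to the tiller-deploy pod):
kubectl -n kube-system get serviceaccount tiller
kubectl -n kube-system get pods -l app=helm,name=tiller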

How can we access the Kubernetes ingress controller IP using HTTPS?

I have deployed an application in Azure Kubernetes Service (AKS) and used an ingress controller for my POC. Previously I was using a domain (saurabh.com), and I am able to access saurabh.com through HTTPS.
Now I want to access my application using the IP address with HTTPS.
My ingress YAML file looks like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: saurabh-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  tls:
  - secretName: tls-secret
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: saurabh-ui
            port:
              number: 4200
With this, I am able to access my application using the IP, but it comes up over HTTP, not HTTPS. Can someone please help me with this? I want to access my application's IP through HTTPS.
Note: I have installed the certificates. When I access the domain saurabh.com, it comes up over HTTPS.
Thanks in advance.
I tried to reproduce the issue in my environment and got the results below.
Please use this link to access the files.
I created the namespace:
kubectl create namespace namespace_name
Created the applications and deployed them into Kubernetes:
kubectl apply -f filename.yaml
To check the namespaces which were created, and to get the IP address, use the command below:
kubectl get svc -n namespace_name
I installed the Helm chart for the NGINX ingress controller and deployed the ingress resource into Kubernetes; a minimal install is sketched below.
NOTE: After installing the NGINX controller, the service type has to be changed from ClusterIP to LoadBalancer.
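A sketch of such an install (the release name and target namespace are assumptions; controller.service.type is set explicitly here so the controller service gets an external IP):
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx \
  --namespace namespace_name \
  --set controller.service.type=LoadBalancer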
Here I have enabled HTTPS in AKS using cert-manager; it will automatically generate and configure the certificates.
I created the namespace for cert-manager:
kubectl create namespace namespace_name
kubectl get svc --namespace namespace_name
I installed cert-manager with Helm using the command below:
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --version v0.14.0 \
  --set installCRDs=true
To check the cert-manager pods:
kubectl get pods --namespace cert-manager
I created the cluster issuer and deployed it:
vi filename.yaml
kubectl apply --namespace app -f filename.yaml
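A minimal sketch of what that cluster issuer file might contain (assumptions: the issuer is named letsencrypt to match the ingress annotation, the email is a placeholder, and for cert-manager v0.14.x the apiVersion is cert-manager.io/v1alpha2; newer releases use cert-manager.io/v1):
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: user@example.com          # placeholder
    privateKeySecretRef:
      name: letsencrypt
    solvers:
    - http01:
        ingress:
          class: nginx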
I created and installed the TLS/SSL certificate:
kubectl apply --namespace app -f filename.yaml
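A minimal sketch of that certificate file (assumptions: the names match the app-web-cert resource and the tls-secret referenced in the question's ingress, and the DNS name is a placeholder; note that Let's Encrypt issues certificates for DNS names, not for bare IP addresses):
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: app-web-cert
  namespace: app
spec:
  secretName: tls-secret
  dnsNames:
  - saurabh.com                      # placeholder domain
  issuerRef:
    name: letsencrypt
    kind: ClusterIssuer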
We can verify whether the certificate was created using the command below; it will show whether the certificate is ready:
kubectl describe cert app-web-cert --namespace namespace_name
Check the service using the command below:
kubectl get services -n app
Test the app over HTTPS using the hostname or the IP address, e.g. https://<hostname>.
Here we can also check the certificate which we have added.

Secret is not created in AKS after fetching it with the CSI Driver

Using the document https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-nginx-tls as a reference, I'm trying to fetch TLS secrets from AKV into AKS pods.
Initially I created and configured the CSI driver configuration using a user-assigned managed identity.
I have performed the following steps:
Create an AKS cluster with 1 node pool.
Create AKV.
Created a user-assigned managed identity and assigned it to the node pool, i.e. to the VMSS created for AKS.
Installed the CSI Driver Helm chart in the AKS "kube-system" namespace and completed all the requirements to perform these operations.
Created the TLS certificate and key.
Using the TLS certificate and key, created a .pfx file.
Uploaded that .pfx file to the AKV certificates, named "ingresscert".
Created a new namespace in AKS named "ingress-test".
Deployed the SecretProviderClass in that namespace as follows:
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-tls
spec:
  provider: azure
  secretObjects:                     # secretObjects defines the desired state of synced K8s secret objects
  - secretName: ingress-tls-csi
    type: kubernetes.io/tls
    data:
    - objectName: ingresscert
      key: tls.key
    - objectName: ingresscert
      key: tls.crt
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "true"
    userAssignedIdentityID: "7*******-****-****-****-***********1"
    keyvaultName: "*****-*****-kv"   # the name of the AKV instance
    objects: |
      array:
        - |
          objectName: ingresscert
          objectType: secret
    tenantId: "e*******-****-****-****-***********f"   # the tenant ID of the AKV instance
Deployed the nginx-ingress-controller Helm chart in the same namespace, where the certificates are bound with the application.
Deployed the BusyBox deployment as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox-one
  labels:
    app: busybox-one
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox-one
  template:
    metadata:
      labels:
        app: busybox-one
    spec:
      containers:
      - name: busybox
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
        command:
        - "/bin/sleep"
        - "10000"
        volumeMounts:
        - name: secrets-store-inline
          mountPath: "/mnt/secrets-store"
          readOnly: true
      volumes:
      - name: secrets-store-inline
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: "azure-tls"
---
apiVersion: v1
kind: Service
metadata:
  name: busybox-one
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: busybox-one
Check whether the secret is created using the command:
kubectl get secret -n <namespaceName>
One thing to note here: if I attach a shell to the BusyBox pod and go to the mount path I provided for the secrets, I can see that the secrets are fetched there successfully. But these secrets are not showing up in the AKS secret list.
I have troubleshooted the AKS, KV, and manifest files but have not found anything.
If there is anything I have missed, or anyone has a solution for this, please let me know.
Thanks in advance!
I added this as a new answer because the formatting was bad in the comments:
As you are using the Helm chart, you have to activate the secret sync in the values.yaml of the Helm chart:
secrets-store-csi-driver:
  syncSecret:
    enabled: true
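As a sketch, the same setting could be applied with a Helm upgrade (the release name, chart repo and namespace below are assumptions about how the chart was installed). Note also that the synced Kubernetes secret is only created while a pod that mounts the SecretProviderClass volume is running:
helm repo add csi-secrets-store-provider-azure https://azure.github.io/secrets-store-csi-driver-provider-azure/charts
helm upgrade csi-secrets-store-provider-azure csi-secrets-store-provider-azure/csi-secrets-store-provider-azure \
  --namespace kube-system \
  --set secrets-store-csi-driver.syncSecret.enabled=true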
I would still recommend using csi-secrets-store-provider-azure as an AKS add-on instead of the Helm chart.
Your config looks good to me. One thing to consider: the user-assigned managed identity should not be the one you created for the AKS cluster; it should be the managed identity of your node pool (kubelet), and it also needs permissions on the AKV.
I had the same issue while using the wrong managed identity.
userAssignedIdentityID = kubelet client ID (node pool managed identity)
AZ CLI:
export KUBE_ID=$(az aks show -g <resource group> -n <aks cluster name> --query identityProfile.kubeletidentity.objectId -o tsv)
export AKV_ID=$(az keyvault show -g <resource group> -n <akv name> --query id -o tsv)
az role assignment create --assignee $KUBE_ID --role "Key Vault Secrets Officer" --scope $AKV_ID

Use Key File with Application Running on Kubernetes Cluster

I'm trying to use a key file in my Kubernetes application and I can't seem to find an example of this anywhere. I want to use Firebase authentication in my NodeJS backend. When running my application locally I was using the following:
admin.initializeApp({
  credential: admin.credential.cert(SERVICE_ACCOUNT_KEY_PATH),
});
My initial thought was to create a secret from a key file like this:
$ gcloud container clusters get-credentials my-cluster --zone us-central1-c --project my-project
$ kubectl create secret generic service-account-key \
    --from-file=${SERVICE_ACCOUNT_KEY_PATH}
However, since I am creating a secret, there is no path for me to set SERVICE_ACCOUNT_KEY_PATH to when running my application in a Kubernetes container. What is the correct method for doing this in Kubernetes?
You can store the service account key file inside a Secret and mount that Secret as a volume in your pod (or deployment template), so the file becomes accessible inside the container at the mount path.
For example:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: nginx
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: mysecret
You can check out:
https://kubernetes.io/docs/concepts/configuration/secret/#use-case-pod-with-ssh-keys
Another example: https://kubernetes.io/docs/concepts/configuration/secret/#use-case-dotfiles-in-a-secret-volume
So the basic idea is to mount the secret into a volume of the deployment, and the code then reads the key file from that mount path.
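Applied to this question, the application can keep reading SERVICE_ACCOUNT_KEY_PATH, pointed at the mounted file. A sketch (the key file name serviceAccountKey.json, the image name and the volume names are assumptions; the secret name matches the kubectl create secret generic service-account-key command above, and the file key inside the secret defaults to the basename of the file passed to --from-file):
apiVersion: v1
kind: Pod
metadata:
  name: my-backend
spec:
  containers:
  - name: my-backend
    image: my-node-backend:latest           # hypothetical image
    env:
    - name: SERVICE_ACCOUNT_KEY_PATH        # the path the NodeJS code already reads
      value: /etc/firebase/serviceAccountKey.json
    volumeMounts:
    - name: firebase-key
      mountPath: /etc/firebase
      readOnly: true
  volumes:
  - name: firebase-key
    secret:
      secretName: service-account-key       # created with kubectl create secret generic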

AKS RBAC - Rolebinding has no effect

I'm setting up RBAC in my AKS cluster, which is integrated with Azure AD, following the instructions here. I have created an AD group in my AAD tenant and added a user to it. The group was then assigned the "Cluster User" role in the AKS cluster as per the instructions. I created a Role and RoleBinding as shown below:
Role:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: development
  name: restricted-role
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
RoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: development
subjects:
- kind: Group
  name: 308f50cb-e05a-4340-99d4-xxxxxxxb
  apiGroup: rbac.authorization.k8s.io
  namespace: development
roleRef:
  kind: Role
  name: restricted-role
  apiGroup: rbac.authorization.k8s.io
I then tried logging in using the new user's credentials:
az login --username kubeuser@xxx.onmicrosoft.com --password xxxx
az aks get-credentials --name mycluster --resource-group myrg --overwrite-existing
As per the documentation, I should only be allowed to run kubectl get pods in the development namespace. However, using these new user credentials, I can run kubectl get pods --all-namespaces, kubectl get svc --all-namespaces, etc. and view the results, as if the RoleBinding had no impact at all. I have also verified that my cluster has
"enableRBAC": true
Can someone please tell me what is wrong with this configuration?
Using the command:
az aks show -g <rg> -n <clusterName> --query aadProfile
you can confirm whether the cluster is AAD-enabled. If it is, the kubeconfig file you get from:
az aks get-credentials -g <rg_name> -n <aks_name>
should look like:
user:
  auth-provider:
    config:
      apiserver-id: <appserverid>
      client-id: <clientid>
      environment: AzurePublicCloud
      tenant-id: <tenant>
    name: azure
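If instead the user section contains client-certificate-data and client-key-data, the kubeconfig holds admin certificates (for example because it was fetched with --admin), and those bypass AAD and Kubernetes RBAC entirely. In that case (a hedged suggestion, assuming that is what happened here) re-fetch the non-admin credentials and retry:
az aks get-credentials --name mycluster --resource-group myrg --overwrite-existing
kubectl get pods -n development        # should trigger an AAD device login; access outside 'development' should then be denied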
