How does Kubernetes work from within a Docker container - Linux

How does a Kubernetes command (kubectl get no) run from within a Docker container?
I know that it has to talk to the API server, but I can't find a config file containing the connection details (like the .kube/config file found under my user).
I've run env to check which variables are set.
I've gone to the home directory, which has a .kube directory but no config file.

As per documentation:
The recommended way to authenticate to the apiserver is with a service account credential. By default, a pod is associated with a service account, and a credential (token) for that service account is placed into the filesystem tree of each container in that pod, at /var/run/secrets/kubernetes.io/serviceaccount/token
When kubectl connects to the API using a service account, the token placed at /var/run/secrets/kubernetes.io/serviceaccount/token is used.
When you create a pod, if you do not specify a service account, it is automatically assigned the default service account in the same namespace
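For example, any process inside the pod can call the API server directly with the mounted token and CA bundle; a minimal sketch (kubernetes.default.svc is the standard in-cluster DNS name of the API server):
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/api/v1/namespaces/default/pods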
When you perform a config operation with kubectl, such as:
kubectl config set-context test
a .kube/config file will be created automatically.
You can also set a different serviceAccountName in your pod spec and auto-mount its token, like:
spec:
  serviceAccountName: <your_service_account>
  automountServiceAccountToken: true
You can find more information about Configure Service Accounts for Pods here.
Hope this helps.

Related

Azure DevOps Release Pipeline || To sign in, use a web browser to open

I created the AKS cluster with an Azure service principal ID and granted it the Contributor role on the subscription and resource group.
Every time I run the pipeline it asks me to sign in, and only after I authenticate does it get the data.
Also, the "kubectl get" task takes more than 30 minutes and returns "Kubectl Server Version: Could not find kubectl server version".
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code CRA2XssWEXUUA to authenticate
Thanks in advance
What is the version of the created cluster?
I'm assuming from your snapshot that you are using az in order to get credentials for it.
The old Azure auth plugin is deprecated in v1.22+. If you are using v1.22 or above, you should use kubelogin in order to authenticate.
You will also need to update your kube config accordingly:
kubelogin convert-kubeconfig
and specifically, if you're logging in via az:
kubelogin convert-kubeconfig -l azurecli
Note that the -l azurecli flag is important here: the default value is "devicecode", which will not use your az login as the login method, and you will still be asked for browser authentication.
Alternatively, you can set the environment variable:
AAD_LOGIN_METHOD=azurecli
Because you are getting a sign-in request rather than the deprecation warning for the auth plugin, I suspect that you already have kubelogin installed on your agent, and you just need to update the kubeconfig file.
What task are you using? There is an official Kubectl task: https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/kubernetes?view=azure-devops
It requires the service connection.
If you still want to execute kubectl directly, you should run the following before running kubectl inside the AzureCLI task:
az aks get-credentials --resource-group "$(resourceGroup)" --name "$(k8sName)" --overwrite-existing
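Put together, a hedged sketch of what such an AzureCLI task step could look like (the service connection name is an assumption; resourceGroup and k8sName are the pipeline variables from the command above):
- task: AzureCLI@2
  inputs:
    azureSubscription: 'my-arm-service-connection'   # assumption: name of your ARM service connection
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # fetch cluster credentials, convert the kubeconfig for non-interactive az login, then run kubectl
      az aks get-credentials --resource-group "$(resourceGroup)" --name "$(k8sName)" --overwrite-existing
      kubelogin convert-kubeconfig -l azurecli
      kubectl get nodes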
Please use self-hosted agents to execute your commands. It looks like you have private endpoints for your AKS, and requests are only allowed from trusted devices.
I ran into the same issue, and for me the fix was to change the Connection Type in the stage definition from Azure Resource Manager to Kubernetes Service Connection.
Then you should also be able to specify the connection type in each of the tasks where you are running kubectl or helm commands. For example, in a kubectl task, under Kubernetes Cluster --> Service connection type, use the Kubernetes Service Connection.
As mentioned by @DevOpsEngg, the problem could be related to private endpoints, but I wouldn't say it is about self-hosted agents, because I'm using these. As an extra comment: this started happening when I added more than one user to the cluster, so you might want to check user permissions and authentication. Unfortunately, I'm still getting used to K8s, so I don't have more info about that.

Is there a way to pull images from Azure ACR without passing a secret to every container?

I'm using Minikube for development, and I need to build a k8s app that pulls all of its images from ACR; all the images are already stored in ACR.
To pull images from Azure, what I need to do is create a secret with the username and password of the Azure account and pass this secret to every image that I want to pull, using imagePullSecrets (documentation here).
Is there a way to add this registry as a global setting for the namespace, or for the project?
I don't understand why every image needs to be given the secret explicitly in the spec.
Edit:
Thanks for the comments, I'll check them later; for now I resolved this problem at the Minikube level. There is a way to set a private registry in Minikube (doc here).
In my version this bug exists, and this answer resolves the problem.
As far as I know, if you do not use Kubernetes in Azure (I mean Azure Kubernetes Service), then there are two ways I know of to pull images from ACR. One is the way you already know, using secrets. The other is to use a service account, but you also need to configure it in each deployment or pod, the same way as with secrets.
If you use Azure Kubernetes Service, then you just need to assign the AcrPull role to the service principal of the AKS cluster, and then you need to set nothing per image.
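For example, with the az CLI this can be done by attaching the registry to the cluster, which grants the AcrPull role to the cluster's identity; a sketch (myAKSCluster, myResourceGroup and myRegistry are placeholders):
az aks update --name myAKSCluster --resource-group myResourceGroup --attach-acr myRegistry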
You can add imagePullSecrets to a service account (e.g. to the default service account).
The imagePullSecrets will then automatically be added to the spec of any pod that uses this specific (e.g. default) service account, so you don't have to do it explicitly.
You can do it running:
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'
You can verify it with:
$ kubectl run nginx --image=nginx --restart=Never
$ kubectl get pod nginx -o=jsonpath='{.spec.imagePullSecrets[0].name}{"\n"}'
myregistrykey
Also check out the k8s docs: add-image-pull-secret-to-service-account.
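For reference, after the patch the default service account should look roughly like this (the secret name is the one from the command above):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: default
imagePullSecrets:
- name: myregistrykey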
In my case, I had a local Minikube installed in order to test my charts and my code locally. I tried most of the solutions suggested here and in other Stack Overflow posts, and these are the options I found:
Move the image from the local Docker registry to Minikube's registry and set the pullPolicy to Never or IfNotPresent in your chart.
docker build . -t my-docker-image:v1
minikube image load my-docker-image:v1
$ minikube image list
rscoreacr.azurecr.io/decibel:0.0.1
k8s.gcr.io/pause:3.5
k8s.gcr.io/kube-scheduler:v1.22.3
k8s.gcr.io/kube-proxy:v1.22.3
...
##Now edit your chart and change the `pullPolicy`.
helm install my_name chart/ ## should work.
I think the main disadvantage of this option is that you need to change your chart and remember to change the values back afterwards.
Create a secret that holds the credentials for the ACR.
First log in to the ACR via:
az acr login --name my-registry.azurecr.io --expose-token
The output of the command should show you a user and an access token.
Now you should create a Kubernetes secret (make sure that you are on the right Kubernetes context, i.e. Minikube):
kubectl create secret docker-registry my-azure-secret --docker-server=my-registry.azurecr.io --docker-username=<my-user> --docker-password=<access-token>
Now, if your chart uses the default service account (when you create a pod, if you do not specify a service account, it is automatically assigned the default service account in the same namespace), you should edit the service account via the following command:
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "my-azure-secret"}]}'
I didn't like this option because, if I have a different secret provider for every Helm chart, I need to overwrite the YAML with the imagePullSecrets.
Another alternative is Minikube's registry-creds addon.
Personally, the solution I went for is the first one, with a tweak: instead of setting the pullPolicy in the YAML itself, I override it when I install the chart:
$ helm install --set image.pullPolicy=IfNotPresent <name> charts/

Authenticating to Google Cloud Firestore from GKE with Workload Identity

I'm trying to write a simple backend that will access my Google Cloud Firestore, it lives in the Google Kubernetes Engine. On my local I'm using the following code to authenticate to Firestore as detailed in the Google Documentation.
if (process.env.NODE_ENV !== 'production') {
  const result = require('dotenv').config()
  // Additional error handling here
}
This pulls the GOOGLE_APPLICATION_CREDENTIALS environment variable and populates it with my google-application-credentials.json, which I got from creating a service account with the "Cloud Datastore User" role.
So, locally, my code runs fine. I can reach my Firestore and do everything I need to. However, the problem arises once I deploy to GKE.
I followed this Google documentation to set up Workload Identity for my cluster. I've created a deployment and verified that the pods are all using the correct IAM service account by running:
kubectl exec -it POD_NAME -c CONTAINER_NAME -n NAMESPACE sh
> gcloud auth list
I was under the impression from the documentation that authentication would be handled for my service as long as the above held true. I'm really not sure why but my Firestore() instance is behaving as if it does not have the necessary credentials to access the Firestore.
In case it helps below is my declaration and implementation of the instance:
const firestore = new Firestore()
const server = new ApolloServer({
  schema: schema,
  dataSources: () => {
    return {
      userDatasource: new UserDatasource(firestore)
    }
  }
})
UPDATE:
In a bout of desperation I decided to tear everything down and rebuild it. Following everything step by step, I appear to have either encountered a bug or (more likely) done something mildly wrong the first time. I'm now able to connect to my backend service. However, I'm now getting a different error. Upon sending any request (I'm using GraphQL, but in essence it's any REST call) I get back a 404.
Inspecting the logs yields the following:
'Getting metadata from plugin failed with error: Could not refresh access token: A Not Found error was returned while attempting to retrieve an accesstoken for the Compute Engine built-in service account. This may be because the Compute Engine instance does not have any permission scopes specified: Could not refresh access token: Unsuccessful response status code. Request failed with status code 404'
A cursory search for this issue doesn't seem to return anything related to what I'm trying to accomplish, and so I'm back to square one.
I think your initial assumption was correct! Workload Identity is not functioning properly if you still have to specify scopes. In the Workload article you have linked, scopes are not used.
I've been struggling with the same issue and have identified three ways to get authenticated credentials in the pod.
1. Workload Identity (basically the Workload Identity article above with some deployment details added)
This method is preferred because it allows each pod deployment in a cluster to be granted only the permissions it needs.
Create cluster (note: no scopes or service account defined)
gcloud beta container clusters create {cluster-name} \
--release-channel regular \
--identity-namespace {projectID}.svc.id.goog
Then create the k8sServiceAccount, assign roles, and annotate.
gcloud container clusters get-credentials {cluster-name}
kubectl create serviceaccount --namespace default {k8sServiceAccount}
gcloud iam service-accounts add-iam-policy-binding \
--member serviceAccount:{projectID}.svc.id.goog[default/{k8sServiceAccount}] \
--role roles/iam.workloadIdentityUser \
{googleServiceAccount}
kubectl annotate serviceaccount \
--namespace default \
{k8sServiceAccount} \
iam.gke.io/gcp-service-account={googleServiceAccount}
Then I create my deployment, and set the k8sServiceAccount.
(Setting the service account was the part that I was missing)
kubectl create deployment {deployment-name} --image={containerImageURL}
kubectl set serviceaccount deployment {deployment-name} {k8sServiceAccount}
Then expose with a target of 8080
kubectl expose deployment {deployment-name} --name={service-name} --type=LoadBalancer --port 80 --target-port 8080
The googleServiceAccount needs to have the appropriate IAM roles assigned (see below).
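If you prefer declaring this in YAML rather than via kubectl set serviceaccount, a hedged sketch of the relevant part of the deployment manifest (the names and image URL are placeholders for {deployment-name}, {k8sServiceAccount}, and {containerImageURL}):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment              # placeholder for {deployment-name}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-deployment
  template:
    metadata:
      labels:
        app: my-deployment
    spec:
      serviceAccountName: my-k8s-sa          # the annotated {k8sServiceAccount}
      containers:
      - name: app
        image: gcr.io/my-project/my-app:latest   # placeholder for {containerImageURL}
        ports:
        - containerPort: 8080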
2. Cluster Service Account
This method is not preferred, because all VMs and pods in the cluster will have permissions based on the defined service account.
Create cluster with assigned service account
gcloud beta container clusters create [cluster-name] \
--release-channel regular \
--service-account {googleServiceAccount}
The googleServiceAccount needs to have the appropriate IAM roles assigned (see below).
Then deploy and expose as above, but without setting the k8sServiceAccount
3. Scopes
This method is not preferred, because all VMs and pods in the cluster will have permissions based on the scopes defined.
Create the cluster with assigned scopes (Firestore only requires "cloud-platform"; Realtime Database also requires "userinfo.email")
gcloud beta container clusters create {cluster-name} \
--release-channel regular \
--scopes https://www.googleapis.com/auth/cloud-platform,https://www.googleapis.com/auth/userinfo.email
Then deploy and expose as above, but without setting the k8sServiceAccount
The first two methods require a Google Service Account with the appropriate IAM roles assigned. Here are the roles I assigned to get a few Firebase products working:
FireStore: Cloud Datastore User (Datastore)
Realtime Database: Firebase Realtime Database Admin (Firebase Products)
Storage: Storage Object Admin (Cloud Storage)
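Granting one of these roles to the Google service account can be done with gcloud; a sketch for the Firestore case (the project ID and service account email are placeholders, and roles/datastore.user corresponds to Cloud Datastore User):
gcloud projects add-iam-policy-binding {projectID} \
  --member "serviceAccount:{googleServiceAccount}" \
  --role "roles/datastore.user"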
Going to close this question.
Just in case anyone stumbles onto it, here's what fixed it for me.
1.) I re-followed the steps in the Google documentation link above; this fixed the issue of my pods not launching.
2.) As for my update, I re-created my cluster and gave it the Cloud Datastore permission. I had assumed that those permissions were separate from what Workload Identity needed to function. I was wrong.
I hope this helps someone.

Azure Container status waiting due to the ImagePullBackOff

I'm new to Azure AKS and Docker. I followed the steps in this article.
Finally, I completed all the steps and got this status.
But the external IP is not giving the expected output. I checked in the Azure portal; the container status is waiting. Am I missing anything here?
Authentication will be needed to pull the images from ACR. We have to create a docker-registry secret for authentication. To do this, open Cloud Shell on the Azure Portal and run the command below.
> kubectl create secret docker-registry mysecretname --docker-server=myacrname.azurecr.io --docker-username=myacrname --docker-password=myacrpwd --docker-email=myportalemail
Don’t forget to change your password and email address.
To access your password, go to your Azure Container Registry:
https://portal.azure.com/ » Your Container Registry » Access keys
Finally, make sure that the Docker image URL in your Kubernetes YAML file is right:
https://github.com/husseinsa/kubernetes-multi-container-app/blob/master/k8/frontend.yaml
https://github.com/husseinsa/kubernetes-multi-container-app/blob/master/k8/backend.yaml
spec:
  containers:
  - name: backend
    image: mywebregistry.azurecr.io/backend:v1
    ports:
    - containerPort: 80
Put the image URL from your Azure Container Registry here.
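The secret created above also needs to be referenced from the pod spec via imagePullSecrets; a minimal sketch using the names from the command and manifest above:
spec:
  containers:
  - name: backend
    image: mywebregistry.azurecr.io/backend:v1
    ports:
    - containerPort: 80
  imagePullSecrets:
  - name: mysecretname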

How to get all running PODs on Kubernetes cluster

This simple Node.js program works fine locally because it pulls the Kubernetes config from my local /root/.kube/config file:
const Client = require('kubernetes-client').Client;
const Config = require('kubernetes-client/backends/request').config;
const client = new Client({ config: Config.fromKubeconfig(), version: '1.13' });
const pods = await client.api.v1.namespaces('xxxxx').pods.get({ qs: { labelSelector: 'application=test' } });
console.log('Pods: ', JSON.stringify(pods));
Now I want to run it as a Docker container on cluster and get all current cluster's running PODs (for same/current namespace). Now of course it fails:
Error: { Error: ENOENT: no such file or directory, open '/root/.kube/config'
So how do I make it work when deployed as a Docker container to the cluster?
This little service needs to scan all running pods... Assume it doesn't need to pull config data since it's already deployed... So it needs to access the pods on the current cluster.
A couple of concepts to get your head around first:
Service account
Role
Role binding
To achieve your end goal (which, if I understand correctly, is to containerize a Node.js application):
Step 1: Put the application in a container
Step 2: Create a deployment/statefulset/daemonset as per your requirement, using the container created in step 1
Explanation:
In step 2 above, if you do not explicitly mention a (custom) service account, then by default the default service account is used, and its credentials are mounted inside the container here:
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
  name: default-token-xxxx
  readOnly: true
This can be verified with the following command after (successful) pod creation:
kubectl get pod -n {yournamespace(by default is default)} POD_NAME -o yaml
Now (gotcha!) whether you can access the cluster with those credentials depends on which service account you are using and the rights of that service account. For example, if you are using an abc service account that has no role binding, then you will not be able to view the cluster. In that case you first need to create a Role (to read pods) and a RoleBinding (for that role) to the service account, as sketched below.
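A minimal sketch of such a Role and RoleBinding for the default service account in the default namespace (the resource names are illustrative):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io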
UPDATE: The problem was resolved by changing Config.fromKubeconfig() to Config.getInCluster(). Ref
Clarification: the fromKubeconfig() function is fine if you are running your application on a node that is part of the Kubernetes cluster and has the cluster access token saved in /$USER/.kube/config, but if you want to run the Node.js application in a container in a pod, then you need Config.getInCluster() to load the token.
If you are nosy enough, check the comments of this answer! :P
Note: the Node.js library under discussion here is this one.
