Clean up a k8s CronJob and its running Pods

I encountered a weird cleanup issue in k8s when cleaning up a CronJob and its associated running Pods. It looks like some component created a new Pod when I did the following:
1) Delete the CronJob using the k8s REST API
2) Delete the Pod using the k8s REST API
What I expected after doing 1) and then 2) was that no CronJob Pod would be running. But I observed:
The CronJob was deleted
The existing running Pods were also deleted
A new CronJob Pod was created by some component in k8s
I am wondering whether these steps to clean up a CronJob and its Pods are correct, and which component in k8s creates the new CronJob Pod?
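For concreteness, here is a minimal sketch (not from the question) of the two delete calls in steps 1) and 2), using the godaddy kubernetes-client Node.js library that appears later in this thread; the namespace default, the names my-cronjob and my-pod, and the batch/v1beta1 API version are assumptions, and propagationPolicy is included because that delete parameter controls whether objects owned by the CronJob are removed along with it:
const Client = require('kubernetes-client').Client;
const Config = require('kubernetes-client/backends/request').config;

async function cleanup() {
  const client = new Client({ config: Config.fromKubeconfig(), version: '1.13' });

  // 1) DELETE /apis/batch/v1beta1/namespaces/default/cronjobs/my-cronjob
  //    propagationPolicy decides whether the Jobs/Pods owned by the CronJob
  //    are deleted with it (Foreground/Background) or orphaned (Orphan).
  await client.apis.batch.v1beta1
    .namespaces('default')
    .cronjobs('my-cronjob')
    .delete({ qs: { propagationPolicy: 'Foreground' } });

  // 2) DELETE /api/v1/namespaces/default/pods/my-pod
  await client.api.v1.namespaces('default').pods('my-pod').delete();
}

cleanup().catch(console.error);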

Related

Is there a way to fix a CoreDNS deployment in AKS which is always in CrashLoopBackOff?

For some context: I was using AKS and had deployed an APIM solution on a cluster, which was working fine for a month, but a few days ago I went back to my cluster and the CoreDNS and CoreDNS autoscaler pods were in a CrashLoopBackOff.
Here are the descriptions of the Pod:
I've tried to scale the deployment
Restarted the deployment
Deleted the pods and updated the deployment image
None of these actions has worked so far, so if anyone has any suggestions, I'd appreciate them.
Here are the deployment files, if they help:
I partially resolved my problem by restarting my cluster on AKS.

How to designate a master pod with a NodeJS app

I'm trying to run a deployment of my NodeJS application in EKS with a ReplicaSet specifying that 3 pods of the application should run. However, I'm trying to make some logic exclusive to one of the Pods, calling it the "master" version of the application.
Is it possible to either a) have a different environment variable like IS_MASTER passed to just that pod, or b) otherwise tell from within the application that it's running on the "master" pod, without multiple deployments?
You can have a sticky identity for each pod using StatefulSets:
https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
Quoting the docs:
Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of their Pods. These pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.
Pods will have the hostnames foo-{0..N-1} given N replicas, so you can do a simple check for the master: if the hostname is foo-0, then it is the master.
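For example, a minimal Node.js sketch of that check (assuming the StatefulSet is named foo, so the hostnames are foo-0 ... foo-(N-1)):
const os = require('os');

// In a StatefulSet named foo, pod hostnames are foo-0, foo-1, ..., foo-(N-1).
// Treat the pod with ordinal 0 as the "master".
const isMaster = os.hostname().endsWith('-0');

if (isMaster) {
  // master-only logic goes here
}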

On-demand creation of a container in an existing kubernetes pod

Assume that I have an active pod that initially contains only one active container.
This container is a Node.js application in TypeScript and shows a user interface when opened in a browser.
Can this container create another container on-demand/dynamically within the SAME POD?
How can we achieve this? Please advise.
Also, will reusing npm modules like https://www.npmjs.com/package/kubernetes-client help in creating such containers within the same pod?
Can this container create another container on-demand/dynamically within the SAME POD? How can we achieve this?
No, the containers within a Pod are declared in the Pod template and need to be declared upfront, before the pod is created. To be more specific: what use case do you have? What are you trying to achieve?
Also, will reusing npm modules like https://www.npmjs.com/package/kubernetes-client help in creating such containers within the same pod?
A Kubernetes client library is useful for interacting with the API server, e.g. for deploying new applications or Pods. But the Kubernetes deployment unit is a Pod - that is the smallest unit you work with. To change a Pod, you create a new one and terminate the previous one.
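As an illustration only, here is a minimal sketch of using that client library to create a separate Pod on demand (rather than a second container in the same Pod); the pod name, image, namespace, and version are placeholder assumptions, and the pod's service account would need RBAC permission to create Pods:
const Client = require('kubernetes-client').Client;
const Config = require('kubernetes-client/backends/request').config;

async function createWorkerPod() {
  // Load credentials from the service account token mounted into this pod
  const client = new Client({ config: Config.getInCluster(), version: '1.13' });

  // Minimal Pod manifest; name, image and command are placeholders
  const podManifest = {
    apiVersion: 'v1',
    kind: 'Pod',
    metadata: { name: 'on-demand-worker' },
    spec: {
      restartPolicy: 'Never',
      containers: [
        { name: 'worker', image: 'busybox', command: ['sh', '-c', 'echo doing on-demand work'] }
      ]
    }
  };

  // POST /api/v1/namespaces/default/pods
  await client.api.v1.namespaces('default').pods.post({ body: podManifest });
}

createWorkerPod().catch(console.error);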

How to get all running Pods on a Kubernetes cluster

This simple Node.js program works fine locally because it pulls the Kubernetes config from my local /root/.kube/config file:
const Client = require('kubernetes-client').Client;
const Config = require('kubernetes-client/backends/request').config;
// Load credentials from the local kubeconfig file
const client = new Client({ config: Config.fromKubeconfig(), version: '1.13' });
// Inside an async function:
const pods = await client.api.v1.namespaces('xxxxx').pods.get({ qs: { labelSelector: 'application=test' } });
console.log('Pods: ', JSON.stringify(pods));
Now I want to run it as a Docker container on the cluster and get all of the cluster's currently running Pods (in the same/current namespace). Of course, it now fails:
Error: { Error: ENOENT: no such file or directory, open '/root/.kube/config'
So how do I make it work when deployed as a Docker container to the cluster?
This little service needs to scan all running Pods... Assume it doesn't need to pull config data since it's already deployed... So it needs to access the Pods on the current cluster.
A couple of concepts to wrap your head around first:
Service account
Role
Role binding
To achieve your end goal (which, if I understand correctly, is to containerize the Node.js application):
Step 1: Put the application in a container.
Step 2: Create a Deployment/StatefulSet/DaemonSet, as per your requirement, using the container created in step 1.
Explanation:
In step 2 above, if you do not explicitly specify a (custom) serviceaccount, the default service account is used, and its credentials are by default mounted inside the container here:
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-xxxx
readOnly: true
which can be verified with this command after (successful) pod creation (the namespace defaults to default):
kubectl get pod -n {your-namespace} POD_NAME -o yaml
Now (gotcha!) whether you can access the cluster with those credentials depends on which serviceaccount you are using and what rights that serviceaccount has. For example, if you are using an abc serviceaccount which has no rolebinding attached to it, then you will not be able to view the cluster. In that case you need to first create a Role (to read pods) and a RoleBinding for that Role to the serviceaccount.
UPDATE: The problem got resolved by changing Config.fromKubeconfig() to Config.getInCluster(). Ref
Clarification: the fromKubeconfig() function is fine if you are running your application on a node that is part of the Kubernetes cluster and has a cluster-access token saved at /$USER/.kube/config, but if you want to run the Node.js application in a container in a pod, then you need Config.getInCluster() to load the token.
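For reference, a minimal in-cluster sketch of the snippet from the question, assuming the same kubernetes-client library and that the pod's service account has the pod-reading Role/RoleBinding described above:
const Client = require('kubernetes-client').Client;
const Config = require('kubernetes-client/backends/request').config;

async function listPods() {
  // getInCluster() reads the service account token and CA mounted at
  // /var/run/secrets/kubernetes.io/serviceaccount instead of ~/.kube/config
  const client = new Client({ config: Config.getInCluster(), version: '1.13' });

  const pods = await client.api.v1
    .namespaces('xxxxx')
    .pods.get({ qs: { labelSelector: 'application=test' } });
  console.log('Pods: ', JSON.stringify(pods));
}

listPods().catch(console.error);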
If you are nosy enough, check the comments on this answer! :P
Note: the Node.js library under discussion here is this.

Azure Container Service (ACS) - How can I safely restart a worker node?

I've deployed a Kubernetes cluster using the Azure CLI (az acs create command). The nodes in the cluster are running Windows.
I want to shutdown and restart a worker node.
I tried kubectl drain to remove the node from the cluster. This works and the node status changes to 'Ready, SchedulingDisabled'
I then shutdown the node using the Azure portal. At this point the node status changes to 'NotReady, SchedulingDisabled'
I then restart the node using the Azure portal. However, the node status remains at 'NotReady, SchedulingDisabled'. I was expecting it to become 'Ready, SchedulingDisabled' and to then be able to run kubectl uncordon to make it available again.
What is the recommended process for shutting down and restarting nodes in a Kubernetes cluster?