I want to shut down Node.js gracefully, but it doesn't receive the preStop signal from Kubernetes.
process.on('preStop', handleShutdown);

function handleShutdown() {
  console.log("Pod will shut down in 30 seconds");
}
I currently do not have a preStop lifecycle command in the .yaml, because I couldn't find any way to get it to notify the Node.js worker.
Thank you!
After the preStop command completes, Kubernetes kills the container, i.e. your app process. Setting up only a 'preStop' handler in Node.js, as in your snippet, does not work, unfortunately: Kubernetes never sends a signal called preStop to your process; the preStop hook is configured in the Pod spec. Here is a sample Pod I played with in minikube to test.
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: lifecycle-demo-container
    image: nginx
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
      preStop:
        exec:
          command: ["/bin/sh", "-c", "/bin/sleep 120"]
  terminationGracePeriodSeconds: 6000
Here I am telling Kubernetes to hold the kill for 120 seconds before terminating the container. If the container does not shut down on its own, it will be forcibly killed after 6000 seconds.
For your case, terminationGracePeriodSeconds itself should be enough.
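On the Node.js side, the process never receives an event called preStop; what it gets when the pod is terminated is a plain SIGTERM. A minimal sketch of a handler, reusing the handleShutdown function from your question:

process.on('SIGTERM', handleShutdown);

function handleShutdown() {
  // Kubernetes has started terminating the pod: clean up and exit.
  console.log('SIGTERM received, shutting down gracefully');
  // close servers / connections here, then:
  process.exit(0);
}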
We are running our Kafka Streams application, written in Java, on Azure Kubernetes Service (AKS). We are new to Kubernetes. To debug an issue we want to take a thread dump of the running pod.
Below are the steps we are following to take the dump.
We build our application image with the Dockerfile below.
FROM mcr.microsoft.com/java/jdk:11-zulu-alpine
RUN apk update && apk add --no-cache gcompat
RUN addgroup -S user1 && adduser -S user1 -G user1
USER user1
WORKDIR .
COPY target/my-application-1.0.0.0.jar .
We deploy the image with the deployment YAML file below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-application-v1.0.0.0
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-application-pod
      app: my-application-app
  template:
    metadata:
      name: my-application-pod
      labels:
        name: my-application-pod
        app: my-application-app
    spec:
      nodeSelector:
        agentpool: agentpool1
      containers:
      - name: my-application-0
        image: myregistry.azurecr.io/my-application:v1.0.0.0
        imagePullPolicy: Always
        command: ["java","-jar","my-application-1.0.0.0.jar","input1","$(connection_string)"]
        env:
        - name: connection_string
          valueFrom:
            configMapKeyRef:
              name: my-application-configmap
              key: connectionString
        resources:
          limits:
            cpu: "4"
          requests:
            cpu: "0.5"
To get a shell to the running container, we run the command below:
kubectl exec -it <POD_NAME> -- sh
To take a thread dump, we run the command below:
jstack PID > threadDump.tdump
but we get a permission denied error.
Can someone suggest how to solve this, or the steps to take thread/heap dumps?
Thanks in advance
Since you likely need the thread dump locally, you can bypass creating the file in the pod and just stream it directly to a file on your local computer:
kubectl exec -i POD_NAME -- jstack 1 > threadDump.tdump
If your thread dumps are large you may want to consider piping to pv first to get a nice progress bar.
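For example, a possible invocation along those lines, assuming pv is installed on your local machine:

kubectl exec -i POD_NAME -- jstack 1 | pv > threadDump.tdump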
I have a use case where (at least for now) I need a k8s pod to stay up without an HTTP or TCP endpoint. I tried the following deployment...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-deployment
  labels:
    app: node-app
spec:
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      labels:
        app: node-app
    spec:
      containers:
      - name: node-server
        image: node:17-alpine
        exec:
          command: ["node","-v"]
        livenessProbe:
          exec:
            command: ["node","-v"]
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          exec:
            command: ["node","-v"]
          initialDelaySeconds: 5
          periodSeconds: 5
But after a while it stops the pod with the following error...
Warning BackOff 5s (x10 over 73s) kubelet, minikube Back-off restarting failed container
And the pod status shows...
node-deployment-58445f5649-z6lkz 0/1 CrashLoopBackOff 5 (103s ago) 4m47s
I know it is running because I see the version in kubectl logs <node-name>. How do I get the image to stay up with no long running process? Is this even possible?
The container's process simply exited; the same thing happens if you run node -v in your terminal: it prints the version and exits.
I'm not sure the use case you describe is your real one (I can't see any reason to run only the Node version as an application), so..
If you really want to keep printing the version, you can change the command to
["watch","-n1","node","-v"]
so it will print the Node version every second.
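In the deployment above, a minimal sketch of that change (only the container part is shown, probes omitted):

    spec:
      containers:
      - name: node-server
        image: node:17-alpine
        # busybox watch re-runs node -v every second, so a foreground process keeps running
        command: ["watch", "-n1", "node", "-v"]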
I would like to have a Docker container active only during certain times of the day so that a test automation suite can run. Is that possible?
Yes, it is possible. A CronJob is designed to run a Job periodically on a given schedule, written in cron format. A Job creates one or more Pods and will continue to retry execution of the Pods until a specified number of them successfully terminate.
To run your automation tests, you should create a CronJob definition, set the cron schedule, and call your CMD.
Here is a sample Hello World example:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
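To try it out, you could apply the manifest and watch the Jobs it creates (the file name here is just an example):

kubectl apply -f hello-cronjob.yaml
kubectl get cronjob hello
kubectl get jobs --watch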
You haven't given much information besides running and stopping a container. One of the simplest ways is to use the Docker CLI to run an instance of your container in Azure Container Instances (ACI). You create and use a context for Azure, and then docker run will create an ACI instance and run your container in Azure.
# log Docker in to your Azure account
docker login azure
# create an ACI context and switch to it
docker context create aci myacicontext
docker context use myacicontext
# run the container in Azure Container Instances
docker run -p 80:80 [yourImage]
# remove the instance when the tests are done
docker rm [instanceName]
Ref: https://www.docker.com/blog/running-a-container-in-aci-with-docker-desktop-edge/ and https://learn.microsoft.com/en-us/azure/container-instances/quickstart-docker-cli
Cross-posted from k8s discuss.
Cluster information:
Kubernetes version: v1.17.1
Cloud being used: bare-metal
Installation method: kubeadm
Host OS: Ubuntu 18.04 LTS Server
CNI and version: weave/flannel/kube-router/calico (latest releases of each)
CRI and version: docker/containerd (19.03.5/1.2.10)
Problem:
I am attempting to bring up a ROS 2 installation on Kubernetes, ideally using multiple containers in a single pod. Under the hood, ROS 2 relies upon DDS for communication, which is based upon UDP multicast.
When I bring up a simple pod deployment with two containers in a producer-consumer configuration, the consumer rarely (if ever) receives a message from the producer. When I bring up two pods, each with a single container, in the same producer-consumer configuration, the consumer always receives the messages.
Surprises
Every once in a while, the consumer will start up and receive messages as expected.
Furthermore, if one logs into the consumer with kubectl exec -it ros2-1 -c consumer /bin/bash and then runs /ros_entrypoint.sh ros2 run demo_nodes_cpp listener, messages are sometimes received from the producer in the single-pod scenario.
Expected Behavior
Successful messages appear in the logs of the consumer container as:
[INFO] [1579805884.017171859] [listener]: I heard: [Hello World: 1]
[INFO] [1579805885.017168023] [listener]: I heard: [Hello World: 2]
[INFO] [1579805886.017025092] [listener]: I heard: [Hello World: 3]
Actual Behavior
No such log messages are observed from the consumer.
Steps to Reproduce:
Failure within same pod
Bring up a kubernetes cluster
Apply the following pod definition: ros2-1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ros2-1
spec:
  containers:
  - name: producer
    image: osrf/ros2:nightly
    args: ["ros2", "run", "demo_nodes_cpp", "talker"]
  - name: consumer
    image: osrf/ros2:nightly
    args: ["ros2", "run", "demo_nodes_cpp", "listener"]
  restartPolicy: Never
Watch for messages from the consumer with kubectl logs --follow ros2-1 consumer.
Success in different pods
Bring up a kubernetes cluster
Apply the following pod definition: ros2-2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ros2-2-producer
spec:
  containers:
  - name: producer
    image: osrf/ros2:nightly
    args: ["ros2", "run", "demo_nodes_cpp", "talker"]
  restartPolicy: Never
---
apiVersion: v1
kind: Pod
metadata:
  name: ros2-2-consumer
spec:
  containers:
  - name: consumer
    image: osrf/ros2:nightly
    args: ["ros2", "run", "demo_nodes_cpp", "listener"]
  restartPolicy: Never
Watch for messages from the consumer with kubectl logs --follow ros2-2-consumer.
Questions:
What is causing the single-pod deployment to fail, but the multi-pod deployment to succeed?
I am unfamiliar with debugging networking issues within the Kubernetes environment, while fairly experienced on bare-metal. How should I go about investigating this issue under flannel, weave, or kube-router?
I have a cronjob that runs every 10 minutes, so every 10 minutes a new pod is created. After a day, I have a lot of completed pods (not jobs; just the one cronjob exists). Is there a way to automatically get rid of them?
That's a job for labels.
Use them on your CronJob and delete completed pods using a selector (-l flag).
For example:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cron
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: periodic-batch-job
            is-cron: "true"
        spec:
          containers:
          - name: cron
            image: your_image
            imagePullPolicy: IfNotPresent
          restartPolicy: OnFailure
Delete all cron-labeled pods with:
kubectl delete pod -l is-cron
Specifically for my situation, my pods were not fully terminating: I was running one container with the actual job and another with the Cloud SQL Proxy, and the proxy was preventing the pod from completing successfully.
The fix was to kill the proxy process after 30 seconds (my jobs typically take a couple of seconds). Then, once the job completes, successfulJobsHistoryLimit on the cronjob kicks in and keeps (by default) only the last 3 pods.
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.11
  command: ["sh", "-c"]
  args:
  - /cloud_sql_proxy -instances=myinstance=tcp:5432 -credential_file=/secrets/cloudsql/credentials.json & pid=$! && (sleep 30 && kill -9 $pid 2>/dev/null)
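For reference, the successfulJobsHistoryLimit mentioned above (and its failed counterpart) sit directly under the CronJob spec; a minimal sketch with example values:

spec:
  schedule: "*/10 * * * *"
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1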