How to designate a master pod with a NodeJS app

I'm trying to run a Deployment of my NodeJS application in EKS, with a ReplicaSet specifying that 3 pods of the application should run. However, I want to make some logic exclusive to one of the pods, calling it the "master" version of the application.
Is it possible to either a) have a distinct environment variable like IS_MASTER passed to just that pod, or b) otherwise tell from within the application that it's running on the "master" pod, without multiple Deployments?

You can have a sticky identity for each pod using StatefulSets:
https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
Quoting the docs:
Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of their Pods. These pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.
Given N replicas, the pods will have the hostnames foo-{0..N-1}, so you can do a simple check for the master: if the hostname is foo-0, it is the master.
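For example, a minimal Node.js sketch of that check (assuming the StatefulSet is named foo; adjust the name and the master-only logic to your application):

import { hostname } from "os";

// StatefulSet pods are named <statefulset-name>-<ordinal>: foo-0, foo-1, ...
// and the pod hostname matches the pod name.
const ordinal = Number(hostname().split("-").pop());
const isMaster = ordinal === 0;

if (isMaster) {
  // Master-only logic goes here, e.g. scheduled jobs or leader duties.
  console.log("Running master-only logic on foo-0");
} else {
  console.log(`Running as ordinary replica foo-${ordinal}`);
}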

Related

MongoDB change stream - Duplicate records / Multiple listeners

My question is an extension of the earlier discussion here:
Mongo Change Streams running multiple times (kind of): Node app running multiple instances
In my case, the application is deployed on Kubernetes pods. There will be at least 3 pods and a maximum of 5 pods. The solution mentioned in the above link suggests using <this instance's id> in the $mod operator. Since the application is deployed to K8s pods, pod names are dynamic. How can I achieve a similar solution for my scenario?
If you are running a stateless workload, I am not sure why you want to fix the names of pods (a Deployment).
Fixing pod names is only possible with StatefulSets.
You should be using a StatefulSet instead of a Deployment or ReplicationController (RC); note that ReplicationControllers have been superseded by ReplicaSets anyway.
StatefulSet Pods have a unique identity comprised of an ordinal. For any StatefulSet with N replicas, each Pod in the StatefulSet will be assigned an integer ordinal, from 0 up through N-1, which is unique across the Set.
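Putting the two together, here is a hedged sketch with the official mongodb Node.js driver. It assumes a StatefulSet named app with a fixed replica count, and that the changed documents carry a numeric field (here called seq) you can shard the stream on; those names are illustrative, not from the original question:

import { hostname } from "os";
import { MongoClient } from "mongodb";

const REPLICAS = 3; // must match the StatefulSet's replica count
const ordinal = Number(hostname().split("-").pop()); // app-0 -> 0, app-1 -> 1, ...

async function main() {
  const client = await MongoClient.connect("mongodb://mongo:27017");
  const events = client.db("mydb").collection("events");

  // Each pod only handles documents whose seq % REPLICAS equals its ordinal,
  // so every change is processed by exactly one pod.
  const stream = events.watch([
    { $match: { $expr: { $eq: [{ $mod: ["$fullDocument.seq", REPLICAS] }, ordinal] } } },
  ]);

  for await (const change of stream) {
    console.log("handling change:", change);
  }
}

main().catch(console.error);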

Azure Kubernetes - Auto-scaling & NodeSelector, Taints and Tolerations?

I have an AKS cluster with the below configuration
Windows Node Pools - 1
Nodes - 2
Node Labels - 2 : app1, app2
Pods - 4 : two pods for each app; the node is selected based on the nodeSelector
Pods use taints & tolerations
Node auto-scaling is enabled
Now, let's say a new node is created to support the additional load of app1. Would that new node be labelled automatically, and would the taint be applied, so that app1 can be deployed on that node?
When you create a nodepool, you can specify labels and taints (--node-taints) that will be applied automatically to every node in the pool, including nodes added later by the autoscaler. Once the nodepool is created, I don't think you can currently go back and add that auto-label or auto-tainting ability.
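For completeness, app1's pods would then target that pool with a nodeSelector and a matching toleration. A hypothetical spec; the app=app1 label and taint are placeholders standing in for whatever you passed at nodepool creation:

apiVersion: v1
kind: Pod
metadata:
  name: app1-pod
spec:
  nodeSelector:
    app: app1                # label applied to every node in the pool
  tolerations:
    - key: "app"
      operator: "Equal"
      value: "app1"
      effect: "NoSchedule"   # matches the taint set via --node-taints
  containers:
    - name: app1
      image: app1:latest     # placeholder image

Because new nodes in the pool inherit the pool's labels and taints, pods with this spec can schedule onto scaled-out nodes without any extra step.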

On-demand creation of a container in an existing kubernetes pod

Assume that I have an active pod that initially contains only one running container.
This container is a NodeJS application in TypeScript that shows a user interface when opened in a browser.
Can this container create another container on-demand/dynamically within the SAME POD?
How can we achieve this? Please advise.
Also, will reusing npm modules like https://www.npmjs.com/package/kubernetes-client help in creating such containers within the same pod?
Can this container create another container on-demand/dynamically within the SAME POD? How can we achieve this?
No. The containers within a Pod are declared in the PodTemplate and need to be declared upfront, before the Pod is created. More specifically, what use case do you have? What are you trying to achieve?
Also, will reusing npm modules like https://www.npmjs.com/package/kubernetes-client help in creating such containers within the same pod?
A Kubernetes client library is useful for interacting with the ApiServer, e.g. for deploying new applications or Pods. But the Kubernetes deployment unit is a Pod - that is the smallest unit you work with. To change a Pod, you create a new one and terminate the previous one.
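As an illustration of that client-library route, a hypothetical sketch with the official @kubernetes/client-node package (0.x positional-argument API; the pod name and image are placeholders):

import * as k8s from "@kubernetes/client-node";

async function spawnWorkerPod() {
  const kc = new k8s.KubeConfig();
  kc.loadFromDefault(); // picks up in-cluster config when run inside a pod
  const core = kc.makeApiClient(k8s.CoreV1Api);

  // Containers cannot be added to a running Pod, so create a NEW Pod instead.
  await core.createNamespacedPod("default", {
    metadata: { name: "worker-pod" },
    spec: {
      restartPolicy: "Never",
      containers: [{ name: "worker", image: "my-worker:latest" }],
    },
  });
}

spawnWorkerPod().catch(console.error);

Note that the calling pod needs a ServiceAccount with RBAC permission to create pods for this to work.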

Manage Docker containers at low scale

I have deployed 5 apps using Azure Container Instances. These are working fine; the issue I have is that currently all containers are running all the time, which gets expensive.
What I want to do is start/stop instances when required, using a master container or VM that runs all the time.
E.g.
this master service gets a request to spin up service number 3 for 2 hours, then shut it down, and all other containers stay off until they receive a similar request.
For my use case, each service will be used for less than 5 hours a day most of the time.
Now, I know Kubernetes is an engine made to manage containers, but all the examples I have found are for high-scale services, not for 5 services with only one container each; I'm also not sure whether Kubernetes allows all the containers to be off most of the time.
What I was thinking of is handling all of this through some API, but I'm not finding any Azure service that allows something similar; I have only found options to create new containers, not to spin them up and shut them down.
EDIT:
Also, these apps run processes that are too heavy to host on a serverless platform.
A solution is to define a Horizontal Pod Autoscaler for your deployment.
The Horizontal Pod Autoscaler automatically scales the number of pods in a replication controller, deployment or replica set based on observed CPU utilization (or, with custom metrics support, on some other application-provided metrics). Note that Horizontal Pod Autoscaling does not apply to objects that can’t be scaled, for example, DaemonSets.
The Horizontal Pod Autoscaler is implemented as a Kubernetes API resource and a controller. The resource determines the behavior of the controller. The controller periodically adjusts the number of replicas in a replication controller or deployment to match the observed average CPU utilization to the target specified by user.
The configuration file should look like this:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-images-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-deployment
  minReplicas: 2
  maxReplicas: 100
  targetCPUUtilizationPercentage: 75
scaleTargetRef should refer to your Deployment definition. You can set minReplicas as low as 1 (the standard HPA cannot scale to 0; that requires the alpha HPAScaleToZero feature gate), and set targetCPUUtilizationPercentage according to your preferences. Such an approach should help you save money, because the HPA terminates pods whenever CPU utilization drops below the target.
Kubernetes official documentation: kubernetes-hpa.
GKE autoscaler documentation: gke-autoscaler.
Useful blog about saving cash using GCP: kubernetes-google-cloud.
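If you do end up on Kubernetes but need the explicit on-demand start/stop you described rather than CPU-driven scaling, your "master" service can also drive replica counts directly through the Kubernetes API. A hypothetical sketch with @kubernetes/client-node (0.x positional-argument API; service-3 is a placeholder Deployment name):

import * as k8s from "@kubernetes/client-node";

const kc = new k8s.KubeConfig();
kc.loadFromDefault();
const apps = kc.makeApiClient(k8s.AppsV1Api);

// Scale a Deployment up or down on demand (0 replicas = "off").
async function setReplicas(name: string, replicas: number, namespace = "default") {
  const { body: dep } = await apps.readNamespacedDeployment(name, namespace);
  dep.spec!.replicas = replicas;
  await apps.replaceNamespacedDeployment(name, namespace, dep);
}

// Example: start service 3, keep it up for 2 hours, then shut it down.
async function runForTwoHours() {
  await setReplicas("service-3", 1);
  setTimeout(() => setReplicas("service-3", 0), 2 * 60 * 60 * 1000);
}

runForTwoHours().catch(console.error);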

Run different components of an app in the same deployment in GCP Kubernetes Engine?

I have an app with multiple components, say x, y, z. I want to run x and y with 3 pods and z with 1 pod. How can I do this in one deployment.yaml file in Kubernetes Engine on GCP?
I don't think a Deployment can help you in this situation, but a StatefulSet might, with some changes in your application as well.
A StatefulSet always creates pods following a fixed naming convention and sticks to it even if pods are recreated. Pods are named <statefulset-name>-INDEX, like mypod-0, mypod-1, etc.
So make use of it: allocate the first 3 pods (ordinals 0-2) to your first two components by disabling the third component on them, and on the last pod (ordinal 3) disable the first two components and run only the third. This logic has to live in your application; see the sketch after the configuration below.
You can expose the pod name as an environment variable using the below configuration:
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
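A minimal sketch of the application-side logic, assuming a StatefulSet named myapp with 4 replicas and the POD_NAME variable injected as above (the component entry points are placeholders for your real components):

// Placeholder component entry points; replace with your real components x, y, z.
const startComponentX = () => console.log("component x started");
const startComponentY = () => console.log("component y started");
const startComponentZ = () => console.log("component z started");

const podName = process.env.POD_NAME ?? "";
const ordinal = Number(podName.split("-").pop()); // myapp-3 -> 3

if (ordinal <= 2) {
  // Pods 0-2 run components x and y; z stays disabled here.
  startComponentX();
  startComponentY();
} else {
  // Pod 3 runs only component z.
  startComponentZ();
}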
