Assume that I have an active pod that initially contains only one active container.
This container runs a Node.js application written in TypeScript and serves a user interface when opened in a browser.
Can this container create another container on-demand/dynamically within the SAME POD ?
How can we achieve this? Please advise.
Also, will reusing npm modules like https://www.npmjs.com/package/kubernetes-client help in creating such containers within the same pod?
Can this container create another container on-demand/dynamically within the SAME POD ? How can we achieve this?
No. The containers within a Pod are declared in the Pod template and need to be declared upfront, before the Pod is created. More specifically, what use case do you have? What are you trying to achieve?
Also, will reusing npm modules like https://www.npmjs.com/package/kubernetes-client help in creating such containers within the same pod?
A Kubernetes client library is useful for interacting with the API server, e.g. for deploying new applications or Pods. But the Kubernetes deployment unit is a Pod - that is the smallest unit you work with. To change a Pod, you create a new one and terminate the previous one.
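For completeness, here is a minimal sketch of what such a client call could look like from the Node.js app, using the official @kubernetes/client-node package (the kubernetes-client package linked in the question is an older client built around the same idea). The namespace, names and image below are placeholders, the Pod's service account would need RBAC permission to create Pods, and the exact createNamespacedPod signature varies between client versions, so treat this as an illustration rather than a drop-in solution.

// Sketch: spawning a *separate* worker Pod from the running Node.js app.
// Placeholder names/images; method signatures differ between client versions.
import * as k8s from '@kubernetes/client-node';

async function spawnWorkerPod(): Promise<void> {
  const kc = new k8s.KubeConfig();
  kc.loadFromDefault(); // inside the cluster this picks up the Pod's service account

  const core = kc.makeApiClient(k8s.CoreV1Api);

  const pod: k8s.V1Pod = {
    metadata: { name: 'on-demand-worker', namespace: 'default' },
    spec: {
      restartPolicy: 'Never',
      containers: [
        { name: 'worker', image: 'busybox', command: ['sh', '-c', 'echo hello'] },
      ],
    },
  };

  // Creates a new Pod via the API server; it does not add a container to this Pod.
  await core.createNamespacedPod('default', pod);
}

spawnWorkerPod().catch(console.error);

Note again that this creates a separate Pod; it cannot inject a container into the Pod that is already running.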
I have followed this tutorial microsoft_website to pull images from an Azure container registry. My YAML successfully creates a pod job, which can pull the image, BUT only when it runs on the agentpool node in my cluster.
For example, adding nodeName: aks-agentpool-33515997-vmss000000 to the YAML works fine, but when specifying a different node name, e.g. nodeName: aks-cpu1-33515997-vmss000000, the pod fails. The error message I get with kubectl describe pods is Failed to pull image, followed by kubelet Error: ErrImagePull.
What am I missing?
Create secret:
kubectl create secret docker-registry <secret-name> \
--docker-server=<container-registry-name>.azurecr.io \
--docker-username=<service-principal-ID> \
--docker-password=<service-principal-password>
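Creating the secret alone is not enough; the pod or job spec also has to reference it. Since the threads here revolve around Node.js, a hedged sketch of the relevant fields is shown below as a typed object from @kubernetes/client-node - the same fields go into the YAML manifest. The secret, registry and image names are the placeholders from the command above, not real values.

// Sketch: the manifest fields that reference the pull secret.
// All angle-bracket values are placeholders.
import * as k8s from '@kubernetes/client-node';

const manifest: k8s.V1Pod = {
  metadata: { name: 'acr-pull-test' },
  spec: {
    containers: [
      { name: 'app', image: '<container-registry-name>.azurecr.io/<image>:<tag>' },
    ],
    // Without this reference (or the secret attached to the service account),
    // nodes that do not already cache the image will fail with ErrImagePull.
    imagePullSecrets: [{ name: '<secret-name>' }],
    restartPolicy: 'Never',
  },
};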
As #user1571823 said, the solution to the problem is deleting the old image from the ACR and creating/pushing a new one.
The problem was related to some sort of corruption in the image saved in the Azure Container Registry (ACR). The reason why one agent pool could pull the image was actually that the image already existed on the VM.
As #andov said, it is a good option to open an incident case with Azure support for AKS from your subscription, where AKS is deployed. The support team has full access to the AKS service backend and they can tell you exactly what was causing your problem.
Four things to check:
Is it a subscription issue? Are the nodes in different subscriptions?
Is it a rights issue? Does the service principal of the node have rights to pull the image?
Is it a network issue? Are the nodes on different subnets?
Is there something about the image size or configuration that means it cannot run on the other cluster?
Edit
New-AzAksNodePool has a parameter -DefaultProfile
It can be an AzContext, AzureRmContext, or AzureCredential.
If this is different between your node pools, it would explain the error.
I'm trying to run a deployment of my Node.js application in EKS with a ReplicaSet dictating that 3 pods of the application should be run. However, I'm trying to make some logic exclusive to one of the Pods, calling it the "master" version of the application.
Is it possible to either a) have a different environment variable like IS_MASTER passed to just that pod, or b) otherwise tell from within the application that it's running on the "master pod", without multiple deployments?
You can have a sticky identity for each pod by using StatefulSets:
https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
Quoting the docs:
Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of their Pods. These pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.
Pods will have the hostnames foo-{0..N-1} given N replicas, so you can do a simple check for the master: if the hostname is foo-0, then it is the master.
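As an illustration, here is a minimal sketch of that check inside the Node.js app, assuming the StatefulSet is named foo (inside a container the hostname defaults to the Pod name):

// Sketch, assuming a StatefulSet named "foo": pods are foo-0, foo-1, ...,
// so ordinal 0 can be treated as the "master".
import * as os from 'os';

export function isMasterPod(): boolean {
  return os.hostname().endsWith('-0'); // e.g. "foo-0" -> true, "foo-1" -> false
}

if (isMasterPod()) {
  // run the master-only logic here
}

Alternatively, the Pod name can be exposed explicitly as an environment variable through the downward API, which avoids relying on the hostname.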
I was looking at the Alibaba Cloud module in Terraform and found two nearly identical resources:
alicloud_cs_managed_kubernetes
https://www.terraform.io/docs/providers/alicloud/r/cs_managed_kubernetes.html
alicloud_cs_kubernetes
https://www.terraform.io/docs/providers/alicloud/r/cs_kubernetes.html
What is the difference between them? I cannot differentiate the two resources.
The biggest difference is:
Managed Kubernetes cluster (alicloud_cs_managed_kubernetes): you can't control the Kubernetes masters; they are managed for you.
Kubernetes cluster (alicloud_cs_kubernetes): you need to create the masters as well, e.g.:
master_instance_types = ["ecs.n4.small"]
Specifically speaking, the differences between alicloud_cs_managed_kubernetes and alicloud_cs_kubernetes in Terraform can be examined in detail with the help of the parameter reference provided in the official documentation.
But the major difference between Kubernetes and Managed Kubernetes is:
You don't need to manage the master node in Managed Kubernetes
When working with Azure Container Instances, is it ok to reuse an existing container group or should we be creating a new container group each time we deploy a container?
You don't have this choice when using the portal, the CLI or PowerShell, but when using the REST API, you can add a container to an existing container group. As long as the container name is unique, it will get provisioned in the existing container group and run. The question is: just because this works, is it meant to be used this way, or is the designed way to create a new container group for each container deployment and, once the container has finished running, delete the container group?
The question is: just because this works, is it meant to be used this way, or is the designed way to create a new container group for each container deployment and, once the container has finished running, delete the container group?
For your issue, I think you are right. The container group cannot be updated. To change an existing group, you need to delete and recreate it.
There are some other limitations with container groups. You can get more details about containers in Azure from this link, and also about container groups.
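For reference, the "add a container to an existing group" behaviour described in the question boils down to a PUT of the complete group definition (create-or-update). Below is a hedged sketch using the @azure/arm-containerinstance SDK; the subscription, resource group, group name, image and region are placeholders, and the exact method name (beginCreateOrUpdateAndWait vs. createOrUpdate) depends on the SDK version.

// Sketch only: create-or-update of a container group. All identifiers are
// placeholders; method names vary between SDK versions.
import { ContainerInstanceManagementClient } from '@azure/arm-containerinstance';
import { DefaultAzureCredential } from '@azure/identity';

async function createOrUpdateGroup(): Promise<void> {
  const client = new ContainerInstanceManagementClient(
    new DefaultAzureCredential(),
    '<subscription-id>',
  );

  await client.containerGroups.beginCreateOrUpdateAndWait('<resource-group>', '<group-name>', {
    location: '<region>',
    osType: 'Linux',
    restartPolicy: 'Never',
    containers: [
      {
        // unique container name, as noted in the question above
        name: `task-${Date.now()}`,
        image: '<container-registry>/<image>:<tag>',
        resources: { requests: { cpu: 1, memoryInGB: 1 } },
      },
    ],
  });
}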
If you're deploying a new image for an existing application/task, then using an existing container group is fine. However, if this is a new set of functionality, then you should probably create a new container group.
I'm creating a Kubernetes cluster, and in it I have several services. I know, based on https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md#discovering-services, that I have two options:
use the environment variables set by the kubelet.
use skydns
I want to try to use the environment variables first before I go adding another dependency into the mix. However, I'm unsure where the environment variables are for each service. I haven't found them when doing env or sudo env on the kubelet. Are they within a certain container and/or pod? If so, do I have to link the other pods to that one to get its environment variables for services?
I have several Node.js services in containers, so I'm wondering if talking to each service would require something like this to get the IP:
process.env.SERVICE_X_PUBLIC_IPV4, once I have the environment variable thing sorted out.
Not as important, but related, how does this all work across multiple nodes?
The environment variables for a given service are put in every container that is started after the service was created.
For example, if you create a pod foo and then later a service bar, the pod's containers won't have any environment variables for bar.
If you instead create service bar and then a pod foo, the pod's containers should have environment variables something like:
BAR_PORT=tcp://10.167.240.1:80
BAR_SERVICE_HOST=10.167.240.1
You can test this out by attaching a terminal to one of your containers, as explained here.
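As a small illustration of consuming these variables from one of the Node.js services mentioned in the question, here is a hedged sketch assuming a Service named bar (Kubernetes also sets BAR_SERVICE_PORT alongside the variables shown above); the request path is just a placeholder.

// Sketch: reading the service discovery variables from a Node.js container.
// The variables only exist in containers started after the Service was created.
import * as http from 'http';

const host = process.env.BAR_SERVICE_HOST;        // e.g. 10.167.240.1
const port = process.env.BAR_SERVICE_PORT ?? '80';

if (!host) {
  throw new Error('BAR_SERVICE_HOST is not set - was this pod created before the service?');
}

http.get(`http://${host}:${port}/`, (res) => {
  console.log(`bar answered with status ${res.statusCode}`);
});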