We are currently running an AKS (Azure Kubernetes Service) cluster. This cluster uses a virtual node to handle bursting load when requests/second reach a specific limit.
We are also exploring APM tools such as New Relic. New Relic provides integration with Kubernetes via a DaemonSet.
Query:
As per my understanding, a DaemonSet runs a pod on each node.
What happens with a virtual node and a DaemonSet?
Will the DaemonSet run 24/7 on the virtual node? If yes, how can we reduce cost and monitor the pods running on a virtual node?
DaemonSets are currently not supported on AKS virtual nodes (as per the documentation).
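For DaemonSets you run elsewhere in the cluster, you can also keep them off the virtual node explicitly. A minimal sketch, assuming the virtual node carries the `type=virtual-kubelet` label (the label AKS virtual nodes use at the time of writing; verify against your cluster, and note the agent name and image here are illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: newrelic-infra        # illustrative name
spec:
  selector:
    matchLabels:
      name: newrelic-infra
  template:
    metadata:
      labels:
        name: newrelic-infra
    spec:
      affinity:
        nodeAffinity:
          # Schedule only on regular nodes, never on the virtual node
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: type
                    operator: NotIn
                    values:
                      - virtual-kubelet
      containers:
        - name: newrelic-infra
          image: newrelic/infrastructure   # illustrative image
```

Pods that burst onto the virtual node would then need a different monitoring approach (e.g. application-level instrumentation) rather than a node agent.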
We found a Docker security issue that exhausts all entropy of /dev/random in the Linux kernel, causing a DoS attack in the Azure AKS environment.
Reproduction steps:
Follow the AKS tutorial to set up an AKS cluster. We use one virtual machine with 8 GB memory, a 120 GB SSD disk, the linux 5.4.0-1043-azure OS, Kubernetes v1.18.14, and Docker 19.3.14 to set up the Azure Kubernetes cluster. All these settings are done through the Azure Kubernetes UI.
Deploy the unprivileged malicious Docker container with UID 1000, all capabilities dropped, memory limited to 2 GB, pinned to a dedicated core, and privilege escalation disabled. We run the malicious container in a separate Kubernetes namespace.
In the malicious container, we start 100 processes that read random data from /dev/random. As a result, the entropy of /dev/random is exhausted; read requests from the victim container always block, and it cannot get any random data from /dev/random.
Is there any way to defend against this attack inside Azure AKS environment? Looking forward to your reply!
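As a general mitigation (not AKS-specific), applications can avoid the legacy blocking pool altogether: `getrandom(2)` and /dev/urandom do not block once the kernel CSPRNG has been seeded at boot, so a neighbour draining /dev/random cannot starve them, and on Linux kernels >= 5.6 /dev/random itself no longer blocks after initial seeding. A minimal sketch in Python:

```python
import os

# os.urandom() is backed by getrandom(2)//dev/urandom, not the legacy
# /dev/random blocking pool, so it returns immediately even while another
# container is draining the kernel's entropy estimate.
token = os.urandom(32)
print(len(token))  # 32 bytes of cryptographically secure random data
```

Upgrading node pools to a kernel at or above 5.6, where the two devices share the same non-blocking CSPRNG, removes the attack surface for victim workloads that genuinely must read /dev/random.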
I have two app services (Service A and Service B) developed in .NET Core 3.1 and hosted as two independent App Services in Azure. Service A is heavily dependent on Service B. Is there a way (an Azure offering) to make them communicate faster? Would hosting them in the same container improve inter-service communication performance? Any suggestions on Kubernetes?
If you are not using the Azure Kubernetes offering (AKS) yet, I would recommend spinning up a cluster (note that it supports Windows nodes in case you need them).
You should keep your services separated into two pods (https://learn.microsoft.com/en-us/azure/aks/concepts-clusters-workloads#pods)
and create a matching Kubernetes Service for each.
Now, if you would like your pods to run on the same node to increase communication speed, you should look at pod affinity, which allows both pods to run on the same node without having to tie them to a particular node (node affinity):
https://learn.microsoft.com/en-us/azure/aks/operator-best-practices-advanced-scheduler#inter-pod-affinity-and-anti-affinity
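To illustrate, here is a sketch of a Deployment for Service B that requires co-location with pods labelled `app: service-a` (all names and the image are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-b            # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service-b
  template:
    metadata:
      labels:
        app: service-b
    spec:
      affinity:
        podAffinity:
          # Hard requirement: schedule on the same node as a service-a pod
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - service-a
              topologyKey: kubernetes.io/hostname
      containers:
        - name: service-b
          image: myregistry.azurecr.io/service-b:latest   # hypothetical image
```

If strict co-location is not essential, `preferredDuringSchedulingIgnoredDuringExecution` is the softer variant that still lets the scheduler place the pod elsewhere when the preferred node is full.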
I have a VM scale set for my Azure Service Fabric application deployed in Azure. I need to run a RabbitMQ server on each virtual machine in my VM scale set when it starts (especially when I scale up my cluster and a new VM is created). In other words, I want the queue to start automatically. Is there any way to perform the following steps after a VM has been launched:
Check if RabbitMQ is already installed.
Download and install if not from specified URL.
If it has been installed just run it.
I guess this issue can be solved with a virtual machine scale set automation script, but I am not sure. Any ideas or suggestions?
You could do this using a VM Custom Script Extension. An extension runs on every new VM when a scale set is deployed or when it scales out.
Your extension could do the checks, install, and run RabbitMQ, and perhaps register it as a Windows service so RabbitMQ also runs if the VM is rebooted.
The following articles provide more details on deploying apps with scale sets:
Deploy your application on virtual machine scale sets
How are Applications deployed on VM Scale Sets?
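As a sketch, the extension can be attached to an existing scale set with the Azure CLI (the resource names and script URL are hypothetical; the script itself would contain your check/install/run logic and is re-run on every new instance the scale set creates):

```shell
# Attach a Custom Script Extension to the scale set; new instances run it on provisioning.
az vmss extension set \
  --resource-group myResourceGroup \
  --vmss-name mySfScaleSet \
  --name CustomScriptExtension \
  --publisher Microsoft.Compute \
  --settings '{
    "fileUris": ["https://example.com/install-rabbitmq.ps1"],
    "commandToExecute": "powershell -ExecutionPolicy Unrestricted -File install-rabbitmq.ps1"
  }'
```

Because the script runs on every instance, it should be written idempotently: check whether RabbitMQ is installed, install it only if missing, then start the service.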
I am setting up a multi-container application on a Mesos cluster on Azure using Azure Container Service, and I am currently stuck linking the containers.
My setup brief info:
- Mesos cluster is deployed on Azure using Azure container service
- It's a 3 container application - A, B and C
- B is dependent on A, and C is dependent on A and B
- A is deployed currently
How can I link the above containers?
Thanks,
Suraj
If by linking you mean Docker's --link, that's a deprecated practice; inter-container communication should be done using Docker networks and port mappings.
For DC/OS, you have several different ways to achieve this (also called service discovery). I have written a blog post explaining these different tools by example: http://blog.itaysk.com/2017/04/28/dcos-service-discovery-and-load-balancing-by-example
If you don't want to read through that long post and are looking for a recommendation: try using VIPs.
When creating the application (either from Marathon or the DC/OS UI), look for the 'VIP' setting. Enter an IP there (it can be a made-up IP) and a port. Your service will then be discoverable under this IP:Port.
More on VIPs: https://dcos.io/docs/1.9/networking/load-balancing-vips/virtual-ip-addresses/
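Under the hood, the UI setting just adds a `VIP_0` label to the port mapping in the Marathon app definition. A sketch (app id, image, and the made-up VIP address are all illustrative; note that the exact location of `portMappings` varies between Marathon versions):

```json
{
  "id": "/service-a",
  "cpus": 0.5,
  "mem": 256,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "myorg/service-a",
      "network": "BRIDGE",
      "portMappings": [
        {
          "containerPort": 8080,
          "labels": { "VIP_0": "10.0.0.100:80" }
        }
      ]
    }
  }
}
```

With this in place, containers B and C can reach A at 10.0.0.100:80 regardless of which agent node A is running on.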
Does anyone know why we are experiencing system load peaks on our Kubernetes master node? I thought the master node does not do anything except monitor our agent nodes.
Each time we have a system load peak of 1.8-2 on our dual-core machine, I see in the kube-controller-manager log that the master tries to start three things:
controllermanager.go:373] Attempting to start disruption controller
controllermanager.go:385] Attempting to start petset
controllermanager.go:460] Attempting to start certificates
Our Kubernetes version is 1.4.6, and the cluster was created via the Azure portal. We can see the system peaks via Datadog monitoring.