I am trying to create Windows nodes in an already existing Kubernetes cluster in Azure. The Kubernetes cluster currently has two Linux nodes running in it.
I am trying to use the az aks CLI to create Windows nodes, but I don't see any option for it.
So, can we create Linux and Windows nodes in the same Kubernetes cluster? If yes, how?
Yes, this is possible, but not using the CLI/portal (at this stage). You need to use acs-engine.
You need to use this definition (adjust it to your needs):
https://github.com/Azure/acs-engine/blob/master/examples/windows/kubernetes-hybrid.json
There is a bit of a learning curve, but it's not that hard.
https://github.com/Azure/acs-engine/blob/master/docs/kubernetes/deploy.md
https://github.com/Azure/acs-engine/blob/master/docs/clusterdefinition.md
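As an illustration of "adjust it to your needs", here is a rough sketch that loads the hybrid example definition, tweaks the Windows agent pool, and writes it back before you run acs-engine generate/deploy. The field names (agentPoolProfiles, osType, windowsProfile) follow the acs-engine API model, but double-check them against the cluster definition docs linked above; the counts, VM size, and credentials are placeholders.

```python
# Rough sketch, assuming the acs-engine API model layout used by the
# hybrid example above; verify field names against clusterdefinition.md.
import json

with open("kubernetes-hybrid.json") as f:
    model = json.load(f)

# Bump the Windows pool to the size you need (placeholder values).
for pool in model["properties"]["agentPoolProfiles"]:
    if pool.get("osType") == "Windows":
        pool["count"] = 2
        pool["vmSize"] = "Standard_D2_v2"

# Set your own Windows admin credentials (placeholders here).
model["properties"]["windowsProfile"]["adminUsername"] = "azureuser"
model["properties"]["windowsProfile"]["adminPassword"] = "<your-password>"

with open("kubernetes-hybrid.json", "w") as f:
    json.dump(model, f, indent=2)
```

After that, acs-engine generate (or deploy) turns the definition into ARM templates for your resource group, as the deploy doc above walks through.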
I have a 24x7 AKS cluster on Azure. As far as I know, Kubernetes cannot stop/pause a pod and then resume it in a standard way. In my case I have a small container in a pod, and that pod can be sidelined via --replicas=0.
How can I best kick off, on demand, a Linux script packaged in that pod/container (which may not be running) from an Azure Web App?
I thought using ssh should work, after first scaling the pod up to 1 replica. Is this correct?
I am curious whether there are simple HTTP calls in Azure to do this. I see CLI and PowerShell commands to start/stop an AKS cluster, but that is different, of course.
You can interact remotely with AKS by different methods. The key here is to use the control-plane API to manage your Kubernetes resources programmatically (https://kubernetes.io/docs/concepts/overview/kubernetes-api/).
In order to do that, you should use client libraries that enable that kind of access. Many examples can be found here for different programming languages:
https://github.com/kubernetes-client
SSH is not really recommended, since it is essentially god-mode access to the cluster and is not meant for this kind of purpose.
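For example, with the official Python client (one of the libraries in the kubernetes-client org above), the "scale up, then run the script" flow could look roughly like this. The deployment name, namespace, label selector, and script path are placeholders for your own resources, and a real implementation should wait until the pod is actually Running before the exec.

```python
# Minimal sketch with the official Python client (pip install kubernetes).
# "my-job", "app=my-job" and /scripts/run.sh are placeholders.
from kubernetes import client, config
from kubernetes.stream import stream

config.load_kube_config()  # or in-cluster config / credentials fetched from AKS

apps = client.AppsV1Api()
core = client.CoreV1Api()

# 1. Scale the sidelined Deployment from 0 back to 1 replica.
apps.patch_namespaced_deployment_scale(
    name="my-job", namespace="default", body={"spec": {"replicas": 1}})

# 2. Once the pod is Running (poll/wait for this in real code),
#    exec the packaged script inside it and capture the output.
pod = core.list_namespaced_pod("default", label_selector="app=my-job").items[0]
output = stream(
    core.connect_get_namespaced_pod_exec, pod.metadata.name, "default",
    command=["/bin/sh", "/scripts/run.sh"],
    stdout=True, stderr=True, stdin=False, tty=False)
print(output)

# 3. Optionally scale back down to 0 when the work is done.
apps.patch_namespaced_deployment_scale(
    name="my-job", namespace="default", body={"spec": {"replicas": 0}})
```

An Azure Web App can run this kind of code directly, or sit behind a small HTTP endpoint that triggers it, which gives you an on-demand call without exposing SSH to the cluster.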
I would like to tweak some settings in an AKS node pool with something like user data in AWS. Is it possible to do this in AKS?
How about using
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_machine_scale_set_extension
The underlying Virtual Machine Scale Set (VMSS) is an implementation detail, and one that you do not get to adjust beyond SKU and disk choice. Just as you cannot pick the image that goes on the VMSS, you also cannot use VM extensions on that scale set without falling out of support. Any direct manipulation of the VMSSs behind your node pools (from an Azure resource provider perspective) puts you out of support. The only supported way to perform host (node)-level actions is to deploy your custom script work as a DaemonSet on the cluster. This is fully supported and gives you the ability to run (almost) anything you need at the host level. Examples include installing/executing custom security agents, FIM solutions, and anti-virus.
From the support FAQ:
Any modification done directly to the agent nodes using any of the IaaS APIs renders the cluster unsupportable. Any modification done to the agent nodes must be done using kubernetes-native mechanisms such as Daemon Sets.
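As a rough sketch of that DaemonSet approach (using the Python client; the image, namespace, and the sysctl tweak it runs are just placeholders for whatever host-level change you need):

```python
# Sketch only: a privileged DaemonSet that nsenters into each node's host
# namespaces and applies a host-level tweak. Image, namespace and command
# are placeholders; adapt them to your actual "user data"-style task.
from kubernetes import client, config

config.load_kube_config()

daemonset = {
    "apiVersion": "apps/v1",
    "kind": "DaemonSet",
    "metadata": {"name": "node-setup", "namespace": "kube-system"},
    "spec": {
        "selector": {"matchLabels": {"app": "node-setup"}},
        "template": {
            "metadata": {"labels": {"app": "node-setup"}},
            "spec": {
                "hostPID": True,
                "containers": [{
                    "name": "node-setup",
                    "image": "alpine:3.18",
                    "securityContext": {"privileged": True},
                    # Enter the host's namespaces (PID 1), apply the tweak,
                    # then sleep so the pod stays up.
                    "command": [
                        "nsenter", "--target", "1", "--mount", "--uts",
                        "--ipc", "--net", "--pid", "--", "sh", "-c",
                        "sysctl -w vm.max_map_count=262144 && sleep infinity",
                    ],
                }],
            },
        },
    },
}

client.AppsV1Api().create_namespaced_daemon_set(
    namespace="kube-system", body=daemonset)
```

Because the DaemonSet is a Kubernetes-native object, nodes added later by scaling or upgrades pick up the same change automatically, which is exactly what the support FAQ quoted above expects.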
I started using the AKS service with a 3-node setup. As I was curious, I peeked at the provisioned VMs that are used as nodes. I noticed I can get root on these, and that there are some updates that need to be installed. As I couldn't find anything in the docs, my question is: who is in charge of managing the AKS nodes (VMs)?
Do I have to do this myself, or what is the idea here?
Thank you in advance.
Azure automatically applies security patches to the nodes in your cluster on a nightly schedule. However, you are responsible for ensuring that nodes are rebooted as required.
You have several options for performing node reboots:
Manually, through the Azure portal or the Azure CLI.
By upgrading your AKS cluster. Cluster upgrades automatically cordon and drain nodes, then bring them back up with the latest Ubuntu image. You can update the OS image on your nodes without changing Kubernetes versions by specifying the current cluster version in az aks upgrade.
Using Kured, an open-source reboot daemon for Kubernetes. Kured runs as a DaemonSet and monitors each node for the presence of a file indicating that a reboot is required. It then manages OS reboots across the cluster, following the same cordon and drain process described earlier (a rough sketch of that sequence follows below).
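For reference, the cordon and drain sequence those options rely on is conceptually simple. A rough sketch with the Python client follows; the node name is a placeholder, and in practice az aks upgrade or Kured does all of this for you.

```python
# Sketch of cordon + drain for a single node, for illustration only.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

node_name = "aks-nodepool1-12345678-0"  # placeholder node name

# Cordon: mark the node unschedulable so no new pods land on it.
core.patch_node(node_name, {"spec": {"unschedulable": True}})

# Drain: evict everything that is not a DaemonSet pod.
pods = core.list_pod_for_all_namespaces(
    field_selector="spec.nodeName=" + node_name).items
for pod in pods:
    owners = pod.metadata.owner_references or []
    if any(o.kind == "DaemonSet" for o in owners):
        continue
    # Older client versions call this V1beta1Eviction instead of V1Eviction.
    core.create_namespaced_pod_eviction(
        name=pod.metadata.name,
        namespace=pod.metadata.namespace,
        body=client.V1Eviction(metadata=client.V1ObjectMeta(
            name=pod.metadata.name, namespace=pod.metadata.namespace)))

# After the reboot, uncordon the node again.
core.patch_node(node_name, {"spec": {"unschedulable": False}})
```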
When using DC/OS on Azure and deploying a container, how can I guarantee, if I launch 2 instances, that they land on different physical machines (provided I have at least 2 agents)?
This is not Azure-specific; it applies to any DC/OS (and with it Marathon) setup: you use constraints for placement, in this case UNIQUE on hostname. See also the Marathon docs for details.
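For example, an app definition with that constraint could be posted to Marathon's /v2/apps REST endpoint roughly like this; the Marathon URL, image, and resource numbers are placeholders.

```python
# Sketch: two instances that Marathon must place on distinct agent hostnames.
import requests

app = {
    "id": "/my-service",
    "instances": 2,
    "cpus": 0.5,
    "mem": 256,
    "container": {
        "type": "DOCKER",
        "docker": {"image": "nginx:alpine", "network": "BRIDGE"},
    },
    # UNIQUE on hostname: never co-locate two instances on the same agent.
    "constraints": [["hostname", "UNIQUE"]],
}

resp = requests.post("http://marathon.mesos:8080/v2/apps", json=app)
resp.raise_for_status()
```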
Is there a declarative way of specifying what units should be running on a CoreOS cluster, and of applying that to an existing CoreOS cluster? It is undesirable to have to run manual fleetctl commands each time the unit setup changes.
It would be similar to how Ansible makes it possible to declaratively specify what packages should be installed on a server and then apply that to an existing server.
CoreOS machines can be customized by writing a cloud configuration file.
The cloud config is executed upon reboot, so you should expect to reboot the machines in your cluster when you make any changes. However, CoreOS is designed for this kind of ad-hoc rebooting so there shouldn't be any problem.
There are a couple of ways to associate cloud configuration data with a VM instance. You can have instances pull cloud configuration files from a read-only storage drive, or you can attach the cloud configuration file to a VM instance directly as metadata if the cloud provider supports this (EC2 and GCE support this style of metadata tagging).
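If you want to keep that declarative definition in code (similar to the Ansible workflow you describe), one rough sketch is to generate the cloud-config from a data structure and hand the result to your provisioning tooling as the instance's user data. The unit below is only an illustrative example, and the schema shown assumes the classic CoreOS coreos.units cloud-config format.

```python
# Sketch: build a CoreOS cloud-config (with a systemd/fleet unit) from data
# and write it out so it can be attached to instances as user data / metadata.
import yaml  # pip install pyyaml

cloud_config = {
    "coreos": {
        "units": [{
            "name": "myapp.service",
            "command": "start",
            "content": (
                "[Unit]\n"
                "Description=Example app\n"
                "\n"
                "[Service]\n"
                "ExecStart=/usr/bin/docker run --rm nginx\n"
            ),
        }]
    }
}

with open("cloud-config.yaml", "w") as f:
    f.write("#cloud-config\n")  # this first line is required verbatim
    yaml.safe_dump(cloud_config, f, default_flow_style=False)
```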