Declarative way of specifying units that should run on a CoreOS cluster?

Is there a declarative way of specifying what units should be running on a CoreOS cluster and to apply that to an existing CoreOS cluster? It is undesirable to have to run manual fleetctl commands each time the unit setup changes.
It would be similar to how Ansible makes it possible to declaratively specify what packages should be installed on a server and then apply that to an existing server.

CoreOS machines can be customized by writing a cloud configuration file.
The cloud config is executed upon reboot, so you should expect to reboot the machines in your cluster when you make any changes. However, CoreOS is designed for this kind of ad-hoc rebooting so there shouldn't be any problem.
There are a couple of ways to associate cloud configuration data with a VM instance. You can have instances pull cloud configuration files from a read-only storage drive, or you can attach the cloud configuration file to a VM instance directly as metadata if the cloud provider supports this (EC2 and GCE support this style of metadata tagging).
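A minimal sketch of what such a cloud-config might look like. The hello.service unit is only a placeholder, and the way you attach the file (EC2 user data, GCE metadata, config drive) depends on your provider:

    # Write a cloud-config that declares the units this machine should run.
    # hello.service is a placeholder; list your real units here.
    cat > cloud-config.yaml <<'EOF'
    #cloud-config
    coreos:
      units:
        - name: hello.service
          command: start
          content: |
            [Unit]
            Description=Example unit managed declaratively

            [Service]
            ExecStart=/usr/bin/echo "hello from cloud-config"
    EOF

    # Attach it as instance user data; for example, on EC2:
    # aws ec2 run-instances --image-id <coreos-ami> --user-data file://cloud-config.yaml ...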

Related

How to kick off Linux script in AKS from Web App (AZURE) on-demand

Given that I have a 24x7 AKS cluster on Azure, and that (as far as I know) Kubernetes cannot pause a pod and later resume it in a standard way: in my case there is a small container in a pod, and that pod can be sidelined via --replicas=0.
How can I then, on demand, best kick off a Linux script packaged in that pod/container, which may not be running, from an Azure Web App?
I thought using ssh should work, after first upscaling the pod to 1 replica. Is this correct?
I am curious whether there are simple HTTP calls in Azure to do this. I see CLI and PowerShell commands to start/stop an AKS cluster, but that is different, of course.
You can interact remotely with AKS by different methods. The key here is to use the control plane API to deploy your Kubernetes resources programmatically (https://kubernetes.io/docs/concepts/overview/kubernetes-api/).
In order to do that, you should use client libraries that enable that kind of access. Many examples can be found here for different programming languages:
https://github.com/kubernetes-client
ssh is not really recommended, since it is effectively god-mode access to the cluster and is not meant for this kind of task.
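As a rough sketch of the scale-up-then-run approach from the question, assuming a hypothetical deployment named myapp and a script at /scripts/run.sh inside the container; the same calls can be made programmatically from a Web App via one of the client libraries above:

    # Get kubectl credentials for the cluster (resource group and name are placeholders).
    az aks get-credentials --resource-group my-rg --name my-aks

    # Bring the sidelined deployment back to one replica and wait for it to be ready.
    kubectl scale deployment myapp --replicas=1
    kubectl rollout status deployment/myapp

    # Run the packaged script inside the (now running) container.
    kubectl exec deploy/myapp -- /bin/sh /scripts/run.sh

    # Optionally sideline it again afterwards.
    kubectl scale deployment myapp --replicas=0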

Azure Functions can be containerized. What is the use case for it?

Azure Functions can be containerized, but what are the actual use cases for it? Is it portability and the ease of running it in any Kubernetes environment, on-prem or otherwise? Or anything further?
As far as I know:
We can run Azure Functions in a serverless fashion, i.e., with the backend VMs and servers managed by the vendor (Azure). There are also two Azure container services, Container Instances and Kubernetes Service, where Azure Kubernetes Service handles large volumes of containers.
Much like running multiple virtual machines on a single physical host, you can run multiple containers on a single physical or virtual host.
With VMs, you have to manage the OS, disks, networking, patching and updating the VM, and updating the applications installed on it, whereas with containers you don't have to look after the OS; you can easily provision services such as databases or a Python runtime in the container and use them.
Example:
You have full control over a VM, but containers are not like that.
Say I am a web developer, data scientist, or data analyst who only wants to work with SQL Database. It can be installed on a virtual machine, and it is also available as a container.
The primary difference is that when you deploy it as a container, it comes as a simple package that lets you focus only on SQL Database; the other configuration and dependencies, such as the OS, come as part of that package and are taken care of by the container service.
But on a VM, the moment you install SQL Database, there are other dependencies you need to look after yourself.
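To make the SQL example concrete, here is a minimal sketch of running SQL Server as a container; the password is a placeholder and the image tag is only illustrative:

    # Pull and run SQL Server as a self-contained package; no OS setup to manage.
    docker run -d --name sql1 \
      -e "ACCEPT_EULA=Y" \
      -e "MSSQL_SA_PASSWORD=<YourStrong@Passw0rd>" \
      -p 1433:1433 \
      mcr.microsoft.com/mssql/server:2022-latest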

How to automate the VM creation/setup process

I am creating some VMs in Azure using the Azure CLI. These VMs require different setups. For example, one machine needs to be set up as a domain controller, and therefore its setup includes activities such as creating domain users, etc., while the activities for other VMs include things like joining the domain, setting up file shares, etc. Currently, any activity on the individual VMs is performed manually. However, I would like to automate the process, starting from creating the VMs and then performing the setup on each individual VM. What could be the best way of doing this? Can this type of setup on individual VMs be performed remotely?
You will want to look at the Azure Desired State Configuration (DSC) extension. DSC is a declarative platform used for configuration, deployment, and management of systems. It consists of three primary components:
Configurations are declarative PowerShell scripts which define and configure instances of resources. Upon running the configuration, DSC (and the resources being called by the configuration) will simply "make it so", ensuring that the system exists in the state laid out by the configuration. DSC configurations are also idempotent: the Local Configuration Manager (LCM) will continue to ensure that machines are configured in whatever state the configuration declares.
Resources are the "make it so" part of DSC. They contain the code that puts and keeps the target of a configuration in the specified state. Resources reside in PowerShell modules and can be written to model something as generic as a file or a Windows process, or as specific as an IIS server or a VM running in Azure.
The Local Configuration Manager (LCM) is the engine by which DSC facilitates the interaction between resources and configurations. The LCM regularly polls the system using the control flow implemented by resources to ensure that the state defined by a configuration is maintained. If the system is out of state, the LCM makes calls to the code in resources to "make it so" according to the configuration.
An example Azure ARM template that uses DSC to stand up a domain controller can be seen here:
https://github.com/Azure/azure-quickstart-templates/tree/master/active-directory-new-domain
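Outside of an ARM template, you can also attach the DSC extension to an existing VM from the Azure CLI. This is only a sketch: the resource names, the URL of the configuration archive, and the ConfigureDC configuration/script/function names are hypothetical, and the exact settings schema should be checked against the DSC extension docs linked below:

    # Attach the PowerShell DSC extension to a VM and point it at a published
    # configuration package (a .zip containing the .ps1 and its resources).
    az vm extension set \
      --resource-group my-rg \
      --vm-name dc01 \
      --name DSC \
      --publisher Microsoft.Powershell \
      --settings '{
        "configuration": {
          "url": "https://example.blob.core.windows.net/dsc/ConfigureDC.zip",
          "script": "ConfigureDC.ps1",
          "function": "ConfigureDC"
        }
      }'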
Further Reading
https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/dsc-overview
https://learn.microsoft.com/en-us/powershell/scripting/dsc/overview/overview?view=powershell-7.1

Can I have a custom script executed in an AKS node group?

I would like to tweak some settings in an AKS node group with something like user data in AWS. Is this possible in AKS?
How about using the virtual_machine_scale_set_extension resource:
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_machine_scale_set_extension
The underlying Virtual Machine Scale Set (VMSS) is an implementation detail, and one that you do not get to adjust beyond the SKU and disk choice. Just as you cannot pick the image that goes on the VMSS, you also cannot use VM extensions on that scale set without being out of support: any direct manipulation (from an Azure resource provider perspective) of the VMSSs behind your node pools puts you out of support. The only supported way to perform host (node)-level actions is to deploy your custom script work as a DaemonSet on the cluster. This is fully supported and gives you the ability to run (almost) anything you need at the host level; examples include installing or running custom security agents, FIM solutions, or anti-virus.
From the support FAQ:
Any modification done directly to the agent nodes using any of the IaaS APIs renders the cluster unsupportable. Any modification done to the agent nodes must be done using kubernetes-native mechanisms such as Daemon Sets.
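A minimal sketch of that DaemonSet approach, assuming the goal is a hypothetical sysctl tweak on every node; the names and the setting itself are placeholders to swap for your own script:

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: node-tweaks
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          app: node-tweaks
      template:
        metadata:
          labels:
            app: node-tweaks
        spec:
          containers:
            - name: tweak
              image: busybox:1.36
              # A privileged container can write non-namespaced sysctls that
              # take effect on the node's kernel; swap in your own script here.
              securityContext:
                privileged: true
              command:
                - /bin/sh
                - -c
                - sysctl -w vm.max_map_count=262144 && while true; do sleep 3600; done
    EOF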

Who manages the nodes in an AKS cluster?

I started using the AKS service with a 3-node setup. As I was curious, I peeked at the provisioned VMs which are used as nodes. I noticed I can get root on these and that there are some updates that need to be installed. As I couldn't find anything in the docs, my question is: who is in charge of managing the AKS nodes (VMs)?
Do I have to do this myself or what is the idea here?
Thank you in advance.
Azure automatically applies security patches to the nodes in your cluster on a nightly schedule. However, you are responsible for ensuring that nodes are rebooted as required.
You have several options for performing node reboots:
Manually, through the Azure portal or the Azure CLI.
By upgrading your AKS cluster. Cluster upgrades automatically cordon and drain nodes, then bring them back up with the latest Ubuntu image. You can update the OS image on your nodes without changing Kubernetes versions by specifying the current cluster version in az aks upgrade.
Using Kured, an open-source reboot daemon for Kubernetes. Kured runs as a DaemonSet and monitors each node for the presence of a file indicating that a reboot is required. It then manages OS reboots across the cluster, following the same cordon and drain process described earlier.
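As a sketch of the second option, re-running az aks upgrade at the cluster's current Kubernetes version redeploys the nodes onto the latest node image (the resource group and cluster name are placeholders):

    # Look up the cluster's current Kubernetes version...
    CURRENT=$(az aks show --resource-group my-rg --name my-aks \
      --query kubernetesVersion -o tsv)

    # ...and "upgrade" to that same version, which cordons, drains and
    # recreates the nodes with the latest OS image.
    az aks upgrade --resource-group my-rg --name my-aks \
      --kubernetes-version "$CURRENT"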
