Running Kubernetes across cloud providers - Azure

Our goal is to run Kubernetes in AWS and Azure with minimal customization (setting up a managed Kubernetes environment), support, and maintenance. We need portability of containers across cloud providers.
Our preferred cloud provider is AWS. We are planning on running containers in EKS, and we want to understand the customization effort required to run those containers in AKS.
Would you recommend choosing a container management platform like Pivotal Cloud Foundry or Red Hat OpenShift, or running them on AWS EKS or AKS, where less customization is needed to run containers across different cloud providers?

You need to define a common set of storage classes that map to similar volume types on each provider. If you are using some kind of provider-based Ingress controller, those can vary, so I would recommend using an internal one like nginx or Traefik. If you are using custom annotations for things like networking, those can vary too, but using those is pretty rare. Otherwise, k8s is k8s.
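For example, here is a hedged sketch of what a shared "standard" storage class might look like on each provider, assuming the AWS EBS and Azure Disk CSI drivers are installed (names and parameters are illustrative):

```yaml
# EKS: back the "standard" class with gp3 EBS volumes (assumes the EBS CSI driver).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
# AKS: back the same "standard" class with Premium SSD managed disks (assumes the Azure Disk CSI driver).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: disk.csi.azure.com
parameters:
  skuName: Premium_LRS
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```

Workloads then reference storageClassName: standard and stay identical on both clusters; only the class definition is provider-specific.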

Related

Orchestration of on-demand jobs on Azure cloud

I am facing the following problem: I need to execute on-demand, long-running workers on Azure VMs. These workers are wrapped in a Docker image.
So I looked at what Azure offers, and I seem to have the following two options:
Use a VM with docker-compose. This means I need to be able to programmatically start a VM, run the Docker image on it, and then shut down the VM (the specs we use are quite expensive and we can't let it run indefinitely). However, this means writing orchestration logic ourselves. Is there a service we could use to make life easier?
Set up a k8s cluster. However, I am not sure how pricing works here. Would I be able to add the type of VMs we use to the cluster and then use the k8s API to start containers on demand? How would I be charged in this case?
If the only thing you need is workers, you have a few more options. Which service suits best depends on your requirements. Based on what's in your question, I would think one of the following two might fit best:
Azure Container Instances
Azure Container Instances offers the fastest and simplest way to run a container in Azure, without having to manage any virtual machines and without having to adopt a higher-level service.
Azure Container Instances is a great solution for any scenario that can operate in isolated containers, including simple applications, task automation, and build jobs.
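For the on-demand worker scenario above, a minimal ACI sketch could look like the following (image name, region, API version, and sizes are placeholders), deployable with something like az container create --file worker.yaml:

```yaml
# Single-run container group: restartPolicy Never stops compute billing once the job exits.
apiVersion: '2021-10-01'          # placeholder API version
location: westeurope
name: ondemand-worker
type: Microsoft.ContainerInstance/containerGroups
properties:
  osType: Linux
  restartPolicy: Never
  containers:
  - name: worker
    properties:
      image: myregistry.azurecr.io/worker:latest   # placeholder image
      resources:
        requests:
          cpu: 4
          memoryInGB: 16
```

You pay per second while the container group runs, which maps well to "spend only what a job takes to finish".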
Azure Container Apps (preview)
Azure Container Apps enables you to run microservices and containerized applications on a serverless platform. Common uses of Azure Container Apps include:
Deploying API endpoints
Hosting background processing applications
Handling event-driven processing
Running microservices
According to Azure's Container services page, here are your options:
IF YOU WANT TO → USE THIS
Deploy and scale containers on managed Kubernetes → Azure Kubernetes Service (AKS)
Deploy and scale containers on managed Red Hat OpenShift → Azure Red Hat OpenShift
Build and deploy modern apps and microservices using serverless containers → Azure Container Apps
Execute event-driven, serverless code with an end-to-end development experience → Azure Functions
Run containerized web apps on Windows and Linux → Web App for Containers
Launch containers with hypervisor isolation → Azure Container Instances
Deploy and operate always-on, scalable, distributed apps → Azure Service Fabric
Build, store, secure, and replicate container images and artifacts → Azure Container Registry
EDIT:
Based on the comment
Let's say the only requirement is that I am able to use the resources on-demand, so I only end up spending the amount of money that would take for a certain job to finish execution. What would you use?
the answer would most probably be Container Apps, if the code you have available is not easily migrated to an Azure Function. The most important reason: they are serverless, which means they scale to zero and you only pay for actual consumption. Next to that, you have to write little to no orchestration logic, since Container Apps can scale based on event sources.
Enables event-driven application architectures by supporting scale based on traffic and pulling from event sources like queues, including scale to zero.
Another great resource is Comparing Container Apps with other Azure container options.
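To make the scale-to-zero point concrete, here is an abridged, illustrative fragment of a Container App template that scales on an Azure Storage queue (image, queue, and secret names are placeholders; the exact schema depends on how you deploy, e.g. az containerapp create --yaml):

```yaml
# Abridged Container App template: scale from 0 to 10 replicas based on queue depth.
template:
  containers:
  - name: worker
    image: myregistry.azurecr.io/worker:latest   # placeholder image
  scale:
    minReplicas: 0          # scale to zero when idle, so no compute cost between jobs
    maxReplicas: 10
    rules:
    - name: queue-rule
      azureQueue:
        queueName: jobs
        queueLength: 5      # target messages per replica
        auth:
        - secretRef: queue-connection     # placeholder secret holding the connection string
          triggerParameter: connection
```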

Sandboxing (gVisor or Kata Containers) for Azure Kubernetes Service to run untrusted code

I'm looking to build a solution that is very similar to what Azure DevOps or any CI/CD product has: it takes user-submitted executables, code, PowerShell/cmd commands, etc. and executes them to deploy applications. Essentially, I have the untrusted code execution problem.
It needs to be multi-tenant, with complete isolation from other tenants.
There could be 500+ tenants.
I see that Google Cloud has GKE Sandbox, which is a possible solution, but I was hoping for something in Azure instead.
Is it possible to use Kata Containers or gVisor with AKS so I can have kernel-level isolation between containers?
You could run the user containers in ACI, which has hypervisor-level isolation managed by Azure:
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-overview#hypervisor-level-security
And it integrates with AKS:
https://learn.microsoft.com/en-us/azure/aks/concepts-scale#burst-to-azure-container-instances
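If you go the virtual nodes route, a pod can be steered onto the ACI-backed virtual node with a node selector and tolerations along these lines (a sketch following the pattern in the AKS virtual nodes docs; labels and image are placeholders and may vary by cluster version):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-job
spec:
  containers:
  - name: job
    image: myregistry.azurecr.io/untrusted-job:latest   # placeholder image
  nodeSelector:
    kubernetes.io/role: agent
    beta.kubernetes.io/os: linux
    type: virtual-kubelet            # schedule onto the virtual node backed by ACI
  tolerations:
  - key: virtual-kubelet.io/provider
    operator: Exists
  - key: azure.com/aci
    effect: NoSchedule
```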

How to apply a Kubernetes cluster to existing Azure virtual machines

I have existing Azure virtual machines that run 30 Docker containers.
I have decided to use a Kubernetes service/cluster to manage the deployment of Docker containers on those existing Azure virtual machines.
I have also deployed an Azure container registry to store Docker images.
Is this possible? Please give me your opinion.
If you are familiar with Ansible, then the best way is probably Kubespray. It is capable of creating clusters of almost any complexity and also contains many features that other cluster management tools like kubeadm don't have.
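As a rough sketch, Kubespray is driven by an Ansible inventory describing your existing VMs; hostnames, IPs, and group names below are placeholders (group names also differ between Kubespray versions):

```yaml
# inventory/mycluster/hosts.yaml (illustrative)
all:
  hosts:
    vm1:
      ansible_host: 10.0.0.4
      ip: 10.0.0.4
    vm2:
      ansible_host: 10.0.0.5
      ip: 10.0.0.5
  children:
    kube_control_plane:
      hosts:
        vm1:
    kube_node:
      hosts:
        vm1:
        vm2:
    etcd:
      hosts:
        vm1:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
```

You would then run Kubespray's cluster.yml playbook against that inventory (roughly ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml) to turn the VMs into a cluster.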

Serverless architecture: how it works and based on what criteria

Can anyone tell me how serverless architecture works?
Some people are saying this is the next technology. Is this helpful for Linux administration?
Serverless is a technology that you can use to create infrastructure as code to work with your cloud provider. An example would be if your company uses Amazon Web Services and you need to create a Lambda function. You can do this via Serverless and include several infrastructure properties, such as a virtual private cloud, which IAM roles to use, creating an S3 bucket, having your Lambda listen to SNS topics, and deploying to multiple environments.
Currently our company uses Amazon Web Services in combination with the HashiCorp stack (Terraform, Vault, etc.), as well as Serverless, to create our IaC quickly.
As far as this being the next technology, I can say that maybe not Serverless specifically, but infrastructure as code is extremely powerful, reusable, fast-failing, and useful.
An example: your workplace has a production environment and a dev environment. You can deploy the same Serverless project to dev and production, and if you interpolate the values properly, you have a Serverless project that can be deployed to any of your environments.
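As an illustration of that interpolation, a minimal serverless.yml might look like this (service, handler, and resource names are placeholders):

```yaml
# serverless.yml - deploy with `serverless deploy --stage dev` or `--stage production`
service: example-worker

provider:
  name: aws
  runtime: python3.11
  stage: ${opt:stage, 'dev'}                  # stage comes from the CLI flag, defaults to dev
  environment:
    BUCKET_NAME: example-data-${self:provider.stage}   # per-stage names via interpolation

functions:
  worker:
    handler: handler.run                      # handler.py -> def run(event, context)
    events:
      - sns: example-topic-${self:provider.stage}       # Lambda subscribed to an SNS topic
```

The same project file deploys to dev or production; only the stage value changes.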
Is this technology helpful for a Linux admin? I cannot attest to this, as I have only used Serverless to interact with cloud providers. I believe that is what Serverless was created for.

Using Kubernetes for Docker container orchestration in PCF

I have a requirement to use Docker containers in PCF deployed on Azure.
We now want to use Kubernetes for container orchestration.
Can Kubernetes be used here?
Or will PCF take care of the container orchestration?
Which would be the better approach here?
PKS is Pivotal's answer to running Kubernetes in PCF (regardless of IaaS).
Pivotal Cloud Foundry (PCF) is Pivotal's sophisticated answer to current cloud expectations. PCF offers a strong platform to run Microsoft-based technology like .NET, and it smoothly supports enterprise Java applications. You can run Kubernetes there with fine results, but to achieve comfortable orchestration and management of containers I suggest reading about GKE or setting up your own Kubernetes cluster using the Kubespray utility.
