We are planning to deploy 15 different applications in Azure Kubernetes Service (AKS). These applications are owned by multiple business portfolios, e.g. marketing, finance, legal.
Currently we have provisioned an AKS cluster with the following configuration:
Subnet: x.x.x.x/21, which gives us ~2k IPs.
Network: Azure CNI
Network policy: Calico
Min & max nodes: Azure autoscaling
Min & max pods: autoscaled on CPU & memory utilization, up to the maximum allowed
Azure services (e.g. SQL): leverage service endpoints
On-premises: leverages private endpoints & private DNS
Region: West US
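For reference, the cluster was provisioned roughly like the sketch below; resource names, IDs, and counts are placeholders, and the flags mirror the configuration above.

```bash
# Placeholder names/IDs; flags mirror the configuration listed above.
az aks create \
  --resource-group myRG \
  --name myAksCluster \
  --location westus \
  --network-plugin azure \
  --network-policy calico \
  --vnet-subnet-id "/subscriptions/<sub-id>/resourceGroups/myRG/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/aks-subnet" \
  --enable-cluster-autoscaler \
  --min-count 3 \
  --max-count 10 \
  --max-pods 50
```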
Question:
Can we deploy all the applications on the same AKS cluster? If not, what is the industry standard & why?
The simple answer is yes. I wouldn't call it an industry standard, because AFAIK there isn't one.
The more complicated answer is: It depends. It depends entirely on the security requirements you need to adhere to for the applications you're deploying.
There are a number of ways you can manage this in a single cluster, assuming you want to restrict access (network and user) via RBAC and network policies within Kubernetes. It does make things more complicated, but arguably less cumbersome long term.
You could separate out the different apps through namespacing and node taints. There are a lot of different options, and the right mix will depend on the requirements you need to fulfill for your clients / company.
Even if you have region specific requirements, you can manage this in a single environment.
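For illustration, a minimal sketch of that kind of separation, assuming one namespace per portfolio, an AD group per team, and dedicated nodes per portfolio (all names here are hypothetical):

```bash
# All names below (namespaces, groups, nodes, labels) are hypothetical.

# One namespace per business portfolio.
kubectl create namespace marketing
kubectl create namespace finance

# RBAC: give the marketing team edit rights only in its own namespace.
kubectl create rolebinding marketing-edit \
  --clusterrole=edit \
  --group=marketing-devs \
  --namespace=marketing

# Taint nodes reserved for marketing; only pods that tolerate the
# taint will be scheduled there.
kubectl taint nodes aks-marketing-node-0 portfolio=marketing:NoSchedule

# Network policy (enforced by Calico): only allow ingress from pods
# in the same namespace, blocking cross-portfolio traffic.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: same-namespace-only
  namespace: marketing
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}
EOF
```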
I need an ASP.NET Core application in Azure to have redundancy: if one application fails, another should take over its tasks online. I didn't find anything that I can use as a guide. Thanks for your help.
Azure VM HA options:
Use an Availability Set: An availability set is a logical grouping of VMs that allows Azure to understand how your application is built in order to provide redundancy and availability. (SLA 99.95%)
Scale Sets: Azure virtual machine scale sets let you create and manage a group of load-balanced VMs. The number of VM instances can automatically increase or decrease in response to demand or a defined schedule. Scale sets provide high availability to your applications, and allow you to centrally manage, configure, and update many VMs.
Load Balancing
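A minimal Azure CLI sketch of the first two options (resource group, names, and counts are placeholders):

```bash
# Availability set: Azure spreads member VMs across fault/update domains.
az vm availability-set create \
  --resource-group myRG --name myAvSet \
  --platform-fault-domain-count 2 \
  --platform-update-domain-count 5

# Two or more VMs in the same availability set qualify for the 99.95% SLA.
az vm create --resource-group myRG --name web1 \
  --image UbuntuLTS --availability-set myAvSet
az vm create --resource-group myRG --name web2 \
  --image UbuntuLTS --availability-set myAvSet

# Scale set: a load-balanced group of identical VMs that can autoscale.
az vmss create --resource-group myRG --name myScaleSet \
  --image UbuntuLTS --instance-count 2 \
  --upgrade-policy-mode automatic
```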
Also follow this decision tree as a starting point to choose whatever fits your needs.
I want to use Service Fabric as a microservices orchestrator for a gaming backend. The game will be global, so what is the recommended solution for this architecture? Many clusters in separate regions for better performance? What about actor state in this solution? (I use Cosmos DB / SQL Server, and actors in my microservices.) I know that I can deploy a geo-HA Service Fabric cluster this way, but it is not officially supported and can be risky.
It's possible to deploy a multi-region cluster, but it's hard to deal with latency issues properly.
I recommend taking a look at the SF Mesh platform to run your services. It will support availability zones and multiple regions out of the box.
Deploy applications across Availability Zones and multiple regions for geo-reliability.
Note that at this time it's in public preview and it doesn't support the Reliable Services & Reliable Actors programming models.
More info.
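As of the preview, deploying to Mesh goes through an Azure CLI extension; a rough sketch, if I recall the preview tooling correctly (resource group, location, and template file are placeholders):

```bash
# The Mesh CLI ships as a preview extension.
az extension add --name mesh

az group create --name myMeshRG --location eastus

# Deploy an application described by a Mesh resource template.
az mesh deployment create \
  --resource-group myMeshRG \
  --template-file ./mesh_app.json
```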
In Azure Service Fabric, the default number of upgrade domains is 5. Is there a way to change it to a different number?
From https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-resource-manager-cluster-description#configuring-fault-and-upgrade-domains, there's ClusterManifest.xml, but it doesn't seem like something we should modify ourselves.
This is not possible in Azure today. SF picks up the FD and UD information from the VM Scale Sets that it runs on, and today these are capped/locked at 5x5. SF itself doesn't care how many UDs you have, and generally recommends more so that during an upgrade you're taking down less of your overall service in terms of capacity and also have more time to react to any issues.
There are some workarounds:
Run multiple Service Fabric application instances. Since each application instance is independently upgradable, you end up with (app instances) × (# of UDs) separate upgrade boundaries (see the sketch after this list).
Run the cluster across VM Scale Sets in multiple Azure AZs
Unfortunately this only works in regions where Azure has multiple AZs exposed.
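A sketch of the first workaround using sfctl (the type, version, and instance names are made up): each named instance of the same application type can be upgraded on its own schedule.

```bash
# Two independent, individually-upgradable instances of one app type.
sfctl application create --app-name fabric:/MyApp-A \
  --app-type MyAppType --app-version 1.0.0
sfctl application create --app-name fabric:/MyApp-B \
  --app-type MyAppType --app-version 1.0.0
# With 5 UDs and 2 instances, each upgrade step touches roughly
# 1/10th of overall capacity instead of 1/5th.
```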
I'm experimenting a little with ACS using the DC/OS orchestrator, and while spinning up a cluster within a single region seems simple enough, I'm not quite sure what the best practice would be for doing deployments across multiple regions.
Azure itself does not seem to support deploying to more than one region right now. With that assumption, I guess my only other option is to create multiple, identical clusters in all the regions where I wish to be available, and then use Azure Traffic Manager to route incoming traffic to the nearest available cluster.
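The Traffic Manager wiring I have in mind would look roughly like this (profile, DNS names, and endpoints are placeholders):

```bash
# Performance routing sends clients to the closest healthy endpoint.
az network traffic-manager profile create \
  --resource-group myRG --name acs-global \
  --routing-method Performance \
  --unique-dns-name my-acs-app

# One external endpoint per regional cluster's public entry point.
az network traffic-manager endpoint create \
  --resource-group myRG --profile-name acs-global \
  --name eastus --type externalEndpoints \
  --target myapp-eastus.eastus.cloudapp.azure.com \
  --endpoint-location "East US"

az network traffic-manager endpoint create \
  --resource-group myRG --profile-name acs-global \
  --name northeurope --type externalEndpoints \
  --target myapp-neu.northeurope.cloudapp.azure.com \
  --endpoint-location "North Europe"
```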
While this solution works, it also causes a few issues I'm not 100% sure how to work around.
Our deployment pipelines must make sure to deploy to all regions when releasing a new version of a service. If we have an East US and a North Europe region, during deployments our CI tool has to connect to the Marathon API in both regions to trigger the new deployments. If the deployment fails in one region and succeeds in the other, I suddenly have a disparity between the two regions.
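For context, that CI step currently looks roughly like this (hostnames and the app definition are placeholders; Marathon creates or updates an app via PUT /v2/apps/<id>):

```bash
#!/usr/bin/env bash
set -euo pipefail

# One Marathon endpoint per regional cluster (placeholders).
REGIONS=(
  "https://marathon.eastus.example.com"
  "https://marathon.northeurope.example.com"
)

for endpoint in "${REGIONS[@]}"; do
  # PUT the same app definition to each region's Marathon API.
  curl --fail -X PUT "${endpoint}/v2/apps/my-service" \
    -H "Content-Type: application/json" \
    -d @my-service.json
done
# If one region fails mid-loop the script aborts, leaving the regions
# out of sync -- which is exactly the disparity described above.
```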
If I have a service using local persistent volumes deployed, let's say PostgreSQL or Elasticsearch, it needs to have instances in both regions, since service discovery will only find services local to the region. That brings up the problem of replication between regions to keep all state in all regions; this seems to require a lot of manual configuration to get working.
Has anyone ever used a setup somewhat like this using Azure Container Service (or really Amazon Container Service, as I assume the same challenges can be found there) and have some pointers on how to approach this?
You have multiple options for spinning up across regions. I would use a custom installation together with Terraform for each of them. This is a great starting point: https://github.com/bernadinm/terraform-dcos
Distributing agents across different regions should be no problem, ensuring that your services will keep running despite failures.
Distributing masters (giving you control over the services during failures) is a little more difficult, as it involves distributing a ZooKeeper quorum across high-latency links, so you should be careful in choosing the "distance" between regions.
Have a look at the documentation for more details.
You are correct that ACS does not currently support multi-region deployments.
Your first issue is specific to Marathon in DC/OS, I'll ping some of the engineering folks over there to see if they have any input on best practice.
Your second point is something we (I'm the ACS PM) are looking at. There are some solutions you can use in certain scenarios (e.g. ArangoDB is in the DC/OS universe and will provide replication). The DC/OS team may have something to say here too. In ACS we are evaluating the best approaches to providing solutions for this use case but I'm afraid I can't give any indication of timeline.
An alternative solution is to have your database in a SaaS offering. This takes away all the complexity of managing redundancy and replication.
I'm a little bit confused about when to use an Azure Availability Set and when to use an Azure Affinity Group.
Let's look briefly at the key purposes of availability sets and affinity groups to begin with.
Availability Set: exists predominantly to provide high availability for your deployment. Azure does this via fault domains and upgrade domains.
A fault domain is basically a different hardware rack in the same datacenter; with an availability set, the solution is deployed across two different hardware racks.
Upgrade domains are the same as fault domains in function, but they support upgrades rather than failures. An upgrade domain is a logical unit of instance separation that determines which instances in a particular service will be upgraded at a given point in time.
Affinity Group: In order to explain it, we need to take a peek inside an Azure datacenter. Windows Azure datacenters are purpose-built; you might see rows and rows of containers (something like shipping containers) that hold clusters and racks. Each of those containers hosts specific services, for example compute and storage, SQL Azure, Service Bus, Access Control Service, and so on. Those containers are spread across the datacenter.
When you deploy a service using the Portal or PowerShell, the service talks directly to RDFE (Red Dog Front End). RDFE controls the DC and nodes; a cluster of nodes is controlled by a Fabric Controller. When you specify an affinity group, the Fabric Controller will place all the required elements of a deployment together. This has a number of advantages, such as reduced latency (since the required elements are close together) and simpler networking.
There are newer changes related to affinity groups and regional virtual networks; you can refer to them here: https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-migrate-to-regional-vnet/
To address your question:
You would use an availability set when you want a highly available system and an SLA for compute. Without an availability set there won't be an SLA for your VMs or PaaS instances; in other words, single instances of VMs (IaaS) and PaaS roles won't have an SLA and are prone to downtime during hardware failures and OS upgrades.
An availability set can be implemented after the deployment as well. Do note there is a cost associated with an availability set: since you are running additional instances, they will be charged.
An affinity group needs to be included at the time the services are created; it cannot be updated after creation, so it is very important to include the affinity group at creation time. There is no additional charge for using an affinity group.
Do share your feedback if the response addresses your question.