Logically separate Azure Kubernetes deployments

I have a Kubernetes cluster created and deployed with an app.
First I deployed with firstapp.yaml, which created a pod and a service to expose the pod externally.
I have two nodes in the cluster, and I then made another deployment with secondapp.yaml.
I noticed that the second deployment went to a different node, although this is the desired behaviour for logical separation.
Is this something that's provided by Kubernetes? How will it manage deployments made using different files? Will they always go to separate nodes (if there are enough nodes provisioned)?
If not, what is the practice to follow if I want logical separation between two nodes that I want to behave as two environments, let's say a dev and a QA environment?

No, they will not necessarily go to different nodes. The scheduler determines where to put each pod based on various criteria.
As for your last question - it makes no sense. You can use namespaces/network policies to separate environments; you shouldn't care which node(s) your pods are on. That's the whole point of having a cluster.
You can use placement constraints to achieve what you ask for, but it makes no sense at all.
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
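If you nevertheless want to pin a deployment to particular nodes, here is a minimal sketch using a nodeSelector. The environment=dev label and the image name are assumptions; you would add the label yourself with kubectl label nodes <node-name> environment=dev.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: firstapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: firstapp
  template:
    metadata:
      labels:
        app: firstapp
    spec:
      # Only schedule onto nodes carrying the (hypothetical) environment=dev label.
      nodeSelector:
        environment: dev
      containers:
      - name: firstapp
        image: firstapp:latest   # assumed image name
        ports:
        - containerPort: 80
```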

Agreed with #4c74356b41.
As an addition: it does not matter where your pods are. You can have multiple replicas of your application split between, say, 50 nodes; they can still communicate with each other (services, service discovery, network CNI) and share resources, etc.
And yes, this is the default behavior of Kubernetes, which you can influence with taints, tolerations, resources, limits, node affinity and anti-affinity (you can find a lot of information about each of those in the documentation or simply by googling them). Where the pods are scheduled also depends on node capacity. Your pod has been placed on a particular node because the scheduler calculated it had the best score, first taking into account the conditions mentioned above. You can find details about the process here.
Again, as #4c74356b41 mentions, if you want to split your cluster into multiple environments, let's say for different teams or, as you mention, for dev and QA environments, you can use namespaces for that. They basically make smaller clusters within your cluster (note that this is more of a logical separation and not a separation from a security perspective, until you add other components like, for example, roles).
You can just add a namespace field to your deployment YAML to specify into which namespace you want to deploy your pods - it still does not matter which nodes they end up on, depending on your use case.
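For illustration, a minimal sketch (the dev namespace name and the image are assumptions):

```yaml
# Create the namespace once (or: kubectl create namespace dev).
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
# Deploy into the dev namespace; which node the pod lands on is left to the scheduler.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: secondapp
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: secondapp
  template:
    metadata:
      labels:
        app: secondapp
    spec:
      containers:
      - name: secondapp
        image: secondapp:latest   # assumed image name
```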
Please note that what I wrote is oversimplified and I didn't mention many things in between, which you can easily find in most Kubernetes tutorials.

Related

Node.js service that manages k8s jobs - is it feasible?

I need to create a node.js service that is supposed to run a large number (potentially hundreds) of scheduled jobs simultaneously. This service should also expose a REST interface to allow end users to perform CRUD on these jobs.
At first I thought of going with agenda.js and, since we use k8s, launching a few instances so we could deal with this number of jobs.
However, I also had another idea and wanted to see if somebody has already done it - since we use k8s, I thought of harnessing the power of k8s Jobs and creating a service that communicates with the k8s API and manages the jobs.
Is it feasible? What things do I have to take into consideration if I'm going in this direction?
What you want is basically the definition of a Kubernetes operator, and yes, it is possible to do what you want.
In your case, you can use the Kubernetes client for Node.js.
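For illustration, a sketch of the kind of Job object such a service would create through the API (the names, image and limits are assumptions, not from the question):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: user-job-42               # hypothetical name derived from the CRUD API's job id
spec:
  backoffLimit: 2                 # retry a failed pod up to 2 times
  ttlSecondsAfterFinished: 3600   # clean up the finished Job after an hour
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: myorg/job-worker:latest   # assumed worker image
        args: ["--job-id", "42"]
```

The REST layer would then just translate its CRUD operations into create/list/delete calls on these Job objects.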

How do I determine the number of Node Types, Number of nodes and VM size in Service Fabric cluster for a relatively simple but high throughput API?

I have an ASP.NET Core 2.0 Web API that has relatively simple logic (a simple select on a SQL Azure DB, returning about 1000-2000 records; no joins, aggregates, functions, etc.). I have only one GET API, which is called from an Angular SPA. Both are deployed in Service Fabric as stateless services, hosted in Kestrel as self-hosting exes.
Considering the number of users and how often they refresh, I've determined there will be around 15,000 requests per minute, in other words 250 req/sec.
I'm trying to understand the different settings when creating my Service Fabric cluster.
I want to know:
How many Node Types? (I've determined as Front-End, and Back-End)
How many nodes per node type?
What is the VM size I need to select?
I have read the Azure documentation on cluster capacity planning. While I understand the concepts, I don't have a frame of reference to determine the actual values I need to provide for the above questions.
In most places where you read about planning a cluster, they will suggest that this subject is part science and part art, because there is no easy answer to the question. It's hard to answer because it depends a lot on the complexity of your application; without knowing the internals of how it works, we can only guess at a solution.
Based on your questions, the best guidance I can give you is: measure first, measure again, measure... plan later. Your application might be memory intensive, network intensive, CPU- or disk-bound, and so on; the only way to find the best configuration is to understand it.
To understand your application before you make any decision about SF structure, you could simply deploy a simple cluster with multiple node types containing one node of each VM size, measure your application's behavior on each of them, then add more nodes and spread multiple instances of your service across those nodes, and see which configuration is the best fit for each service.
1. How many Node Types?
I like to map node types 1:1 to roles in your application, but it is not a law; it depends on how much resource each service consumes. If a service consumes enough resources to keep a single VM (node) busy (memory, CPU, disk, IO), it is a good candidate for its own node type. In other cases there are lightweight services for which provisioning an entire VM (node) would be a waste of resources - for example scheduled jobs, backups, and so on. In that case you could provision a set of machines shared by these services. One important thing to keep in mind when you share a node type between multiple services is that they will compete for resources (memory, CPU, network, disk), so the performance measurements you took for each service in isolation might no longer hold and the services may require more resources; the only option is to test them together.
Another point is the number of replicas. Having a single instance of your service is not reliable, so you have to create replicas of it (the right number is described in the next answer). In this case you end up with the service load split across multiple nodes, leaving the node type underutilized; this is where you would consider placing multiple services on the same node type.
2. How many nodes per node type?
As stated before, it will depend on your service's resource consumption, but a very basic rule is a minimum of 3 per node type.
Why 3?
Because 3 is the lowest number where you can do a rolling update and still guarantee a quorum of 51% of nodes/services/instances running.
1 Node: If you have a service running 1 instance on a node type of 1 node, when you deploy a new version of the service you have to bring down this instance before the new one comes up, so you would not have any instance serving the load while upgrading.
2 Nodes: Similar to 1 node, but in this case only 1 node stays running during the upgrade. In case of failure you wouldn't have a failover to handle the load until the new instance comes up. It is worse if you are running a stateful service, because you would have only one copy of your data during the upgrade, and in case of failure you might lose data.
3 Nodes: During an update you still have 2 nodes available; when the one being updated comes back, the next one is taken down and you still have 2 nodes running. In case of failure of one node, the other nodes can support the load until a new node is deployed.
3 nodes does not mean your cluster will be highly reliable; it means the chances of failure and data loss will be lower, but you might be unlucky and lose 2 nodes at the same time. As suggested in the docs, in production it is better to always keep the number of nodes at 5 or more, and to plan for a quorum of 51% of nodes/services to be available. I would recommend 5, 7 or 9 nodes in cases where you really need higher uptime (99.9999...%).
3. What is the VM size I need to select?
As said before, only measurements will give this answer.
Observations:
These recommendations do not take into account planning for the primary node type. It is recommended to have at least 5 nodes in the primary node type, which is where the SF system services are placed. They are responsible for managing the cluster, so they must be highly reliable, otherwise you risk losing control of your cluster. If you plan to share these nodes with your application services, keep in mind that your services might impact them, so you should always monitor them to check for any impact.

Node.js scaling out on Kubernetes

I built an app on Node.js using Docker and I'm not sure how to scale it on a Kubernetes cluster so that I get the most out of my cluster hardware.
From a performance perspective which of the following is better:
clusterize my node app and run as many containers as needed
or
just run as many containers as needed without clustering ?
When I say clustering I mean this https://nodejs.org/api/cluster.html
My app is a simple CRUD Api backed by mongoDB. We estimate that it will have 1000 concurrent users. Our cluster has 3 nodes.
The Node.js cluster mechanism is useful for letting Node.js make effective use of more than a single core, so depending on your code it may benefit you, but that is highly dependent on your code and on your various dependencies and how well they work (or not) with clustering.
As a general practice, if you can break your containers down into nicely parallelized efforts that can be run as pods within kubernetes, then I'd recommend the following as a process to see what works for you:
Set up a single pod with your code in it, and run a load test against it. Use the data that Kubernetes has from cAdvisor to characterize how many resources (CPU & memory) your pod likes to have.
Set a resource limit for CPU and memory based on what you see above (see the sketch after this list).
Run a load test to validate what your single pod handles in terms of scale.
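A minimal sketch of that step, once you have measurements (the names and values here are assumptions, not measured figures):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: crud-api                     # hypothetical name for the Node.js CRUD API pod
spec:
  containers:
  - name: api
    image: myorg/crud-api:latest     # assumed image
    resources:
      requests:
        cpu: "500m"                  # what the scheduler reserves for this pod
        memory: "256Mi"
      limits:
        cpu: "2"                     # keep this > 1 core if you use the Node.js cluster module
        memory: "512Mi"
```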
And from there, you have a baseline where you can use Kubernetes to scale this horizontally to validate the 1000 user concurrent baseline you want to achieve.
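For the horizontal part, a sketch of an autoscaler over that baseline (the target utilization and replica counts are assumptions you would tune against your load tests; older clusters may need an autoscaling/v2beta2 apiVersion):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: crud-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: crud-api                   # assumed Deployment wrapping the pod above
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```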
There's a good talk on this process from the 2017 Kubecon called Load Testing Kubernetes: How to optimize your cluster resource allocation in production
Once you have a baseline, you can run a prototype that leverages the clustering in your code, and then compare it against the non-clustered version. If you do this, I'd double-check that any limits you set are > 1 core for CPU, or you'll be limiting the Node.js runtime's access to multiple cores from outside, which would defeat the purpose of using clustering.
Depending on what you're doing in your code, there may be significant re-work needed to enable clustering, as it wants to leverage its own worker concept, and it's not clear what frameworks you're using and if they'll fit reasonably into that structure.

Configure which node Service Fabric microservices run on

We have a microservices application with 5 stateless services:
eShopWeb
eShopAPI
eShopOrder
eShopBasket and eShopPayments
We created a Service Fabric cluster in Azure with 3 nodes. Now we want to configure it as follows:
eShopWeb and eShopOrder need to run on node 1
eShopAPI and eShopPayments needs to run on node 2
eShopOrder needs to run on node 3.
How do we achieve the above configuration, running multiple microservices on the same node?
You shouldn't care which node runs which service. By tying services to nodes you undermine the self-healing capabilities of SF (what if node 2 fails?). Also, you can't do rolling upgrades this way (except for eShopOrder).
I'd recommend avoiding placement constraints if you can. Unless you have multiple node types, or a large cluster.
Service affinity is for legacy services that are so chatty that they don't perform well when on separate nodes, because of latency in communication.
(And for production, you should use 5 nodes.)
You have two options here.
Placement constraints. You can get details in this question. But I would not recommend this, as the natural balancing ability will be useless here.
Service affinity. You can find more info here. It's when some services are placed on the same node as another service. But be aware: this rule can be broken; read the documentation carefully.
I would go with the second approach and see how it works at least in staging environment.

Deploy to Azure with roles split over different instances in different configurations?

We have a few different roles in Azure. Currently these are each deployed to separate instances so they can scale separately (and in production this is what we want), but for testing this seems wasteful and we would like to be able to deploy all the roles to a single instance to minimise costs.
Can we do this?
Roles are essentially definitions for what will run inside a set of Windows Azure VM instances. By definition, they have their own instances, so they cannot be targeted toward a single set of instances.
That said: there's nothing stopping you from combining code from different roles into one single role. You'd need to make sure your OnStart() and Run() take care of all needed tasks, as well as combining startup script items.
The upside (which you already surmised): cost savings, especially when running at low volume (where the entire app might be able to run in two instances, vs. several more near-idle instances split up by role).
One potential downside: Everything combined into a single role will now scale together. This may or may not be an issue for you.
Also, think about sizing. Let's say your website is perfectly happy in a Small, yet some background task you have requires XL (maybe it's a renderer needing 10GB RAM or something). And let's say you always run 2 instances of your website, for SLA purposes. Now, even at very low volume, your app consists of two XL instances instead of 2 Small (web) and one XL (background). Now, your near-idle system could cost more as one combined role than as separate roles. This might not apply to you - just giving an example where it might not make sense to combine...
Adding on to David's great explanation, adding things together and gluing them via the OnStart or Run overrides will work, but are you really testing things properly? Configuration values merged together, potential issues with memory usage, concurrency, etc. You would not be testing the same product as you deploy to production.
A better way would be to deploy extra-small instances to your QA environment. They cost a fraction of the price of, say, Medium or Large servers and provide a meaningful testing platform.
