I'm looking for the best practice when it comes to Azure service plans in a microservice architecture. We have a series of microservices, each completely independent of the others in terms of capacity, resources, developers and overall architecture. It goes without saying that if one service experiences issues, the others should not be affected unless they interact with the problematic service. These services are hosted in Azure.
My question is around service plans and how those should relate to dev / staging environments. Up until now we would create a service plan per microservice; call it PersonService. So we would create the PersonService service plan, the default slot would be production (person-service), and we would have another staging slot (person-service-staging) to cater for staging / testing needs. All of those would be served under the same service plan.
A terrible thought came to me today: if a dev deploys some horrible bug to staging that eats up all the CPU and/or memory, then the production slot would be starved of those resources and the staging environment would effectively be affecting the response times of production.
Am I right to think this would be the case? How do you recommend setting this up to avoid the issue? Thanks
Yes, you are correct: if person-service-staging starts to consume a significant proportion of the underlying server's resources, it will affect person-service.
Avoiding it very much depends on your current setup and what your priorities are. Adding a dev / test / staging service plan is by far the easiest approach. This leaves your production service plan solely for production-ready code, with deployment slots there simply to allow easy switching between versions (and quick rollback if you realise something in production is broken).
The alternative is a service plan dedicated solely to staging that you stand up as part of your testing pipeline. The speed with which Azure can create a service plan means you can create and destroy them on the fly. This gives you the benefit of being able to performance-test against your staging deployment while it is running on a plan identical to your production one.
One of the major benefits of cloud computing is the ability to create disposable servers. It takes deliberate thought to shift out of the old philosophy of 'that's our staging server'. Even in a CI scenario - unless you're deploying code every 30 mins! - it can be much cleaner to throw some new servers up to test against. Even if you don't have an automated test pipeline, it is only a matter of a couple of Azure Automation scripts connected to a button on a webpage (though it is surprising how quickly those couple of scripts multiply into something much more elegant / complicated).
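As a rough illustration of that "throwaway staging plan" idea, here is a minimal sketch that drives the Azure CLI from Python. The resource group, plan and app names are invented for the example; the SKU, region and deployment step would need to match your own pipeline.

```python
import subprocess

def az(*args):
    """Run an Azure CLI command and fail loudly if it errors."""
    subprocess.run(["az", *args], check=True)

# Hypothetical names for a throwaway staging environment
group = "person-service-staging-rg"
plan = "person-service-staging-plan"
app = "person-service-staging"

# Stand up an isolated plan just for this test run (same SKU as production
# so any performance testing is meaningful)
az("group", "create", "--name", group, "--location", "westeurope")
az("appservice", "plan", "create", "--name", plan, "--resource-group", group, "--sku", "S1")
az("webapp", "create", "--name", app, "--resource-group", group, "--plan", plan)

# ... deploy the build under test and run the test suite against the app ...

# Tear the whole environment down again once the tests have finished
az("group", "delete", "--name", group, "--yes", "--no-wait")
```

Whether this runs from a CI stage, an Azure Automation runbook or a button on a webpage is just plumbing; the key point is that nothing from staging ever shares compute with production.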
I am investigating a robust way to scan my Azure AKS clusters and randomly change the number of pods, allocated resources, and throttling, and if possible limit connections to other resources (e.g. databases, queues, cache).
The idea is to have this running against any environment (test, QA, live)
Log what changes were made and when
Email that the script has run
Return environment to desired state
My questions are:
Is there tooling for this already?
Is this possible via cron / Azure Pipelines?
This is part of my stress-testing development cycle, which includes API integration and load testing, to help find weaknesses and feed back ways we can improve our offering and our team's reputation.
Google "Kubernetes chaos engineering".
Look at Azure Chaos Studio https://azure.microsoft.com/en-us/products/chaos-studio/#overview
Create a chaos experiment that uses a Chaos Mesh fault to kill AKS pods with the Azure portal https://learn.microsoft.com/en-us/azure/chaos-studio/chaos-studio-tutorial-aks-portal
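Chaos Studio / Chaos Mesh are the more robust route, but if you only want the loop from the question (randomly change the number of pods, log what changed, then restore the desired state), a first cut can be a small kubectl wrapper. A rough sketch, assuming a deployment name and namespace of your own; the email step is left out:

```python
import random
import subprocess
from datetime import datetime, timezone

def kubectl(*args) -> str:
    """Run kubectl and return its stdout."""
    return subprocess.run(["kubectl", *args], check=True,
                          capture_output=True, text=True).stdout.strip()

namespace = "person-service"     # hypothetical namespace
deployment = "person-service"    # hypothetical deployment name

# Record the desired replica count so the environment can be restored afterwards
desired = int(kubectl("get", "deployment", deployment, "-n", namespace,
                      "-o", "jsonpath={.spec.replicas}"))

# Inject the fault: scale to a random replica count and log what changed and when
chaos_replicas = random.randint(0, desired)
print(f"{datetime.now(timezone.utc).isoformat()} scaling {deployment} "
      f"from {desired} to {chaos_replicas} replicas")
kubectl("scale", "deployment", deployment, "-n", namespace,
        f"--replicas={chaos_replicas}")

# ... run load / integration tests while the deployment is degraded ...

# Return the environment to its desired state
kubectl("scale", "deployment", deployment, "-n", namespace,
        f"--replicas={desired}")
print(f"{datetime.now(timezone.utc).isoformat()} restored {deployment} to {desired} replicas")
```

A script like this can run on a schedule from an Azure Pipelines cron trigger, which covers the CRON question; Chaos Studio adds the guard rails (scoped targets, experiment history) that a raw script lacks.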
Can anyone tell me or explain what the limitations/drawbacks are of deploying multiple microservices (say 2-3) on a single Azure App Service server?
We use microservices to achieve the following:
Serve a single purpose or have a single responsibility
Have a clear interface for communication
Have fewer dependencies on each other
Can be deployed independently without affecting the rest of ecosystem
Can scale independently
Can fail independently
Allow your teams to work independently, without relying on other teams for support and services
Allow small and frequent changes
Have less technical debt
Have a faster recovery from failure
But how does Azure App Service behave when we deploy one of the microservices? Will it impact the other microservices? Can we use this in production environments?
I came across a few links about hosting multiple apps on a single App Service (by defining a virtual path on Windows, and by adding Azure storage on Linux), but is it good/best practice to do so?
No, it's not. They will compete for compute resources, and in case of hardware failure all of them will go down.
It sounds like you're referring to hosting multiple App Service apps in a shared App Service plan. This is conceptually (and physically) the same as running multiple apps on a server, and I would think about pros/cons along those lines.
You can host many apps on the same plan as long as the plan provides enough memory/CPU/network resources to cover the combined demands of those apps. For a few small apps, a modest plan size shouldn't have a problem handling all of them in production. The main benefit of combining them is cost savings, since the plan is the unit of charge, not the app.
Microsoft documents some reasons to isolate apps on separate plans:
The app is resource-intensive.
You want to scale the app independently from the other apps in the existing plan.
The app needs resources in a different geographical region.
From my experience, I'd add some considerations:
Deployment and restarting of apps can cause CPU spikes for the plan (which is a server). If your apps are performance-sensitive and you deploy often, you might want more separation.
Azure maintenance requires servers to restart at least once a month, sometimes more. If all your apps are on a shared plan, a patch reboot can mean the entire system is down and that all apps compete for resources when starting up simultaneously.
I generally use separate plans as environment boundaries, so a production plan separate from a test plan. "Test" apps go on the test plan and "prod" apps on the production plan, to prevent testing from impacting users.
Azure Functions may be a better fit for hosting many microservices.
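To make the plan/app relationship concrete, here is a hedged Azure CLI sketch (all names invented) of two small services sharing one production plan while test copies would live on a cheaper, separate plan. Note that scaling is done on the plan, so co-hosted apps always scale together.

```python
import subprocess

def az(*args):
    """Thin wrapper around the Azure CLI."""
    subprocess.run(["az", *args], check=True)

group = "microservices-rg"   # hypothetical resource group

az("group", "create", "--name", group, "--location", "westeurope")

# One plan per environment: production apps share prod-plan, test apps get test-plan
az("appservice", "plan", "create", "--name", "prod-plan", "--resource-group", group, "--sku", "S1")
az("appservice", "plan", "create", "--name", "test-plan", "--resource-group", group, "--sku", "B1")

# Two small services co-hosted on the production plan (billing is per plan, not per app)
az("webapp", "create", "--name", "person-service", "--resource-group", group, "--plan", "prod-plan")
az("webapp", "create", "--name", "order-service", "--resource-group", group, "--plan", "prod-plan")

# Scaling happens at the plan level, so co-hosted apps always scale out together;
# an app that must scale independently needs a plan of its own.
az("appservice", "plan", "update", "--name", "prod-plan", "--resource-group", group,
   "--number-of-workers", "3")
```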
I currently have a web application deployed to Azure on the App Service free plan and, as part of going live, I'm interested in moving to the use of slots.
This is primarily because it gives me the ability to deploy new code into staging and then seamlessly swap over once it's been validated.
Now, to use slots, I know I need the standard plan and this clocks in at a minimum of $X per VM.
What I don't know (and frustratingly haven't been able to find out from the Azure stuff on Microsoft's web pages) is whether a second slot counts as another VM.
In one place at least, it states that deployment slots are live web applications with their own hostname, but that could be read in at least two ways: either as a separate app on the same VM or as a separate VM altogether.
Since the difference is substantial ($2X/month rather than $X/month), it's rather important to planning. So does anyone know how (preferably with some supporting citation from Microsoft) the slots are handled and charged for?
All deployed Azure sites in a given Web App plan run on the same VM instances. Just as if you deployed mysite1.azurewebsites.net and mysite2.azurewebsites.net in the same plan, they'd share the same VM instances. So, too, do extra deployment slots.
If you scale to 3 instances, you pay for 3 instances, and all deployments (every slot of every app in the plan) run on all three instances.
One way to make this easier to think about: the 'production' (or main) deployment slot is just another slot.
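For example (the app and group names below are placeholders), adding and swapping a slot is just an extra site on the plan you already pay for; there is no per-slot VM charge on a Standard or higher plan:

```python
import subprocess

def az(*args):
    """Thin wrapper around the Azure CLI."""
    subprocess.run(["az", *args], check=True)

group, app = "person-service-rg", "person-service"   # hypothetical names

# The staging slot runs on the same plan instances as the production slot,
# so it adds no extra VM cost (slots require the Standard tier or higher).
az("webapp", "deployment", "slot", "create",
   "--name", app, "--resource-group", group, "--slot", "staging")

# ... deploy and validate the new build in the staging slot ...

# Swap staging into production for a near-zero-downtime release
az("webapp", "deployment", "slot", "swap",
   "--name", app, "--resource-group", group,
   "--slot", "staging", "--target-slot", "production")
```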
I have a few Azure websites running on Reserved instances with auto-scale turned on, so they can scale from 1 to 6 servers but usually run on just one. I am trying to set up hot-swapping so that there is no downtime when I deploy. I have created a deployment slot, but am I charged extra for the deployment slot, even if it is turned off?
This is the only reference I could find relating to this: Managing Multiple Windows Azure Web Site Environments...
To quote:
That’s the only site that would cost money, so the development and staging sites would have no impact on the cost I’ll incur for this setup.
I believe that the slots are part of the instance, and that the number of slots that are in use makes no difference in pricing - they are tied to the instance.
Anecdotally, I run a number of slots for QA, Staging and other environments, and the cost has not changed with respect to the number of slots used.
I am currently evaluating Windows Azure. One of the problems I found is that Azure starts charging as soon as the app is deployed, even if it is still in the testing stage.
I want to ask existing Azure users: how much of your testing is done locally and how much after deployment? Does Azure provide any means of testing web services locally?
Many thanks.
Yes, Azure provides an emulation framework that largely (but not completely) mimics the Azure deployment environment. This is usually sufficient for testing.
Costs of test deployments can be controlled somewhat, however:
It's possible to deploy "extra-small" instances that are significantly less expensive than larger instances, at the expense of throughput, which isn't usually an issue unless you're doing load testing.
You won't generally need to have multiple instances of a role deployed; just one will usually do, unless you have major concurrency issues under load.
Some of the cost of Azure is in data traffic, which will obviously be lower for test instances.
It's not necessary to have test instances permanently available. They can be torn down or re-deployed at will; if your environment becomes sophisticated, this can be done programmatically by a continuous integration engine (see the sketch at the end of this answer).
In practice we're finding that the cost of test instances is relatively insignificant compared to the cost of our developers and the alternative, which would be to provision and maintain our own data centre.
In particular, being able to spin up a test environment that directly mimics production in a few minutes is a very powerful feature.
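As a loose sketch of the "tear test instances down at will" point, in today's terms a nightly CI job can simply delete every resource group tagged as a test environment. The tag name and convention here are our own assumption, not anything Azure imposes:

```python
import json
import subprocess

# Find every resource group tagged as a test environment (the tag is our own convention)
result = subprocess.run(
    ["az", "group", "list", "--tag", "environment=test", "--output", "json"],
    check=True, capture_output=True, text=True)

for group in json.loads(result.stdout):
    name = group["name"]
    print(f"Deleting test resource group {name}")
    # Fire-and-forget delete; the CI job doesn't need to wait for completion
    subprocess.run(["az", "group", "delete", "--name", name, "--yes", "--no-wait"],
                   check=True)
```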
Windows Azure already provides an option to do testing locally.
The Microsoft Azure storage emulator provides a local environment that emulates the Azure Blob, Queue, and Table services for development purposes. Using the storage emulator, you can test your application against the storage services locally, without creating an Azure subscription or incurring any costs. When you're satisfied with how your application is working in the emulator, you can switch to using an Azure storage account in the cloud.
For complete details, please check the link below.
https://azure.microsoft.com/en-in/documentation/articles/storage-use-emulator/
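For instance, with the current Python SDK (azure-storage-blob), the only change between the emulator and the cloud is the connection string; the sketch below runs entirely against the local emulator, so it costs nothing. If your SDK version doesn't accept the UseDevelopmentStorage=true shorthand, substitute the full emulator connection string from the article above.

```python
from azure.storage.blob import BlobServiceClient

# Shorthand connection string that points the SDK at the local storage emulator;
# swap in a real storage account connection string when moving to the cloud.
conn_str = "UseDevelopmentStorage=true"

service = BlobServiceClient.from_connection_string(conn_str)

# Exercise the Blob service exactly as you would against Azure
container = service.create_container("test-container")
container.upload_blob("hello.txt", b"hello from the emulator")

print([blob.name for blob in container.list_blobs()])
```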