Thread issues in the cloud - multithreading

Will threads create any issues in the cloud when scaling the application either horizontally or vertically?
The objective is to migrate monolith applications to cloud-suitable applications, which basically need to support scaling without any cloud issues. Primarily, we are focusing on Pivotal Cloud Foundry, but I would like to cover other cloud environments as well. Is there a common checklist for whether these thread patterns are supported in the cloud?

Any app instance in PCF is constrained to the resources it has been allocated at runtime. If your app instance runs out of resources (memory, disk), it simply crashes, and PCF will then spin up a new instance.
PCF manages the app at the level of an atomic instance. The simple answer: PCF autoscaling (horizontal or vertical) doesn't care how many threads your app instance creates or what else it does.
The threads for an app instance are completely up to that app instance.
Ben Hale provided a great answer on this thread: Limit total memory consumption of Java process (in Cloud Foundry)
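Since thread usage is entirely the app's responsibility, one common pattern is to bound it explicitly so an instance's memory footprint stays predictable under load. Below is a minimal Java sketch of a fixed-size pool; the class name and pool size are illustrative choices, not part of any PCF API:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedWorker {
    // Cap the pool size so the thread count (and thus stack memory) stays
    // predictable regardless of load; the instance degrades gracefully
    // instead of exhausting its container memory allocation and crashing.
    static final int MAX_THREADS = 8; // tune to the container's memory limit

    public static int runTasks(int taskCount) {
        ExecutorService pool = Executors.newFixedThreadPool(MAX_THREADS);
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < taskCount; i++) {
            // Excess tasks queue up instead of spawning new threads.
            pool.execute(done::incrementAndGet);
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return done.get();
    }
}
```

Tuning MAX_THREADS against the container's memory limit (each Java thread reserves stack space) is what keeps a scaled-out instance from crashing under load.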

Azure App Service and infrastructure maintenance

As I understand it, there is no concept of an update domain in App Service (or in other PaaS offerings). I am wondering how Azure handles OS updates if I have only a single instance of an App Service app. Do I need to plan for two or more instances to avoid my app going down during OS or other updates, or is this handled without downtime? According to the docs, App Service has a 99.95% SLA; is this downtime accounted for there?
First of all, welcome to the community.
Your application will not become unavailable while App Service is patching the OS, so you don't have to worry about that. Imagine if that were the case; it would be a huge problem. Instead, the PaaS service makes sure your application is replicated to an updated worker node before that happens.
But you should have multiple instances, as a best practice listed in this article:
To avoid a single point-of-failure, run your app with at least 2-3 instances.
Running more than one instance ensures that your application is available when App Service moves or upgrades the underlying VM instances
Have a look at this detailed blog post:
https://azure.github.io/AppService/2018/01/18/Demystifying-the-magic-behind-App-Service-OS-updates.html
When the update reaches a specific region, we update available instances without apps on them, then move the apps to the updated instances, then update the offloaded instances.
The SLA is the same regardless of the number of instances, even if you select "1 instance":
We guarantee that Apps running in a customer subscription will be available 99.95% of the time
Have a look at Hyper-V and VMware; they will give you a rough idea of how App Service handles this.
If you're looking for zero-downtime deployments with App Services, what you are looking for are deployment slots.
Managing versions can be confusing. Take a look at this issue I opened: it gives a detailed how-to approach for managing different slot versions, which is not clearly described in the Microsoft docs.

What kind of CPU power do I get with Azure App Service?

I'm interested in deploying a CPU-intensive web app to an Azure App Service instance. I can't find any details about CPU usage and/or limits for Azure App Service. My concern is that without insight into the CPU specs and limitations, I can't accurately plan the physical cloud-based infrastructure (using Azure App Service).
My app will use the OpenCV computer vision library to do heavy image processing, face detection, and face recognition with hundreds or thousands of high-quality images. This is naturally a CPU-intensive process. In a traditional setting (or an on-premises virtual machine setup), I would at least know the specs of the machine (i.e. CPU specs, etc.).
In summary, my question is two-fold:
1) Why doesn't Azure App Service say anything about the CPUs behind its PaaS (App Service) offering? If it does, where can I learn more about the CPU limitations?
2) In the context of my application, is my CPU question even relevant? I have read that certain Azure App Service tiers support autoscaling (meaning load balancing across more servers for better performance). Will this be sufficient for my need, where multiple end users are processing many photos for face detection and recognition?
Microsoft represents the performance of a VM in terms of ACU (https://learn.microsoft.com/en-us/azure/virtual-machines/windows/acu). There is a limited number of VM sizes available in an Azure App Service plan.
An App Service plan has both scale-up and scale-out options. Scaling out can be driven by different rules.
But always remember that the application architecture dictates how well it can use the scale-out option.
Note: I would suggest using a VM if the workload is GPU- or CPU-intensive, as you will get more options.
As far as I know, Standard App Service plans run on A-series VMs, but based on the scenario you explained I suggest going with a Premium App Service plan, which runs on Dv2 VMs. I hope the articles below help:
I suggest you check the App Service overview link; it says what kind of VM runs on the back end, so you can cross-check with the VM specs and find the CPU details there.
App Service Plan Overview
App Service Limitation
App Service Overview

Azure Dynamic App Service instance that starts up and shuts down automatically based on the current needs

I am new to Microsoft Azure / Google Cloud and I am currently comparing these two different cloud solution providers, before starting a new project. I am planning to write a web application using either Google Cloud App Engine or Azure App Service.
I want to start with a very basic service instance, which I want to call via HTTPS. To reduce charges, it would be nice to pay only for the service minutes actually used, i.e. for the instance to run only when needed.
Google Cloud offers dynamic instances, where compute instances are shut down when idle and started for incoming requests, which seems way cheaper for a seldom-used prototype and a first foray into cloud services.
Instances are resident or dynamic. A dynamic instance starts up and shuts down automatically based on the current needs. [...] When an application is not being used at all, App Engine turns off its associated dynamic instances, but readily reloads them as soon as they are needed.
Unfortunately, I found in the Azure documentation only an Overview of autoscale in Microsoft Azure Virtual Machines, Cloud Services, and Web Apps, which does not cover my question about automatically shutting down instances when idle. The Start/Stop VMs during off-hours solution in Azure Automation does not satisfy my information need either, because I am looking only for a compute instance, not a full VM.
Is there an equivalent in the Azure domain that allows app service instances to start up and shut down automatically, based on usage or incoming requests?
Depending on the functionality of the two cloud service providers, I will decide which one to use. Does anybody have experience with this matter in the Azure domain? Thank you.
You can't do that with Azure App Service alone as of now (24-Feb-2019). But you could combine an Azure Function to fire up an App Service instance and then forward all incoming traffic to an app hosted in that App Service via an Azure Functions proxy; see this description on learn.microsoft.com. I have been planning to try this for a while now too. In theory it should work... From experience, App Service instances fire up quickly, so the warm-up time should be acceptable. Even better, you could keep a Free or Shared App Service plan instance with your app running and forward the Azure Function calls to it by default. Under increasing load, move the app to a pre-configured plan which supports autoscaling.
Of course you could try to implement the entire app via a set of Azure functions which are fully "dynamic" using your terminology. Depending on the architecture of your application, this might actually be the best choice.
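The warm-up wait in the proxy idea above can be sketched as a generic retry gate. This is a conceptual Java illustration under assumed names (WarmUpGate, a caller-supplied health probe), not the Azure Functions API:

```java
import java.util.function.BooleanSupplier;

public class WarmUpGate {
    // Polls a readiness probe (in the proxy scenario, an HTTP ping against
    // the App Service app) until it reports ready, or gives up after
    // maxAttempts tries spaced delayMillis apart.
    public static boolean awaitReady(BooleanSupplier probe,
                                     int maxAttempts,
                                     long delayMillis) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            if (probe.getAsBoolean()) {
                return true; // backend is warm, safe to forward traffic
            }
            try {
                Thread.sleep(delayMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false; // backend never came up within the budget
    }
}
```

Once awaitReady returns true, the function would forward the original request to the now-warm App Service instance.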
The Autoscale feature of Azure lets you scale out/in based on configurable criteria; take a look here. You are limited by your pricing tier. Maybe this example will help you get an insight.
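The threshold rules behind such autoscale settings can be illustrated with a small policy function. This is a conceptual Java sketch with made-up names and thresholds, not the Azure Autoscale API:

```java
import java.util.List;

public class AutoscalePolicy {
    // Average a window of CPU samples (percent) and decide whether to add
    // or remove an instance, clamped to the plan's min/max instance counts.
    public static int decide(List<Double> cpuSamples,
                             int currentInstances,
                             int minInstances,
                             int maxInstances) {
        double avg = cpuSamples.stream()
                .mapToDouble(Double::doubleValue)
                .average()
                .orElse(0.0);
        if (avg > 70.0 && currentInstances < maxInstances) {
            return currentInstances + 1; // scale out
        }
        if (avg < 25.0 && currentInstances > minInstances) {
            return currentInstances - 1; // scale in
        }
        return currentInstances; // within the comfort band: stay put
    }
}
```

Real Azure autoscale rules also add cool-down periods so the instance count doesn't flap between scale-out and scale-in decisions.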

Dev/Test/Prod in the Same Application Service Environment?

I have an Azure Application Service Environment.
Is it okay to have multiple App Service Plans (Dev,Test, and Production) all running in the same ASE?
Basically, I know they'll share the Front End Pool, which I'm assuming is fine because no app code runs there and it "...contains compute resources responsible for SSL termination as well as automatic load balancing of app requests within an App Service Environment."
I guess my confusion is around the Worker Pools and Instances.
If I have Dev, Test, and Prod, can I host each one in a different worker pool? Or would I even need to; could I just host them all in the same worker pool, since they'd be using different instances and so be separated? Would I need 2 worker pool instances per App Service plan to make sure I have redundancy? (I'm confused why the website says you only need one additional instance for 1-20 instances.)
Basically, is this okay? And if so, what would the worker pool setup look like?
Would I have 6 instances in 1 pool with auto-scale turned on?
2 Instances in each of the separate 3 pools with auto-scale turned on?
Or would I need 3 separate Application Service Environments?
I've spent the last 2 hours reading Microsoft articles but none speak clearly about this or have a real-world example setup.
The documentation clarity regarding ASEs is still very much a problem. This is troublesome as ASEs are sold as a premium integrated hosting product in Azure.
I have had a discussion with an engineer at Microsoft about this very question and I still cannot say I understand the resource enough to be happy. Here is what I understand:
An ASE comes with 3 worker pools (of VMs) by default
All worker pools run the applications defined in worker pool 1 by default
You can define a different set of App Service Plans and applications for worker pool 3, for example, and have production on pools 1 & 2 and dev on pool 3
The number of instances in a worker pool affects all the applications configured for that pool
The way I see it, an ASE is designed for a very specific business scenario that I have yet to see. It is quite powerful, but you will most likely never use it to its potential. I guess that's why Microsoft engineers recommend using simple App Services in most scenarios.

Worker Role vs Web Job

From what I understand both run small repeatable tasks in the cloud.
What reasons and in what situations might I want to choose one over the other?
Some basic information:
WebJobs are good for lightweight work items that don't need any customization of the environment they run in and don't consume many resources. They are also really good for tasks that only need to run periodically, on a schedule, or when triggered. They are cheap and easy to set up and run. They run in the context of your website, which means you get the same environment your website runs in, and any resources they use are resources your website can't use.
Worker Roles are good for more resource-intensive workloads, or if you need to modify the environment where they run (i.e. a particular .NET Framework version or something installed into the OS). Worker Roles are more expensive and slightly more difficult to set up and run, but they offer significantly more power.
In general I would start with WebJobs and then move to Worker Roles if you find that your workload requires more than WebJobs can offer.
If we measure "power" as computational power, then in a virtual environment this translates to how many layers sit on top of the physical machine (the metal). User code on a virtual machine runs on top of a hypervisor, which manages the physical machine. This is the thickest layer. Whenever possible the hypervisor tries to serve simply as a pass-through to the metal.
There is fundamentally little overhead for WebJobs. They are sandboxed, the OS is maintained for you, and there are services and modules to make sure they run. But the application code is essentially as close to the metal as in a Worker Role, since both use the same hypervisor.
If what you want to measure is "flexibility", then use Worker Roles: since they are not managed or sandboxed, they are more flexible. You are able to use more sockets, define your own environment, install more packages, etc.
If what you want is "features", then WebJobs have a full array of them, including virtual networking to on-premises resources, staging environments, remote debugging, triggering, scheduling, easy connections to storage and Service Bus, etc.
Most people want to focus on solving their problem, and not invest time in infrastructure. For that, you use WebJobs. If you do find that you need more flexibility, or the security sandbox is preventing you from doing something that can't be accomplished any other way, then move to Worker Roles.
It is even possible to build hybrid solutions where some parts are done in WebJobs and others are done in Worker Roles, but that's out of the scope of this question. (hint: WebJobs SDK)
Some things to remember when choosing between a WebJob and a Worker Role:
A Worker Role is self-hosted on a dedicated VM; a WebJob is hosted in a Web App container.
A Worker Role scales independently; a WebJob scales along with the Web App container.
WebJobs are perfect for polling RSS feeds, checking for and processing messages, and sending notifications; they are lightweight and cheaper than Worker Roles, but less powerful.
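The lightweight polling loop described above has roughly this shape. Below is a Java sketch with hypothetical names (real WebJobs are typically .NET console apps, but the pattern is the same):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class FeedPoller {
    // Runs `check` at a fixed interval until it has executed at least
    // `times` times, then shuts the scheduler down -- the shape of a
    // typical lightweight polling job (poll a feed or queue, process,
    // repeat). Returns how many polls actually ran.
    public static int pollRepeatedly(Runnable check,
                                     long periodMillis,
                                     int times) {
        ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        CountDownLatch latch = new CountDownLatch(times);
        AtomicInteger runs = new AtomicInteger();
        scheduler.scheduleAtFixedRate(() -> {
            check.run();           // e.g. fetch the feed, drain the queue
            runs.incrementAndGet();
            latch.countDown();
        }, 0, periodMillis, TimeUnit.MILLISECONDS);
        try {
            latch.await(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            scheduler.shutdownNow();
        }
        return runs.get();
    }
}
```

A real job would loop indefinitely rather than stop after a fixed count; the bound here just makes the sketch self-terminating.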
