Worker Role vs Web Job - Azure

From what I understand both run small repeatable tasks in the cloud.
What reasons and in what situations might I want to choose one over the other?

Some basic information:
WebJobs are good for lightweight work items that don't need any customization of the environment they run in and don't consume many resources. They are also really good for tasks that only need to be run periodically, scheduled, or triggered. They are cheap and easy to set up and run. They run in the context of your Website, which means you get the same environment that your Website runs in, and any resources they use are resources that your Website can't use.
Worker Roles are good for more resource-intensive workloads, or if you need to modify the environment where they are running (e.g., a particular .NET Framework version or something installed into the OS). Worker Roles are more expensive and slightly more difficult to set up and run, but they offer significantly more power.
In general I would start with WebJobs and then move to Worker Roles if you find that your workload requires more than WebJobs can offer.
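To give a feel for how lightweight this can be, here is a minimal sketch of a script that could run as a triggered or scheduled WebJob (WebJobs can execute plain scripts as well as .NET executables). The feed URL and the FEED_URL app setting are hypothetical; only the environment-variable and exit-code behaviour reflect how WebJobs surface settings and report success.

    # check_feed.py - a minimal sketch of a script run as a scheduled/triggered WebJob.
    # FEED_URL is a hypothetical App Setting on the parent Web App; App Settings are
    # surfaced to WebJobs as ordinary environment variables.
    import os
    import sys
    import urllib.request
    from datetime import datetime, timezone

    def main() -> int:
        url = os.environ.get("FEED_URL", "https://example.com/feed.xml")
        try:
            with urllib.request.urlopen(url, timeout=30) as resp:
                body = resp.read()
            # stdout ends up in the WebJob run logs.
            print(f"{datetime.now(timezone.utc).isoformat()} fetched {len(body)} bytes from {url}")
            return 0
        except Exception as exc:
            # A non-zero exit code marks the run as failed in the WebJobs dashboard.
            print(f"fetch failed: {exc}", file=sys.stderr)
            return 1

    if __name__ == "__main__":
        sys.exit(main())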

If we are to measure "power" as computational power, then in a virtual environment, this translates to how many layers are on top of the physical machine (the metal). The user code on a virtual machine runs on top of a hypervisor, which manages the physical machine. This is the thickest layer. Whenever possible the hypervisor tries to simply serve as a pass-through to the metal.
There is fundamentally little overhead for WebJobs. A WebJob is sandboxed, the OS is maintained for you, and there are services and modules to make sure it runs. But the application code is essentially as close to the metal as it is in Worker Roles, since they use the same hypervisor.
If what you want to measure is "flexibility", then Worker Roles win: since they are not managed or sandboxed, they are more flexible. You are able to use more sockets, define your own environment, install more packages, and so on.
If what you want is "features", then WebJobs has a full array of them, including virtual networking to on-premises resources, staging environments, remote debugging, triggering, scheduling, easy connections to Storage and Service Bus, and more.
Most people want to focus on solving their problem, and not invest time in infrastructure. For that, you use WebJobs. If you do find that you need more flexibility, or the security sandbox is preventing you from doing something that can't be accomplished any other way, then move to Worker Roles.
It is even possible to build hybrid solutions where some parts are done in WebJobs and others are done in Worker Roles, but that's outside the scope of this question. (hint: WebJobs SDK)
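As a rough, hypothetical sketch of such a hybrid split (not the WebJobs SDK itself, which is a .NET library): the cheap side drops work onto an Azure Storage queue and the Worker Role drains it. The queue name, connection string setting, and message format are assumptions; the azure-storage-queue package is used for both sides here purely to keep the illustration in one language.

    # Hybrid hand-off sketch: a Web App/WebJob enqueues work items, and a more
    # powerful Worker Role processes them. Assumes the queue already exists and
    # that AZURE_STORAGE_CONNECTION_STRING is configured; names are placeholders.
    import os
    from azure.storage.queue import QueueClient

    conn_str = os.environ["AZURE_STORAGE_CONNECTION_STRING"]
    queue = QueueClient.from_connection_string(conn_str, queue_name="resize-jobs")

    # Producer side (lightweight, runs next to the website):
    queue.send_message("resize:image-1234.png")

    # Consumer side (resource-intensive, runs in the Worker Role):
    for msg in queue.receive_messages(messages_per_page=16, visibility_timeout=300):
        print("processing", msg.content)   # do the heavy work here
        queue.delete_message(msg)          # remove only after successful processing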

Some things to remember when choosing between a Web Job and a Worker Role:
A Worker Role is self-hosted on a dedicated VM, while a Web Job is hosted in a Web App container.
A Worker Role scales independently, while a Web Job scales along with its Web App container.
Web Jobs are perfect for polling RSS feeds, checking for and processing messages, and sending notifications; they are lightweight and cheaper than Worker Roles, but less powerful.

Related

Azure App Service and infrastructure maintenance

As I understand it, there is no concept of an update domain in App Service (or in other PaaS offerings). I am wondering how Azure handles OS updates if I have only a single instance of an App Service app. Do I need to plan for two or more instances to avoid cases where the app goes down during OS or other updates, or is this handled without downtime? According to the docs, App Service has a 99.95% SLA - is that downtime accounted for there?
First of all, welcome to the community.
Your application will not become unavailable when App Service is patching the OS, so you don't have to worry about that. Imagine if that were the case; it would be a huge problem. Instead, the PaaS service makes sure your application is replicated to an updated worker node before that happens.
But you should still have multiple instances, per the best practice listed in this article:
To avoid a single point-of-failure, run your app with at least 2-3 instances.
Running more than one instance ensures that your application is available when App Service moves or upgrades the underlying VM instances.
Have a look at this detailed blog post:
https://azure.github.io/AppService/2018/01/18/Demystifying-the-magic-behind-App-Service-OS-updates.html
When the update reaches a specific region, we update available instances without apps on them, then move the apps to the updated instances, then update the offloaded instances.
The SLA is the same regardless of the number of instances, even if you select "1 instance":
We guarantee that Apps running in a customer subscription will be available 99.95% of the time
Have a look at Hyper-V and VMware; they will give you a rough idea of how App Service handles this.
If you're looking for zero-downtime deployments with App Service, what you want are deployment slots.
Managing versions can be confusing; take a look at this issue I opened. It gives you a detailed how-to for managing different slot versions, which is not clearly described in the Microsoft docs.
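For completeness, here is a rough sketch of triggering a slot swap from code, assuming the track-2 azure-mgmt-web SDK exposes begin_swap_slot_with_production (check your SDK version); the subscription, resource group, app, and slot names are placeholders. The same swap can also be done from the portal or the Azure CLI.

    # Hypothetical sketch: swap a warmed-up "staging" slot into production for a
    # zero-downtime release. Assumes azure-identity and a track-2 azure-mgmt-web SDK.
    import os
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.web import WebSiteManagementClient
    from azure.mgmt.web.models import CsmSlotEntity

    client = WebSiteManagementClient(DefaultAzureCredential(), os.environ["SUBSCRIPTION_ID"])

    poller = client.web_apps.begin_swap_slot_with_production(
        resource_group_name="my-rg",
        name="my-web-app",
        slot_swap_entity=CsmSlotEntity(target_slot="staging", preserve_vnet=True),
    )
    poller.result()  # block until the swap completes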

What are the limitations/drawbacks of using a single Azure App Service to host multiple applications/microservices?

Can anyone tell me or explain what the limitations/drawbacks are of deploying multiple microservices (say 2-3) on a single Azure App Service server?
We use microservices to achieve the following:
Serve a single purpose or have a single responsibility
Have a clear interface for communication
Have fewer dependencies on each other
Can be deployed independently without affecting the rest of ecosystem
Can scale independently
Can fail independently
Allow your teams to work independently, without relying on other teams for support and services
Allow small and frequent changes
Have less technical debt
Have a faster recovery from failure
But how does Azure App Service behave when we deploy one of the microservices? Will it impact the other microservices? Can we use this in production environments?
I came across a few links about hosting multiple apps on a single App Service by defining a virtual path (for Windows) and by adding Azure Storage (for Linux), but is it a good/best practice to do this?
No, it's not. They will compete for compute resources, and in the case of a hardware failure all of them will go down.
It sounds like you're referring to hosting multiple App Service apps in a shared App Service plan. This is conceptually (and physically) the same as running multiple apps on a server, and I would think about pros/cons along those lines.
You can host many apps on the same plan as long as the plan provides enough memory/CPU/network resources to cover the combined demands of those apps. For a few small apps, a modest plan size shouldn't have a problem handling all of them in production. The main benefit of combining them is saving costs, since the plan, not the app, is the unit of charge.
Microsoft documents some reasons to isolate apps on separate plans:
The app is resource-intensive.
You want to scale the app independently from the other apps in the existing plan.
The app needs resources in a different geographical region.
From my experience, I'd add some considerations:
Deployment and restarting of apps can cause CPU spikes for the plan (which is a server). If your apps are performance-sensitive and you deploy often, you might want more separation.
Azure maintenance requires servers to restart at least once a month, and sometimes more often. If all your apps are on a shared plan, a patch reboot can mean the entire system is down and that all apps compete for resources when starting up simultaneously.
I generally use separate plans as environment boundaries, so a production plan is separate from a test plan. "Test" apps go on the test plan and "Prod" apps on the production plan, to prevent testing from impacting users.
Azure Functions may be a better fit for hosting many microservices.
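As a minimal illustration of that last point, here is a hypothetical sketch of one small microservice written as an HTTP-triggered Azure Function in the Python v2 programming model; the route and payload are made up, and each function app can be scaled and deployed independently of the others.

    # function_app.py - sketch of a tiny "orders" microservice as an Azure Function
    # (Python v2 programming model). Route and response body are illustrative only.
    import json
    import azure.functions as func

    app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS)

    @app.route(route="orders/{order_id}", methods=["GET"])
    def get_order(req: func.HttpRequest) -> func.HttpResponse:
        order_id = req.route_params.get("order_id")
        # A real service would query its own data store; this returns a stub.
        body = json.dumps({"id": order_id, "status": "pending"})
        return func.HttpResponse(body, mimetype="application/json", status_code=200)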

Migration to Azure Service Fabric - Architectural considerations

We have been on Azure since 2010 and have benefited greatly from its performance and reliability in our application. Azure offers a lot of enterprise-level services, and I think that the new "Azure Service Fabric" is great.
What I cannot understand from reading the documentation is the approach to migrating an "old" Cloud Service to the new Service Fabric. Why do we want to migrate? For horizontal scaling and more reliability.
Currently we have a single-instance Cloud Service that spins up a lot of subservices. Those subservices are great candidates for microservices. The only problem is that some of these subservices are "runners", i.e. they just cycle over our user database and decide whether an operation (service) has to be run for a particular user or not.
How would you migrate a service like this considering that more than one instance may run this service?
Thanks
The first thing to keep in mind is that once a service is started, it keeps running, and its lifecycle and uptime are controlled by Service Fabric (e.g., it will be restarted automatically if it crashes). The second thing to keep in mind is that you will end up with multiple instances of the service running at the same time (on different nodes), so they will end up doing the exact same thing on different nodes of your cluster.
Your first reflex could be to have one stateless service kind/instance per runner "subservice" that keeps running and leverages RunAsync (https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-reliable-services-advanced-usage). Personally, I wouldn't take that approach, since it could then require some kind of synchronization between services to prevent useless concurrency, given that they all do the exact same thing independently.
A better approach would be to have your runner services run only once in a while, when requested by a "main" service acting as an orchestrator. You could take a queue-based approach where the "main" service submits tasks (messages) to be processed by the runners, which listen concurrently on the same queue, making sure that at most one service instance completes each task.
For the queue, think Service Bus or a Reliable Concurrent Queue (https://learn.microsoft.com/en-us/dotnet/api/microsoft.servicefabric.data.collections.preview.ireliableconcurrentqueue-1).
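To make the pattern concrete, here is a sketch of the hand-off using a Service Bus queue (shown in Python with the azure-servicebus package purely for illustration; in Service Fabric the services themselves would typically be .NET). The queue name and connection string setting are placeholders. Because the runners receive from the same queue in the default peek-lock mode, each task is delivered to at most one runner at a time.

    # Queue-based orchestration sketch: the "main" service submits tasks, and the
    # runner instances (one per node) compete for them on the same queue.
    import os
    from azure.servicebus import ServiceBusClient, ServiceBusMessage

    conn_str = os.environ["SERVICEBUS_CONNECTION_STRING"]
    QUEUE = "runner-tasks"

    # Orchestrator ("main" service): submit one task per user that needs work.
    with ServiceBusClient.from_connection_string(conn_str) as client:
        with client.get_queue_sender(QUEUE) as sender:
            sender.send_messages(ServiceBusMessage("recalculate:user-42"))

    # Runner (every instance runs this same loop on its own node):
    with ServiceBusClient.from_connection_string(conn_str) as client:
        with client.get_queue_receiver(QUEUE, max_wait_time=30) as receiver:
            for msg in receiver:
                print("processing", str(msg))   # do the actual work here
                receiver.complete_message(msg)  # only the lock holder completes it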

Cloud Services vs VM deployment Cost

We are working on the architecture of a new web application to be hosted on Azure. This application would run only during the daytime (say 9 AM to 5 PM). What I have read so far about Azure suggests we would continue to be billed even when we stop the deployment.
However, in the case of an Azure VM (IaaS), billing stops when we stop the VM.
The client is keen to keep IT costs to a minimum. We are planning to use the WASABi/Autoscaling Application Block to auto-shutdown and auto-start the app so that it runs only during 9 AM-5 PM.
Deploying the application every morning and deleting it every evening, even programmatically, doesn't sound like good architecture.
Should we target the app at a VM rather than an Azure web role?
While hourly billing cost is definitely a consideration and it is true that if you stop a VM in IaaS, billing stops, there are other considerations as well. Some of them are:
With Cloud Services, you have to architect the application in a certain way to take advantage of the stateless model, so there may be a bit of a learning curve. With Virtual Machines, in theory you can build an application the way you are used to and deploy it in the cloud.
With Cloud Services, the major advantage is that you don't have to maintain the VM; that is something Microsoft does for you, so there's little or no IT-admin overhead. With VMs, maintaining the VM is your responsibility, so that's an additional, recurring cost (assuming you, or your client, have an IT admin on the payroll).
Generally speaking, if the application is a stand-alone application with a fairly simple deployment topology and is brand new, it is recommended that you write it as a Cloud Service, but do take the costs (development / IT admin) into consideration as well.
When you stop the virtual machine through a standard shutdown (through the machine itself, for example), it does continue to incur charges. The portal will eventually show it as shut down, but the VM still has resources allocated.
However, if you stop the machine through the portal, the API, or PowerShell, it will be stopped and DEALLOCATED. This means the VM will still use storage space, but it will not incur compute charges.
Simply schedule the deallocation of the machines during off-hours, and you will only pay for the usage during the day.
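A minimal sketch of those two scheduled jobs, using azure-identity and azure-mgmt-compute; the subscription ID, resource group, and VM names are placeholders. The key point is that begin_deallocate releases the compute allocation, which is what stops the billing, while a plain in-guest shutdown does not.

    # Evening/morning jobs for an app that should only incur compute charges 9-5.
    import os
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    client = ComputeManagementClient(DefaultAzureCredential(), os.environ["SUBSCRIPTION_ID"])
    RESOURCE_GROUP = "my-rg"
    VMS = ["web-vm-1", "web-vm-2"]

    def stop_for_the_night() -> None:
        # Deallocate (not just power off) so the VMs stop accruing compute charges.
        for vm in VMS:
            client.virtual_machines.begin_deallocate(RESOURCE_GROUP, vm).result()

    def start_for_the_day() -> None:
        for vm in VMS:
            client.virtual_machines.begin_start(RESOURCE_GROUP, vm).result()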

Windows Services into Azure Worker Roles

What is the established best practice for porting a Windows Service to Azure? Should it be changed into a Worker Role or moved into a VM Role? Are there other options? Assume that my services write to external persistence sources (MSMQ, databases, WCF) rather than to the file system directly.
You are far better off converting your Windows Services to Worker Roles than to VM roles. VM roles are meant to house applications that require complex, un-automatable installation procedures. They are also a bigger pain to manage, and you want to stay away from VM roles as much as possible. If you can find a way to automate the deployment of your existing Windows Services via Worker Roles, that is definitely the way to go.
You can also look into HPC roles; depending on the on-prem/off-prem and load/compute requirements, adding Azure machines to your HPC cluster may be of benefit.
All types of roles (Web/Worker/VM/HPC) are stateless and must be able to spin up or tear down from scratch on demand. All types of roles are meant to run more than one VM instance at a time.
HTH
I wrote a blog post about this a while back. It is here:
http://blogs.msdn.com/b/golive/archive/2011/02/11/installing-a-windows-service-in-a-worker-role.aspx
Note that a Windows Service won't communicate directly with the fabric controller, so you need to ping it periodically to check its health and then take remedial action as needed (see the watchdog sketch after this answer).
Putting a Windows Service into a worker or web role is accepted practice. The main reason to go with VM Role is if there is significant (>10 minutes) setup required. My blog post details how to install your service.
Of course, if you want to move the code into a worker role, that's also fine. In this case you don't need any special steps to ensure the fabric controller is aware of its health.
If cost is an issue, combining functions into a single web/worker role is also accepted practice. And you can save effort by not reworking your code to get it into a web/worker role.
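For the health-pinging mentioned above, here is a minimal, hypothetical watchdog sketch: it assumes the Windows Service exposes an HTTP health endpoint (the URL and service name are made up) and restarts the service via the sc utility when the endpoint stops responding. In a real worker role this logic would normally live in the role's Run loop rather than in a separate script.

    # Hypothetical watchdog: poll an assumed health endpoint and restart the
    # Windows Service when it stops answering. URL and service name are placeholders.
    import subprocess
    import time
    import urllib.request

    HEALTH_URL = "http://localhost:8080/health"
    SERVICE_NAME = "MyLegacyService"

    def healthy() -> bool:
        try:
            with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
                return resp.status == 200
        except Exception:
            return False

    while True:
        if not healthy():
            # Remedial action: restart the service via the Windows sc utility.
            subprocess.run(["sc", "stop", SERVICE_NAME], check=False)
            time.sleep(10)
            subprocess.run(["sc", "start", SERVICE_NAME], check=False)
        time.sleep(60)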
Azure has a special type of Web Role called a "WCF Service Web Role", which corresponds to a Windows WCF service. This is a good starting point for migrating existing services.
Ideally the migration should be followed by taking advantage of Azure-specific features, for instance using queues and worker roles to maximise performance and scalability.

Resources