I have a web application (ASP.NET 4.5) deployed on IIS across 3 different virtual machines. From the UI you can enqueue long-running jobs with a duration between 1 and 10 hours, and this processing can't easily be split up (I use Hangfire to manage these jobs); users can check the status of jobs from the UI. There are multiple databases with the same schema, split by group of users. For storage it uses a shared drive (about 40 TB of stuff).
In this scenario, what about migrating everything to Azure with a microservices architecture?
I was thinking of keeping the Hangfire component to manage long jobs/recurring operations, and having a React frontend that calls one or more API microservices to get info and enqueue jobs.
As for the database: is it worth splitting it further, considering it is already "balanced" by group of users? I was also thinking of using the CQRS pattern with a read-only database populated via Service Bus messages, but I'm not sure about the advantages...
How would you migrate this kind of application to the cloud?
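A minimal sketch of the enqueue flow described above, assuming a Web API controller in front of Hangfire (the JobsController and LongJobProcessor names are hypothetical):

```csharp
using System.Web.Http;
using Hangfire;

// Hypothetical sketch: an API endpoint that enqueues a long-running job via Hangfire.
public class JobsController : ApiController
{
    [HttpPost]
    public IHttpActionResult Enqueue(string input)
    {
        // Hangfire persists the job and a worker picks it up; the returned id
        // lets the UI poll the job's status.
        string jobId = BackgroundJob.Enqueue<LongJobProcessor>(p => p.Run(input));
        return Ok(jobId);
    }
}

public class LongJobProcessor
{
    public void Run(string input)
    {
        // the 1-10 hour processing goes here
    }
}
```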
We have been on Azure since 2010 and have seen great benefits in performance and reliability in our application. Azure offers a lot of enterprise-level services, and I think the new "Azure Service Fabric" is great.
What I cannot understand from reading the documentation is the approach to migrating an "old" Cloud Service to the new Service Fabric. Why do we want to migrate? For horizontal scaling and more reliability.
Currently we have a single-instance Cloud Service that spins up a lot of subservices. Those subservices are great candidates for microservices. The only problem is that some of these subservices are "runners", i.e. they just cycle over our users database and decide whether an operation (service) has to be run for a particular user or not.
How would you migrate a service like this considering that more than one instance may run this service?
Thanks
The first thing to keep in mind is that once a service is started it keeps running, and its lifecycle and uptime are controlled by Service Fabric (e.g. it will restart the service automatically if it crashes). The second thing to keep in mind is that you will end up with multiple instances of the service running at the same time (on different nodes), so they will all end up doing the exact same thing on different nodes of your cluster.
Your first reflex might be to have one stateless service kind/instance per runner "subservice" that keeps running and leverages RunAsync (https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-reliable-services-advanced-usage). Personally, I wouldn't take that approach, since it would then require some kind of synchronization between services to prevent useless concurrency, given that they all do the exact same thing independently.
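For illustration, a minimal sketch of that RunAsync approach, assuming a stateless Reliable Service (the RunnerService name and one-minute interval are made up):

```csharp
using System;
using System.Fabric;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Runtime;

// Hypothetical sketch: a stateless service whose background loop is owned by
// Service Fabric; every instance on every node runs this same loop.
internal sealed class RunnerService : StatelessService
{
    public RunnerService(StatelessServiceContext context) : base(context) { }

    protected override async Task RunAsync(CancellationToken cancellationToken)
    {
        while (true)
        {
            cancellationToken.ThrowIfCancellationRequested();
            // cycle over the users database and run any pending operations here
            await Task.Delay(TimeSpan.FromMinutes(1), cancellationToken);
        }
    }
}
```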
A better approach: since your runner services only need to run once in a while, when requested by a "main" service acting as an orchestrator, you could use a queue-based design where the "main" service submits tasks (messages) to be processed by the runners, which listen concurrently on the same queue. The queue's competing-consumer semantics make sure that at most one service instance completes each task.
For the queue, think Service Bus or Reliable Concurrent Queue (https://learn.microsoft.com/en-us/dotnet/api/microsoft.servicefabric.data.collections.preview.ireliableconcurrentqueue-1).
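For illustration, a minimal sketch of the competing-consumer side with a Service Bus queue (the "runner-tasks" queue name and connection string are placeholders):

```csharp
using Microsoft.ServiceBus.Messaging;

class RunnerListener
{
    // Every runner instance calls this; Service Bus delivers each message to
    // exactly one of the concurrent listeners on the queue.
    public static void Listen(string connectionString)
    {
        var client = QueueClient.CreateFromConnectionString(connectionString, "runner-tasks");
        var options = new OnMessageOptions { AutoComplete = false };
        client.OnMessage(message =>
        {
            // run the operation described in the message...
            message.Complete(); // ...then mark it done so no other instance retries it
        }, options);
    }
}
```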
I am writing an Azure-hosted MVC website for a gym booking system. I need to be able to maintain membership expiry and suspensions, as well as gym class attendance (i.e. logging to the database when a session has been missed). Each of these tasks requires a "C# service function" to be run that will go through the database, perform some checks, and update records as and when required.
I need this to run pretty regularly to ensure that missed sessions are logged ASAP. Should I be developing this as an Azure WebJob and running it continuously? Should I be doing it in another manner? If I could get some suggestions on routes to take, that would be massively appreciated.
Thanks
You have a few options: Web Jobs, Scheduler, and Worker Roles.
Web Jobs are a nice add-on to an existing Azure web app and have the benefit of no additional cost. Web Jobs use Scheduler under the covers if you choose to schedule the Web Job to run at an interval other than continuously. Here is a nice answer that describes the differences between the two.
Worker Roles would be the next logical step up from a Web Job. Worker Roles are dedicated Cloud Service VMs that provide more horsepower and greater scaling capabilities, and they can do much more than just run jobs.
For the application you have described, if you are already running on Azure App Service (Web Apps), it sounds like a continuously running Web Job would be the correct choice.
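As a starting point, a minimal sketch of such a Web Job using the WebJobs SDK timer extension; the function name and five-minute CRON schedule are illustrative:

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Timers;

class Program
{
    static void Main()
    {
        var config = new JobHostConfiguration();
        config.UseTimers(); // enables [TimerTrigger]
        new JobHost(config).RunAndBlock(); // keeps the continuous Web Job alive
    }
}

public class Functions
{
    // Runs every five minutes while the Web Job is deployed as continuous.
    public static void CheckMissedSessions([TimerTrigger("0 */5 * * * *")] TimerInfo timer, TextWriter log)
    {
        log.WriteLine("Scanning for missed sessions, expiries, and suspensions...");
        // query the database, perform the checks, and update records here
    }
}
```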
I'm trying to figure out a solution for recurring data aggregation of several thousand remote XML and JSON data files, by using Azure queues and WebJobs to fetch the data.
Basically, an input endpoint URL of some sort would be called (with a data URL as a parameter) on an Azure website/app. It should trigger a WebJobs background job (or the job could run continuously and check the queue periodically for new work), fetch the data URL, and then call back an external endpoint URL on completion.
Now the main concern is the volume and its performance/scaling/pricing overhead. There will be around 10,000 URLs to be fetched every 10-60 minutes (most URLs will be fetched once every 60 minutes). With regards to this scenario of recurring high-volume background jobs, I have a couple of questions:
Is Azure WebJobs (or Worker Roles?) the right option for background processing at this volume, and can it scale accordingly?
For this sort of volume, which Azure website tier would be most suitable (comparison at http://azure.microsoft.com/en-us/pricing/details/app-service/)? Or would only a Cloud Service or VM(s) work at this scale?
Any suggestions or tips are appreciated.
Yes, Azure WebJobs is an ideal solution for this. Azure WebJobs will scale with your Web App (formerly Websites), so if you increase your web app instances, you will also increase your web job instances. There are ways to prevent this, but that's the default behavior. You could also set up autoscale to automatically scale your web app based on CPU or other performance rules you specify.
It is also possible to scale your web job independently of your web front end (WFE) by deploying the web job to a web app separate from the one where your WFE is deployed. This has the benefit of not taking up machine resources (CPU, RAM) that your WFE is using, while giving you the flexibility to scale your web job instances to the appropriate level. That's not to say this is what you should do; you will have to do some load testing to determine whether this strategy is right (or necessary) for your situation.
You should consider at least the Basic tier for your web app. That would allow you to scale out to 3 instances if you needed to and also removes the CPU and Network I/O limits that the Free and Shared plans have.
As for the queue, I would definitely suggest using the WebJobs SDK and letting the JobHost (from the SDK) invoke your web job function for you instead of polling the queue yourself. This is a really slick solution that frees you from writing the infrastructure code to retrieve messages from the queue, manage message visibility, delete the message, etc. For a working example and a quick start on building your web job this way, take a look at the sample code that the Azure WebJobs SDK Queues template generates for you.
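In that spirit, a minimal queue-triggered sketch; the "fetch-requests" queue name and function body are placeholders:

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;

class Program
{
    static void Main()
    {
        // The JobHost reads the AzureWebJobsStorage connection string from
        // config and polls the queue for you.
        new JobHost().RunAndBlock();
    }
}

public class Functions
{
    // Invoked whenever a message lands on the "fetch-requests" queue; the SDK
    // handles message retrieval, visibility timeouts, and deletion.
    public static void ProcessFetchRequest([QueueTrigger("fetch-requests")] string dataUrl, TextWriter log)
    {
        log.WriteLine("Fetching: " + dataUrl);
        // download the XML/JSON, aggregate it, then call back the external endpoint
    }
}
```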
I need to design an AZURE architecture for a service. Some key features:
the user load can reach up to 50K requests per second
the architecture should be scalable
the service requires real-time user notifications
some requests must be queued as there are limits for specific calls (user should know that an operation is pending)
global availability
My first idea is:
MVC as a client entry point (azure web site)
WEB API as a backend (azure web site)
Service Bus (for requests queueing)
Web Jobs (workers for queued requests)
Azure DB for data storage
SignalR hub for live notifications
Azure Traffic Manager
What do you think about the above? Any suggestions / best practices to make this highly scalable and available?
First of all, just to make sure: 50k requests per second? Are you sure that's not 50k concurrent users or some such? 50k/second is near Twitter volume, and about 1/10th of Google's requests/sec volume - which is HUGE.
50k concurrent users usually translates to 500-600 requests/second: assuming 10 page views per 15-minute user session, that's 50,000 users × 10 views / 900 seconds ≈ 550 requests/second.
Now, onto your question:
I would reconsider using Azure Service Bus for such a high-volume system and consider Event Hubs (as Panagiotis pointed out), or stick with the simpler but more scalable Azure Storage Queues for messaging. You will need to design a queue strategy whereby you spread messages across multiple queues, so that you don't overflow a single queue, which lives in a single storage partition.
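To illustrate the spreading idea, a hypothetical sketch that hashes a key across N storage queues (the "work-" prefix and queue count are made up):

```csharp
using Microsoft.WindowsAzure.Storage.Queue;

class QueueSpreader
{
    const int QueueCount = 16; // hypothetical; size this to your throughput target

    // Picks one of N queues by hashing the message key, so no single queue
    // (and thus no single storage partition) takes the full write load.
    public static CloudQueue PickQueue(CloudQueueClient client, string key)
    {
        // Mask to non-negative; note GetHashCode is not stable across processes,
        // so use a stable hash (e.g. FNV-1a) if placement must be deterministic.
        int index = (key.GetHashCode() & 0x7fffffff) % QueueCount;
        return client.GetQueueReference("work-" + index);
    }
}
```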
I would also consider Azure Web Roles instead of Azure Websites to host IIS and to run the queue processing. Websites are limited to only 10 servers per load-balanced endpoint, and there are also limitations on the number of cores. At 50k requests/sec you'll need a decent amount of horsepower to run through queues and serve traffic, and 10 servers might not be enough.
Which Azure DB are you referring to? Azure Document DB? SQL Azure DB?
I want to create an Azure application which does the following:
User is presented with an MVC 4 website (web role) which shows a list of commands.
When the user selects a command, it is broadcast to all worker roles.
Worker roles process the task, store the results, and notify the web role
Web role displays the combined results of the worker roles
From what I've been reading there seem to be two ways of doing this: the Windows Azure Service Bus or using Queues. Each worker role also stores the results in the database.
The Service Bus seems more appropriate with its publish/subscribe model, so all worker roles would get the same command at roughly the same time. Queues seem easier to use, though.
Can the Service Bus be used locally with the emulator when developing? I am using a free trial and cannot keep the application running constantly whilst still developing. Also, when using queues, how can you notify the web role that processing is complete?
I agree. ServiceBus is a better choice for this messaging requirement. You could, with some effort, do the same with queues. But, you'll be writing a lot of code to implement things that the ServiceBus already gives you.
There is not a local emulator for ServiceBus like there is for the Azure Storage service (queues/tables/blobs). However, you could still use the ServiceBus for messaging between roles while they are running locally in your development environment.
As for your last question about notifying the web role that processing is complete, there are several ways to go here. Just a few thoughts (not an exhaustive list)...
Table storage where the web role can periodically check the status of the unit of work.
Another ServiceBus queue/topic for completed work (see the sketch after this list).
Internal endpoints. You'll have to have logic to know whether it's just an update from worker role N or whether it indicates a completed unit of work for all worker roles.
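As a sketch of the second option, workers could drop a message on a dedicated completion queue that the web role listens to (the "completed-work" queue name and message shape are hypothetical):

```csharp
using System;
using Microsoft.ServiceBus.Messaging;

class CompletionNotifier
{
    // Worker side: report that this instance finished its unit of work.
    public static void ReportDone(string connectionString, string workerId, string commandId)
    {
        var client = QueueClient.CreateFromConnectionString(connectionString, "completed-work");
        client.Send(new BrokeredMessage(commandId) { ReplyTo = workerId });
    }

    // Web role side: collect completion messages and combine results once all
    // expected workers have reported in.
    public static void Listen(string connectionString, Action<string> onCompleted)
    {
        var client = QueueClient.CreateFromConnectionString(connectionString, "completed-work");
        client.OnMessage(msg => onCompleted(msg.GetBody<string>()));
    }
}
```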
I agree with Rick's answer, but would also add the following things to think about:
If you choose the Service Bus Topic approach, then as each worker role comes online it needs to create a subscription to the topic. You'll need to think about subscription maintenance: what happens when one of the workers fails and is recycled, or any number of other reasons why a stale subscription may be left out there.
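A hypothetical sketch of that per-instance subscription setup; the AutoDeleteOnIdle setting (available in newer Service Bus SDK versions) is one way to let stale subscriptions from recycled workers clean themselves up:

```csharp
using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class SubscriptionSetup
{
    // Each worker instance subscribes to the shared "commands" topic under its
    // own name, so every instance receives every broadcast command.
    public static SubscriptionClient Create(string connectionString, string instanceId)
    {
        var ns = NamespaceManager.CreateFromConnectionString(connectionString);
        var description = new SubscriptionDescription("commands", "worker-" + instanceId)
        {
            // If a recycled worker never comes back, its idle subscription is
            // deleted automatically instead of accumulating messages forever.
            AutoDeleteOnIdle = TimeSpan.FromHours(1)
        };
        if (!ns.SubscriptionExists(description.TopicPath, description.Name))
            ns.CreateSubscription(description);
        return SubscriptionClient.CreateFromConnectionString(
            connectionString, description.TopicPath, description.Name);
    }
}
```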
Telling the web role that all the workers are complete is interesting. The options Rick provides are good ones, but you'll need to think about some things here. It means the web role needs to know just how many workers are out there, or needs some other mechanism to decide when all have reported done. You could have the situation where five worker roles receive a message and start working, then one of them starts to repeatedly fail processing. The other four report their completion, but now the web role is waiting on the fifth. How long do you wait for a reply? Can you continue? What if you just told the system to scale down, and while the web role thinks there are 5 workers there are now only 4? These are things you'll need to think about, and they all depend on your requirements.
Based on your question, you could use either queue service and get good results. But each of them are going to have different challenges to overcome as well as advantages.
Some advantages of Service Bus queues are that they provide blocking receipt with a persistent connection (up to 100 connections), can monitor messages for completion, and can send larger messages (256 KB).
Some advantages of storage queues over the Service Bus solution are that they're slightly faster (if 15 ms matters to you), you can use a single storage system (since you'll probably be using Storage for blob and table services anyway), and auto-scaling is simple. If you need to auto-scale your worker roles based on load, passing the requests through a storage queue makes auto-scaling trivial -- you just set up auto-scaling in the Azure Cloud Service UI under the Scale tab.
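For completeness, a minimal sketch of that storage-queue path (the "commands" queue name and message content are placeholders):

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class CommandPublisher
{
    // Web role side: drop each command on a storage queue; queue length can
    // then drive the worker roles' auto-scale rule.
    public static void Publish(string connectionString, string command)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var queue = account.CreateCloudQueueClient().GetQueueReference("commands");
        queue.CreateIfNotExists();
        queue.AddMessage(new CloudQueueMessage(command));
    }
}
```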
A more in-depth comparison of the two azure queue services can be found here: http://msdn.microsoft.com/en-us/library/hh767287.aspx
"Also, when using queues how can you notify the web role that processing is complete?"
For the Azure Storage Queues solution, I've written a library that can help: https://github.com/brentrossen/AzureDistributedService.
It provides a proxy layer that facilitates RPC style communication from web roles to worker roles and back through Storage Queues.