I am working on something that utilizes Azure services and Azure Functions (with a Service Bus trigger), and I'm trying to figure out whether it matters to distribute the messages by creating multiple subscriptions vs. just one.
Please see the Before vs. After in the chart below:
I am trying to improve the performance of the entire process, as there are too many messages sitting in there.
There's no difference between the 3 functions in the After chart; all they do is upsert DB records. Does it even matter if I have 1 subscription vs. 3 subscriptions in this flow?
Best practices for performance improvements using Service Bus messaging can be found here: https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-performance-improvements?tabs=net-standard-sdk-2
In your design it looks like writing to the database is going to be the bottleneck. So I would start with a single subscription.
Try the fan-out/fan-in pattern with Azure Durable Functions.
See the use cases below for reference:
https://madeofstrings.com/2019/01/09/scaling-azure-functions-to-make-500000-requests-to-weather-com-in-under-3-minutes/
https://ajitpatra.com/2020/01/15/using-service-bus-triggered-azure-durable-function-with-d365-ce/
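To make the fan-out/fan-in suggestion concrete, here is a minimal sketch using the Durable Functions extension (Microsoft.Azure.WebJobs.Extensions.DurableTask); the function names and the idea of splitting the messages into batches are illustrative assumptions, not taken from your design:

```csharp
// Sketch only: names and batching are illustrative assumptions.
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class UpsertOrchestration
{
    [FunctionName("UpsertOrchestrator")]
    public static async Task Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        // Fan out: start one activity per batch of messages, in parallel.
        var batches = context.GetInput<List<string>>();
        var tasks = new List<Task>();
        foreach (var batch in batches)
        {
            tasks.Add(context.CallActivityAsync("UpsertBatch", batch));
        }

        // Fan in: wait until every parallel upsert has completed.
        await Task.WhenAll(tasks);
    }

    [FunctionName("UpsertBatch")]
    public static Task UpsertBatch([ActivityTrigger] string batch)
    {
        // Upsert this batch of records into the database here.
        return Task.CompletedTask;
    }
}
```

Each activity invocation can be scheduled onto a separate worker, so the parallel upserts are not serialized behind a single consumer.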
I am a complete newbie to Azure and Azure Functions, but my team plans to move to Azure soon. Now I'm researching how I could use Azure Functions to do what I would normally do in a .NET console application.
My question is, can Azure Functions handle quite a bit of code processing?
Our team uses several console apps that effectively pick up a pipe-delimited file, do some business logic, update a database with the data, and log everything along the way. From what I've been reading so far, Azure Functions are typically used for little pieces of code. How little do they mean? Is it best practice to have a bunch of Azure Functions replace a console app, e.g. one function that reads the file and creates a list of objects, another function that loops through those items and applies business logic, and then another that writes the data to a database? Or can I use one Azure Function to do all of that?
The direct answer is yes - you can run bigger pieces of code as an Azure Function; this is not a problem as long as you stay within their limitations. You can even have dependency injection. For chained scenarios, you can use Durable Functions. However, Microsoft does not recommend long-running functions because of unexpected timeouts. See the best practices for Azure Functions.
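As a quick sketch of that dependency injection support, assuming a compiled, in-process Functions project with the Microsoft.Azure.Functions.Extensions package (the file-parser service is a made-up example):

```csharp
// Sketch only: the file-parser service is a made-up example.
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(MyApp.Startup))]

namespace MyApp
{
    public interface IFileParser { }
    public class PipeDelimitedFileParser : IFileParser { }

    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            // Register services once; they can then be injected into
            // function class constructors.
            builder.Services.AddSingleton<IFileParser, PipeDelimitedFileParser>();
        }
    }
}
```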
Because of the timeout concern, I would consider alternatives:
If all you need is to run a console app in Azure, you can use WebJobs. Here is an example of how to deploy a console app directly to Azure via Visual Studio.
For more complex logic you can use a .NET Core Worker Service, which behaves like a Windows Service and can be deployed to Azure as an App Service (see the sketch after this list).
If you need long-running jobs, but with scheduled runs only, I have had a really great experience with Hangfire, which can be hosted in Azure as well.
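Here is the Worker Service sketch mentioned above; the processing loop and its interval are placeholder assumptions:

```csharp
// Sketch only: the processing loop and interval are placeholders.
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public class FileProcessingWorker : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // Pick up the pipe-delimited file, apply the business logic,
            // update the database, and log along the way.
            await Task.Delay(TimeSpan.FromMinutes(5), stoppingToken);
        }
    }
}

public static class Program
{
    public static void Main(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureServices(services =>
                services.AddHostedService<FileProcessingWorker>())
            .Build()
            .Run();
}
```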
This is really hard to answer because we don't know what kind of console app you have over there. I usually try to apply the same SOLID principles I'd use in any class to my functions too. And whenever you need to coordinate actions, or run things in parallel, use the Durable Functions framework.
The only concern is related to execution time. Your functions can get pretty expensive if you're running on a Consumption plan and don't pay attention to it. I recommend reading the following great article:
https://dev.to/azure/is-serverless-really-as-cheap-as-everyone-claims-4i9n
You can do all of that in one function.
If you need on-the-fly data processing, you can safely use Azure Functions, even if that involves reading files or communicating with a database.
What you need to be careful with and configure, though, is the timeout. Their scalability is an interesting topic as well.
If you need to host an application, you need a machine, or part of the storage space of a machine, in Azure to do that.
I have an idea whereby I intend to build a cloud-native application for algorithmic trading, ideally by consuming only PaaS and SaaS (no IaaS), and I'd like to get some feedback on how I intend to build it. The concept is pretty straightforward: I intend to consume financial trading data from an external SaaS solution via an API query, feed that data into various Azure PaaS solutions (most notably ML for modeling), and then take some action. Here is a high-level diagram I've come up with so far:
Solution Overview
As a note, while I'm familiar with Azure, I'm not an Azure cloud engineer and have limited experience actually building solutions myself. Consequently, I intend to use this project as a foundation to further educate myself.
When starting on the build, I immediately questioned whether or not I should use Event Hubs. Conceptually it makes sense, in that I'm decoupling the production of a data stream from its consumption. Presumably, this means fewer complications when/if I need to update the data feed(s) in the future. I also thought about where the data should be stored... a SQL database, or, more simply, an Azure Table? The idea here is that the trading data will need to be stored for regression testing as I iterate through my models. All that said, I'm looking for insights from anybody who may have experience in this space.
Thanks!
There's no real question in here. Take a look at the reference architectures provided by Microsoft: https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/
I'm aware of the many different ways of scheduling system-centric events in Azure. E.g. Azure Scheduler, Logic Apps, etc. These can be used for things like backups, sending batch emails, or other maintenance functions.
However, I'm less clear on what technology is available for events relating to a large volume of documents or records.
For example, imagine I have 100,000 documents in Cosmos and some of the datetime properties on those documents relate to events: e.g. expiry, reminders, escalations, timeouts, etc. Each record has a different set of dates and times.
What approaches are there to fire off code whenever one of those datetimes is reached?
Stuff I've thought of so far:
Have a scheduled task that runs once per minute, looks for anything in Cosmos relating to that particular minute, and then does "stuff".
Schedule tasks on Service Bus queues with a future date as-and-when the Cosmos records are created, and then have something receive those messages and do "stuff" (see the sketch after this list).
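To make that second option concrete, here's roughly what I imagine the scheduling side looking like with the Azure.Messaging.ServiceBus SDK; the queue name and payload are made up:

```csharp
// Sketch only: queue name and payload are made up.
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public static class ReminderScheduler
{
    public static async Task ScheduleAsync(
        string connectionString, string documentId, DateTimeOffset dueTime)
    {
        await using var client = new ServiceBusClient(connectionString);
        ServiceBusSender sender = client.CreateSender("reminders");

        // The message stays invisible until dueTime; a queue-triggered
        // receiver then picks it up and does "stuff" for the document.
        var message = new ServiceBusMessage(documentId);
        await sender.ScheduleMessageAsync(message, dueTime);
    }
}
```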
But are there better ways of doing this? Is there a ready-made Azure service that would take away much of the background infrastructure work and just let me schedule a single one-off event at a particular point in time and hit a webhook or something like that?
Am I mis-categorising Azure Scheduler as something that you'd use for a handful of regularly scheduled tasks rather than the mixed bag of dates and times you'd find in 100,000 Cosmos records?
FWIW, in my use-case there isn't really a precision issue - stuff scheduled for 10:05:00 happening at 10:05:32 would be perfectly acceptable, for example.
Appreciate your thoughts.
First of all, Azure Scheduler will be replaced by Azure Logic Apps:
Azure Logic Apps is replacing Azure Scheduler, which is being retired. To schedule jobs, follow this article for moving to Azure Logic Apps instead.
(source)
That said, Azure Logic Apps is one of your options, since you can define a logic app that starts a one-time job by using a delay action. See the docs for details.
It scales very well and you can pay for what you use (or use a fixed pricing model).
Another option is using a durable Azure Function with a timer in it. Once the timer elapses, you can do your thing. You can use a Consumption plan as well, so you pay only for what you use, or you can use a fixed pricing model. It also scales very well, so hundreds of those instances won't be a problem.
In both cases you have to trigger the function or logic app when the Cosmos records are created. Put the due time as context in the trigger and there you go.
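A minimal sketch of that durable timer approach, assuming the Microsoft.Azure.WebJobs.Extensions.DurableTask extension (the function names and the webhook activity are illustrative):

```csharp
// Sketch only: function names and the activity are illustrative.
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class OneOffEventOrchestration
{
    [FunctionName("OneOffEventOrchestrator")]
    public static async Task Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        // The due time is passed in when the orchestration is started,
        // e.g. from a Cosmos DB change feed trigger.
        DateTime dueTime = context.GetInput<DateTime>();

        // Durable timer: the orchestration is dehydrated until the due time.
        await context.CreateTimer(dueTime, CancellationToken.None);

        await context.CallActivityAsync("FireEvent", context.InstanceId);
    }

    [FunctionName("FireEvent")]
    public static void FireEvent([ActivityTrigger] string instanceId)
    {
        // Hit your webhook / do "stuff" for the scheduled event here.
    }
}
```

Durable timers are persisted, so a waiting orchestration consumes no compute until the timer fires.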
Now, given your statement
I'm aware of the many different ways of scheduling system-centric events in Azure. E.g. Azure Scheduler, Logic Apps, etc. These can be used for things like backups, sending batch emails, or other maintenance functions.
That is up to you. You can do anything you want. You don't specify in your question what work needs to be done when the due time is reached, but I doubt it is something you can't do with those services.
Recently I got another developer's Azure Function code written in C# Script (.csx); I'm used to writing Azure Functions using Visual Studio.
I love C# Script's imperative bindings; they make the code easier (no need to manage connections).
I see some problems with C# Script:
Code quality tools don't work (StyleCop/Sonar)
Can't write unit tests against .csx files
If you have a different opinion, please share.
So I decided that I will convert all the functions (10) into a .NET project with Sonar integration and unit tests.
Question
Most of my functions don't have any business logic; they get triggered from Event Hubs and dump data into Cosmos DB. I'm not able to decide: should I create 10 projects or 1 project under a single solution?
I believe a single project with multiple functions shares a single host.json file. If I change a host.json value to scale a particular function, it will impact the other functions as well. Am I right?
Is number of functions = number of projects the correct solution?
How would it impact cost?
My personal opinion is that .csx files are fine for experimenting or something quick and dirty, but for production you should be using compiled C#.
Adjusting any setting in the host.json file will impact all functions within that function app. There is no universally correct answer on when to break your functions out into separate apps, but there are a few questions you can ask to help answer it for your scenario:
1. Does a particular function have dramatically different scaling characteristics than the other function(s)? (e.g. does one of your message triggers get a very different message volume, or have different processing logic, than the others - do you need to change host.json?)
2. Is your function doing a separate business process from the other functions? (e.g. one is receiving device telemetry messages while the other is handling audit telemetry)
3. Do #1 and #2 justify the management and DevOps overhead of creating a separate function app? (Lots and lots of function apps, especially in microservice-like architectures, can be a challenge to manage.)
In your case you have some flexibility with your function apps: because they are just message listeners, they aren't impacted as much as, say, HTTP triggers if you find you want to break a function out into a separate app later (e.g. HTTP endpoints changing).
Your overall idea to move to precompiled projects makes sense. That's recommended by Microsoft for all but the simplest ad-hoc Functions.
Single project vs. multiple projects should be decided based on whether you want a single Function App or multiple Apps. A Function App is the unit of scaling. If you want multiple Functions to scale independently, they should be in separate Apps, and hence separate projects.
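For reference, a compiled Event Hub-to-Cosmos function of the kind you describe could look roughly like this; the hub, database, container, and app-setting names are placeholders:

```csharp
// Sketch only: hub, database, container, and setting names are placeholders.
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class TelemetryDump
{
    [FunctionName("TelemetryDump")]
    public static async Task Run(
        [EventHubTrigger("telemetry-hub", Connection = "EventHubConnection")]
            string[] events,
        [CosmosDB("mydb", "telemetry", ConnectionStringSetting = "CosmosConnection")]
            IAsyncCollector<object> documents)
    {
        // Dump each Event Hub event into Cosmos DB as-is.
        foreach (var evt in events)
        {
            await documents.AddAsync(new { payload = evt });
        }
    }
}
```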
We have a running site using NLog for logs. We are not only logging errors; we use it to measure things related to business logic.
Now we are moving to Azure, and that's why I'm searching for a better way to log this type of info in Azure. I'm looking for something like Graylog.
Things to keep in mind:
Does what Azure provides for logging make the info easy to read?
Can I run queries against the data?
Is there an API for logging?
Check out the following options, which are more or less native to Azure. You could also probably use some third-party services, like New Relic.
Log Analytics
Application Insights
Operations Management Suite
Application Insights not only has out-of-the-box monitoring but also provides capabilities to create your own queries.
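On the "Is there an API to log?" question: yes, the Application Insights SDK (Microsoft.ApplicationInsights). A small sketch of tracking a business-logic event with custom properties (the event and property names are invented):

```csharp
// Sketch only: the event and property names are invented.
using System.Collections.Generic;
using Microsoft.ApplicationInsights;

public class OrderTelemetry
{
    private readonly TelemetryClient _telemetry = new TelemetryClient();

    public void TrackOrderPlaced(string orderId, double amount)
    {
        // Custom events show up alongside the out-of-the-box telemetry
        // and can be queried later.
        _telemetry.TrackEvent(
            "OrderPlaced",
            properties: new Dictionary<string, string> { ["orderId"] = orderId },
            metrics: new Dictionary<string, double> { ["amount"] = amount });
    }
}
```

Those custom events and metrics can then be queried from the Application Insights Analytics portal.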
P.S. Just my 2 cents: I'd go for OMS. Microsoft is pushing it very hard and it is evolving rapidly; even if it is missing some capabilities now, they are going to be there soon. In the long run, Microsoft is really unlikely to drop OMS anytime soon, since they started pushing it about 1.5 years ago.