Best practice with Azure Functions implementation and triggers

I'm having a discussion with some colleagues of mine about Azure Functions. Let me give you a bit of context.
I have created a Function App responsible for communicating with the accounting system. In this app I have everything related to accounting, so if you want to use my functions, you know you will find everything in this one place. I think it is also easy to manage because everything is in one solution. Probably, if I have to update a model or a function, other functions or classes are affected.
For this reason, I have different triggers (HTTP, Service Bus, Timer...) in this Function App. I think a Function App is a container and each function in it is a "micro" service that implements SOLID principles by nature. So I would say my implementation is correct.
My colleagues say it is not good practice to mix different types of triggers in the same Function App.
What is the best practice? Is there any (official) recommendation or advice for that?

It is okay to use different types of triggers in the same Function App, but you need to consider that functions within a function app share resources.
Here are the general best practices for your reference: General best practices (Azure Functions documentation).
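To illustrate the shared-resource point, here is a minimal sketch of one Function App hosting all three trigger types, using the in-process C# model (the class, queue, and route names are hypothetical, not from the question):

using System;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class AccountingFunctions
{
    // HTTP trigger for on-demand requests.
    [FunctionName("GetInvoice")]
    public static IActionResult GetInvoice(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "invoices/{id}")] HttpRequest req,
        string id, ILogger log)
    {
        log.LogInformation("Fetching invoice {Id}", id);
        return new OkObjectResult(new { id });
    }

    // Service Bus trigger for asynchronous processing.
    [FunctionName("ProcessPosting")]
    public static void ProcessPosting(
        [ServiceBusTrigger("postings")] string message, ILogger log)
    {
        log.LogInformation("Processing posting: {Message}", message);
    }

    // Timer trigger for a nightly job (02:00 every day).
    [FunctionName("NightlyReconciliation")]
    public static void NightlyReconciliation(
        [TimerTrigger("0 0 2 * * *")] TimerInfo timer, ILogger log)
    {
        log.LogInformation("Reconciliation started at {Now}", DateTime.UtcNow);
    }
}

All three functions deploy and scale as one unit, which is exactly the trade-off to weigh: a spike on the Service Bus queue can consume resources the HTTP endpoint also needs.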


Azure Functions how much code can be done in one?

I am a complete newbie to Azure and Azure Functions, but my team plans to move to Azure soon. Now I'm researching how I could use Azure Functions to do what I would normally do in a .NET console application.
My question is, can Azure Functions handle quite a bit of code processing?
Our team uses several console apps that effectively pick up a pipe-delimited file, apply some business logic, update a database with the data, and log everything along the way. From what I've been reading so far, Azure Functions are typically described as being for little pieces of code. How little do they mean? Is it best practice to have a bunch of Azure Functions replace one console app (e.g., one function that reads a file and creates a list of objects, another that loops through those items and applies business logic, and another that writes the data to a database), or can I use one Azure Function to do all of that?
The direct answer is yes - you can run bigger pieces of code as an Azure Function; this is not a problem as long as you stay within their limitations. You can even have dependency injection. For chained scenarios, you can use Durable Functions. However, Microsoft does not recommend long-running functions because of unexpected timeouts. See the best practices for Azure Functions.
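For the chained scenario, a minimal Durable Functions sketch of the function-chaining pattern (the orchestrator, activity names, and file name are invented; the real logic is elided):

using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Microsoft.Extensions.Logging;

public static class ImportOrchestration
{
    // Orchestrator: chains the steps the console app used to run in sequence.
    [FunctionName("ImportOrchestrator")]
    public static async Task Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        var lines = await context.CallActivityAsync<string[]>("ReadFile", "orders.txt");
        var count = await context.CallActivityAsync<int>("ApplyBusinessLogic", lines);
        await context.CallActivityAsync("WriteToDatabase", count);
    }

    [FunctionName("ReadFile")]
    public static string[] ReadFile([ActivityTrigger] string path)
        => new string[0]; // ... read and split the pipe-delimited file ...

    [FunctionName("ApplyBusinessLogic")]
    public static int ApplyBusinessLogic([ActivityTrigger] string[] lines)
        => lines.Length; // ... business rules go here ...

    [FunctionName("WriteToDatabase")]
    public static void WriteToDatabase([ActivityTrigger] int count, ILogger log)
        => log.LogInformation("Wrote {Count} records.", count); // ... db update ...
}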
Because of the timeout concern above, I would consider alternatives:
If all you need is to run a console app in Azure, you can use WebJobs. There are examples of how to deploy a console app directly to Azure via Visual Studio.
For more complex logic you can use a .NET Core Worker Service, which behaves like a Windows Service and can be deployed to Azure as an App Service (see the sketch after this list).
If you need long-running jobs that only run on a schedule, I have had a really great experience with Hangfire, which can be hosted in Azure as well.
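As a rough sketch of the Worker Service option (the class name, polling interval, and logging are illustrative assumptions):

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

public class FileProcessingWorker : BackgroundService
{
    private readonly ILogger<FileProcessingWorker> _logger;

    public FileProcessingWorker(ILogger<FileProcessingWorker> logger) => _logger = logger;

    // Runs continuously, like a Windows Service, with no function timeout.
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            _logger.LogInformation("Checking for new files...");
            // ... pick up file, apply business logic, update database ...
            await Task.Delay(TimeSpan.FromMinutes(5), stoppingToken);
        }
    }
}

// Host setup (Program.cs):
// Host.CreateDefaultBuilder(args)
//     .ConfigureServices(s => s.AddHostedService<FileProcessingWorker>())
//     .Build().Run();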
This is really hard to answer because we don't know what kind of console app you have over there. I usually try to apply the same SOLID principles I would use in any class to my functions too. And whenever you need to coordinate actions or run things in parallel, use the Durable Functions framework.
The only concern is execution time. Your function can get pretty expensive if you're running on a Consumption plan and don't pay attention to it. I recommend reading the following great article:
https://dev.to/azure/is-serverless-really-as-cheap-as-everyone-claims-4i9n
You can do all of that in one function.
If you need on-the-fly data processing, you can safely use Azure Functions, even if that involves reading files or talking to a database.
What you need to be careful with and configure, though, is the timeout (the functionTimeout setting in host.json). Their scalability is an interesting topic as well.
If you need to host an application, you need a machine, or a share of one, in Azure to do that.
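As a sketch of the one-function approach (the blob container name is hypothetical and the database write is elided), a single blob-triggered function can cover the whole console-app pipeline:

using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ImportOrders
{
    // One function does the whole job: read the pipe-delimited file,
    // apply business logic, and persist the results.
    [FunctionName("ImportOrders")]
    public static void Run(
        [BlobTrigger("incoming/{name}")] Stream file, string name, ILogger log)
    {
        using var reader = new StreamReader(file);
        string line;
        var processed = 0;
        while ((line = reader.ReadLine()) != null)
        {
            var fields = line.Split('|');
            // ... business logic and database update would go here ...
            processed++;
        }
        log.LogInformation("Processed {Count} lines from {Name}", processed, name);
    }
}

If a single run risks exceeding the configured timeout, that is the signal to split the pipeline up.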

Is a Service provider really necessary in NestJS?

I am trying to understand the purpose of injecting service providers into a NestJS controller. The documentation explains how to use them, so that's not the issue here: https://docs.nestjs.com/providers
What I am trying to understand is this: in most traditional web applications, regardless of platform, a lot of the logic that goes into a NestJS service would normally go right into the controller. Why did NestJS decide to move the provider into its own class/abstraction? What design advantage does the developer gain here?
Nest draws inspiration from Angular, which in turn drew inspiration from enterprise application frameworks like .NET and Java's Spring Boot. In these frameworks, the biggest concerns are Separation of Concerns (SoC) and the Single Responsibility Principle (SRP), which mean that each class deals with a specific function and, for the most part, can do so without really knowing much about other parts of the application (which leads to loosely coupled designs).
You could, if you wanted, put all of your business logic in a controller and call it a day. After all, that would be the easy thing to do, right? But what about testing? You would need to send in a full request object for each piece of functionality you want to test. You could then make a request factory that builds these requests for you so testing is easier, but now you also need to test the factory to make sure it produces requests correctly (so now you're testing your test code). If you break apart the controller and the service, the controller can be tested simply for returning whatever the service returns, and that's that. The service can then take a specific input (like from the @Body() decorator in NestJS) and has a much easier input to work with and test.
By splitting the code up, the developer gains flexibility in maintenance and testing, plus some autonomy on a team: with interfaces set up, you know what contract you'll get from an injected service without needing to know how the service works in the first place. However, if you still aren't convinced, you can also read up on modular programming, coupling, and Inversion of Control.
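Since the answer traces this pattern back to .NET, here is the same controller/service split sketched in ASP.NET Core terms rather than NestJS (all type, route, and method names are invented for illustration):

using Microsoft.AspNetCore.Mvc;

// The service holds the business logic and can be tested with plain inputs,
// no HTTP request machinery required.
public interface IInvoiceService
{
    decimal CalculateTotal(int invoiceId);
}

public class InvoiceService : IInvoiceService
{
    public decimal CalculateTotal(int invoiceId)
    {
        // ... real business logic would live here ...
        return 0m;
    }
}

// The controller only translates HTTP concerns and returns whatever the
// service returns, which is all its test has to verify.
// (Registered via builder.Services.AddScoped<IInvoiceService, InvoiceService>().)
[ApiController]
[Route("invoices")]
public class InvoiceController : ControllerBase
{
    private readonly IInvoiceService _service;

    public InvoiceController(IInvoiceService service) => _service = service;

    [HttpGet("{id}/total")]
    public IActionResult GetTotal(int id) => Ok(_service.CalculateTotal(id));
}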

Best practices for writing/organising azure functions in project/solution

I have multiple Azure Functions created. Some are related to similar functionality and others are different. Let's say:
1. File Movement - TimerTrigger
2. Processing - HttpTrigger
For File Movement I have 2 functions, and for Processing, say, another 2 functions.
I have created all 4 Azure Functions in the same project. Is that the right way?
Should I put the FileMovement functions in the same class file and the Processing functions in a different class file, in the same project/solution?
Or a separate project for all the Azure Functions?
Note that application settings values must be shared across all the Azure Functions.
I blogged about this a while ago.
I suggest the following:
For larger solutions: apply Domain-Driven Design principles to your solution. Keep the functions that need to work together (within a bounded context, or a module within a bounded context) in one Function App. "What changes together should be deployed together."
Check the scaling requirements of the individual functions. If all functions have the same scaling behavior, they can stay in the same Function App. If some functions require different scaling than others, keep them in separate Function Apps.
Personally, I like to have one function definition per class, since that allows me to use nameof(FunctionClass) in the FunctionName attribute, as I described in this post; see the sketch after this list.
Use solution folders to keep the code in the Function App structured. One of my demo projects on GitHub: DurableFunctions.Demo.DotNetCore.
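A minimal sketch of that one-function-per-class convention (the class name and schedule are made up for illustration):

using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class MoveIncomingFiles
{
    // nameof keeps the function name in sync with the class name,
    // so renaming the class automatically renames the function.
    [FunctionName(nameof(MoveIncomingFiles))]
    public static void Run(
        [TimerTrigger("0 */5 * * * *")] TimerInfo timer, ILogger log)
    {
        log.LogInformation("File movement run started.");
    }
}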

Architecture azure functions

I have an Azure Function with an Azure Storage queue trigger. It runs fine without any problems. A JSON message is saved to the queue, and then the function does its job.
But now we need more functionality. I would like to extend the JSON with a functionality key. Is it better to extend the function:
If functionality = A, go to class A
Else go to class B
Or is it better to create a new function with the same trigger?
Regards
It is okay to have different classes in the function.
To make each function responsible for only one particular process, you can split it into two functions and use Service Bus topic subscriptions instead of Storage queues. This will keep the implementation reliable, as Service Bus has a wide set of features compared to Storage queues.
You can use rules on topic subscriptions to filter the messages, as sketched below.
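For illustration, a hedged sketch of setting up such subscription rules with the Azure.Messaging.ServiceBus SDK (the topic, subscription, and property names are hypothetical):

using Azure.Messaging.ServiceBus.Administration;

// One-time setup: route messages by a "functionality" application property.
var admin = new ServiceBusAdministrationClient("<service-bus-connection-string>");

// Each subscription starts with a catch-all $Default rule; remove it so
// only the filtered rule applies.
await admin.DeleteRuleAsync("jobs", "functionality-a", "$Default");
await admin.CreateRuleAsync("jobs", "functionality-a",
    new CreateRuleOptions("OnlyA", new SqlRuleFilter("functionality = 'A'")));

await admin.DeleteRuleAsync("jobs", "functionality-b", "$Default");
await admin.CreateRuleAsync("jobs", "functionality-b",
    new CreateRuleOptions("OnlyB", new SqlRuleFilter("functionality = 'B'")));

Each subscription then feeds its own ServiceBusTrigger function, so function A never sees B's messages.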
Functions are just like traditional apps; there's no issue in referencing a class library that handles the deserializing.
What you are looking for is a concept called message versioning. It's a heavy topic, so I can't cover it completely here, but versioning will happen.
One possibility is to treat each message as a Command (read up on CQRS). You could pre-parse the version number in the message and have a CommandHandler for each version, as in the sketch below.
This is not specific to Functions, but here's one piece of Functions-related advice: keep a single function. When versioning happens, it will be simpler to debug and to find which versions are still working and which are not.
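A minimal sketch of that per-version dispatch, under the assumption that every message carries a Version field (the envelope shape and handler names are made up):

using System;
using System.Collections.Generic;
using System.Text.Json;

// Assumes message JSON looks like {"Version": 1, "Body": {...}}.
public record Envelope(int Version, JsonElement Body);

public interface ICommandHandler
{
    void Handle(JsonElement body);
}

public class V1Handler : ICommandHandler
{
    public void Handle(JsonElement body) { /* old schema */ }
}

public class V2Handler : ICommandHandler
{
    public void Handle(JsonElement body) { /* new schema */ }
}

public class CommandDispatcher
{
    private readonly Dictionary<int, ICommandHandler> _handlers = new()
    {
        [1] = new V1Handler(),
        [2] = new V2Handler(),
    };

    // Pre-parse only the version, then hand the body to the matching handler.
    public void Dispatch(string message)
    {
        var envelope = JsonSerializer.Deserialize<Envelope>(message)
            ?? throw new ArgumentException("Unreadable message", nameof(message));
        if (!_handlers.TryGetValue(envelope.Version, out var handler))
            throw new InvalidOperationException($"No handler for version {envelope.Version}");
        handler.Handle(envelope.Body);
    }
}

The single queue-triggered function just calls Dispatch, so old and new message shapes can coexist during a rollout.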

How should I organize a solution with multiple Azure Functions?

Where can I find guidance on organizing a Visual Studio solution with multiple Azure Functions? Specifically, how should the project be organized?
A single Azure Function resides in a single class file. I suppose each function could be its own class file, all stored within a single project. But is this the optimal approach, or do I risk future complications due to a poorly organized project/solution?
The MS Azure team will probably have a better answer, but this is what has worked for us so far.
We have several function apps; each is in its own project (and solution).
Of those function apps, some have only a single function; others have multiple functions, each in its own class/file.
Where we have multiple functions, it is because those functions all relate to a particular feature area of our system. They work together, and for us it makes sense to maintain and deploy them as a group.
Other function apps are independent, containing only a single function doing a job unrelated to any other function. E.g. we have one timer-driven function that crunches some numbers and sends a push notification as required.
Grouping the functions in this way has (so far) made sense for us, as it gives us a balance between keeping our deployment relatively simple and being able to scale the 'groups' independently.
Anyway, this has proved good enough for our project thus far, but I'd be interested to see if there are better ways.
I am also considering having a large solution with many functions deployed as a single unit. During my research I came across the undocumented setting FunctionsInDependencies, which can be added to your startup project as below.
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net6.0</TargetFramework>
    <AzureFunctionsVersion>v4</AzureFunctionsVersion>
    <FunctionsInDependencies>true</FunctionsInDependencies>
  </PropertyGroup>
</Project>
You also need to have a reference in your startup project to the projects that implement the functions - e.g. a dummy class inheritance, as sketched below. Otherwise the project reference will be ignored.
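A hedged sketch of what such a dummy inheritance might look like (the project and type names are invented for illustration):

// In the referenced functions project (e.g. FileMovement.Functions):
namespace FileMovement.Functions
{
    public class FunctionsAssemblyMarker { }
}

// In the startup project: inheriting from a type in the functions project
// forces a real assembly reference, so the functions get discovered.
namespace Startup
{
    internal class FileMovementAnchor : FileMovement.Functions.FunctionsAssemblyMarker { }
}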
This isn't very pretty, to be honest. My final call will probably be to declare all the functions and bindings in the startup project and reference the implementation located in sub-projects.
Useful article: https://cosmin-vladutu.medium.com/how-to-split-up-durable-functions-in-multiple-class-projects-328b0015ed37
