How should I organize a solution with multiple Azure Functions?

Where can I find guidance on organizing a Visual Studio solution with multiple Azure Functions? Specifically, how should the project be organized?
A single Azure Function resides in a single class file. I suppose each function could be its own class file, all stored within a single project. But is this the optimal solution or do I risk future complications due to a poorly organized project / solution?

The MS Azure team will probably have a better answer but this is what has worked for us so far.
We have several function apps, each in its own project (and solution).
Of those function apps, some have only a single function, others have multiple functions each in their own class/file.
Where we have multiple functions, it is because those functions are all related to a particular feature area of our system. Hence they work together, and for us it makes sense to maintain and deploy them as a group.
Other function apps are independent, containing only a single function doing a job unrelated to any other function. For example, we have one timer-driven function that crunches some numbers and sends a push notification as required.
Grouping the functions in this way has (so far) made sense for us as it gives us a balance between keeping our deployment relatively simple and being able to scale the 'groups' independently.
Anyway this has proved good enough for our project thus far, but I'd be interested to see if there are better ways.

I am also considering having a large solution with many functions deployed as a single unit. While researching this, I came across the undocumented setting FunctionsInDependencies, which can be added to your startup project as below.
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net6.0</TargetFramework>
    <AzureFunctionsVersion>v4</AzureFunctionsVersion>
    <FunctionsInDependencies>true</FunctionsInDependencies>
  </PropertyGroup>
</Project>
You also need to have a reference in your startup project to the projects that implement the function app, and that reference must actually be used - e.g. via a dummy class inheritance. Otherwise the project reference will be ignored.
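For illustration, a minimal sketch of that dummy inheritance (all names here are hypothetical; SubProjectFunctions stands for any public, non-sealed class in the referenced functions project):

// Lives in the startup project. Inheriting from a type defined in the
// sub-project gives the startup assembly a hard dependency on it, so the
// otherwise-unused project reference is not dropped.
namespace Startup
{
    internal sealed class FunctionsReferenceAnchor : MyCompany.Accounting.SubProjectFunctions
    {
        // Intentionally empty.
    }
}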
This isn't very pretty, to be honest. My final call will probably be to declare all the functions and bindings in the startup project and reference the implementation located in sub-projects.
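A rough sketch of that approach, assuming a hypothetical IInvoiceProcessor implemented in a sub-project (in-process model; all names are illustrative):

using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

// Assumed to live in the sub-project together with its implementation.
public interface IInvoiceProcessor
{
    Task ProcessAsync(Stream body);
}

// Declared in the startup project: only the trigger and bindings live here;
// the implementation is resolved via dependency injection from the sub-project.
public class ProcessInvoice
{
    private readonly IInvoiceProcessor _processor;

    public ProcessInvoice(IInvoiceProcessor processor) => _processor = processor;

    [FunctionName(nameof(ProcessInvoice))]
    public async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req)
    {
        // The business logic stays in the sub-project.
        await _processor.ProcessAsync(req.Body);
        return new OkResult();
    }
}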
Useful article: https://cosmin-vladutu.medium.com/how-to-split-up-durable-functions-in-multiple-class-projects-328b0015ed37

Related

Best practice with Azure Functions implementation and trigger

I have a discussion with some colleagues of mine about the Azure Functions. I'm giving you a bit of context.
I have created an Azure Function app responsible for communicating with the accounting system. In this function app I have everything related to accounting. So, if you want to use my functions, you know you will find everything in this one. I think it is easy to manage, also because everything is in one solution. Probably, if I have to update a model or a function, other functions or classes are affected.
For this reason, I have different triggers (HTTP, Service Bus, Timer...) in this function app. I think an Azure Function app is a container and each function in it is a "micro" service that implements SOLID principles by nature. So I would say my implementation is correct.
My colleagues said it is not good practice to mix different types of triggers in the same Azure Function app.
What is the best practice? Is there any (official) recommendation or advice for that?
It is okay to use different types of triggers in the same Azure Function app, but you need to consider that functions within a function app share resources.
Here are the general best practices for your reference.
General best practices
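For illustration, a minimal sketch of mixing trigger types in one function app (in-process model; names, route and schedule are made up for the example):

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class AccountingFunctions
{
    // HTTP-triggered function: the synchronous API surface.
    [FunctionName("GetInvoice")]
    public static IActionResult GetInvoice(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "invoices/{id}")] HttpRequest req,
        string id,
        ILogger log)
    {
        log.LogInformation("Fetching invoice {Id}", id);
        return new OkObjectResult(new { id });
    }

    // Timer-triggered function in the same app; both functions share the
    // app's resources and scale together.
    [FunctionName("NightlyReconciliation")]
    public static void NightlyReconciliation(
        [TimerTrigger("0 0 2 * * *")] TimerInfo timer, // 02:00 every day
        ILogger log)
    {
        log.LogInformation("Running nightly reconciliation");
    }
}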

Best practices for writing/organising azure functions in project/solution

I have multiple Azure Functions created. Some are related to similar functionality, others are not. Let's say:
1. File Movement - TimerTrigger
2. Processing - HttpTrigger
For File Movement I have 2 functions and for Processing say another 2 functions.
I have created all 4 Azure Functions in the same project. Is that the right way?
Should I put the FileMovement functions in one class file and the Processing functions in a different class file - same project/solution?
Or separate projects for the Azure Functions?
Application settings values must be shared across all the Azure Functions.
I blogged about this a while ago.
I suggest the following:
For larger solutions: Apply Domain Driven Design principles to your solution. Keep the functions which require to work together (within a bounded context, or a module within a bounded context) within one Function App. "What changes together should be deployed together."
Check the scaling requirements of the individual functions. If all functions have the same scaling behavior then they can stay in the same Function App. If some functions require different scaling than others, keep them in separate Function Apps.
Personally, I like to have one function definition per class since that allows me to use nameof(FunctionClass) in the FunctionName attribute, as I described in this post.
Use solution folders to keep the code in the Function App structured. One of my demo projects on GitHub: DurableFunctions.Demo.DotNetCore.
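For illustration, a minimal sketch of that nameof pattern (class name and schedule are illustrative):

using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class MoveInputFiles
{
    // nameof keeps the registered function name in sync with the class name,
    // so a rename cannot silently break the deployment.
    [FunctionName(nameof(MoveInputFiles))]
    public static void Run(
        [TimerTrigger("0 */5 * * * *")] TimerInfo timer, // every 5 minutes
        ILogger log)
    {
        log.LogInformation("File movement run triggered");
    }
}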

Azure Function Project Structure (C# vs CSX) and Best Practices

Recently I received another developer's Azure Functions code written in C# Script (.csx); I am used to writing Azure Functions in Visual Studio.
I love C# Script's imperative binding; it makes the code easier (no need to manage connections).
But I saw some problems with C# Script:
Code quality tools don't work (StyleCop/Sonar)
Can't write unit tests against .csx files
If you have a different opinion, please share.
So I decided that I will convert all functions (10) into a .NET project with Sonar integration and unit tests.
Question
Most of my functions don't have any business logic; they get triggered from Event Hub and dump data into Cosmos DB. I'm not able to decide: should I create 10 projects or 1 project under a single solution?
I believe a single project with multiple functions has a single host.json file; if I change a host.json value to scale a particular function, it will impact the other functions as well. Am I right?
Is number of functions = number of projects the correct approach?
How it would impact cost?
My personal opinion is that the CSX files are fine for experimenting or something quick and dirty, but for production you should be using compiled C#.
Adjusting any setting in the host.json file will impact all functions within that function app. There is no universally correct answer on when to break out your function into separate apps but there are a few questions you can ask to help answer it for your scenario:
Does a particular function have a dramatically different scaling characteristic than the other function(s)? (e.g. does one of your message triggers get very different message volume or processing logic than the others - do you need to change the host.json?)
Is your function handling a separate business process from the other functions? (e.g. one is receiving device telemetry messages while the other is handling audit telemetry)
Do #1 and #2 justify the management and DevOps overhead of creating a separate function app? (lots and lots of function apps, especially in micro-service-like architectures, can be a challenge to manage)
In your case you have some flexibility with your function apps: because they are just message listeners, they aren't impacted as much as, say, HTTP triggers would be if you later decide to break a function out into a separate app (e.g. HTTP endpoints changing). A host.json sketch illustrating the shared-settings point follows below.
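To make the shared-settings point concrete, here is a host.json sketch (values are illustrative and assume the Event Hubs extension 4.x schema); every function in the app reads this one file, so tuning the batch size for one Event Hub function changes it for all of them:

{
  "version": "2.0",
  "extensions": {
    "eventHubs": {
      "batchCheckpointFrequency": 1,
      "eventProcessorOptions": {
        "maxBatchSize": 64,
        "prefetchCount": 256
      }
    }
  }
}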
Your overall idea to move to precompiled projects makes sense. That's recommended by Microsoft for all but the simplest ad-hoc Functions.
Single project vs multiple projects should be decided based on whether you want a single Function App or multiple Apps. A Function App is a unit of scaling. If you want multiple Functions to scale independently, they should be in separate Apps and projects.

Is it better to have lots of Azure Functions in one project or many?

I have written eight different Azure Functions in one .NET project. They work great, but I was wondering if it makes more sense from a performance-optimization standpoint to put each function in its own project. Any ideas in this regard would be most appreciated.
I don't see any reason why it would have an impact on performance. It could make a difference when you put them all in the same App Service plan.
A reason to put them in separate projects can be that you want to deploy and evolve them independently. We try to make them as independent as possible; putting them in separate projects, or even in different git repos with separate pipelines, makes it easier to evolve them independently.

How can I still use DDD, TDD in BizTalk?

I just started getting into BizTalk at work and would love to keep using everything I've learned about DDD, TDD, etc. Is this even possible, or am I always going to have to use the Visio-like editors when creating things like pipelines and orchestrations?
You can certainly apply a lot of the concepts of TDD and DDD to BizTalk development.
You can design and develop around the concept of domain objects (although in BizTalk and integration development I often find interface objects or contract first design to be a more useful way of thinking - what messages get passed around at my interfaces). And you can also follow the 'Build the simplest possible thing that will work' and 'only build things that make tests pass' philosophies of TDD.
However, your question sounds like you are asking more about the code-centric sides of these design and development approaches.
Am I right that you would like to be able to follow the test-driven development approach of first writing a unit test that exercises a requirement and fails, then writing a method that fulfils the requirement and causes the test to pass - all within a traditional programming language like C#?
For that, unfortunately, the answer is no. The majority of BizTalk artifacts (pipelines, maps, orchestrations...) can only really be built using the Visual Studio BizTalk plugins. There are ways of viewing the underlying C# code, but one would never want to try and directly develop this code.
There are two tools, BizUnit and BizUnit Extensions, that give some ability to control the execution of BizTalk applications and test them, but this really only gets you to the point of performing more controlled and more test-driven integration tests.
The shapes that you drag onto the Orchestration design surface will largely just do their thing as one opaque unit of execution. And Orchestrations, pipelines, maps etc... all these things are largely intended to be executed (and tested) within an entire BizTalk solution.
Good design practices (taking pointers from approaches like TDD) will lead to breaking BizTalk solutions into smaller, more modular and testable chunks, and there are ways of testing things like pipelines in isolation.
But the detailed specifics of TDD and DDD in code sadly don't translate.
For some related discussion that may be useful see this question:
Mocking WebService consumed by a Biztalk Request-Response port
If you often make use of pipelines and custom pipeline components in BizTalk, you might find my own PipelineTesting library useful. It allows you to use NUnit (or whatever other testing framework you prefer) to create automated tests for complete pipelines, specific pipeline components or even schemas (such as flat file schemas).
It's pretty useful if you use this kind of functionality, if I may say so myself (I make heavy use of it on my own projects).
You can find an introduction to the library here, and the full code on github. There's also some more detailed documentation on its wiki.
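For a flavor of what that looks like, a rough NUnit sketch based on the library's documented usage (the schema type is hypothetical, and exact type/method names should be verified against the wiki above):

using Microsoft.BizTalk.Message.Interop;
using NUnit.Framework;
using Winterdom.BizTalk.PipelineTesting;

[TestFixture]
public class XmlReceivePipelineTests
{
    [Test]
    public void DisassemblesASimpleXmlMessage()
    {
        // Wrap the stock XMLReceive pipeline for in-process execution.
        ReceivePipelineWrapper pipeline = PipelineFactory.CreateReceivePipeline(
            typeof(Microsoft.BizTalk.DefaultPipelines.XMLReceive));
        pipeline.AddDocSpec(typeof(MySchemas.Invoice)); // hypothetical schema type

        IBaseMessage input = MessageHelper.CreateFromString("<Invoice xmlns='http://example.org' />");
        MessageCollection output = pipeline.Execute(input);

        // The XML disassembler should emit exactly one message.
        Assert.AreEqual(1, output.Count);
    }
}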
I agree with the comments by CKarras. Many people have cited that as their reason for not liking the BizUnit framework. But take a look at BizUnit 3.0. It has an object model that allows you to write the entire test step in C#/VB instead of XML. BizUnitExtensions is being upgraded to the new object model as well.
The advantage of the XML-based system is that it is easier to generate test steps and there is no need to recompile when you update the steps. In my own Extensions library, I found the XmlPokeStep (inspired by NAnt) to be very useful. My team could update test step XML on the fly. For example, let's say we had to call a web service that created a customer record and then check a database for that same record. If the web service returned the ID (dynamically generated), we could update the test step for the next step on the fly (not in the same XML file, of course) and then use that to check the database.
From a coding perspective, the IntelliSense issue should be addressed now in BizUnit 3.0. The lack of an XSD did make things difficult in the past. I'm hoping to get an XSD out that will aid IntelliSense. There were some snippets as well for an old version of BizUnit, but those haven't been updated; maybe if there's time I'll give that a go.
But coming back to the TDD issue: if you take some of the intent behind TDD - the specification- or behavior-driven element - then you can apply it to some extent to BizTalk development as well, because BizTalk is based heavily on contract-driven development. So you can specify your interfaces first, create stub orchestrations etc. to handle them, and then build out the core. You could write the BizUnit tests at that time. I wish there were some tools that could automate this process, but right now there aren't.
Using frameworks such as the ESB guidance can also help give you a base platform to work off so you can implement the major use cases through your system iteratively.
Just a few thoughts. Hope this helps. I think it's worth blogging about more extensively.
This is a good topic to discuss. Do ping me if you have any questions, or we can always discuss more over here.
Rgds
Benjy
You could use BizUnit to create and reuse generic test cases both in code and Excel (for functional scenarios):
http://www.codeplex.com/bizunit
BizTalk Server 2009 is expected to have more IDE integrated testability.
Cheers
Hemil.
BizUnit is really a pain to use because all the tests are written in XML instead of a programming language.
In our projects, we have "ported" parts of BizUnit to a plain old C# test framework. This allows us to use BizUnit's library of steps directly in C# NUnit/MSTest code. This makes tests that are easier to write (using VS Intellisense), more flexible, and most important, easier to debug in case of a test failure. The main drawback of this approach is that we have forked from the main BizUnit source.
Another interesting option I would consider for future projects is BooUnit, which is a Boo wrapper on top of BizUnit. It has advantages similar to our BizUnit "port", but also has the advantage of still using BizUnit instead of forking from it.
