Running some startup tasks in Configure of Azure FunctionsStartup

I have a function that is bound to a QueueTrigger. In this function I generate a file and write it to Blob Storage.
But before writing (uploading) the file I want to make sure that the container exists. Is the Configure method in the startup class that inherits FunctionsStartup the right place? It feels wrong to do it every time the trigger runs, doesn't it?
I'm using DI to supply my function class with some services.
[FunctionName("MyFunction")]
public async Task Run([QueueTrigger(MyQueueName, Connection = "AzureWebJobsStorage")]
MyObject queueMessage, ILogger log)
{
var bytes = Encoding.UTF8.GetBytes("MyFileContent");
// Check if container exists - but not everytime?
var blobClient = new BlobClient(_settings.ConnectionString, _settings.ContainerName, _settings.FileName);
await using var memoryStream = new MemoryStream(bytes);
await blobClient.UploadAsync(memoryStream, true);
}
using MyApp.FunctionApp;
using MyApp.FunctionApp.Options;
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(Startup))]
namespace MyApp.FunctionApp
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            // Some startup tasks here, like ensuring the existence of a blob container?
            builder.Services.AddOptions<Storage>().Configure<IConfiguration>((settings, configuration) =>
            {
                configuration.GetSection("Storage").Bind(settings);
            });
        }
    }
}

Depending on how often you want to check, you could even do something as simple as this:
// Shared variable for all instances that run on the same VM
private static bool HaveCheckedBlobContainer = false;

Then, on each invocation:

if (!HaveCheckedBlobContainer)
{
    // perform check ...
    HaveCheckedBlobContainer = true;
}
I'll generally have an Initialize() method to set up some expensive instances that need to be stored in static member variables. I call Initialize() on each invocation and use a check such as

_someMemberVariable ??= getItFromMyDiContainerOrInstantiateIt();

so that the expensive work is only executed once, regardless of invocation count.
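Putting the two ideas together for the container check, a minimal sketch might look like this (assuming the Azure.Storage.Blobs package and the same _settings object as in the question):

private static bool _haveCheckedBlobContainer;

private async Task InitializeAsync()
{
    if (_haveCheckedBlobContainer) return;

    // CreateIfNotExistsAsync is a no-op when the container already exists,
    // so a duplicate check caused by a race between invocations is harmless.
    var containerClient = new BlobContainerClient(_settings.ConnectionString, _settings.ContainerName);
    await containerClient.CreateIfNotExistsAsync();

    _haveCheckedBlobContainer = true;
}

Calling await InitializeAsync() at the top of Run() keeps the check next to the upload while paying the network round-trip only once per host instance.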

Related

How to use dependency injection in Azure Durable Functions?

I want to create an Azure Durable Function that will download a CSV from the Internet and based on the data in this file, it will update my database using EntityFramework.
I set up a simple starter function that is triggered with a TimerTrigger. This function is responsible for starting the orchestrator. The orchestrator executes multiple activities in parallel. There are around 40,000 work items to be processed, so that's the number of activities triggered by the orchestrator. Some of these activities need to update the database (insert/update/delete rows). For this I need a database connection. I can configure DI in the Startup in the following way:
public override void Configure(IFunctionsHostBuilder builder)
{
    var connectionString = Environment.GetEnvironmentVariable("DefaultConnection");
    builder.Services.AddDbContext<SqlContext>(options => options.UseSqlServer(connectionString));
    builder.Services.AddScoped<IDbContext, SqlContext>();
}
However, all my functions (orchestrator, activity function, etc.) are static and reside in a static class. I haven't seen any example where durable functions were defined in a non-static class, and I had all kinds of problems when I tried that myself, so I assumed they must be static without diving too much into it.
I do not know how to pass my DbContext to the activity function so it can update the data in the database when needed.
How should I resolve this?
Configure DI in the StartUp in the following way:
public override void Configure(IFunctionsHostBuilder builder)
{
    var connectionString = Environment.GetEnvironmentVariable("DefaultConnection");
    builder.Services.AddDbContext<IDbContext, SqlContext>(options =>
        options.UseSqlServer(connectionString)); // To inject DbContext
    builder.Services.AddHttpClient();            // To inject HttpClient
}
Ensure you host your function app on Azure Functions runtime V3+ so the classes and methods don't have to be static.
This allows regular classes with non-static constructors whose arguments are injected:
public class MyFunction
{
    private readonly HttpClient httpClient;
    private readonly IDbContext dbContext;

    // ctor
    public MyFunction(IHttpClientFactory factory, IDbContext dbContext)
    {
        httpClient = factory.CreateClient();
        this.dbContext = dbContext;
    }

    [FunctionName("Function_Name_Here")]
    public async Task Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        // ... access dependencies here
    }

    // ... other functions, which can include static ones, but they won't
    // have access to the instance fields.
}
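For the activity functions specifically, the same pattern applies: declare them as instance methods on the class and use the constructor-injected dependencies. A minimal sketch (the function name, the string payload, and the assumption that your IDbContext exposes EF Core's SaveChangesAsync are all illustrative):

[FunctionName("UpdateDatabase")]
public async Task UpdateDatabase([ActivityTrigger] string workItem)
{
    // Apply the change for this work item through the injected context...
    // ...then persist it (assuming IDbContext surfaces SaveChangesAsync).
    await dbContext.SaveChangesAsync();
}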
This series of articles might be of some assistance to you
A Practical Guide to Azure Durable Functions — Part 2: Dependency Injection

What is the AddService of JobHostConfiguration with Azure WebJobs used for

I have the following WebJob Function...
public class Functions
{
    [NoAutomaticTrigger]
    public static void Emailer(IAppSettings appSettings, TextWriter log, CancellationToken cancellationToken)
    {
        // Start the emailer, it will stop on dispose
        using (IEmailerEndpoint emailService = new EmailerEndpoint(appSettings))
        {
            // Check for a cancellation request every 3 seconds
            while (!cancellationToken.IsCancellationRequested)
            {
                Thread.Sleep(3000);
            }
            log.WriteLine("Emailer: Canceled at " + DateTime.UtcNow);
        }
    }
}
I have been looking at how this gets instantiated, which I can do with the simple call...
host.Call(typeof(Functions).GetMethod("Emailer"), new { appSettings = settings })
However, it's got me wondering how the TextWriter and CancellationToken are included in the instantiation. I have spotted that JobHostConfiguration has methods for AddService, and I tried to inject my appSettings using this, but it failed with the error 'Exception binding parameter'.
So how does the CancellationToken get included in the instantiation, and what is JobHostConfiguration's AddService used for?
how does CancellationToken get included in the instantiation
You could use the WebJobsShutdownWatcher class: its Token lets you Register a callback that is called when the cancellation token is canceled, in other words when the WebJob is stopping.
static void Main()
{
    var cancellationToken = new WebJobsShutdownWatcher().Token;
    cancellationToken.Register(() =>
    {
        Console.Out.WriteLine("Do whatever you want before the webjob is stopped...");
    });

    var host = new JobHost();
    // The following code ensures that the WebJob will be running continuously
    host.RunAndBlock();
}
what is JobHostConfiguration AddService used for?
Add Services: override default services via calls to AddService<>. Common services to override are ITypeLocator and IJobActivator.
A custom IJobActivator allows you to use DI; you can use one to support instance methods.
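A minimal sketch of such an activator (resolving from an IServiceProvider here is an assumption; plug in whichever container you actually use):

public class MyJobActivator : IJobActivator
{
    private readonly IServiceProvider _serviceProvider;

    public MyJobActivator(IServiceProvider serviceProvider)
    {
        _serviceProvider = serviceProvider;
    }

    public T CreateInstance<T>()
    {
        // Resolve the job class (and its constructor dependencies) from the container.
        return (T)_serviceProvider.GetService(typeof(T));
    }
}

You then hook it up before building the host: config.JobActivator = new MyJobActivator(serviceProvider);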

Using Ninject in an Azure WebJobs but can't pass my db client

I'm using Ninject in a new Azure WebJobs project. One of my repositories requires a Db client to be passed. How do I pass this client?
My bindings class is:
public class NinjectBindings : Ninject.Modules.NinjectModule
{
    public override void Load()
    {
        Bind<IMyRepository>().To<MyRepository>();
    }
}
My Main function in the console app looks like this:
static void Main()
{
    var kernel = new StandardKernel();
    kernel.Load(Assembly.GetExecutingAssembly());

    var config = new Configuration();
    config.AddJsonFile("appsettings.json");

    DbClient _dbClient = new DbClient(config);
    IMyRepository myRepository = kernel.Get<IMyRepository>(); // This is where I get an error
}
My repository code, which expects the DbClient, looks like this:
public class MyRepository : IMyRepository
{
    private DbClient _client;

    public MyRepository(DbClient client)
    {
        _client = client;
    }
}
You need to set up a binding for your DbClient.
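A minimal sketch of that binding, keeping the client construction in Main as in the question and handing the instance to Ninject:

static void Main()
{
    var kernel = new StandardKernel();
    kernel.Load(Assembly.GetExecutingAssembly());

    var config = new Configuration();
    config.AddJsonFile("appsettings.json");

    // Bind the already-constructed client so Ninject can inject it
    // into MyRepository's constructor.
    kernel.Bind<DbClient>().ToConstant(new DbClient(config));

    IMyRepository myRepository = kernel.Get<IMyRepository>();
}

Alternatively, Bind<DbClient>().ToSelf().InSingletonScope() inside the module works if Ninject can construct the client's own dependencies.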
I'd suggest being cautious about when components are released. I've not seen a good Ninject example for WebJobs yet, so I've wired things up manually. But that's just my thoughts...

Blob path name provider for WebJob trigger

I have the following test code placed inside a WebJob project. It is triggered after any blob is created (or changed) under the "cBinary/test1/" path of the storage account.
The code works.
public class Triggers
{
    public void OnBlobCreated(
        [BlobTrigger("cBinary/test1/{name}")] Stream blob,
        [Blob("cData/test3/{name}.txt")] out string output)
    {
        output = DateTime.Now.ToString();
    }
}
The question is: how do I get rid of the ugly hard-coded strings "cBinary/test1/" and "cData/test3/"?
Hard-coding is one problem, but I also need to create and maintain a couple of such strings (blob directories) that are created dynamically, depending on the supported types. What's more, I need these string values in a couple of places, and I don't want to duplicate them.
I would like them to be placed in some kind of configuration provider that builds the blob path string depending on some enum, for instance.
How to do it?
You can implement INameResolver to resolve QueueNames and BlobNames dynamically. You can add the logic to resolve the name there. Below is some sample code.
public class BlobNameResolver : INameResolver
{
    public string Resolve(string name)
    {
        if (name == "blobNameKey")
        {
            // Do whatever you want to do to get the dynamic name
            return "the name of the blob container";
        }

        // Fall back to the literal name so other %tokens% still resolve.
        return name;
    }
}
And then you need to hook it up in Program.cs
class Program
{
    // Please set the following connection strings in app.config for this WebJob to run:
    // AzureWebJobsDashboard and AzureWebJobsStorage
    static void Main()
    {
        // Configure the JobHost
        var storageConnectionString = "your connection string";

        // Hook up the NameResolver
        var config = new JobHostConfiguration(storageConnectionString) { NameResolver = new BlobNameResolver() };
        config.Queues.BatchSize = 32;

        // Pass the configuration to the JobHost
        var host = new JobHost(config);

        // The following code ensures that the WebJob will be running continuously
        host.RunAndBlock();
    }
}
Finally, in Functions.cs:

public class Functions
{
    public async Task ProcessBlob([BlobTrigger("%blobNameKey%")] Stream blob)
    {
        // Do work here
    }
}
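If you want the resolver to build the path from an enum, as the question asks, the path logic can live in one small helper that both the resolver and any other code can call. A sketch (SupportedType and the concrete path values are assumptions based on the question):

using System.Collections.Generic;

public enum SupportedType { Binary, Data }

public static class BlobPaths
{
    private static readonly Dictionary<SupportedType, string> Paths =
        new Dictionary<SupportedType, string>
        {
            { SupportedType.Binary, "cBinary/test1/{name}" },
            { SupportedType.Data,   "cData/test3/{name}.txt" }
        };

    // Single place that owns the path strings.
    public static string For(SupportedType type)
    {
        return Paths[type];
    }
}

BlobNameResolver.Resolve can then return BlobPaths.For(...) for the key it recognizes, so each path string exists in exactly one place.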
There's some more information here.
Hope this helps.

Running windows service in separate thread and use autofac for DI

I'm trying to create a long-running Windows service, so I need to run the actual worker class on a separate thread to avoid the "service did not respond in a timely fashion" error when I right-click and select Start in the Windows Service Manager.
The worker class ("NotificationProcess") has a whole raft of dependencies, and I'm using Autofac to satisfy them.
I'm really not sure how to set up Autofac for the worker class. At the moment I'm getting errors telling me that the DbContext has been disposed when I go to use it in the worker class's Execute method.
I guess I'm looking for how to write a Windows service and use a new thread for the worker class, with its dependencies satisfied by Autofac.
I've googled and can't find any examples of this.
Any suggestions would be awesome.
Here's what I've got so far...
Program.cs:
static class Program
{
    static void Main()
    {
        using (var container = ServiceStarter.CreateAutoFacContainer())
        {
            var service = container.Resolve<NotificationService>();
            if (Environment.UserInteractive)
            {
                service.Debug();
            }
            else
            {
                ServiceBase.Run(container.Resolve<NotificationService>());
            }
        }
    }
}
The Service class:
public partial class NotificationService : ServiceBase
{
    private NotificationProcess _app;
    readonly ILifetimeScope _lifetimeScope;

    public NotificationService(ILifetimeScope lifetimeScope)
    {
        _lifetimeScope = lifetimeScope;
        InitializeComponent();
    }

    protected override void OnStart(string[] args)
    {
        _app = _lifetimeScope.Resolve<NotificationProcess>();
        _app.Start();
    }
}
The worker class:
public class NotificationProcess
{
    private Thread _thread;

    private readonly IBankService _bankService;
    private readonly IRateService _rateService;
    private readonly IEmailService _emailService;
    private readonly IRateChangeSubscriberService _rateChangeSubscriberService;
    private readonly IRateChangeNotificationService _rateChangeNotificationService;
    private readonly ILogManager _logManager;

    public NotificationProcess(IBankService bankService, ILogManager logManager, IRateService rateService, IEmailService emailService,
        IRateChangeSubscriberService rateChangeSubscriberService, IRateChangeNotificationService rateChangeNotificationService)
    {
        _bankService = bankService;
        _rateService = rateService;
        _emailService = emailService;
        _rateChangeSubscriberService = rateChangeSubscriberService;
        _rateChangeNotificationService = rateChangeNotificationService;
        _logManager = logManager;
    }

    public void Start()
    {
        _thread = new Thread(new ThreadStart(Execute));
        _thread.Start();
    }

    public void Execute()
    {
        try
        {
            var rateChangeToNotify = _rateService.GetRateChangesForNotification();
            foreach (var rateChange in rateChangeToNotify)
            {
                // do whatever business logic.....
            }
        }
        catch (Exception)
        {
            // Log here (e.g. via _logManager) so failures on the
            // background thread aren't silent.
        }
    }
}
The answer is actually simple: use scoping! You should do the following:
1. Register all services (such as DbContext) that should live for the duration of a request or action with the lifetime-scope lifestyle. You'll usually have a timer in your Windows service; each 'pulse' can be considered a request.
2. At the beginning of each request, begin a lifetime scope.
3. Within that scope, resolve the root object of the object graph and call its method.
4. Dispose the scope.
In your case that means you need to change your design, since NotificationService is resolved once and its dependencies are reused on another thread. This is a no-no in dependency injection land.
Here's an alternative design:
// This method is called on a background thread
// (possibly in a timely manner)
public void Run()
{
    try
    {
        using (var scope = container.BeginLifetimeScope())
        {
            var service = scope.Resolve<NotificationService>();
            service.Execute();
        }
    }
    catch (Exception ex)
    {
        // IMPORTANT: log exception.
        // Not logging an exception will leave us in the dark.
        // Not catching the exception will kill our service
        // because we run in a background thread.
    }
}
Using a lifetime scope allows you to get a fresh DbContext for every request, and it even allows you to run requests in parallel, with each request getting its own DbContext.
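For completeness, the corresponding Autofac registrations might look like this (a sketch; MyDbContext stands in for whatever concrete DbContext you use):

var builder = new ContainerBuilder();

// One DbContext per lifetime scope, i.e. per 'pulse' of the service.
builder.RegisterType<MyDbContext>().InstancePerLifetimeScope();
builder.RegisterType<NotificationService>();

var container = builder.Build();

Each time the timer fires, container.BeginLifetimeScope() then hands out a fresh DbContext, which is disposed together with the scope.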
