Is it possible to run a Change Feed Processor host as an Azure Web Job?

I'm looking to use the Change Feed Processor SDK to monitor an Azure Cosmos DB collection for changes; however, I have not seen clear documentation about whether the host can be run as an Azure Web Job. Can it? And if yes, are there any known issues or limitations versus running it as a Console App?
There are a good number of blog posts about using the CFP SDK; however, most of them vaguely mention running the host on a VM, and none of them show any examples of running the host as an Azure Web Job.
Even if it's possible, a side question: if such a host is deployed as a continuous Web Job and I set the Web Job's "Scale" setting to Multi Instance, what are the recommended approaches for making the extra instances run with different instance names, which the CFP SDK requires?

According to my research, a Cosmos DB trigger can be implemented with the WebJobs SDK:
static async Task Main()
{
    var builder = new HostBuilder();
    builder.ConfigureWebJobs(b =>
    {
        b.AddAzureStorageCoreServices();
        // Registers the Cosmos DB (change feed) trigger extension.
        b.AddCosmosDB(a =>
        {
            a.ConnectionMode = ConnectionMode.Gateway;
            a.Protocol = Protocol.Https;
            a.LeaseOptions.LeasePrefix = "prefix1";
        });
    });
    var host = builder.Build();
    using (host)
    {
        await host.RunAsync();
    }
}
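The host configuration above only wires up the extension; the actual change handler is a method the SDK discovers, decorated with the CosmosDBTrigger attribute. A minimal sketch, assuming the Microsoft.Azure.WebJobs.Extensions.CosmosDB 3.x package (the database, collection, and connection-setting names are placeholders):
using System.Collections.Generic;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public class ChangeFeedFunctions
{
    // Called with each batch of changed documents read from the change feed.
    public static void ProcessChanges(
        [CosmosDBTrigger(
            databaseName: "MyDatabase",
            collectionName: "MyCollection",
            ConnectionStringSetting = "CosmosDBConnection",
            LeaseCollectionName = "leases",
            CreateLeaseCollectionIfNotExists = true)] IReadOnlyList<Document> changes,
        ILogger logger)
    {
        logger.LogInformation("Received {Count} changed documents", changes.Count);
    }
}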
But it seems only the NuGet package for the C# SDK can be used; there are no clues for other languages. So you could refer to Compare Functions and WebJobs to balance your needs and cost.

The Cosmos DB Trigger for Azure Functions is actually a WebJobs extension: https://github.com/Azure/azure-webjobs-sdk-extensions/tree/dev/src/WebJobs.Extensions.CosmosDB
And it uses the Change Feed Processor.
Functions run on top of the WebJobs technology. So to answer the question: yes, you can run the Change Feed Processor on WebJobs, just make sure that:
Your App Service is set to Always On
If you plan to use multiple instances, make sure to set the InstanceName accordingly and not to a static/fixed value; probably something that identifies the WebJob instance, as in the sketch below.
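One way to give each instance a distinct name is to derive it from the environment, for example from WEBSITE_INSTANCE_ID, which App Service sets per instance. A minimal sketch, assuming the Microsoft.Azure.Cosmos v3 SDK (the connection string, database, container, and processor names are placeholders):
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

// Each scaled-out WebJob instance registers with its own name so the Change Feed
// Processor can distribute leases across instances.
var instanceName = Environment.GetEnvironmentVariable("WEBSITE_INSTANCE_ID")
                   ?? Environment.MachineName;

var cosmosClient = new CosmosClient("<cosmos-connection-string>");
var monitored = cosmosClient.GetContainer("MyDatabase", "MyCollection");
var leases = cosmosClient.GetContainer("MyDatabase", "leases");

ChangeFeedProcessor processor = monitored
    .GetChangeFeedProcessorBuilder<dynamic>("myProcessor",
        (IReadOnlyCollection<dynamic> changes, CancellationToken token) =>
        {
            // Handle the batch of changes here.
            return Task.CompletedTask;
        })
    .WithInstanceName(instanceName)
    .WithLeaseContainer(leases)
    .Build();

await processor.StartAsync();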

Related

Blazor WASM Azure Static Web App, Functions not working

I created a simple Blazor WASM webapp using C# .NET5. It connects to some Functions which in turn get some data from a SQL Server database.
I followed the tutorial of BlazorTrain: https://www.youtube.com/watch?v=5QctDo9MWps
Locally using Azurite to emulate the Azure stuff it all works fine.
But after deployment using GitHub Action the webapp starts but then it needs to get some data using the Functions and that fails. Running the Function in Postman results in a 503: Function host is not running.
I'm not sure what more I need to configure. I can't find the logging from Functions. I use the injected ILog, but can't find the log messages in the Azure Portal.
In Azure portal I see my 3 GET functions, but no option to test or see the logging.
With the help of #Aravid I found my problem.
Because I locally needed to tell my client the URL of the API I added a configuration in Client\wwwroot\appsettings.Development.json.
Of course this file doesn't get deployed.
After changing my code in Program.cs to:
var apiAddress = builder.Configuration["ApiAddress"] ?? $"{builder.HostEnvironment.BaseAddress}/api/";
builder.Services.AddHttpClient("Api", options =>
{
    options.BaseAddress = new Uri(apiAddress);
});
My client works again.
I also added my SqlServer connection string in the Application Settings of my Static Web App and the functions are working as well.
I hope somebody else will benefit from this. Took me several hours to figure it out ;)
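For completeness, the named "Api" client registered above is typically consumed through IHttpClientFactory; a minimal sketch (the service class and the relative endpoint are hypothetical):
using System.Net.Http;
using System.Threading.Tasks;

public class ForecastService
{
    private readonly HttpClient _http;

    // "Api" matches the name passed to AddHttpClient in Program.cs.
    public ForecastService(IHttpClientFactory factory)
    {
        _http = factory.CreateClient("Api");
    }

    // Calls a hypothetical Function endpoint relative to the configured BaseAddress.
    public Task<string> GetForecastsAsync() => _http.GetStringAsync("forecast");
}
Register it with builder.Services.AddScoped<ForecastService>(); and inject it into the components that need it.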

Only run code on first Azure WebApp instance

I have a web app running on Azure. The website is built in ASP.NET Core 3 and is running in a Docker container.
There is a background worker doing a few things such as database cleanup and sending emails built into the application.
My question is how I should best handle this if I need to scale out the application. That is, if I create multiple instances of it, what's the best way to make sure the background worker only runs on one of the instances, and that if that instance is removed, another takes over the job?
I realize one solution to this is to break the application apart and run the backgroundworker separately as an Azure function. But I would prefer to avoid this for cost (it's a hobby project) and complexity reasons.
So I'm interested in whether there are more ways of solving this that keep everything in one Docker container.
Is there for example an environment variable that I can query to get the current instance name and a list of all instances (then I can just say that the first instance in alphabetical order is the "primary" instance). And check this every so often to know if the current instance is the primary instance.
Sidenote: Azure Functions don't cost extra if you reuse the App Service Plan of your website. And complexity-wise they are probably less complex than what you are currently thinking about. But if your main goal is to run everything in a single container, you can achieve that as well:
You can use the WebJobs SDK to basically run the "event handler side" of Azure Functions, including the coordination of the required work. Use the Singleton attribute if you need additional limitation of concurrency. Infrastructure-wise, WebJobs require a storage account for how they manage scale.
You can run WebJobs in the same process as the rest of your ASP.NET Core Application. Some code to get you started if you want to go that route:
var builder = Host.CreateDefaultBuilder(args);

// Regular ASP.NET Core web host for the website part.
builder.ConfigureWebHostDefaults(webBuilder =>
{
    // webBuilder.UseStartup<Startup>();
    webBuilder.Configure(app =>
    {
        app.UseRouting();
        app.UseEndpoints(endpoints =>
        {
            endpoints.MapGet("/", async context =>
            {
                await context.Response.WriteAsync("Hello World!");
            });
        });
    });
});

// WebJobs SDK in the same process: storage-based coordination, storage triggers, timers.
builder.ConfigureWebJobs(b =>
{
    b.AddAzureStorageCoreServices();
    b.AddAzureStorage();
    b.AddTimers();
});

IHost host = builder.Build();
using (host)
{
    await host.RunAsync();
}
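As a sketch of the coordination part: a timer-triggered method registered through AddTimers above, marked with [Singleton] so that only one of the scaled-out instances runs it at a time (the class name, schedule, and body are illustrative):
using System;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public class MaintenanceJobs
{
    // Runs every 15 minutes; [Singleton] takes a blob lease in the storage account,
    // so only one instance executes the job at any given time.
    [Singleton]
    public static Task CleanupAsync([TimerTrigger("0 */15 * * * *")] TimerInfo timer, ILogger logger)
    {
        logger.LogInformation("Cleanup running on instance {Instance}",
            Environment.GetEnvironmentVariable("WEBSITE_INSTANCE_ID") ?? Environment.MachineName);
        // Database cleanup / e-mail sending goes here.
        return Task.CompletedTask;
    }
}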

App Settings not being observed by Core WebJob

I have a Core WebJob deployed into an Azure Web App. I'm using WebJobs version 3.0.6.
I've noticed that changes to Connection Strings and App Settings (added via the Azure web UI) are not being picked up immediately by the WebJob code.
This seems to correlate with the same Connection Strings and App Settings not being displayed on the app's Kudu env page straight away (although I acknowledge this may be a red herring and could be some Kudu caching thing which I'm unaware of).
I've deployed a few non-Core WebJobs in the past and have not come across this issue, so I wonder if it's Core-related? Although I can't see how that might affect configs showing up in Kudu.
I was having this issue the other day (where the configs were not getting picked up by the WebJob or shown in KUDU) and was getting nowhere, so left it. When I checked back the following day, the configs were now correctly showing in KUDU and being picked up by the WebJob. So I'd like to know what has happened in the meantime which means the configs are now being picked up as expected.
I've tried re-starting the WebJob and re-starting the app after making config changes but neither seem to have an effect.
It's also worth noting that I'm not loading appSettings.json during the program setup. That being said, the connection string being loaded was consistently the connection string from that file, i.e. my local machine SQL Server/DB. My understanding was always that anything in the Azure web UI would override any equivalent settings from config files. This post from David Ebbo indicates that calling AddEnvironmentVariables() during setup will cause the Azure configs to be observed, but that doesn't seem to be the case here. Has this changed, or is it loading the configs from this file by convention because it can't see the stuff from Azure?
Here's my WebJob Program code:
public static void Main(string[] args)
{
    var host = new HostBuilder()
        .ConfigureHostConfiguration(config =>
        {
            config.AddEnvironmentVariables();
        })
        .ConfigureWebJobs(webJobConfiguration =>
        {
            webJobConfiguration.AddTimers();
            webJobConfiguration.AddAzureStorageCoreServices();
        })
        .ConfigureServices((context, services) =>
        {
            var connectionString = context.Configuration.GetConnectionString("MyConnectionStringKey");
            services.AddDbContext<DatabaseContext>(options =>
                options
                    .UseLazyLoadingProxies()
                    .UseSqlServer(connectionString)
            );
            // Add other services
        })
        .Build();

    using (host)
    {
        host.Run();
    }
}
So my questions are:
How quickly should configs added/updated via the Azure web UI be displayed in KUDU?
Is the fact they're not showing in KUDU related to my Core WebJob also not seeing the updated configs?
Is appSettings.json getting loaded even though I'm not calling .AddJsonFile("appSettings.json")?
What can I do to force the new configs added via Azure to be available to my WebJob immediately?
The order in which configuration sources are specified is important, as this establishes the precedence with which settings will be applied if they exist in multiple locations. In the example below, if the same setting exists in both appsettings.json and in an environment variable, the setting from the environment variable will be the one that is used. The last configuration source specified “wins” if a setting exists in more than one location. The ASP.NET team recommends specifying environment variables last, so that the environment where your app is running can override anything set in deployed configuration files.
You can refer to the documentation on Azure App Service Application Settings and Connection Strings in ASP.NET Core for more details.
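One way to make that precedence explicit in a HostBuilder-based WebJob is to register the configuration sources yourself, with environment variables last; a minimal sketch (keeping or dropping the JSON file is up to you; App Service Application Settings and Connection Strings reach the process as environment variables):
var host = new HostBuilder()
    .ConfigureAppConfiguration(config =>
    {
        // File-based settings first (optional)...
        config.AddJsonFile("appsettings.json", optional: true, reloadOnChange: true);
        // ...environment variables last, so the values set in the Azure portal win.
        config.AddEnvironmentVariables();
    })
    .ConfigureWebJobs(b =>
    {
        b.AddTimers();
        b.AddAzureStorageCoreServices();
    })
    .Build();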

Concurrency in the Task-based API in Azure SDK for .NET

I currently have a couple of concurrency issues with the Task-based asynchronous API in the Azure SDK for .NET, version 3.0.2-prerelease.
I have a list of web site names
var webSites = new [] { "website1", "website2" };
and from these, I'm using the task-based API to create or delete the web sites. Both operations occasionally fail:
await Task.WhenAll(webSites.Select(x => webSiteClient.WebSites.CreateAsync(
    "westeuropewebspace",
    new WebSiteCreateParameters
    {
        SiteMode = WebSiteMode.Limited,
        ComputeMode = WebSiteComputeMode.Shared,
        Name = x,
        WebSpaceName = "something"
    }
)));
Every now and then, I get an exception complaining that the server farm "Default1" already exists. I get that this server farm is implicitly created for Free web sites, but there is currently no way to create this server farm through the API before creating the web sites (only the "DefaultServerFarm" can be).
When deleting, something similar happens:
await Task.WhenAll(webSites.Select(x => webSiteClient.WebSites.DeleteAsync(
    "westeuropewebspace",
    x,
    new WebSiteDeleteParameters
    {
        DeleteAllSlots = true,
        DeleteEmptyServerFarm = true,
        DeleteMetrics = true,
    }
)));
Often (about every second time), I get an exception saying that "website2" could not be found, although it definitely existed. The web site is deleted, though.
Update:
I have serialized this second Task.WhenAll into a foreach loop and I still get the exception. The only difference now is that when deleting "website1" fails, "website2" still exists in the cloud (because the second delete request is not sent) and I have to delete it manually through the portal.
You are right: the create-site API also tries to create a server farm implicitly, and if called concurrently that may cause conflicts. A safer way is to create a server farm explicitly using the API and then use that server farm when creating web sites. That way you explicitly control the placement of sites into server farms and there are no implicit server farm creations.
The Azure SDK contains a method to create a server farm explicitly:
https://github.com/Azure/azure-sdk-for-net/blob/master/src/WebSiteManagement/Generated/ServerFarmOperations.cs
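Until you create the farm up front, a low-risk workaround (reusing only the calls already shown in the question) is to serialize the create requests so the implicit server-farm creation cannot race:
// Creating the sites one at a time means two requests never race to create
// the implicit "Default1" server farm.
foreach (var name in webSites)
{
    await webSiteClient.WebSites.CreateAsync(
        "westeuropewebspace",
        new WebSiteCreateParameters
        {
            SiteMode = WebSiteMode.Limited,
            ComputeMode = WebSiteComputeMode.Shared,
            Name = name,
            WebSpaceName = "something"
        });
}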

How-to: Create role instances on emulator

How do I create new instances of some role via C# using the Azure emulator? Is there some guide about that? There are manuals about creating instances in the cloud, but not in the emulator.
So far I know that:
I need to change a config file. Is it the config in the solution, or the one in some temp deployment folder?
I need to use the csrun tool. How do I pick the params?
UPD
Got it.
To change the count of instances on the emulator, you have to:
update the 'ServiceConfiguration.cscfg' file in the bin folder
run the 'csrun' tool with params: string.Format("/update:{0};\"{1}\"", deploymentId, "<path to ServiceConfiguration.cscfg>")
where deploymentId is obtained like this:
// get the id from RoleEnvironment with a regex
var pattern = Regex.Escape("(") + @"\d+" + Regex.Escape(")");
var input = RoleEnvironment.DeploymentId;
var m = Regex.Match(input, pattern);
var deploymentId = m.ToString().Replace("(", string.Empty).Replace(")", string.Empty);
If you have trouble running csrun from code, read this:
http://social.msdn.microsoft.com/Forums/en/windowsazuredevelopment/thread/62ca1372-2388-4181-9dbd-8fbba470ea77
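Putting the pieces together, the update can be run from code roughly like this (the csrun.exe install path is an assumption for this sketch; adjust it to your SDK version):
using System.Diagnostics;

var csrunPath = @"C:\Program Files\Microsoft SDKs\Windows Azure\Emulator\csrun.exe"; // assumed install path
var cscfgPath = @"<path to ServiceConfiguration.cscfg in the deployment folder>";
var arguments = string.Format("/update:{0};\"{1}\"", deploymentId, cscfgPath);

// Launch csrun and wait for the emulator update to finish.
var startInfo = new ProcessStartInfo(csrunPath, arguments)
{
    UseShellExecute = false,
    CreateNoWindow = true
};
using (var process = Process.Start(startInfo))
{
    process.WaitForExit();
}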
In the local emulator, you need to modify the CSCFG file under the deployment .csx folder, instead of the one in your source code folder, since the local emulator fires your application up from that folder.
Once you have modified and saved your CSCFG file, for example the count of the instances, you can retrieve the new value from your code immediately. But if you want the local emulator to detect the changes and perform the related actions, such as increasing the VMs or invoking the Configuration_Changed method, you need to execute
csrun /update:<deployment-id>;"<path to ServiceConfiguration.cscfg>"
You can retrieve the deployment id from the compute emulator UI.
You can find the instance count in the ServiceConfiguration.cscfg in your Azure project
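And for reading the value back from code, as mentioned above, the current instance count is available through the service runtime (a sketch, assuming Microsoft.WindowsAzure.ServiceRuntime is referenced):
using Microsoft.WindowsAzure.ServiceRuntime;

// Number of instances of the current role as the emulator (or cloud) currently reports it.
int instanceCount = RoleEnvironment.CurrentRoleInstance.Role.Instances.Count;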
