I have an Azure Function that runs off of a queue trigger. The repository has a method that grabs the connection string from the ConnectionStrings collection.
return System.Configuration.ConfigurationManager.ConnectionStrings["MyDataBase"].ToString();
This works great for the most part, but intermittently it throws a null reference exception.
Is there a way I can make this more robust?
Do Azure Functions sometimes fail to get the settings?
Should I store the setting in a different section?
I should also mention that this runs thousands of times a day, but I only see this pop up about 100 times.
Runtime version: 1.0.12299.0
Are you reading the configuration for every function call? You should consider reading it once (e.g. into a static Lazy<string>) and reusing it for all function invocations.
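For illustration, a minimal sketch of that caching approach (the field name is mine, not from the original code; Lazy<T> is thread-safe by default):

private static readonly Lazy<string> cachedConnectionString = new Lazy<string>(() =>
    System.Configuration.ConfigurationManager.ConnectionStrings["MyDataBase"].ConnectionString);

// Each function invocation then reads cachedConnectionString.Value instead of
// hitting ConfigurationManager again.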
Maybe there is a concurrency issue when multiple threads access the code. Putting a lock around the code could help as well. ConfigurationManager.ConnectionStrings should be thread-safe, but maybe it isn't in the V1 runtime.
A similar problem was posted here, but that concerned app settings and not connection strings. I don't think using CloudConfigurationManager is the correct solution.
You can also try putting the connection string into the app settings, unless you are using Entity Framework.
Connection strings should only be used with a function app if you are using entity framework. For other scenarios use App Settings. Click to learn more.
(via Azure Portal)
Not sure if this applies to the V1 runtime as well.
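If you do move the value to App Settings, a minimal sketch of reading it back (the setting name "MyDataBase" is an assumption; in the V1 runtime app settings are also surfaced as environment variables):

// Read the value from App Settings instead of the ConnectionStrings section.
string connectionString =
    System.Configuration.ConfigurationManager.AppSettings["MyDataBase"]
    ?? Environment.GetEnvironmentVariable("MyDataBase");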
The solution was to cache the connection string in a private static string and only read from the configuration when that cache is empty. I also added a retry that pauses half a second. This basically stopped the exception from happening.
private static string connectionString = String.Empty;

private string getConnectionString(int retryCount)
{
    if (String.IsNullOrEmpty(connectionString))
    {
        if (System.Configuration.ConfigurationManager.ConnectionStrings["MyEntity"] != null)
        {
            connectionString = System.Configuration.ConfigurationManager.ConnectionStrings["MyEntity"].ToString();
        }
        else
        {
            if (retryCount > 2)
            {
                throw new Exception("Failed to Get Connection String From Application Settings");
            }

            // Pause half a second before retrying the read.
            System.Threading.Thread.Sleep(500);
            retryCount++;
            return getConnectionString(retryCount);
        }
    }
    return connectionString;
}
I don't know if this is perfect, but it works. I went from seeing this exception 30 times a day to none.
I've got an Azure webjob with several queue-triggered functions. The SDK documentation at https://learn.microsoft.com/en-us/azure/app-service-web/websites-dotnet-webjobs-sdk-storage-queues-how-to#config defines the MaxDequeueCount property as:
The maximum number of retries before a queue message is sent to a
poison queue (default is 5).
but I'm not seeing this behavior. In my webjob I've got:
JobHostConfiguration config = new JobHostConfiguration();
config.Queues.MaxDequeueCount = 1;
JobHost host = new JobHost(config);
host.RunAndBlock();
and then I've got a queue-triggered function in which I throw an exception:
public void ProcessQueueMessage([QueueTrigger("azurewejobtestingqueue")] string item, TextWriter logger)
{
    if (item == "exception")
    {
        throw new Exception();
    }
}
Looking at the WebJobs dashboard I see that the SDK makes 5 attempts (5 is the default as stated above):
and after the 5th attempt the message is moved to the poison queue. I expect to see 1 retry (or no retries?), not 5.
UPDATE: Enabled detailed logging for the web app and opted to save those logs to an Azure blob container. Found some logs relevant to my problem located in the azure-jobs-host-archive container. Here's an example showing an item with a dequeue count of 96:
{
  "Type": "FunctionCompleted",
  "EndTime": "2017-02-22T00:07:40.8133081+00:00",
  "Failure": {
    "ExceptionType": "Microsoft.Azure.WebJobs.Host.FunctionInvocationException",
    "ExceptionDetails": "Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Exception while executing function: ItemProcessor.ProcessQueueMessage ---> MyApp.Exceptions.MySpecialAppExceptionType: Exception of type 'MyApp.Exceptions.MySpecialAppExceptionType' was thrown."
  },
  "ParameterLogs": {},
  "FunctionInstanceId": "1ffac7b0-1290-4343-8ee1-2af0d39ae2c9",
  "Function": {
    "Id": "MyApp.Processors.ItemProcessor.ProcessQueueMessage",
    "FullName": "MyApp.Processors.ItemProcessor.ProcessQueueMessage",
    "ShortName": "ItemProcessor.ProcessQueueMessage",
    "Parameters": [
      {
        "Type": "QueueTrigger",
        "AccountName": "MyStorageAccount",
        "QueueName": "stuff-processor",
        "Name": "sourceFeedItemQueueItem"
      },
      {
        "Type": "BindingData",
        "Name": "dequeueCount"
      },
      {
        "Type": "ParameterDescriptor",
        "Name": "logger"
      }
    ]
  },
  "Arguments": {
    "sourceFeedItemQueueItem": "{\"SourceFeedUpdateID\":437530,\"PodcastFeedID\":\"2d48D2sf2\"}",
    "dequeueCount": "96",
    "logger": null
  },
  "Reason": "AutomaticTrigger",
  "ReasonDetails": "New queue message detected on 'stuff-processor'.",
  "StartTime": "2017-02-22T00:07:40.6017341+00:00",
  "OutputBlob": {
    "ContainerName": "azure-webjobs-hosts",
    "BlobName": "output-logs/1ffd3c7b012c043438ed12af0d39ae2c9.txt"
  },
  "ParameterLogBlob": {
    "ContainerName": "azure-webjobs-hosts",
    "BlobName": "output-logs/1cf2c1b012sa0d3438ee12daf0d39ae2c9.params.txt"
  },
  "LogLevel": "Info",
  "HostInstanceId": "d1825bdb-d92a-4657-81a4-36253e01ea5e",
  "HostDisplayName": "ItemProcessor",
  "SharedQueueName": "azure-webjobs-host-490daea03c70316f8aa2509438afe8ef",
  "InstanceQueueName": "azure-webjobs-host-d18252sdbd92a4657d1a436253e01ea5e",
  "Heartbeat": {
    "SharedContainerName": "azure-webjobs-hosts",
    "SharedDirectoryName": "heartbeats/490baea03cfdfd0416f8aa25aqr438afe8ef",
    "InstanceBlobName": "zd1825bdbdsdgga465781a436q53e01ea5e",
    "ExpirationInSeconds": 45
  },
  "WebJobRunIdentifier": {
    "WebSiteName": "myappengine",
    "JobType": "Continuous",
    "JobName": "ItemProcessor",
    "RunId": ""
  }
}
What I'm further looking for, though, are logs showing the detail for a particular queue item: whether processing succeeds (and the message is removed from the queue) or fails due to an exception and the message is placed in the poison queue. So far I haven't found any logs showing that detail. The log files referenced in the output above do not contain data of this sort.
UPDATE 2: Looked at the state of my poison queue and it seems like it could be a smoking gun, but I'm too dense to put 2 and 2 together. Looking at the screenshot of the queue below, you can see the message with the ID 431210 (left column) in there many times. The fact that it appears multiple times says to me that the message in the original queue is not failing properly.
As mentioned by Rob W, this issue exists when using WindowsAzure.Storage > 7.1.2. The issue has apparently been fixed in issue #1141, but this has not yet made it into a release.
Contributor asifferman has shared a code snippet in a comment on issue #985 that appears to resolve the problem (it worked perfectly for me).
In case of link rot, and to meet SO rules, here's the post along with the code snippet:
For those (like me) who cannot wait the next release to get the
WebJobs SDK to work with the latest releases of Azure Storage, and
based on the explanations of #brettsam, you can simply write a custom
CustomQueueProcessorFactory to create a new CloudQueueMessage in
CopyMessageToPoisonQueueAsync.
namespace ConsoleApplication1
{
    using Microsoft.Azure.WebJobs.Host.Queues;
    using Microsoft.WindowsAzure.Storage.Queue;
    using System.Threading;
    using System.Threading.Tasks;

    public class CustomQueueProcessorFactory : IQueueProcessorFactory
    {
        public QueueProcessor Create(QueueProcessorFactoryContext context)
        {
            return new CustomQueueProcessor(context);
        }

        private class CustomQueueProcessor : QueueProcessor
        {
            public CustomQueueProcessor(QueueProcessorFactoryContext context)
                : base(context)
            {
            }

            protected override Task CopyMessageToPoisonQueueAsync(CloudQueueMessage message, CloudQueue poisonQueue, CancellationToken cancellationToken)
            {
                var newMessage = new CloudQueueMessage(message.Id, message.PopReceipt);
                newMessage.SetMessageContent(message.AsBytes);
                return base.CopyMessageToPoisonQueueAsync(newMessage, poisonQueue, cancellationToken);
            }
        }
    }
}
Then in your Main, you just have to set the custom queue processor
factory in the job host configuration:
var config = new JobHostConfiguration();
config.Queues.QueueProcessorFactory = new CustomQueueProcessorFactory();
I could get it to work with WindowsAzure.Storage 8.1.1 and
Microsoft.Azure.WebJobs 2.0.0. Hope that helps!
If you are still seeking an answer, we tried some of the answers listed without success. It turns out that it was a version issue between the Storage SDK (WindowsAzure.Storage) and the WebJobs SDK (Microsoft.Azure.WebJobs). To fix it, we ended up having to downgrade our version of the Storage SDK to 7.2.1 (we had recently upgraded to 8.1.1). Based on the issue below, the engineers are now aware of the problem and will hopefully have it fixed soon:
https://github.com/Azure/azure-webjobs-sdk/issues/1045
The MaxDequeueCount property works correctly for me if I configure it, so it is very odd that it is not working for you. When I set config.Queues.MaxDequeueCount = 2; I get the expected result. Please refer to the screenshot.
We could also use dequeueCount to control the number of retries. The following is demo code for no retries:
public void ProcessQueueMessage([QueueTrigger("queue")] string item, int dequeueCount, TextWriter logger)
{
    if (dequeueCount == 1)
    {
        if (item == "exception")
        {
            throw new Exception();
        }
        logger.WriteLine($"NewMsge: {item}");
        Console.WriteLine($"NewMsge: {item}");
    }
}
For the log info, please refer to the screenshot.
I suspect it's because you're not actually running the binaries that you think you are in Azure. This one threw me for a loop as well.
When you're running triggered WebJobs on Azure, publishing a new version of the WebJob doesn't cause the old triggered WebJob to be immediately unloaded and the new one started. If you look at your WebJob logs, I suspect you will not see a restart when you republish.
This is because Kudu by default copies all of your WebJob files to a temp directory and executes them. From the Kudu WebJob docs:
The WebJob is copied to a temporary directory under %TEMP%\jobs\{job type}\{job name}\{random name} and will run from there. This option
prevents the original WebJob binaries from being locked, which might
cause issues redeploying the WebJob, for example updating an .exe file
that is currently running.
The only success I've had in making sure that a newly published triggered WebJob is actually running is to do the following:
1. Log into the Kudu console at https://yourappname.scm.azurewebsites.net. You'll use the same credentials that you do when logging into the Azure Portal.
2. Once logged in, click on the Process Explorer menu option at the top. Find your WebJob process that's currently running, and kill it.
3. FTP into your Web App. Browse to the directory containing your WebJob code, and delete it. It should be under /app_data/jobs/triggered/[your webjob name].
4. Hop over to the portal, browse to the Web App management blade that hosts the WebJob, click on the WebJobs menu option, and confirm that the old WebJob is no longer there.
5. Publish the new WebJob from Visual Studio.
That should guarantee that you're running the code you publish. Hope this helps.
I am seeing the same thing, where messages go way past the max dequeue count. I will post more details in a bit, but I am also seeing what appears to be a very large number of messages end up in the poison queue. So I suspect that it is adding to the poison queue after 5 attempts, but then keeps retrying, which ends up putting hundreds of copies in the poison queue.
For anyone using the Azure WebJobs v3.x SDK:
In v3.x, host.json does not work for WebJobs.
Instead, version 3.x uses the standard ASP.NET Core APIs, so you need to configure the queue options using the ConfigureWebJobs method:
static async Task Main()
{
    var builder = new HostBuilder();
    builder.ConfigureWebJobs(b =>
    {
        b.AddAzureStorageCoreServices();
        b.AddAzureStorage(a =>
        {
            a.BatchSize = 8;
            a.NewBatchThreshold = 4;
            a.MaxDequeueCount = 4;
            a.MaxPollingInterval = TimeSpan.FromSeconds(15);
        });
    });
    var host = builder.Build();
    using (host)
    {
        await host.RunAsync();
    }
}
Docs: https://learn.microsoft.com/pt-pt/azure/app-service/webjobs-sdk-how-to#queue-storage-trigger-configuration
After looking through the documentation and trying to find other examples of developers getting this error, I am a bit stuck. We are working with NServiceBus 6 and are occasionally getting a System.MethodAccessException in our message handlers on the call to return Task.CompletedTask. It seems to only occur when the handler is deployed in an Azure Worker Role (as opposed to running in the emulator). We are using the Azure Service Bus transport.
public Task Handle(UpdatePatientAccommodationCode message, IMessageHandlerContext context)
{
    Console.WriteLine($"Handling [{message.GetType()}]");
    var patientVisit = LoadByExternalPatientId(message.ClientId, message.ExternalPatientId);
    var mappedEvent = patientVisit.HandleCommand(message);
    if (patientVisit.IsEventAdded)
        PatientVisitEventStore.Save(patientVisit);
    return mappedEvent == null ? Task.CompletedTask : context.Publish(mappedEvent);
}
The actual exception looks like this:
System.MethodAccessException: Attempt by method 'XXX.Handlers.PatientVisitHandler.Handle(XXX.UpdatePatientAccommodationCode, NServiceBus.IMessageHandlerContext)' to access method 'System.Threading.Tasks.Task.get_CompletedTask()' failed.
at XXX.Handlers.PatientVisitHandler.Handle(UpdatePatientAccommodationCode message, IMessageHandlerContext context) in PatientVisitHandler.cs: line 314
at NServiceBus.InvokeHandlerTerminator.Terminate(IInvokeHandlerContext context) in C:\BuildAgent\work\3206e2123f54fce4\src\NServiceBus.Core\Pipeline\Incoming\InvokeHandlerTerminator.cs: line 24
at NServiceBus.LoadHandlersConnector.<Invoke>d__1.MoveNext() in C:\BuildAgent\work\3206e2123f54fce4\src\NServiceBus.Core\Pipeline\Incoming\LoadHandlersConnector.cs: line 40
I suspect your code locally runs on .NET Framework 4.6.x, which supports Task.CompletedTask. When you deploy to Cloud Services and use an OS family lower than version 5, it won't have support for 4.6.x. You will either need to use a startup task to install 4.6.x or migrate to OS Family 5 (Server 2016).
That is strange. Judging by the reference source of Task.CompletedTask, I can't come up with a scenario where this could happen. The task that is statically cached is initialized with RAN_TO_COMPLETION and DO_NOT_DISPOSE. Based on that, I would suggest you determine whether you are using .NET Framework version 4.6 or higher. If you are and you still see the exception, try to replace Task.CompletedTask with
static class TaskEx
{
    public static readonly Task CompletedTask = Task.FromResult(0);
}
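In the handler above, that would be a one-line change (sketch based on the question's code):

return mappedEvent == null ? TaskEx.CompletedTask : context.Publish(mappedEvent);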
I'm using an Azure WebJob to get messages from a Service Bus queue successfully.
But I wanted to use this same WebJob to run some methods every 5 seconds.
I've tried the following approach, and locally it runs fine, but when I publish it, it only runs once.
There are no errors in the Azure logs.
What am I doing wrong?
Thanks for helping.
static void Main()
{
    try
    {
        var testTimer = new System.Threading.Timer(e => TestMethod(), null, TimeSpan.FromSeconds(0), TimeSpan.FromSeconds(5));
        SetupJobHost();
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex);
    }
}

private static void TestMethod()
{
    Console.WriteLine("Test");
}
I recommend taking a different approach and using a TimerTrigger. You can use a simple cron expression that will cause your method to be executed on a set schedule. If you go this route, make sure that you deploy your WebJob as a triggered job (not continuous!) and that you call the JobHostConfiguration's UseTimers() method before calling the JobHost's RunAndBlock method. This is a much easier and cleaner approach than rolling your own timer service.
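A minimal sketch of that setup, assuming the Microsoft.Azure.WebJobs.Extensions NuGet package (which provides UseTimers() and TimerTrigger); the schedule and method name are illustrative, not from the question:

using System;
using System.IO;
using Microsoft.Azure.WebJobs;

public class Program
{
    static void Main()
    {
        var config = new JobHostConfiguration();
        config.UseTimers();                 // enable the TimerTrigger extension
        var host = new JobHost(config);
        host.RunAndBlock();
    }

    // Fires every 5 seconds; the six-field cron expression is seconds-first.
    public static void TestMethod([TimerTrigger("*/5 * * * * *")] TimerInfo timerInfo, TextWriter logger)
    {
        logger.WriteLine("Test");
    }
}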
Based on your description, I have tested your code and reproduced the problem on my side.
After some trials, I found an issue with the System.Threading.Timer class: if we don't refer to the Timer instance after the initial assignment, it will get garbage collected.
Please try the methods below to see whether they help:
Method 1: Deploy your WebJob in Debug mode without changing any code.
Method 2: Change your code as follows and deploy it to Azure in Release mode.
try
{
    var testTimer = new System.Threading.Timer(e => TestMethod(), null, TimeSpan.FromSeconds(0), TimeSpan.FromSeconds(5));
    SetupJobHost();
    testTimer.Dispose();
}
catch (Exception ex)
{
    Console.WriteLine(ex);
}
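An alternative (my suggestion, not part of the original answer) is to keep the timer rooted explicitly with GC.KeepAlive, which makes the intent clearer than the Dispose call:

var testTimer = new System.Threading.Timer(e => TestMethod(), null, TimeSpan.FromSeconds(0), TimeSpan.FromSeconds(5));
SetupJobHost();
GC.KeepAlive(testTimer);   // prevents the timer from being collected while the host runs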
I recommend reading this article for a better understanding of this interesting issue.
In addition, you could achieve your purpose by using the following code:
System.Timers.Timer sysTimer = new System.Timers.Timer(TimeSpan.FromSeconds(5).TotalMilliseconds);
sysTimer.Elapsed += (s, e) => TestMethod();
sysTimer.Enabled = true;
I have a console-based application as a WebJob. Internally I am trying to map a CloudDrive using the storage connection string UseDevelopmentStorage=true.
It is throwing the exception ERROR_AZURE_DRIVE_DEV_PATH_NOT_SET. I searched for this error and found that WebJobs do not run locally against the Azure emulator. Is this information still valid?
Is there any plan to provide emulator (storage) support for WebJobs in the near future, say in a week or so?
Thanks.
The information is still valid - we don't support the Azure emulator.
We have that work item on our backlog but I cannot give you any ETA.
Boo hoo Microsoft... This seems rather stupid given that you want us to start adopting the use of Azure Web Jobs!
There are a few new lines of code in the current version which I believe solve this issue:
static void Main()
{
    var config = new JobHostConfiguration();
    if (config.IsDevelopment)
    {
        config.UseDevelopmentSettings();
    }
    // Pass the configuration to the host so the development settings take effect.
    var host = new JobHost(config);
    // The following code ensures that the WebJob will be running continuously
    host.RunAndBlock();
}