Azure Service Bus exception on load

In production, I'm noticing the error messages below quite frequently from an Azure Service Bus processor. What could be the reason? I can't replicate it in lower environments.
MessageReceiver error (Action=Complete, ClientId=MessageReceiver1topicname/Subscriptions/subscriptionname, EntityPath=topicname/Subscriptions/subscriptionname, Endpoint=ex.servicebus.windows.net)
Message processing error (Action=Complete, ClientId=MessageReceiver1topicname/Subscriptions/subscriptionname, EntityPath=topicname/Subscriptions/subscriptionname, Endpoint=ex.servicebus.windows.net)
The processor is an AKS-hosted Azure Function using a Service Bus trigger.
SDK version: "Microsoft.Azure.WebJobs.Extensions.ServiceBus" Version="4.1.2"
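For context, Complete failures that appear only under load are often the message lock expiring before the function finishes, so completion is attempted against a lost lock. If that is what is happening here, a first experiment could be lengthening maxAutoRenewDuration and/or lowering concurrency in host.json; the snippet below is an illustrative sketch (values are placeholders) using the host.json schema for this extension version:

{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "messageHandlerOptions": {
        "autoComplete": true,
        "maxConcurrentCalls": 16,
        "maxAutoRenewDuration": "00:10:00"
      }
    }
  }
}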

Related

Deploying Azure App Service Webjob Using .Net 6 Fails to Start "Failed to bind to address http://127.0.0.1:5000: address already in use"

I ran into an issue while migrating an Azure App Service from .NET 5 to .NET 6 while also updating the stack configuration in the Azure Portal to use .NET version ".NET 6 (LTS)". The app service only contains continuous WebJobs that process Service Bus messages. Locally, the WebJob project runs fine, but when deployed to Azure it fails to start. In Kudu tools I'm presented with this error:
[01/03/2023 18:21:32 > 1b0f90: ERR ] Unhandled exception. System.IO.IOException: Failed to bind to address http://127.0.0.1:5000: address already in use.
[01/03/2023 18:21:32 > 1b0f90: ERR ] ---> Microsoft.AspNetCore.Connections.AddressInUseException: Only one usage of each socket address (protocol/network address/port) is normally permitted.
[01/03/2023 18:21:32 > 1b0f90: ERR ] ---> System.Net.Sockets.SocketException (10048): Only one usage of each socket address (protocol/network address/port) is normally permitted.
Eventually I was able to get past the error by applying the app setting ASPNETCORE_URLS=http://localhost:5001 to the app service, and applying the same app setting to every .NET 6 app service running WebJobs in the same App Service Plan, except that I have to increment the port to a different value for each.
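Concretely, each WebJob app in the plan ends up with its own loopback port, e.g.:

First WebJob app:  ASPNETCORE_URLS = http://localhost:5001
Second WebJob app: ASPNETCORE_URLS = http://localhost:5002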
My question is: is there another workaround for this issue? I find adding unique port assignments to every WebJob running .NET 6 cumbersome and not ideal, and this issue will remain a serious gotcha for future development.
Here are the dependencies I am pulling in:
Azure.Messaging.ServiceBus Version=7.11.0
Microsoft.Azure.WebJobs Version=3.0.32
Microsoft.ApplicationInsights.AspNetCore Version=2.21.0
Microsoft.ApplicationInsights.NLogTarget Version=2.21.0
Microsoft.Azure.Services.AppAuthentication Version=1.6.2
Microsoft.Azure.WebJobs.Extensions Version=4.0.1
Microsoft.Azure.WebJobs.Extensions.ServiceBus Version=5.3.0
Microsoft.Azure.WebJobs.Extensions.Storage Version=5.0.1
NLog Version=5.0.4
NLog.Targets.Seq Version=2.1.0
NLog.Web.AspNetCore Version=5.1.4
To reproduce:
Create two or more .NET 6 applications that only implement WebJobs. My WebJob functions process Service Bus topic messages; I'm not sure whether that matters for reproduction.
Deploy the WebJob applications to the same App Service Plan.
In the configuration blade settings tab for each web app, make sure the runtime stack is set to ".NET 6 (LTS)"; keep the rest as default.
Now when you go to view the webjobs in Azure Portal you will see that the job is stuck in a restart cycle.
The problem seems to center on setting the stack version to ".NET 6 (LTS)". From this article, it appears this setting makes the App Service run Kestrel with YARP; I'm guessing feature parity with the previous stack is not 1:1.
An example project that reproduces the issue can be found on GitHub. Follow the README found in .\Scripts to deploy the example to Azure.
Note: there seems to be an issue with the template setting the stack to .NET 6. This may need to be done manually post-deployment to fully reproduce the issue.
I created two .NET 6 console applications and deployed them as WebJobs in the same Azure App Service.
Make sure to enable the Always On option under App Service => Configuration => General Settings to ensure the WebJobs run continuously.
I updated the stack settings runtime version to .NET 6.
> Now when you go to view the webjobs in Azure Portal you will see that the job is stuck in a restart cycle.
Yes, I got stuck with the same issue: the WebJob I published second shows the Pending Restart status.
When I click on Logs, I can see the error below logged:
> Make sure that you are setting a connection string named `AzureWebJobsDashboard` in your Microsoft Azure Website configuration by using the following format `DefaultEndpointsProtocol=https;AccountName=**NAME**;AccountKey=**KEY**` pointing to the Microsoft Azure Storage account where the Microsoft Azure WebJobs Runtime logs are stored.
In the second console app, I hadn't configured the WebJobs host or the Storage Account.
I updated the code and published again.
Now I can see both jobs are in the Running state.
My Program.cs file:
// See https://aka.ms/new-console-template for more information
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

namespace WebJobApp
{
    class Program
    {
        static async Task Main()
        {
            var builder = new HostBuilder();
            builder.UseEnvironment(EnvironmentName.Development);

            // Register the WebJobs host and the Azure Storage bindings.
            builder.ConfigureWebJobs(b =>
            {
                b.AddAzureStorageCoreServices();
                b.AddAzureStorageQueues();
            });
            builder.ConfigureLogging((context, b) =>
            {
                b.AddConsole();
            });

            var host = builder.Build();
            using (host)
            {
                await host.RunAsync();
            }
        }
    }
}
Reference taken from MSDoc.
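For reference, the two storage settings the error message asks for are ordinary app settings (or connection strings) on the App Service; the account name and key below are placeholders:

AzureWebJobsStorage   = DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>
AzureWebJobsDashboard = DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>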

DSC cannot create SSL/TLS secure channel in the SendConfigurationApply function

I am trying to add a VM scale set to an agent pool in Azure DevOps. I have set up the VMSS and am now adding DSC as an extension, which will then call "SendConfigurationApply" to connect the VMs to the agent pool.
However, when I upgrade the VMs I get this error:
VM has reported a failure when processing extension 'DSC'. Error
message: "DSC Configuration 'AgentsInstalled' completed with error(s).
Following are the first few: The request was aborted: Could not create
SSL/TLS secure channel. The PowerShell DSC resource
'[AgentsResource]Agents' with SourceInfo
'C:\Packages\Plugins\Microsoft.Powershell.DSC\2.83.1.0\DSCWork\agentsConnectedInt.ps1.1\agentsConnectedInt.ps1::26::9::AgentsResource'
threw one or more non-terminating errors while running the Set
functionality. These errors are logged to the ETW channel called
Microsoft-Windows-DSC/Operational. Refer to this channel for more
details. The SendConfigurationApply function did not succeed." More
information on troubleshooting is available at
https://aka.ms/VMExtensionDSCWindowsTroubleshoot
I have set the protocols on the VMs to use TLS 1.1 and 1.2.
The VMs are on an older version of Windows 10 client because we need to run Edge Legacy. Could that be related to this issue?
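Possibly related: if the failing requests come from .NET Framework code inside the DSC resource, older Windows builds default it to pre-TLS 1.2 protocols even when the OS allows TLS 1.2. Two hedged sketches of the usual mitigations (the registry approach is machine-wide and only affects newly started processes):

# Opt the current PowerShell/.NET session into TLS 1.2 before the resource makes web requests
[Net.ServicePointManager]::SecurityProtocol = [Net.ServicePointManager]::SecurityProtocol -bor [Net.SecurityProtocolType]::Tls12

# Or opt all .NET Framework apps on the node into strong crypto
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value 1 -Type DWord
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value 1 -Type DWord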

Azure Functions - Queue trigger consumes message on fail

This issue only occurs when I use the Azure Portal Editor. If I upload from Visual Studio, this issue does not occur, but I cannot upload from Visual Studio due to this unrelated bug: Azure Functions - only use connection string in Application Settings in cloud for queue trigger.
When using the Azure Portal Editor, if I throw an exception from C# or use context.done(error) from JavaScript, Application Insights shows an error occurred, but the message is simply consumed. The message is not retried, and it does not go to a poison queue.
The same code for C# correctly retries when uploaded from Visual Studio, so I believe this is a configuration issue. I have tried modifying the host.json file for the Azure Portal Editor version to:
{
  "queues": {
    "visibilityTimeout": "00:00:15",
    "maxDequeueCount": 5
  }
}
but the message was still getting consumed instead of retried. How do I fix this so that I can get messages to retry when coding with the Azure Portal Editor?
Notes:
In JavaScript, context.bindingData.dequeueCount returns 0.
Azure Function runtime version: 1.0.11913.0 (~1).
I'm using a Consumption App Plan.
I was using the manual trigger from the Azure Portal Editor, which has different behavior from creating a message in the queue. When I put a message in the queue, the Azure Function worked as expected.
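For what it's worth, a minimal v1-style C# sketch that surfaces the retry count when a real queue message is processed (the queue name and exception are illustrative):

public static void Run(
    [QueueTrigger("myqueue")] string message, // illustrative queue name
    int dequeueCount,                         // bound from queue metadata; increments on each retry
    TraceWriter log)
{
    log.Info($"Attempt {dequeueCount}: {message}");
    // Throwing returns the message to the queue; after maxDequeueCount
    // failed attempts it is moved to myqueue-poison.
    throw new InvalidOperationException("simulated failure");
}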
For local development, if your function is async, use Task as the return type:
public async Task Run
instead of void:
public async void Run
With async void, the runtime cannot await the function, so exceptions escape unobserved and a failed message may be treated as processed instead of retried.

What happens when using the same Storage account for multiple Azure WebJobs (dev/live)?

In my small setup, I just use the same Storage Account for the AzureWebJobsDashboard and AzureWebJobsStorage settings.
But what happens when we use the same connection string for a locally debugged job and a published job at the same time? Are they treated in an isolated manner, or can they conflict?
I looked into the blobs of the published job and found directories such as azure-webjobs-dashboard/functions/instances, azure-webjobs-dashboard/functions/recent/by-job-run/{jobname}, and azure-webjobs-hosts/output-logs; they have no discriminator among jobs, while some other directories have a GUID with the job name.
Note that my job runs as a continuous WebJob.
> Or do they have any conflict issue?
No, there is no conflict issue. Based on my experience, though, it is not recommended to debug locally while the published job is running in Azure with the same connection string. Take an Azure Storage queue for example: we can't control whether a given queue message will be processed locally or in Azure. If you want to debug locally, try stopping the continuous WebJob in the Azure Portal first.
If we want to know which instance executed the WebJob, we can log the instance info in code using the environment variable WEBSITE_INSTANCE_ID. The following is a code sample:
public static void ProcessQueueMessage([QueueTrigger("queue")] string message, TextWriter log)
{
    // Record which instance handled the message, plus a timestamp.
    string instance = Environment.GetEnvironmentVariable("WEBSITE_INSTANCE_ID");
    string newMsg = $"WEBSITE_INSTANCE_ID:{instance}, timestamp:{DateTime.Now}, Message:{message}";
    log.WriteLine(newMsg);
    Console.WriteLine(newMsg);
}
For more info, please refer to how to use Azure Queue storage with the WebJobs SDK. The following is snipped from the document:
> If your web app runs on multiple instances, a continuous WebJob runs on each machine, and each machine will wait for triggers and attempt to run functions. The WebJobs SDK queue trigger automatically prevents a function from processing a queue message multiple times; functions do not have to be written to be idempotent.
Update:
Regarding timer triggers, more explanation can be found in the WebJobs GitHub documentation. So if you want to debug locally, try stopping the WebJob in the Azure Portal first.
> only a single instance of a particular timer function will be running across all instances (you don't want multiple instances to process the same timer event)
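As an illustration, a minimal sketch of a WebJobs timer function (the schedule and names are illustrative; timers also require b.AddTimers() in ConfigureWebJobs, from Microsoft.Azure.WebJobs.Extensions). The SDK takes a distributed singleton lock behind the scenes, which is why only one instance fires the schedule:

public static void CleanupTimer(
    [TimerTrigger("0 */5 * * * *")] TimerInfo timerInfo, // runs every five minutes
    ILogger logger)
{
    // Only one instance across the deployment executes this callback per occurrence.
    logger.LogInformation($"Timer fired at {DateTime.Now}");
}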

Azure Worker Role, Service Bus exception causing crash

"Self healing " that what i trying to do.
i have deployed a worker Role[Single instance ,Small Size] with server 2012. mostly the worker role is assigned to work with service bus.
At a recurring period around 14 to 15 Hours .Worker Role goes in unhealthy state and wont Restart again.
The Exception which are reported in intelliTrace are only.
System.ServiceModel.FaultException<Service.ServiceModel.ExceptionDetails>
System.TimeoutException
The event logs don't report anything about these exceptions.
Can I put RoleEnvironment.RequestRecycle() in a catch block to recycle the role when an exception occurs? Would that work as a workaround?
I haven't been able to crash my role in the emulator, so I don't know whether this will work or not.
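For what it's worth, a rough sketch of that idea, assuming a standard Worker Role Run loop (the message-pump method here is hypothetical); whether recycling actually clears the fault depends on what is failing:

public override void Run()
{
    try
    {
        while (true)
        {
            ProcessServiceBusMessages(); // hypothetical message pump
        }
    }
    catch (TimeoutException ex)
    {
        Trace.TraceError("Service Bus timeout: " + ex);
        // Ask the fabric controller to tear down and restart this instance.
        RoleEnvironment.RequestRecycle();
    }
}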
