I have an Azure Function app on the Premium plan, and I have deployed a queue-trigger function inside a Docker image.
Here is the reference -> https://learn.microsoft.com/en-us/azure/azure-functions/functions-create-function-linux-custom-image?tabs=in-process%2Cbash%2Cazure-cli&pivots=programming-language-javascript
host.json
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "excludedTypes": "Request"
      }
    }
  },
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[3.*, 4.0.0)"
  },
  "functionTimeout": "00:30:00",
  "extensions": {
    "queues": {
      "batchSize": 1,
      "newBatchThreshold": 0,
      "maxDequeueCount": 1
    }
  }
}
It was working perfectly, but after some days it suddenly stopped processing messages. When I went into the Azure portal and checked the storage queue, I saw that there were 100+ messages in the queue and the function was not picking up any of them.
I just restarted the function and it started working again.
I went through this -> https://github.com/Azure/azure-functions-host/issues/3721#issuecomment-441186710
In that thread, someone was trying to set maxPollingInterval up to 2 minutes. I checked the docs, and the default value of this property is 1 minute, so raising it to 2 minutes doesn't make sense for my case.
This is really weird behaviour; I don't know why it is happening, and it has happened to me a couple of times.
The queue receives 20 messages maximum per day, so the message frequency is very low. Those 20 messages can be pushed into the storage queue all at once or once every hour, depending on the requirement.
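For anyone experimenting with the polling behaviour, the relevant knobs sit under `extensions.queues` in host.json. A minimal sketch (the values here are illustrative, not a recommendation):

```json
{
  "version": "2.0",
  "extensions": {
    "queues": {
      "maxPollingInterval": "00:00:30",
      "visibilityTimeout": "00:00:30",
      "batchSize": 1,
      "newBatchThreshold": 0,
      "maxDequeueCount": 1
    }
  }
}
```

maxPollingInterval caps how long the listener backs off between polls of an idle queue, so lowering it (the default is 00:01:00) makes pickup of new messages faster, at the cost of more polling transactions.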
Any help would be appreciated.
Thanks in advance.
Related
Here is the structure: a data-drift-detected event in the ML workspace sends an event to Event Grid, which triggers a function in an Azure Function App. I want it to run only once after the data drift detection. However, I got this:
(screenshot of the invocation log) It runs every ~20 seconds, several times.
Here is my host.json:
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "excludedTypes": "Request"
      }
    }
  },
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[3.*, 4.0.0)"
  }
}
and function.json:
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "type": "eventGridTrigger",
      "name": "event",
      "direction": "in"
    }
  ]
}
I tried changing the default options in the "singleton" field in host.json, but nothing changed.
Do you have any idea?
When you create an Event Grid subscription, it has a retry policy where you can set the maximum number of delivery attempts; change it to 1.
Event Grid waits for a success response from the endpoint. If it doesn't get one, it retries the delivery at increasing intervals until it succeeds or the retry policy is exhausted, so setting the maximum delivery attempts to 1 means the event is delivered only once.
Also make sure your function returns a success response promptly; if the endpoint keeps failing or timing out, Event Grid will keep retrying.
References taken from:
Azure Event Grid delivery and retry - Azure Event Grid | Microsoft Learn
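As a sketch, the retry policy can be set from the Azure CLI when creating the event subscription (the subscription name and source resource ID below are placeholders for your own values):

```shell
# Limit Event Grid to a single delivery attempt for this subscription.
# <sub-name> and <source-resource-id> are placeholders.
az eventgrid event-subscription create \
  --name <sub-name> \
  --source-resource-id <source-resource-id> \
  --endpoint <function-endpoint> \
  --max-delivery-attempts 1
```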
I'm trying to get the payload of Azure IoT Hub telemetry into a Function. I tried following this documentation, but I must be missing something: while I see data coming through, my function is not executed. I then tried to put a Service Bus in between, so I created a message route in my IoT Hub and followed the same documentation, but for Service Bus instead of IoT Hub. I see the messages from a simulated device in both the IoT Hub and the Service Bus, but somehow the function is still not executed. I also have no idea how to debug why the function is not executed, so any debugging tips or documentation pointers would be much appreciated.
I added the Service Bus parameters in host.json:
...
"serviceBus": {
  "prefetchCount": 100,
  "messageHandlerOptions": {
    "autoComplete": true,
    "maxConcurrentCalls": 32,
    "maxAutoRenewDuration": "00:05:00"
  },
  "sessionHandlerOptions": {
    "autoComplete": false,
    "messageWaitTimeout": "00:00:30",
    "maxAutoRenewDuration": "00:55:00",
    "maxConcurrentSessions": 16
  },
  "batchOptions": {
    "maxMessageCount": 1000,
    "operationTimeout": "00:01:00",
    "autoComplete": true
  }
}
...
And set the right trigger binding in function.json:
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "msg",
      "type": "serviceBusTrigger",
      "direction": "in",
      "queueName": "[MyQueueName]",
      "connection": "Endpoint=sb://[MyServiceBusName].servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=[MyServiceBusSAS]"
    }
  ]
}
So what the instructions fail to tell you (and what I did wrong) is that you should give your connection an arbitrary name, say queue_conn_str. Then, in the Azure portal, you go to your function app and create an application setting with that same name (queue_conn_str) whose value is the actual connection string (Endpoint=sb://...).
With that, you can actually connect your IoT Hub directly to your function; there is no need for an Event Hub or Service Bus in between.
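To make that concrete, here is a sketch of the corrected binding: the `connection` property holds the *name* of an application setting (queue_conn_str is just an arbitrary name used for illustration), not the connection string itself:

```json
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "msg",
      "type": "serviceBusTrigger",
      "direction": "in",
      "queueName": "[MyQueueName]",
      "connection": "queue_conn_str"
    }
  ]
}
```

The application setting queue_conn_str on the function app then contains the real `Endpoint=sb://...` value.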
We are using an Azure Function that is triggered by a Service Bus queue.
Our function's host.json is as follows:
{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "prefetchCount": 0,
      "autoCompleteMessages": true,
      "maxAutoLockRenewalDuration": "00:05:00",
      "maxConcurrentCalls": 16,
      "maxConcurrentSessions": 2000,
      "maxMessages": 1000,
      "sessionIdleTimeout": "00:01:00",
      "enableCrossEntityTransactions": false
    }
  },
  "logging": {
    "applicationInsights": {
      "samplingExcludedTypes": "Request",
      "samplingSettings": {
        "isEnabled": true
      }
    },
    "logLevel": {
      "Function.Delegation.User": "Information",
      "Function": "Error"
    }
  }
}
We also have an exponential retry in case messages fail, with infinite retries, as we cannot afford to lose any data:
[FunctionName("Delegation")]
[ExponentialBackoffRetry(-1, "00:00:01", "00:01:00")]
public async Task Run(
    [ServiceBusTrigger(DelegationConstants.PlotQueueName, Connection = "ServiceBusConnectionString", IsSessionsEnabled = true)] string myQueueItem,
    MessageReceiver messageReceiver,
    string lockToken,
    string messageId,
    ILogger log)
Our Service Bus namespace has a single queue that triggers this function. The queue has sessions and partitioning enabled, and we currently have a 1 GB size across 16 partitions (16 GB). Our partition key is a physical device identifier (an IMEI on a mobile device, if you are familiar), so it has a very broad range of values (around 55k in total across our estate).
Our service bus message is created with a session id of the partition key (IMEI):
var serviceBusMessage = new ServiceBusMessage(JsonConvert.SerializeObject(delegationMessage))
{
    SessionId = delegationMessage.PartitionKey
};
Each of our function invocations takes ~200-300 ms.
Messages 'per device' must be processed FIFO, hence the sessions. At any one time, we could need to handle up to (and in future possibly more than) 1000 messages per second across many devices, each taking 200-300 ms.
We have about reached the maximum optimisation of our function code itself, so I can't make that any quicker unfortunately.
Given all the above, our Service Bus queue depth is still increasing, which suggests that the Service Bus + Functions setup we have right now cannot cope with our throughput. Is there anything I am missing here, or have I reached the limit of concurrency with this configuration? Our function runs on a P1V3 and sits at ~45% CPU and ~25% memory.
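As a rough sanity check on the numbers in the question, Little's law gives the concurrency this workload needs (the rate and latency come from the question; sessions_per_instance is an assumed figure purely for illustration):

```python
import math

# Little's law: work in flight = arrival rate x time per item.
target_rate = 1000   # messages per second (from the question)
latency_s = 0.25     # ~200-300 ms per invocation, midpoint

# Sessions process one message at a time (FIFO per device), so each
# concurrently-active session contributes at most 1/latency_s msg/s.
required_concurrency = target_rate * latency_s
print(required_concurrency)  # 250.0 sessions busy at any instant

# If one instance can realistically service ~100 sessions at once
# (an assumption, bounded by maxConcurrentSessions and CPU), then:
sessions_per_instance = 100
instances_needed = math.ceil(required_concurrency / sessions_per_instance)
print(instances_needed)  # 3
```

So regardless of how high maxConcurrentSessions is set, something has to keep roughly 250 sessions actively being processed at once; if a single instance can't sustain that, the queue will grow until more instances are added.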
I've been trying to get my logging output in order so I can fix any errors resulting from the Function working improperly. But all the logging says is ResultCode 0.
Looking at the initial examples in the docs, I thought maybe I was missing a return, just like here LINK. But I must be misunderstanding how it works, because when I add the returns, they only generate errors. They can be found in the snippet below, where I return the status code to the output binding.
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "mytimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 */10 * * * *"
    },
    {
      "name": "$return",
      "direction": "out",
      "type": "http"
    }
  ]
}
__init__.py
import logging

import azure.functions as func


def main(mytimer: func.TimerRequest) -> func.HttpResponse:
    logging.info('Getting pre-requisite data from Azure RM and CosmosDB ...')
    azure_nsg_list = get_full_nsg_list()
    cosmosdb_nsg_entities_list = get_list_of_entities()
    nsg_stack_reference_list = get_nsg_number_references()
    logging.info('Checking for unmanaged Network Security Groups ..')
    unmanaged_nsg_list = [item for item in azure_nsg_list if item not in cosmosdb_nsg_entities_list]
    if unmanaged_nsg_list:
        logging.info('Unmanaged NSGs found, adding to CosmosDB ..')
        for nsg in unmanaged_nsg_list:
            # %s needs an argument, or every line logs the literal placeholder
            logging.info('Adding NSG %s ..', nsg)
            create_azure_table_entity()
        logging.info('Finished adding to CosmosDB ..')
        return func.HttpResponse("All found NSGs have been added to CosmosDB.", status_code=200)
    else:
        logging.info('No unmanaged NSGs found ...')
        return func.HttpResponse("No unmanaged NSGs found ...", status_code=101)
In the end, I want to be able to get alerts for the moments when my function actually returns a 4xx error.
Is there some way I can get the ResultCode to show the actual status code from the code? I have three functions; the other two have Event Hub inputs.
Probably also important, this is my host.json:
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "excludedTypes": "Request"
      }
    }
  },
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[2.*, 3.0.0)"
  },
  "extensions": {
    "eventHubs": {
      "batchCheckpointFrequency": 1,
      "eventProcessorOptions": {
        "maxBatchSize": 10,
        "prefetchCount": 20
      }
    }
  }
}
I think there are actually other things I am missing too, as the metrics are also not showing all the data these logs should be based on. For example:
We can check all the logging.info messages in Application Insights. Make sure that Application Insights is enabled for the Function App.
Steps to get the log information: Application Insights (our Function App) -> Performance -> select "Overall" under the "Operation Name" column -> select the function name under the "All" logs -> click "View all telemetry".
Here we will be able to see the messages; have a look at the screenshot below for reference, taken from my function:
My Python code:
Coming to the 4XX errors: mostly we'll be getting 403 errors while running the function app.
We need to check two points here:
Make sure that you add all the values from the local.settings.json file to the application settings (Function App -> Configuration -> Application Settings).
Check CORS. Try adding "*". (Any request made against a storage resource when CORS is enabled must either have a valid authorization header or must be made against a public resource.)
I have an Azure queue-trigger app. It tries to process messages as fast as possible, but when I have thousands of messages in the queue, I want to limit the number of queue messages it processes per second. Is there a way to set up such a limit?
My goal is to slow down the rate at which my function processes messages.
{
  "generatedBy": "Microsoft.NET.Sdk.Functions-1.0.24",
  "configurationSource": "attributes",
  "bindings": [
    {
      "type": "queueTrigger",
      "queueName": "fred",
      "connection": "",
      "name": "myQueueItem"
    }
  ],
  "disabled": false,
  "scriptFile": "../bin/run.dll",
  "entryPoint": "Fred.Run"
}
You cannot limit it to "X requests per second", as the rate depends on your processing logic. However, you can configure the batch size, and also how many instances your Function will scale out to.
See here: https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-queue-trigger?tabs=csharp#concurrency
https://learn.microsoft.com/en-us/azure/azure-functions/functions-app-settings#website_max_dynamic_application_scale_out
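A sketch of what that looks like in practice (values are illustrative): per the queue-trigger docs, an instance processes at most batchSize + newBatchThreshold messages concurrently, so the tightest setting is serial processing:

```json
{
  "version": "2.0",
  "extensions": {
    "queues": {
      "batchSize": 1,
      "newBatchThreshold": 0
    }
  }
}
```

Combined with the app setting WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT = 1, this caps the whole app at one message at a time. Note this throttles *concurrency*, not messages per second; the actual rate still depends on how long each invocation takes.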