Azure Durable Functions handling exceptions to stop further execution - azure

I have an Azure Durable Function (written in C#) whose activity functions often connect to an Azure SQL Database to run stored procedures or select records from a table to pass further on.
Right now I don't have any error handling implemented in my code. When a stored procedure fails part-way through because of an error, that information is not returned to the user, but I would like it to be.
The functions I use to execute stored procedures in the activity part of my durable functions look similar to:
var str = Environment.GetEnvironmentVariable("sqldb_connection");
using (SqlConnection conn = new SqlConnection(str))
{
    conn.Open();
    SqlCommand cmd = new SqlCommand("Stored_procedure", conn);
    cmd.CommandType = CommandType.StoredProcedure;
    var reader = cmd.ExecuteReader();
    conn.Close();
}
Could you please show me how to add exception handling, so that if this stored procedure fails, the rest of my activity stops? I would also really appreciate knowing where such information (the failed execution) is stored and how to retrieve it.

As Junnas said, if an activity function throws an exception, it is surfaced to the orchestrator as a FunctionFailedException, which the orchestrator can catch and handle.
If the orchestrator does not handle (catch) that exception, the orchestration instance is logged with a Failed status, which simply means the orchestrator function has no handling implemented for that exception.
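As a minimal sketch (the function names, signatures, and Durable Functions 2.x types used here are illustrative assumptions, not taken from the question), the activity can simply let the SqlException propagate, which stops the rest of the activity, and the orchestrator can catch the resulting FunctionFailedException:
using System;
using System.Data;
using System.Data.SqlClient;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

[FunctionName("Orchestrator")]
public static async Task RunOrchestrator(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    try
    {
        // If the activity throws, the exception surfaces here
        // wrapped in a FunctionFailedException.
        await context.CallActivityAsync("RunStoredProcedure", null);
    }
    catch (FunctionFailedException ex)
    {
        // ex.InnerException carries the original activity error details.
        // Compensate or log here, then rethrow to fail the whole instance.
        throw new InvalidOperationException("Stored procedure activity failed.", ex);
    }
}

[FunctionName("RunStoredProcedure")]
public static async Task RunStoredProcedure([ActivityTrigger] IDurableActivityContext context)
{
    var str = Environment.GetEnvironmentVariable("sqldb_connection");
    using (var conn = new SqlConnection(str))
    using (var cmd = new SqlCommand("Stored_procedure", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        await conn.OpenAsync();
        // Any SqlException thrown here aborts the rest of the activity
        // and is reported back to the orchestrator as a failure.
        await cmd.ExecuteNonQueryAsync();
    }
}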
Application Insights gives you more data about Azure Functions as well as Durable Functions: you can trace the end-to-end execution of an orchestration, and if you use the DurableTask.SqlServer package you also get SQL Server-based logging.
Refer to Microsoft's official docs on Durable Task SQL Server logging and orchestrator exception handling.

Related

Azure App Service Timeout for Resource creation

I have a problem with my web app on Azure App Service and its timeout. It provides an API that creates a CosmosDB instance in an Azure resource group. Since the creation takes a lot of time (~5 minutes), the App Service timeout (230 seconds) forces the app to return an HTTP 500 response, even though the CosmosDB creation eventually succeeds. Within the method, the resource is created and then some operations are performed on it:
ICosmosDBAccount cosmosDbAccount = azure.CosmosDBAccounts
.Define(cosmosDbName)
.WithRegion(Region.EuropeNorth)
.WithNewResourceGroup(resourceGroupName)
.WithDataModelSql()
.WithSessionConsistency()
.WithDefaultWriteReplication()
.Create();
DoStuff(cosmosDbAccount);
Since I've read that the timeout cannot be increased, is there a simple way to await the Resource creation and get a successful response?
Judging from your code, you are using the .NET SDK to implement this.
Looking at the official SDK source code, it creates the resource asynchronously and then determines whether creation is complete by polling the ProvisioningState value.
In a web app, an API should return promptly after the request is accepted. If your API blocks until the SDK's creation call completes before returning data, the request will inevitably take a long time, so that design doesn't fit the App Service timeout.
So my suggestion is to use the REST API to achieve your needs:
Create the account (use Database Accounts - Create Or Update). The database account create or update operation completes asynchronously.
Check the response result.
Check the provisioningState in properties (use Database Accounts - Get).
If the provisioningState is Succeeded, then we know the resource has been created successfully.
If you want to achieve the same effect as in the portal, it is recommended to add a timer that periodically fetches the provisioningState value, as sketched below.
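Below is a minimal sketch of that polling step, assuming a management-plane bearer token has already been acquired elsewhere; the subscription, resource group, account name, and api-version values are placeholders, not from the original answer:
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

public static class CosmosDbProvisioningChecker
{
    // The PUT (Create Or Update) call returns before provisioning finishes,
    // so the caller polls GET until provisioningState is "Succeeded".
    public static async Task WaitForSucceededAsync(
        HttpClient http, string accessToken,
        string subscriptionId, string resourceGroup, string accountName)
    {
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        var url = $"https://management.azure.com/subscriptions/{subscriptionId}" +
                  $"/resourceGroups/{resourceGroup}/providers/Microsoft.DocumentDB" +
                  $"/databaseAccounts/{accountName}?api-version=2021-04-15";

        while (true)
        {
            var response = await http.GetAsync(url);
            response.EnsureSuccessStatusCode();

            var body = JObject.Parse(await response.Content.ReadAsStringAsync());
            var state = (string)body["properties"]?["provisioningState"];

            if (state == "Succeeded") return;   // account is ready
            if (state == "Failed") throw new InvalidOperationException("Provisioning failed.");

            await Task.Delay(TimeSpan.FromSeconds(30));   // timer-style polling
        }
    }
}
In practice the create request itself returns quickly, so the web API can respond immediately and leave this polling to a background job (or let the client poll a status endpoint).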

Console application Logs to store in Azure storage

I am using a console application that runs on on-premises servers, triggered by a task scheduler. This console application performs various actions that need to be logged. It generates around 200 KB of logs per run and runs every hour.
Since the server is not accessible to us, I am planning to store the logs in Azure. I have read about Blob/Table storage.
I would like to know the best strategy for storing the logs in Azure.
Thank you.
Though you can write logging data to Azure Storage (both Blobs and Tables), it would actually make more sense to use Azure Application Insights for logging this data.
I recently did the same for a console application I built and found it incredibly simple.
I created an Application Insights resource in my Azure subscription and got the instrumentation key. I then installed the Application Insights SDK and referenced the appropriate namespaces in my project:
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;
This is how I initialized the telemetry client:
var appSettingsReader = new AppSettingsReader();
var appInsightsInstrumentationKey = (string)appSettingsReader.GetValue("AppInsights.InstrumentationKey", typeof(string));
TelemetryConfiguration configuration = TelemetryConfiguration.CreateDefault();
configuration.InstrumentationKey = appInsightsInstrumentationKey;
telemetryClient = new TelemetryClient(configuration);
telemetryClient.InstrumentationKey = appInsightsInstrumentationKey;
For logging trace data, I simply did the following:
TraceTelemetry telemetry = new TraceTelemetry(message, SeverityLevel.Verbose);
telemetryClient.TrackTrace(telemetry);
For logging error data, I simply did the following:
catch (Exception excep)
{
var message = string.Format("Error. {0}", excep.Message);
ExceptionTelemetry exceptionTelemetry = new ExceptionTelemetry(excep);
telemetryClient.TrackException(exceptionTelemetry);
telemetryClient.Flush();
Task.Delay(5000).Wait();//Wait for 5 seconds before terminating the application
}
Just keep one thing in mind though: Make sure you wait for some time (5 seconds is good enough) to flush the data before terminating the application.
If you're still keen on writing logs to Azure Storage, then depending on the logging library you're using you will find suitable adapters that write directly to Azure Storage.
For example, there's an NLog target for Azure Tables: https://github.com/harouny/NLog.Extensions.AzureTableStorage (though this project is not actively maintained).

Not able to open Azure SQL Server DB immediately after creation

I am trying to automate the tenant DB creation in Azure SQL Server.
The DB is created/copied as:
CREATE DATABASE {0} AS COPY OF {1} ( SERVICE_OBJECTIVE = 'S2' )
Immediately after, I add a record to one of the tables in the new database and get this error:
Function completed (Failure, Id=1046eae2-c07a-4eee-9a1d-886e89ab5071)
A ScriptHost error has occurred
Exception while executing function: Functions.CreateTenant. .Net SqlClient Data Provider: Database 'tenant8' on server 'dbserver' is not currently available. Please retry the connection later. If the problem persists, contact customer support, and provide them the session tracing ID of '8AF58081-8F25-4B7F-83E3-63AFFC13C8CB'.
Exception while executing function: Functions.CreateTenant
Executed 'Functions.CreateTenant' (Failed, Id=1046eae2-c07a-4eee-9a1d-886e89ab5071)
Function had errors. See Azure WebJobs SDK dashboard for details. Instance ID is '1046eae2-c07a-4eee-9a1d-886e89ab5071'
Looking at the error message, it seems that you are trying to insert data before the database tenant8 has been provisioned and deployed. Deploying a database takes anywhere from a few dozen seconds to a few minutes. A description of the steps involved in this operation can be found in this blog post by Steve Mark. Please let us know if the deployment takes more than 5 minutes.
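One way to act on this (a sketch, not part of the original answer; the connection-string and database names are illustrative) is to poll sys.databases on the logical server's master database until the copy reports ONLINE before attempting the insert:
using System;
using System.Data.SqlClient;
using System.Threading.Tasks;

public static class TenantDbReadiness
{
    // Polls the master database until the copied database reports
    // state_desc = 'ONLINE' (or the timeout elapses).
    public static async Task WaitUntilOnlineAsync(
        string masterConnectionString, string databaseName, TimeSpan timeout)
    {
        var deadline = DateTime.UtcNow + timeout;

        while (DateTime.UtcNow < deadline)
        {
            using (var conn = new SqlConnection(masterConnectionString))
            using (var cmd = new SqlCommand(
                "SELECT state_desc FROM sys.databases WHERE name = @name", conn))
            {
                cmd.Parameters.AddWithValue("@name", databaseName);
                await conn.OpenAsync();

                var state = (string)await cmd.ExecuteScalarAsync();
                if (state == "ONLINE") return;   // safe to connect and insert now
            }

            await Task.Delay(TimeSpan.FromSeconds(10));
        }

        throw new TimeoutException($"Database '{databaseName}' was not online within {timeout}.");
    }
}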

Single Azure Function to Trigger when upload a file in blob then insert the blob name in Azure SQL

I'm new to Azure Functions. I have gone through the TimerTrigger for SQL connections and the BlobTrigger for Azure Blob storage, and the demos work fine for me.
Next, I tried to combine both of these.
When a file is uploaded/added to a specific blob container, I should write the blob name to a table in my Azure SQL Database.
How could I achieve this in a single Azure Function?
Also, I have a doubt: if we create an Azure Function with a blob trigger, will this function always be running in the background, and will it incur a background running cost?
I'm thinking that an Azure Function with a blob trigger will incur cost only while it runs. Isn't that so?
Could somebody help me with this?
How could I achieve this in a single Azure Function?
You could achieve it using a blob trigger. You will get the blob name from the function parameter [name] and can then save this value to your Azure SQL Database. Sample code below is for your reference:
public static void Run(Stream myBlob, string name, TraceWriter log)
{
    var str = "connection string of your sql server";
    using (SqlConnection conn = new SqlConnection(str))
    {
        conn.Open();
        var text = "insert into mytable(id, blobname) values(@id, @blobname)";
        using (SqlCommand cmd = new SqlCommand(text, conn))
        {
            cmd.Parameters.AddWithValue("@id", 1);
            cmd.Parameters.AddWithValue("@blobname", name);
            // Execute the command and log the number of rows affected.
            var rows = cmd.ExecuteNonQuery();
            log.Info($"{rows} rows were updated");
        }
    }
}
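For completeness, here is a sketch of how the binding itself might be declared with the class-library attribute style; the function name and container name are assumptions, not taken from the original answer:
[FunctionName("BlobNameToSql")]
public static void Run(
    // Fires whenever a blob lands in the "samples-container" container;
    // {name} binds the blob's name to the string parameter below.
    [BlobTrigger("samples-container/{name}", Connection = "AzureWebJobsStorage")] Stream myBlob,
    string name,
    TraceWriter log)
{
    log.Info($"Blob trigger fired for blob: {name}");
    // ...insert the name into Azure SQL as shown above...
}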
I'm thinking that an Azure Function with a blob trigger will incur cost only while it runs. Isn't that so?
You will need to choose a hosting plan when creating an Azure Function.
If you choose an App Service plan, you pay for the App Service plan itself, which depends on the tier you choose. If you choose the Consumption plan, your function is billed based on two things: resource consumption and executions.
Resource consumption is calculated by multiplying the average memory size in gigabytes by the time in seconds it takes to execute the function; you pay for the CPU and memory consumed by your function. Executions means the number of requests handled by your function. Please note that Consumption plan pricing also includes a monthly free grant of 1 million requests and 400,000 GB-s of resource consumption.
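To illustrate that GB-s calculation with made-up numbers: a function that averages 512 MB (0.5 GB) of memory and runs for 2 seconds consumes 0.5 GB × 2 s = 1 GB-s per execution; a million such executions would consume 1,000,000 GB-s, of which the first 400,000 GB-s fall under the monthly free grant.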
We can also call a stored procedure (like EXEC spname) in place of the insert command, right?
Yes, we can call the stored procedure by setting CommandType to StoredProcedure. The code below is for your reference:
using (SqlCommand cmd = new SqlCommand("StoredProcedure Name", conn))
{
cmd.CommandType = System.Data.CommandType.StoredProcedure;
}
Sure, you should use Blob Trigger for your scenario.
If you use the Consumption plan, you will only be charged per event execution. No background cost will apply.

What happens when using the same Storage account for multiple Azure WebJobs (dev/live)?

In my small job, I just use the same Storage account for the AzureWebJobsDashboard and AzureWebJobsStorage configs.
But what happens when we use the same connection string for both local debugging and the published job? Are they treated in an isolated manner, or do they have any conflict issue?
I looked into the blobs of the published job and found directories such as azure-webjobs-dashboard/functions/instances, azure-webjobs-dashboard/functions/recent/by-job-run/{jobname}, and azure-webjobs-hosts/output-logs; they have no discriminator among jobs, while some other directories have a GUID with the job name.
Note that my job runs continuously.
Or do they have any conflict issue?
No, there is no conflict issue. However, based on my experience, it is not recommended to debug locally while the published job is running in Azure with the same connection string. Take an Azure Storage queue for example: we can't control whether a message is processed by the local instance or by the one running in Azure. If you want to debug locally, try stopping the continuous WebJob from the Azure portal first.
If we want to know which instance a WebJob executed on, we can log the instance info in code using the environment variable WEBSITE_INSTANCE_ID. The following is a code sample:
public static void ProcessQueueMessage([QueueTrigger("queue")] string message, TextWriter log)
{
string instance = Environment.GetEnvironmentVariable("WEBSITE_INSTANCE_ID");
string newMsg = $"WEBSITE_INSTANCE_ID:{instance}, timestamp:{DateTime.Now}, Message:{message}";
log.WriteLine(newMsg);
Console.WriteLine(newMsg);
}
For more info, please refer to how to use Azure Queue storage with the WebJobs SDK. The following is a snippet from the document:
If your web app runs on multiple instances, a continuous WebJob runs on each machine, and each machine will wait for triggers and attempt to run functions. The WebJobs SDK queue trigger automatically prevents a function from processing a queue message multiple times; functions do not have to be written to be idempotent
Update:
As for the timer trigger, more explanation can be found in the WebJobs extensions GitHub documentation. So if you want to debug locally, try stopping the WebJob in the Azure portal first.
only a single instance of a particular timer function will be running across all instances (you don't want multiple instances to process the same timer event).
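A minimal sketch of such a timer-triggered WebJobs function (the five-minute CRON schedule and function name are illustrative; it assumes the Microsoft.Azure.WebJobs.Extensions package with timers enabled on the JobHost):
using System;
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Timers;

public class Functions
{
    // The SDK ensures only one instance runs this timer function at a time,
    // even when the WebJob is scaled out to multiple instances.
    public static void ProcessTimer(
        [TimerTrigger("0 */5 * * * *")] TimerInfo timerInfo,
        TextWriter log)
    {
        string instance = Environment.GetEnvironmentVariable("WEBSITE_INSTANCE_ID");
        log.WriteLine($"Timer fired on instance {instance} at {DateTime.Now}");
    }
}

// In Program.cs the host needs timers enabled, e.g.:
// var config = new JobHostConfiguration();
// config.UseTimers();
// new JobHost(config).RunAndBlock();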
