I have a problem with my web app on Azure App Service and its timeout. It provides an API that creates a CosmosDB instance in an Azure resource group. Since the creation takes a long time (~5 minutes), the App Service timeout (230 seconds) forces the app to return an HTTP 500 response, even though the CosmosDB creation actually succeeds. Within the method, the resource is created and then some operations are performed on it.
ICosmosDBAccount cosmosDbAccount = azure.CosmosDBAccounts
    .Define(cosmosDbName)
    .WithRegion(Region.EuropeNorth)
    .WithNewResourceGroup(resourceGroupName)
    .WithDataModelSql()
    .WithSessionConsistency()
    .WithDefaultWriteReplication()
    .Create();

DoStuff(cosmosDbAccount);
Since I've read that the timeout cannot be increased, is there a simple way to await the Resource creation and get a successful response?
Judging from your code, you are using the .NET SDK to implement this.
Looking at the official SDK source code, it is clear that the SDK creates the resource asynchronously and then determines whether the creation is complete by checking the ProvisioningState value.
In a web app, an API should ideally return immediately after the request is accepted. If the API has to wait for the SDK's synchronous or asynchronous result before returning, the call will inevitably take a long time, so that design is unreasonable for an HTTP endpoint.
So my suggestion is to use the REST API to achieve this.
Create (use Database Accounts - Create Or Update)
The database account create or update operation will complete asynchronously.
Check the response result.
Check the provisioningState in properties (use Database Accounts - Get)
If the provisioningState is Succeeded, then we know the resource has been created successfully.
If you want to achieve the same effect as in the portal, it is recommended to add a timer that periodically fetches the provisioningState value (a rough sketch follows below).
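For illustration only, polling the Database Accounts - Get endpoint from .NET might look roughly like the sketch below; the api-version, the subscription/resource group values, and the way you obtain the ARM access token are assumptions you would adapt to your own environment:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

public static async Task WaitForCosmosDbAsync(
    HttpClient client, string accessToken,
    string subscriptionId, string resourceGroupName, string cosmosDbName)
{
    // ARM "Database Accounts - Get" endpoint (api-version is an example value).
    var url = $"https://management.azure.com/subscriptions/{subscriptionId}" +
              $"/resourceGroups/{resourceGroupName}" +
              $"/providers/Microsoft.DocumentDB/databaseAccounts/{cosmosDbName}" +
              "?api-version=2021-04-15";

    client.DefaultRequestHeaders.Authorization =
        new AuthenticationHeaderValue("Bearer", accessToken);

    while (true)
    {
        var response = await client.GetAsync(url);
        response.EnsureSuccessStatusCode();

        var json = JObject.Parse(await response.Content.ReadAsStringAsync());
        var state = (string)json["properties"]?["provisioningState"];

        if (string.Equals(state, "Succeeded", StringComparison.OrdinalIgnoreCase))
            return; // the account is ready, DoStuff(...) can run now

        // Still creating: wait a bit before polling again.
        await Task.Delay(TimeSpan.FromSeconds(30));
    }
}

Run this polling from a background job or a follow-up client call rather than inside the original HTTP request, so the 230-second limit no longer applies.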
Related
I am trying to figure out a use case.
We want to monitor our machine learning computes and shut them down if they cross the budget threshold.
For this, I am planning to do the things below, but I have not yet been able to achieve it.
Create an Azure Function which accepts a subscription and resource group as query params. It can then use the MSAL SDK to get an access token, find the workspaces under the resource group, query all the computes under each workspace, and shut them down.
Create an action group which will call this function. (I'm not able to figure out how to pass the subscription and resource group to the function app.)
Create a budget to monitor the resource group and use the action group created in step 2.
Please guide me on how to update the function URL so that it accepts query params to invoke the function.
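For the query-param part specifically, here is a minimal sketch of an HTTP-triggered C# function that reads the two values; the function name, parameter names, and the MSAL/shutdown placeholder are purely illustrative, not a finished implementation:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class ShutdownComputes
{
    [FunctionName("ShutdownComputes")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequest req,
        ILogger log)
    {
        // Read the values from the query string, e.g.
        // https://<app>.azurewebsites.net/api/ShutdownComputes?code=<key>&subscriptionId=...&resourceGroup=...
        string subscriptionId = req.Query["subscriptionId"];
        string resourceGroup = req.Query["resourceGroup"];

        if (string.IsNullOrEmpty(subscriptionId) || string.IsNullOrEmpty(resourceGroup))
            return new BadRequestObjectResult("subscriptionId and resourceGroup are required query parameters.");

        log.LogInformation($"Checking ML computes in {resourceGroup} ({subscriptionId})");

        // ... acquire a token with MSAL here, list the workspaces in the resource group,
        //     enumerate their computes and stop the ones over budget ...

        return new OkResult();
    }
}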
So I've got an Azure Machine Learning pipeline here that consists of a number of PythonScriptStep tasks - pretty basic really.
Some of these script steps fail intermittently due to network issues or somesuch - really nothing unexpected. The solution here is always to simply trigger a rerun of the failed experiment in the browser interface of Azure Machine Learning studio.
Despite my best efforts I haven't been able to figure out how to set a retry parameter either on the script step objects, the pipeline object, or any other AZ ML-related object.
This is a common pattern in pipelines of any sort: Task fails once - retry a couple of times before deciding it actually fails.
Does anyone have pointers for me please?
Edit: One helpful user suggested an external solution to this which requires an Azure Logic App that listens to ML pipeline events and re-triggers failed pipelines via an HTTP request. While this solution may work for some it just takes you down another rabbit hole of setting up, debugging, and maintaining another external component. I'm looking for a simple "retry upon task failure" option that (IMO) must be baked into the Azure ML pipeline framework and is hopefully just poorly documented.
I assume that if a script fails, you want to rerun the entire pipeline. In that case, it is pretty simple with Logic Apps. What you need is the following:
You need to make a PipelineEndpoint for your pipeline so it can be triggered by something outside Azure ML.
You need to set up a Logic App to listen for failed runs. See the following: https://medium.com/geekculture/notifications-on-azure-machine-learning-pipelines-with-logic-apps-5d5df11d3126. Instead of printing a message to Microsoft Teams as in that example, you instead invoke your pipeline through its endpoint.
(this would ideally be a comment but it exceeded the word limit)
#user787267's answer above helped me set up the retry pipeline. So I thought I'd add a few more details that might help someone else set this up.
How to set up the HTTP action
Method: POST
URI: The pipeline endpoint that you configured
Headers: `Key`: Content-Type -- `Value`: application/json
Body:
{
    "ExperimentName": "my_experiment_name",
    "ParameterAssignments": {
        "param1": "value1",
        "param2": "value2"
    },
    "RunSource": "SDK"
}
Authentication Type: Managed Identity
Managed Identity: System-assigned managed identity
You can set up the managed identity by going to the logic app's page and then clicking on the Identity tab as shown below. After that just follow the steps. You'll need to give the managed identity permissions over the space in which your ML instance lives.
I have an Azure logic app that correctly creates an Azure Container Instance. The container starts, does its job and terminates. I need to collect its logs with the appropriate connector and write them to an azure blob.
I have all the pieces in place but I do not know how to wait for the container to terminate before using the "get logs of container" connector to collect logs.
If the container job would last a predictable amount of time, I could use the Delay connector before getting the logs and it would suffice (I've tried with short jobs and it works well).
But my jobs may last several hours, depending on some external factors, so the Delay technique does not work.
I've tried the "Until" connector, together with Delay and the "get properties of a container group" connector, to keep waiting while the state of the container is not "terminated", but without success (maybe I did it wrong). Anyway this can be quite expensive, since every check is billed.
How can I wait for the container to terminate before asking for its logs?
thanks.
Starting from Charles Xu's answer, the correct sequence when setting the variable is:
This uses the container instance's "state" property instead of "provisioning state". The latter is about the creation of the container group, the former is about the state of the container instance, which is what I need.
I added a delay to decrease the number of (paid) runs of the connector.
If you want to get the logs of a container group but cannot know in advance when it will terminate, you can use a variable in the logic app to store the state of the container group and then use an Until loop that keeps checking the state until the container group has terminated.
Here are the steps:
Create a container group;
Get the provisioning state of the container group;
Initialize a variable to store the provisioning state of the container group;
In the Until loop, keep getting the provisioning state of the container group until the state equals "terminated";
Get the logs of the container group.
The whole structure:
The Initialize variable and Until steps:
Since you mentioned that using "Until" in the logic app may be expensive, here is another solution for your reference.
We can create a timer-triggered Azure Function with a CRON expression that fires every minute, and create a free-tier service plan for the function, so we don't need to pay for the running cost of the function (though we may pay for its storage).
The function will run every minute. In the function we get the properties of the container group using this REST API, and if the state is "terminated" we call the logic app's request URL to trigger it (the logic app should be created with a "When a HTTP request is received" trigger).
The function code (local, before being deployed to Azure) would be as below:
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.Extensions.Logging;

namespace hurytimeFun
{
    public static class Function1
    {
        [FunctionName("Function1")]
        public static void Run([TimerTrigger("0 */1 * * * *")]TimerInfo myTimer, ILogger log)
        {
            log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");

            // 1. Use the REST API to get the properties of your container
            // 2. If the state is "terminated",
            //    call the logic app request to trigger the logic app
        }
    }
}
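To make the two commented steps concrete, the body might end up looking roughly like the sketch below (it would need the extra usings shown); the subscription/resource group/container group names, the api-version, the Logic App trigger URL, and the way the ARM token is obtained are all placeholders to replace with your own:

using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

private static readonly HttpClient client = new HttpClient();

private static async Task CheckContainerAndTriggerAsync(string accessToken, ILogger log)
{
    // 1. Get the container group properties via the ARM REST API (values are placeholders).
    var url = "https://management.azure.com/subscriptions/<subId>/resourceGroups/<rg>" +
              "/providers/Microsoft.ContainerInstance/containerGroups/<name>?api-version=2021-09-01";
    client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
    var json = JObject.Parse(await client.GetStringAsync(url));

    // The per-container state is reported under instanceView.currentState.state.
    var state = (string)json.SelectToken("properties.containers[0].properties.instanceView.currentState.state");
    log.LogInformation($"Container state: {state}");

    // 2. If the container has terminated, fire the logic app's "When a HTTP request is received" URL.
    if (string.Equals(state, "Terminated", StringComparison.OrdinalIgnoreCase))
    {
        await client.PostAsync("<logic app HTTP trigger URL>",
            new StringContent("{}", Encoding.UTF8, "application/json"));
    }
}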
Hope this helps with your problem.
Once a container terminates, everything is lost. You could install log-analytics-containers from the Azure Marketplace for logging, and log before the container terminates.
I created a function like this
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static Task HandleStorageQueueMessageAsync(
    [QueueTrigger("%QueueName%", Connection = "%ConnectionStringName%")] string body,
    TextWriter logger)
{
    if (logger == null)
    {
        throw new ArgumentNullException(nameof(logger));
    }

    logger.WriteLine(body);
    return Task.CompletedTask;
}
The queue name and the connection string name come from my configuration that has an INameResolver to get the values. The connection string itself I put from my secret store into the app config at app start. If the connection string is a normal storage connection string granting all permissions for the whole account, the method works like expected.
However, in my scenario I am getting an SAS from a partner team that only offers read access to a single queue. I created a storage connection string from that, which looks similar to this:
QueueEndpoint=https://accountname.queue.core.windows.net;SharedAccessSignature=st=2017-09-24T07%3A29%3A00Z&se=2019-09-25T07%3A29%3A00Z&sp=r&sv=2018-03-28&sig=token
(I tried successfully to connect using this connection string in Microsoft Azure Storage Explorer)
The queue name used in the QueueTrigger attribute is also gathered from the SAS.
However, now I am getting the following exception:
$exception {"Error indexing method 'Functions.HandleStorageQueueMessageAsync'"} Microsoft.Azure.WebJobs.Host.Indexers.FunctionIndexingException
InnerException {"No blob endpoint configured."} System.Exception {System.InvalidOperationException}
If you look at the connection string, you can see the exception is right. I did not configure the blob endpoint. However I also don't have access to it and neither do I want to use it. I'm using the storage account only for this QueueTrigger.
I am using Microsoft.Azure.WebJobs v2.2.0. Other dependencies prevent me from upgrading to a v3.x
What is the recommended way for consuming messages from a storage queue when only a SAS URI with read access to a single queue is available? If I am already on the right path, what do I need to do in order to get rid of the exception?
As you have seen, the v2 WebJobs SDK requires access to the blob endpoint as well. I'm afraid this is by design; supporting connection strings without full account access (such as a single-queue SAS) is an improvement that has been tracked but not implemented yet.
Here are the permissions required by the v2 SDK. It needs to get the Blob service properties (Blob, Service, Read) and to get queue metadata and process messages (Queue, Container & Object, Read & Process).
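For illustration, an account SAS covering just those minimum permissions could be generated roughly along these lines; this is a sketch using the classic Microsoft.WindowsAzure.Storage package (the one the v2 SDK builds on), and the full-access connection string is of course something only the account-owning team would hold:

using System;
using Microsoft.WindowsAzure.Storage;

// Parse the account-level connection string (held by the owning team, not by the consumer).
CloudStorageAccount account = CloudStorageAccount.Parse("<full storage connection string>");

var policy = new SharedAccessAccountPolicy
{
    // Blob: read service properties; Queue: read metadata and process messages.
    Services = SharedAccessAccountServices.Blob | SharedAccessAccountServices.Queue,
    ResourceTypes = SharedAccessAccountResourceTypes.Service |
                    SharedAccessAccountResourceTypes.Container |
                    SharedAccessAccountResourceTypes.Object,
    Permissions = SharedAccessAccountPermissions.Read | SharedAccessAccountPermissions.ProcessMessages,
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMonths(6)
};

string sasToken = account.GetSharedAccessSignature(policy);
// sasToken can then be combined with the queue (and blob) endpoints into a connection string.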
A Queue Trigger gets messages and deletes them after processing, so the SAS requires the Process permission. That means the SAS string you got is not authorized correctly even if the SDK didn't require blob access.
You could ask the partner team to generate a SAS connection string on the Azure portal with the minimum permissions above. If they can't provide blob access, the v3 SDK seems an option to try.
But there are some problems: 1. other dependencies prevent upgrading, as you mentioned; 2. the v3 SDK is based on .NET Core, which means code changes can't be avoided; 3. the v3 SDK documentation and samples are still under construction right now.
I was having a load of issues getting a SAS token to work for a QueueTrigger.
Not having blob included was my problem. Thanks Jerry!
Slightly newer screenshot (I needed the Add permission as well):
I have an Azure Function with an HTTP trigger in C#. I want to log a message to queue storage when I invoke the function via the HTTP trigger.
When the HTTP trigger is invoked it logs a message (log.Info("C# Http trigger....");) somewhere else, but I want this log message, along with the client's IP address (captured when the user makes a request to the HTTP-triggered function), to be stored in queue storage
(log.Info("C# Queue trigger ", ipaddress)) so that I can block a client IP address for a day if the user calls the API beyond the allowed number of attempts, and unblock it the next day by reading all IP addresses from queue storage with a timer trigger in the cloud.
Is there a way to achieve this, or is there an alternative to queue storage for storing the IP addresses?
From what you wrote I would guess that you want to offer some kind of API to your users and want to limit access to it. To achieve this, I would rather suggest using Azure API Management (https://azure.microsoft.com/en-us/services/api-management/). It offers an access quota, which limits the user's access to the API endpoint based on different criteria. It also offers much more: user management, payment, subscriptions, security, and more.
For retrieving the client IP under your Azure function with HTTP Trigger, you could use the following code snippet:
run.csx:
#r "System.Web"
using System.Web;
using System.Net;
public static HttpResponseMessage Run(HttpRequestMessage req, TraceWriter log)
{
var address = ((HttpContextWrapper)req.Properties["MS_HttpContext"]).Request.UserHostAddress;
log.Info($"Your client IP: {address}");
return req.CreateResponse(HttpStatusCode.OK, $"Your client IP: {address}");
}
For more details, you could refer to this issue.
so that I can block client ipaddress for a day, if user makes calls to api beyond the number of attempts and next day again I can unblock client ipaddress by taking all ipaddress from queue storage running timer trigger in cloud.
I would prefer to use table storage for storing the access logs for the specific IPs. You could set the PartitionKey column to the date and the RowKey to the ClientIP or ClientIP + ApiName, and you can add other columns as needed (Timestamp, TotalRequests, etc.). You could also refer to the Azure Storage Table Design Guide for designing your storage table.
For your Azure Function, you could use the storage table bindings and the Table Storage SDK to read the request log for a specific IP and update its total request count. Moreover, here is a similar blog you could refer to.
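As a very rough sketch of that idea (the entity and method names are made up for illustration, using the classic Microsoft.WindowsAzure.Storage.Table SDK), a per-IP daily counter could look like this:

using System;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Table;

// One row per client IP per day: PartitionKey = date, RowKey = client IP.
public class RequestLogEntity : TableEntity
{
    public RequestLogEntity() { }

    public RequestLogEntity(string date, string clientIp)
    {
        PartitionKey = date;   // e.g. "2019-03-07"
        RowKey = clientIp;     // or clientIp + "_" + apiName
    }

    public int TotalRequests { get; set; }
}

public static async Task<int> IncrementRequestCountAsync(CloudTable table, string clientIp)
{
    string partitionKey = DateTime.UtcNow.ToString("yyyy-MM-dd");

    // Read the existing row for this IP and day, if any.
    var result = await table.ExecuteAsync(TableOperation.Retrieve<RequestLogEntity>(partitionKey, clientIp));
    var entity = result.Result as RequestLogEntity ?? new RequestLogEntity(partitionKey, clientIp);

    entity.TotalRequests++;

    // InsertOrMerge writes the row back whether or not it already existed.
    await table.ExecuteAsync(TableOperation.InsertOrMerge(entity));
    return entity.TotalRequests;
}

The HTTP-triggered function would call this with the client IP from the snippet above, and the timer-triggered function could scan yesterday's partition to decide which IPs to block or unblock.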
UPDATE:
You could configure the logs to be stored in the file system or blob storage under the "MONITORING > Diagnostic logs" as follows:
For Application Logging (Filesystem), you could find the logs via kudu:
For Application Logging (Blob), you could leverage the Storage Explorer to retrieve your logs.