I'm using an Azure Function on .NET 6 isolated.
When I run the function on localhost from VS2022, it is 5 times faster than when I deploy it to Azure Functions. Localhost is a VM hosted in Azure in the same region as the function.
I tried different service plans, but the issue remains (Consumption, Elastic Premium EP3, Premium V2 P3v2).
Results in different regions vs. localhost (screenshot omitted):
The code is as follows:
DI - using the IHttpClientFactory (here):
public static class DataSourceServiceRegistration
{
public static IServiceCollection RegisterDataSourceServices(this IServiceCollection serviceCollection)
{
serviceCollection.AddHttpClient();
return serviceCollection;
}
}
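For the EsriHttpClientAdapter shown next to receive an HttpClient directly in its constructor, the adapter is typically registered as a typed client. A minimal sketch, assuming that is how it is wired up (the original post only shows the plain AddHttpClient() overload):
// Typed-client registration: IHttpClientFactory creates and manages the HttpClient
// that gets injected into EsriHttpClientAdapter's constructor.
serviceCollection.AddHttpClient<EsriHttpClientAdapter>();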
HttpClient usage:
private readonly HttpClient _httpClient;
private readonly ILogger<EsriHttpClientAdapter> _logger;

public EsriHttpClientAdapter(HttpClient httpClient, ILogger<EsriHttpClientAdapter> logger)
{
    _httpClient = httpClient ?? throw new ArgumentNullException(nameof(httpClient));
    _logger = logger ?? throw new ArgumentNullException(nameof(logger));
}

public async Task<JsonDocument> SendPrintServiceMessage(string url, HttpMethod httpMethod, string referer, IEnumerable<KeyValuePair<string, string>> content = null)
{
    var watch = System.Diagnostics.Stopwatch.StartNew();
    HttpContent httpContent = null;
    if (content != null)
    {
        httpContent = new FormUrlEncodedContent(content);
    }
    var msg = new HttpRequestMessage(httpMethod, url) { Content = httpContent };
    _httpClient.DefaultRequestHeaders.Referrer = new Uri(referer);
    _httpClient.DefaultRequestHeaders.Add("some", "config");
    _logger.LogInformation($"Before SendAsync - time {watch.ElapsedMilliseconds}");
    var result = await _httpClient.SendAsync(msg);
    _logger.LogInformation($"After SendAsync - time {watch.ElapsedMilliseconds}");
    var response = await result.Content.ReadAsStringAsync();
    _logger.LogInformation($"After ReadAsStringAsync - time {watch.ElapsedMilliseconds}");
    if (result.StatusCode == HttpStatusCode.OK)
    {
        //do some stuff here
    }
    return JsonDocument.Parse(response);
}
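A side note on the snippet above (my observation, not part of the original question): mutating _httpClient.DefaultRequestHeaders on every call is not thread-safe when the HttpClient instance is shared, and the custom "some" header accumulates an extra value on each request. A safer sketch sets the headers on the request message itself:
var msg = new HttpRequestMessage(httpMethod, url) { Content = httpContent };
// Per-request headers avoid mutating shared state on the injected HttpClient.
msg.Headers.Referrer = new Uri(referer);
msg.Headers.Add("some", "config");
var result = await _httpClient.SendAsync(msg);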
Application Insights traces, Azure vs. localhost (screenshots omitted).
Not sure if this is applicable to you, but hopefully it helps. If you're running on a Consumption plan, your function will often be cold and need to spin up when invoked by an HTTP trigger. To work around this, you can enable Always On (if it fits your budget and scope) when running on an App Service Environment, Dedicated, or Premium plan; on the Consumption plan, functions will always run cold after idling.
You can change this under Configuration > General Settings > Always On.
There's good info on how a Function runs through a cold startup at:
https://azure.microsoft.com/en-us/blog/understanding-serverless-cold-start/
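If Always On is not available on your plan, a commonly used mitigation (my suggestion, not part of the original answer, and it does not fully eliminate cold starts on Consumption) is a timer-triggered function that keeps an instance warm. A minimal sketch using the in-process model:
public static class KeepWarm
{
    // Fires every 5 minutes; the periodic execution keeps the worker instance loaded.
    [FunctionName("KeepWarm")]
    public static void Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer, ILogger log)
    {
        log.LogInformation($"Warm-up ping at {DateTime.UtcNow:O}");
    }
}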
Related
I have an Azure durable function triggered by a message, which then uses the orchestration client to start an orchestrator that in turn starts several activity functions. I have set breakpoints in the orchestration client, the orchestrator trigger, and each activity function.
However, it only hits the breakpoints in the orchestration client function; the others are ignored, even though underneath it does seem to execute the activity functions.
Investigative information
Programming language used: C#
Visual Studio Enterprise 2019 version: 16.8.3
Azure Functions Core Tools
Core Tools Version: 3.0.3216
Function Runtime Version: 3.0.15193.0
Below I have included a code snippet (I have not added every activity function).
[FunctionName(nameof(InitiateExport))]
public static async Task InitiateExport(
[ServiceBusTrigger("%ExportQueueName%", Connection = "AzureSBConnection")]Message message,
[DurableClient(TaskHub = "%FunctionHubName%")] IDurableOrchestrationClient orchestrationClient,
[Inject]IServiceProvider rootServiceProvider, ILogger log)
{
var DataQueuedDetails = JsonConvert.DeserializeObject<DataQueuedDetails>(Encoding.UTF8.GetString(message.Body));
using (var scope = rootServiceProvider.CreateScope())
{
log.LogInformation($"{nameof(ExportData)} function execution started at: {DateTime.Now}");
var services = scope.ServiceProvider;
services.ResolveRequestContext(message);
var requestContext = services.GetRequiredService<RequestContext>();
await orchestrationClient.StartNewAsync(nameof(TriggerDataExport), null, (DataQueuedDetails, requestContext));
log.LogInformation($"{nameof(ExportData)} timer triggered function execution finished at: {DateTime.Now}");
}
}
[FunctionName(nameof(TriggerDataExport))]
public static async Task TriggerDataExport(
[OrchestrationTrigger] IDurableOrchestrationContext orchestrationContext,
[Inject] IServiceProvider rootServiceProvider, ILogger log)
{
using (var scope = rootServiceProvider.CreateScope())
{
var services = scope.ServiceProvider;
var (DataOperationInfo, requestContext) = orchestrationContext.GetInput<(DataQueuedDetails, RequestContext)>();
if (!orchestrationContext.IsReplaying)
log.LogInformation($"Starting Export data Id {DataOperationInfo.Id}");
var blobServiceFactory = services.GetRequiredService<IBlobServiceFactory>();
requestContext.CustomerId = DataOperationInfo.RelatedCustomerId;
try
{
await orchestrationContext.CallActivityAsync(
nameof(UpdateJobStatus),
(DataOperationInfo.Id, DataOperationStatus.Running, string.Empty, string.Empty, requestContext));
// some other activity functions
// ...
} catch (Exception e)
{
await orchestrationContext.CallActivityAsync(
nameof(UpdateJobStatus),
(DataOperationInfo.Id, DataOperationStatus.Failed, string.Empty, string.Empty, requestContext));
}
}
}
[FunctionName(nameof(UpdateJobStatus))]
public static async Task RunAsync(
[ActivityTrigger] IDurableActivityContext activityContext,
[Inject]IServiceProvider rootServiceProvider)
{
using (var scope = rootServiceProvider.CreateScope())
{
try
{
var (DataOperationId, status, blobReference, logFileBlobId, requestContext) = activityContext.GetInput<(string, string, string, string, RequestContext)>();
var services = scope.ServiceProvider;
services.ResolveRequestContext(requestContext.CustomerId, requestContext.UserId, requestContext.UserDisplayName, requestContext.Culture);
var dataService = services.GetRequiredService<IDataService>();
var DataOperationDto = new DataOperationDto
{
Id = DataOperationId,
OperationStatusCode = status,
BlobReference = blobReference,
LogBlobReference = logFileBlobId
};
await dataService.UpdateAsync(DataOperationDto);
}
catch (Exception)
{
    throw; // rethrow, preserving the original stack trace
}
}
}
While you are debugging your Function, make sure that you are either using the local storage emulator or an Azure Storage Account different from the one used by the Function already deployed in Azure.
If you are using the same storage account as another running Function, parts of the execution may actually be happening in Azure instead of on your dev machine.
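For local debugging, a local.settings.json that points the runtime at the storage emulator (rather than at the shared cloud account) looks roughly like this sketch:
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet"
  }
}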
I need help understanding why the first request always takes longer than the others. Test case: send binary data via a POST request.
This is a typical picture from Azure Application Insights when firing 2 series of 4 requests within the same minute (screenshot omitted):
Server side
Simply reading the binary data into a byte array.
With an Azure Function:
[FunctionName("TestSpeed")]
public static HttpResponseMessage Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "TestSpeed")]HttpRequestMessage req,
Binder binder,
ILogger log)
{
Stopwatch sw = new Stopwatch();
sw.Start();
byte[] binaryData = req.Content.ReadAsByteArrayAsync().Result;
sw.Stop();
return req.CreateResponse(HttpStatusCode.OK, $"Received {binaryData.Length} bytes. Data Read in: {sw.ElapsedMilliseconds} ms");
}
Or with an ASP.NET Web API app:
public class MyController : ControllerBase
{
private readonly ILogger<MyController> _logger;
public MyController(ILogger<MyController> logger)
{
_logger = logger;
}
[HttpPost]
public IActionResult PostBinary()
{
_logger.LogInformation(" - TestSpeed");
var sw = new Stopwatch();
sw.Start();
var body = Request.Body.ToByteArray();
sw.Stop();
return Ok($"Received {body.Length} bytes. Data Read in: {sw.ElapsedMilliseconds} ms");
}
}
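ToByteArray() is not a built-in Stream method, so the controller above presumably relies on an extension. A hypothetical implementation might look like this:
public static class StreamExtensions
{
    // Hypothetical helper assumed by the controller above: buffers the request body into memory.
    public static byte[] ToByteArray(this Stream stream)
    {
        using (var ms = new MemoryStream())
        {
            stream.CopyTo(ms);
            return ms.ToArray();
        }
    }
}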
Client (for testing only)
Using .NET Framework, C# console application...
private static void TestSpeed()
{
Console.WriteLine($"- Test Speed - ");
string requestUrl = "https://*******.azurewebsites.net/api/TestSpeed";
string path = "/Users/temp/Downloads/1mb.zip";
byte[] fileToSend = File.ReadAllBytes(path);
var sw = new Stopwatch();
for (int i = 0; i < 4; i++)
{
sw.Reset();
sw.Start();
var response = SendFile(fileToSend, requestUrl);
sw.Stop();
Console.WriteLine($"{i}: {sw.ElapsedMilliseconds} ms. {response}");
}
}
private static string SendFile(byte[] bytesToSend, string requestUrl)
{
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(requestUrl);
request.Method = "POST";
request.ContentType = "application/octet-stream";
request.ContentLength = bytesToSend.Length;
using (Stream requestStream = request.GetRequestStream())
{
// Send the file as body request.
requestStream.Write(bytesToSend, 0, bytesToSend.Length);
requestStream.Close();
}
try
{
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
{
using (var sr = new StreamReader(response.GetResponseStream()))
{
var responseString = sr.ReadToEnd();
return responseString;
}
}
}
catch (Exception e)
{
return "ERROR:" + e.Message;
}
}
Suspects I've tried:
It's not a cold start/warm-up thing, because the behavior repeats within the same minute, and I have "Always On" enabled as well.
Compare HTTP and HTTPS - same behavior.
Azure Functions vs. an ASP.NET Core web API app - same behavior. The only difference I noticed is that with Functions, the request content is already fully received on the server side before invocation:
ASP.NET web API: 5512 ms. Received 1044397 bytes. Data Read in: 3701 ms
Function App: 5674 ms. Received 1044397 bytes. Data Read in: 36 ms
Sending 1 KB vs. 1 MB - same behavior; the first call takes much longer.
Running the server on localhost - similar behavior, but a much smaller difference than with distant servers (it looks like network distance matters here...).
Is there some session creation overhead? If so, why is it so huge?
Anything I can do about it?
Because your test endpoint lives inside the web app, it is hard to know whether the process is really kept active even with the Always On switch turned on; you would need to raise a support ticket to confirm with Microsoft. From a developer's perspective, I recommend testing like this:
After redeploying the web interface, test with the Function App first, then with the Web API endpoint, and compare the times.
Redeploy again, this time test with the Web API first and then the Function App, and compare the times.
Without redeploying, repeat the second test after 5 minutes; the order of Function App vs. Web API does not matter here. Look at the timing data.
I think the problem is likely IIS. IIS has a recycling mechanism, so an application that has been idle for a long time, or that has just been deployed, will see a delay on the first request. I recommend confirming this with Microsoft support.
Is there any way to set up email alerts for when individual Azure Container Instances succeed or fail (or basically change state)? I have some run-once containers I kick off periodically and would like to be notified when they complete, along with the completion status. Bonus if the email can include logs from the container as well. Is this possible using the built-in alerts?
So far I haven't been able to make it work with the provided signals.
I don't believe there is an automatic way, so what I did was create a small timer-based function that runs every 5 minutes, gets a list of all containers, and checks their status. If any are in, say, a failed state, it then uses SendGrid to send an alert email.
UPDATE for Dan
I use Managed Service Identity in my function, so I have a container tasks class like the one below. I can't remember exactly where I got the help to generate the GetAzure function; obviously, when debugging locally you can't use the MSI credentials, and the local Visual Studio account that is meant to work doesn't appear to. However, I think it might have been here - https://github.com/Azure/azure-sdk-for-net/issues/4968
public static class ContainerTasks
{
private static readonly IAzure azure = GetAzure();
private static IAzure GetAzure()
{
var tenantId = Environment.GetEnvironmentVariable("DevLocalDbgTenantId");
var clientId = Environment.GetEnvironmentVariable("DevLocalDbgClientId");
var clientSecret = Environment.GetEnvironmentVariable("DevLocalDbgClientSecret");
AzureCredentials credentials;
if (!string.IsNullOrEmpty(tenantId) &&
!string.IsNullOrEmpty(clientId) &&
!string.IsNullOrEmpty(clientSecret))
{
var sp = new ServicePrincipalLoginInformation
{
ClientId = clientId,
ClientSecret = clientSecret
};
credentials = new AzureCredentials(sp, tenantId, AzureEnvironment.AzureGlobalCloud);
}
else
{
credentials = SdkContext
.AzureCredentialsFactory
.FromMSI(new MSILoginInformation(MSIResourceType.AppService), AzureEnvironment.AzureGlobalCloud);
}
var authenticatedAzure = Azure
.Configure()
.WithLogLevel(HttpLoggingDelegatingHandler.Level.Basic)
.Authenticate(credentials);
var subscriptionId = Environment.GetEnvironmentVariable("DevLocalDbgSubscriptionId");
if (!string.IsNullOrEmpty(subscriptionId))
return authenticatedAzure.WithSubscription(subscriptionId);
return authenticatedAzure.WithDefaultSubscription();
}
public static IEnumerable<IContainerGroup> ListTaskContainers(string resourceGroupName, ILogger log)
{
log.LogInformation($"Getting a list of all container groups in Resource Group '{resourceGroupName}'");
return azure.ContainerGroups.ListByResourceGroup(resourceGroupName);
}
}
Then my monitor function is simply
public static class MonitorACIs
{
[FunctionName("MonitorACIs")]
public static void Run([TimerTrigger("%TimerSchedule%")]TimerInfo myTimer, ILogger log)
{
log.LogInformation($"MonitorACIs Timer trigger function executed at: {DateTime.Now}");
foreach(var containerGroup in ContainerTasks.ListTaskContainers(Environment.GetEnvironmentVariable("ResourceGroupName"), log))
{
if(String.Equals(containerGroup.State, "Failed"))
{
log.LogInformation($"Container Group {containerGroup.Name} has failed please investigate");
Notifications.FailedTaskACI(containerGroup.Name, log);
}
}
}
}
Notifications.FailedTaskACI is just a class method that sends an email to one of our Teams channels.
It's not perfect, but it does the job for now!
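Notifications.FailedTaskACI itself isn't shown in the answer; a hypothetical sketch of such a method using the SendGrid client (the setting names and addresses are illustrative assumptions) could look like:
public static class Notifications
{
    public static void FailedTaskACI(string containerGroupName, ILogger log)
    {
        // API key and addresses are illustrative; in practice they come from app settings.
        var client = new SendGridClient(Environment.GetEnvironmentVariable("SendGridApiKey"));
        var message = MailHelper.CreateSingleEmail(
            new EmailAddress("alerts@example.com"),
            new EmailAddress("team-channel@example.com"),
            $"Container group {containerGroupName} failed",
            $"Container group {containerGroupName} is in a Failed state. Please investigate.",
            null);
        var response = client.SendEmailAsync(message).GetAwaiter().GetResult();
        log.LogInformation($"Alert email for {containerGroupName} sent, status {response.StatusCode}");
    }
}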
I have an ASP.NET Web API application with a supporting Azure WebJob whose functions are triggered by messages added to a storage queue by the API's controllers. Testing the Web API is simple enough using OWIN, but how do I test the WebJobs?
Do I run a console app in memory in the test runner? Execute the function directly (that wouldn't be a proper integration test, though)? It is a continuous job, so the app doesn't exit. To make matters worse, Azure WebJob functions return void, so there's no output to assert.
There is no need to run a console app in memory. You can run a JobHost in the memory of your integration test:
var host = new JobHost();
You can use host.Call() or host.RunAndBlock(). You will need to point at an Azure storage account, as WebJobs are not supported against localhost.
It depends on what your function is doing, but you could manually add a message to a queue, add a blob, or whatever. You can assert by querying the storage where your WebJob wrote its result, etc.
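For example, host.Call() lets you invoke a single WebJob function directly from the test; the Functions class and ProcessQueueMessage method below are illustrative:
var host = new JobHost();
// Invoke one function directly and wait for it to complete, instead of running the whole host.
host.Call(typeof(Functions).GetMethod("ProcessQueueMessage"), new { message = "integration-test message" });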
While @boris-lipschitz is correct, when your job is continuous (as the OP says it is), you can't do anything after calling host.RunAndBlock().
However, if you run the host in a separate thread, you can continue with the test as desired. You do have to do some kind of polling at the end of the test to know when the job has run.
Example
Function to be tested (a simple copy from one blob to another, triggered by a newly created blob):
public void CopyBlob(
[BlobTrigger("input/{name}")] TextReader input,
[Blob("output/{name}")] out string output)
{
output = input.ReadToEnd();
}
Test function:
[Test]
public void CopyBlobTest()
{
var blobClient = GetBlobClient("UseDevelopmentStorage=true;");
//Start host in separate thread
var thread = new Thread(() =>
{
Thread.CurrentThread.IsBackground = true;
var host = new JobHost();
host.RunAndBlock();
});
thread.Start();
//Trigger job by writing some content to a blob
using (var stream = new MemoryStream())
using (var stringWriter = new StreamWriter(stream))
{
stringWriter.Write("TestContent");
stringWriter.Flush();
stream.Seek(0, SeekOrigin.Begin);
blobClient.UploadStream("input", "blobName", stream);
}
//Check every second for up to 20 seconds to see if the blob has been created in output, and assert its content if it has
var maxTries = 20;
while (maxTries-- > 0)
{
if (!blobClient.Exists("output", "blobName"))
{
Thread.Sleep(1000);
continue;
}
using (var stream = blobClient.OpenRead("output", "blobName"))
using (var streamReader = new StreamReader(stream))
{
Assert.AreEqual("TestContent", streamReader.ReadToEnd());
}
break;
}
}
I've been able to simulate this really easily by simply doing the following, and it seems to work fine for me:
private JobHost _webJob;
[OneTimeSetUp]
public void StartupFixture()
{
_webJob = Program.GetHost();
_webJob.Start();
}
[OneTimeTearDown]
public void TearDownFixture()
{
_webJob?.Stop();
}
Where the WebJob Code looks like:
public class Program
{
public static void Main()
{
var host = GetHost();
host.RunAndBlock();
}
public static JobHost GetHost()
{
...
}
}
I'm writing a function in C# using Azure Functions and need to get the IP address of the client that called the function. Is this possible?
Here is an answer based on the one here.
#r "System.Web"
using System.Net;
using System.Web;
public static HttpResponseMessage Run(HttpRequestMessage req, TraceWriter log)
{
string clientIP = ((HttpContextWrapper)req.Properties["MS_HttpContext"]).Request.UserHostAddress;
return req.CreateResponse(HttpStatusCode.OK, $"The client IP is {clientIP}");
}
You should use the function below to get the IP address of the remote host, because:
request.Properties["MS_HttpContext"] is not available when you debug precompiled functions locally
request.Properties[RemoteEndpointMessageProperty.Name] is not available on Azure
private string GetClientIp(HttpRequestMessage request)
{
if (request.Properties.ContainsKey("MS_HttpContext"))
{
return ((HttpContextWrapper)request.Properties["MS_HttpContext"]).Request.UserHostAddress;
}
if (request.Properties.ContainsKey(RemoteEndpointMessageProperty.Name))
{
RemoteEndpointMessageProperty prop;
prop = (RemoteEndpointMessageProperty)request.Properties[RemoteEndpointMessageProperty.Name];
return prop.Address;
}
return null;
}
Update 21.08.2018:
Azure Functions are now behind a load balancer, so we have to inspect the request headers to determine the correct client IP:
private static string GetIpFromRequestHeaders(HttpRequestMessage request)
{
IEnumerable<string> values;
if (request.Headers.TryGetValues("X-Forwarded-For", out values))
{
return values.FirstOrDefault().Split(new char[] { ',' }).FirstOrDefault().Split(new char[] { ':' }).FirstOrDefault();
}
return "";
}
Here is an extension method based on what I am seeing in each runtime.
.NET Core 3.1:
public static IPAddress GetClientIpn(this HttpRequest request)
{
IPAddress result = null;
if (request.Headers.TryGetValue("X-Forwarded-For", out StringValues values))
{
var ipn = values.FirstOrDefault().Split(new char[] { ',' }).FirstOrDefault().Split(new char[] { ':' }).FirstOrDefault();
IPAddress.TryParse(ipn, out result);
}
if (result == null)
{
result = request.HttpContext.Connection.RemoteIpAddress;
}
return result;
}
.NET 6+:
public static IPAddress GetClientIpn(this HttpRequestMessage request)
{
IPAddress result = null;
if (request.Headers.TryGetValues("X-Forwarded-For", out IEnumerable<string> values))
{
var ipn = values.FirstOrDefault().Split(new char[] { ',' }).FirstOrDefault().Split(new char[] { ':' }).FirstOrDefault();
IPAddress.TryParse(ipn, out result);
}
return result;
}
Now that Azure functions get an HttpRequest parameter, and they're behind a load balancer, this function to get the IP address works for me:
private static string GetIpFromRequestHeaders(HttpRequest request)
{
return (request.Headers["X-Forwarded-For"].FirstOrDefault() ?? "").Split(new char[] { ':' }).FirstOrDefault();
}
Update 18-Oct-2019:
The solution I tried is much easier and quicker and is described step by step below. Some lengthier/trickier alternatives are available at https://learn.microsoft.com/en-us/azure/azure-monitor/app/ip-collection:
Log in to the Azure portal.
Open a new tab in the same browser while you are logged in and browse to http://Resources.Azure.Com
This is the Azure back-end services portal, so be slightly careful when making changes.
Expand the SUBSCRIPTIONS section in the left panel and expand the Azure subscription where the Application Insights resource is located.
Expand the Resource Groups section and expand the resource group containing the Application Insights resource.
Expand the Providers section, find the Microsoft.Insights provider, and expand it.
Expand the Components section, then find and select your Application Insights instance by name.
At the top right, change your mode from Read Only to Read Write.
Click the EDIT button on the REST API call.
Add a new "DisableIpMasking": true property to the properties section (see the sketch after these steps).
Press the PUT button to apply the changes.
Your Application Insights instance will now start collecting client IP addresses.
Run some queries against the Function.
Refresh and check the Application Insights data after about 5 to 10 minutes.
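Roughly, the edited properties section should end up containing the new flag; the other properties on your component will differ (this is just a sketch):
{
  "properties": {
    "Application_Type": "web",
    "DisableIpMasking": true
  }
}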
As mentioned already by others, the old method of looking at MS_HttpContext no longer works. And while the method of looking at the X-Forwarded-For header does work, it only works after the function is published to Azure - it doesn't return a value when you're running locally. That may matter if you prefer testing locally to minimize any potential cost impact, but still want to be able to see that everything works correctly.
To see the IP address even when running locally, try this instead:
using Microsoft.AspNetCore.Http;
And then:
String RemoteIP = ((DefaultHttpContext)req.Properties["HttpContext"])?.Connection?.RemoteIpAddress?.ToString();
This is working for me currently in Azure Functions V3.0.
In a .NET 6.0 function, this can be accessed from the HttpRequest req object within the operation's Run() function:
public static class PingOperation
{
[FunctionName("ping")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
ILogger log)
{
string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
log.LogInformation($"PingOperation requested from: {req.HttpContext.Connection.RemoteIpAddress}:{req.HttpContext.Connection.RemotePort}");
string responseMessage = "This HTTP triggered function executed successfully.";
return new OkObjectResult(responseMessage);
}
}