How to integration test Azure Web Jobs?

I have an ASP.NET Web API application with a supporting Azure WebJob whose functions are triggered by messages that the API's controllers add to a storage queue. Testing the Web API is simple enough using OWIN, but how do I test the WebJobs?
Do I run a console app in memory in the test runner? Execute the function directly (though that wouldn't be a proper integration test)? It is a continuous job, so the app doesn't exit. To make matters worse, Azure WebJob functions return void, so there's no output to assert.

There is no need to run a console app in memory. You can run a JobHost in the memory of your integration test:
var host = new JobHost();
You can use host.Call() or host.RunAndBlock(). You would need to point it at an Azure storage account, as WebJobs are not supported against localhost.
It depends on what your function is doing, but you could manually add a message to a queue, add a blob, or whatever triggers it. You can then assert by querying the storage where your WebJob wrote its result.
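For instance, host.Call() lets you invoke a function directly from the test. A minimal sketch, assuming NUnit and a hypothetical Functions.ProcessQueueMessage method with a queue-trigger parameter named msg (names here are placeholders, not from the original):

[Test]
public void ProcessQueueMessage_WritesExpectedResult()
{
    var host = new JobHost();

    // Invoke the function directly; the dictionary supplies a value for the
    // trigger parameter, keyed by the parameter name.
    host.Call(typeof(Functions).GetMethod("ProcessQueueMessage"),
        new Dictionary<string, object> { { "msg", "test-message" } });

    // Assert against whatever storage the function writes to.
}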

While #boris-lipschitz is correct, when your job is continuous (as the OP says it is), you can't do anything after calling host.RunAndBlock().
However, if you run the host in a separate thread, you can continue with the test as desired. You do have to do some kind of polling at the end of the test to know when the job has run.
Example
Function to be tested (a simple copy from one blob to another, triggered by a created blob):
public void CopyBlob(
    [BlobTrigger("input/{name}")] TextReader input,
    [Blob("output/{name}")] out string output)
{
    output = input.ReadToEnd();
}
Test function:
[Test]
public void CopyBlobTest()
{
    var blobClient = GetBlobClient("UseDevelopmentStorage=true;");

    // Start the host in a separate background thread
    var thread = new Thread(() =>
    {
        Thread.CurrentThread.IsBackground = true;
        var host = new JobHost();
        host.RunAndBlock();
    });
    thread.Start();

    // Trigger the job by writing some content to a blob
    using (var stream = new MemoryStream())
    using (var stringWriter = new StreamWriter(stream))
    {
        stringWriter.Write("TestContent");
        stringWriter.Flush();
        stream.Seek(0, SeekOrigin.Begin);
        blobClient.UploadStream("input", "blobName", stream);
    }

    // Poll every second, for up to 20 seconds, to see if the blob has been
    // created in the output container; assert its content once it appears.
    var maxTries = 20;
    while (maxTries-- > 0)
    {
        if (!blobClient.Exists("output", "blobName"))
        {
            Thread.Sleep(1000);
            continue;
        }
        using (var stream = blobClient.OpenRead("output", "blobName"))
        using (var streamReader = new StreamReader(stream))
        {
            Assert.AreEqual("TestContent", streamReader.ReadToEnd());
        }
        break;
    }
}
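Note that GetBlobClient, UploadStream, Exists, and OpenRead are the answerer's own helpers rather than SDK methods. A minimal sketch of what they might wrap, assuming the classic WindowsAzure.Storage client (hypothetical, not the answerer's actual code):

// Hypothetical helpers wrapping the classic WindowsAzure.Storage client.
public static class BlobTestHelpers
{
    public static CloudBlobClient GetBlobClient(string connectionString)
    {
        return CloudStorageAccount.Parse(connectionString).CreateCloudBlobClient();
    }

    public static void UploadStream(this CloudBlobClient client,
        string container, string blobName, Stream stream)
    {
        client.GetContainerReference(container)
              .GetBlockBlobReference(blobName)
              .UploadFromStream(stream);
    }

    public static bool Exists(this CloudBlobClient client,
        string container, string blobName)
    {
        return client.GetContainerReference(container)
                     .GetBlockBlobReference(blobName)
                     .Exists();
    }

    public static Stream OpenRead(this CloudBlobClient client,
        string container, string blobName)
    {
        return client.GetContainerReference(container)
                     .GetBlockBlobReference(blobName)
                     .OpenRead();
    }
}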

I've been able to simulate this easily by doing the following, and it seems to work fine for me:
private JobHost _webJob;

[OneTimeSetUp]
public void StartupFixture()
{
    _webJob = Program.GetHost();
    _webJob.Start();
}

[OneTimeTearDown]
public void TearDownFixture()
{
    _webJob?.Stop();
}
Where the WebJob code looks like:
public class Program
{
    public static void Main()
    {
        var host = GetHost();
        host.RunAndBlock();
    }

    public static JobHost GetHost()
    {
        ...
    }
}
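The body of GetHost() is elided in the original. A plausible sketch (my assumption, not the poster's actual code), assuming WebJobs SDK 1.x:

public static JobHost GetHost()
{
    // Hypothetical configuration; the original answer elides this body.
    var config = new JobHostConfiguration();
    return new JobHost(config);
}

This pattern works because JobHost exposes Start() and Stop() in addition to RunAndBlock(), so the fixture can run the host without blocking the test thread.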

Related

Azure Function slow response on first HTTPS call, with Always On (same with ASP.NET Core Web API)

I need help understanding why the first request always takes longer than the others. Test case: sending binary data via a POST request.
This is a typical picture from Azure Application Insights, firing 2 series of 4 requests within the same minute:
Server side
Simply reading the binary data into a byte array.
With an Azure Function:
[FunctionName("TestSpeed")]
public static HttpResponseMessage Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "TestSpeed")] HttpRequestMessage req,
    Binder binder,
    ILogger log)
{
    Stopwatch sw = new Stopwatch();
    sw.Start();
    byte[] binaryData = req.Content.ReadAsByteArrayAsync().Result;
    sw.Stop();
    return req.CreateResponse(HttpStatusCode.OK,
        $"Received {binaryData.Length} bytes. Data Read in: {sw.ElapsedMilliseconds} ms");
}
Or with an ASP.NET Core web API:
public class MyController : ControllerBase
{
    private readonly ILogger<MyController> _logger;

    public MyController(ILogger<MyController> logger)
    {
        _logger = logger;
    }

    [HttpPost]
    public IActionResult PostBinary()
    {
        _logger.LogInformation(" - TestSpeed");
        var sw = new Stopwatch();
        sw.Start();
        var body = Request.Body.ToByteArray(); // ToByteArray is a custom extension (see the sketch below)
        sw.Stop();
        return Ok($"Received {body.Length} bytes. Data Read in: {sw.ElapsedMilliseconds} ms");
    }
}
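Request.Body.ToByteArray() is not part of ASP.NET Core; presumably the poster uses a custom extension along these lines (a sketch, assuming synchronous reads are allowed on the server):

// Hypothetical extension; ASP.NET Core itself has no Stream.ToByteArray().
public static class StreamExtensions
{
    public static byte[] ToByteArray(this Stream stream)
    {
        using (var ms = new MemoryStream())
        {
            stream.CopyTo(ms); // buffer the whole request body
            return ms.ToArray();
        }
    }
}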
Client (for testing only)
Using a .NET Framework C# console application...
private static void TestSpeed()
{
    Console.WriteLine($"- Test Speed - ");
    string requestUrl = "https://*******.azurewebsites.net/api/TestSpeed";
    string path = "/Users/temp/Downloads/1mb.zip";
    byte[] fileToSend = File.ReadAllBytes(path);

    var sw = new Stopwatch();
    for (int i = 0; i < 4; i++)
    {
        sw.Reset();
        sw.Start();
        var response = SendFile(fileToSend, requestUrl);
        sw.Stop();
        Console.WriteLine($"{i}: {sw.ElapsedMilliseconds} ms. {response}");
    }
}
private static string SendFile(byte[] bytesToSend, string requestUrl)
{
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(requestUrl);
    request.Method = "POST";
    request.ContentType = "application/octet-stream";
    request.ContentLength = bytesToSend.Length;

    using (Stream requestStream = request.GetRequestStream())
    {
        // Send the file as the request body.
        requestStream.Write(bytesToSend, 0, bytesToSend.Length);
        requestStream.Close();
    }

    try
    {
        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        using (var sr = new StreamReader(response.GetResponseStream()))
        {
            return sr.ReadToEnd();
        }
    }
    catch (Exception e)
    {
        return "ERROR:" + e.Message;
    }
}
Suspects I've tried:
It's not a cold start/warm-up issue, because the behavior repeats within the same minute, and I have "Always On" enabled as well.
HTTP vs. HTTPS: same behavior.
Azure Functions vs. ASP.NET Core web API app: same behavior. The only difference I noticed is that with Functions, the request content is already fully received on the server side before invocation:
ASP.NET web API: 5512 ms. Received 1044397 bytes. Data Read in: 3701 ms
Function App: 5674 ms. Received 1044397 bytes. Data Read in: 36 ms
Sending 1 KB vs. 1 MB: same behavior; the first call takes much longer.
Running the server on localhost: similar behavior, but a much smaller difference than with distant servers (it looks like network distance matters here).
Is there some session creation overhead? If so, why is it so huge?
Is there anything I can do about it?
Because your test endpoint lives inside the web app itself, it is hard to tell from the outside whether Always On actually keeps the process active; you would need to raise a support ticket to confirm that with Microsoft. From a developer's perspective, I recommend testing like this:
After redeploying the web interface, test with the Function App first and then with the Web API endpoint, and compare the timings.
Redeploy again, this time testing with the Web API first and then the Function App, and compare the timings.
Without redeploying, repeat the second test after 5 minutes; the order of Function App vs. Web API should not matter. Look at the timing data.
I think the problem is likely IIS itself: IIS has a recycling mechanism, so an application that has been idle for a long time, or that was just deployed, responds with a delay. I recommend confirming this with Microsoft support.

Want to trigger different methods at different time intervals in Azure Web Job

I have hosted my WebJob through a website (with the option "Add Existing Project as an Azure WebJob"). I want to trigger different methods at different time intervals.
Code:
static void Main()
{
    var config = new JobHostConfiguration();
    config.UseTimers();
    var host = new JobHost();
    host.RunAndBlock();
}

public static void TimerTrig1([TimerTrigger("00:00:02")] TimerInfo timer)
{
    Console.WriteLine("Triggered");
}

public static void TimerTrig2([TimerTrigger("00:00:04")] TimerInfo timer)
{
    Console.WriteLine("Triggered");
}
and webjob-publish-settings.json is:
{
  "$schema": "http://schemastore.org/schemas/json/webjob-publish-settings.json",
  "webJobName": "KentDataImporterWebJob",
  "runMode": "Continuous"
}
I need a solution so that I can trigger these functions every 2 seconds and every 4 seconds, i.e. trigger different methods at different intervals in an Azure WebJob that is hosted alongside a web site.
Your code is almost right. A couple of caveats:
Make sure that in the Main method of your Program class you call config.UseTimers() and pass the config object to the JobHost constructor; if you do not, your TimerTriggers will not fire. Also, make sure that your WebJob is deployed as continuous.
public static void Main()
{
    var config = new JobHostConfiguration();
    config.UseTimers();
    var host = new JobHost(config);
    host.RunAndBlock();
}
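As a side note, TimerTrigger (from the Microsoft.Azure.WebJobs.Extensions package) also accepts six-field NCRONTAB expressions, which include a seconds field. A sketch of the same two schedules in that form:

// Every 2 seconds and every 4 seconds, expressed as NCRONTAB schedules.
public static void TimerTrig1([TimerTrigger("*/2 * * * * *")] TimerInfo timer)
{
    Console.WriteLine("Triggered every 2 seconds");
}

public static void TimerTrig2([TimerTrigger("*/4 * * * * *")] TimerInfo timer)
{
    Console.WriteLine("Triggered every 4 seconds");
}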

Azure web jobs - parallel message processing from queues not working properly

I need to provision SharePoint Online team rooms using Azure queues and WebJobs.
I have created a console application and published it as a continuous WebJob with the following settings:
config.Queues.BatchSize = 1;
config.Queues.MaxDequeueCount = 4;
config.Queues.MaxPollingInterval = TimeSpan.FromSeconds(15);
JobHost host = new JobHost();
host.RunAndBlock();
The trigger function looks like this:
public static void TriggerFunction([QueueTrigger("messagequeue")] CloudQueueMessage message)
{
    ProcessQueueMsg(message.AsString);
}
Inside the ProcessQueueMsg function I'm deserialising the received JSON message into a class and running the following operations:
I'm creating a sub site in an existing site collection;
Using the PnP provisioning engine, I'm provisioning content in the sub site (lists, uploaded files, permissions, quick launch, etc.).
If the queue holds only one message to process, everything works correctly.
However, when I send two messages to the queue a few seconds apart, the second message overwrites the class properties while the first is still being processed.
I tried running each message in a separate thread, but then the trigger functions are marked as succeeded before the message is actually processed inside my function. This way I have no control over potential exceptions or message dequeueing.
I also tried limiting the number of threads to 1 and using a semaphore, but got the same behavior:
private const int NrOfThreads = 1;
private static readonly SemaphoreSlim semaphore_ = new SemaphoreSlim(NrOfThreads, NrOfThreads);

// Inside TriggerFunction
try
{
    semaphore_.Wait();
    new Thread(ThreadProc).Start();
}
catch (Exception e)
{
    Console.Error.WriteLine(e);
}
public static void ThreadProc()
{
    try
    {
        DoWork();
    }
    catch (Exception e)
    {
        Console.Error.WriteLine(">>> Error: {0}", e);
    }
    finally
    {
        // Release a slot for another thread
        semaphore_.Release();
    }
}

public static void DoWork()
{
    Console.WriteLine("This is a web job invocation: Process Id: {0}, Thread Id: {1}.",
        System.Diagnostics.Process.GetCurrentProcess().Id, Thread.CurrentThread.ManagedThreadId);
    ProcessQueueMsg();
    Console.WriteLine(">> Thread Done. Processing next message.");
}
Is there a way to process messages in parallel without them interfering with each other while provisioning my sites?
Please let me know if you need more details.
Thank you in advance!
You're not passing the config object to your JobHost on construction; that's why your config settings aren't taking effect. Change your code to:
JobHost host = new JobHost(config);
host.RunAndBlock();
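Putting it together, a minimal sketch of the corrected Main with the settings from the question:

static void Main()
{
    var config = new JobHostConfiguration();
    config.Queues.BatchSize = 1;  // process one message at a time per host instance
    config.Queues.MaxDequeueCount = 4;
    config.Queues.MaxPollingInterval = TimeSpan.FromSeconds(15);

    JobHost host = new JobHost(config); // the config is now actually applied
    host.RunAndBlock();
}

With BatchSize = 1 applied, a single host instance processes queue messages one at a time, which should also make the semaphore workaround unnecessary.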

Azure Webjobs and Queues

I am working with an Azure Service Bus queue (or potentially a topic, if required) and would like to know how a WebJob can be used with the queue.
When a message arrives on the queue, it represents a process that will run within the WebJob (or be started from the WebJob). This process might be quick (30 seconds) or slow (an hour or more).
Can I use a single WebJob for this and somehow specify that it should run no more than 10 of these processes at a time?
Yes, you can use a WebJob. I have created a simple WebJob with a storage queue to show how it can be done. The workflow below runs only ten processes at a time and keeps all other requests in the memory of a ConcurrentQueue; you will have to implement the logic to dequeue and consume them.
public class Functions
{
    public delegate void CompletedProcessHandler(object sender, CompletedProcessHandlerArgs args);

    static readonly Dictionary<int, CustomProcess> _dictionary =
        new Dictionary<int, CustomProcess>();

    static readonly ConcurrentQueue<ProcessEntity> _remaining =
        new ConcurrentQueue<ProcessEntity>();

    // This function is triggered/executed when a new message is written
    // to the Azure storage queue called "testqueue".
    public static void ProcessQueueMessage([QueueTrigger("testqueue")] ProcessEntity msg,
        TextWriter log)
    {
        if (_dictionary.Count < 10)
        {
            // Start a new process and track it. Keys.Max() avoids the
            // InvalidOperationException that Last() would throw on an
            // empty dictionary, and adding the process to the dictionary
            // is what makes the 10-process cap actually take effect.
            var newId = _dictionary.Count == 0 ? 1 : _dictionary.Keys.Max() + 1;
            _dictionary.Add(newId, new CustomProcess(newId, msg.Duration));
        }
        else
        {
            _remaining.Enqueue(msg);
        }
    }

    public static void CompletedProcess(object sender, CompletedProcessHandlerArgs args)
    {
        _dictionary[Int32.Parse(args.ProcessID)].Dispose();
        _dictionary.Remove(Int32.Parse(args.ProcessID));
    }
}

public class CustomProcess : IDisposable
{
    public event Functions.CompletedProcessHandler OnProcessCompleted;

    private CancellationTokenSource _token;
    private string _id;
    private Timer _timer; // System.Timers.Timer

    public CustomProcess(int i, int duration)
    {
        _timer = new Timer { Enabled = true, Interval = duration * 1000 };
        _timer.Elapsed += Timer_Elapsed;
        _id = i.ToString();
        _token = new CancellationTokenSource();
        Task.Factory.StartNew(() => WriteMessages());
        _timer.Start();
        OnProcessCompleted += Functions.CompletedProcess;
    }

    private void Timer_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
    {
        _token.Cancel();
        OnProcessCompleted?.Invoke(this, new CompletedProcessHandlerArgs(_id));
    }

    private void WriteMessages()
    {
        while (!_token.Token.IsCancellationRequested)
        {
            Console.WriteLine("Test Message from process " + _id);
        }
    }

    public void Dispose()
    {
        _token.Dispose();
        _timer.Dispose();
    }
}

public class CompletedProcessHandlerArgs : EventArgs
{
    public string ProcessID { get; set; }

    public CompletedProcessHandlerArgs(string ID)
    {
        ProcessID = ID;
    }
}

public class ProcessEntity
{
    public int Duration { get; set; }
}
In the app.config of the WebJob you need to provide two connection strings:
<add name="AzureWebJobsDashboard"
     connectionString="DefaultEndpointsProtocol=https;AccountName=[AccountName];AccountKey=[AccountKey]" />
<add name="AzureWebJobsStorage"
     connectionString="DefaultEndpointsProtocol=https;AccountName=[AccountName];AccountKey=[AccountKey]" />
The Program file is the default one from the Visual Studio template:
public class Program
{
    // Please set the following connection strings in app.config for this WebJob to run:
    // AzureWebJobsDashboard and AzureWebJobsStorage
    static void Main()
    {
        var host = new JobHost();

        // The following code ensures that the WebJob runs continuously.
        host.RunAndBlock();
    }
}
The WebJob dequeues each message the moment it arrives. Since you want only 10 running at a time, you have to queue the extra messages in memory and wait for a running process to complete before starting a new one, as sketched below.
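One possible way to do that (a sketch of my own, not part of the original answer) is an extended version of CompletedProcess that promotes the next waiting message whenever a slot frees up:

// Hypothetical extension of CompletedProcess: when a slot frees up,
// promote the next queued message (if any) to a running process.
public static void CompletedProcess(object sender, CompletedProcessHandlerArgs args)
{
    var id = Int32.Parse(args.ProcessID);
    _dictionary[id].Dispose();
    _dictionary.Remove(id);

    // Drain the in-memory queue: start the next waiting message, if any.
    if (_remaining.TryDequeue(out ProcessEntity next))
    {
        var newId = _dictionary.Count == 0 ? 1 : _dictionary.Keys.Max() + 1;
        _dictionary.Add(newId, new CustomProcess(newId, next.Duration));
    }
}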
As #Rick mentioned, you can also set the is_singleton property to true in the settings.job file of the WebJob.
Yes, you can trigger a WebJob with an Azure Service Bus queue or topic. A good example to get you going is the Service Bus quick start project template in Visual Studio.
In particular, you want to look at the ServiceBusTrigger attribute that the WebJobs SDK provides.
As for scalability, the WebJob scales with your web app instances. So if you had, say, 5 instances of your web app with Always On enabled, you would have 5 instances of your WebJob. If you wanted just one instance of the WebJob across those 5 web app instances, you could set the is_singleton property to true in the settings.job file.
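For reference, a minimal settings.job pinning the WebJob to a single instance would look like this (the file sits next to the WebJob executable):

{
  "is_singleton": true
}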

Update storage tables when webjob is shutting down

My question is similar to this one:
Notification of when continuous Azure WebJob is stopping for NoAutomaticTrigger type jobs
I used the idea from Amit's blog but then hit a little roadblock.
I have a file watcher set up in the WebJob that gets triggered when the WebJob is shut down from the portal.
I need to update a few flags in my storage tables before the WebJob is terminated.
The problem is that my code seems to stop at the point where I try to retrieve a record from the storage table. I have an exception handler around the code below, but no exception message is written to the console.
Below is my code:
CloudStorageAccount storageAccount = CloudStorageAccount.Parse("my storage key");
var tableClient = storageAccount.CreateCloudTableClient();
var table = tableClient.GetTableReference("myTable");

TableOperation operation = TableOperation.Retrieve("partKey", "rowKey");
var result = table.Execute(operation); // gets stuck here

if (result.Result != null)
{
    MyEntity entity = (MyEntity)result.Result;
    if (entity != null)
    {
        entity.IsRunning = false; // reset the flag
        TableOperation update = TableOperation.InsertOrReplace(entity);
        table.Execute(update);    // update the record
    }
}
I have increased the stopping_wait_time in settings.job to 300 seconds but still no luck.
You could use Microsoft.Azure.WebJobs.WebJobsShutdownWatcher.
This is an implementation of Amit's solution: WebJobs Graceful Shutdown.
I found a solution by doing the following.
No modification is needed in Program.cs:
class Program
{
    static void Main()
    {
        var host = new JobHost();
        host.Call(typeof(Startup).GetMethod("Start"));
        host.RunAndBlock();
    }
}
The graceful shutdown logic goes in your function:
public class Startup
{
    [NoAutomaticTrigger]
    public static void Start(TextWriter log)
    {
        var token = new Microsoft.Azure.WebJobs.WebJobsShutdownWatcher().Token;

        // Shut down gracefully: keep working until shutdown is requested
        while (!token.IsCancellationRequested)
        {
            // Do something
        }

        // This code will be executed once the WebJob is shutting down
        Console.Out.WriteLine("WebJob is shutting down");
    }
}
After the while loop, you could also stop any tasks you started.
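As an alternative (a sketch on my part, though cancellation tokens are a documented WebJobs SDK feature), triggered functions can take a CancellationToken parameter that the host signals on shutdown, which avoids creating the watcher manually:

// The SDK injects a token that is cancelled when the WebJob is stopping,
// so cleanup (e.g. updating the storage-table flags) can run before exit.
public static void ProcessQueueMessage(
    [QueueTrigger("myqueue")] string message,
    TextWriter log,
    CancellationToken cancellationToken)
{
    while (!cancellationToken.IsCancellationRequested)
    {
        // Do work in small increments, checking the token between steps.
    }

    // Shutdown was requested: update the flags in the storage tables here.
}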
