Azure DevOps Hosted Build Controller - Is the Azure Storage Emulator supported?

I'd like to run unit / integration tests that utilise the Azure Storage Emulator rather than real storage from an Azure DevOps build.
The emulator is installed on the Hosted Build Controller as part of the Azure SDK in its usual place (C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator\AzureStorageEmulator.exe).
However, the emulator is in an uninitialised state on the Build Controller. When I try to run the init command from the command line, I get the following error:
This operation requires an interactive window station
Is there a known workaround for this, or plans to support the emulator in Azure DevOps builds?

Despite all the answers here to the contrary, I've been running the Azure Storage Emulator on a VS2017 hosted build agent for over a year.
The trick is to initialise SQL LocalDB first (the emulator uses it), and then start the emulator. You can do this with a command line task that runs:
sqllocaldb create MSSQLLocalDB
sqllocaldb start MSSQLLocalDB
sqllocaldb info MSSQLLocalDB
"C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator\AzureStorageEmulator.exe" start

As already stated, you can't run the Azure Storage Emulator. What you can run, though, is Azurite, an open-source alternative.
Please note: Azurite can emulate blobs, tables and queues; however, I have only used the blob storage emulation this way.
At the start of your build configuration, add a NuGet step that runs a custom NuGet command: install Azurite -version 2.2.2. Then add a command line step that runs: start /b $(Build.SourcesDirectory)\Azurite.2.2.2\tools\blob.exe.
It runs on the same port as the Azure Storage Emulator, so you can use the standard connection strings.
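For example, here is a minimal C# sketch (assuming the WindowsAzure.Storage package; "test-container" is just a placeholder name) showing that the usual development storage connection string resolves to Azurite, because it listens on the emulator's default blob endpoint:

using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;

public static class AzuriteSmokeTest
{
    public static async Task RunAsync()
    {
        // UseDevelopmentStorage=true points at 127.0.0.1:10000, where Azurite's blob.exe is listening
        var account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
        var client = account.CreateCloudBlobClient();
        var container = client.GetContainerReference("test-container"); // placeholder container name
        await container.CreateIfNotExistsAsync();
    }
}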

No, the Hosted Build Controller does not run in Interactive Mode, so the emulator won't work in that environment. See the Q&A in Hosted build controller for XAML builds for details.
Q: Do you need to run your build service in interactive mode?
A: No. Then you can use the hosted build controller.
I recommend you set up an on-premises build controller and run the build server in Interactive Mode. Refer to Setup Build Server and Setup Build Controller for details.

It seems the answer from the Visual Studio Online side is "maybe". There's a User Voice entry if anyone has a similar issue.
I'm not really sure why the emulator doesn't have a non-interactive mode; personally, I don't use its UI 99% of the time. There's also a general User Voice entry for making Azure Storage more unit testable.

If you want to start the Azure Storage Emulator right in your integration test code in C#, you can put this into your test initialization (startup) code (the example is for xUnit):
[Collection("Database collection")]
public sealed class IntegrationTests
{
public IntegrationTests(DatabaseFixture fixture)
{
this.fixture = fixture;
}
[Fact]
public async Task TestMethod1()
{
// use fixture.Table to run tests on the Azure Storage
}
private readonly DatabaseFixture fixture;
}
public class DatabaseFixture : IDisposable
{
public DatabaseFixture()
{
StartProcess("SqlLocalDB.exe", "create MSSQLLocalDB");
StartProcess("SqlLocalDB.exe", "start MSSQLLocalDB");
StartProcess("SqlLocalDB.exe", "info MSSQLLocalDB");
StartProcess(EXE_PATH, "start");
var client = CloudStorageAccount.DevelopmentStorageAccount.CreateCloudTableClient();
Table = client.GetTableReference("tablename");
InitAsync().Wait();
}
public void Dispose()
{
Table.DeleteIfExistsAsync().Wait();
StartProcess(EXE_PATH, "stop");
}
private async Task InitAsync()
{
await Table.DeleteIfExistsAsync();
await Table.CreateAsync();
}
static void StartProcess(string path, string arguments, int waitTime = WAIT_FOR_EXIT) =>
Process.Start(path, arguments).WaitForExit(waitTime);
public CloudTable Table { get; }
private const string EXE_PATH =
"C:\\Program Files (x86)\\Microsoft SDKs\\Azure\\Storage Emulator\\AzureStorageEmulator.exe";
private const int WAIT_FOR_EXIT = 60_000;
}
[CollectionDefinition("Database collection")]
public class DatabaseCollection : ICollectionFixture<DatabaseFixture>
{
// This class has no code, and is never created. Its purpose is simply
// to be the place to apply [CollectionDefinition] and all the
// ICollectionFixture<> interfaces.
}

Related

What is the difference between 'Settings.job' and 'TimerTrigger' in Azure WebJobs SDK 3.0?

There are many tutorials using the following code to create a WebJob via the WebJobs SDK 3.0 library, specifically 'TimerTrigger':
public void DoSomethingUseful([TimerTrigger("0 */1 * * * *", RunOnStartup = false)] TimerInfo timerInfo, TextWriter log)
{
    // Act on the DI-ed class:
    string thing = _usefulRepository.GetFoo();
    Console.WriteLine($"{DateTime.Now} - {thing}");
}
The above example should run this method as a WebJob every minute. However, this doesn't work.
I have managed to get the WebJob to work when including a settings.job file.
settings.job: { "schedule": "0 */1 * * * *" }
My question is: what is the difference between these two?
Update:
Go to the Azure WebJobs log and you can see it actually runs as per the TimerTrigger defined via the SDK (even though the Schedule shows n/a and settings.job is blank, that does not matter).
In short, when using the WebJobs SDK 3.x, you can use the TimerTrigger attribute to run the function on the schedule you define. Without the WebJobs SDK (e.g. uploading a .zip file or publishing a console project from Visual Studio), you can use settings.job to define the timer instead of the TimerTrigger attribute.
1. When you're using the WebJobs SDK 3.x for a timer trigger, you should add this line of code: config.AddTimers();
Here is my code using the WebJobs SDK 3.x (it's a .NET Core 2.2 console project created in Visual Studio):
The packages, at their latest versions: Microsoft.Azure.WebJobs / Microsoft.Azure.WebJobs.Extensions / Microsoft.Extensions.Logging.Console
The code in Program.cs:
class Program
{
    static void Main(string[] args)
    {
        var builder = new HostBuilder()
            .ConfigureWebJobs(config =>
            {
                config.AddTimers();
                config.AddAzureStorageCoreServices();
            })
            .ConfigureLogging((context, b) =>
            {
                b.AddConsole();
            })
            .Build();
        builder.Run();
    }
}
Then create a new file, like SayHelloWebJob.cs, with this code in it:
public class SayHelloWebJob
{
    public void ProcessCollateFiles([TimerTrigger("0 */1 * * * *", RunOnStartup = false)] TimerInfo timerInfo, TextWriter writer)
    {
        writer.WriteLine("hi, it is a testing running");
        Console.WriteLine("test");
    }
}
Note that in the appsettings.json file you should add your storage connection string, like below:
{
  "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=xxx;AccountKey=xxx;EndpointSuffix=core.windows.net"
}
Then run the project, and you can see the function is triggered every minute.
2. For settings.job: this applies, for example, if you're just creating a console project and not using the WebJobs SDK. Since you're not using the WebJobs SDK, you cannot use the TimerTrigger attribute. In that case, you can include the settings.job file in the project (in its properties, set "Copy to Output Directory" to "Copy if newer") and configure the scheduled timer as you did in your post. After publishing it as a WebJob (from Visual Studio, when publishing, select "WebJob run mode" as "Run on demand"), it will run on the schedule you defined in settings.job.
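For reference, a minimal settings.job for that scenario contains just the schedule (the same CRON format as the TimerTrigger example above):

{
  "schedule": "0 */1 * * * *"
}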
I have been struggling with the same problem. Here is my understanding after further research.
There are two kinds of WebJobs:
triggered
continuous
A triggered one has to be triggered manually, or can be triggered by App Service as per the CRON expression schedule provided in settings.job. These jobs are not present in memory when not running.
A continuous one always runs, so the process exists in memory all the time. You can schedule it using the WebJobs SDK TimerTrigger attribute.
You will also notice the difference between these two WebJob types in the Dashboard.
For triggered WebJobs you will see, at the top level, job runs, then functions invoked, and eventually invocation details.
For continuous WebJobs this will be functions invoked and eventually invocation details. Job runs are missing, as this is just one long-running job.
Check App Service / Process Explorer under the Kudu w3wp process to see the WebJob processes running.
Note that continuous and triggered WebJobs have to be started in different ways in the Main method where you provide the configuration; a rough sketch follows below. It all comes configured when a WebJob of a specific type is added via Visual Studio.
This is based on WebJobs 2.x.
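As a rough sketch of that difference (assuming the WebJobs 2.x packages; Functions.ProcessQueueMessage is a hypothetical job method, and the exact wiring depends on your project):

using Microsoft.Azure.WebJobs;

class Program
{
    static void Main()
    {
        var config = new JobHostConfiguration();
        config.UseTimers(); // required for TimerTrigger (Microsoft.Azure.WebJobs.Extensions)
        var host = new JobHost(config);

        // Continuous WebJob: keep the process alive so triggers keep firing.
        host.RunAndBlock();

        // Triggered WebJob: do the work once and exit; App Service (optionally via
        // settings.job) decides when to start the process again.
        // host.Call(typeof(Functions).GetMethod("ProcessQueueMessage"));
    }
}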
My recommendation is:
for periodic jobs (e.g. once every few hours or days) use triggered ones; a job that is not running will not consume resources,
for more frequent jobs use continuous ones with the TimerTrigger attribute; they consume resources all the time but don't need extra time for start-up.

Azure Functions v1 timer triggered function cannot start due to Windows Platform FIPS issue

I'm trying to develop a function app that uses a timer trigger, and I'm getting an issue with Windows Platform FIPS that prevents the timer-triggered function from starting locally. Here's the code that's causing the error (it's the default timer-triggered function):
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

namespace FunctionApp1
{
    public static class Function1
    {
        [FunctionName("Function1")]
        public static void Run([TimerTrigger("0 */5 * * * *")]TimerInfo myTimer, TraceWriter log)
        {
            log.Info($"C# Timer trigger function executed at: {DateTime.Now}");
        }
    }
}
When I try to run this function in func.exe, it produces the following error:
The listener for function 'Function1' was unable to start. mscorlib: Exception has been thrown by the target of an invocation. mscorlib: This implementation is not part of the Windows Platform FIPS validated cryptographic algorithms.
This exact code works on another dev environment that I have access to. What do I need to do to fix these Windows Platform FIPS issues so that the timer trigger will run?
Thanks!
If your environment does need this FipsAlgorithmPolicy somewhere else, disable it for Azure Functions only.
In File Explorer, open %localappdata%\AzureFunctionsTools\Releases\1.4.0\cli\func.exe.Config and add <enforceFIPSPolicy enabled="false"/> under the <runtime> element. Note that this way you have to repeat the step each time a new Functions CLI is released.
Similarly, if you use the Storage Emulator locally, open C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator\AzureStorageEmulator.exe.config and add <enforceFIPSPolicy enabled="false"/> under the <runtime> element.
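For reference, the relevant fragment of either .config file would look roughly like this (the rest of the file stays as it is):

<configuration>
  <runtime>
    <!-- Disables FIPS enforcement for this process only -->
    <enforceFIPSPolicy enabled="false"/>
  </runtime>
</configuration>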
Otherwise, just disable FipsAlgorithmPolicy for the whole computer:
In the search box (or right-click the Start button, click Run), enter regedit to open Registry Editor.
In the address bar (View > Address Bar), navigate to Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\FipsAlgorithmPolicy.
Double-click Enabled and change the Value data to 0.

Serilog not working in Service Fabric

I am using Serilog to write to a file, to try to get more information about an error that is occurring in my production cluster...
In my local dev cluster the log files are created fine, but they are not created on the VMs in my production cluster. I think this may be security related.
Has anyone ever had this?
My production cluster has 5 nodes with a Windows 2016 VM on each.
Even stranger is that this works on a single-node cluster in Azure:
public static ILogger ConfigureLogging(string appName, string appVersion)
{
    AppDomain.CurrentDomain.ProcessExit += (sender, args) => Log.CloseAndFlush();

    var configPackage = FabricRuntime.GetActivationContext().GetConfigurationPackageObject("Config");
    var environmentName = configPackage.GetSetting("appSettings", "Inspired.TradingPlatform:EnvironmentName");

    var loggerConfiguration = new LoggerConfiguration()
        .WriteTo.File(@"D:\SvcFab\applog-" + appName + ".txt", shared: true, rollingInterval: RollingInterval.Day)
        .Enrich.WithProperty("AppName", appName)
        .Enrich.WithProperty("AppVersion", appVersion)
        .Enrich.WithProperty("EnvName", environmentName);

    var log = loggerConfiguration.CreateLogger();
    log.Information("Starting {AppName} v{AppVersion} application", appName, appVersion);

    return Log.Logger = log;
}
Paul
I wouldn't recommend logging to local files in Service Fabric, since your service may be moved to another node at any time and you won't have access to those files. Consider using other sinks that write to an external system (a database, a message bus, or a logging system like Loggly).
It is likely a permission issue. Your service might be trying to log to a folder where it does not have permission.
By default, your services will run under the same user as the Fabric.exe process, which runs as NetworkService; you can find more information about this at this link.
I would not recommend this approach, for many reasons; a few of them are:
Your services might be moved around the cluster, so your files will be incomplete
You have to log on to multiple machines to find the logs
The node might be gone along with the files (scale up/down, failure, disk error)
Multiple instances on the same node trying to access the same file
and so on...
On Service Fabric, the recommended way is to use EventSource (or ETW) + EventFlow + Application Insights. They run smoothly together and bring you many features.
If you want to stay on Serilog, I would recommend Serilog + Application Insights instead; it will give you more flexibility in your monitoring. Take a look at the Application Insights sink for Serilog here.
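As a minimal sketch (assuming a recent version of the Serilog.Sinks.ApplicationInsights package; the exact overload names vary between sink versions, so check the sink's documentation for the one you use):

using Microsoft.ApplicationInsights.Extensibility;
using Serilog;

public static class LoggingSetup
{
    public static ILogger CreateLogger(string instrumentationKey)
    {
        var telemetryConfiguration = new TelemetryConfiguration { InstrumentationKey = instrumentationKey };

        // Events go to Application Insights instead of files on the node's disk
        return new LoggerConfiguration()
            .WriteTo.ApplicationInsights(telemetryConfiguration, TelemetryConverter.Traces)
            .CreateLogger();
    }
}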
This was actually user error! I was connecting to a different cluster of VMs than the one my service fabric was connected to! Whoops!

The binding type 'serviceBusTrigger' is not registered error in Azure Functions C# with Core Tools 2

I opened a fresh new Azure Functions project; my packages are:
Microsoft.Azure.WebJobs 3.0.0-beta4
Microsoft.Azure.WebJobs.ServiceBus 3.0.0-beta4
Microsoft.NET.Sdk.Functions 1.0.7
NETStandardLibrary 2.0.1
I use ServiceBusTrigger and my function code is basic:
public static class Function1
{
    [FunctionName("OrderPusherFunction")]
    public static Task Run([ServiceBusTrigger("orders", "orderpusher", Connection = "ServiceBus")]
        string myQueueItem, TraceWriter log)
    {
        log.Info($"C# Queue trigger function processed: {myQueueItem}");
        return Task.CompletedTask;
    }
}
I also have:
Azure Functions Core Tools (2.0.1-beta.22) and Function Runtime Version: 2.0.11415.0
When I run it, I get the "The binding type 'serviceBusTrigger' is not registered" error, and the function does not get triggered. Does anyone have an idea? This looks to me like a basic setup.
Basically, in v2 the Service Bus trigger was moved out of the default installation into the extensibility model. You need to register the Service Bus binding as an extension, as per Binding Extensions Management.
Unfortunately, this is all work in progress, as there are a number of issues with the Service Bus binding:
Migrate ServiceBus Extension to .NET Core - "Done", but see the comments for which problems still exist
Build failure after installing ExtensionsMetadataGenerator into an empty v2 app prevents VS tooling from registering the extension properly
Extensions.json is not updated when the "extension install" CLI command is executed for the Service Bus extension (CLI issue)
My advice would be to stick to v1 for now.

How to run WebJobs in the Azure emulator

I have a console-based application as a WebJob. Internally, I am trying to map a CloudDrive using the storage connection string UseDevelopmentStorage=true.
It is throwing the exception ERROR_AZURE_DRIVE_DEV_PATH_NOT_SET. I searched for this error and found that WebJobs do not run locally in the Azure emulator. Is this information still valid?
Is there any plan to provide emulator (storage) support for WebJobs in the near future, say in a week or so?
thanks
The information is still valid - we don't support the Azure emulator.
We have that work item on our backlog but I cannot give you any ETA.
Boo hoo Microsoft... This seems rather stupid given that you want us to start adopting the use of Azure Web Jobs!
There are a few new lines of code in the current version which I believe solve this issue:
static void Main()
{
    var config = new JobHostConfiguration();
    if (config.IsDevelopment)
    {
        // Picks up local development settings, including development storage
        config.UseDevelopmentSettings();
    }
    // Pass the configuration to the host so the development settings take effect
    var host = new JobHost(config);
    // The following code ensures that the WebJob will be running continuously
    host.RunAndBlock();
}
