I have a console-based application running as a WebJob. Internally I am trying to map a CloudDrive using the storage connection string UseDevelopmentStorage=true.
It throws the exception ERROR_AZURE_DRIVE_DEV_PATH_NOT_SET. I searched for this error and found that WebJobs do not run locally in the Azure emulator. Is this information still valid?
Is there any plan to provide emulator (storage) support for WebJobs in the near future, say in a week or so?
thanks
The information is still valid - we don't support the Azure emulator.
We have that work item on our backlog but I cannot give you any ETA.
Boo hoo Microsoft... This seems rather stupid given that you want us to start adopting the use of Azure Web Jobs!
There are a few new lines of code in the current version which I believe solve this issue:
static void Main()
{
    var config = new JobHostConfiguration();

    // When running locally, switch the host to development settings
    // (development storage, faster polling, etc.).
    if (config.IsDevelopment)
    {
        config.UseDevelopmentSettings();
    }

    // Pass the configuration into the host; with the parameterless
    // JobHost() constructor the development settings above are ignored.
    var host = new JobHost(config);

    // The following code ensures that the WebJob will be running continuously.
    host.RunAndBlock();
}
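For reference, config.IsDevelopment returns true when the AzureWebJobsEnv environment variable is set to Development, so you'll need that set on your local machine for UseDevelopmentSettings to kick in.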
I am using Serilog to write to a file and am trying to get more information about an error that is occurring in my production cluster...
In my local dev cluster the log files are created fine, but they are not created on the VMs in my production cluster. I think this may be security related.
Has anyone ever had this?
My production cluster has 5 nodes with a Windows 2016 VM on each.
Even more strange is that this works on a single-node cluster in Azure.
public static ILogger ConfigureLogging(string appName, string appVersion)
{
    AppDomain.CurrentDomain.ProcessExit += (sender, args) => Log.CloseAndFlush();

    var configPackage = FabricRuntime.GetActivationContext().GetConfigurationPackageObject("Config");
    var environmentName = configPackage.GetSetting("appSettings", "Inspired.TradingPlatform:EnvironmentName");

    // shared: true lets multiple processes on the node write to the same file.
    var loggerConfiguration = new LoggerConfiguration()
        .WriteTo.File(@"D:\SvcFab\applog-" + appName + ".txt", shared: true, rollingInterval: RollingInterval.Day)
        .Enrich.WithProperty("AppName", appName)
        .Enrich.WithProperty("AppVersion", appVersion)
        .Enrich.WithProperty("EnvName", environmentName);

    var log = loggerConfiguration.CreateLogger();
    log.Information("Starting {AppName} v{AppVersion} application", appName, appVersion);
    return Log.Logger = log;
}
Paul
I wouldn't recommend logging to local files in Service Fabric, since your node may be moved to another VM at any time and you won't have access to those files. Consider using another sink which writes to an external system (a database, message bus, or logging system like Loggly).
It is likely a permission issue. Your service might be trying to log to a folder where it does not have permission.
By default, your services will run under the same user as the Fabric.exe process, which runs as NetworkService; you can find more information about this at this link.
I would not recommend this approach, for many reasons, a few of them being:
Your services might be moved around the cluster, so your files will be incomplete
You have to log on to multiple machines to find the logs
The node might be gone along with the files (scale up/down, failure, disk error)
Multiple instances on the same node trying to access the same file
and so on...
On Service Fabric, the recommended way is to use EventSource (or ETW) + EventFlow + Application Insights. They run smoothly together and bring you many features.
If you want to stay on Serilog, I would recommend using Serilog + Application Insights instead; it will give you more flexibility in your monitoring. Take a look at the Application Insights sink for Serilog here.
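As a rough illustration, wiring Serilog up to Application Insights might look something like this - a minimal sketch, assuming the Serilog.Sinks.ApplicationInsights package is installed, with a placeholder instrumentation key:
using Microsoft.ApplicationInsights.Extensibility;
using Serilog;

public static class LoggingSetup
{
    public static ILogger ConfigureAppInsightsLogging(string appName)
    {
        // Placeholder key; in a real service read it from the Config package.
        var telemetryConfig = TelemetryConfiguration.CreateDefault();
        telemetryConfig.InstrumentationKey = "<your-instrumentation-key>";

        // Events flow to Application Insights instead of node-local files,
        // so logs survive node moves and can be queried in one place.
        return Log.Logger = new LoggerConfiguration()
            .Enrich.WithProperty("AppName", appName)
            .WriteTo.ApplicationInsights(telemetryConfig, TelemetryConverter.Traces)
            .CreateLogger();
    }
}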
This was actually user error! I was connecting to a different cluster of VMs than the one my service fabric was connected to! Whoops!
So I'm trying to get a small project of mine going that I want to host on Azure. It's a web app which works fine, and I've recently found WebJobs, which I now want to use to run a task that does data gathering and updating, which I have a console app for.
My problem is that I can't set a schedule, since it is published to the web app, which doesn't support scheduling. So I tried using the Azure WebJobs SDK and a timer, but it won't run without an AzureWebJobsStorage connection string, which I cannot get since my Azure account is a Dreamspark account and I cannot create an Azure Storage account with it.
So I was wondering if there is some way to get this WebJob to run on a timer somehow (every hour or so). Otherwise, if I just upgraded my account to "Pay-As-You-Go", would I still retain my free features, namely SQL Server?
I'm not sure if this is the right place to ask, but I tried googling for it without success.
Update: I decided to just make the console app run in an infinite loop, and I'll monitor it through the portal. The code below is what I am using to make that loop.
using System;
using System.Timers;

class Program
{
    static void Main()
    {
        // Fire every 30 minutes.
        var time = 1000 * 60 * 30;
        var myTimer = new Timer(time);

        // Attach the handler before starting so no early ticks are missed.
        myTimer.Elapsed += myTimer_Elapsed;
        myTimer.Start();

        // Keep the console process alive so the timer keeps firing.
        Console.ReadLine();
    }

    public static void myTimer_Elapsed(object sender, ElapsedEventArgs e)
    {
        Functions.PullAndUpdateDatabase();
    }
}
The simplest way to get your WebJob on a schedule is detailed in Amit Apple's blog post titled "How to add a schedule to a triggered WebJob".
It's as simple as adding a JSON file called settings.job to your console application and describing in it the schedule you want as a cron expression, like so:
{"schedule": "the schedule as a cron expression"}
For example, to run your job every 30 minutes you'd have this in your settings.job file:
{"schedule": "0 0,30 * * * *"}
Amit's blog also goes into details on how to write a cron expression.
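For reference, these are six-field cron expressions of the form {second} {minute} {hour} {day} {month} {day-of-week}, which is why the example above fires at minutes 0 and 30 of every hour.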
Caveat: The scheduling mechanism used in this method is hosted on the instance where your web application is running. If your web application is not configured as Always On and is not in constant use, it might be unloaded, and the scheduler will then stop running.
To prevent this you will either need to set your web application to Always On or choose an alternative scheduling option - based on the Azure Scheduler service, as described in a blog post titled "Hooking up a scheduler job to a WebJob" written by David Ebbo.
I'm running into some tough-to-explain oddities when trying to retrieve messages from my local storage queues. I'm fairly sure this isn't happening in production against actual Azure queues.
The line in particular causing this issue is:
msgs = await priorityQueue.GetMessagesAsync(Settings.NumberOfMessagesToGet, visibilityTimeSpan, null, null);
which will just do nothing and never seems to return. However, replacing it with:
msgs = priorityQueue.GetMessages(Settings.NumberOfMessagesToGet, visibilityTimeSpan, null, null);
returns once it's done and seems fine.
Am I using the await here right? Any ideas why this isn't working?
I'm using the Windows Azure SDK 2.8, with the Windows Azure Storage Emulator 4.2.0.0, in case it gives any clues.
The Azure Storage Queues sample on GitHub demonstrates how to use async patterns with the .NET Client library from a console application:
https://github.com/Azure-Samples/storage-queue-dotnet-getting-started
Notice that at the top level of the program, the "Wait()" method is used:
ProcessBatchOfMessagesAsync(queue).Wait();
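A rough sketch of that pattern against the emulator (the queue name, message count, and visibility timeout here are illustrative):
using System;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class Program
{
    static void Main()
    {
        // Block only at the console entry point; everything below is async.
        ProcessBatchOfMessagesAsync().Wait();
    }

    static async Task ProcessBatchOfMessagesAsync()
    {
        var account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
        var queue = account.CreateCloudQueueClient().GetQueueReference("priority");

        // Fetch up to 32 messages, hidden for 5 minutes while we process them.
        var msgs = await queue.GetMessagesAsync(32, TimeSpan.FromMinutes(5), null, null);
        foreach (var msg in msgs)
        {
            Console.WriteLine(msg.AsString);
            await queue.DeleteMessageAsync(msg);
        }
    }
}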
I am having problems getting the Microsoft.Azure.Documents library to initialize the client in an Azure worker role. I'm using NuGet package 0.9.1-preview.
I have mimicked what was done in the example for Azure DocumentDB.
When running locally through the emulator I can connect fine to the DocumentDB and it runs as expected. When running in the worker role, I am getting a series of NullReferenceExceptions and then an ArgumentNullException.
The bottom System.NullReferenceException that is highlighted above has this call stack,
so the NullReferenceExceptions start in this call, at the new DocumentClient:
var endpoint = "myendpoint";
var authKey = "myauthkey";
var endpointUri = new Uri(endpoint);
DocumentClient client = new DocumentClient(endpointUri, authKey);
Nothing changes between running it locally vs. on the worker role other than the environment (obviously).
Has anyone gotten DocumentDB to work in a worker role, or does anyone have an idea why it would be throwing null reference exceptions? The parameters passed into DocumentClient() are filled.
UPDATE:
I tried to rewrite it more generically, which at least let the worker role run and let me attach a debugger. It is throwing the error on the new DocumentClient. It seems like something security-related is null; both of the required initialization parameters are non-null. Is there a security setting I need to change for my worker role to be able to connect to my DocumentDB? (It still works fine locally.)
UPDATE 2:
I can get the instance to run in release mode, but not debug mode, so it must be some security or storage setting that is misconfigured, I guess?
It seems I'm getting System.Security.SecurityExceptions - only when using DocumentDB; the queues do not give me that error. All call stacks for that error seem to involve System.Diagnostics.EventLog. The very first exception I see in the IntelliTrace summary is System.Threading.WaitHandleCannotBeOpenedException.
More info
IntelliTrace summary exception data: the top is the earliest and the bottom is the latest (so the System.Security.SecurityException happens first, then the NullReferenceException).
The solution for me to get rid of the security exception and null reference exception was to disable IntelliTrace. Once I did that, I was able to deploy, attach the debugger, and see everything working.
Not sure what the connection is between IntelliTrace and the DocumentClient, but hopefully it's just related to the NuGet package and will be fixed in the next iteration.
Unable to repro.
I created a new worker role, single instance, and added the authkey & endpoint config to the cscfg.
Created a private static DocumentClient at the WorkerRole class level.
Initialized the DocumentClient in OnStart.
Disposed of the DocumentClient in OnStop.
In RunAsync, inside the loop, executed a query - works as expected.
Tested in the emulator: works.
Deployed as Release to the Production slot: works.
Deployed as Debug to Staging with remote debug: works.
Attached VS to the cloud service; breakpoint hit inside the loop.
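A minimal sketch of that client lifetime (the cscfg setting names here are illustrative, not from the original):
using System;
using Microsoft.Azure.Documents.Client;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    // Single shared client for the role instance, created once in OnStart.
    private static DocumentClient _client;

    public override bool OnStart()
    {
        // "DocDbEndpoint" / "DocDbAuthKey" are hypothetical cscfg setting names.
        var endpoint = RoleEnvironment.GetConfigurationSettingValue("DocDbEndpoint");
        var authKey = RoleEnvironment.GetConfigurationSettingValue("DocDbAuthKey");
        _client = new DocumentClient(new Uri(endpoint), authKey);
        return base.OnStart();
    }

    public override void OnStop()
    {
        if (_client != null) _client.Dispose();
        base.OnStop();
    }
}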
Working solution : http://ryancrawcour.blob.core.windows.net/samples/AzureCloudService1.zip
I've been running a cloud drive snapshot in dev for a while now with no probs. I'm now trying to get this working in Azure.
I can't for the life of me get it to work. This is my latest error:
Microsoft.WindowsAzure.Storage.CloudDriveException: Unknown Error HRESULT=D000000D --->
Microsoft.WindowsAzure.CloudDrive.Interop.InteropCloudDriveException: Exception of type 'Microsoft.WindowsAzure.CloudDrive.Interop.InteropCloudDriveException' was thrown.
at ThrowIfFailed(UInt32 hr)
at Microsoft.WindowsAzure.CloudDrive.Interop.InteropCloudDrive.Mount(String url, SignatureCallBack sign, String mount, Int32 cacheSize, UInt32 flags)
at Microsoft.WindowsAzure.StorageClient.CloudDrive.Mount(Int32 cacheSize, DriveMountOptions options)
Any idea what is causing this? I'm running both the WorkerRole and Storage in Azure so it's nothing to do with the dev simulation environment disconnect.
This is my code to mount the snapshot:
CloudDrive.InitializeCache(localPath.TrimEnd('\\'), size);

var container = _blobStorage.GetContainerReference(containerName);
var blob = container.GetPageBlobReference(driveName);

CloudDrive cloudDrive = _cloudStorageAccount.CreateCloudDrive(blob.Uri.AbsoluteUri);

string snapshotUri;
try
{
    snapshotUri = cloudDrive.Snapshot().AbsoluteUri;
    Log.Info("CloudDrive Snapshot = '{0}'", snapshotUri);
}
catch (Exception ex)
{
    throw new InvalidCloudDriveException(string.Format(
        "An exception has been thrown trying to create the CloudDrive '{0}'. This may be because it doesn't exist.",
        cloudDrive.Uri.AbsoluteUri), ex);
}

cloudDrive = _cloudStorageAccount.CreateCloudDrive(snapshotUri);
Log.Info("CloudDrive created: {0}", snapshotUri);

string driveLetter = cloudDrive.Mount(size, DriveMountOptions.None);
The .Mount() method at the end is what's now failing.
Please help as this has me royally stumped!
Thanks in advance.
Dave
I finally got this to work last night. All I did was create a new container and upload my VHD to it so I'm not sure if there was something weird going on with the old container...? Can't think what. The old container must've been getting a bit long in the tooth..!?!
Two days of my life I'll never get back. Debugging live Azure issues is an excruciatingly tedious process.
It's a shame the Azure CloudDrive dev simulation doesn't more closely replicate the live environment.
One source of the D000000D InteropCloudDriveException is when the drive (or snapshot) being mounted is expandable rather than fixed size. Unfortunately, the MSDN documentation provides minimal information on restrictions, but this note is an excellent source of information:
http://convective.wordpress.com/2010/02/27/azure-drive/
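One way to guarantee a fixed-size drive is to provision the VHD through the storage client itself rather than uploading one built elsewhere. A minimal sketch, reusing _cloudStorageAccount and blob from the question's code, with an arbitrary size:
// CloudDrive.Create provisions a fixed-size VHD page blob, which avoids
// the expandable-VHD mount failure described above.
CloudDrive drive = _cloudStorageAccount.CreateCloudDrive(blob.Uri.AbsoluteUri);
drive.Create(64); // size in MB - illustrative
string driveLetter = drive.Mount(0, DriveMountOptions.None);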
I can confirm Dave's findings regarding the blob container (love you Dave, I only spent one evening).
I also had problems debugging before changing the blob container.
The error message I had was "there was an error attaching the debugger to the IIS worker process for url ...".
Hope this helps some poor Azure dev having a challenging time with the debugger.