I am having problems getting the Microsoft.Azure.Documents library to initialize the client in an Azure worker role. I'm using NuGet package 0.9.1-preview.
I have mimicked what was done in the example for Azure DocumentDB.
When running locally through the emulator I can connect to DocumentDB fine and it runs as expected. When running in the worker role, I am getting a series of NullReferenceExceptions and then ArgumentNullExceptions.
The bottom System.NullReferenceException highlighted above has this call stack.
So the NullReferenceExceptions start in this call, at the new DocumentClient:
var endpoint = "myendpoint";
var authKey = "myauthkey";
var endpointUri = new Uri(endpoint);
DocumentClient client = new DocumentClient(endpointUri, authKey);
Nothing changes between running it locally and running it in the worker role other than the environment (obviously).
Has anyone gotten DocumentDB to work in a worker role, or does anyone have an idea why it would be throwing NullReferenceExceptions? The parameters being passed into DocumentClient() are populated.
UPDATE:
I tried rewriting it to be more generic, which at least let the worker role run and let me attach a debugger. It is throwing the error on the new DocumentClient. It seems like something security-related is null. Both of the required initialization parameters are non-null. Is there a security setting I need to change for my worker role to be able to connect to my DocumentDB? (It still works fine locally.)
UPDATE 2:
I can get the instance to run in Release mode, but not Debug mode. So it must be some security or storage setting that is misconfigured, I guess?
It seems I'm getting System.Security.SecurityExceptions, but only when using DocumentDB; queues do not give me that error. All call stacks for that error seem to involve System.Diagnostics.EventLog. The very first exception I see in the IntelliTrace summary is System.Threading.WaitHandleCannotBeOpenedException.
More Info
IntelliTrace summary exception data:
Top is the earliest and bottom is the latest (so the System.Security.SecurityException happens first, then the NullReferenceException).
The solution for me to get rid of the security exception and null reference exception was to disable IntelliTrace. Once I did that, I was able to deploy, attach a debugger, and see everything working.
Not sure what the link is between the null in IntelliTrace and the DocumentClient, but hopefully it's just related to the NuGet package and it will be fixed in the next iteration.
Unable to repro.
I created a new Worker Role. Single instance. Added authkey & endpoint config to cscfg.
Created private static DocumentClient at WorkerRole class level
Init DocumentClient in OnStart
Dispose DocumentClient in OnStop
In RunAsync, inside the loop, executed a query (sketched below). Works as expected.
Tested in the emulator. Works.
Deployed as Release to the Production slot. Works.
Deployed as Debug to Staging with Remote Debug. Works.
Attached VS to CloudService, breakpoint hit inside loop.
Working solution: http://ryancrawcour.blob.core.windows.net/samples/AzureCloudService1.zip
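In outline, the setup above looks roughly like this (a sketch only, not taken from the sample; the setting names, sleep interval, and collection link are assumptions):

using System;
using System.Linq;
using System.Threading;
using Microsoft.Azure.Documents.Client;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    // DocumentClient held at class level and reused across iterations.
    private static DocumentClient client;

    public override bool OnStart()
    {
        var endpoint = new Uri(RoleEnvironment.GetConfigurationSettingValue("DocDbEndpoint"));
        var authKey = RoleEnvironment.GetConfigurationSettingValue("DocDbAuthKey");
        client = new DocumentClient(endpoint, authKey);
        return base.OnStart();
    }

    public override void OnStop()
    {
        client.Dispose();
        base.OnStop();
    }

    public override void Run()
    {
        while (true)
        {
            // Execute a simple query each iteration ("dbs/mydb/colls/mycoll" is a placeholder collection link).
            var docs = client.CreateDocumentQuery("dbs/mydb/colls/mycoll", "SELECT * FROM root")
                .AsEnumerable()
                .ToList();
            Thread.Sleep(10000);
        }
    }
}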
Related
I have a function app with one HttpTrigger function and 3 BlobTrigger functions. After I deployed it, the HTTP trigger works fine, but the blob-triggered functions give the following errors:
"Stopping the listener 'Microsoft.Azure.WebJobs.Host.Blobs.Listeners.BlobListener' for function " for one function
Stopping the listener 'Microsoft.Azure.WebJobs.Host.Listeners.CompositeListener' for function
" for another two
I verified against other environments and the config values are the same/similar, so I'm not sure why we are getting this issue in only one environment. I am using the Consumption plan.
Update: When a file is placed in the blob container, the function is not triggered.
I observed the same message when working with an Azure Functions queue trigger:
This message doesn't mean there is an error in the function. When function activity times out, this message appears under App Insights > Traces.
I stopped sending messages to the queue for some time and observed traces like "Web Job Host Stopped". If you run the function again, or there is continuous activity in the function, this message will not appear in the traces.
If you are using Elastic Premium and have VNET integration, non-HTTP triggers need Runtime Scale Monitoring enabled.
You can find this under Function App --> Configuration --> Function runtime settings; turn on Runtime Scale Monitoring.
If the function app and the storage account that holds the function's metadata are private-linked, you will need to add the app setting WEBSITE_CONTENTOVERVNET = 1.
Also, make sure you have private endpoints for blob, file, table, and queue on the storage account.
I created a ticket with MS to fix this issue. After analysis I made the following code changes:
The function was async but returned void, so I changed it to return Task.
For the trigger I was using a connection string from app settings, but I changed it to AzureWebJobsStorage (even though both were the same) in the function trigger attribute's connection parameter.
It started working, so I'm posting here in case it is helpful for others. A sketch of the resulting trigger is below.
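A minimal sketch of what the fixed trigger looks like (in-process C#; the function name, container path, and body are assumptions):

using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ProcessBlob
{
    [FunctionName("ProcessBlob")]
    public static async Task Run(
        // Connection now points at AzureWebJobsStorage instead of a custom app setting.
        [BlobTrigger("input-container/{name}", Connection = "AzureWebJobsStorage")] Stream blob,
        string name,
        ILogger log)
    {
        // Async function now returns Task instead of void.
        log.LogInformation($"Processing blob {name} ({blob.Length} bytes)");
        await Task.CompletedTask; // placeholder for the real processing
    }
}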
We have been trying to use a SqlConnection within a TransactionScope. When we build the site and try this database call, we run into an error:
A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)
The error occurs on the line cnn.Open():
using (var scope = new TransactionScope())
using (var cnn = new SqlConnection(connectionString))
{
    cnn.Open();
    int result = cnn.QuerySingle<int>("SELECT 1");
    Console.WriteLine(result);
}
We created a console application to figure out what was wrong and discovered that changing the connection string keyword 'Pooling' from 'false' to 'true' allows this to run in the console application and successfully return our result.
We made the same change to our site's connection string, but the same error as before returns.
Is there any reason this code is not working?
I was under the assumption that the web.config was law, since viewing the file through the Kudu service showed my expected connection string, but apparently this isn't the case in Azure.
I discovered that the Azure publish profile was overriding our web.config connection string, and this override still contained 'Pooling=false'.
Removing the override now allows our code to run as intended.
This blog post explains more:
"When this code runs on a developer’s local machine, the value returned will be the one from the web.config file. However when this code runs in Windows Azure Web Sites, the value returned will instead be overridden with the value entered in the portal"
I have a WebJob running in Azure that is processing data sent to an Event Hub.
In the eventprocessor I want to save information to a SQL server. To make sure that everything is inserted correctly I want to use transactions.
When I run the code locally, everything works perfectly. But when running in Azure, nothing happens and no error is thrown.
From what I have read, it should be possible to use TransactionScope. The example code below is not working:
using (TransactionScope scope = new TransactionScope())
{
    dataImportDao.StartProcessingMessage(mappedMessage);
    scope.Complete();
}
Any suggestions on how to solve this, or whether I should go with a different approach, would be very much appreciated.
I am running an Azure WebJobs SDK console application (continuous) with the recommended setup:
public static void ProcessQueueMessage([QueueTrigger("logqueue")] string logMessage, TextWriter logger)
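For reference, a minimal sketch of the surrounding host setup (WebJobs SDK 1.x; the class names and the logging body are assumptions, and the BatchSize tweak from the notes below is included):

using System.IO;
using Microsoft.Azure.WebJobs;

public class Functions
{
    public static void ProcessQueueMessage([QueueTrigger("logqueue")] string logMessage, TextWriter logger)
    {
        logger.WriteLine(logMessage);
    }
}

public class Program
{
    public static void Main()
    {
        var config = new JobHostConfiguration();
        config.Queues.BatchSize = 1; // as mentioned in the notes below
        var host = new JobHost(config);
        host.RunAndBlock(); // continuous WebJob
    }
}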
The Azure queue I am running against has ~6000 messages in it, and I am running the WebJob locally as a console application.
The problem I'm having is that the processing randomly stops after processing between zero and ~30 messages. The console stays open, but no more console messages are displayed.
For example, it might just process 2 messages:
Executing: 'Functions.ProcessQueueMessage' - Reason: 'New queue message detected on 'QueueName'.'
Executed: 'Functions.ProcessQueueMessage' (Succeeded)
Executing: 'Functions.ProcessQueueMessage' - Reason: 'New queue message detected on 'QueueName'.'
Executed: 'Functions.ProcessQueueMessage' (Succeeded)
And then, nothing. There doesn't seem to be anything wrong with my internet connection, and I can't trace the issue down to any particular messages.
Has anyone else had issues with this SDK?
Update:
I made sure that I was using the right versions of all of the dependencies by removing the NuGet packages and then re-running Install-Package Microsoft.Azure.WebJobs. I am now using WebJobs version 1.1.0, which pulled in version 4.3 of Azure Storage.
As recommended by Matthew, I pulled down the source code for Azure WebJobs to determine where the process is freezing up. Once the freeze-up occurs, I pause execution and check the running threads for what I believe is the culprit, within Microsoft.Azure.WebJobs.Host.CompositeTraceWriter:
protected virtual void InvokeTextWriter(TraceEvent traceEvent)
{
    if (_innerTextWriter != null)
    {
        string message = traceEvent.Message;
        if (!string.IsNullOrEmpty(message) &&
            message.EndsWith("\r\n", StringComparison.OrdinalIgnoreCase))
        {
            // remove any terminating return+line feed, since we're
            // calling WriteLine below
            message = message.Substring(0, message.Length - 2);
        }
        _innerTextWriter.WriteLine(message);
        if (traceEvent.Exception != null)
        {
            _innerTextWriter.WriteLine(traceEvent.Exception.ToDetails());
        }
    }
}
The line it freezes on is line 66: _innerTextWriter.WriteLine(message);
_innerTextWriter is an instance of System.IO.TextWriter.SyncTextWriter
Is it possible there is some deadlock issue with this class or the way it is being used?
Some notes:
I am running in the debugger, so in this case I believe the TextWriter is forwarding to the console internally.
I have my batch size set to 1 via config.Queues.BatchSize = 1; not sure if that could matter.
I'm currently working on setting up an environment on another computer so that I can see if it is reproducible somewhere other than this machine (surface book).
Update
The issue was me not understanding how the new Windows 10 command prompt works. Any time you click on the command window, it goes into "select" mode, which completely pauses execution of the process.
Basically: https://superuser.com/questions/419717/windows-command-prompt-freezing-randomly?newreg=ece53f5584254346be68f85d1fd2f18d
You can tell it is in this state because it will prefix the window title with the word "Select":
You have to press enter or click again to get it going once again.
So, two final comments:
1) What an incredibly confusing and un-intuitive behavior for a command window!
2) I hope some admin will come take pity on the shame I have brought upon myself and my family by deleting this question.
To get rid of this strange behavior, you can disable QuickEdit mode:
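If you'd rather guard against it from code than via the console properties dialog, something like this sketch (standard Win32 console API via P/Invoke) should turn QuickEdit off for the process's console:

using System;
using System.Runtime.InteropServices;

static class ConsoleQuickEdit
{
    private const int STD_INPUT_HANDLE = -10;
    private const uint ENABLE_QUICK_EDIT_MODE = 0x0040;
    private const uint ENABLE_EXTENDED_FLAGS = 0x0080;

    [DllImport("kernel32.dll", SetLastError = true)]
    private static extern IntPtr GetStdHandle(int nStdHandle);

    [DllImport("kernel32.dll", SetLastError = true)]
    private static extern bool GetConsoleMode(IntPtr hConsoleHandle, out uint lpMode);

    [DllImport("kernel32.dll", SetLastError = true)]
    private static extern bool SetConsoleMode(IntPtr hConsoleHandle, uint dwMode);

    public static void Disable()
    {
        IntPtr handle = GetStdHandle(STD_INPUT_HANDLE);
        if (GetConsoleMode(handle, out uint mode))
        {
            mode &= ~ENABLE_QUICK_EDIT_MODE;   // turn QuickEdit off so clicks can't pause the process
            mode |= ENABLE_EXTENDED_FLAGS;     // required when changing these flags
            SetConsoleMode(handle, mode);
        }
    }
}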
Strange. When it is in this stuck state, can you try adding a new queue message to the queue and see if that triggers? Are you sure your function isn't hanging internally? What version of the SDK are you using? You might also try upgrading to v1.1.0, which we just released last week. If there really are a bunch of messages in the queue waiting to be processed, I can't think of anything that would cause this. The queue listener in the SDK should chug along, reading batches of messages in parallel and dispatching them to your function. Have you changed any of the JobHostConfiguration.Queues configuration knobs? You haven't force-updated the Azure Storage SDK to a version higher than the WebJobs SDK supports, have you?
Another option if you can't figure this out might be to clone the SDK, build it and debug it locally. The repo is here. The main queue processing loop is here.
I am getting a System.AccessViolationException during the execution of an Azure web role (run on the Azure emulator; this has not been uploaded to Azure yet) when a call is made to an overridden method of an object and a local int variable is passed as one of the method parameters. The exception message is "Attempted to read or write protected memory. This is often an indication that other memory is corrupt".
The code where the exception is thrown is part of a local library that has been used for several years on live systems (not Azure) with no issues. The part that errors is as follows:
foreach (XmlDataComponent item in this.items)
{
    int index = 0;
    XmlNode node = item.ToXml(dataSet, xmlDocument, this, index); // Exception thrown when this call is made
    ...
}
XmlDataComponent is a base class; at runtime, item is an instance of one of its derived classes. The ToXml() method is overridden in the derived classes. The exception is thrown as soon as the call is made to ToXml().
The problem is the index parameter. If I swap this to use an explicit value instead of the local variable, e.g.
item.ToXml(dataSet, xmlDocument, this, 0)
there are no errors.
Similarly, if I cast the item to its actual type, e.g.
((XmlDataItem)item).ToXml(dataSet, xmlDocument, this, index)
and mark the ToXml() method in the XmlDataItem class as new instead of override, there are no errors.
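To make the two workarounds concrete, this is roughly the shape of the hierarchy involved (a sketch only; everything apart from the two class names and ToXml() is an assumption):

using System.Data;
using System.Xml;

public abstract class XmlDataComponent
{
    // Virtual in the base; overriding this and dispatching through the base
    // reference is the combination that triggers the crash under the emulator.
    public virtual XmlNode ToXml(DataSet dataSet, XmlDocument xmlDocument, object parent, int index)
    {
        return null;
    }
}

public class XmlDataItem : XmlDataComponent
{
    // Workaround: changing 'override' to 'new' (and calling through an
    // XmlDataItem reference, as shown above) avoids the error.
    public override XmlNode ToXml(DataSet dataSet, XmlDocument xmlDocument, object parent, int index)
    {
        return null; // real implementation builds the node
    }
}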
I have also tried calling the library from a console application rather than a web role with exactly the same data (i.e. everything the same other than running under a web role). Again, this caused no problems.
It appears that when run under the Azure emulator, accessing a local variable as a parameter to an overridden method is an issue!!!
I'm hoping this is only an issue when run under the emulator; however, we still need a fix, otherwise development is more difficult.
Any suggestions or advice would be much appreciated.