My Azure Function intermittently fails to read the connection string - azure

I have an Azure Function that runs off of a queue trigger. The repository has a method to grab the connection string from the ConnectionStrings collection.
return System.Configuration.ConfigurationManager.ConnectionStrings["MyDataBase"].ToString();
This works great for the most part, but intermittently it fails with a NullReferenceException.
Is there a way I can make this more robust?
Do azure functions sometimes fail to get the settings?
Should I store the setting in a different section?
I should also mention that this runs thousands of times a day, but I see this error pop up about 100 times.
Runtime version: 1.0.12299.0

Are you reading the configuration for every function call? You should consider reading it once (e.g. using a Lazy<string> and static) and reusing it for all function invocations.
Maybe there is a concurrency issue when multiple threads access the code. Putting a lock around the code could help as well. ConfigurationManager.ConnectionStrings should be thread-safe, but maybe it isn't in the V1 runtime.
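Here is a minimal sketch of that caching approach, assuming the "MyDataBase" name from the question; Lazy&lt;T&gt; with its default thread-safety mode runs the factory only once, even under concurrent invocations:
private static readonly Lazy&lt;string&gt; _connectionString = new Lazy&lt;string&gt;(() =>
    System.Configuration.ConfigurationManager.ConnectionStrings["MyDataBase"].ConnectionString);

// Inside the function body, reuse the cached value on every invocation:
string cs = _connectionString.Value;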
A similar problem was posted here, but it concerned app settings rather than connection strings. I don't think CloudConfigurationManager is the correct solution here.
You can also try putting the connection string into the app settings, unless you are using Entity Framework.
Connection strings should only be used with a function app if you are using entity framework. For other scenarios use App Settings. Click to learn more.
(via Azure Portal)
Not sure if this applies to the V1 runtime as well.
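If you do move it to App Settings, here is a sketch of reading it back in a v1 function app (app settings are exposed both through ConfigurationManager.AppSettings and as environment variables; the "MyDataBase" key is just an example):
string fromAppSettings = System.Configuration.ConfigurationManager.AppSettings["MyDataBase"];
string fromEnvironment = Environment.GetEnvironmentVariable("MyDataBase");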

The solution was to add a private static string for the connection string and only read from the configuration when it is empty. I also added a retry that pauses half a second before trying again. This basically stopped it from happening.
private static string connectionString = String.Empty;

private string getConnectionString(int retryCount)
{
    if (String.IsNullOrEmpty(connectionString))
    {
        if (System.Configuration.ConfigurationManager.ConnectionStrings["MyEntity"] != null)
        {
            connectionString = System.Configuration.ConfigurationManager.ConnectionStrings["MyEntity"].ToString();
        }
        else
        {
            if (retryCount > 2)
            {
                throw new Exception("Failed to Get Connection String From Application Settings");
            }
            // Pause for half a second before retrying the read.
            System.Threading.Thread.Sleep(500);
            retryCount++;
            return getConnectionString(retryCount);
        }
    }
    return connectionString;
}
I don't know if this is perfect, but it works. I went from seeing this exception about 30 times a day to none.

Related

Azure function goes idle when running in Consumption Plan with ServiceBus Queue trigger

I have also asked this question in the MSDN Azure forums, but have not received any guidance as to why my function goes idle.
I have an Azure function running on a Consumption plan that goes idle (i.e. does not respond to new messages on the ServiceBus trigger queue) despite following the instructions outlined in this GitHub issue:
The configuration for the function is the following json:
{
  "ConnectionStrings": {
    "MyConnectionString": "Server=tcp:project.database.windows.net,1433;Database=myDB;User ID=user#project;Password=password;Encrypt=True;Connection Timeout=30;"
  },
  "Values": {
    "serviceBusConnection": "Endpoint=sb://project.servicebus.windows.net/;SharedAccessKeyName=SharedAccessKeyName;SharedAccessKey=KEY_HERE"
  }
}
And the function signature is:
public static void ProcessQueue([ServiceBusTrigger("queueName", AccessRights.Listen, Connection = "serviceBusConnection")] ...)
Based on the discussion in the GitHub issue, I believed that having either a serviceBusConnection entry OR an AzureWebJobsServiceBus entry should be enough to ensure that the central listener triggers the function when a new message is added to the Service Bus queue, but that is proving not to be the case.
Can anyone clarify the difference between how those two settings are used, or notice anything else with the settings I provided that might be causing the function to not properly be triggered after a period of inactivity?
There are several possible causes for this behavior. I have several Azure subscriptions, and only one of them had issues with Storage/Service Bus-based triggers only firing while the app is not idle. So far I have observed that the actions listed below will prevent triggers from working correctly:
Creating any Storage-based trigger, deleting (for any reason) the triggering object and re-creating it.
Corrupting Azure Function input parameters by deleting/altering the associated objects without recompiling the function
Restarting the function app while one of the functions fails to compile or bind to its trigger or input parameter and hangs may cause the same problem.
It has also been observed that using the legacy Connection Strings setting for the trigger binding will not work; the connection needs to be an app setting, as sketched below.
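To illustrate that point with the names from the question (BrokeredMessage is only an example parameter type): the connection the trigger uses should live under Values (app settings), either with a custom name referenced by the Connection property, or under the default AzureWebJobsServiceBus setting, in which case Connection can be omitted from the attribute:
// Option 1: custom setting name under Values, referenced explicitly on the attribute.
public static void ProcessQueue([ServiceBusTrigger("queueName", AccessRights.Listen, Connection = "serviceBusConnection")] BrokeredMessage message) { }

// Option 2: no Connection specified; the runtime falls back to the AzureWebJobsServiceBus app setting.
public static void ProcessQueue2([ServiceBusTrigger("queueName", AccessRights.Listen)] BrokeredMessage message) { }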
A clean deploy of an affected function app will most likely solve the problem if it was caused by any of the actions described above.
EDIT:
It looks like this can also be caused by setting up Authorization/Authentication on the function app, but I have not yet figured out whether it happens in general or only with a specific Auth configuration. I tested this on the affected Azure subscription by disabling auth entirely: the function still goes idle after 30-40 minutes, but the queue trigger still initiates an execution, though with a delay, as expected. I found an old bug related to this, but it says the issue was resolved.

Connecting worker role with EF 6 to Azure sql database

I'm trying to set up a worker role to read and act on data from an Azure SQL database.
I set the connection string like this:
public DBEntities(string connectionString) : base(connectionString) {}
Whenever I try to run the worker role locally, I get the following error when querying the entities:
System.Data.Entity.Infrastructure.UnintentionalCodeFirstException: 'The context is being used in Code First mode with code that was generated from an EDMX file for either Database First or Model First development.
To query the entities:
using (var ctx = new CODDBEntities(_connectionString))
{
    var result = ctx.entity.ToList();
}
What am I doing wrong?
It seems that this is related to the database initialization strategy.
Try the following:
public CODDBEntities()
{
    Database.SetInitializer<CODDBEntities>(null); // Disable the initializer
}
More info here: http://www.entityframeworktutorial.net/code-first/database-initialization-strategy-in-code-first.aspx
Finally I got it. Using this article I understood that I was using the wrong connection string type. I needed to use the one starting with metadata=res://*/..., whereas I had just copied the default connection string from the Azure portal.
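For reference, a sketch of what that kind of Database First (EntityClient) connection string looks like; the model resource name "MyModel" is a placeholder, and the inner provider connection string is the plain ADO.NET one copied from the portal:
// "MyModel" stands in for the actual EDMX model name.
string efConnectionString =
    "metadata=res://*/MyModel.csdl|res://*/MyModel.ssdl|res://*/MyModel.msl;" +
    "provider=System.Data.SqlClient;" +
    "provider connection string=\"Server=tcp:project.database.windows.net,1433;Database=myDB;User ID=user;Password=password;Encrypt=True;\"";

using (var ctx = new CODDBEntities(efConnectionString))
{
    // Queries now run in Database First mode instead of throwing UnintentionalCodeFirstException.
    var result = ctx.entity.ToList();
}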

Azure Socket Leaks?

I have an ASP.NET Core website with a lot of simultaneous users, and it crashes many times during the day. I scaled up and out, but no luck.
I have been told by numerous Azure support staff that the issue is that I'm sending out a lot of database calls, although database utilization improved after creating indexes. Can you kindly advise what you think the problem is, as I have done my best...
I was told that I have "socket leaks".
Please note:
I don't have any external service calls except to sendgrid
I have not used ConfigureAwait(false)
I'm not using "using" statements or explicitly disposing contexts
This is my connection string, if it helps...
Server=tcp:sarahah.database.windows.net,1433;Initial Catalog=SarahahDb;Persist Security Info=False;User ID=********;Password=******;MultipleActiveResultSets=True;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Max Pool Size=400;
These are some code examples:
In Startup.cs:
services.AddDbContext<ApplicationDbContext>(options =>
options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));
Main class:
private readonly ApplicationDbContext _context;

public MessagesController(ApplicationDbContext context, IEmailSender emailSender, UserManager<ApplicationUser> userManager)
{
    _context = context;
    _emailSender = emailSender;
    _userManager = userManager;
}
Here is the code of an important method, for example:
string UserId = _userManager.GetUserId(User);
var user = await _context.Users.Where(u => u.Id.Equals(UserId)).Include(u => u.Messages).FirstOrDefaultAsync();
// some other code
return View(user.Messages);
Please advise, as I have tried my best and this is very embarrassing to me in front of my customers.
Without the error messages that you're seeing, here are a few ideas that you can check.
I'd start with going to your Web App's Overview blade in the Azure Portal. Update the monitoring graph to a time period when you're experiencing problems. Are you CPU bound? Have you exhausted memory? Also, check the HTTP Queue length. If your HTTP queue is really long, it's because your server is choking trying to service the requests and users are experiencing timeout issues.
Next, jump over to your SQL Server's Overview blade in the Azure Portal, and look at the resource utilization chart. Set the time period on the chart to when you're experiencing problems. Have you pegged out your DTUs for your database? If so, it's a sign of poor indexing, poor schema design, or you're just undersized and need to scale up.
Turn on Application Insights if you haven't already. You can use the Application Insights API to insert your own trace statements into your code. Or, you might be able to see the exceptions causing the issue without having to do your own tracing.
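A minimal sketch of that kind of custom tracing with the Application Insights SDK (the method name LoadMessages and the trace message are only examples):
using Microsoft.ApplicationInsights;

private static readonly TelemetryClient _telemetry = new TelemetryClient();

public void LoadMessages()
{
    _telemetry.TrackTrace("Loading messages for current user");
    try
    {
        // ... the existing database call ...
    }
    catch (Exception ex)
    {
        _telemetry.TrackException(ex);
        throw;
    }
}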
Check the Kudu logs for your Web Apps.
I agree with Tseng - your usage of EF and .NET Core's DI framework looks correct.
Let us know how the troubleshooting goes and provide additional information on exactly what kind of errors you're seeing. Best of luck!
It looks like a DI issue to me. You are injecting ApplicationDbContext context, which means the ApplicationDbContext will be resolved from the DI container and, as Tseng pointed out, will stay open for the entire request (transient). It should be scoped.
You can inject IServiceScopeFactory scopeFactory in your controller and do something like:
using (var scope = _scopeFactory.CreateScope())
{
    var context = scope.ServiceProvider.GetRequiredService<ApplicationDbContext>();
}
Note that if you are using ASP.NET Core 1.1 and want to be sure that all your services are being resolved correctly, change your ConfigureServices method in the Startup class to:
public IServiceProvider ConfigureServices(IServiceCollection services)
{
    // Register services
    return services.BuildServiceProvider(validateScopes: true);
}

Why does my continuous azure webjob run the function twice?

I have created my first Azure WebJob that runs continuously.
I'm not so sure this is a code issue, but for the sake of completeness here is my code:
static void Main()
{
    var host = new JobHost();
    host.CallAsync(typeof(Functions).GetMethod("ProcessMethod"));
    host.RunAndBlock();
}
And for the function:
[NoAutomaticTrigger]
public static async Task ProcessMethod(TextWriter log)
{
    log.WriteLine(DateTime.UtcNow.ToShortTimeString() + ": Started");
    while (true)
    {
        Task.Run(() => RunAllAsync(log));
        await Task.Delay(TimeSpan.FromSeconds(60));
    }
    log.WriteLine(DateTime.UtcNow.ToShortTimeString() + "Shutting down..");
}
Note that the async task fires off a task of its own. This was to ensure the runs started quite accurately at the same interval. The job itself downloads a URL, parses it, and inserts some data into a DB, but that shouldn't be relevant to the multiple-instance issue I am experiencing.
My problem is that once this has been running for roughly 5 minutes, a second ProcessMethod is called, which leaves me with two sessions simultaneously doing the same thing. The second method says it was "started from Dashboard" even though I am 100% confident I did not click anything to start it myself.
Anyone experienced anything like it?
Change the instance count to 1 on the Scale tab of the Web App in the Azure portal. By default it is set to 2 instances, which causes the job to run twice.
I can't explain why it's getting called twice, but I think you'd be better served with a triggered job using a CRON schedule (https://azure.microsoft.com/en-us/documentation/articles/web-sites-create-web-jobs/#CreateScheduledCRON), instead of a Continuous WebJob.
Also, it doesn't seem like you are using the WebJobs SDK, so you can completely skip that. Your WebJob can be as simple as a Main that directly does the work. No JobHost, no async, and generally easier to get right.
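A sketch of that alternative, assuming a triggered WebJob with a settings.job file deployed next to the executable; the CRON expression below (every minute, at second 0) and the RunAll method are placeholders for the question's RunAllAsync work:
settings.job:
{ "schedule": "0 * * * * *" }

Program.cs:
static void Main()
{
    // Each scheduled run does one unit of work and exits; no JobHost, no loop, no Task.Delay.
    RunAll();
}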

AccessViolationException thrown when running Azure web role under the Azure emulator

I am getting a System.AccessViolationException during the execution of an Azure Web Role (run on the Azure emulator; this has not been uploaded to Azure yet) when a call is made to an overridden method of an object with a local int variable passed as one of the method parameters. The exception message is "Attempted to read or write protected memory. This is often an indication that other memory is corrupt".
The code where the exception is thrown is part of a local library that has been used for several years on live systems (not Azure) with no issues. The part that errors is as follows:
foreach (XmlDataComponent item in this.items)
{
    int index = 0;
    XmlNode node = item.ToXml(dataSet, xmlDocument, this, index); // Exception thrown when this call is made
    ...
}
The XmlDataComponent is a base class, when the code runs item is one of its derived classes. The ToXml() method is overridden in the derived classes. The exception is thrown as soon as the call is made to ToXml().
The problem is the index parameter. If I swap this to use an explicit value instead of the local variable, e.g.
item.ToXml(dataSet, xmlDocument, this, 0)
there are no errors.
Similarly, if I cast the item to its actual type e.g.
((XmlDataItem)item).ToXml(dataSet, xmlDocument, this, index))
and mark the ToXml() method in the XmlDataItem class as new instead of override, there are no errors.
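For clarity, a sketch of what that workaround looks like; the parameter types here are guesses based on the call site (System.Xml and System.Data usings assumed):
public class XmlDataComponent
{
    public virtual XmlNode ToXml(DataSet dataSet, XmlDocument xmlDocument, object parent, int index) { return null; }
}

public class XmlDataItem : XmlDataComponent
{
    // "new" hides the base method instead of overriding it, so the cast call above is bound
    // at compile time rather than dispatched virtually.
    public new XmlNode ToXml(DataSet dataSet, XmlDocument xmlDocument, object parent, int index) { return null; }
}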
I have also tried calling the library from a console application rather than a web role with exactly the same data (i.e. everything the same other than running under a web role). Again, this caused no problems.
It appears that when run under the Azure emulator, passing a local variable as a parameter to an overridden method is an issue!
I'm hoping this is only an issue when run under the emulator, however we still need a fix otherwise dev is more difficult.
Any suggestions or advice would be much appreciated.
