I have a class library that is shared between multiple Role projects in my solution. Two of these projects are a Web Role and a Worker Role.
They each have the same Configuration Setting:
<Setting name="QueueConnectionString" value="UseDevelopmentStorage=true" />
Each of them makes a call to this function:
public static void AddMessage(string Message)
{
    // Development storage is used directly here; in the cloud, the
    // QueueConnectionString setting shown above would normally supply this.
    var account = CloudStorageAccount.DevelopmentStorageAccount;

    // Turn off Nagle's algorithm on the queue endpoint to reduce latency
    // for small queue messages.
    ServicePoint queueServicePoint = ServicePointManager.FindServicePoint(account.QueueEndpoint);
    queueServicePoint.UseNagleAlgorithm = false;

    var client = account.CreateCloudQueueClient();
    var queue = client.GetQueueReference(DefaultRoleInstanceQueueName); // constant defined elsewhere
    queue.CreateIfNotExist();
    queue.AddMessage(new CloudQueueMessage(Message));
}
When this executes in the Worker Role, it works without any issues; I've confirmed proper reads and writes of queue messages. When it executes in the Web Role, the call to queue.CreateIfNotExist() crashes with the error "Response is not available in this context". I've tried searching for information on what could be causing this, but so far my search efforts have been fruitless. Please let me know if there's any additional information I can include.
OK, so after a lot more work, I determined that it was because I was calling it from within Global.asax's Application_Start.
I moved this to my WebRole.cs OnStart() method and it works properly.
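For completeness, a minimal sketch of that fix, assuming a shared helper class (here called QueueHelper, a placeholder) that exposes the AddMessage method from above:
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Unlike Application_Start, OnStart runs outside the ASP.NET request
        // pipeline, so the storage client never touches the (unavailable)
        // HttpResponse and the "Response is not available" error goes away.
        // QueueHelper is a placeholder for whatever class hosts AddMessage.
        QueueHelper.AddMessage("web role starting");
        return base.OnStart();
    }
}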
As described in various other related questions here, I am also seeing a long-lasting (30 seconds) first call after (re)deploying a web role with a pretty large EF6 model and plenty of referenced NuGet packages. After trying out the different proposed solutions with preloadEnabled and serviceAutoStartProviders I am still confused, and decided to reopen this topic in the hope that somebody has come upon a better solution in the meantime.
The main goal for me is to have the web role answer its first request almost as fast as subsequent calls, as soon as the role quits its busy state on a fresh deployment and becomes reachable through the load balancer by external clients. Unfortunately, I have run into the following problems with the proposed solutions so far:
preloadEnabled:
I add the Application Initialization module via PKGMGR.EXE /iu:IIS-ApplicationInit in a startup task. So far so good.
When I then try to execute %windir%\system32\inetsrv\appcmd set site "MySiteName" -applicationDefaults.preloadEnabled:true, it fails, because at the time the startup script executes on a fresh deployment there are no websites created in IIS yet.
If I instead try to set the preloadEnabled setting via the ServerManager class in my Application_Start method, I cannot see how this code is supposed to run before the first external call to the web role: preloadEnabled defaults to false after a fresh web role deployment, so to my understanding the Application_Start method never gets a chance to be triggered by the Application Initialization module.
serviceAutoStartProviders:
Here we need to put the name of our autostart provider, implementing the IProcessHostPreloadClient interface, into applicationhost.config, e.g. by using either an appcmd script or the ServerManager class, BUT:
serviceAutoStartProvider is, like preloadEnabled, a site-related setting, so we have the same problem here as with the appcmd call above: at the time the startup script executes after a fresh deployment, the websites are not yet created in IIS and the script fails to run properly.
Another possibility would be to include applicationhost.config in the deployment package, but I did not find any way to do this for a web role.
So how did you manage to ensure that preloading assemblies and running some initialization code (like filling memory caches) happens before the role gets its first hit from outside?
We are starting to gain some traffic now and get approx. 1-2 requests per second on our WebApi, so a 30-second delay visible to clients after each update deployment is becoming a major issue.
We schedule update deployments for low-traffic times, but what if I need to make an urgent hotfix deployment?
Perhaps the best way to accomplish this is to use deployment slots. Deploy updates to your staging slot first. Before a switch from a staging slot to a production slot takes place, Kudu will hit the root of the staging slot with a request in order to warm up the application. After the request to the staging slot's root returns, the IP switch occurs and your slots are swapped.
However, sometimes you need to warm up other pages or services to get the app ready to handle traffic, and hitting the root with a warmup request is insufficient. You can update your web.config so that Kudu will hit additional pages and endpoints before the IP switch occurs and the slots are swapped.
These warmup request URLs should go in the <applicationInitialization> tag. Here's an example:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <applicationInitialization>
      <add initializationPage="/pagetowarmup1" />
      <add initializationPage="/pagetowarmup2" />
      <add initializationPage="/pagetowarmup3" />
    </applicationInitialization>
  </system.webServer>
</configuration>
You can read the Kudu docs on this issue here.
OK, now I get it.
The point is to set the preloadEnabled property not inside the Application_Start method in Global.asax (which will not be hit before the first request to the role anyway), but inside RoleEntryPoint.OnStart.
The difference is that RoleEntryPoint.OnStart is called right after the deployment package is extracted and everything is set up for starting the role. At that moment the Azure instance is still in its busy state and is not yet reachable from outside, for as long as code inside RoleEntryPoint.OnStart is executing.
So here is the code I have come up with so far for warming up the instance before it gets its first call from outside:
using System;
using System.Net;
using Microsoft.Web.Administration;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Set preloadEnabled of each site in this deployment to true.
        using (var serverManager = new ServerManager())
        {
            foreach (var mainSite in serverManager.Sites)
            {
                var mainApplication = mainSite.Applications["/"];
                mainApplication["preloadEnabled"] = true;
            }
            serverManager.CommitChanges();
        }

        // Call my warmup code, which preloads caches and does some other
        // time-consuming preparation.
        var localuri = new Uri(string.Format("https://{0}/api/warmup",
            RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["Endpoint1"].IPEndpoint));
        try
        {
            var request = (HttpWebRequest)WebRequest.Create(localuri);
            request.Method = "GET";
            var response = request.GetResponse();
        }
        catch { } // ignore failures; the site may not be fully ready yet

        // Send this thread to sleep to give the warmup request a chance to
        // complete BEFORE the role is started and becomes available to the
        // outside world.
        System.Threading.Thread.Sleep(60000);

        return base.OnStart();
    }
}
I'm trying to work with Azure WebJobs. I understand the way it works, but I don't understand why I need to use two connection strings: one is for the queue that holds the messages, but
why is there another one called "AzureWebJobsDashboard"?
What is its purpose?
And where do I get this connection string from?
At the moment I have one Web App and one WebJob in the same solution. I'm experimenting only locally (without publishing anything); the one thing I do have up in the cloud is the storage account that holds the queue.
I even tried putting the same connection string in both places (AzureWebJobsDashboard, AzureWebJobsStorage), but it throws an exception:
"Cannot bind parameter 'log' when using this trigger."
Thank you.
There are two connection strings because the WebJobs SDK writes some logs in the storage account. It gives you the possibility of having one storage account just for data (AzureWebJobsStorage) and another one for logs (AzureWebJobsDashboard). They can be the same. Also, you need two of them because you can have multiple job hosts using different data accounts while sending logs to the same dashboard.
The error you are getting is not related to the connection strings but to one of the functions in your code: one of them has a log parameter that is not of the right type. Can you share the code?
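For reference, a minimal sketch of a queue-triggered function with a bindable log parameter (the class and queue name are placeholders; in the classic WebJobs SDK the log parameter is typically declared as a TextWriter):
using System.IO;
using Microsoft.Azure.WebJobs;

public class Functions
{
    // 'log' must be a type the SDK knows how to bind for this trigger,
    // such as TextWriter; an unsupported type produces the error
    // "Cannot bind parameter 'log' when using this trigger."
    public static void ProcessQueueMessage(
        [QueueTrigger("myqueue")] string message, // "myqueue" is a placeholder
        TextWriter log)
    {
        log.WriteLine(message);
    }
}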
Okay, anyone coming here looking for an actual answer of "where do I get the ConnectionString from"... here you go.
On the new Azure portal, you should have a Storage Account resource; mine starts with "portalvhds" followed by a bunch of alphanumerics. Click that to see the resource Dashboard on the right, followed immediately by a Settings window. Look for the Keys submenu under General and click that. The whole connection string is there (actually there are two, Primary and Secondary; I don't currently understand the difference, but let's go with Primary, shall we?).
Copy and paste that into your App.config file in the connectionString attribute of the AzureWebJobsDashboard and AzureWebJobsStorage items. This presumes that your environment has only one Storage Account and you want the same storage used for both data and logs.
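For example, the App.config entries might look like this (the account name and key are placeholders):
<connectionStrings>
  <add name="AzureWebJobsDashboard" connectionString="DefaultEndpointsProtocol=https;AccountName=[youraccount];AccountKey=[yourkey]" />
  <add name="AzureWebJobsStorage" connectionString="DefaultEndpointsProtocol=https;AccountName=[youraccount];AccountKey=[yourkey]" />
</connectionStrings>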
I tried this, and at least the WebJob ran without throwing an error.
#RayHAz - Expanding upon your answer above (thanks)...
I tried this: https://learn.microsoft.com/en-us/azure/app-service/webjobs-sdk-get-started
but in .NET Core 2.1 I was getting exceptions about the connection string not being found.
Long story short, I ended up with the following, which worked for me.
appsettings.json, in a .NET Core 2.1 console app:
{
  "ConnectionStrings": {
    "AzureWebJobsStorage": "---your Azure storage connection string here---",
    "AzureWebJobsDashboard": "---the same connection string---"
  }
}
... and my Program.cs file...
using System;
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Logging;

namespace YourWebJobConsoleAppProjectNamespaceHere
{
    public class Program
    {
        public static IConfiguration Configuration;

        static void Main(string[] args)
        {
            // Load appsettings.json from the build output directory.
            var builder = new ConfigurationBuilder()
                .SetBasePath(Path.Combine(AppContext.BaseDirectory))
                .AddJsonFile("appsettings.json", true);
            Configuration = builder.Build();

            var azureWebJobsStorageConnectionString = Configuration.GetConnectionString("AzureWebJobsStorage");
            var azureWebJobsDashboardConnectionString = Configuration.GetConnectionString("AzureWebJobsDashboard");

            // Wire both connection strings into the (pre-3.x) JobHost.
            var config = new JobHostConfiguration
            {
                DashboardConnectionString = azureWebJobsDashboardConnectionString,
                StorageConnectionString = azureWebJobsStorageConnectionString
            };

            var loggerFactory = new LoggerFactory();
            config.LoggerFactory = loggerFactory.AddConsole();

            var host = new JobHost(config);
            host.RunAndBlock();
        }
    }
}
I have written a custom Windows Service that writes data to a custom Event Log (in the Windows Event Viewer).
For developing the business logic that the service uses, I created a Windows Forms app which simulates the Start/Stop methods of the Windows Service.
When executing the business logic via the Windows Forms app, info is successfully written to my custom Event Log. However, when I run the same logic from the custom Windows Service, nothing is written to the Event Log.
To be clear, I have written a library (.dll) that does all the work that I want my custom service to do - including the create/write to the custom Event Log. My Form application references this library as does my Windows Service.
Thinking the problem was a security issue, I manually set the custom Windows Service to "Log on" as "Administrator", but the service still did not write to the Event Log.
I'm stuck on how to even troubleshoot this problem since I can't debug and step into the code when I run the service (if there is a way to debug a service, please share).
Do you have any ideas as to what could be causing my service to fail to write to the event log?
I use it like this. There may be some typos; I wrote it in my phone browser...
using System;
using System.Diagnostics;

public class MyClass
{
    private readonly EventLog eventLog = new EventLog();

    public MyClass()
    {
        // Creating an event source requires administrative rights, so ideally
        // do this once at install time rather than at every startup.
        if (!EventLog.SourceExists("MyLogSource"))
            EventLog.CreateEventSource("MyLogSource", "MyLogSource_Log");

        eventLog.Source = "MyLogSource";
        eventLog.Log = "MyLogSource_Log";
    }

    private void MyLogWrite(Exception ex)
    {
        eventLog.WriteEntry(ex.ToString(), EventLogEntryType.Error);
    }
}
To debug a running service you need to attach to the process. See here for the steps.
You could also add parameter checking to the Main entry point and have a combination service and console app which would start based on some flag. See this SO post for a good example but here's a snippet:
using System;
using System.ServiceProcess;

namespace WindowsService1
{
    static class Program
    {
        static void Main(string[] args)
        {
            if (args == null || args.Length == 0)
            {
                // No arguments: run as a Windows Service.
                ServiceBase.Run(new ServiceBase[] { new Service1() });
            }
            else
            {
                // Any argument: run interactively as a console app.
                Console.WriteLine("Hi, not from service: " + args[0]);
            }
        }
    }
}
The above starts the app in console mode if any parameters exist and in service mode if there are none. Of course it can be much fancier, but that's the gist of the switch.
I discovered why my service wasn't writing to the Event Log.
The problem had nothing to do with the code/security/etc. that writes to the Event Log. The problem was that my service wasn't successfully collecting the information to be written to the log - therefore, the service never even attempted to write the entry.
Now that I've fixed the code that collects the data, the data is successfully written to the event log.
I'm open to having this question closed, since the question turned out to miss the real problem.
I have read that the easiest way to set up a connection and create a table is to put the following lines of code in the WebRole.cs OnStart() method.
But for some reason I get errors there, and when I put the same code in global.asax.cs's Application_Start() method, it works fine.
What is the difference?
Here is the code I am talking about (I am using table storage, by the way):
...
CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSettingPublisher) =>
{
    var connectionString = RoleEnvironment.GetConfigurationSettingValue(configName);
    configSettingPublisher(connectionString);
});

var account = CloudStorageAccount.FromConfigurationSetting(Constants.KEY_STORAGE);

// create table
var client = account.CreateCloudTableClient();
client.CreateTableIfNotExist(Constants.EMAILMERGE_TABLE);
And the error I am getting is:
SetConfigurationSettingPublisher needs to be called before FromConfigurationSetting can be used
Tnx for the tips!!
Cheers
For a worker role, we only need to put the code in OnStart. But for a web role, we need to put the code in two places: if you want to access storage in OnStart, put the code in OnStart; if you want to access storage in your web application, put the code in Global.asax's Application_Start; if you need both, put the code in both places. The reason is that with full IIS, a web role's OnStart runs in a different process (WaIISHost.exe) from the web application itself (w3wp.exe), so a setting publisher registered in OnStart is not visible to code handling web requests.
Best Regards,
Ming Xu.
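For illustration, a minimal sketch of the Application_Start variant, reusing the publisher code from the question above:
using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Runs in w3wp.exe, the same process that serves web requests,
        // so a publisher registered here is visible to the web application.
        CloudStorageAccount.SetConfigurationSettingPublisher((configName, publishSetting) =>
        {
            var connectionString = RoleEnvironment.GetConfigurationSettingValue(configName);
            publishSetting(connectionString);
        });
    }
}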
I use local storage on Windows Azure to store temporary files. From there I call an .exe file to convert several other files in the same local storage folder. The problem is I always get the exception "Access to the path XYZ.exe is denied.".
I should mention the following:
- I am using a worker role
- the local storage resource is set up in the service definition file
I also tried to add permissions to the folder I am accessing:
public static void AddPermission(string absoluteFolderPath)
{
    DirectoryInfo myDirectoryInfo = new DirectoryInfo(absoluteFolderPath);
    DirectorySecurity myDirectorySecurity = myDirectoryInfo.GetAccessControl();
    myDirectorySecurity.AddAccessRule(new FileSystemAccessRule(
        "NETWORK SERVICE",
        FileSystemRights.FullControl,
        AccessControlType.Allow));
    myDirectoryInfo.SetAccessControl(myDirectorySecurity);
}
UPDATE:
I have now tried this code:
public static void FixPermissions()
{
    var tempDirectory = RoleEnvironment.GetLocalResource("localStorage").RootPath;
    Helper.addPermission(tempDirectory);

    var dir = new DirectoryInfo(tempDirectory);
    foreach (var d in dir.GetDirectories())
        Helper.addPermission(d.FullName);
}

private static void addPermission(string path)
{
    FileSystemAccessRule everyoneFileSystemAccessRule = new FileSystemAccessRule(
        "Everyone",
        FileSystemRights.FullControl,
        InheritanceFlags.ContainerInherit | InheritanceFlags.ObjectInherit,
        PropagationFlags.None,
        AccessControlType.Allow);

    DirectoryInfo directoryInfo = new DirectoryInfo(path);
    DirectorySecurity directorySecurity = directoryInfo.GetAccessControl();
    directorySecurity.AddAccessRule(everyoneFileSystemAccessRule);
    directoryInfo.SetAccessControl(directorySecurity);
}
I get really strange behaviour: I still get the errors, but sometimes some of the files do get converted by ffmpeg.exe.
Can someone help me out here??
Thanks a lot.
SOLUTION:
So it seems the problem was that I ran the .exe file from within local storage, and that caused the security issues. Putting the .exe into the application package and referencing it directly solved my issue.
Thx for your help.
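For reference, a minimal sketch of that approach, assuming ffmpeg.exe is deployed as content with the worker role package (the approot-relative path follows the standard worker role layout, but treat the exact path and arguments as assumptions):
using System;
using System.Diagnostics;
using System.IO;

static class Converter
{
    // Runs ffmpeg.exe from the deployed package (approot) instead of local storage.
    public static void Convert(string arguments)
    {
        string roleRoot = Environment.GetEnvironmentVariable("RoleRoot");
        string exePath = Path.Combine(roleRoot + @"\", @"approot\ffmpeg.exe");

        var psi = new ProcessStartInfo(exePath, arguments)
        {
            UseShellExecute = false,
            CreateNoWindow = true
        };
        using (var process = Process.Start(psi))
        {
            process.WaitForExit();
        }
    }
}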
By default your worker role will most likely not be running with sufficient privilege to allow changes to the access control lists on Azure folders.
There are two possible options:
Best: run a script at startup to set the permissions. Details are on MSDN here: http://msdn.microsoft.com/en-us/library/gg456327.aspx. You'll want to set executionContext="elevated".
The best way to write the script itself is through Powershell. An example is here: http://weblogs.thinktecture.com/cweyer/2011/01/fixing-windows-azure-sdk-13-full-iis-diagnostics-and-tracing-bug-with-a-startup-task-a-grain-of-salt.html. Alternatively, write a console application to do the same thing.
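For example, the startup task registration in ServiceDefinition.csdef could look like this (the role name and script name are placeholders):
<WorkerRole name="MyWorkerRole">
  <Startup>
    <!-- Runs elevated before the role starts, so it can change ACLs. -->
    <Task commandLine="SetPermissions.cmd" executionContext="elevated" taskType="simple" />
  </Startup>
</WorkerRole>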
Easiest, but much less secure: set the security in your OnStart method and run your whole role elevated by including this in your service definition file:
<WebRole name="WebApplication2">
  <Runtime executionContext="elevated" />
  <Sites>
However, I'd really not recommend that as it's a terrible security hole for something that's running in the public cloud.