We have a WebRole which deals with requests coming in via a WCF service; it validates them and then puts the messages into an Azure queue in cloud storage. In the same application we have a WorkerRole which reads the messages from the queue and makes a call to our persistence layer, which processes the request and returns the result. We were wondering why the worker role didn't pick up any of our configuration settings and hence wasn't providing the trace information we were looking for. We realised that the worker role was probably looking for an app.config and couldn't find one.
Is there a way to point the worker role at the web.config, or to load our Enterprise Library settings into the ServiceConfiguration.cscfg file, so that either way both roles could read from a common place?
Many thanks in advance
Kindo
As far as I'm aware there is no way, out of the box, for your worker role to get access to the web.config of a web role.
If you move your configuration items to the ServiceConfiguration.cscfg file and both the worker and web role are in the same cloud project, the settings will be in the same file. But because the web role and the worker role are different projects within that cloud project, their settings live in different sections of that .cscfg file. If you want the settings to be the same for both of them, you will have to duplicate the settings.
Putting your settings in this file gives you the advantage that you can change the settings while the roles are running and have the roles respond however you like, e.g. you might want certain settings to restart the roles, while for others you may just want to update a static variable. To update a web.config or app.config, on the other hand, you need to redeploy that role.
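For example (a minimal sketch only; the setting names "TraceLevel" and "ConnectionString" are placeholders, not anything from your project), you could wire up the RoleEnvironment events roughly like this:

```csharp
using System;
using System.Linq;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    private static string _traceLevel; // example of a setting held in a static variable

    public override bool OnStart()
    {
        _traceLevel = RoleEnvironment.GetConfigurationSettingValue("TraceLevel");

        RoleEnvironment.Changing += (sender, e) =>
        {
            // Setting e.Cancel = true recycles (restarts) the role instance
            // when the configuration change is applied.
            e.Cancel = e.Changes
                .OfType<RoleEnvironmentConfigurationSettingChange>()
                .Any(c => c.ConfigurationSettingName == "ConnectionString");
        };

        RoleEnvironment.Changed += (sender, e) =>
        {
            // Settings that don't warrant a restart can simply be re-read.
            if (e.Changes.OfType<RoleEnvironmentConfigurationSettingChange>()
                 .Any(c => c.ConfigurationSettingName == "TraceLevel"))
            {
                _traceLevel = RoleEnvironment.GetConfigurationSettingValue("TraceLevel");
            }
        };

        return base.OnStart();
    }
}
```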
You do need to be aware, though, that the ServiceConfiguration file is not a replacement for a web.config. If you're using tools that look for their settings in a web or app config, unless they're particularly smart and aware of the Azure environment, they won't go looking for settings in the ServiceConfiguration file.
I know you didn't ask this question, but if you're expecting your worker role to be providing an almost synchronous response to your web role by using a request queue and a response queue, you'll probably find that this won't scale very well. If you want to do synchronous calls to a worker role in Azure you're best off using WCF (this isn't highlighted anywhere in the guides). As you said that all of this is just a WCF service anyway, the answer to your problem may be to just do away with the worker role.
You cannot share web.configs or .cscfg files across roles in Azure, as there is no guarantee that one role is on the same host, cluster, or even datacenter as another role.
If you are simply trying to share items like connection strings, app-specific variables, etc., I would create your own "config manager" class that obtains some XML and parses it into a collection of settings. This way, you could store that config in Azure Blob Storage, and changes would be as simple as updating that blob and signalling your apps to reload (very easy using the Service Management API).
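Something along these lines would do it; this is only a sketch, and the container name, blob name and XML shape are invented (it assumes the Microsoft.WindowsAzure.Storage client library):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;
using Microsoft.WindowsAzure.Storage;

// Both the web and worker role call SharedConfig.Reload(...) at startup
// (and again whenever they are signalled that the blob has changed).
public static class SharedConfig
{
    private static Dictionary<string, string> _settings = new Dictionary<string, string>();

    public static string Get(string key) =>
        _settings.TryGetValue(key, out var value) ? value : null;

    public static void Reload(string storageConnectionString)
    {
        var blob = CloudStorageAccount.Parse(storageConnectionString)
                                      .CreateCloudBlobClient()
                                      .GetContainerReference("config")        // assumed container
                                      .GetBlockBlobReference("settings.xml"); // assumed blob

        // Expected shape: <settings><setting key="..." value="..." /></settings>
        var doc = XDocument.Parse(blob.DownloadText());
        _settings = doc.Root.Elements("setting")
                       .ToDictionary(e => (string)e.Attribute("key"),
                                     e => (string)e.Attribute("value"));
    }
}
```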
Related
I use data stored in a blob for some configuration of some Azure web apps, and I'd like to react to changes to it in near real time. Currently I just set a timed event and periodically check whether the ETag of the blob has changed; if it has, I download the new blob.
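Roughly, the check I'm doing looks like this (a simplified sketch, assuming the Microsoft.WindowsAzure.Storage client library):

```csharp
using Microsoft.WindowsAzure.Storage.Blob;

public class BlobConfigPoller
{
    private readonly CloudBlockBlob _blob; // reference to the config blob
    private string _lastETag;

    public BlobConfigPoller(CloudBlockBlob blob)
    {
        _blob = blob;
    }

    // Called from a timer; returns the new content only if the blob has changed.
    public string CheckForUpdate()
    {
        _blob.FetchAttributes(); // metadata-only request, no body download
        if (_blob.Properties.ETag == _lastETag)
        {
            return null;
        }

        _lastETag = _blob.Properties.ETag;
        return _blob.DownloadText();
    }
}
```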
This is ok, but I don't want to poll the blob too often, and I also want to be reactive. The devs changing the values in the blob want to be able to test the new values quickly.
The web app scales up and down, and each instance of the web app needs to download the config file. So, as far as I can tell, I can't just use the event system that Azure Storage has, as that would only send a notification to one instance.
Is there a recommended way to do this?
As I understand it, you want to manage the configuration of your Azure web apps centrally: once some config has been changed, your app services should reload their configuration automatically and promptly. Azure App Configuration provides exactly this kind of functionality.
You can also configure in code the conditions under which all settings are reloaded. There is a .NET Core sample here, and you can find other samples under the Enable dynamic configuration section.
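As a rough illustration, a .NET 6+ minimal-hosting setup for dynamic refresh might look like the sketch below. The key names and the connection-string setting are placeholders, and the exact refresh method names have shifted a little between package versions, so treat this as a starting point rather than the definitive API:

```csharp
// Program.cs - relies on the web SDK's implicit usings and requires the
// Microsoft.Extensions.Configuration.AzureAppConfiguration and
// Microsoft.Azure.AppConfiguration.AspNetCore NuGet packages.
var builder = WebApplication.CreateBuilder(args);

builder.Configuration.AddAzureAppConfiguration(options =>
{
    options.Connect(builder.Configuration["AppConfig:ConnectionString"])
           // Reload every setting whenever this single "sentinel" key is bumped.
           .ConfigureRefresh(refresh =>
               refresh.Register("Settings:Sentinel", refreshAll: true)
                      .SetCacheExpiration(TimeSpan.FromSeconds(30)));
});

builder.Services.AddAzureAppConfiguration();

var app = builder.Build();

// Middleware that triggers the refresh check as requests come in.
app.UseAzureAppConfiguration();

app.MapGet("/", (IConfiguration config) => config["Settings:Message"] ?? "no value yet");

app.Run();
```

The sentinel-key pattern means your devs can change as many values as they like and then touch one key to make every instance pick up the whole set at once.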
Is Azure diagnostics only implemented through code? Windows has the Event Viewer where various types of information can be accessed. ASP.NET websites have a Trace.axd file at the root that can be viewed for trace information.
I was thinking that something similar might exist in Azure. However, based on the following URL, Azure Diagnostics appears to require a custom code implementation:
https://azure.microsoft.com/en-us/documentation/articles/cloud-services-dotnet-diagnostics/#overview
Is there an easier, more built-in way to access Azure diagnostics like I described for other systems above? Or does a custom Worker role need to be created to capture and process this information?
Azure Worker Roles have extensive diagnostics that you can configure.
You get to them via the Role configuration:
Then, through the various tabs, you can configure specific types of diagnostics and have them periodically transferred to a storage account for later analysis.
You can also enable a transfer of application specific logs, which is handy and something that I use to avoid having to remote into the service to view logs:
(here, I transfer all files under the AppRoot\logs folder to a blob container named wad-processor-logs, and do so every minute.)
If you go through the tabs, you will find that you can monitor quite a bit of detail, including custom performance counters.
Finally, you can also connect to your cloud service via the Server Explorer, and dig into the same information:
Right-click on the instance, and select View Diagnostics Data.
(a recent deployment, so not much to see)
So, yes, you can get access to Event Logs, IIS Logs and custom application logs without writing custom code. Additionally, you can implement custom code to capture additional Performance Counters and other trace logging if you wish.
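For example, on the older Azure SDKs you could add an extra performance counter in code along these lines (a sketch only; newer SDK versions drop DiagnosticMonitor in favour of the diagnostics.wadcfgx file):

```csharp
using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

        // Sample total CPU every 30 seconds and ship the results to table
        // storage once a minute.
        config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
        {
            CounterSpecifier = @"\Processor(_Total)\% Processor Time",
            SampleRate = TimeSpan.FromSeconds(30)
        });
        config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

        DiagnosticMonitor.Start(
            "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);

        return base.OnStart();
    }
}
```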
"Azure diagnostics" is a bit vague since there are a variety of services in Azure, each with potentially different diagnostic experiences. The article you linked to talks about Cloud Services, but are you restricted to using Cloud Services?
Another popular option is Azure App Service, which allows you many more options for capturing logs, including streaming them, etc. Here is an article which goes into more details: https://azure.microsoft.com/en-us/documentation/articles/web-sites-enable-diagnostic-log/
My understanding of the VMs involved in Azure Cloud Services is that at least some parts of them are not meant to persist throughout the lifetime of the service (unlike regular VMs that you can create through Azure).
This is why you must use Startup Tasks in your ServiceDefinition.csdef file in order to configure certain things.
However, after playing around with it for a while, I can't figure out what does and does not persist.
For instance, I installed an ISAPI filter into IIS by logging into remote desktop. That seems to have persisted across deployments and even a reimaging.
Is there a list somewhere of what does and does not persist and when that persistence will end (what triggers the clearing of it)?
See http://blogs.msdn.com/b/kwill/archive/2012/10/05/windows-azure-disk-partition-preservation.aspx for information about what is preserved on an Azure PaaS VM in different scenarios.
In short, the only things that will truly persist are things packaged in your cscfg/cspkg (i.e. startup tasks). Anything else done at runtime or via RDP will eventually be removed.
See "How to: Update a cloud service role or deployment" - in most cases, an UPDATE to an existing deployment will preserve local data while updating the application code for your cloud service.
Be aware that if you change the size of a role (that is, the size of a virtual machine that hosts a role instance) or the number of roles, each role instance (virtual machine) must be re-imaged, and any local data will be lost.
Also if you use the standard deployment practice of creating a new deployment in the staging slot and then swapping the VIP, you will also lose all local data (these are new VMs).
I need to deploy 2 different worker role instances, but each needs its own configuration data (ID code, password, SenderCompID, etc.) to connect to a trading server. I can't share the credentials across the instances.
Each instance of a given role (worker or web) is identical in terms of application-based configuration. This is because all the instances are created from the same application package and therefore read exactly the same application configuration data.
If you write your application so that, when it starts, it reads data from outside the machine (from Azure blob storage, an Azure table, or anything else outside the VM, typically available on some server) and then configures itself, you can achieve your objective. You also need to provide instance-specific data in that external store so each instance gets its own data. If I chose this option, I might key an Azure table by instance ID (i.e. Instance_ID#) so that each instance fetches its own configuration and configures itself. This way you can modify the data in the Azure table at any time and restart the role to load the updated configuration. Others may have other ways to make this happen.
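A rough sketch of that idea follows; the table name, partition key and entity shape are invented, and it assumes the Microsoft.WindowsAzure.Storage table client:

```csharp
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

// Hypothetical entity: one row per role instance, keyed by the instance id.
public class TradingCredentials : TableEntity
{
    public string SenderCompId { get; set; }
    public string Password { get; set; }
}

public static class InstanceConfig
{
    public static TradingCredentials Load(string storageConnectionString)
    {
        // Unique per running instance, e.g. "...WorkerRole_IN_0", "...WorkerRole_IN_1".
        string instanceId = RoleEnvironment.CurrentRoleInstance.Id;

        var table = CloudStorageAccount.Parse(storageConnectionString)
                                       .CreateCloudTableClient()
                                       .GetTableReference("InstanceConfig"); // assumed table name

        var result = table.Execute(
            TableOperation.Retrieve<TradingCredentials>("credentials", instanceId));

        return (TradingCredentials)result.Result;
    }
}
```

Call InstanceConfig.Load from OnStart; if you change a row in the table later, reboot that one instance to make it re-read its credentials.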
The other option is to have two roles (worker or web) in the same Azure application; while the application code could be the same across two or more worker roles, you can certainly configure them separately. Each of the above options has its own pros and cons.
In short, is there a RoleEnvironment event that I can handle in code when any other role in my deployment is rebooted or taken offline for patching?
I've got an application in production that has both web roles for a web front end and web roles running WCF services as an application layer (business logic, data access, etc.). The web layer communicates with the WCF layer over an internal endpoint, as we don't want to expose the services at this point in time. So this means it is not possible to use the load balancer to call my service layer through a single URL.
So I have to load balance requests to the WCF web roles manually. This has caused problems in the past when a machine has been recycled by the fabric controller for patching.
I'm handling the RoleEnvironment.Changing and RoleEnvironment.Changed events to adjust the list of backend web roles I am communicating with, which works well in testing when I make a configuration change to increase or decrease the number of instances in my deployment. But if I reboot a role through the portal, this does not fire the RoleEnvironment events.
Thanks,
Rob
RoleEnvironment.Changing will be fired "before a change to the service configuration" (my emphasis). In this case no configuration change is occurring; your service is still configured to have exactly the same number of instances. AFAIK there is no way to know when your deployment is taken offline, and clearly there are cases where notice cannot be given in advance (e.g. hardware failure). Therefore you have to code for communication failure, intercept the error, and try another role instance.
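For example, since the caller already knows the internal endpoints of the other role's instances, a simple failover loop along these lines is one way to "try another role instance". This is only a sketch; the role name, endpoint name and service contract are placeholders:

```csharp
using System;
using System.ServiceModel;
using Microsoft.WindowsAzure.ServiceRuntime;

[ServiceContract]
public interface IMyBackendService // placeholder contract
{
    [OperationContract]
    string Process(string request);
}

public static class BackendCaller
{
    public static TResult CallWithFailover<TResult>(Func<IMyBackendService, TResult> call)
    {
        foreach (var instance in RoleEnvironment.Roles["WcfWorkerRole"].Instances)
        {
            var endpoint = instance.InstanceEndpoints["InternalWcfEndpoint"].IPEndpoint;
            var factory = new ChannelFactory<IMyBackendService>(
                new NetTcpBinding(),
                new EndpointAddress($"net.tcp://{endpoint}/MyBackendService"));

            try
            {
                // For brevity the channel is not explicitly closed here.
                return call(factory.CreateChannel());
            }
            catch (CommunicationException)
            {
                // This instance may be rebooting or being patched - try the next one.
                factory.Abort();
            }
        }

        throw new InvalidOperationException("No backend instance responded.");
    }
}
```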
I do not believe you can intercept RoleEnvironment changes from a different Role easily.
I would suggest that you trap RoleEnvironment changes in the role where they occur, handle them by writing a message/record to some persisted storage, and let your web roles check that storage either on a regular schedule or every time they communicate with the WCF roles.
Basically, if you're doing your own internal load balancing, you need a mechanism for registration/tear-down of your instances so that you can manage your WCF workers.
You can use the Azure storage queues to post a message when a role is going down and have a worker role that listens on that queue and adjusts things accordingly.
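A rough sketch of that idea (the queue name, setting name and message format are invented, and it assumes the classic Microsoft.WindowsAzure.Storage queue client). Note that OnStop will not run for sudden hardware failures, so this complements rather than replaces the retry logic mentioned above:

```csharp
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public class WcfWorkerRole : RoleEntryPoint
{
    public override void OnStop()
    {
        // OnStop runs when the fabric controller is taking this instance down
        // (reboot, patching, scale-down), so announce the shutdown on a queue.
        var queue = CloudStorageAccount
            .Parse(RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString"))
            .CreateCloudQueueClient()
            .GetQueueReference("instance-status"); // assumed queue name

        queue.CreateIfNotExists();
        queue.AddMessage(new CloudQueueMessage(
            "OFFLINE:" + RoleEnvironment.CurrentRoleInstance.Id));

        base.OnStop();
    }
}
```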