Azure localisation - how to take resource files out of your packaged application?

I have a localised site reading from standard .resx resource files. Everything works fine; however, I am deploying to Azure, and the .resx files are packaged along with the rest of the site and deployed onto each role instance. This means that if I want to change anything, I need to redeploy the entire package to Azure and suffer a rolling update.
Is there a way I can get my site to read resource files from a single static location, such as blob storage? Is this a good idea or should I just do my best to get it right first time?
Thank you!

Well, rolling updates aren't the end of the world. If your site is hosted with multiple running instances, each instance will be taken out of the load-balanced rotation, brought down and updated in sequence, so your users shouldn't experience any real downtime.
One option, though, would be to move to a non-.resx-based localization setup. You can write your own ResourceProvider to override the built-in one. Rick Strahl has a nice example of reading resource information from a database:
http://www.west-wind.com/weblog/posts/2009/Apr/01/Updated-WestwindGlobalization-Data-Driven-Resource-Provider-for-ASPNET
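Following that approach, here's a minimal sketch of a custom provider (the class names and the LoadFromStore lookup are hypothetical; a real implementation would hydrate and cache its values from a database or blob storage):

using System;
using System.Globalization;
using System.Resources;
using System.Web.Compilation;

// Hypothetical factory; registered in web.config via
// <globalization resourceProviderFactoryType="ExternalResourceProviderFactory" />
// so ASP.NET uses it instead of the default .resx-based factory.
public class ExternalResourceProviderFactory : ResourceProviderFactory
{
    public override IResourceProvider CreateGlobalResourceProvider(string classKey)
    {
        return new ExternalResourceProvider(classKey);
    }

    public override IResourceProvider CreateLocalResourceProvider(string virtualPath)
    {
        return new ExternalResourceProvider(virtualPath);
    }
}

public class ExternalResourceProvider : IResourceProvider
{
    private readonly string _resourceSet;

    public ExternalResourceProvider(string resourceSet)
    {
        _resourceSet = resourceSet;
    }

    public object GetObject(string resourceKey, CultureInfo culture)
    {
        // Stand-in for a lookup against your external store; in practice
        // you'd cache these values rather than fetch them per request.
        return LoadFromStore(_resourceSet, resourceKey, culture ?? CultureInfo.CurrentUICulture);
    }

    // Only needed for implicit local resources; explicit lookups never call it.
    public IResourceReader ResourceReader
    {
        get { throw new NotSupportedException(); }
    }

    private static object LoadFromStore(string set, string key, CultureInfo culture)
    {
        // Placeholder: fetch "<set>/<culture>/<key>" from a database or blob here.
        return null;
    }
}

Once the factory is registered, every GetGlobalResourceObject call (and implicit resource expression) flows through your provider, so the values can live anywhere you like and be updated without a redeploy.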

Related

Prevent Azure App Service from viewing backend configuration

I am working on a project that has us deploying to an Azure Web Site.
The code is overall working and now we are focusing more on security.
Right now we are having an issue where back-end configuration files are visible via a direct URL.
Examples (Link won't work):
https://myapplication.azurewebsite.net/foldername/FileName.xml (this
file is in a folder that is contained within the root application)
https://myapplication.azurewebsite.net/vApp/FileName.css (this file
is a part of virtual application sub folder)
I have found this to be true with multiple extensions and locations.
Extensions like:
.css
.htm
.xml
.html
the list likely goes on
I understand that certain files are downloaded to the client side and that those can't be stopped. However, back-end XML files are something we don't pass to the client (especially if they contain connection strings).
I did read a similar article, Azure App Service Instrumentation Profiling?
However this didn't directly relate to my issue.
Any insight would be extremely helpful.
Do not store sensitive information in flat files, especially under your site root. Even if you get your web.config just right, you're still one botched commit away from disaster.
Use Application Settings instead; that's what they're for.
https://learn.microsoft.com/en-us/azure/app-service-web/web-sites-configure
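As a quick illustration (the setting name here is made up), a value stored as an Application Setting never has to exist in a file under your site root to be readable at runtime:

using System;
using System.Configuration;

// For .NET Framework apps, App Service injects Application Settings at
// startup, and they override <appSettings> entries of the same name, so
// the usual lookup just works:
string conn = ConfigurationManager.AppSettings["MyDbConnection"];

// The same values are also surfaced as environment variables with an
// APPSETTING_ prefix:
string same = Environment.GetEnvironmentVariable("APPSETTING_MyDbConnection");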

Why do duplicate folders exist in my Azure blob storage container?

I am aware that Azure blob storage does not use an actual folder structure but could not think of a better way to describe this.
The issue we're seeing is when opening Server Explorer (in Visual Studio) to browse through our blob storage container. We separate client resources and data by folder so in this case we have a blob titled productdata/Client_5/testimage.jpg.
The problem is that this Client_5 folder appears twice when inspecting our blob storage. So far I've double-checked that there are no odd special characters in either folder name and checked case sensitivity. The two paths are EXACTLY the same; only their contents differ. Our application has no problem with this, because the path to the resources it requests is still the same. (For example, since the folders are named identically, https://myazureaccount.blob.core.windows.net/productdata/Client_5/image.jpg still takes us exactly where we need to be.) It's just a pain when we use Server Explorer to view our blobs on Azure, because we have two folder locations to check. This could very well be a bug in Server Explorer for Visual Studio.
If anyone else has ever come across this, any info is appreciated. I couldn't find anything on the topic when searching online but figure I would post the question here for reference. Also, I'll be contacting Azure support soon to see if they can shed some light on any of this and will post what info I get from them here later.
It's true that blob storage doesn't have the concept of a folder, but the API built on top of it does. I've seen exactly the same or similar problems in other tools as well: Microsoft Azure Storage Explorer and even the Azure Portal. I tried to dig deeper, and when I executed:
CloudBlobContainer.ListBlobs(null, useFlatBlobListing: false)
it also returned duplicated directories. To be precise, it returned a list containing several instances of CloudBlobDirectory with the same Prefix. Sounds like a bug. Now, if a tool uses this approach to get a list of directories, it will fail. If the tool uses flat listing and builds the folder structure in its own logic, it should be fine.
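For anyone who wants to reproduce this, here's a sketch using the same call (the connection string and container name are placeholders):

using System;
using System.Linq;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

var account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
var container = account.CreateCloudBlobClient().GetContainerReference("productdata");

// Hierarchical listing returns one CloudBlobDirectory per "folder" prefix;
// a healthy listing should contain distinct prefixes only.
var duplicates = container.ListBlobs(null, useFlatBlobListing: false)
                          .OfType<CloudBlobDirectory>()
                          .GroupBy(d => d.Prefix)
                          .Where(g => g.Count() > 1);

foreach (var group in duplicates)
    Console.WriteLine("Duplicated directory prefix: " + group.Key);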
It's hard to say what the reason for this behaviour is. In my case the files in blob storage had been copied by an Azure Data Factory activity with the concurrency option, but I'm not sure whether that's the rule.
BTW, in my case Microsoft Azure Storage Explorer showed only a subset of the folders, which is much worse than displaying duplicated directories, so I switched to the Azure Explorer mentioned above, and it's worth recommending.
I was experiencing an issue where the "folder" names appeared identical, but on closer inspection one had a trailing space.
Because folders don't really exist in blob storage and a space is a valid character, it is possible to have trailing or leading spaces in the names.
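One way to catch this (continuing the listing sketch from the earlier answer, reusing its container reference) is to print each prefix between delimiters, so otherwise invisible whitespace shows up:

// Brackets make leading/trailing spaces visible in otherwise identical names.
foreach (var dir in container.ListBlobs(null, useFlatBlobListing: false)
                             .OfType<CloudBlobDirectory>())
{
    Console.WriteLine("[" + dir.Prefix + "]");
}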
Azure blob storage does not have the concept of a folder, only containers. You can simulate a folder by naming the blob something like 'folder/img.png', but 'folder/' is simply part of the blob's name.
Also, I use Storage Explorer; try this one: http://azurestorageexplorer.codeplex.com/releases/view/125870

Web Deploy Set Parameter using external file

We have a Website project that's hosted in Azure, and we use Web.config transforms to set environment-specific values. However, our current approach to building the system for different environments is to build the project multiple times (currently three), which is inefficient.
We'd like to move to using Web Deploy, as this would then set us up nicely for using Release Manager.
Our issue is around using Web Deploy parameters instead of web.config transforms; we need to substitute multiple xml elements, rather than single values.
After much research, I found these two articles, which detail almost exactly what I'm trying to do:
http://blogs.iis.net/elliotth/web-deploy-xml-file-parameterization
http://www.iis.net/learn/publish/using-web-deploy/parameterization-improvements-in-web-deploy-v3
Essentially I'm trying to replicate Scenario 5, but using a separate Set Parameter file for the value.
Unfortunately, in the examples, referencing an external XML file only works if the file is on the target machine. Some testing with a colleague confirmed this: it works on a local machine, but not on Azure.
Is there a way I can force Web Deploy to look in a particular location for the external configuration files?
As you've already noticed, Web Deploy is only able to read replacement values on the local machine or on a UNC share. It can't read that specific file over HTTP.
If you're deploying to an Azure Web App, then one thing you could try would be to use Kudu/FTP to manually upload that file one level above your wwwroot folder. Then you could specify the file location like so:
D:\home\site\prices.xml:://book[@name='book1']/price
Of course this implies that you'd have to pre-upload this file before publishing to your site, so it's not a perfect solution, but it should work for what you're trying to accomplish.
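For reference, a hypothetical SetParameters.xml entry following that path::XPath convention might look like this (the parameter name and values are illustrative, not taken from the articles above):

<parameters>
  <!-- the value points at a file already on the target machine, above
       wwwroot, followed by the XPath of the element to read -->
  <setParameter name="BookPrice"
                value="D:\home\site\prices.xml:://book[@name='book1']/price" />
</parameters>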

Azure cloud service project configuration (.csdef and .cscfg) in multiple environments

Currently we have a development cloud service (acme-dev-service) and a production cloud service (acme-prod-service). Our current setup in our solution has a cloud service project called acme.application that uses transformations of the .cscfg and .csdef files for deploying the project to the two environments (production and development). I don't like the transformation method because it feels like a bit of a hack to me. After doing some research it seems that you can have multiple configuration files, which solves part of the issue, but I am running into problems because you are only allowed one service definition. This doesn't work for us because the production environment requires extra certificates as well as different hostHeader bindings than our dev environment does.
So it seems we can't really get away from using the transformations. I guess my question boils down to: am I looking at the Azure service project files in the wrong light? Should we really be mapping one Azure project to one Azure cloud service? Should I have one Azure project for production and a second for development?
Is there a better way to do this? Or a best practice for working with multiple environments in Azure?
The CSDefinition file is the real kicker here. If you have a value you need to be different between two environments (dev/test/stage/production, etc.) then you really have three options:
1) Manually modify the value before a deployment. Errr....Okay....you have two options.
1) Tap into the MSBuild process: determine which cloud configuration you have selected (the one used to pick which version of the .cscfg file will be used), then have the build modify the .csdef after the build and prior to packaging (there is a point when the file has been copied to a different directory just before packaging, and this is where you want to make the change). This can be tricky, though I've seen it done and have even done it myself in the early SDK days; see the sketch after this list. Here is a blog post explaining one example where he's using WebConfigTransformRunner to do just that: http://fabriccontroller.net/blog/posts/apply-xdt-transforms-to-your-servicedefinition-csdef-file/. I don't really think this is your best option, because it is opaque: it's not evident what is going on, and someone who comes along after you to maintain the code won't know about this little gem and will spend forever trying to figure out why some value they put into the .csdef is somehow getting overwritten after they publish to a different environment.
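As a rough sketch of that build hook (the target name, hook point and file paths are assumptions for illustration, not taken from the post; the exact packaging target varies by Azure SDK version):

<!-- In the .ccproj: run an XDT transform over the copied .csdef just
     before packaging. WebConfigTransformRunner takes source, transform
     and output paths as its three arguments. -->
<Target Name="TransformServiceDefinition" BeforeTargets="CorePublish">
  <Exec Command="WebConfigTransformRunner.exe ServiceDefinition.csdef ServiceDefinition.$(TargetProfile).csdef $(IntermediateOutputPath)ServiceDefinition.csdef" />
</Target>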
2) Use the two-Azure-project approach you mentioned. You can set up build definitions in your build tool of choice that determine which of the Azure projects to build and publish. Personally I think this is the best way to deal with different .csdef files: it's straightforward and doesn't require modifying the .csproj files. I'm not opposed to changing .csproj files; it's just not overly obvious when it's been done, and, speaking as someone who has inherited things like that, it's hard to find when people do that kind of thing and aren't around to tell you about it.

How to use IIS app_offline.htm file with Azure

I have a brilliantly designed app_offline.htm file that I'd like to display on my site periodically when I'm doing things like backing up the DB. On a server with a real file system, this wouldn't be a problem: I'd just copy app_offline.htm to my app's root, and IIS would work its magic and redirect all requests to this file.
However, I'm using Azure, so there's no real file system and no easy way to move files around from one location to another.
How I can I make app_offline.htm play nicely with Azure?
I figured I'd add this since I haven't seen it mentioned yet. You can actually do this via web publish from Visual Studio (or WebMatrix) as well: just put app_offline.htm in the root of your project, at the same level as your main web.config. When you're done, just rename it and redeploy to go back online. Two clicks; easy.
The manual option is to drop it into your /site/wwwroot via FTP.
A little personal secret: while app_offline.htm is being served, none of your site files will be accessible (style sheets etc.), so put your includes into an Azure blob container, and voilà.
Actually there is a real file system, as each VM instance runs on Windows 2008 Server (SP2 or R2 SP1). To see this for yourself, enable Remote Desktop for your deployment and connect to a running instance.
Knowing this, you should be able to set up a mechanism that copies your app_offline.htm to your app root based on some type of administrative command. You'll just need to make sure each of your web role instances performs this action.
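A minimal sketch of that mechanism (how the "maintenance on" signal arrives - a blob flag, a service bus message, an internal endpoint call - is up to you; the paths here are illustrative):

using System;
using System.IO;

static void SetMaintenanceMode(bool enabled, string siteRoot)
{
    // siteRoot must be resolved per role instance (e.g. from the IIS
    // site's physical path); app_offline.htm ships with the role files.
    string source = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "app_offline.htm");
    string target = Path.Combine(siteRoot, "app_offline.htm");

    if (enabled)
        File.Copy(source, target, overwrite: true); // IIS starts serving the page
    else if (File.Exists(target))
        File.Delete(target);                        // back to normal serving
}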
David has provided you with a good answer. However, you might be missing out on what Azure can do for you. You should be able to virtually eliminate downtime with Azure by running multiple instances and using SQL Azure, which keeps three replicas of your data. You can also back up SQL Azure using http://msdn.microsoft.com/en-us/library/ff951624.aspx
