The Issue
I am currently building a PWA that is hosted on Azure and utilises Azure CDN Premium.
Within this PWA, we have the following files:
/service-worker.js
/js/translations/en-us.json
/js/translations/en-hk.json
etc...
When a release is deployed to the storage blob, we trigger a CDN 'purge' that is meant to tell the edge nodes to re-retrieve the assets from the origin storage account.
However, for some reason, the CDN is still returning old versions of these files, despite the storage account having the latest versions (I have left it for over 10 hours, so it is not a propagation issue).
Why is this happening? The whole point of a 'purge' is to empty the cache...
I appreciate that there may also be downstream caches beyond the edge nodes, but I never have these problems with AWS, so I can only conclude that either Azure is doing something badly or I am misunderstanding how it is meant to work.
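For reference, one way to check whether the stale copy is coming from the edge itself (rather than a browser or service-worker cache further downstream) is to compare the caching headers returned by the origin and by the CDN endpoint. A minimal sketch in TypeScript, assuming Node 18+ for the global fetch; the hostnames below are placeholders rather than the real ones:

```typescript
// Placeholder hostnames - substitute the real CDN endpoint and storage origin.
const CDN_BASE = "https://my-endpoint.azureedge.net";
const ORIGIN_BASE = "https://mystorageaccount.blob.core.windows.net/site";

const PATHS = ["/service-worker.js", "/js/translations/en-us.json"];

// Grab just the headers that matter for caching; HEAD avoids downloading bodies.
async function cacheHeaders(url: string): Promise<Record<string, string>> {
  const res = await fetch(url, { method: "HEAD" });
  const wanted = ["etag", "last-modified", "cache-control", "x-cache", "age"];
  const out: Record<string, string> = {};
  for (const h of wanted) out[h] = res.headers.get(h) ?? "<missing>";
  return out;
}

async function main(): Promise<void> {
  for (const path of PATHS) {
    console.log(`--- ${path}`);
    console.log("origin:", await cacheHeaders(`${ORIGIN_BASE}${path}`));
    console.log("edge:  ", await cacheHeaders(`${CDN_BASE}${path}`));
  }
}

main().catch(console.error);
```

If the edge already reports the new ETag but the app still shows old content, the culprit is more likely the service worker's own cache than the CDN.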
Possible Solutions
I have come up with some possible solutions; however, because I am fairly new to Azure, I want to get others' opinions on which is best...
Use Query Strings and Set the relevant Cache mode
I am aware that I could just use query strings on these files (apart from service-worker.js); however, I do not feel confident that this is the best solution.
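For reference, the query-string approach usually amounts to appending a build or release identifier to the asset URLs so that each deployment produces a new cache key (excluding service-worker.js, whose URL needs to stay stable). A rough sketch, assuming a hypothetical BUILD_VERSION constant injected at build time:

```typescript
// Hypothetical build identifier, e.g. stamped in by the CI pipeline at build time.
const BUILD_VERSION = "2024.06.01-3";

// Appending the version changes the cache key at the CDN and in browsers,
// so each release naturally bypasses previously cached copies.
async function loadTranslations(locale: string): Promise<Record<string, string>> {
  const res = await fetch(`/js/translations/${locale}.json?v=${BUILD_VERSION}`);
  if (!res.ok) {
    throw new Error(`Failed to load ${locale} translations: ${res.status}`);
  }
  return res.json();
}

// Usage: const strings = await loadTranslations("en-us");
```

This only helps if the endpoint's query string caching behaviour (the "cache mode" in the heading above) is set to cache every unique URL rather than to ignore query strings.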
Custom Rules Engine
Alternatively, I can define custom rules to instruct the CDN to skip the cache for certain files. This somewhat defeats the purpose of a CDN, though, which brings me back to the question: why is Azure not purging these assets properly?
If this is the best solution, could someone please advise me on what rules I should define?
Related
In the application I'm working on today, there is some functionality that cannot be tested on staging (I know this is terrible, but it is currently the reality).
The project is a backend, and we use some internal APIs that require secrets, which I renew from time to time. For obvious security reasons I don't want people to know the production keys; instead, I want to issue temporary keys only for specific tests, with very short expirations. That way, people could run tests with this backend pointing to the production APIs without disrupting production itself.
I initially thought of creating a service through which all requests to an internal API would pass; only this service would hold the API keys, and it would validate the temporary ones.
I would like to know: is there an already-made solution that does something similar, or would I have to build it from scratch?
Also, am I on the right track?
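For what it's worth, the pass-through service described above can be fairly small. Below is a rough sketch in TypeScript (Express, Node 18+ fetch); the header names, admin token, upstream URL, and TTL handling are illustrative assumptions rather than any existing product's API:

```typescript
import express from "express";
import { randomBytes } from "node:crypto";

// The real production secret only ever lives in this service's environment.
const REAL_API_KEY = process.env.INTERNAL_API_KEY ?? "";
const UPSTREAM = "https://internal-api.example.com"; // hypothetical internal API

// Temporary keys kept in memory with an expiry; a real setup might use Redis.
const tempKeys = new Map<string, number>(); // key -> expiry (ms since epoch)

const app = express();
app.use(express.json());

// An operator holding the admin token mints a short-lived key for a test run.
app.post("/temp-keys", (req, res) => {
  if (req.header("x-admin-token") !== process.env.ADMIN_TOKEN) {
    res.status(403).end();
    return;
  }
  const key = randomBytes(24).toString("hex");
  const ttlMinutes = Number(req.body?.ttlMinutes ?? 30);
  tempKeys.set(key, Date.now() + ttlMinutes * 60_000);
  res.json({ key, expiresInMinutes: ttlMinutes });
});

// Everything else is forwarded to the internal API, but only if the caller
// presents a temporary key that exists and has not expired.
app.all("/proxy/*", async (req, res) => {
  const expiry = tempKeys.get(req.header("x-temp-key") ?? "");
  if (!expiry || expiry < Date.now()) {
    res.status(401).json({ error: "temporary key missing or expired" });
    return;
  }
  const path = req.originalUrl.replace(/^\/proxy/, "");
  const upstream = await fetch(`${UPSTREAM}${path}`, {
    method: req.method,
    headers: { "x-api-key": REAL_API_KEY, "content-type": "application/json" },
    body: ["GET", "HEAD"].includes(req.method) ? undefined : JSON.stringify(req.body),
  });
  res.status(upstream.status).send(await upstream.text());
});

app.listen(8080);
```

The important property is that the real key never leaves this service; testers only see short-lived tokens, which are revoked simply by letting them expire.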
I've found many questions and answers that are similar to mine, but they all seem to be very specific use cases - or, at least, different/old enough to not really apply to me (I think).
What I want to do is something I thought would be simple by now. The most inefficient thing with the web apps is that copying files between them can be slow and/or time-consuming: you have to FTP (or similar) the files down, then send them back up.
There must be a way to do the same thing natively within Azure, so the files don't have to travel far and certainly aren't subject to the same bandwidth restrictions.
Are there any solid code samples or open-source/commercial tools out there that help make this possible? So far, I haven't come across any code samples, products, or anything else that makes it possible (aside from many very old PowerShell blogs from 5+ years ago). (I'm not opposed to a PowerShell-based solution, either.)
In my case, these are all the same web apps, with minor configuration-based customization differences between them. So I don't think Web Deploy is an option, because it's not about deployment of code. Sometimes it's simply creating a clone for a new launch, and other times it's creating a copy for staging/development.
Any help in the right direction is appreciated.
As you've noticed, copying files over to App Service is not the way to go. For managing files across different App Service instances, try using a storage account: you can attach an Azure Storage file share to the App Service and mount it. There's a comprehensive answer below on how to get files into the storage account.
Alternatively, if you have control over the app code, you can use blob storage instead of files and read the content directly from the app.
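If the content can be staged through blob storage, the copy itself can also be done entirely server-side, so the bytes never travel outside Azure. A sketch using @azure/storage-blob; the connection strings and container names are placeholders, and it assumes the source blobs are readable by URL (public container or SAS appended):

```typescript
import { BlobServiceClient } from "@azure/storage-blob";

// Placeholder connection strings for the source and destination storage accounts.
const sourceService = BlobServiceClient.fromConnectionString(process.env.SRC_CONN!);
const destService = BlobServiceClient.fromConnectionString(process.env.DST_CONN!);

async function copyContainer(srcName: string, dstName: string): Promise<void> {
  const src = sourceService.getContainerClient(srcName);
  const dst = destService.getContainerClient(dstName);
  await dst.createIfNotExists();

  for await (const blob of src.listBlobsFlat()) {
    // Server-side copy: Azure Storage moves the bytes internally; nothing is
    // downloaded to the machine running this script.
    const sourceUrl = src.getBlobClient(blob.name).url;
    const poller = await dst.getBlobClient(blob.name).beginCopyFromURL(sourceUrl);
    await poller.pollUntilDone();
    console.log(`copied ${blob.name}`);
  }
}

copyContainer("site-prod", "site-staging").catch(console.error);
```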
I've been developing a site on Microsoft Azure. I've been doing some styling with Bootstrap and wanted to know: after deploying, is pulling the Bootstrap library from a CDN faster than pulling it from my own directory? What would you suggest performance-wise?
The CDN caches the files for a specific time period, which lets it deliver the content faster. This is an advantage when a user visits your site for the first time. A CDN also has geographical advantages, and since the cache is shared between users, there is minimal load on the origin server. I hope this helps.
I host a software product on Azure, and store the downloads themselves in a public container, which the website links to via URL. You can see my downloads page here: https://flyinside-fsx.com/Download
Normally I get somewhere in the range of 200MB-500MB worth of downloads per day, with the downloaded files themselves being 15-30MB. Starting this week, I've seen spikes of up to 220GB per day from this storage container. It hasn't harmed the website in any way, but the transfer is costing me money. I'm certainly not seeing an increase in website traffic that would accompany 220GB worth of downloads, so this appears to be either some sort of DoS attack or a broken automated downloader.
Is there a way to remedy this situation? Can I set the container to detect and block malicious traffic? Or should I be using a different type of file hosting entirely, which offers these sorts of protections?
To see what's going on with your storage account, the best approach is to use Storage Analytics, particularly the storage activity logs. These logs are stored in a special blob container called $logs. You can download the log blobs using any storage explorer that supports browsing that container.
I would highly recommend starting there and identifying exactly what is going on. Based on the findings, you can take corrective action. For example, if the traffic is coming from bots, you could put a simple CAPTCHA on the download page.
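If a storage explorer isn't convenient, the $logs container can also be read programmatically. A rough sketch with @azure/storage-blob (the connection string and date prefix are placeholders, and it assumes Storage Analytics logging is already enabled for the blob service):

```typescript
import { BlobServiceClient } from "@azure/storage-blob";

const service = BlobServiceClient.fromConnectionString(process.env.STORAGE_CONN!);

// Analytics logs live in the special "$logs" container, laid out roughly as
// <service>/YYYY/MM/DD/hhmm/<counter>.log, so a prefix narrows them by day/hour.
async function dumpLogs(prefix: string): Promise<void> {
  const logs = service.getContainerClient("$logs");
  for await (const blob of logs.listBlobsFlat({ prefix })) {
    const content = await logs.getBlobClient(blob.name).downloadToBuffer();
    console.log(`=== ${blob.name}`);
    console.log(content.toString("utf8"));
  }
}

// e.g. all blob-service logs for one (placeholder) day
dumpLogs("blob/2017/09/15/").catch(console.error);
```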
I have deployed my Windows Azure application to the cloud. Now that it's running, it seems to be slow: some pages take up to three seconds to return, and all the lookups are direct key lookups against table storage.
It may not be significant, but when I check with Fiddler I see that all of my web requests return status code 200, even those for the CSS. Is this expected? I thought the CSS would be cached.
Getting back to the original question: when performance is slow, is there a way I can work out why? I have already set the solution configuration to "Release". What more can I do?
Any tips / help would be much appreciated.
For investigating the problems in production, you could try using Stack Overflow's MiniProfiler to work out where the slowness is occurring - http://code.google.com/p/mvc-mini-profiler/
For encouraging browsers to use cached content for CSS, JS and images, I think you can just use web.config files in subfolders - see IIS7 Cache-Control - and you should also be able to set up gzip compression.
You can try http://getglimpse.com
Seems promising
Put your files in Azure Storage and provide cache instructions:
Add Cache-Control and Expires headers to Azure Storage Blobs
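As a sketch of what that looks like with @azure/storage-blob (the container name, file-extension filter, and max-age are placeholder choices):

```typescript
import { BlobServiceClient } from "@azure/storage-blob";

const service = BlobServiceClient.fromConnectionString(process.env.STORAGE_CONN!);

// Give static assets a long cache lifetime; one year is only an example value.
async function setCacheHeaders(containerName: string): Promise<void> {
  const container = service.getContainerClient(containerName);

  for await (const blob of container.listBlobsFlat()) {
    if (!/\.(css|js|png|jpg|gif|svg|woff2?)$/i.test(blob.name)) continue;
    await container.getBlobClient(blob.name).setHTTPHeaders({
      blobCacheControl: "public, max-age=31536000",
      // setHTTPHeaders replaces the header set, so preserve the content type.
      blobContentType: blob.properties.contentType,
    });
    console.log(`updated ${blob.name}`);
  }
}

setCacheHeaders("static-assets").catch(console.error);
```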
If you want to do it from IIS, provide the proper HTTP caching instructions to the browser.
Best practices for speeding up your website.
Anyway, you have to provide more details about what you are doing. Are you using Session? How many queries does each page launch?
The fact that it is fast on your computer, with just one client (you), doesn't mean the application is fast; you have to test with lots of users to ensure there are no bottlenecks, lock contention, busy resources, etc.