Azure CDN "Ignore query strings" purpose

I know the difference between the Azure CDN query string modes, and I have read a helpful example of query string modes, but...
I don't understand the purpose of "Ignore query strings" or how it can be useful on a real dynamic website.
For example, suppose we have a product purchase website with a URL similar to www.myweb.com/products?id=3
If we use "Ignore query strings"... does this mean that if a user later requests product 4 (www.myweb.com/products?id=4), they will receive the page for product 3?
I think I'm not understanding Azure CDN correctly: I'm seeing it as a CDN for dynamic content, but Azure CDN is only used for static content, as this article explains:
Standard content delivery network (CDN) capability includes the ability to cache files closer to end users to speed up delivery of static files.
Is this correct? Any help or example on the subject is welcome.

Yes. If you have selected the Ignore query strings query string caching behavior (this is the default), then in your case, after the initial request for www.myweb.com/products?id=3, that POP server will serve the same cached content for every subsequent request, no matter the query string value, until its cache period expires.
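To make that concrete, here is an illustrative sketch (not Azure's actual implementation) of what ignoring query strings means for the cache key a POP uses:

    // Hypothetical cache-key derivation: the query string is simply dropped.
    function cacheKey(requestUrl) {
      const u = new URL(requestUrl);
      return u.origin + u.pathname;
    }

    cacheKey('https://www.myweb.com/products?id=3'); // 'https://www.myweb.com/products'
    cacheKey('https://www.myweb.com/products?id=4'); // same key -> same cached response

Both URLs map to the same cache entry, which is exactly why product 4 could be answered with product 3's cached page.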
As for the second question, a CDN is all about serving static files. To my understanding, what the article describes is dynamic site acceleration: a set of techniques to optimize the content-serving performance of dynamic websites. Unlike on static websites, the assets of a dynamic website (static files, e.g. images, JS, CSS, HTML) are loaded dynamically based on user behavior.

Now that I have it clearer, I will answer my own question:
Azure CDN - Used to cache static content, even on dynamic web pages.
For the example in the question, all products must download the same JavaScript and CSS content; Azure CDN is used for those types of files. A real example using "Ignore query strings":
User A requests www.myweb.com/products?id=3: jquery-versionX.js and mystyles.css are not cached yet, so they are requested from the origin server and the user receives them.
User B requests www.myweb.com/products?id=4: since we are using "Ignore query strings", the already-cached jquery-versionX.js and mystyles.css are served to the user without requesting them from the origin server again.
User C requests www.myweb.com/products?id=3: likewise, the cached jquery-versionX.js and mystyles.css are served without another request to the origin server.
Redis or similar - Used to cache dynamic content (queries to databases, for example).
For the example in the question, every product has different information, which is obtained by doing a database query. We can store those query results or JSON objects in a Redis cache. A real example (a code sketch follows the list):
User A www.myweb.com/products?id=3, product 3 is not cached, it is requested from the server and received by the user.
User B www.myweb.com/products?id=4, product 4 is not cached, it is requested from the server and received by the user.
User C www.myweb.com/products?id=3, product 3 is cached, the server is not requested and the user receives it from the cache.
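A minimal sketch of the Redis side of this example, using Express and the node-redis client; getProductFromDb is a hypothetical stand-in for the real database query:

    const express = require('express');
    const { createClient } = require('redis');

    const app = express();
    const cache = createClient(); // assumes a local Redis instance

    // Hypothetical database call, standing in for the real query.
    async function getProductFromDb(id) {
      return { id, name: `Product ${id}` };
    }

    app.get('/products', async (req, res) => {
      const key = `product:${req.query.id}`;

      // User C's case: a previous user already requested this product.
      const cached = await cache.get(key);
      if (cached) return res.json(JSON.parse(cached));

      // Users A and B's case: query the database and cache the result.
      const product = await getProductFromDb(req.query.id);
      await cache.set(key, JSON.stringify(product), { EX: 300 }); // 5-minute TTL
      res.json(product);
    });

    cache.connect().then(() => app.listen(3000));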
Summary:
Both methods can be used simultaneously: Azure CDN for static content, and Redis or similar for dynamic content.

Related

How To Use Netlify Split Testing Based on a Condition?

newbie here
I want to try the Netlify split testing feature, which basically splits the traffic randomly across multiple GitHub branches (but keeps the same URL).
But instead of splitting the traffic randomly across different versions, I need to split the traffic based on conditions, especially by using document.referrer.
For example, a user from Facebook will see the site from branch A, and others will see branch B.
Is there any way to do this?
Thank you.
It doesn't look like Netlify has referrer targeting built into their split testing product. At least, not according to their docs. Tools like Google Optimize and Optimizely provide options to split test against the HTTP Referer header, which is the URL of the site the user was on before they hit your page.
Netlify does, however, mention the following on the above page:
We set a cookie called nf_ab to ensure that the same visitor always gets the same branch. By default, the value of the cookie is a random number between zero and one and is configured out of the box to ensure that your site visitors have a consistent experience. If you'd like your visitors to manually opt in to a split test, you can also use client-side JavaScript to manually set the value of the nf_ab cookie to a branch name, which Netlify's CDN will read and serve accordingly.
So I believe your only option would be to write custom client-side JS that checks the referrer (document.referrer) and sets the value of the nf_ab cookie used by Netlify to the branch you want that user to be served a version of your site from.
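For example, here is a minimal client-side sketch; "branch-a" and "branch-b" are placeholders for the branch names in your split test, and it should run as early as possible (e.g. in an inline script in the head):

    (function () {
      // Keep returning visitors on the branch they already have.
      if (document.cookie.indexOf('nf_ab=') !== -1) return;

      var referrerHost = '';
      try {
        referrerHost = new URL(document.referrer).hostname;
      } catch (e) {
        // document.referrer can be empty or blocked by referrer policies.
      }

      // Visitors coming from Facebook get branch A, everyone else branch B.
      var branch = /(^|\.)facebook\.com$/.test(referrerHost) ? 'branch-a' : 'branch-b';

      // Netlify's CDN reads this cookie and serves the matching branch.
      document.cookie = 'nf_ab=' + branch + '; path=/';

      // Reload so the CDN can serve the chosen branch.
      window.location.reload();
    })();

Note that the very first response the visitor gets is still chosen by Netlify before this script runs, which is why the sketch reloads once after setting the cookie.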

Serving some files from Azure CDN to only certain users

Suppose I have a website that is served by an Azure CDN endpoint (via files that have been uploaded to blob storage).
I want the minified website content to be available to everyone -- that part is easy, since that's what the CDN does by default.
Ideally, I would also have the sourcemaps available on that same CDN (so that the default behavior of //# sourceMappingURL=0-8d1d0e3cc4594b2c2758.js.map within my JS files would "just work"). However, I'd like for those sourcemaps to only be served to a subset of users.
Is there a way of accomplishing this scenario? I'm happy to define "subset" in any way that would make this scenario work (e.g., being connected to a certain VPN, being in a certain IP-address range, using Fiddler to set a secret header, etc.)
Thanks!
I assume that what you need is to build a system that, in production, offers sourcemaps to a certain group of users, for instance a team of developers, but not to everyone; the sourcemaps should not be publicly accessible.
There are different alternatives that can help achieve this goal.
On the one hand, we can try to use a rules engine that analyzes the incoming HTTP traffic and offers one response or another depending on the criteria deemed appropriate.
These rules engines allow you to customize how HTTP requests are handled by defining a set of match conditions on the incoming requests, and the actions to be performed if the match conditions apply.
Azure CDN provides two types of rules engines: a standard rules engine for Azure CDN from Microsoft, and a premium one from Verizon, which provides more advanced features.
How you use these rules engines depends largely on how you need to identify your user group and on how you want to condition the response your application offers to a sourcemap request.
For instance, one of the standard rules engine's match conditions (also available in the premium rules engine) is the remote IP address the request comes from: maybe that could be a good criterion to discriminate between your different subsets of users.
Or, as you suggested with the use of Fiddler, you can analyze the incoming request headers in search of a custom one.
The Azure CDN from Verizon Premium rules engine provides more advanced match conditions based on browser, device type, et cetera.
Once the users have been identified, the system must consider the action to take depending on whether they belong to one or another group.
Both the standard and the Verizon rules engines provide actions that could be relevant for this purpose.
I think the best option, if you can use the Verizon rules engine, would be to deny access to the HTTP requests sent by users who do not belong to the group allowed to access the sourcemaps.
Other options, although I think they are more difficult to implement if you are working with webpack and an SPA, are to redirect the requests received from one subset of users to certain files that contain the sourcemaps (or to different index.html pages if your frontend is an SPA, each with different JS and CSS resources, with or without sourcemaps), or to rewrite the URL to directly deliver a different set of files.
Another possible action could be to omit the inline sourcemap location from your minified files and instead take advantage of the ability to modify response headers, appending a SourceMap header that points to the actual sourcemaps; this header would then only be sent for the desired user group. Again, depending on how you build your frontend, this may not be an easy task.
Finally, if you are using webpack and the SourceMapDevToolPlugin to build your frontend, you can use the publicPath option to point your production sourcemaps to a non-public, more developer-oriented URL location. This is the approach followed in this article, and I think it is also worth looking into.
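A minimal sketch of that webpack configuration; the sourcemap host below is a placeholder for wherever you actually serve the restricted sourcemaps from (a VPN-only host, for example):

    // webpack.config.js
    const webpack = require('webpack');

    module.exports = {
      // ...entry, output, loaders...
      devtool: false, // let the plugin control sourcemap generation
      plugins: [
        new webpack.SourceMapDevToolPlugin({
          filename: '[file].map',
          // The sourceMappingURL comment in each bundle points here
          // instead of to the public CDN.
          publicPath: 'https://sourcemaps.internal.example.com/',
        }),
      ],
    };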

How to get images to the client?

I'm building a website on express.js and I'm wondering where to store images. Certain 'static' website info like the team pages will be backed by a database. If new team members come onboard, we push new data to CouchDB and a new team page shows up on the site.
A team page will include a profile picture, which will be stored in CouchDB along with other data.
Should I be sending the image through the webserver, or just sending the client a reference to where the image is and having it grab the image from the database directly, since CouchDB is an HTTP server itself?
I am not an expert on CouchDB, but here are my 2 cents. In general, hitting the DB for every image is going to increase the load. If the website is going to be accessed by many people, that will be a lot.
The ideal way is to serve them with a CDN and have the CDN point to your resource server/webserver.
You can store the profile pics (and any other file) as attachments to the docs. The load is the same as for every other web server.
Documentation:
attachment endpoint /db/doc/attachment
document endpoint /db/doc
CouchDB manages the ETags for attachments as well as for docs and views. Clients that have already cached the pics will get a lightweight 304 response for every identical request. You can try it out with my CouchDB-based blog lbl.io: open your favorite browser's developer tools and observe the image requests during multiple refreshes.
Hint 1: If you have the choice between inline attachment upload (Base64-encoded in the doc; 1 request to create a doc with an attachment) or attachment-only upload (multipart/related in the original content type; 2 requests to create a doc with an attachment, or 1 request to create an attachment when the doc already exists)... then choose the second. It is handled more efficiently by CouchDB.
Hint 2: You can configure CouchDB to handle gzip compression based on the content type of attachments. It reduces the load a lot.
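As a rough sketch of the attachment-only upload from hint 1 (using Node 18+ fetch; the database name "site", the doc id "team-member-1", and the file name are placeholders):

    const fs = require('fs');

    async function uploadAvatar() {
      // The attachment endpoint needs the doc's current revision.
      const doc = await fetch('http://localhost:5984/site/team-member-1')
        .then((res) => res.json());

      // PUT the raw image bytes to the /db/doc/attachment endpoint.
      await fetch(
        `http://localhost:5984/site/team-member-1/avatar.jpg?rev=${doc._rev}`,
        {
          method: 'PUT',
          headers: { 'Content-Type': 'image/jpeg' },
          body: fs.readFileSync('avatar.jpg'),
        }
      );
    }

    uploadAvatar();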
I just dump avatars in /web/images/avatars, store the filename only in couchdb, and serve the folder with express.static()
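A minimal sketch of that setup (paths illustrative):

    const express = require('express');
    const app = express();

    // Serve everything in /web/images/avatars at /avatars/<filename>,
    // e.g. /avatars/jane.png; only "jane.png" is stored in CouchDB.
    app.use('/avatars', express.static('/web/images/avatars'));

    app.listen(3000);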
You certainly can use a CouchDB attachment.
You can also create an Amazon S3 bucket and save the absolute HTTPS path on your user objects.

What is a request as defined by Windows Azure

I'm a project manager, not a developer, and we stood up a really basic site in Windows Azure. We've now been asked to track usage of the site, and I'm curious what counts as a request. The site is accessed through a mobile device and performs a basic calculation, so I believe there is only one request per visit; changing the numbers entered in the various fields to "what-if" the calculation doesn't seem to refresh the page. Is that a fair assumption, or are there some basic checks I can do to understand whether I need to aggregate requests to really track or interpret usage of the site?
Whenever your web browser fetches a file (a page, a picture, etc) from a web server, it does so using HTTP - that's "Hypertext Transfer Protocol". HTTP is a request/response protocol, which means your computer sends a request for some file (e.g. "Get me the file 'home.html'"), and the web server sends back a response ("Here's the file", followed by the file itself).
That request which your computer sends to the web server contains all sorts of (potentially) interesting information. We'll now examine the HTTP request your computer just sent to this web server, see what it contains, and find out what it tells me about you.
http://djce.org.uk/dumprequest
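For illustration, a minimal request/response exchange looks roughly like this (host and file name are made up):

    GET /home.html HTTP/1.1
    Host: www.example.com
    User-Agent: Mozilla/5.0

    HTTP/1.1 200 OK
    Content-Type: text/html
    Content-Length: 1234

    <html> ... the file itself ... </html>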
Tip: add Google Analytics to your site. It will give you the number of views and unique users.

Does adding a DNS TXT record slow down loading times?

Google Webmaster Tools offers several methods to verify ownership of websites: meta tags, DNS records, linking to a Google Analytics account, or uploading an HTML file to the server. My website has already been verified through the HTML file method, but I'd like to make my verification with Google more resilient (yes, they do actually recommend more than one method of verification). I don't want to make our usage of Google any more public than it already is, so adding meta tags is out of the picture, as is using a Google Analytics account, since we don't use that for visitor reporting.
This brings up my original question, if I choose to add a DNS record in the form of the following:
TXT Record google-site-verification=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
How would adding this TXT record affect site loading times and overall performance, especially for new visitors who must perform a fresh DNS lookup? Substantially, or marginally?
Most likely marginally, but we're pinching pennies here and trying to squeeze every last bit of optimization out of our server box. Any feedback and/or your own speed tests would be more than welcome!
Typical users will never see the TXT records; they only request A (or AAAA) records to access your services. People interested in the TXT record need to ask for it explicitly:
dig TXT your.fqdn.com
So there is no effect on site load times.
