I am running into a strange problem that I am not able to debug.
Our application servers need downtime, so we have built a temporary xhtml page for our website that shows a message that the servers are temporarily down. Our plan is that during the downtime we will rename our original index.xhtml to something like index-original.xhtml and rename downtime.xhtml to index.xhtml, so that during the downtime the website shows the "temporarily unavailable" page. We will revert these changes when the downtime is over.
Now, when we were testing this renaming, we found that even after renaming the downtime page to index.xhtml and preserving the original index page, the browser was still loading the original index page. We have disabled caching by using the following code in our login filter:
res.setHeader("Cache-Control", "no-cache, no-store, must-revalidate"); // HTTP 1.1.
res.setHeader("Pragma", "no-cache"); // HTTP 1.0.
res.setDateHeader("Expires", 0); // Proxies.
res.setHeader("Access-Control-Allow-Origin", "*");
What I found from the server logs is that the request is hitting the server, but the browser is still showing the old page. When I checked in the browser's developer tools, I found that the page is not being loaded from cache; it is coming from the server. But somehow the server keeps returning the old page after the renaming.
After one server restart, the renamed downtime page gets displayed.
The same server restart is required when we want our original home page back after the downtime is over.
My concern is: why is a server restart required for this renaming to work? Shouldn't the server load the renamed file directly, since the request is hitting the server?
I occasionally get the following messages in the Chrome developer tools:
Document was loaded from Application Cache with manifest https://www.google.co.in/_/chrome/newtab/manifest?espv=2&ie=UTF-8
Application Cache Checking event
Application Cache NoUpdate event
But the Network section of the developer tools does not show the page as being loaded from cache.
I am thoroughly confused here, as I am a junior developer.
Most likely you have Facelets caching turned on. See this Stack Overflow answer: JSF and automatic reload of xhtml files.
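If so, the fix is to lower the Facelets refresh period so that the compiled view cache re-checks the underlying .xhtml files instead of serving stale compilations. A minimal web.xml sketch, assuming JSF 2.x (on Facelets 1.x with JSF 1.2 the parameter name is facelets.REFRESH_PERIOD):

<!-- Assumption: JSF 2.x. A value of 0 makes Facelets re-check the .xhtml
     file on every request instead of serving a cached compilation;
     -1 would mean never re-check. -->
<context-param>
    <param-name>javax.faces.FACELETS_REFRESH_PERIOD</param-name>
    <param-value>0</param-value>
</context-param>

With a refresh period of 0, a rename such as downtime.xhtml to index.xhtml should be picked up on the next request, without a server restart.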
Related
I am using Azure CDN to host a static website I am building.
It's great, apart from the fact that when I update my web app, the old page is cached and so is still shown.
I have added the following cache rule in the rules engine to make it refresh every 60 seconds, but this does nothing and I still get the old content; the only way to get the new content is to use an incognito browser window.
Does anyone have any ideas? It's driving me crazy!
Here is a screenshot of the browser dev window when I hit the index.html page. I can't see any cache-control headers here; I would have thought the Azure CDN would/should be adding these. Is that incorrect?
The rule you are modifying controls the "internal max-age". If a file shows up correctly in incognito mode, this rule is working fine. You have to set the "external max-age" to control the Cache-Control header.
https://learn.microsoft.com/en-us/azure/cdn/cdn-verizon-premium-rules-engine-reference-features
It looks like it is not the Azure CDN that is caching index.html; it is your browser. Ensure that the Cache-Control header is returned correctly by checking it in the developer tools.
https://learn.microsoft.com/en-us/azure/cdn/cdn-manage-expiration-of-cloud-service-content
https://learn.microsoft.com/en-us/azure/cdn/cdn-manage-expiration-of-blob-content
When using JS/CSS from an unsecured CDN in an https page:
A. Some pages block loading the js/css, causing runtime errors because the js code is missing.
B. Some pages do not block loading the js/css, and the page is shown, but flagged as entirely insecure content.
What is the difference between these behaviors?
Even using the same browser (I'm using Chrome 51.0.2704.103 (64-bit) on Mac OS X) and viewing the same page, the behavior sometimes changes...
Do some response headers of index.html or similar control this behavior?
Does anyone know about this?
Example:
My friend created the page https://cfn-iot-heatmap.herokuapp.com/; previously, this page behaved like A: the contents were totally whited out.
In this case, the insecure CDN contents are:
https://cdn.leafletjs.com/leaflet-0.6.4/leaflet.js
https://cdn.leafletjs.com/leaflet-0.6.4/leaflet.css
I got the source code of this page and deployed it to my own Heroku repository, https://kinkyujitai.herokuapp.com/, where it is shown like B.
Curiously, after I deployed my repository, my friend's repository also started to work like B: it shows a security warning, but the page is displayed.
It is very curious, so I want to know the reason for this phenomenon...
From a secure (https) origin, you should always include secure elements.
If you don't, the browser can block the insecure request and/or remove the visual indication of security.
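The simplest fix is to reference the CDN assets over https. If the pages happen to be served by a Java servlet stack, one server-side mitigation, sketched here with an illustrative filter name and mapping, is to send a Content-Security-Policy header with upgrade-insecure-requests, which asks supporting browsers to rewrite http:// subresource URLs to https:// before fetching them:

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletResponse;

// Illustrative filter: asks supporting browsers to upgrade http://
// subresources (scripts, stylesheets, images) to https:// so that a
// secure page produces neither mixed-content blocking nor warnings.
@WebFilter("/*")
public class UpgradeInsecureRequestsFilter implements Filter {
    @Override
    public void init(FilterConfig config) { /* nothing to initialize */ }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        ((HttpServletResponse) res).setHeader(
                "Content-Security-Policy", "upgrade-insecure-requests");
        chain.doFilter(req, res);
    }

    @Override
    public void destroy() { /* nothing to clean up */ }
}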
I have no experience with Varnish, so please bear with me.
We have inserted Google Tag Manager into a client's site. The Tag Manager injects the Google Analytics tracking code (and nothing else) into the page. The client's technical service provider has now complained that the Tag Manager prevents the Varnish cache from working.
My guess is that this has nothing to do with Tag Manager as such, but is rather caused by the cookies from Google Analytics; apparently, in the default configuration, pages with cookies are not cached. However, since I'm not very familiar with Varnish, I cannot speak with any authority on the matter.
So my question is: is there any reason why Google Tag Manager itself (not any tags inside the Tag Manager) would invalidate a Varnish cache on each request? A web search turned up nothing specific regarding Varnish and GTM.
Thank you for your time,
Eike
Google Tag Manager will not interfere with the Varnish cache in any way. The reason is that the requests for Google Tag Manager are sent to google-analytics.com, not to your website.
The cookies are then set by google-analytics.com and are only sent between the client's browser and google-analytics.com.
This means that Google Tag Manager does not actually have any effect on your website apart from the initial JavaScript being loaded from there.
In fact, Varnish does not inspect any cookie that is created through JavaScript; it only acts on cookies set via the Set-Cookie header of the HTTP response.
The problem you may be having is that if the dataLayer values are placed directly in the HTML code, they will not change, because they are served from the cache.
To solve this, make another HTTP call (e.g. via Ajax) that is excluded from caching and returns the variables for the dataLayer.
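On a Java stack, a minimal sketch of such an endpoint (the servlet path and the JSON payload here are hypothetical): Varnish keeps caching the HTML, while the page fetches the per-user values via Ajax and pushes them into the dataLayer:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.*;

// Illustrative endpoint: serves per-request dataLayer values as JSON with
// caching disabled, so the cached HTML no longer has to carry them.
@WebServlet("/datalayer.json")
public class DataLayerServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setHeader("Cache-Control", "no-cache, no-store, must-revalidate");
        resp.setContentType("application/json");
        resp.setCharacterEncoding("UTF-8");
        // Hypothetical values; a real app would derive them from the session or user.
        resp.getWriter().write("{\"pageType\":\"product\",\"visitorLoggedIn\":true}");
    }
}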
We've partnered with a company whose website will display our content in an IFRAME. I understand what the header is, what it does, and why; what I need help with is tracking down where it's coming from!
Windows Server 2003/IIS6
Container page: https://testDomain.com/test.asp
IFRAME Content: https://ourDomain.com/index.asp?lots_of_parameters,_wheeeee
Testing in Firefox 24 with Firebug installed. (IE and Chrome do the same thing.) Also running Fiddler so I can watch network traffic while I'm at it.
For simplicity's sake, I created a page with nothing on it but the IFRAME in question (same physical server, different domain/site), and it failed with:
Load denied by X-Frame-Options: https://www.google.com/ does not permit cross-origin framing.
(That's in the Firebug console.) I'm confused because:
Google is not referenced anywhere in the containing app or in the IFRAMEd app. All JavaScript libraries are kept locally; there is no analytics code in the app. No Google, nowhere.
The containing page has NOTHING on it, except the IFRAME. No html tags, no head tag, no body tag. IFRAME. That's it.
The X-FRAME-OPTIONS header does not exist in IIS on the server: not at the "Websites" node, not in the individual sites.
So where the h-e-double-sticks is that coming from? What am I missing?
Interesting point: if I remove the "S" from the https in the IFRAME URL, it works. Given the nature of the data, though, SSL is required.
You might check global.asax.cs; the app could be adding the header to every response automatically. If you search the app for "x-frame-options", you might also find something.
We have a JSF 1.2 application with RichFaces 3.3.3 on JBoss 5.1. When we restart the server and make the very first login in the app, we are redirected to a blank page at the following URL:
https://our.domain.com/a4j/s/3_3_3.Finalorg/richfaces/renderkit/html/css/extended_classes.xcss/DATB/eAF7sqpgb-jyGdIAFrMEaw__
It happens only on the very first login after the redeploy and restart of the server. From the second login onwards it works fine.
What causes this, and how can I solve it?
The container-managed authentication will automatically redirect to the very first HTTP request that triggered the authentication check.
That it redirected to a CSS file can only mean that the actual page has been requested from the browser cache instead of directly from the server, while the CSS file has been requested directly from the server, either fully or by a conditional GET.
You need to fix two problems:
Create a filter which tells the browser not to cache the restricted pages. Map this filter on the same URL pattern as the security constraint. This way the browser will never request them from the cache. This is done by setting the following headers on the response:
response.setHeader("Cache-Control", "no-cache, no-store, must-revalidate"); // HTTP 1.1.
response.setHeader("Pragma", "no-cache"); // HTTP 1.0.
response.setDateHeader("Expires", 0); // Proxies.
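A minimal sketch of a complete filter (the class name and URL pattern are illustrative; the @WebFilter annotation assumes Servlet 3.0+, on older containers declare the filter in web.xml instead):

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletResponse;

// Illustrative no-cache filter for restricted pages; map it on the same
// URL pattern as the security constraint.
@WebFilter("/app/*")
public class NoCacheFilter implements Filter {
    @Override
    public void init(FilterConfig config) { /* nothing to initialize */ }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse response = (HttpServletResponse) res;
        response.setHeader("Cache-Control", "no-cache, no-store, must-revalidate"); // HTTP 1.1.
        response.setHeader("Pragma", "no-cache"); // HTTP 1.0.
        response.setDateHeader("Expires", 0); // Proxies.
        chain.doFilter(req, res);
    }

    @Override
    public void destroy() { /* nothing to clean up */ }
}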
Exclude static resources like CSS/JS/images from the authentication check. Add another allow-all security constraint on the URL pattern of those resources, e.g. /a4j/*, /resource/*, /static/*, etc., or whatever you have. This way the server will never authenticate those requests.
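For this second point, a web.xml sketch (the URL patterns are the examples above): a security-constraint whose web-resource-collection has no accompanying auth-constraint is open to everyone:

<!-- Allow-all constraint: omitting <auth-constraint> means no
     authentication is required for these static resource paths. -->
<security-constraint>
    <web-resource-collection>
        <web-resource-name>Static resources</web-resource-name>
        <url-pattern>/a4j/*</url-pattern>
        <url-pattern>/resource/*</url-pattern>
        <url-pattern>/static/*</url-pattern>
    </web-resource-collection>
</security-constraint>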
Can you post your web.xml file? Try to externalize the CSS from the RichFaces jar if possible. It seems that when the CSS is loaded for the first time, there is either some exception or it is taking too much time. Do post your web.xml.