Hey all,
I have a problem with my custom 404 page.
domain.com/lalala -> displays the 404 page in plain text
domain.com/lalala.html -> displays the page correctly
The problem is not present in Internet Explorer; it only happens in Firefox/Chrome.
I think this all has something to do with MIME types. I tried adding a MIME type rule (.* -> text/html), but no result.
By the way, the server hosts a SharePoint 2007 site.
Any suggestions?
Thanks
I did some poking around for you, and the consensus seems to be that this is caused by invalid, invisible characters in the default SharePoint 404 file. The solution seems to be to recreate the 404 page from scratch (don't copy/paste), overwrite the old one, and try again.
IE is generally more forgiving about the content-type header than FF or Chrome, so I'm not surprised it works there.
When I set the response content type to Excel, the Open/Save dialog is shown twice, but only in IE8. It works fine in other browsers (tested on Chrome/Firefox/Opera).
The code for setting response content type is:
response.setContentType("application/vnd.ms-excel");
response.setHeader("Content-disposition","attachment;filename=abc.xls");
I searched for solutions/workarounds. Turning off Smartscreen didn't help.
Also, another suggestion was to wait for 5-10 sec before clicking Save/Open. That too didn't work.
What's the cause of this? Are there any IE specific workarounds?
It's a pain, but IE8 is still widely used by our users.
This is just a guess, but it could have something to do with the way Office (used to) embed itself in IE with plugins.
A workaround might be putting it in a zip file before sending it over to the user.
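It might look roughly like this (a PHP sketch of the idea only, since your export is a Java servlet; generate_report() is a stand-in for whatever currently builds the spreadsheet bytes):
<?php
// Wrap the generated spreadsheet in a zip before sending, so IE8 sees
// application/zip instead of the Office MIME type it mishandles.
$xlsData = generate_report(); // placeholder for your existing export code

$tmp = tempnam(sys_get_temp_dir(), 'xls');
$zip = new ZipArchive();
$zip->open($tmp, ZipArchive::CREATE | ZipArchive::OVERWRITE);
$zip->addFromString('abc.xls', $xlsData);
$zip->close();

header('Content-Type: application/zip');
header('Content-Disposition: attachment; filename="abc.zip"');
header('Content-Length: ' . filesize($tmp));
readfile($tmp);
unlink($tmp);
?>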
I'm sending the HTTP header "Content-type: application/atom+xml; charset=utf-8" in my Atom 2.0 feed right now (using the header() function in PHP).
Whenever I open the URL in Chrome or Konqueror, it will just show text.
If I change that to application/xml, Chrome will display an XML tree and Konqueror will still display that as text.
Since I have a feed aggregator on my computer, shouldn't that XML be opened with it?
And if not, since these standards are more than 10 years old, shouldn't these browsers at least put a button at the top of the page inviting you to download an aggregator?
For these two reasons I guess I'm not using the proper content type. What do you think?
That's the correct Content-Type for an Atom feed (application/atom+xml). However, Chromium does not handle it correctly (Issue 104358: RSS feeds are not parsed correctly).
One possible workaround for Chromium's bug is to use a more general type (e.g., application/xml). Alternatively, stick with the correct type and accept that users who have chosen that browser will get a more confusing experience.
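In PHP, one crude way to combine the two options is to negotiate on the Accept header (just a sketch; build_feed() is a placeholder for however you produce the feed XML):
<?php
// Send the spec-correct type to clients that ask for it, and fall back to
// application/xml for browsers (like Chromium, per the bug above).
$accept = isset($_SERVER['HTTP_ACCEPT']) ? $_SERVER['HTTP_ACCEPT'] : '';

if (strpos($accept, 'application/atom+xml') !== false) {
    header('Content-Type: application/atom+xml; charset=utf-8');
} else {
    header('Content-Type: application/xml; charset=utf-8'); // Chromium workaround
}

echo build_feed(); // placeholder for the code that generates the Atom XML
?>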
I am currently developing a site which is not supposed to reveal that it is built on Magento (sorry about that).
I thought Wappalyzer (a Mozilla add-on) and GTmetrix detect the CMS from the HTML, but when I pointed those tools at an empty white page they still reported that I'm using Magento (there is nothing in the source view; it's a blank page). So how are they finding out that I'm using Magento? Any idea how they work? I checked the headers, but nothing there specifically mentions Magento. The same goes for WordPress/Joomla: Wappalyzer and GTmetrix identify the platform even when there is no HTML source.
So I guess it's something in the headers (I might be missing something), or what else could it be? Please advise. Screenshot attached.
Thanks in advance
You can view Wappalyzer's source code (Ctrl+F "Magento"):
https://github.com/ElbertF/Wappalyzer/blob/master/share/js/apps.js
Most likely Wappalyzer picked up on the "Mage" JavaScript variable. You can see this by clicking the DOM tab in Firebug.
They find it using words like mage, varien, and magento. If any of these words appear in a CSS/JS file, a class name, an #id, or a comment, the site is flagged as Magento.
GTmetrix goes one step further: it also checks the CSS/JS URL paths, and if it finds a URL like skin/frontend it reports the site as Magento.
Don't forget cookies...
I use Firebug: go to the main menu -> Cookies.
There is a cookie named frontend.
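Put together, my guess is that the check amounts to something like this (not Wappalyzer's actual code, just the idea):
<?php
// Rough fingerprinting sketch: look for Magento tell-tales in the fetched
// markup, the asset URLs, and the cookies.
function looks_like_magento($html, array $cookies)
{
    if (preg_match('/(Mage\.|Varien|Magento)/i', $html)) {
        return true; // "Mage" JS object, Varien/Magento class names or comments
    }
    if (strpos($html, 'skin/frontend') !== false) {
        return true; // typical Magento theme/asset URL path
    }
    if (isset($cookies['frontend'])) {
        return true; // Magento's default session cookie is named "frontend"
    }
    return false;
}
?>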
I'm dynamically generating a text file in PHP, so it has a .php extension but a text/plain MIME type. All browsers display the file as nicely preformatted text, except IE8.
Googling tells me that they've added security where if the HTTP header content type doesn't match the expected content type (I think based on the extension and some sniffing) then it forces the file to be downloaded. In my case I have to open it, and also give it permission to open the file I just told it to open! That's probably a Win7 annoyance though. Serving a static plain text file works fine, of course.
So can I stop IE8 from downloading the file and get it to view it normally? The code has to run on multiple shared hosting environments, so I think I'm stuck with the .php extension.
Add this to your HTTP header:
X-Content-Type-Options: nosniff
It's an IE8 feature that lets you opt out of its MIME sniffing.
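Since the file is generated by PHP anyway, that would look something like this (a sketch; generate_text() is just a placeholder for your own code):
<?php
// Send the real type and tell IE8 not to second-guess it.
header('Content-Type: text/plain; charset=utf-8');
header('X-Content-Type-Options: nosniff');

echo generate_text(); // placeholder for however the script builds the text
?>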
Alternatively, you can "trick" IE8 into thinking that it is indeed serving up a text file. These 2 lines do it for me and don't involve using non-standardized "X-" headers:
Header("Content-Type: text/plain");
Header("Content-Disposition: inline; filename=\"whatever.txt\"");
For some reason my site displays a "Cannot display this message" error in IE6/IE7 while working correctly in Firefox, Opera, Safari and IE8.
It looks like this: http://www.reviewsaurus.com/images/pagedisplay.png
This document was successfully checked as XHTML 1.0 Transitional!
It doesn't have anything to do with HTML errors. The worst those can do is show a garbled or blank page.
There is some sort of server misconfiguration involving WordPress and the gzip Content-Encoding.
Your website doesn't work in IE, but /index.php loads just fine. Inspecting the raw HTTP Response (using Fiddler2), the difference between the two responses is that on the request to /, WordPress (presumably) adds the following text to the gzipped HTTP response body:
<!-- Page not cached by WP Super Cache. No closing HTML tag. Check your theme. -->
Because of that addition to the gzipped content, it's no longer a proper gzip stream, and IE6/7 can't ungzip it.
Other browsers probably have better error handling, so they can handle the error just fine.
I don't know how you can fix that problem, but a Google search for that piece of text turns up a few hits on wordpress.org at least.
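To make the failure mode concrete, here's a small sketch (not the actual WordPress code) of what ends up on the wire:
<?php
// The body is gzipped first, then the cache plugin's note is appended as
// plain text after the compressed stream, so the response body is no
// longer one clean gzip member.
$html = '<html><body>Hello</body></html>';
$body = gzencode($html);
$body .= "\n<!-- Page not cached by WP Super Cache. No closing HTML tag. Check your theme. -->";

header('Content-Encoding: gzip');
header('Content-Length: ' . strlen($body));
echo $body; // lenient browsers may still decode the leading stream; IE6/7 reportedly give up
?>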
It's not valid XHTML. If IE6/7 is actually interpreting it as XML, this will cause it to stop parsing. Can you give a screenshot to show what the failure looks like?
UPDATE: Now that it is XHTML Transitional, it's validating, and I'm out of suggestions until I get someplace I can run IE.
UPDATE 2: Just ran IE7 against the site, and the page loaded fine.
It's displaying fine, albeit slowly, in IE7 for me. I would still recommend fixing the two errors and validating as Strict, but they don't seem to me to be the cause of your problem. IE6 and IE7 are interpreting the page as text/html.
This document was successfully checked as XHTML 1.0 Transitional!
It still doesn't work though...
Found the problem:
I was using the following function to remove unnecessary characters; it seems to be wrong.
<?php
function callback($buffer)
{
    $holdit = $buffer;
    $holdit = str_replace("\t", " ", $holdit); // tab
    $holdit = str_replace("  ", " ", $holdit); // double space
    $holdit = str_replace("\n", " ", $holdit); // new line
    $holdit = str_replace("\r", " ", $holdit); // carriage return
    $holdit = eregi_replace("<!--[^>]*-->", " ", $holdit); // HTML comment
    return $holdit;
}
ob_start("ob_gzhandler");
ob_start("callback");
?>
It seems I don't need that function anyway; the site is faster without it.
(I should probably have opted for a single eregi_replace too)
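For reference, if I ever bring the minification back, a single-pass version would probably look like this (just a sketch, untested on the live site):
<?php
function callback($buffer)
{
    // strip HTML comments and collapse runs of whitespace in one pass each
    // (note: this also strips IE conditional comments)
    $buffer = preg_replace('/<!--.*?-->/s', ' ', $buffer);
    $buffer = preg_replace('/\s+/', ' ', $buffer);
    return $buffer;
}

ob_start("ob_gzhandler");
ob_start("callback");
?>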