I have a live site where every error is logged and e-mailed to me.
I've been getting a lot of "Padding is invalid and cannot be removed." errors on requests to WebResource.axd. Looking closely, the request is erroneous.
This is the request in question:
/webresource.axd?d=mgqvdy8omlq71j1set2ida2&ampt=633700045603820000
And this is how it should look:
/WebResource.axd?d=MgQvdy8OmLQ71j1SET2IdA2&amp;t=633700045603820000
Notice the lack of capitalization and, more importantly, the missing ; after the &amp;.
The user agent is this:
UA: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727)
What could this be?
Could a real, actual user be getting errors because of this?
Is this something that IE could actually be doing wrong?
Or is this just a badly written bot?
This happens every now and then; it definitely doesn't happen to all our users, or even to all our IE users.
UPDATE: I'm also getting a lot of "Invalid character in a Base-64 string." errors when forms are posted, also only from IE 6.0, so I'm guessing they're related.
Thanks for your help!
Daniel
We were seeing similar errors with ScriptResource.axd and Invalid Viewstate exceptions. Eventually I found this post:
Error : /ScriptResource.axd : Invalid viewstate.
Which indicated a bug in IE (and possibly other browsers) where an invalid XHTML DOCTYPE causes the browser to make an incorrect request to ScriptResource.axd. We solved the problem by changing the DOCTYPE to the HTML5 doctype and removing the xmlns attribute from the html tag. Our pages were not XHTML-compliant anyway.
From:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" >
To:
<!DOCTYPE html>
<html>
Just guessing here, but I had a similar problem with special characters being removed or substituted when I used IIS 7 to run some sites. It turned out to be IIS's "security feature", UrlScan - its rules live in urlscan.ini. Maybe this will help.
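For reference, this is roughly what the relevant part of urlscan.ini looks like; UrlScan's default [DenyUrlSequences] list rejects requests containing certain characters outright (the exact defaults vary by UrlScan version, so treat this as a sketch):

[DenyUrlSequences]
..   ; don't allow directory traversals
./   ; don't allow trailing dot on a directory name
\    ; don't allow backslashes in the URL
%    ; don't allow escaping after normalization
&    ; don't allow multiple CGI processes on a single request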
If you've already set a fixed machineKey in your web.config, then this issue is most likely proxies messing up the requests. We get it with some of our IE6 users as well, and I've also seen proxies turn & into &amp; in the query string (which is incorrect).
As this is semi-random, the second option in this blog post may help.
Since the URL appears to be manipulated, it looks like a bug in proxy software. Maybe you can find patterns in the requesting IP ranges to identify certain proxies or ISPs.
However, that does not really explain the consistent IE6 user agent (unless the proxy screws that up too). It could be one of the many IE bugs (e.g. gzip issues, the Missing 4k Bug, etc.), but those usually break much more than just lowercasing a URL and removing one character. You could temporarily turn off gzip to see if it has any effect.
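If the site runs on IIS 7, compression can be toggled from web.config while you test; a minimal sketch using the built-in urlCompression element (on IIS 6 this lives in the metabase instead):

<system.webServer>
  <!-- Temporarily disable compression to rule out IE gzip bugs -->
  <urlCompression doStaticCompression="false" doDynamicCompression="false" />
</system.webServer>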
Here is a question with similar symptoms and my answer includes links to some of the IE bugs.
You could try setting a fixed machineKey in your web.config file. For this you can use a machineKey generator or generate your own:
<system.web>
  <machineKey
    validationKey="SOME KEY"
    decryptionKey="OTHER KEY"
    validation="SHA1" />
</system.web>
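The keys themselves are just random hex strings (commonly 128 hex characters for a SHA1 validationKey and 64 for an AES decryptionKey), so any cryptographically secure random generator will do; for example, a quick Node.js sketch:

var crypto = require('crypto');
// 64 random bytes -> 128 hex characters for validationKey (HMAC-SHA1)
console.log('validationKey: ' + crypto.randomBytes(64).toString('hex').toUpperCase());
// 32 random bytes -> 64 hex characters for decryptionKey (AES-256)
console.log('decryptionKey: ' + crypto.randomBytes(32).toString('hex').toUpperCase());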
You could check:
-doctype (does it match the data you're sending? IE6 is picky)
-character set
Why I am using this editor:
In the past I used PrimeFaces p:editor, which however is deprecated and lacks functions that the users desperately want. I cannot use the new PrimeFaces p:textEditor because of this: Primefaces textEditor: converting text to HTML with JavaScript not working.
What is it used for:
I am using pe:ckEditor from PrimeFaces Extensions in my program, where the user uses the editor to compose the content of an e-mail message. Then, on a click of the Send button, the HTML from the editor is taken and sent via e-mail to a client.
What is the issue:
When using p:editor, I got the HTML via the JavaScript function saveHTML, and it worked perfectly even when the text contained Czech characters (ěščřžýáíéó); I did not even have to set the encoding or anything else.
Now, however, when the user writes "V případě dalších dotazů se na nás můžete obracet každý den na telefonním čísle", the HTML I get back contains the text like this: "V pÅípadÄ dalších dotazů se na nás můžete obracet každý den na telefonním Äísle" - complete rubbish that the user obviously cannot send to a client...
My research:
EDIT: Based on some comments, I tried to add <meta charset="utf-8"> and <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">, but that did not help. In pom.xml I have also found <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>, so I do not think the problem is in the HTML page, but rather in the settings of the editor itself...
So I figured that the encoding must be set specifically for the editor in its config. I finally figured out how to make the editor load a custom config, but nothing that I found on the Internet and added to the config worked for me:
config.language='cs';
And:
config.entities_latin = false;
And:
config.entities = false;
And:
config.basicEntities = false;
And all its combinations.
ANOTHER EDIT:
Based on some other comments here, I also installed OmniFaces and tried to solve this with its CharacterEncodingFilter, but nothing changed and it is still not working.
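For reference, the filter is registered in web.xml roughly like this (the filter name is arbitrary):

<filter>
    <filter-name>characterEncodingFilter</filter-name>
    <filter-class>org.omnifaces.filter.CharacterEncodingFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>characterEncodingFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>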
I also found out that my problem seems to be closely related to this issue: Unicode input retrieved via PrimeFaces input components become corrupted. The accepted answer there gives 3 ways to solve it: one is the CharacterEncodingFilter, another is not applicable for Tomcat users (me), and the last "solution" is reporting it to the PrimeFaces Extensions developers (which I did: https://github.com/primefaces-extensions/primefaces-extensions.github.com/issues/756 ).
Please let me know if you know how to fix this or if there is any workaround.
PrimeFaces Extensions - version 7.0.2;
PrimeFaces - version 7.0.7
My colleague and I found out what the issue was, based on the test code that @melloware provided.
The original editor, p:editor, which we had been using and which we are trying to replace with pe:ckEditor, could provide us with its content as HTML only if we called the JavaScript function saveHTML.
But with pe:ckEditor, any time the user hit the Send button, whose onstart called saveHTML, that call corrupted the content. Once we removed the saveHTML call and took the pe:ckEditor content as it was (it is already HTML), everything was fine, with no corrupted characters.
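In other words, the change amounted to something like this (a hypothetical sketch - the widget and bean names are made up):

<!-- Before: with p:editor, saveHTML had to be called before submitting -->
<p:commandButton value="Send" onstart="PF('editorWidget').saveHTML()" action="#{mailBean.send}" />

<!-- After: pe:ckEditor already submits its content as HTML, no onstart needed -->
<p:commandButton value="Send" action="#{mailBean.send}" />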
I am writing a Google Chrome extension. The targeted pages are written in Russian. Chrome translates to English. I can see some inconsistencies appear that seem to be linked to translation. For example, in the following code I check to see if I am in a particular folder:
if (searchResult[0].innerHTML.indexOf("Общая папка") != -1) {
    alert("You are in Shared Folder.");
} else {
    alert(searchResult[0].innerHTML);
}
If I reload the exact same page several times, the result is inconsistent. Sometimes it detects "Общая папка", but other times it does not. When it does not detect this phrase, the alert says I am in "Shared Folder", which is the translation of "Общая папка". There appears to be no consistency here. Sometimes I am dealing with the original text (which is preferred), but sometimes I am dealing with crappy translations that are useless for my script because the translations change all the time.
Does anyone know how to fix this? Turning translation off would probably fix it, but the translations are actually useful and necessary for other aspects of the extension. I understand that the translation works with some secondary layer of the HTML (I have not researched this very well). Can I simply refer to the original in my script?
According to this answer, you can disable translation by placing the following element in the head portion of your web page:
<meta name="google" value="notranslate">
If you needed to programmatically disable translation, you could add that tag through JavaScript.
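For example, something like this (a sketch; note that Google's documentation also shows the attribute as content rather than value):

// Build the meta tag and append it to <head> before translation kicks in
var meta = document.createElement('meta');
meta.setAttribute('name', 'google');
meta.setAttribute('value', 'notranslate');
document.getElementsByTagName('head')[0].appendChild(meta);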
Not sure about disabling it, but it looks like after a translation Chrome adds class="translated-ltr" to the <html> element, so maybe you can at least detect when the page was translated and either warn the user that the extension might not work properly on this page or just disable it.
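Something along these lines (a sketch; Chrome reportedly uses translated-ltr or translated-rtl depending on the text direction):

var html = document.documentElement;
if (/\btranslated-(ltr|rtl)\b/.test(html.className)) {
    // The page has been machine-translated; the original text may no longer be in the DOM
    alert('This page was translated by Chrome; the extension may not work properly here.');
}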
I've been doing some searching around and couldn't find this topic anywhere. My company wants to use an HTML doctype, but WordPress outputs XHTML by default. I've seen plugins for this, and I would use them, but this site will probably outlive the development of said plugins. Plus it's something else to account for when updating or building new sites.
If I use an XHTML doctype how will HTML5 browsers render it? Will they be backwards-compatible with old doctypes?
Edit 1: It is actually recommended that, in order to make the transition to HTML5 easier, you try to follow the XHTML structure when writing any HTML.
There will be additional options and types with XHTML in HTML5 but a lot of it is based on the structure in which you are writing your HTML. The X simply means that it is moving to more of an XML base.
To go along with Kayla's input, you will want to make sure that all tags are being closed:
<br/> Instead of: <br>
You will also want to make sure to put quotation marks around attribute values:
<a href="value"></a> instead of: <a href=value></a>
Browsers have been slowly adopting the XHTML structure. This might mean that HTML formatted without end tags etc. might look a little different in IE 6 than in newer browser versions. Hope that helps!
It is not recommended to use the XHTML 1.0 or 1.1 doctypes for your HTML5 pages: one, because it's unnecessary, and two, your markup won't validate when you use the newer tags. Here is a quick guide on using XML syntax in HTML5, a.k.a. XHTML5.
Update: As noted below, check out the W3C specification.
I am not sure what you are asking. What do plugins have to do with DTD?
Yes, any browser that supports HTML5 is backwards compatible with (X)HTML; you can mix and match all you want. And basically, as long as you are writing tags like:
<div>Hi</div> or <p>There</p>
instead of
<DIV>Hi</DIV> or <P>There</P>
the rest is just semantics.
HTML5 began life specifically because browser manufacturers wanted to make sure that changes they introduced were backward compatible with existing web pages, in contrast to the now-defunct XHTML 2, which was shaping up to be non-backward-compatible.
So yes, your XHTML doctype will work just fine in HTML5 browsers.
As far as I know, all modern browsers that are adding HTML5 support will continue to support HTML 4 and XHTML for the foreseeable future, so you should be fine.
If you're using WordPress, though, stick with XHTML. It'll be supported for a long time to come in all browsers, and most WordPress plugins are designed to output XHTML.
I do some work on an old tables-based site. It is being replaced, but I would like it to work for now.
One of the pages in question is http://www.gdsofusa.com/marantec_garage_door_openers.html. When this page (and some others) is viewed in Safari 5.0 (7533.16) and probably others, the page content is off to the right.
I just need to fix this since about 15% of the traffic is Safari.
Please help!
I got it working by:
Removing "float:left" on your "tabs" div
Setting your "tabs" li's as "display:inline-block" instead of "display:inline"
HTH
The first thing to try is always validation.
A common cause of this kind of problem is a mismatched end tag on an HTML element.
Try an HTML validator like http://validator.w3.org
For some reason my site displays a "Cannot display this message" error in IE6 and IE7 while working correctly in Firefox, Opera, Safari and IE8.
It looks like this: http://www.reviewsaurus.com/images/pagedisplay.png
This document was successfully checked as XHTML 1.0 Transitional!
It doesn't have anything to do with HTML errors. The worst those can do is produce a garbled or blank page.
There is some sort of server misconfiguration involving WordPress and the gzip Content-Encoding.
Your website doesn't work in IE, but /index.php loads just fine. Inspecting the raw HTTP Response (using Fiddler2), the difference between the two responses is that on the request to /, WordPress (presumably) adds the following text to the gzipped HTTP response body:
<!-- Page not cached by WP Super Cache. No closing HTML tag. Check your theme. -->
Because of that addition to the gzipped content, it's no longer a proper gzip stream, and IE6/7 can't ungzip it.
Other browsers probably have more forgiving error handling, so they cope with the malformed stream just fine.
I don't know how you can fix that problem, but a Google search for that piece of text turns up a few hits on wordpress.org at least.
It's not valid XHTML. If IE6/7 is actually interpreting it as XML, this will cause it to stop parsing. Can you give a screenshot to show what the failure looks like?
UPDATE: Now that it is XHTML Transitional, it's validating, and I'm out of suggestions until I get someplace I can run IE.
UPDATE 2: Just ran IE7 against the site, and the page loaded fine.
It's displaying fine, albeit slowly, in IE7 for me. I would still recommend fixing the two errors and validating as Strict, but they don't seem to me to be the cause of your problem. IE6 and IE7 are interpreting the pages as text/html.
This document was successfully checked as XHTML 1.0 Transitional!
It still doesn't work though...
Found the problem:
I was using the following function to remove unnecessary characters from the output; it seems to have been the culprit.
<?php
function callback($buffer)
{
    $holdit = $buffer;
    $holdit = str_replace("\t", " ", $holdit);  // tab
    $holdit = str_replace("  ", " ", $holdit);  // double space
    $holdit = str_replace("\n", " ", $holdit);  // new line
    $holdit = str_replace("\r", " ", $holdit);  // carriage return
    $holdit = eregi_replace("<!--[^>]*-->", " ", $holdit); // HTML comment
    return $holdit;
}
ob_start("ob_gzhandler");
ob_start("callback");
?>
Seems I don't need that function either; the site is faster without it.
(I should probably have opted for a single eregi_replace too)
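Had I kept it, a single pattern-based pass would have been safer; here is a sketch with preg_replace (eregi_replace is deprecated), whose non-greedy comment pattern also copes with comments that contain a >:

<?php
function callback($buffer)
{
    $buffer = preg_replace('/<!--.*?-->/s', ' ', $buffer); // strip HTML comments
    return preg_replace('/\s+/', ' ', $buffer);            // collapse all whitespace runs
}
ob_start("ob_gzhandler");
ob_start("callback");
?>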