We recently made a number of changes to our CSP and have gotten reports from a very small number of users that their browser refuses to render the page because of one of these changes. We have 5 reports out of 2 million active users, so this is something very specific.
None of our QA team can replicate this with their test browsers, so we're unable to determine what about the new CSP is causing the problem.
Is there a way to put a browser into some super-strict mode so that it will block all dynamic content at the slightest issue?
Rundown of my analysis:
I know the latest version also introduced some [tab] unicode, but I want to replicate this issue before I remove it so that I know we've actually fixed it.
We do not have any of the deprecated X- headers in there.
Tried loading it into some online CSP validators; those come back with some vulnerabilities, but they say the CSP is valid as written.
We're considering the report-to directive but have not implemented it yet (does anyone know if this would even help us here? I've sketched what we have in mind below this rundown).
We were able to screenshare with one of the users with this issue and saw their browser console showing that the page was being blocked by the CSP, but the violations it reported corresponded to a very old version of our CSP. That user was running a current Chrome install.
I'd post the CSP, but the invalid unicode wouldn't post anyway, and it would also identify some of our clients.
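For reference, this is roughly the report-to wiring we're considering -- a minimal sketch assuming an Express-style Node server (the endpoint name, path, and port are placeholders, not our real config):

```ts
// Sketch only: wiring up CSP violation reporting so affected browsers
// tell us which directive they tripped over.
import express from "express";

const app = express();

app.use((req, res, next) => {
  // Reporting-Endpoints is the header the report-to directive refers to by name.
  res.setHeader(
    "Reporting-Endpoints",
    'csp-endpoint="https://example.com/csp-reports"'
  );
  res.setHeader(
    "Content-Security-Policy",
    // report-uri is deprecated but more widely supported, so sending both
    // maximizes the chance of hearing from whatever unusual client these
    // 5 users have.
    "default-src 'self'; report-to csp-endpoint; report-uri https://example.com/csp-reports"
  );
  next();
});

// Violation reports arrive as JSON POST bodies with varying content types.
app.post(
  "/csp-reports",
  express.json({
    type: ["application/json", "application/csp-report", "application/reports+json"],
  }),
  (req, res) => {
    console.log(JSON.stringify(req.body)); // stash these for later analysis
    res.sendStatus(204);
  }
);

app.listen(3000);
```

The thinking is that reporting won't make any browser stricter, but it would let the affected users' browsers send us the exact violation instead of us guessing from a screenshare.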
Related
I have switched to using Content-Security-Policy for my website. I'm starting to see reports about the following not being allowed: https://www.pagespeed-mod.com/v1/taas
Does anyone know why the website is trying to load this file? I'm using Google Analytics and Tag Manager, but I don't think I have any page speed mod installed. Maybe this is an extension in the user's browser? Or something that loads when they open developer tools? Another source I could think of is automatic optimization through Cloudflare, which I'm also running.
Extra info: The source of loading this script is https://3001.scriptcdn.net/code/static/1 which doesn't reveal much about who made that.
Had exactly the same issue, and it was preventing me from using the Element Inspector / debugger. It appears to be some Chrome extension you have installed gone rogue; see if you have an extension called "Auto Refresh Plus" installed, like I did.
I also see reports of https://www.pagespeed-mod.com/v1/taas being blocked, with the same loading source. It seems to happen in short bursts across the various resources I have reports for, which indicates that it is related to the user/browser and not to the site itself.
The same can be seen with translators, extensions, security proxies, etc. I have given up trying to attribute the source of anything that is likely not caused by legitimate site content; I just filter it out, as sketched below.
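Concretely, the kind of triage I ended up with looks roughly like this (the patterns are illustrative guesses at common noise sources, not a vetted blocklist):

```ts
// Sketch only: discard CSP reports that are almost certainly injected by
// extensions/proxies/translators rather than by legitimate site content.
interface CspReport {
  "blocked-uri"?: string;
  "source-file"?: string;
}

const noisyPatterns: RegExp[] = [
  /^chrome-extension:/, // Chrome extensions
  /^moz-extension:/,    // Firefox extensions
  /pagespeed-mod\.com/, // the resource discussed above
  /scriptcdn\.net/,     // its loading source
];

function isLikelyInjectedNoise(report: CspReport): boolean {
  const fields = [report["blocked-uri"] ?? "", report["source-file"] ?? ""];
  return fields.some((field) =>
    noisyPatterns.some((pattern) => pattern.test(field))
  );
}

// Usage: drop the noise before it reaches your alerting.
// if (!isLikelyInjectedNoise(report)) persist(report);
```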
I'm working with Firefox and I'm getting a lot of 'Content Security Policy' warnings in the console,
including:
Content Security Policy: The page's settings blocked the loading of a resource at inline ("script-src").
and
Content Security Policy: Ignoring “'unsafe-inline'” within script-src or style-src: nonce-source or hash-source specified
I'm getting these warnings on almost every website. For example, I get a lot of warnings when I go to Gmail, and fewer here at Stack Overflow; some websites show less, and some show more.
I have recently started working with webpack and some more Node.js tools; could these be the source of the warnings?
What can I do to prevent it?
Is it a security issue?
Thank you! :)
If I understand your question correctly, it appears you're approaching this as a user rather than as a developer.
From the user point of view:
What can I do to prevent it?
Nothing, nor should you.
Is it a security issue?
No. Quite the opposite: it is security at work, protecting your browsing experience.
From the website developer point of view:
What can I do to prevent it?
Read up on the CSP rules put in place for your website(s) and adjust them so that they only allow what your website needs in order to work. This is a very broad topic.
I found the lack of clarity about which assets are blocked and why to be a major flaw in the Firefox console (v66): it didn't give enough specificity about which CSP rules were violated or which site assets were blocked. The Google Chrome console gave me this information and helped me refine my CSP to allow what needed to be allowed.
Is it a security issue?
Not directly. This is security at work, protecting the website visitor's browsing experience. Once the CSP is set up to allow the authorised parts of your website to work, the other violations flagged by the CSP mechanism can be ignored (as insecure/unsafe things that should be blocked).
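As for the second warning quoted above ("Ignoring 'unsafe-inline' ..."): that one is by design. When script-src or style-src contains a nonce-source or hash-source, conforming browsers ignore 'unsafe-inline' in that directive. A minimal sketch, assuming an Express-style server (names and paths are illustrative, not taken from your setup):

```ts
// Sketch only: a per-request nonce that intentionally coexists with
// 'unsafe-inline'.
import crypto from "node:crypto";
import express from "express";

const app = express();

app.get("/", (req, res) => {
  const nonce = crypto.randomBytes(16).toString("base64");
  // Because a nonce-source is present, conforming browsers ignore
  // 'unsafe-inline' -- producing exactly the Firefox warning quoted above.
  // Keeping 'unsafe-inline' is a deliberate fallback for old browsers
  // that don't understand nonces.
  res.setHeader(
    "Content-Security-Policy",
    `script-src 'nonce-${nonce}' 'unsafe-inline'`
  );
  res.send(`<script nonce="${nonce}">console.log("allowed");</script>`);
});

app.listen(3000);
```

So seeing that warning on well-run sites like Gmail usually means the site is doing the backwards-compatible thing on purpose, not that anything is broken.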
I have read several articles on feature detection and that it is more reliable than browser detection because browsers lie.
I couldn't find any information on why they lie. Does anyone know the reason why they would do that?
As far as I understand it, webmasters do browser sniffing to find the capabilities of a browser and limit what they send to it. If a browser lies about its capabilities, it will receive more from the webmaster. You can read more here:
http://farukat.es/journal/2011/02/499-lest-we-forget-or-how-i-learned-whats-so-bad-about-browser-sniffing
http://webaim.org/blog/user-agent-string-history/
The reason is simple:
Because web sites look at the user agent string and make assumptions about the browser, which are then invalid when the browser is updated to a new version.
This has been going on almost since the beginning of the web. Browser vendors don't want their new versions to break the web, so they tweak the UA string to fool the code on existing sites.
Ultimately, if everyone used the UA string responsibly and updated their sites whenever new browser versions come out, then browsers wouldn't need to lie. But you have to admit, that's asking quite a lot.
Feature detection works better because when a new browser version comes out with that feature, the detection will pick it up automatically, without either the browser or the site owner needing to do anything special.
Of course, there are times when feature detection doesn't work perfectly -- for example, if a feature exists but has bugs in a particular browser. In that case, yes, you may want to do browser detection as a fallback. But in most cases, feature detection is a much better option.
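To make the contrast concrete (illustrative only; the feature chosen here is arbitrary):

```ts
// Browser sniffing: parses the UA string and breaks whenever vendors
// change it (which, as above, they do on purpose).
const isOldIE = /MSIE [1-8]\./.test(navigator.userAgent);
if (isOldIE) {
  // serve the cut-down experience -- and silently misfire once the
  // UA string no longer matches
}

// Feature detection: asks the runtime directly, so a new browser version
// that adds the feature is picked up with no code changes.
if ("IntersectionObserver" in window) {
  // use IntersectionObserver for lazy loading
} else {
  // fall back to scroll/resize listeners
}
```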
Another, more modern reason is simply to avoid demands to install mobile apps (where product owners control what I can and can't do with the content. No thanks!).
Today Reddit started blocking the viewing of subreddits whenever it detects a mobile browser in the User-Agent, so I had to change mine just to be able to view content.
We're trying to analyze some attack vectors on one of our MVC apps, and we are considering writing some code to prevent users from accessing our site using a browser[version] that we consider to be too insecure.
For example, anything less than IE 7 is getting banned from our site.
Any browser [+version] that doesn't implement the HttpOnly cookie or has serious known holes/scripting issues would be on our watch list.
Without the obvious sarcastic comments about all versions of IE being totally insecure(!): which browsers and/or versions would you consider to be risky? IE tends to get all the bad press, but what about version 1 of Chrome or version 3 of Safari, etc.?
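For concreteness, the kind of gate we have in mind looks roughly like this -- Express-style middleware as a stand-in for our actual MVC filter (the regex is deliberately naive; real UA strings need a proper parsing library):

```ts
// Sketch only: reject requests from browser versions we consider too insecure.
import express from "express";

const app = express();

app.use((req, res, next) => {
  const ua = req.get("User-Agent") ?? "";
  const ieMatch = ua.match(/MSIE (\d+)\./);
  if (ieMatch && Number(ieMatch[1]) < 7) {
    // Anything older than IE 7 gets turned away, per the example above.
    res.status(403).send("Please upgrade to a supported browser.");
    return;
  }
  next();
});

app.listen(3000);
```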
Honestly, I still think the most insecure browser is IE. There are a lot of crashes and a lot of code-execution bugs for IE; in the last days of 2012, the "bluehole" 0-day bug was discovered being exploited in the wild. For Firefox and Chrome, though, I can't remember the last bug I've seen that successfully executed shellcode on Windows 7 with DEP and ASLR enabled; those days have almost passed. The Chrome sandbox in particular is really secure. The only case I've seen was VUPEN finding a 0-day vulnerability that executed code in Chrome, about a year ago.
You can see the list of vulnerabilities per year and per product, along with a classification of the bugs:
http://www.cvedetails.com/product/3264/Mozilla-Firefox.html?vendor_id=452
Change the product to Chrome, Internet Explorer, or Safari.
IE is also really vulnerable through third-party plugins; you can achieve code execution more easily on IE.
If you have a more specific question, please ask.
I was thinking of always adding the meta tag (`<meta http-equiv="X-UA-Compatible" content="chrome=1">`) to all of our websites. That will trigger Google Chrome Frame to load for users who already have it installed. I can see the benefits, but are there any concerns or facts that I should know before I do that?
Is testing in Google Chrome enough, or is testing in Google Chrome Frame explicitly required?
Thanks
Note: please do not mention the currently known "print" and "download" issues. I'm sure those will get fixed soon :)
The only argument against Chrome Frame that I have seen so far is Microsoft's: "Google Chrome Frame running as a plugin has doubled the attack area for malware and malicious scripts."
Also, you may run into problems with frames. If you have Chrome Frame on your page and someone has that page iframed on their site, you may run into some problems. More info:
http://groups.google.com/group/google-chrome-frame/browse_thread/thread/d5ffe442658bc60e/e6d7a4c1c179c931?lnk=gst&q=iframe
You should only need to test in Chrome Frame for (X)HTML, CSS, and JavaScript -- basic stuff. If you are using AJAX (while trying not to break the back button), or are worried about caching, cookies (accessed via JavaScript), or other potentially browser-specific interactions, I suggest testing on the IE+CF platform, at least until the CF team announces 100% interoperability between CF and IE.
Check out the CF Google group for more issues.
Are there any concerns or facts you should know? Yes: Not everyone has Google Chrome Frame installed.
You are adding a new user agent that you will need to test and debug against, without removing the need to test and debug the user experience for other browsers (notably plain IE by itself).
If you don't make the IE user experience equivalent to the Google Chrome experience, then you are alienating a significant percentage of users. Depending on your website and its expected users, the impact of this may range from undesirable to unacceptable. If you do make the user experience equivalent, then there is no point in adding the meta tag.