Do browser plug-ins, such as the Yahoo toolbar or others, have the ability to set cookies on multiple domains as the user browses the web? Does the browser expose the necessary access to do this to a plug-in? If this varies across browsers, that would be helpful to know as well.
Thanks!
Cookies are stored in files, and real plugins (i.e. ones using NPAPI rather than the browser's addon/extension engine) can read and write files. So it's possible to do this for any browser that way, although it's not really straightforward.
Firefox exposes cookies to addons as well; there are cookie editor addons that can edit cookies for any site.
Chrome/Chromium allows setting cookies through "content scripts" that run in the context of a page (any page). That's only in the beta branch so far, but it should reach stable soon. The downside is that the user might have to visit the site for it to work (you could fake that using iframes).
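For what it's worth, here is a minimal sketch of the content-script approach. The file name, cookie name and value are made up, and it assumes a manifest that injects the script into every page via the standard "content_scripts" entry with "matches": ["<all_urls>"]:

    // content_script.js - a hypothetical content script; the manifest injects
    // it into every page the user visits. Because it runs in the context of
    // that page, the cookie it writes is scoped to that page's domain, just
    // as if the site had set it itself.
    (function () {
      var oneYear = 60 * 60 * 24 * 365; // max-age is in seconds

      // Placeholder name/value for whatever the toolbar wants to track.
      document.cookie = 'toolbar_visit=1; path=/; max-age=' + oneYear;
    })();

The catch mentioned above applies: the script only runs (and so the cookie only gets set) on domains the user actually visits.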
No idea about Opera.
The only one I have found that works quite well for creating/updating/viewing cookies is Firecookie.
Apart from all the other typical security best practices, I'm wondering about this because I recently read some articles about how browser extensions can spy on everything their users do, and therefore shouldn't be trusted.
So, in order to give users an additional layer of protection, should I process all user credentials and sensitive info inside an iframe in my webpages?
Can content inside a sandboxed iframe be read/spied on by browser extensions?
Yes
Could I use an iframe to secure user credentials?
Quick answer, no.
When a user installs a Chrome extension, the extension can do basically anything on the website to access the user's credentials. The extension also has access to the iframes that the page generates.
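To make that concrete, here is a hypothetical sketch of what an extension's content script could do once its manifest declares "matches": ["<all_urls>"] and "all_frames": true, which injects it into the top page and into the iframes the page embeds:

    // spy.js - hypothetical content script, injected into the page and its iframes.
    document.addEventListener('input', function (event) {
      var field = event.target;
      if (field.tagName === 'INPUT' && field.type === 'password') {
        // A malicious extension could send this value off to its own server.
        console.log('captured credential:', field.value);
      }
    }, true);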
My proposed solutions to overcome these two issues and keep the website feeling "secure" are the following:
If the end goal is to secure the content your user will put into the website, and you by no means want to let the user enter content while other kinds of extensions are running on the page, you can show some kind of pop-up that blocks access until the user is visiting the website without extensions.
Another solution you could propose to the user is to go into incognito mode, since there are ways to disallow extensions in incognito without forcing the user to uninstall all of the extensions in their browser. This should also make fewer users leave your page: forcing someone to uninstall all of their extensions might drive them away if the reason isn't clear enough to them.
If you do know which extensions should be blocked because they are harmful or known to have some kind of shady behaviour, you can check whether the user has them installed (see Checking if user has a certain extension installed, and the sketch below) and then show a message saying they can't continue until they uninstall those extensions.
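One common way to do that check is to probe for a file the extension exposes as a web-accessible resource. The extension ID and file name below are placeholders and have to come from the specific extension you want to detect; this only works for extensions that actually expose such a resource:

    // Probe for a known extension by loading one of its web-accessible resources.
    // extensionId and resourcePath are placeholders for the real values.
    function detectExtension(extensionId, resourcePath, callback) {
      var probe = new Image();
      probe.onload = function () { callback(true); };   // resource loaded: extension installed
      probe.onerror = function () { callback(false); }; // blocked or missing: not detected
      probe.src = 'chrome-extension://' + extensionId + '/' + resourcePath;
    }

    detectExtension('abcdefghijklmnopabcdefghijklmnop', 'icon.png', function (installed) {
      if (installed) {
        // Show the "please uninstall this extension to continue" message here.
      }
    });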
I have read several articles on feature detection saying that it is more reliable than browser detection because browsers lie.
I couldn't find any information on why they lie. Does anyone know the reason why they would do that?
As far as I understand it, webmasters do browser sniffing to find the capabilities of a browser and limit what they send to it. If a browser lies about its capabilities, it will receive more from the webmaster. You can read more here:
http://farukat.es/journal/2011/02/499-lest-we-forget-or-how-i-learned-whats-so-bad-about-browser-sniffing
http://webaim.org/blog/user-agent-string-history/
The reason is simple:
Because web sites look at the user agent string and make assumptions about the browser, which are then invalid when the browser is updated to a new version.
This has been going on almost since the beginning of the web. Browser vendors don't want their new versions to break the web, so they tweak the UA string to fool the code on existing sites.
Ultimately, if everyone used the UA string responsibly and updated their sites whenever new browser versions come out, then browsers wouldn't need to lie. But you have to admit, that's asking quite a lot.
Feature detection works better because when a new browser version comes out with that feature, the detection will pick it up automatically, without either the browser or the site owner needing to do anything special.
Of course, there are times when feature detection doesn't work perfectly, e.g. when a feature exists but has bugs in a particular browser. In that case, yes, you may want to do browser detection as a fallback. But in most cases, feature detection is a much better option.
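A small illustration of the difference (fetch is just an example feature, and the URL is a placeholder):

    // Browser sniffing: breaks whenever a UA string doesn't match the assumption,
    // which is exactly why vendors started padding and faking their UA strings.
    if (navigator.userAgent.indexOf('Chrome') !== -1) {
      // assumes fetch() exists; wrong for old versions, and misses every
      // non-Chrome browser that also supports it
    }

    // Feature detection: asks about the capability itself, so new browsers
    // and new versions are picked up automatically.
    if (typeof window.fetch === 'function') {
      fetch('/api/data').then(function (response) { return response.json(); });
    } else {
      var xhr = new XMLHttpRequest(); // fallback for browsers without fetch
      xhr.open('GET', '/api/data');
      xhr.send();
    }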
Another, more modern reason is simply to avoid demands to install mobile apps (where product owners control what I can and can't do with the content. No thanks!).
Today Reddit started blocking subreddits from being viewed when it detects a mobile browser in the User-Agent, so I had to change mine just to be able to view the content.
I successfully (with much frustration) got our C# embedded signing to work on our site; however, that was before I tested with Safari on a Mac. Safari does not allow third-party sites to open in an iframe without already having a cookie stored for that site. If you either open the site beforehand or allow all cookies, the document will show embedded. However, even after messing around with that, the redirection after completion is not working: the "please wait" popup does not redirect back to my site. I am looking for any embedded solution that supports Mac.
The process works great on Windows, but it does not work in Safari on Mac and is intermittent with Firefox and Chrome on Mac.
I am looking for any non-iFrame embedded solutions that I could implement that should work on all platforms and browsers.
Since you have embedding working in terms of generating a URL token, it's up to you how to access that URL. We've seen developers write their own viewer programs where they have complete control over the iframe and can do whatever they like with it; another solution we've seen is to use a web browser control.
see this SO link
The only workaround that I know of is to pop up a new browser window. I know there is work being done to make it work without cookies, but at this time the new browser window is your only choice.
Sorry about that.
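If it helps, here is a rough sketch of the new-window approach, assuming signingUrl is the recipient-view URL your C# code already generates:

    // Open the embedded-signing (recipient view) URL in a new top-level window,
    // which sidesteps the third-party-cookie restriction that blocks the iframe
    // in Safari on Mac.
    function openSigningWindow(signingUrl) {
      var win = window.open(signingUrl, 'signing', 'width=1000,height=800');

      // The post-signing redirect lands in the popup, not in this page, so poll
      // for the popup being closed and then refresh to pick up the envelope status.
      var timer = setInterval(function () {
        if (win && win.closed) {
          clearInterval(timer);
          window.location.reload();
        }
      }, 1000);
    }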
I was thinking of always adding the meta tag on all of our websites.
That will trigger Google Chrome Frame to load for users who already have it installed. I can see the benefits, but are there any concerns or facts that I should know about before I do that?
Is testing in Google Chrome enough, or is testing in Google Chrome Frame explicitly required?
Thanks
Note: please do not mention the currently known "print" and "download" issues. I'm sure those will get fixed soon :)
The only argument against Chrome Frame that I have seen so far is Microsoft's: "Google Chrome Frame running as a plugin has doubled the attack area for malware and malicious scripts."
Also, you may run into problems with frames. If you have Chrome Frame on your page and someone has that page iframed on their site, you may run into some problems. More info:
http://groups.google.com/group/google-chrome-frame/browse_thread/thread/d5ffe442658bc60e/e6d7a4c1c179c931?lnk=gst&q=iframe
You should only need to test in Chrome Frame for (X)HTML, CSS, and JavaScript... the basic stuff. If you are using AJAX (while trying not to break the back button), or are worried about caching, cookies (accessed via JavaScript), or other potentially browser-specific interactions, I suggest testing on the IE+CF platform, at least until the CF team announces 100% interoperability between CF and IE.
Check out the CF Google group for more issues.
Are there any concerns or facts you should know? Yes: Not everyone has Google Chrome Frame installed.
You are adding a new user agent that you will need to test and debug against, without removing the need to test and debug the user experience for other browsers (notably plain IE by itself).
If you don't make the IE user experience equivalent to the Google Chrome experience, then you are alienating a significant percentage of users. Depending on your website and its expected users, the impact of this may range from undesirable to unacceptable. If you do make the user experience equivalent, then there is no point in adding the meta tag.
Is there any good browser-based WebDAV client? If not, is it possible to make one?
Look at the AjaxFileBrowser from ITHit. Pretty slick, and it has Firefox & Chrome PUT support for uploading, i.e. drag-and-drop from your desktop to the browser. They have a fully functional demo site up at http://www.ajaxbrowser.com.
There's a plugin for Firefox which handles WebDAV.
Webfolders is a Firefox extension that gives you the ability to view the contents of WebDAV servers in the browser and use the full functionality of the WebDAV protocol.
Depends on what you expect the client to do, and whether you're looking for a cross-browser "web application", or a browser extension.
The main issues with doing this in a "web application" (as opposed to a browser extension) are (1) the lack of binary data support in JavaScript, and (2) the lack of access to the local file system (which of course is a security feature).
There is webdav-js which can be enabled as a bookmarklet or served by the WebDAV server itself as an HTML page.
It supports the regular listing of files and directories, file upload, directory creation, renaming, as well as in-page display of images and other media.
If by browser-based you mean that it runs in HTML (i.e. you don't want your users to install a plugin), then the answer is partly yes and mostly no.
Partly yes, because I have built and used one. It uses the jQuery jtree plugin to display folders, and selecting a folder node populates a file list in the right-hand panel. Panels are done with another jQuery plugin, and the file list is made dynamic with the jQuery DataTables plugin.
But I think for you the answer is probably "no". That's because for the browser to use WebDAV it must use WebDAV "methods" like PROPFIND and MKCOL. These methods just aren't supported in most browsers, so your JavaScript can't use them directly. I have a server-side mapping in my WebDAV server project which allows my JavaScript to use normal GET and POST methods; these requests are transformed on the server into WebDAV methods.
I said "probably no" for you since this serve side mapping isnt standard, its a part of milton. But if you happen to use milton, or you can use milton, then its all good.
Try SMEStorage.com. They turn any WebDAV back-end into a personal cloud file solution. As well as a rich browser client, there are desktop clients for Mac, Windows, and Linux, and mobile clients for Android, iOS, Windows Phone, and BlackBerry.