How do I reliably detect if a browser session is in Incognito mode / InPrivate browsing?

I want to be able to detect if a user is using Incognito mode or other styles of temporary, private sessions.
I've been using detectIncognito, but it relies on measuring the storage/heap quota, and I don't believe that signal will remain accessible for long due to security concerns.
I also considered using Google Analytics to check if a cookie is newly requested from a previous user's IP, but that might be overcomplicating things, and it relies on the cookie being allowed.
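For reference, the storage-quota heuristic the question alludes to looks roughly like the sketch below. This is only an illustration, not the detectIncognito source; the 120 MB threshold is an assumption, and the signal is known to be unreliable across browsers and versions.

async function looksLikeIncognito() {
  // Signal unavailable in this browser: report "not detected" rather than guessing.
  if (!navigator.storage || !navigator.storage.estimate) return false;
  const { quota } = await navigator.storage.estimate();
  // Illustrative threshold only: some Chromium builds report a much smaller
  // temporary-storage quota in private windows than in a normal profile.
  return quota < 120 * 1024 * 1024;
}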

Related

Detecting Private Browsing mode: 2019 edition

It used to be the case, as described in this answer from five years ago, that web sites could not reliably tell whether a client's browser was in Incognito Mode. However, in the past few months, I've started encountering sites which are able to throw up a banner that says, "hey, you're in Private Browsing mode, so we won't show you any content."
I have two questions, which are opposite sides of the same coin:
As a web developer in 2019, how would I construct a reliable check for a user's Private Browsing status?
As a privacy-conscious web user in 2019, who might like to keep the meta-information of his privacy-consciousness private as well, how could I reliably generate a first-time-visitor experience from a site that is desperate to track me?
In pre-Incognito days I would have accomplished #2 by using a "clean profile" to visit a site that I didn't want to follow me around. User profiles are apparently still available in Firefox, though I suspect they probably don't protect against browser fingerprinting. But I'm not sure whether that is a good summary of my threat model: my interest is mostly in opting out of the advertisement-driven data-mining ecosystem, without being treated differently for doing so.
I'll leave the main question to others who know how each browser's Private mode may differ from default. I do use Private modes extensively, but when I encounter a page that won't work, I simply use a clean non-private window, then clear all cookies and other stored state again afterwards.
You also mention fingerprinting, which is more insidious. Often it's based on collection by a client-side script, which is detectable but only somewhat defensible in practice. But server-detectable characteristics can also provide good enough signals for cross-site, even cross-device, correlation.
Fingerprinting is very difficult to thwart, but I recommend the following:
Use Tor for as much casual browsing as is practical.
Use multiple browsers, with your activity partitioned across them in a disciplined way.
Use a common browser with the best fingerprinting protections, or at least the most common browser configuration for your platform(s).
Keep your browsers updated, and never install Java or Flash.
Change your IP address(es) often, change your window size often, and clear all cookies and other stored state often.
Use a common platform (machine + display size + OS) if possible.
Note that loading your browser up with privacy extensions is quite likely to make it look more unique, not less.
There are also a few resources out there that list fingerprinting servers/domains, and you can block those on your machine, at your DNS, on your router, or wherever is practical.
Keep in mind that Panopticlick and sites like it suffer from selection bias, and also combine all platforms, obscuring how unique your browser is compared to other browsers on the same platform (it's hard to change your platform, but at least you can try to make your browser look more like others used on your platform).

Securing a Browser Helper Object

I'm currently in the process of building a browser helper object.
One of the things the BHO has to do is to make cross-site requests that bypass the cross-domain policy.
For this, I'm exposing a __MyBHONameSpace.Request method that uses WebClient internally.
However, it has occurred to me that anyone using my BHO now has a CSRF vulnerability everywhere, since a smart attacker can now make arbitrary requests from my clients' computers.
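To make the risk concrete, any page's script could call the injected method. The snippet below is a hypothetical illustration; only the method name comes from the description above, so the argument shape and URL are assumptions.

// Hypothetical abuse from a malicious page's script (illustrative URL and
// argument shape; only the method name is taken from the question).
if (window.__MyBHONameSpace) {
  // A request that ignores the same-origin policy and could target an intranet host.
  window.__MyBHONameSpace.Request('http://intranet.example/admin?action=delete&id=1');
}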
Is there any clever way to mitigate this?
The only way to fully protect against such attacks is to separate the execution context of the page's JavaScript and your extension's JavaScript code.
When I researched this issue, I found that Internet Explorer does provide a way to create such a context, namely via IActiveScript. I have not implemented this solution though, for the following reasons:
Lack of documentation / examples that combine IActiveScript with BHOs.
Lack of certainty about the future (e.g. https://stackoverflow.com/a/17581825).
Possible performance implications (IE is not known for its superb performance; how would two JavaScript engine instances per page affect browsing speed?).
Cost of maintenance: I already had an existing solution which was working well, based on very reasonable assumptions. Because I'm not certain whether the alternative method (using IActiveScript) would be bug-free and future-proof (see 2), I decided to drop the idea.
What I have done instead is:
Accept that very determined attackers will be able to access (part of) my extension's functionality.
@Benjamin asked whether access to a persistent storage API would pose a threat to the user's privacy. I consider this risk to be acceptable, because a storage quota is enforced, all stored data is validated before it's used, and it doesn't give an attacker any more tools to attack the user. If an attacker wants to track the user via persistent storage, they can just use localStorage on some domain and communicate with that domain via an <iframe> using the postMessage API (a sketch of this technique appears at the end of this answer). This method works across all browsers, not just IE with my BHO installed, so it is unlikely that any attacker would spend time reverse-engineering my BHO in order to use the API when there's a method that already works in all modern browsers (IE8+).
Restrict the functionality of the extension:
The extension should only be activated on pages where it needs to be activated. This greatly reduces the attack surface, because an attacker first has to get their code running on the specific trusted page (e.g. https://trusted.example.com) rather than on any arbitrary site they control.
Create and enforce whitelisted URLs for cross-domain access at extension level (in native code (e.g. C++) inside the BHO).
For sensitive APIs, limit its exposure to a very small set of trusted URLs (again, not in JavaScript, but in native code).
The part of the extension that handles the cross-domain functionality does not share any state with Internet Explorer. Cookies and authorization headers are stripped from the request and response. So, even if an attacker manages to get access to my API, they cannot impersonate the user at some other website, because of missing session information.
This does not protect against sites that use the IP address of the requester for authentication (such as intranet sites or routers), but that attack vector is already covered by a correct implementation of the whitelist (see step 2).
"Enforce in native code" does not mean "hard-code in native code". You can still serve updates that include metadata and the JavaScript code. MSVC++ (2010) supports ECMAScript-style regular expressions via <regex>, which makes implementing a regex-based whitelist quite easy.
If you want to go ahead and use IActiveScript, you can find sample code in the source code of ceee, Gears (both discontinued) or any other project that attempts to enhance the scripting environment of IE.
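As for the iframe/postMessage tracking alternative mentioned above, a sketch (illustrative origins and names only, not code from the BHO) looks like this:

// On any embedding site: load a hidden iframe from tracker.example and ask it for an id.
const frame = document.createElement('iframe');
frame.style.display = 'none';
frame.src = 'https://tracker.example/store.html';
document.body.appendChild(frame);
frame.onload = function () {
  frame.contentWindow.postMessage({ get: 'visitorId' }, 'https://tracker.example');
};
window.addEventListener('message', function (e) {
  if (e.origin === 'https://tracker.example') {
    console.log('persistent id:', e.data.visitorId);
  }
});

// On https://tracker.example/store.html: persist the id in that origin's localStorage
// so every embedding site gets the same identifier back.
window.addEventListener('message', function (e) {
  let id = localStorage.getItem('visitorId');
  if (!id) {
    id = Math.random().toString(36).slice(2);
    localStorage.setItem('visitorId', id);
  }
  e.source.postMessage({ visitorId: id }, e.origin);
});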

How to prevent multiple browser windows from sharing the same session in Node.js

Is there any way to prevent multiple browser windows from sharing the same session, using Connect and a Redis store? I have gone through the link Working with Sessions in Express.js, but that is implemented with Express; I want to achieve the same goal with Connect. Any help on this would be really appreciated.
Most modern browsers have a shared pool of cookies, which is how Express/Connect typically manages sessions. Different browser windows are just using the same cookies, so the issue lies with how browsers work.
I would recommend one of two options:
Use the privacy mode (incognito in Chrome) in your browser to get a second pool of cookies.
Use a second browser (have both Firefox and Chrome installed)
It would help if you were to specify more about what you are trying to do, and maybe even why. By multiple browser windows I assume you mean windows of the same browser. As @Ivan Plenty said, browser windows/tabs all share cookie information, and that's pretty much standard. The only way to get away from that is to use the various incognito modes of each browser.
That said, if you want a way to specifically create sessions per window, or to distinguish between windows, then you could employ a scheme of passing a CSRF-style token around. Normally this is used for security reasons, but you could tailor it to your needs. Have a look here for Connect's csrf.
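A rough sketch of that per-window token idea follows. It is illustrative only: it assumes some session middleware has already populated req.session, and the win query parameter and property names are invented for this example.

const crypto = require('crypto');
const { parse } = require('url');

// Namespace per-window data inside the shared session, keyed by a token that
// the page carries in its URLs or forms (never in a cookie, which is shared).
function perWindowState(req, res, next) {
  const token = parse(req.url, true).query.win;
  req.session.windows = req.session.windows || {};
  if (token && req.session.windows[token]) {
    req.windowState = req.session.windows[token];   // request from a known window
  } else {
    const fresh = crypto.randomBytes(16).toString('hex');
    req.session.windows[fresh] = {};
    req.windowState = req.session.windows[fresh];
    req.windowToken = fresh;   // the rendered page should echo this back, e.g. ?win=<token>
  }
  next();
}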

Browsers are requesting crossdomain.xml & /-7890*sfxd*0*sfxd*0 on my site

Just recently I have seen multiple sessions on my site that are repeatedly requesting /crossdomain.xml & /-7890*sfxd*0*sfxd*0. We have had feedback from some of the folks behind these sessions that they cannot browse the site correctly. Is anyone aware of what might be causing these requests? We were thinking either virus or some toolbar.
The only common item we have seen on the requests is that they all are some version of IE (7, 8 or 9).
Independently of the nature of your site/application, ...
... the request of the /crossdomain.xml policy file is indicative of a [typically Adobe Flash, Silverlight, JavaFX or the like] application running on the client workstation and attempting to assert whether your site allows the application to access your site on behalf of the user on said workstation. This assertion of the crossdomain policy is a security feature of the underlying "sandboxed" environment (Flash Player, Silverlight, etc.) aimed at protecting the user of the workstation. That is because when accessing third party sites "on behalf" of the user, the application gains access to whatever information these sites will provide in the context of the various sessions or cookies the user may have already started/obtained.
... the request of /-7890*sfxd*0*sfxd*0 is a hint that the client (be it the application mentioned above, some unrelated http reference, web browser plug-in or yet some other logic) is thinking that your site is either superfish.com, some online store affiliated with superfish.com or one of the many sites that send traffic to superfish.com for the purpose of sharing revenue.
Now... these two kinds of request received by your site may well be unrelated, even though they originate from the same workstation in apparent simultaneity. For example, it could just be that the crossdomain policy assertion is from a web application which legitimately wishes to access some service on your site, while the "sfxd" request comes from some plug-in in the workstation's web browser (e.g. WindowsShopper or, alas, a slew of other plug-ins) which somehow triggers its requests based on whatever images the browser receives.
The fact that some of the clients which make these requests are not able to browse your site correctly (whatever that means...) could further indicate that some client-side logic (JavaScript, I suspect) on these clients gets the root URL of its underlying application/affiliates confused with that of your site. But that's just a guess; there's not enough context about your site to give more precise hints.
A few suggestions to move forward:
Decide whether your site can and should allow cross-domain access, and to whom, and remove or edit your site's crossdomain.xml file accordingly. Too many sites seem to just put <allow-access-from domain="*"/> in their crossdomain policy file for no good reason (and hence put their users at risk); a more restrictive example appears after this list. This first suggestion will not solve the problem at hand, but I couldn't resist the cautionary warning.
Ask one of these users who "cannot access your site properly" to disable some of the plug-ins (aka add-ons) in their web browser and/or to use an alternate web browser, and see if that improves the situation. Disabling plug-ins in a web browser is usually very easy. To speed up the discovery, you may suggest a dichotomy approach: disable several plug-ins at once, then continue the experiment with half of them or with the ones still enabled, depending on whether the site becomes accessible.
If your application serves ads from third-party sites, temporarily disable these ads and see if that helps the users who "cannot access your site properly".
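For the first suggestion, a restrictive crossdomain.xml that only grants access to one named partner domain (www.example.com below is a placeholder for whatever domain actually needs access) would look roughly like this:

<?xml version="1.0"?>
<cross-domain-policy>
  <!-- Only this named domain may make cross-domain requests; avoid domain="*" -->
  <allow-access-from domain="www.example.com" secure="true"/>
</cross-domain-policy>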

Will referencing a website image on a local network compromise network security?

I manage a website for an organization that has a network where several hundred users will access it in any given 15-minute period. When a user opens a browser, the organization's homepage is displayed. This homepage has several images on it. To save bandwidth to the remote web server (which is not at all affiliated with the local network), the index file checks the IP address of the requester and, if it is coming from within the network, displays a modified webpage where the images are pulled from a local shared drive on the network.
Essentially, the code is this:
<image src="file:\\\D:/hp/picture.jpg" />
I've been told by the network administrator that this is unacceptable because of the great security risk it poses and that the folder must be deleted immediately.
I'm pretty sure it's not a risk: it's the browser, not the remote server, that requests the file from the local network, and the picture can only be displayed if the request comes from inside the network, where all users already have access to the drive in question anyway.
Is there something I am overlooking here? Can this single image tag cause a "great security risk" to the network?
Some background to prevent the obvious questions that will arise from this:
Browser caches are cleared every time a new user logs on to a machine. New users log in roughly every 15 minutes on over 500 machines.
I've requested to have a proxy cache server set up globally for the network. The network administrators flat out refused to do this.
Hosting from within the network is out of the question (again, by decree of the network administrator).
I have no control over the network and no part in the decisions that are made.
Every user has read access to this shared drive and they all have write access to at least some of the 100 or so directories within it.
The network is not remotely accessible by remote users (you must be logged in to a machine physically plugged into the network to access the network or any drive on it)
Thanks in advance for your help on this.
Why don't you use the very same server that hosts the shared directory to serve the images over HTTP, and just use:
<image src="http://local-server/images/hp/picture.jpg" />
You already have a server, it's a matter of using the proper software.
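A minimal sketch of that idea is below, purely for illustration; in practice the file server would more likely run IIS, Apache, or nginx pointed at the same folder, and the path and port here are assumptions.

const http = require('http');
const fs = require('fs');
const path = require('path');
const { parse } = require('url');

const IMAGE_ROOT = 'D:/hp'; // the folder currently exposed as a network share

http.createServer(function (req, res) {
  // Resolve the requested file inside IMAGE_ROOT and refuse anything that escapes it.
  const requested = path.normalize(path.join(IMAGE_ROOT, decodeURIComponent(parse(req.url).pathname)));
  if (!requested.startsWith(path.normalize(IMAGE_ROOT))) {
    res.writeHead(403);
    return res.end('Forbidden');
  }
  fs.readFile(requested, function (err, data) {
    if (err) {
      res.writeHead(404);
      return res.end('Not found');
    }
    res.writeHead(200, { 'Content-Type': 'image/jpeg' }); // simplification: assumes JPEG images
    res.end(data);
  });
}).listen(8080); // pages would then reference http://local-server:8080/picture.jpg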
Regarding another of your points: it might be dangerous. You're allowing your browser to access local files requested by remote websites. I can't think of any exploits off the top of my head, but I'd rather avoid this sort of uncommon practice. You should not do something until you're sure it's safe (for now you're merely unsure whether it's unsafe).
Is the tag itself a "great security risk"? Of course not -- any site can do the same (as you said, IE8 "happily opens everything you ask"). And therein lies the risk: should any website be allowed to coerce the client into opening arbitrary network files?
From a security standpoint, the problem is likely not the image tag itself, but rather that this functionality requires that Internet sites be allowed to access local resources (over the file: protocol) in the client's security context. Even with the same-origin policy, this is potentially dangerous, and consequently modern browsers do not allow it.
Beginning with IE9, Microsoft disallows accessing the file: protocol from sites in the Internet zone, and "strongly discourages" disabling this restriction. Other modern browsers impose similar restrictions.
Presumably, the network administrators will eventually need to upgrade from IE8. Upgrading to a newer browser will prevent, by default, the locally-accessed images from loading. So the organization then ultimately has a few choices:
Turn off this security setting, allowing any website to reference local content
Not upgrade, and use IE8 in perpetuity
Run the website in the "Trusted Zone", which by default will permit the site to do anything the user can do (start processes, delete files, read data, etc.).
Develop custom software (BHOs, custom applications, HTAs, etc.) or use COTS software to load the images locally, bypassing the default IE behavior.
Accept the usability impact associated with not showing the local images
Option (1) is clearly a security issue, since it requires disabling a security setting that prevents non-local websites from reading local content. Option (2) presents its own security issues, since older browsers lack some of the security features of newer browsers (like blocking file: protocol access from the Internet zone). Option (3) requires an administrative configuration change, violates least-access principles, and (particularly if the site lacks server verification via SSL) opens the organization to a new and potentially devastating attack vector.
That leaves Option 4 -- development and deployment of software for this purpose -- and Option 5 -- accepting that the local images will not be displayed.
In the end, the administrators will likely have a strong security interest in moving away from IE8, and an implementation that depends on a behavior new browsers do not support can impede such an upgrade, which could reasonably be seen as contradicting the security interests of the organization.