How to check whether a website complies with a do-not-track request? - browser

I think "do-not-track" is a very old topic and maybe no one is talking about it anymore but just out of curiosity, does anyone know how to verify if a website follows (or violates) the do-not-track setting in your web browser?
While most major web browser has implemeted DNT, I don't see much support for the Tk header. So if the web server doesn't respond with the tk header, how would one know for sure if their dnt request is even complied or not.
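For what it's worth, the only machine-readable answer a server can give is that Tk response header, so the most you can do programmatically is check whether one comes back at all. A minimal sketch (Node.js; example.com is just a placeholder target, and most servers will simply send no Tk header):

    // Send a request with "DNT: 1" and look for a "Tk" response header.
    const https = require('https');

    const req = https.request(
      { host: 'example.com', path: '/', method: 'GET', headers: { DNT: '1' } },
      (res) => {
        const tk = res.headers['tk']; // Node lower-cases header names
        if (tk) {
          // Per the Tracking Preference Expression spec: "N" = not tracking, "T" = tracking
          console.log('Tk header present: ' + tk);
        } else {
          console.log('No Tk header - the server gives no machine-readable answer.');
        }
        res.resume(); // drain the body so the connection closes
      }
    );
    req.on('error', (err) => console.error(err));
    req.end();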

DNT was originally proposed by the ad networks and they promised to honour its setting, secretly hoping that nobody would enable it. Browser makers then promptly implemented it, defaulted to on. Ad networks whined "not fair" and then went back on their promises and refused to honour it. It has therefore mostly been ignored (though my own services pay attention to it).
Most annoyingly, Safari dropped support for DNT altogether, after being one of the first to adopt it. This was a short-lived victory for the ad networks; Apple has just got their own back by implementing their privacy-first approach in iOS, which takes control away from ad networks completely.
One of the fundamental problems is that you can't tell if a site honours it or not. As a result, while the Tk header is well intentioned, it's entirely dependent on the honesty of the provider, so it has about as much hope of ever being implemented faithfully as RFC3514.

Related

Detecting Private Browsing mode: 2019 edition

It used to be the case, as described in this answer from five years ago, that web sites could not reliably tell whether a client's browser was in Incognito Mode. However, in the past few months, I've started encountering sites which are able to throw up a banner that says, "hey, you're in Private Browsing mode, so we won't show you any content."
I have two questions, which are opposite sides of the same coin:
As a web developer in 2019, how would I construct a reliable check for a user's Private Browsing status?
As a privacy-conscious web user in 2019, who might like to keep the meta-information of his privacy-consciousness private as well, how could I reliably generate a first-time-visitor experience from a site that is desperate to track me?
In pre-Incognito days I would have accomplished #2 by using a "clean profile" to visit a site that I didn't want to follow me around. User profiles are apparently still in Firefox, though I suspect they probably don't protect against browser fingerprinting. But I'm not sure whether that is a good summary of my threat model --- my interest is mostly in opting out of the advertisement-driven data-mining ecosystem, without being treated differently for doing so.
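(As a point of reference for question #1: one heuristic that circulated around 2019, sketched below, relied on Chrome's Incognito mode reporting a much smaller temporary-storage quota than a normal profile. Chrome has since reworked this, so treat it purely as an illustration of how such checks were built, not as a currently reliable method.)

    // Heuristic sketch: Incognito profiles used to report a sharply capped storage quota.
    if (navigator.storage && navigator.storage.estimate) {
      navigator.storage.estimate().then(({ quota }) => {
        // ~120 MB was a commonly used threshold; normal profiles report far more.
        const likelyPrivate = quota < 120 * 1024 * 1024;
        console.log(likelyPrivate ? 'Probably private browsing' : 'Probably a normal profile');
      });
    }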
I'll leave the main question to others who know how each browser's Private mode may differ from default. I do use Private modes extensively, but when I encounter a page that won't work, I simply use a clean non-private window, then clear all cookies and other stored state again afterwards.
You also mention fingerprinting, which is more insidious. Often it's based on collection by a client-side script, which is detectable but only somewhat defendable against in practice. But server-detectable characteristics can also provide good enough signals for cross-site, even cross-device, correlation.
Fingerprinting is very difficult to thwart, but I recommend: using Tor for as much casual browsing as practical; using multiple browsers with your activity partitioned across them in a disciplined way; using a common browser with the best fingerprinting protections, or at least the most common browser configuration for your platform(s); keeping your browsers updated and never installing Java or Flash; changing your IP address(es) often; changing your window size often; and clearing all cookies and other stored state often. Use a common platform (machine + display size + OS) if possible. Making your browser more unique by loading it up with privacy extensions is quite likely to make you look more unique. There are also a few resources out there that list fingerprinting servers / domains, and you can block those in your machine, DNS, router, or wherever practical.
Keep in mind that Panopticlick and sites like it suffer from selection bias, and also combine all platforms, obscuring how unique your browser is compared to other browsers on the same platform (it's hard to change your platform, but at least you can try to make your browser look more like others used on your platform).

Why do browsers still spoof user agents?

I know that browsers originally spoofed user agents in order to allow for feature detection. But I am wondering why they still do so. I don't think user agent spoofing has a place in the modern era of standards compliance, which is basically a browser nirvana for web developers compared to the situation during the infancy of the web.
Someone will probably say that it's for backwards compatibility with all the old code out there. Is that the only reason? After all this time I would think browser vendors would be looking beyond those sites with old code. Is this being worked on, or are these user agents just forgotten relics from tougher times for browsers?
Additionally, most feature detection these days seems to be done with JavaScript, which makes part of the feature detection use case for a user agent irrelevant.
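(For illustration, the JavaScript feature detection the question refers to looks roughly like this, with geolocation picked arbitrarily as the capability being tested:)

    // Test for the capability itself instead of parsing the user-agent string.
    if ('geolocation' in navigator) {
      // Safe to call navigator.geolocation.getCurrentPosition(...) here.
    } else {
      // Hide or replace the feature; no user-agent sniffing involved.
    }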
Because nowadays we have many possible user agents, like the iPhone 5s, Galaxy S III, iPad 2 and so on, it is sometimes necessary to handle site features in different ways according to device-specific rules.
Think of a scenario where the user requirements look like this:
The site should offer chat with a customer only on tablets.
On mobile it should not be available, because the screen is smaller.
Thus, because we have multiple devices, we sometimes have to handle them in different ways to give the user a great experience.
I'm not aware that they do. Some smaller browsers' user agents might not be recognized by a server, so they announce themselves as one of the major ones so they don't get ignored or treated as malicious; but otherwise, you are right: there is no need to do so, and the major ones don't.
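To see how much legacy baggage survives, it helps to look at an actual string. A sketch (the Chrome version number is illustrative; in a browser you would read navigator.userAgent instead):

    // A desktop Chrome user-agent string still carries Mozilla, WebKit,
    // Gecko and Safari tokens purely so legacy server-side sniffing keeps working.
    const sampleUA =
      'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 ' +
      '(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36';

    console.log(sampleUA.includes('Mozilla/5.0')); // true - a relic of the browser wars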

I want to use security through obscurity for the admin interface of a simple website. Can it be a problem?

For the sake of simplicity I want to use admin links like this for a site:
http://sitename.com/somegibberish.php?othergibberish=...
So the actual URL and the parameter would be some completely random string which only I would know.
I know security through obscurity is generally a bad idea, but is it a realistic threat that someone could find out the URL? Don't take the employees of the hosting company or eavesdroppers on the line into account, because it is a toy site, not something important, and the hosting company doesn't give me secure FTP anyway, so I'm only concerned about normal visitors.
Is there a way for someone to find this URL? It wouldn't be anywhere on the web, so Google won't know about it either. I hope, at least. :)
Any other hole in my scheme which I don't see?
Well, if you could guarantee only you would ever know it, it would work. Unfortunately, even ignoring malicious men in the middle, there are many ways it can leak out...
It will appear in the access logs of your provider, which might end up on Google (and are certainly read by the hosting admins)
It's in your browsing history. Plugins, extensions etc. have access to this, and often upload it elsewhere (e.g. StumbleUpon).
Any proxy servers along the line see it clearly
It could turn up as a Referer to another site
some completely random string
which only I would know.
Sounds like a password to me. :-)
If you're going to have to remember a secret string I would suggest doing usernames and passwords "properly" as HTTP servers will have been written to not leak password information; the same is not true of URLs.
This may only be a toy site, but why not practice setting up security properly while it won't matter if you get it wrong? Then, if you do have a site you need to secure in the future, you'll have already made all your mistakes.
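A minimal sketch of what "properly" could look like for a toy site: HTTP Basic auth over HTTPS using only Node's built-in modules. The credentials, certificate paths and port are placeholders, and a real deployment would want a hashed password store rather than a literal string.

    const https = require('https');
    const fs = require('fs');
    const crypto = require('crypto');

    // Placeholder credentials, encoded the way the Authorization header carries them.
    const EXPECTED = Buffer.from('admin:correct horse battery staple');

    function authorized(req) {
      const header = req.headers['authorization'] || '';
      if (!header.startsWith('Basic ')) return false;
      const supplied = Buffer.from(header.slice(6), 'base64');
      // Constant-time comparison so the check itself doesn't leak timing information.
      return supplied.length === EXPECTED.length &&
             crypto.timingSafeEqual(supplied, EXPECTED);
    }

    https.createServer(
      { key: fs.readFileSync('key.pem'), cert: fs.readFileSync('cert.pem') },
      (req, res) => {
        if (!authorized(req)) {
          res.writeHead(401, { 'WWW-Authenticate': 'Basic realm="admin"' });
          return res.end('Authentication required');
        }
        res.end('Admin area');
      }
    ).listen(8443);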
I know security through obscurity is
generally a very bad idea,
Fixed it for you.
The danger here is that you might get in the habit of "oh, it worked for Toy such-and-such site, so I won't bother implementing real security on this other site."
You would do a disservice to yourself (and any clients/users of your system) if you ignore Kerckhoffs's principle.
That being said, rolling your own security system is a bad idea. Smarter people have already created security libraries in all the major languages, and even smarter people have reviewed and tweaked those libraries. Use them.
It could appear on the web via a "Referer leak". Say your page links to my page at http://entrian.com/, and I publish my web server referer logs on the web. There'll be an entry saying that http://entrian.com/ was accessed from http://sitename.com/somegibberish.php?othergibberish=...
As long as the "login-URL" never posted anywhere, there shouldn't be any way for search engines to find it. And if it's just a small, personal toy-site with no personal or really important content, I see this as a fast and decent-working solution regarding security compared to implementing some form of proper login/authorization system.
If the site is getting a big number of users and lots of content, or simply becomes more than a "toy site", I'd advice you to do it the proper way
I don't know what your toy admin page would display, but keep in mind that when it loads external images or links to somewhere else, the Referer header is going to publicize your URL.
If you change http into https, then at least the path and query string will not be visible to anyone sniffing the network (though the hostname still will be).
(The caveat here is that even a very obscure login system can leave interesting traces: in network captures (MITM), somewhere on the site/target where it could enable privilege escalation, or on the system you use to log in if that machine is no longer secure. Some prefer the admin login to look no different from a standard user login to avoid exactly that.)
You could require that some action be taken a number of times, with some number of seconds of delay between the attempts. After this action, delay, action, delay, action pattern is noticed, the admin interface becomes available for login. The URLs used in the interface could be randomized each time, with a single-use URL generated after that pattern. Further, you could expose the interface only through some tunnel, and only for a minute, on a port encoded by the delays.
If you could do all that in a manner that didn't stand out in the logs, that'd be "clever", but you could also open up new holes by writing all that code, and it goes against "keep it simple, stupid".

How To Dissuade Clients From Using IE6

How can we dissuade our clients from using IE6? We know IE6 is not a good standards-compliant browser and has many issues. How can we satisfy clients so that they do not use IE6?
Thanks...
I'm currently in the process of building a new site for my company and I've been looking at http://code.google.com/p/ie6-upgrade-warning/.
Essentially it's a little JavaScript lib that checks to see if the user is running IE6 and, if so, displays a nice little overlay on top of your site. The only problem I've got with it is that it completely blocks the user from using your site. I'd like to allow them to use it anyway, but let them know that their experience may not be as good as it could be. I'm sure it can be adapted, though; you should never exclude people from using your site based on their user agent. That said, I think it's a good trade-off: you try to get your users to upgrade, and if they don't want to they can still use your site, but they probably won't see all of the fancy browser tricks that you can do with modern browsers.
It sure looks nice anyway
Other resources include http://ie6update.com/ (not a fan though, you shouldn't trick users)
Update: Seems like someone made a bit more customizable version of this written in jQuery. See jreject.turnwheel.com
One of the reasons this problem exists is as follows.
Many IE6 users have no choice. They sit behind corporate firewalls with locked-down machines, and while on their home machines they will have the latest technology, at work they are constrained by workplace rules and policies.
So why don't these corporations upgrade from IE6 to 7 or 8? Well, here is one reason: workload.
As a sysop you need to upgrade 500 machines to the new browser.
In many cases these browsers run mission-critical add-ins such as ActiveX controls, so to do the upgrade you have to do all the testing and verification and then do a planned roll-out, which will have problems, hiccups and glitches, a lot of work, late nights and unpaid overtime, and a lot of flak from the users as you do it.
And what is the payback for this upgrade? Well, the internal systems work on IE8 exactly as they worked on IE6 (well, not always, and you may need to rewrite those as well), but the users can now perfectly access the latest startup site that plugs into Facebook (and will be gone in 6 months), which is not work related.
So unless there is a tangible business benefit, many shops simply cannot see a reason for, or justify the cost of, a browser upgrade.
These locations will convert when they go to Windows 7, perhaps, or because the "application" they use internally is upgraded and needs the newer browser version. At that point there is a justification for doing it.
N.B. I have recently worked in two jobs where IE6 compatibility was a must for this reason (large client bases, behind firewalls, with lockdown), and I am not stating the above as a reason/excuse not to do it. The sooner the better.
Provided they have the proper permissions to install software on their machines, use Chrome Frame. The speed boost, if nothing else, should be incentive enough.
"The customer is always right."
You can advise them otherwise, but if they want IE6 for whatever reason then it's up to them.
The best way is to educate them: make them aware of why you are blocking IE6. Do a comparison, a case study, etc.; put it in terms they understand and convince them that using IE6 is a bad idea (whatever your reasons).
It's simple to implement a script to prevent IE browsers from connecting to your site; however, doing that may result in users being turned away. If this is a public site, take into consideration the market share Internet Explorer has: unless your site is really incredible, it is unlikely you will get a user to install a new browser.
To get around this, in the past I have used a simple splash page that informs them of the reasons not to use IE6. Example:
You are currently using Internet Explorer. While you may continue to browse this site using IE, please be aware that some functionality may not be available due to compliance standards within Internet Explorer, and for this reason we do not support issues that arise when using Internet Explorer. We recommend using Google Chrome (download here) or Mozilla Firefox (download here).
If this is within a corporate environment, you can always work with the IT department to ensure that alternate browsers are distributed. I recommend Google Chrome, simply because of the ability to create "application windows" that eliminate problem-causing elements of the browser GUI (back button, etc.).
Having a site that elegantly degrades when the user's browser is IE6 is the best option. IE6 users should still be able to use your web site - if a particular feature requires a modern browser a user will be more likely to switch if they already find your site useful.
Another point: modern JavaScript libraries like jQuery make it easier to code sites that are compatible with IE6. There's no need to turn away potential customers because of their web browser choice. If you're a web designer, it's your job to make sure they have a good experience.
A lot of this comes down to the reasons you want them to stop using IE6. IE6/7 are a pain in the bum if you let them be. We're now taking a more aggressive approach to browser adoption when it comes to what you can/can't do.
For instance, when you visit our new sites in most browsers you'll get rounded corners, transparency, gradients etc. When you visit in IE6 you get a square, opaque, monotone website. Wherever you have PNGs you'll get a simple GIF (even if it looks pants).
Unfortunately IE6 is tied to many businesses for internal reasons (using apps etc) and you can't force them to upgrade but you can give them a subtle message.
Make them understand that IE is not bad; it's IE6 that's bad. If they wish to use IE they surely can, but they could use IE7 or even IE8. Show them how IE7 and IE8 provide some great features which are not there in IE6.
Also, IE8 is the only browser that follows strict CSS 2.1 methodology.
Plus, there are many websites which previously ran in IE6 (with no problem) that now show a warning that some content may not be supported in IE6 (e.g. www.yahoo.com), so why keep using it?
Thanks
We had the same issue in one of our projects. I made a simple conditional check and displayed an additional div with links to download Firefox, Chrome and IE8.
Try facebook.com in IE6; that was my inspiration for the additional div.
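A sketch of that kind of conditional check, done here in plain JavaScript rather than IE conditional comments; the element id and wording are made up for the example:

    // Show an upgrade notice to IE6 users without blocking the rest of the site.
    if (/MSIE 6\./.test(navigator.userAgent)) {
      var warning = document.createElement('div');
      warning.id = 'ie6-warning';
      warning.innerHTML = 'Your browser is outdated and some features may not work. ' +
        'Consider <a href="https://www.mozilla.org/firefox/">Firefox</a> or ' +
        '<a href="https://www.google.com/chrome/">Chrome</a>.';
      document.body.insertBefore(warning, document.body.firstChild);
    }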
In line with Markus' post, it's simple enough to display a warning popup when the site loads. Ideally you won't show this every time they load a page, of course; that will get old fast.
You have a good opportunity, when working on a spec with your client, to tell them: "It will cost $X more if we have to support older browsers, including IE6 (don't just say IE6), and it will mean we can't easily add more advanced functionality. Supporting older browsers will detract from the overall quality and increase time and cost."
A while ago there was a collective effort in Norway to get users away from IE6. Several of the largest sites in Norway participated, and the user got a kind warning on top of the site that recommended him to upgrade or switch browser for an improved browsing experience - if using IE6.
Check out what Wired said about it!
Make a whitepaper.
Two things:
Charge extra -- double or treble rates or more -- to support IE6. (even IE7 these days).
Point out that IE6 (and WinXP too) will be losing the last vestiges of support in the near future. If you think they're insecure now, just wait till that happens -- no more security fixes. If you're still developing for IE6 now, then you're clearly not going to be ready for the upgrade in time, so you will be hacked, and hacked badly. If your client is willing to accept that, then that's his problem, but you need to help him understand the gravity of the problem. He needs to be putting his upgrade plans in now, not getting more dev work done for the old systems.

Automatic updates - what is 'adequate' security?

There are a few questions (C#, Java) that cover how one might implement automatic updates. It appears initially easy to provide automatic updates, and there are seemingly no good reasons not to provide automatic updates for most software.
However, none appear to cover the security aspects of automatic updates.
How safe are automatic updates now?
How safe should they be?
How safe can they be?
My main issue is that the internet is, for all intents and purposes, a wild west where one cannot assume anything about any data they receive. Automatic updates over the internet appear inherently risky.
A company computer gets infected and spoofs DNS replies (only a small percentage of which need to win), making the other company computers believe that the update server for a common application is elsewhere; they download the 'new' application and become infected.
As a developer, what possible attacks are there, and what steps should I take to protect my customers from abuse?
-Adam
With proper use of cryptography your updates can be very safe. Protect the site you distribute your updates from with SSL. Sign all your updates with GPG/PGP or something else, and make your clients verify the signature before applying the update. Take steps to make sure your server and keys are kept extremely secure.
"Adequate" is very subjective. What is adequate for an internet game may be completely inadequate for the security system of our nuclear missiles. You have to decide how much potential damage could occur if someone managed to break your security.
The most obvious attack would be an attacker supplying changed binaries through his "evil" update server. So you should ensure that the downloaded data can be verified to originate from you, using a digital signature.
To ensure security, you should obviously avoid distributing the private signing key. Therefore, you could implement some variation of RSA message signing.
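A minimal sketch of the verify-before-apply idea, assuming Node.js, an RSA key pair, and placeholder file names; a real updater would also pin the public key and check version metadata to prevent rollback:

    const crypto = require('crypto');
    const fs = require('fs');

    const publicKey = fs.readFileSync('update-signing-public.pem'); // shipped with the app
    const update = fs.readFileSync('update-1.2.3.bin');             // downloaded payload
    const signature = fs.readFileSync('update-1.2.3.bin.sig');      // downloaded detached signature

    // Verify the RSA signature over the update before touching it.
    const ok = crypto.verify('sha256', update, publicKey, signature);
    if (!ok) {
      throw new Error('Signature check failed - refusing to apply the update');
    }
    // Only after the signature verifies do we unpack and install the update.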
Connecting to your update server via SSL can be sufficient, provided your client will refuse to connect if they get an invalid certificate and your server requires negotiating a reasonable level of connection security (and the client also supports that).
However realistically almost anything you do is going to be at least as secure as the route via which your users get the first install of your software anyhow. If your users initially download your installer via plain http, it is too late to start securing things on the updates.
This is also true to some extent even if they get your initial software via https or digitally signed, as most users can easily be persuaded to click OK on almost any security warning they see along the way.
there are seemingly no good reasons not to provide automatic updates for most software.
There are good reasons not to force an update.
bug fixes may break code
users may not want to risk breaking production systems that rely on older features
