What is the lifetime in major browsers of user consent for W3C Geolocation API?

According to section 3.1 of W3C Geolocation API recommendation:
An end-user will generally give express permission through a user interface, which usually presents a range of permission lifetimes that the end-user can choose from. The choice of lifetimes varies across user agents, but they are typically time-based (e.g., "a day"), or until the browser is closed, or the user might even be given the choice for the permission to be granted indefinitely.
[...]
Although the granularity of the permission lifetime varies across user-agents, this specification urges user agents to limit the lifetime to a single browsing session by default (see 3.4 Checking permission to use the API for normative requirements).
Is this recommendation of a single browsing session respected by major browsers? More generally, what are the lifetimes defined by major browsers for geolocation access?

Related

What are the advantages and disadvantages of implementing single user sessions?

Before posting this question I searched and found that a lot of questions have been asked about "how to implement single user sessions" in different tools or frameworks.
But my question is: why should we implement single user sessions?
I discussed this with some friends and did some research on Google, and I could find two reasons:
1. Some applications potentially maintain a "working state" for a user, so if they allow more than one session at a time, that state can get messed up.
2. Some tools/applications need this to implement licensing. Because the license allows only a fixed number of users, enforcing single user sessions prevents misuse.
Neither of the above points is related to security. Is there any other reason why it is considered a good security practice?
Thanks,
Manish
From a security point of view, single user sessions can help a user detect that their account is being used elsewhere. For example, if upon logon a user receives a message that their account is already logged on at another location, they will be alerted to the fact that their credentials have been compromised and that they should perhaps change their password.
Apart from that, I can only think of it being a good security mechanism to protect content (i.e. licensing that you covered in point (2)). However, there may be a secondary advantage to preventing multiple logons per account - users are less likely to share their credentials with other users if the account cannot be used simultaneously. Therefore their account details are likely to be kept more secure as the details have not been emailed to friends or colleagues.
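As a rough illustration, a single-session check at logon might look like this sketch (the registry and all names here are hypothetical; a real system would back this with a shared session store):

```cpp
#include <iostream>
#include <map>
#include <string>

// Hypothetical in-memory session registry: one active session per account.
class SessionRegistry {
    std::map<std::string, std::string> active_; // user -> session id
public:
    // Returns true if a session already existed, so the UI can warn the user
    // that their account appears to be in use elsewhere.
    bool login(const std::string& user, const std::string& sessionId) {
        bool alreadyActive = (active_.find(user) != active_.end());
        active_[user] = sessionId; // the new logon displaces the old session
        return alreadyActive;
    }
    void logout(const std::string& user) { active_.erase(user); }
};

int main() {
    SessionRegistry sessions;
    sessions.login("manish", "sess-1");
    if (sessions.login("manish", "sess-2")) {
        std::cout << "Warning: this account was already logged on elsewhere. "
                     "Consider changing your password.\n";
    }
}
```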

Securing a Browser Helper Object

I'm currently in the process of building a browser helper object.
One of the things the BHO has to do is to make cross-site requests that bypass the cross-domain policy.
For this, I'm exposing a __MyBHONameSpace.Request method that uses WebClient internally.
However, it has occurred to me that anyone using my BHO now has a CSRF vulnerability on every page, as a smart attacker can now make arbitrary requests from my clients' computers.
Is there any clever way to mitigate this?
The only way to fully protect against such attacks is to separate the execution context of the page's JavaScript from your extension's JavaScript code.
When I researched this issue, I found that Internet Explorer does provide a way to create such a context, namely via IActiveScript. I have not implemented this solution though, for the following reasons:
Lack of documentation / examples that combine IActiveScript with BHOs.
Lack of certainty about the future (e.g. https://stackoverflow.com/a/17581825).
Possible performance implications (IE is not known for its superb performance; how would two JavaScript engine instances per page affect browsing speed?).
Cost of maintenance: I already had an existing solution which was working well, based on very reasonable assumptions. Because I'm not certain whether the alternative method (using IActiveScript) would be bug-free and future-proof (see 2), I decided to drop the idea.
What I have done instead is:
Accept that very determined attackers will be able to access (part of) my extension's functionality.
@Benjamin asked whether access to a persistent storage API would pose a threat to the user's privacy. I consider this risk to be acceptable, because a storage quota is enforced, all stored data is validated before it's used, and it's not giving an attacker any more tools to attack the user. If an attacker wants to track the user via persistent storage, they can just use localStorage on some domain and communicate with that domain via an <iframe> using the postMessage API. This method works across all browsers, not just IE with my BHO installed, so it is unlikely that any attacker will dedicate time to reverse-engineering my BHO in order to use the API when there's a method that already works in all modern browsers (IE8+).
Restrict the functionality of the extension:
The extension should only be activated on pages where it needs to be activated. This greatly reduces the attack surface, because an attacker would first have to get their code running on https://trusted.example.com itself, rather than merely tricking the user into visiting a page of the attacker's choosing.
Create and enforce whitelisted URLs for cross-domain access at the extension level (in native code (e.g. C++) inside the BHO).
For sensitive APIs, limit their exposure to a very small set of trusted URLs (again, not in JavaScript, but in native code).
The part of the extension that handles the cross-domain functionality does not share any state with Internet Explorer. Cookies and authorization headers are stripped from the request and response. So, even if an attacker manages to get access to my API, they cannot impersonate the user at some other website, because of missing session information.
This does not protect against sites that use the IP address of the requester for authentication (such as intranet sites or routers), but this attack vector is already covered by a correct implementation of the whitelist (see step 2).
"Enforce in native code" does not mean "hard-code in native code". You can still serve updates that include metadata and the JavaScript code. MSVC++ (2010) supports ECMAScript-style regular expressions <regex>, which makes implementing a regex-based whitelist quite easy.
If you want to go ahead and use IActiveScript, you can find sample code in the source code of ceee, Gears (both discontinued) or any other project that attempts to enhance the scripting environment of IE.

Might security not be a cross-cutting concern?

I am working on a project with very detailed security requirements. I would honestly not be surprised if the model proposed were as complex as that of an intelligence/security agency. Now, I've read that incorporating security into business logic is a mixing of concerns and thus a practice to be avoided.
However, all attempts at abstracting security have either failed or created "abstractions" as messy as before. Is it possible for security to be so specific that it becomes part of the business rules? In some situations a security violation only masks the data, in others it terminates the session, and in still others it triggers default values to be used instead. There are many requirements that respond to security privileges.
My fundamental question is: could I be in an exceptional case (i.e. one where incorporating security is sound) or am I not understanding something fundamental about abstracting security?
Edit:
tl;dr (of the answers as I understand them): authentication (who are you?) is very much a cross-cutting concern and should be abstracted, whereas authorization (what can you do?) is business logic. Lacking that vocabulary and having only the term "security" (or perhaps failing to appreciate the distinction between the two) led to my confusion.
Security is split into two parts: authentication and authorization. Authentication is a pretty specific use case: how do you determine that a user is trusted out of a set of untrusted users? I think this is cross-cutting; you need to keep unauthenticated users out of your system, or out of a subset of your system.
Authorization (can the user do something?) is very much a business rule. It can be (and often is) very specific and different for each use case. Who determines which roles can do what? Well, the business does. No one else can answer that for you.
In the Csla.Net 4 framework, that's exactly how authorization is treated: as a specialized business rule. You're not even limited to "user is in role" or "user has permission"; you can write more complex rules like "user can edit this field if the workflow has not passed this particular step".
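Csla.Net aside, the shape of such a field-level rule can be sketched generically (all names below are hypothetical illustrations, not the framework's actual API):

```cpp
#include <functional>
#include <iostream>

// Hypothetical rule: "user can edit this field only if the workflow has not
// passed a particular step". The rule is just a predicate over user and
// business state, evaluated like any other business rule.
struct EditContext {
    bool userHasEditRole;
    int  workflowStep;
};

using AuthRule = std::function<bool(const EditContext&)>;

int main() {
    const int approvalStep = 3; // invented cut-off step
    AuthRule canEditField = [=](const EditContext& ctx) {
        return ctx.userHasEditRole && ctx.workflowStep < approvalStep;
    };

    EditContext ctx{true, 4}; // has the role, but workflow moved past step 3
    std::cout << (canEditField(ctx) ? "editable" : "read-only") << "\n";
}
```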
I suppose an exceptional case would be if your business logic IS a security service of some kind; then yes. However, I think your problem may be that you are confusing user authorization with authentication.
Certainly authentication should have a set of rules associated with it, but the end result should be identification of the user and creation of the session.
Authorization would be separate from that: there we determine the user's role, and what privileges are laid out by that role.
A typical example would be that authentication returns a User object and stores it in the session. The User has one to many roles. A role may have one to many privileges. A business logic method might be sendEmail. This method queries the User object for a specific privilege: if it exists, do something; if not, do something else (as sketched below).
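As a rough sketch of that model (names like sendEmail and SEND_EMAIL are illustrative only, not from any particular framework):

```cpp
#include <iostream>
#include <set>
#include <string>
#include <vector>

// Hypothetical model: a User has roles, and a role carries privileges.
struct Role {
    std::string name;
    std::set<std::string> privileges;
};

struct User {
    std::string name;
    std::vector<Role> roles;

    bool hasPrivilege(const std::string& privilege) const {
        for (const auto& role : roles)
            if (role.privileges.count(privilege)) return true;
        return false;
    }
};

// Business logic method: queries the User (from the session) for the
// privilege and branches accordingly.
void sendEmail(const User& user) {
    if (user.hasPrivilege("SEND_EMAIL"))
        std::cout << "sending email as " << user.name << "\n";
    else
        std::cout << "denied: " << user.name << " lacks SEND_EMAIL\n";
}

int main() {
    User alice{"alice", {{"editor", {"SEND_EMAIL", "EDIT_DOC"}}}};
    User bob{"bob", {{"viewer", {"READ_DOC"}}}};
    sendEmail(alice);
    sendEmail(bob);
}
```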
EDIT: Security, in my opinion, should always be a cross-cutting concern when it comes to the user. However, if your business logic involves properties of objects other than the user, CRUD on those objects, or administering other users, then it falls in line with your business requirements and thus is business logic.

Should my service layer work for any user, or restrict itself to the currently authenticated user?

This is a fundamental design question about the service layer in my application, which forms the core application functionality. Pretty much every remote call reaches a service sooner or later.
Now I am wondering whether
every service method should take a User argument specifying for whom the operation should be performed,
or whether the service should always query the security implementation for the currently logged-in User and operate on that user.
This is basically a flexibility vs. security decision, I guess. What would you do?
There is also a DoS aspect to consider.
One approach is to offer (depending on your context) a publicly available instance / entry point to the services on a well-throttled set-up, and a less restricted instance to an internal, trusted environment.
In a similar vein, if you identify where traffic originates, you can (and should) provide better QoS to trusted parties.
So, I would possibly keep the core system (the services you write) fairly open / flexible, and handle some of the security-related stuff elsewhere (probably in the underlying platform).
Just because you write one set of services doesn't mean you can only expose them in one place, all at the same time, to the same clients.
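To illustrate that split, here is a rough sketch of exposing the same open service twice: once directly for the trusted environment and once behind a throttled public facade (the class names and quota are invented):

```cpp
#include <chrono>
#include <deque>
#include <iostream>
#include <string>

// The core service stays open and flexible.
struct ReportService {
    std::string run(const std::string& query) { return "results for " + query; }
};

// Public entry point: same service, behind a simple sliding-window throttle.
// The internal entry point would expose ReportService directly.
class PublicFacade {
    ReportService& svc_;
    std::deque<std::chrono::steady_clock::time_point> calls_;
    const std::size_t maxPerMinute_;
public:
    PublicFacade(ReportService& svc, std::size_t maxPerMinute)
        : svc_(svc), maxPerMinute_(maxPerMinute) {}

    std::string run(const std::string& query) {
        auto now = std::chrono::steady_clock::now();
        // Drop call records older than one minute.
        while (!calls_.empty() && now - calls_.front() > std::chrono::minutes(1))
            calls_.pop_front();
        if (calls_.size() >= maxPerMinute_)
            return "throttled: try again later";
        calls_.push_back(now);
        return svc_.run(query);
    }
};

int main() {
    ReportService svc;
    PublicFacade pub(svc, 2); // tight quota for untrusted traffic
    for (int i = 0; i < 3; ++i)
        std::cout << pub.run("q" + std::to_string(i)) << "\n";
    std::cout << svc.run("internal") << "\n"; // trusted path, no throttle
}
```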
I think you should decide which methods need a User argument and which need the logged-in user. You'll get the following method types as a result:
1.) Type 1: the method is best with a User argument.
2.) Type 2: the method is best without a User argument (it operates on the logged-in user).
3.) Type 3: a combination of types 1 and 2.
The solution for types 1 and 2 is simple, because they are trivial cases.
The solution for type 3 is to overload the method so that there is a version of type 1 and another version of type 2, as sketched below.
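A minimal sketch of that type-3 overload (currentUser is a stand-in for whatever your security implementation provides; all names are illustrative):

```cpp
#include <iostream>
#include <string>

struct User { std::string name; };

// Stand-in for querying the security implementation for the logged-in user.
User currentUser() { return User{"logged-in-user"}; }

// Type 1: explicit User argument, for callers acting on someone else's behalf.
void closeAccount(const User& user) {
    std::cout << "closing account of " << user.name << "\n";
}

// Type 2: overload without the argument; defaults to the authenticated user.
void closeAccount() {
    closeAccount(currentUser());
}

int main() {
    closeAccount();              // operates on the logged-in user
    closeAccount(User{"other"}); // admin-style call with an explicit user
}
```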
I try to look at security as an aspect. A User argument is required for things other than authentication as well. But I think control should reach the service layer's more important methods only if the user has been authenticated by some other filter; you can't have every method in the service layer querying the security module before proceeding.
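For illustration, a minimal sketch of such an up-front authentication filter in front of the service layer (all names and the token scheme are hypothetical):

```cpp
#include <iostream>
#include <optional>
#include <stdexcept>
#include <string>

struct User { std::string name; };

// Hypothetical security module: resolves a session token to a user, or nothing.
std::optional<User> authenticate(const std::string& sessionToken) {
    if (sessionToken == "valid-token") return User{"alice"};
    return std::nullopt;
}

struct AccountService {
    void deleteAccount(const User& user) {
        std::cout << "deleting account of " << user.name << "\n";
    }
};

// The filter runs once per request, before control reaches the service layer,
// so individual service methods need not query the security module themselves.
void handleRequest(const std::string& sessionToken, AccountService& svc) {
    auto user = authenticate(sessionToken);
    if (!user) throw std::runtime_error("401: not authenticated");
    svc.deleteAccount(*user);
}

int main() {
    AccountService svc;
    handleRequest("valid-token", svc);
    try { handleRequest("bogus", svc); }
    catch (const std::exception& e) { std::cout << e.what() << "\n"; }
}
```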

Defining a security policy for a system

Most of the literature on security talks about the importance of defining a security policy before starting to work out the mechanisms and implementation. While this seems logical, it is quite unclear what defining a security policy really means.
Has anyone here had any experience in defining a security policy, and if so:
1) What is the outcome of such a definition? Is the form of such a policy, for, say, a distributed system, a document containing a series of statements on the security requirements (what is allowed and what is not) of the system?
2) Can the policy take a machine-readable form (if that makes sense), and if so, how can it be used?
3) How does one maintain such a policy? Is the policy maintained as documentation (as with all the rest of the documentation) on the system?
4) Is it necessary to make references to the policy document in code?
Brian
You should take one of the standard security policies and work from there. The most common is PCI compliance (Payment Card Industry). It's very well thought out and, except for a few soft spots, generally good. I've never heard of a machine-readable policy except for a Microsoft Active Directory definition or a series of Linux iptables rules.
https://www.pcisecuritystandards.org/security_standards/pci_dss.shtml
EDIT:
Check out SE Linux policies also:
http://en.wikipedia.org/wiki/Security-Enhanced_Linux
The Open Web Application Security Project (OWASP) is a language-agnostic project to educate about security and provide tools to test and support software. While it is web-centric, many of the core ideas are widely applicable. The website is geared towards both software engineers and management.
When people talk about "security policy", they might be referring to two different types of security policy.
The first type is the high-level policy, usually defined by management. This security policy's primary readers are human. It is a document defining the goals, context, expectations, and requirements of security in the management's mind. The language used in this policy may be vague, but it is the elementary "law" of security in the applicable context. Everyone involved should follow such a policy.
1) The outcome of such a policy is a clearly defined set of security requirements from management. With these policies, everyone involved can understand management's expectations and make security-related judgments accordingly when necessary.
2) As the primary readers of such security policies are human, and the statements are usually very general, the policy may not be in machine-readable form. However, further documents may be defined based on the policy, namely security guidelines, procedures, and manuals, in order of increasing level of detail on how security should actually be implemented. For example, the requirements defined in the security policy may be realized as hardening manuals for different operating systems, so that administrators and engineers can perform hardening tasks efficiently without spending too much time interpreting management's thoughts. The hardening manuals may then be turned into a set of machine-readable configurations (e.g. minimum password length, maximum failed login count before locking the account, etc.) automating the hardening tasks; a tiny sketch of such a configuration appears after this list.
3) The document should be made accessible to everyone involved, and regularly reviewed by management.
4) Practically, it might be hard to make such references. Security policies might be updated from time to time, and you will probably not want to recompile your program when a policy change just affects some parameters. However, it's nice to reference the policy in development documents like design specs.
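To make the machine-readable end of point 2) concrete, the derived parameters might end up in a small structure consumed by a hardening tool; a tiny sketch (all values and names invented):

```cpp
#include <iostream>

// Hypothetical hardening parameters, derived from the written policy.
// A hardening tool applies these; auditors can diff them against the
// policy document.
struct HardeningConfig {
    int minPasswordLength;   // policy: "passwords must be strong"
    int maxFailedLogins;     // policy: "lock accounts under attack"
    int lockoutMinutes;      // policy: "locks must self-expire"
};

int main() {
    HardeningConfig cfg{12, 5, 30}; // invented values for illustration
    std::cout << "min password length: " << cfg.minPasswordLength << "\n"
              << "max failed logins:   " << cfg.maxFailedLogins   << "\n"
              << "lockout minutes:     " << cfg.lockoutMinutes    << "\n";
}
```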
The other type of "security policy" might just refer to the sets of parameters taken in by security programs. I have found that some security programs really like to use the word "policy" to make their configurations look more organized and structured. But anyway, these "security policies" are really just values and/or instructions for security programs to follow. For example, Windows has its own set of "security policies" for users to configure audit logging, user rights, and so on. This type of "security policy" (parameters for programs) is actually defined based on the first type (requirements from management) mentioned above.
I might be writing too much on this. Hope it helps.
If you have to design a security policy, why not think about users and permissions?
So, let's say you have an API to something. Consider some arrangement of users that divides them by what they want to do and the minimum permissions they need to do it. So if someone only has to read documents from a database, the API itself won't let that user do anything else.
Imagine this is a web JSON API. The user clicks a button, and JS builds a request and sends it. Normally it works fine, but if someone tampers with the request, the server simply returns an error code, because it whitelists just the few actions that user is allowed to perform (as sketched below).
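A minimal sketch of that server-side action whitelist (role and action names are invented):

```cpp
#include <iostream>
#include <map>
#include <set>
#include <string>

// Hypothetical per-role whitelist: the server only honours actions listed
// for the requester's role; anything else gets an error code.
static const std::map<std::string, std::set<std::string>> kAllowed = {
    {"reader", {"read_document"}},
    {"editor", {"read_document", "update_document"}},
};

int handle(const std::string& role, const std::string& action) {
    auto it = kAllowed.find(role);
    if (it == kAllowed.end() || !it->second.count(action))
        return 403; // tampered or unauthorized request
    return 200;
}

int main() {
    std::cout << handle("reader", "read_document")   << "\n"; // 200
    std::cout << handle("reader", "delete_document") << "\n"; // 403
}
```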
So I think it all boils down to users and permissions.
