Defining a security policy for a system

Most of the literature on security stresses the importance of defining a security policy before starting work on the mechanisms and implementation. While this seems logical, it is quite unclear what defining a security policy really means.
Has anyone here had any experience in defining a security policy, and if so:
1) What is the outcome of such a definition? Does the policy for, say, a distributed system take the form of a document containing a series of statements on the security requirements of the system (what is allowed and what is not)?
2) Can the policy take a machine-readable form (if that makes sense), and if so, how can it be used?
3) How does one maintain such a policy? Is the policy maintained as documentation on the system, along with all the rest of the documentation?
4) Is it necessary to make references to the policy document in code?
Brian

You should take one of the standard security policies and work from there. The most common is the PCI DSS (Payment Card Industry Data Security Standard). It's very well thought out and, except for a few soft spots, generally good. I've never heard of a machine-readable policy, except for a Microsoft Active Directory definition or a series of Linux iptables rules.
https://www.pcisecuritystandards.org/security_standards/pci_dss.shtml
EDIT:
Check out SE Linux policies also:
http://en.wikipedia.org/wiki/Security-Enhanced_Linux

The Open Web Application Security Project (OWASP) is a language-agnostic project to educate about security and provide tools to test and support software. While it is web-centric, many of the core ideas are widely applicable. The website is geared towards both software engineers and management.

When people talk about a "security policy", they might be referring to two different things.
The first is a high-level policy, usually defined by management. Its primary readers are humans. It is a document defining the goals, context, expectations, and requirements of security in the management's mind. The language used in such a policy can be vague, but it is the elementary "law" of security in the context where it applies. Everyone involved should follow it.
1) The outcome of such a policy is a clearly defined set of security requirements from management. With this policy, everyone involved can understand management's expectations and make security-related judgments accordingly when necessary.
2) As the primary readers of such a security policy are humans, and the statements are usually very general, it may not be in machine-readable form. However, there may be a series of documents defined based on the policy, namely security guidelines, procedures, and manuals, in increasing order of detail on how security should actually be implemented. For example, the requirements defined in the security policy may be realized as hardening manuals for different operating systems, so that administrators and engineers can perform hardening tasks efficiently without spending too much time interpreting management's intent. The hardening manuals may then be turned into a set of machine-readable configurations (e.g. minimum password length, maximum failed logins before locking the account, etc.) automating the hardening tasks; see the sketch after this list.
3) The document should be made accessible to everyone involved, and regularly reviewed by management.
4) Practically, it might be hard to make such references. Security policies are updated from time to time, and you will probably not want to recompile your program when a policy change just adjusts some parameters. However, it's nice to reference the policy in development documents such as the design spec.
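As an illustration of the machine-readable configuration mentioned in 2), here is a minimal sketch; the field names and thresholds are invented, not taken from any real standard:

```typescript
// A hardening manual distilled into a machine-readable policy.
// Field names and values are illustrative, not from any real standard.
interface HardeningPolicy {
  minPasswordLength: number;
  maxFailedLoginsBeforeLock: number;
  passwordMaxAgeDays: number;
}

const corporatePolicy: HardeningPolicy = {
  minPasswordLength: 12,
  maxFailedLoginsBeforeLock: 5,
  passwordMaxAgeDays: 90,
};

// An automated hardening task can then enforce the policy directly,
// instead of an administrator re-reading the manual for every server.
function passwordMeetsPolicy(password: string, policy: HardeningPolicy): boolean {
  return password.length >= policy.minPasswordLength;
}

console.log(passwordMeetsPolicy("hunter2", corporatePolicy)); // false: too short
```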
The other type of "security policy" refers to the sets of parameters taken in by security programs. I have found that some security programs really like to use the word "policy" to make their configuration look more organized and structured. But anyway, these "security policies" are really just values and/or instructions for security programs to follow. For example, Windows has its own set of "security policies" for users to configure audit logging, user rights, and so on. This type of "security policy" (parameters for programs) is actually defined based on the first type (requirements from management) mentioned above.
I might be writing too much on this. Hope it helps.

If you have to design a security policy, why not think about users and permissions?
So, let's say you have an API to something. Consider an arrangement of users that divides them by what they want to do and the minimum permissions they need to do it. So if someone only needs to read documents from a database, the API itself won't let that user do anything else.
Imagine this is a web JSON API. The user clicks a button, JavaScript builds the request and sends it. Normally it works fine, but if someone tampers with the request, the server simply returns an error code, because it whitelists only the few actions that user is allowed to perform.
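A minimal sketch of that server-side whitelist (the action names and user shape are hypothetical):

```typescript
// The server decides from the user's stored permissions, not from
// anything the client claims; a tampered request simply gets an error.
type Action = "readDocument" | "writeDocument" | "deleteDocument";

interface ApiUser {
  id: string;
  allowedActions: Set<Action>;
}

// Returns an HTTP-style status code; names are illustrative.
function handleRequest(user: ApiUser, requested: Action): number {
  if (!user.allowedActions.has(requested)) {
    return 403; // not on the whitelist for this user: reject
  }
  // ...perform the action...
  return 200;
}

const reader: ApiUser = { id: "u1", allowedActions: new Set(["readDocument"]) };
console.log(handleRequest(reader, "deleteDocument")); // 403, even if the client JS was tampered with
```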
So I think it all boils down to users and permissions.

Related

Is it possible to have access rules in Fluid Framework?

Fluid looks really nice if all collaborators are equal (allowed to change the same resources), but what I don't understand is how the server can prevent certain actions for certain users. As much of the logic as possible is on the client side, right? Maybe I haven't searched well enough, but I couldn't find a resource or readme that explains that part.
Example:
User A can edit the whole markdown document.
User B can edit the whole markdown document.
Both users can lock paragraphs they've created to be read-only, which only they can unlock again.
The Fluid FAQ states the following:
Turn-based games?
DDSes can be used to distribute state for games, including whose turn it is. It’s up to the client to enforce the rules of a game so there may be some interesting problems to solve around preventing cheating but the Fluid team has already prototyped several games.
If there is no solution for this problem, please let me know where I should start if I were to fix it myself. For a fun hobby project, I'm in the middle of deciding whether to build something new or to use Fluid (which could save me a lot of work).
Right now, Fluid doesn't have the concept of Access Control, but we could include some related features as DDS features, we could implement some features as server-hosted Fluid Bot filters, and we could implement basic ACLs at the server layer as Storage ACLs.
As DDS Features
I wrote the "OwnedMap DDS" to show this concept, where users reject invalid changes from other users. This could be extended to include your "paragraph lock" concept, but I'm not sure it's rigorously secure.
I think it'd be interesting to build a library of "OwnedDDS", or DDSes with filter methods on them to prevent invalid changes.
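To illustrate the ownership-filter concept in plain TypeScript (this is not the actual Fluid DDS API, just a sketch of the idea):

```typescript
// Ownership-filtered map: each key remembers who created it, and a
// change from anyone other than the owner is rejected on apply.
// Plain TypeScript to illustrate the concept; not the actual Fluid API.
interface MapChange {
  key: string;
  value: string;
  authorId: string;
}

class OwnedMap {
  private values = new Map<string, string>();
  private owners = new Map<string, string>();

  // Returns true if the change was accepted.
  applyChange(change: MapChange): boolean {
    const owner = this.owners.get(change.key);
    if (owner !== undefined && owner !== change.authorId) {
      return false; // only the owner may modify this entry
    }
    this.owners.set(change.key, change.authorId);
    this.values.set(change.key, change.value);
    return true;
  }

  get(key: string): string | undefined {
    return this.values.get(key);
  }
}
```

Because each client applies this filter locally, a modified client could simply skip it, which is why the approach on its own is not rigorously secure.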
Server-hosted Fluid Bot filters
Another option is to have a server-side client, i.e. a non-user client that joins the session and is not a malicious actor. This Bot could validate that changes are legitimate and then "consent" to them. This breaks some optimistic insert constraints, but it is more rigorously secure.
With this approach, you may still need to modify DDSes so that they're consensus-based instead of optimistic, but the only consensus needed would be that the Bot agrees the change is valid.
Storage & Server level ACLs
You could imagine modifications to the routerlicious reference service where you need a user login to access specific containers. This is not as fine-grained as your request, but it would clearly work!
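As a sketch of the kind of check such a modified service might perform (the names are invented; this is not actual routerlicious code):

```typescript
// Coarse, server-enforced ACL: a user must be logged in and listed on a
// container before the service serves it. Invented names for illustration.
const containerAcl = new Map<string, Set<string>>([
  ["doc-123", new Set(["alice", "bob"])],
]);

function mayOpenContainer(containerId: string, userId: string | null): boolean {
  if (userId === null) return false; // no login, no access
  const allowed = containerAcl.get(containerId);
  return allowed !== undefined && allowed.has(userId);
}

console.log(mayOpenContainer("doc-123", "alice")); // true
console.log(mayOpenContainer("doc-123", null));    // false
```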

Which layer of an application should keep security logic (permissions, authorization)?

Since the most similar questions are related to ASP.NET MVC, I want to know some common strategies for making the right choice.
Let's try to decide whether it should go into the business layer or sit in the service layer.
If we consider the service layer to have a classical remote facade interface, it seems natural to land permission checks here, as the user object instance is always available (the service session is bound to the user) and ready for .hasPermission(...) calls. But that looks like a business logic leak.
With the alternative approach of implementing security checks in the business layer, we pollute domain object interfaces with 'security token' arguments and similar things.
Any suggestions on how to overcome this tradeoff, or do you perhaps know the one true solution?
I think the answer to this question is complex and worth a bit of thought early on. Here are some guidelines.
The service layer is a good place for:
Is a page public or only open to registered users?
Does this page require a user of a specific role?
Authentication process including converting tokens to an internal representation of users.
Network checks such as IP and spam filters.
The business layer is a good place for:
Does this particular user have access to the requested record? For example, a user should have access to their profile but not someone else's profile.
Auditing of requests. The business layer is in the best situation to describe the specifics about requests because protocol and other details have been filtered out by this point. You can audit in terms of the business entities that you are setting policy on.
You can play around a bit with separating the access decision from the enforcement point. For example, your business logic can have code that determines whether a user can access a specific record and present that as a callback to the service layer. Sometimes this will make sense.
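A minimal sketch of that separation (the names are illustrative): the business layer supplies the decision, and the service layer merely enforces it.

```typescript
// The business layer owns the access decision...
type AccessDecision = (userId: string, recordId: string) => boolean;

const canReadProfile: AccessDecision = (userId, recordId) =>
  userId === recordId; // business rule: you may only read your own profile

// ...while the service layer is the enforcement point and just applies it.
function getProfile(userId: string, recordId: string, decide: AccessDecision) {
  if (!decide(userId, recordId)) {
    throw new Error("Access denied");
  }
  return { id: recordId /* ...load and return the record... */ };
}

getProfile("alice", "alice", canReadProfile); // ok
// getProfile("alice", "bob", canReadProfile); // throws Access denied
```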
Some thoughts to keep in mind:
The more you can push security into a framework, the better. You are asking for a bug (and maybe a vulnerability) if you have dozens of service calls where each one needs to perform security checks at the beginning of its code. If you have a framework, use it.
Some security is best nearest the network. For example, if you wish to ban IP addresses that are spamming you, that definitely shouldn't be in the business layer. The nearer to the network connection you can get the better.
Duplicating security checks is not a problem (unless it's a performance problem). The earlier in the workflow you can detect a security problem, the better the user experience. That said, you want to protect business operations as close to the implementation as possible to avoid back doors that bypass the earlier checks. This frequently leads to early checks for the sake of the UI, with the definitive checks happening late in the business process.
Hope this helps.

Might security not be a crosscutting concern?

I am working on a project with very detailed security requirements. I would honestly not be surprised if the proposed model were as complex as that of an intelligence or security agency. Now, I've read that incorporating security into business logic is a mixing of concerns and thus a practice to be avoided.
However, all attempts at abstracting security have either failed or produced "abstractions" as messy as what came before. Is it possible for security to be so specific that it becomes part of the business rules? In some situations violating security only masks the data, in others it terminates the session, and in still others it triggers default values to be used instead. There are many requirements that respond to security privileges.
My fundamental question is: could I be in an exceptional case (i.e. one where incorporating security is sound) or am I not understanding something fundamental about abstracting security?
Edit:
tl;dr (of the answers as I understand them): authentication (who are you) is very much a cross-cutting concern and should be abstracted, whereas authorization (what can you do) is business logic. Lacking that vocabulary and having only the term "security" (or perhaps failing to appreciate the distinction between the two) led to my confusion.
Security is split into two parts: authentication and authorization. Authentication is a pretty specific use case: how do you determine that a user is trusted out of a set of untrusted users? I think this is cross-cutting; you need to keep unauthenticated users out of your system, or out of a subset of it.
Authorization (can the user do something) is very much a business rule. It can be (and often is) very specific and different for each use case. Who determines which roles can do what? Well, the business does. No one else can answer that for you.
In the Csla.Net 4 framework, that's exactly how authorization is treated: as a specialized business rule. You're not even limited to "user is in role" or "user has permission." You can make more complex rules like "user can edit this field if the workflow has not passed this particular step."
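For illustration, here's a rule of that shape as a plain predicate, sketched in TypeScript rather than Csla.Net's C#; the field name and step threshold are invented:

```typescript
// Authorization as a specialized business rule, richer than a role check.
interface WorkflowDocument {
  currentStep: number; // 1 = draft, 2 = review, 3 = approved (illustrative)
}

// Rule: editors may change the amount field only before approval.
function canEditAmount(doc: WorkflowDocument, roles: string[]): boolean {
  return roles.includes("editor") && doc.currentStep < 3;
}

console.log(canEditAmount({ currentStep: 2 }, ["editor"])); // true
console.log(canEditAmount({ currentStep: 3 }, ["editor"])); // false: workflow passed the step
```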
I suppose an exceptional case would be if your business logic IS a security service of some kind; then yes. However, I think your problem may be that you are confusing user authorization with authentication.
Certainly, authentication should have a set of rules associated with it, but the end result should be identification of the user and creation of the session.
Authorization would be separate from that: there we determine the user's role and what privileges are laid out by that role.
A typical example would be that authentication returns a User object and stores it in the session. The User has one to many roles. A role may have one to many privileges. A business logic method might be sendEmail. This method queries the User object for a specific privilege: if it exists, do something; if not, do something else.
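A sketch of that arrangement (the privilege name SEND_EMAIL is invented for illustration):

```typescript
// User -> roles -> privileges, with a business method querying the User.
interface Role {
  name: string;
  privileges: Set<string>;
}

interface User {
  name: string;
  roles: Role[];
}

function hasPrivilege(user: User, privilege: string): boolean {
  return user.roles.some((role) => role.privileges.has(privilege));
}

function sendEmail(user: User, message: string): void {
  if (hasPrivilege(user, "SEND_EMAIL")) {
    console.log(`sending for ${user.name}: ${message}`); // do something
  } else {
    console.log("not permitted"); // do something else
  }
}

const support: User = {
  name: "alice",
  roles: [{ name: "support", privileges: new Set(["SEND_EMAIL"]) }],
};
sendEmail(support, "hello"); // permitted
```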
EDIT: Security, in my opinion, should always be a cross-cutting concern when it comes to the user. However, if your business logic involves properties of objects other than the user, CRUD of those objects, or administering other users, then it falls in line with your business requirements and is thus business logic.

Security Beyond a Username/Password?

I have a webapp that requires security beyond that of a normal web application. When any user visits the domain name, they are presented with two text fields, a username field, and a password field. If they enter a valid user/pass, they get access to the web application. Standard stuff.
However, I'm looking for additional security beyond this standard setup. Ideally it would be a software solution, but I'm also open to hardware solutions (key fobs, for example) or even procedural changes (such as one-time passwords on a password pad).
The webapp is unique in that we know all our users ahead of time, and we create their username and password and give it to them. In this sense, we can be assured that the username and password are "strong".
However, our clients have requested additional security beyond this. Anyone have any ideas on how to add another layer of complexity to the security?
Our company used PhoneFactor and we absolutely love it.
We've also used Safeword Tokens in the past.
However, it's not the only game in town. I'd start by googling "two-factor authentication".
The OWASP guide to authentication is another good place to start. Actually, OWASP is the first place I'd look for ANY web security question.
Another option for additional security is to use a piece of physical 'evidence' such as a Smart Card: Protect Your Data Via Managed Code And The Windows Vista Smart Card APIs
There are lots of different areas that web apps can have their security improved on. Before getting started you need to determine what, exactly, your problem areas might be and what you want to focus on.
You might start this process by having a third party do penetration testing (pen testing) on your application. This should give you a quick hit list of things to take care of and, once you have a passing grade, something to use in your sales literature.
Next you'll want to talk to your customers to understand what they mean by "more secure". Is it simply two-factor authentication, as David and Mitch mentioned, or are they more concerned about things such as data in motion (ARP poisoning, SSL, and the like), data at rest (everything from hard-drive encryption to database encryption), authorization, impersonation (cross-site and replay), personnel (ongoing background checks on who has access to the machines), etc.?
The concept of security covers a lot of ground.

Software and Security - do you follow specific guidelines?

As part of a PCI-DSS audit, we are looking into improving our coding standards in the area of security, with a view to ensuring that all developers understand the importance of this area.
How do you approach this topic within your organisation?
As an aside we are writing public-facing web apps in .NET 3.5 that accept payment by credit/debit card.
There are so many different ways to break security, and you can expect an endless supply of attackers. You have to stop them all, even attacks that haven't been invented yet. It's hard. Some ideas:
Developers need to understand well known secure software development guidelines. Howard & Le Blanc "Writing Secure Code" is a good start.
But being good rule-followers is only half the point. It's just as important to be able to think like an attacker. In any situation (not only software-related), think about what the vulnerabilities are. You need to understand some of those weird ways that people can attack systems - monitoring power consumption, speed of calculation, random number weaknesses, protocol weaknesses, human system weaknesses, etc. Giving developers freedom and creative opportunities to explore these is important.
Use checklist approaches such as OWASP (http://www.owasp.org/index.php/Main_Page).
Use independent evaluation (e.g. http://www.commoncriteriaportal.org/thecc.html). Even if such evaluation is too expensive, design and document as though you were going to use it.
Make sure your security argument is expressed clearly. The Common Criteria Security Target is a good format. For serious systems, a formal description can also be useful. Be clear about any assumptions or secrets you rely on. Monitor security trends, and frequently re-examine threats and countermeasures to make sure they're up to date.
Examine the incentives around your software development people and processes. Make sure that the rewards are in the right place. Don't make it tempting for developers to hide problems.
Consider asking your QSA or ASV to provide some training to your developers.
Security basically falls into one or more of three domains:
1) Inside users
2) Network infrastructure
3) Client side scripting
That list is written in order of severity, which is the opposite of the order of violation probability. Here are the proper management solutions, from a very broad perspective:
The only solution to prevent violations by inside users is to educate users, enforce awareness of company policies, limit user freedoms, and monitor user activities. This is extremely important, as this is where the most severe security violations always occur, whether malicious or unintentional.
Network infrastructure is the traditional domain of information security. Two years ago, security experts would not consider looking anywhere else for security management. Some basic strategies are to use NAT for all internal IP addresses, enable port security on your network switches, physically separate services onto separate hardware, and carefully protect access to those services even after everything is buried behind the firewall. Protect your database from code injection. Use IPsec to reach all automation services behind the firewall, and limit points of access to known points behind an IDS or IPS. Basically, limit access to everything, encrypt that access, and inherently treat every access request as potentially malicious.
Over 95% of reported security vulnerabilities are related to client-side scripting from the web, and about 70% of those target memory corruption, such as buffer overflows. Disable ActiveX and require administrator privileges to activate it. Patch all software that executes any sort of client-side scripting in a test lab no later than 48 hours after the patches are released by the vendor. If the tests show no interference with the company's authorized software configuration, deploy the patches immediately. The only solution for memory-corruption vulnerabilities is to patch your software. That software may include Java client software, Flash, Acrobat, all web browsers, all email clients, and so forth.
As far as ensuring your developers are compliant with PCI accreditation goes, ensure that they and their management are educated to understand the importance of security. Most web servers, even large corporate client-facing web servers, are never patched. Those that are patched may take months to be patched after they are discovered to be vulnerable. That is a technology problem, but more importantly it is a gross management failure. Web developers must be made to understand that client-side scripting is inherently open to exploitation, even JavaScript. The problem has become more apparent with the advance of AJAX, since information can be dynamically injected to an anonymous third party in violation of the same-origin policy, completely bypassing the encryption provided by SSL. The bottom line is that Web 2.0 technologies are inherently insecure, and those fundamental problems cannot be solved without defeating the benefits of the technology.
When all else fails, hire some CISSP-certified security managers who have the management experience, and the balls, to speak directly to your company executives. If your leadership is not willing to take security seriously, your company will never meet PCI compliance.
