How to design an application security architecture across multiple technology stacks

Due to acquisitions, we have a lot of modules built on different technology stacks. We are considering a way to centralize the implementation of application security controls. I have two solutions in mind, but both seem to have flaws:
Option 1
Create an independent security library. The weakness is that it cannot be used from different languages; we would at least need to provide a binding or interface for each language.
Option 2
Create a security service with a REST API. Any technology can then use it, but it introduces extra performance overhead.
Do you have any experience with this?
Thanks.

Please consider XACML as an option:
XACML stands for eXtensible Access Control Markup Language. The standard defines a declarative access control policy language implemented in XML and a processing model describing how to evaluate authorization requests according to the rules defined in policies.
It might be suitable for you because it allows fully centralized access control. You will only need to implement a PEP (Policy Enforcement Point) for each particular module.
The XACML model supports and encourages the separation of the authorization decision from the point of use. When authorization decisions are baked into client applications, it is very difficult to update the decision criteria when the governing policy changes. When the client is decoupled from the authorization decision, authorization policies can be updated on the fly and affect all clients immediately.
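To make the PEP idea concrete: here is a minimal sketch, assuming a PDP that accepts XACML JSON-profile requests over HTTP (the endpoint URL, attribute IDs, and response handling below are illustrative assumptions, not any specific product's API):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal Policy Enforcement Point: build a XACML-style JSON request,
// send it to the central Policy Decision Point, check the decision.
// The PDP URL and the payload layout are illustrative assumptions.
public class SimplePep {
    private static final String PDP_URL = "https://pdp.example.com/authorize"; // hypothetical
    private final HttpClient http = HttpClient.newHttpClient();

    public boolean isPermitted(String subject, String action, String resource)
            throws Exception {
        String body = """
            {"Request": {
              "AccessSubject": {"Attribute": [{"AttributeId": "subject-id",  "Value": "%s"}]},
              "Action":        {"Attribute": [{"AttributeId": "action-id",   "Value": "%s"}]},
              "Resource":      {"Attribute": [{"AttributeId": "resource-id", "Value": "%s"}]}
            }}""".formatted(subject, action, resource);

        HttpRequest request = HttpRequest.newBuilder(URI.create(PDP_URL))
                .header("Content-Type", "application/xacml+json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response =
                http.send(request, HttpResponse.BodyHandlers.ofString());

        // A real PEP would parse the JSON response properly; only a
        // "Permit" decision counts as an allow.
        return response.body().contains("\"Decision\":\"Permit\"");
    }
}
```

The point is that every module, whatever its language, only needs a thin client like this, while the policies themselves live and evolve centrally at the PDP.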

Supporting multiple versions of models for different REST API versions

Are there any best practices for the implementation of API versioning? I'm interested in the following points:
Controller, service - e.g. do we use a different controller class for each version of the API? Does a newer controller class inherit the older controller?
Model - if the API versions carry different versions of the same model - how do we handle conversions? E.g. if v1 of the API uses v1 of the model, and v2 of the API uses v2 of the model, and we want to support both (for backward-compatibility) - how do we do the conversions?
Are there existing frameworks/libraries I can use for these purposes in Java and JavaScript?
Thanks!
I always recommend a distinct controller class per API version. It keeps things clean and clear to maintainers. The next version can usually be started by copying and pasting the last version. You should define a clear versioning policy; for example, N-2 versions. By doing so, you end up with three side-by-side implementations rather than the explosion some people expect. Refactoring business logic and other components that are not specific to an HTTP API version out of controllers can help reduce code duplication.
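To illustrate (not prescribe): in a Java shop using Spring MVC, distinct controllers per version with the shared logic factored into a version-neutral service might look like the sketch below. All type names (Order, OrderService, the contract records) are hypothetical.

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical domain type and version-specific contracts.
record Order(long id, String customer) {}
record OrderV1(long id, String customer) {
    static OrderV1 from(Order o) { return new OrderV1(o.id(), o.customer()); }
}
record OrderV2(long id, String customerName) { // v2 renamed a field
    static OrderV2 from(Order o) { return new OrderV2(o.id(), o.customer()); }
}

// Version-neutral business logic, factored out of the controllers.
interface OrderService {
    Order findOrder(long id);
}

// One controller class per API version; each stays a thin HTTP layer.
@RestController
@RequestMapping("/api/v1/orders")
class OrderControllerV1 {
    private final OrderService service;
    OrderControllerV1(OrderService service) { this.service = service; }

    @GetMapping("/{id}")
    OrderV1 get(@PathVariable long id) {
        return OrderV1.from(service.findOrder(id));
    }
}

@RestController
@RequestMapping("/api/v2/orders")
class OrderControllerV2 {
    private final OrderService service;
    OrderControllerV2(OrderService service) { this.service = service; }

    @GetMapping("/{id}")
    OrderV2 get(@PathVariable long id) {
        return OrderV2.from(service.findOrder(id));
    }
}
```

Retiring v1 under an N-2 policy is then just deleting OrderControllerV1 and its contract types; nothing else is entangled with it.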
In my strong opinion, a controller should absolutely not inherit from another controller, save for a base controller with version-neutral functionality (but not APIs). HTTP is the API. HTTP has methods, not verbs. Think of it as Http.get(). Using it from another language such as Java, C#, etc. is a facade that is an impedance mismatch with HTTP. HTTP does not support inheritance, so attempting to use inheritance in the implementation is only likely to exacerbate the mismatch problem. There are other practical challenges too. For example, you cannot un-inherit a method, which complicates the issue of sunsetting an API in inherited controllers (not all versions are additive). Debugging can also be confusing because you have to find the correct implementation to set a breakpoint. Putting some thought into a versioning policy and factoring responsibilities out to other components will all but negate the need for inheritance, in my experience.
Model conversion is an implementation detail. It is solely up to the server. Supporting conversions is very situational. Conversions can be bidirectional (v1<->v2) or unidirectional (v2->v1). A Mapper is a fairly common way to convert one form to another. Additive attribute scenarios often just require a default value for new attributes in storage for older API versions. Ultimately, there is no single answer to this problem for all scenarios.
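A hedged sketch of that mapper idea (all type names hypothetical; "v2 added a currency field" is an invented example of the additive case):

```java
// Hypothetical contracts: v2 added a "currency" field that v1 never had.
record InvoiceV1(long id, double total) {}
record InvoiceV2(long id, double total, String currency) {}

// A simple mapper between model versions.
final class InvoiceMapper {
    private InvoiceMapper() {}

    // v2 -> v1 (the unidirectional case): the older contract just
    // drops the attribute it never knew about.
    static InvoiceV1 toV1(InvoiceV2 v2) {
        return new InvoiceV1(v2.id(), v2.total());
    }

    // v1 -> v2 (the additive case): supply a default for the new
    // attribute ("USD" here is an illustrative assumption, not a rule).
    static InvoiceV2 toV2(InvoiceV1 v1) {
        return new InvoiceV2(v1.id(), v1.total(), "USD");
    }
}
```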
It should be noted that backward compatibility is a misnomer in HTTP. There really is no such thing. The API version is a contract that includes the model. The convenience or ease with which a new version of a model can be converted to/from an old version of the model should be considered just that: convenience. It's easy to think that an additive change is backward-compatible, but a server cannot guarantee that it is for all clients. Striking the notion of backward compatibility in the context of HTTP will help you fall into the pit of success.
Using OpenAPI (formerly known as Swagger) is likely the best option for integrating clients in any language. There are tools that can use the document to generate clients in your preferred programming language. I don't have a specific recommendation for a Java library/framework on the server side, but there are several options.

DDD/CQRS/ES: Implementing an aggregate member using a graph database, aka using an immediately consistent read model as an entity collection

Abstract
I am modelling a generic authorization subdomain for my application. The requirements are quite complicated, as it needs to cope with multi-tenancy, hierarchical organisation structures, resource groups, user groups, permissions, user-editable permissions, and so on. It's a mixture of RBAC (users are assigned to roles, roles have permissions, permissions allow executing commands) with claims-based auth.
Problem
When checking for business rule invariants, I have to traverse the permission "graph" to find a permission for a user to execute a command on a resource in an environment. The traversal depth is arbitrary, on multiple dimensions.
I could model this in code, but it would be better represented using a graph database, as queries/updates on this aggregate would be faster. It would also reduce the complexity of the code itself. But this would require the graph database to be immediately consistent.
Still, I need to use CQRS/ES, and enable a distributed architecture.
So the graph database needs to be:
Immediately consistent
And this introduces some drawbacks:
When loading events from the event store, we have to reconstruct the graph database each time
Or we have to introduce some kind of graph database snapshotting
Overhead when communicating with the graph database
But it has advantages:
Reduced complexity in performing complex queries
Complex queries resolve faster than in code
The graph database is a perfect fit for this job
Why this question?
In other aggregates I have modelled, I often have an EntityList or EntityHierarchy instance. They are basically ordered/hierarchical collections of sub-entities. Their implementation is arbitrary: they can be backed by anything from indexes to key-value pairs to dynamic arrays, as long as they implement the interfaces I declared for them. I often even have methods like findById() or findByName() on those entity lists. Those methods are similar to queries that could be executed on a database, but they're executed in-memory.
So why not have an implementation of such a list that is bound to a database? For example, instead of having a TMemoryEntityList, I would have a TMySQLEntityList. In the case at hand, perhaps having a TGraphAuthorizationScheme implementation living inside a TOrgAuthPolicy aggregate would be desirable, as long as it behaves like a collection, is iterable, and supports the defined interfaces.
I'm building my application with JavaScript on Node.js. There is an in-memory implementation of this called LevelGraph. Maybe I could use that as well. But let's continue.
Proposal
I know that in DDD terms the infrastructure should not leak into the domain. That's what I'm trying to prevent. That's also one of the reasons I asked this question: it's the first time I have encountered such a technical need, and I am asking people who are used to coping with this kind of problem for advice.
The interface for the collection is IAuthorizationScheme. The implementation has to support deep traversal, authorization lookup, etc. This is the interface I am thinking of backing with a graph database.
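Although I'm on Node.js, the shape of the idea is easier to show in a typed sketch; Java below, with method names invented for illustration (only IAuthorizationScheme and TGraphAuthorizationScheme come from my design, and the in-memory variant is flattened to one level of RBAC for brevity):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// The port the aggregate sees; whether it is backed by memory or by a
// graph database is an infrastructure detail injected from outside.
interface IAuthorizationScheme {
    void assignRole(String userId, String role);
    void grantPermission(String role, String command);
    boolean mayExecute(String userId, String command);
}

// In-memory implementation, fine for tests and small policies.
class TMemoryAuthorizationScheme implements IAuthorizationScheme {
    private final Map<String, Set<String>> rolesByUser = new HashMap<>();
    private final Map<String, Set<String>> commandsByRole = new HashMap<>();

    public void assignRole(String userId, String role) {
        rolesByUser.computeIfAbsent(userId, k -> new HashSet<>()).add(role);
    }

    public void grantPermission(String role, String command) {
        commandsByRole.computeIfAbsent(role, k -> new HashSet<>()).add(command);
    }

    public boolean mayExecute(String userId, String command) {
        return rolesByUser.getOrDefault(userId, Set.of()).stream()
                .anyMatch(r -> commandsByRole.getOrDefault(r, Set.of())
                                             .contains(command));
    }
}

// A TGraphAuthorizationScheme would implement the same interface but
// translate mayExecute into an arbitrary-depth graph traversal query,
// so the aggregate never sees the database.
```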
Sequence:
1. When a user asks to execute a command, I first authenticate him. I find his organisation and ask the OrgAuthPolicyRepository to load his organisation's corresponding OrgAuthPolicy.
2. The OrgAuthPolicyRepository loads the events from the EventStore.
3. The OrgAuthPolicyRepository creates a new OrgAuthPolicy with a dependency-injected TGraphAuthorizationScheme instance.
4. The OrgAuthPolicyRepository applies all previous events to the OrgAuthPolicy, which in turn runs queries on the graph database to sync its state with the aggregate (see the sketch after this list).
5. The command handler executes the business rule validation checks. Some of them might include checks against the aggregate's IAuthorizationScheme.
6. Once the business rules have been validated, a domain event is dispatched.
7. The aggregate handles this event and applies it to itself. This might include changes to the IAuthorizationScheme.
8. The eventBus dispatches the event to all listening eventHandlers on the read side.
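A hedged sketch of steps 2-4, reusing the IAuthorizationScheme port sketched above (Event, EventStore, and the apply mechanics are simplifying assumptions):

```java
import java.util.List;

// Marker for domain events; real events would carry data.
interface Event {}

interface EventStore {
    List<Event> eventsFor(String aggregateId);
}

class OrgAuthPolicy {
    private final IAuthorizationScheme scheme;
    OrgAuthPolicy(IAuthorizationScheme scheme) { this.scheme = scheme; }

    void apply(Event e) {
        // A real implementation would mutate the scheme here (grant a
        // role, add a group...), keeping the graph database in sync
        // with the aggregate's history.
    }
}

class OrgAuthPolicyRepository {
    private final EventStore store;
    OrgAuthPolicyRepository(EventStore store) { this.store = store; }

    OrgAuthPolicy load(String orgId, IAuthorizationScheme scheme) {
        OrgAuthPolicy policy = new OrgAuthPolicy(scheme);   // step 3
        store.eventsFor(orgId).forEach(policy::apply);      // step 4
        return policy;
    }
}
```

The replay loop in load() is exactly where the rebuild-the-graph-on-every-load drawback shows up; snapshotting would shortcut it.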
In summary
Is it conceivable/desirable to implement entities using external databases (e.g. a graph database) so that their implementation is easier? If yes, are there examples of such implementations, or guidelines? If not, what are the drawbacks of such a technique?
To solve your task I would consider the following variants, going from top to bottom:
Reduce task complexity by employing security frameworks or identity management solutions. Some existing out-of-the-box identity management solution might do the job. If it doesn't, take a look at frameworks that help you implement your own. Unfortunately I'm too poorly familiar with the Node.js world to advise you on any; in the Java world that could be Apache Shiro or Spring Security. This could be a good option from both a cost and a security perspective.
Maintain a single model instead of CQRS. This eliminates consistency problems (if you decide to have separate resources to store your models). From my understanding, permissions should not be changed frequently, but they will be accessed frequently. This means you can live with one model optimised for reads, avoiding the consistency issues of maintaining two models. To track user behaviour you can implement auditing separately; from my experience, security auditing can require additional data which most likely is not in your data model.
Do it with CQRS. Here I would first consider revisiting the requirements to find a way to accept eventual consistency instead of strong consistency. This opens many options for implementation.
Regarding the question of whether you should introduce a dedicated graph database: it's impossible to answer without knowledge of your domain, budget, desired system throughput and performance, existing infrastructure, team knowledge and setup, etc. You need to estimate the costs of the solution with a dedicated graph database and without it. My feeling is that unless permission management is the main idea of your project, or your project is mature enough (by number of users and R&D capacity), a dedicated database is unlikely to pay back its costs for your task.
To understand what the benefits of a dedicated graph database could be, compare it against your existing storage solutions. These two articles explain such benefits pretty well:
http://neo4j.com/developer/graph-db-vs-nosql/
http://neo4j.com/developer/graph-db-vs-rdbms/

Transition from RestKit to pure AFNetworking 2.0

I'd been using RestKit for the last two years, but recently I've started thinking about moving away from this monolithic framework, as it seems to be overkill.
Here are my reasons for moving on:
There is a big need to use NSURLSession for background fetches, and RestKit has only an experimental branch for the transition to AFNetworking 2.0, with no actual dates for when the transition will be finished. (Main reason.)
No need for Core Data support in the networking library, as there is no need for fully functional offline data storage.
Headaches with the new concept of response/request descriptors: they don't support different parameters in path patterns (e.g. an access token parameter), and there is no way to create an object request operation in one line with a custom descriptor. Here I am losing the features of the object manager as a facade.
I. My biggest loss in leaving RestKit is the object mapping process.
Could you recommend standalone libraries that you use which have shown themselves to be flexible and stable?
II. As I said, I don't need fully functional storage, but I still need some caching support in places.
I've heard that NSURLCache has become useful in the last OS release.
Have you used it, and what's your strategy?
Does it return cached API responses when the network connection is down?
III. Does anybody face the same problems?
What solutions have you applied?
Maybe someone could give some advice about the architecture he or she uses in multiple apps with pure AFNetworking?
I. In agreement with others who have commented, AFNetworking + Mantle is a simple and effective way to interact with a RESTful API and to replace the RestKit object mapping process that you will miss.
II. The answer to your caching requirements is highly dependent on the context. However, I have found for my recent functional requirements that caching a view model for a particular controller's screen, and only caching reference data returned by APIs, allows me to keep the application logic relatively simple whilst giving the user some continuity. A simple error notification for connectivity issues can be dealt with in a cross-cutting manner.
III. One thought on the architecture relevant to this aspect is to ensure that the APIs the app depends on provide data according to the app experience. This allows your app to focus on what it is good at (a very slick user experience) and moves logic into the APIs, closer to their dependencies such as data. This has the further benefit of reducing the chattiness of the app.

Security and Access control in a MVC application

I have only recently started working with the MVC approach, so I suppose this is an easy one for you gurus here:
Where do I put access control?
In a view? I don't want to have any logic besides switches and flags in my templates, so that sounds like the least viable option
In the model? Should each business object decide what data it will reveal about itself based on who's asking?
In the controller? That's where I have it now but it makes it hard to keep business rules consistent
Or is there another option?
This will depend on what framework you're using as that and the language will dictate a lot of the tools you have available to you.
From a high level, you should have access security configured at points-of-entry. And you should double-check access security at every level that could be considered autonomous or reused from multiple parts of your application (who knows if security was checked by your co-worker's portal that uses your logic layer? etc.). The other thing to worry about is data security, and that belongs as close to your data as possible (so, yes to your #2 above, but understand that it's separate).
This is akin to the difference between application logic and domain logic, which I'm fond of talking about. If there is any logic that is specific to one particular application (web app compared to a windows service, or whatever) then that logic should be defined in that application only. If some logic crosses the boundary between applications (is reusable between applications) then it qualifies as domain logic and should be defined in your model. Your applications can make use of domain logic, but they should not own it.
For Model (aka data) security, the Model would "control" the access and the Controller would "facilitate" the access. This provides for the reuse of the Model independently of the Controller and minimizes if not negates the general code replication necessary across dissimilar Controllers which use the Model.
For example, consider a Car, a Driver, and a Key (Model, Controller, and API, respectively). By virtue of a very small interface (key == API), the Model permits or denies Controller access per API (key fob). Different types of access are permitted (valet key, owner key, owner fob). Using the valet key interface, the Controller will not have access to some of the data/functions of the Model, such as the glove compartment, the trunk, and the gas tank. This is essentially role-based access implemented by the Model, through identifying and categorizing the Controller via a very small API/command surface area.
This means that the Model can be used by other Controllers (the car with different drivers) which only need basic authentication to access the data of the Model (the functions and compartments of the car).
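A hedged sketch of that analogy in code (Java, with every name invented for illustration): the Model hands out narrow interfaces, so the restriction lives in the Model rather than being re-implemented in each Controller.

```java
// The "keys": narrow, role-specific views of the same Model.
interface ValetKey {
    void drive();
}

interface OwnerKey extends ValetKey {
    void openTrunk();
    void openGloveCompartment();
}

// The Model owns its data and decides what each key exposes.
class Car {
    private final String trunkContents = "luggage";

    ValetKey valetAccess() {
        return this::drive; // driving only: no trunk, no glove box
    }

    OwnerKey ownerAccess() {
        return new OwnerKey() {
            public void drive() { Car.this.drive(); }
            public void openTrunk() {
                System.out.println("Trunk: " + trunkContents);
            }
            public void openGloveCompartment() {
                System.out.println("Glove compartment opened");
            }
        };
    }

    private void drive() { System.out.println("Driving"); }
}
```

A valet Controller compiled against ValetKey cannot even express a call to the trunk; no per-Controller security code has to be replicated.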

Grid security in grid frameworks

Recently, I've become quite involved experimenting with lightweight grid frameworks (Hazelcast, Gigaspaces, Infinispan).
However, I've been somewhat surprised that none of the free frameworks I tried has any ACL or role-based security features built in (GigaSpaces does have some measures).
What approaches are generally used to compensate for this? Am I supposed to only use the grid to share data between trusted server-side applications and use the traditional Java EE stack (i.e. a conventional DAO-layer) to access data from client or non-trusted server applications?
Are there any grid frameworks that provide ACL capabilities for accessing data in the grid? (I'd be happy with some ad-hoc stuff, although compliance with Java EE role concepts would be nice.)
This is my opinion on the current state of open-source distributed cache solutions (e.g. JBoss Cache and Infinispan). As a baseline I am using the GigaSpaces commercial caching product. Let me know what you think about open-source and proprietary cache products.
read more at: http://bigdatamatters.com/bigdatamatters/2009/09/infinispan-vs-gigaspaces.html
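On your fallback idea of a trusted DAO layer: a minimal sketch of mediating grid access through a role-checking facade (Hazelcast 4+ package names assumed; the map name, role name, and the DAO itself are illustrative, not a built-in feature of any grid):

```java
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;
import java.util.Set;

// The grid itself enforces no ACLs, so it stays on a trusted network
// and untrusted callers go through a DAO that checks roles first.
class SecureCustomerDao {
    private final IMap<String, String> customers;

    SecureCustomerDao(HazelcastInstance hz) {
        this.customers = hz.getMap("customers"); // hypothetical map name
    }

    String findCustomer(String id, Set<String> callerRoles) {
        if (!callerRoles.contains("customer-reader")) { // illustrative role
            throw new SecurityException("caller lacks customer-reader role");
        }
        return customers.get(id);
    }
}
```

This doesn't add security to the grid itself; it just concentrates the checks at the only entry point untrusted code is allowed to use.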
