Just to rephrase my question:
I am actually working on policy combination and conflict resolution for distributed networks. As these are distributed open systems, digital resources can be protected by a collection of security policies created by different entities, each of which holds a different copy of the resource.
Searching the web with keywords such as "access control policies" and "security policy conflict resolution", I didn't find many results, or even a survey of the different methods.
Some approaches for combining policies and resolving conflicts that I did find include "negative policies prevail", "assign priorities to policies", etc., but not a way to combine or encompass them.
Are there any other current directions for applying access control policies, or is it just a question of choosing between classic approaches such as those mentioned above?
Thanks in advance!
You need to look at research done in the space of policy-based access control. Look at papers on:
policy-based access control
attribute-based access control
OASIS XACML, which implements policy-based access control and policy-combining algorithms (see the sketch after this list).
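To make the combining-algorithm idea concrete, here is a minimal sketch in Python of two classic strategies, deny-overrides and first-applicable, in the spirit of XACML's combining algorithms. It is only an illustration, not XACML itself, and all names are made up:

```python
# Minimal illustration of two classic policy-combining algorithms
# (deny-overrides and first-applicable), similar in spirit to the
# combining algorithms defined by XACML. All names are hypothetical.

PERMIT, DENY, NOT_APPLICABLE = "Permit", "Deny", "NotApplicable"

def deny_overrides(decisions):
    """A single Deny wins; otherwise Permit if any policy permits."""
    if DENY in decisions:
        return DENY
    if PERMIT in decisions:
        return PERMIT
    return NOT_APPLICABLE

def first_applicable(decisions):
    """The first policy that yields a decision determines the result."""
    for d in decisions:
        if d != NOT_APPLICABLE:
            return d
    return NOT_APPLICABLE

# Example: three independently authored policies evaluate one request.
decisions = [PERMIT, NOT_APPLICABLE, DENY]
print(deny_overrides(decisions))    # Deny   ("negative policy prevails")
print(first_applicable(decisions))  # Permit (priority given by policy order)
```

XACML standardizes several such algorithms (deny-overrides, permit-overrides, first-applicable, only-one-applicable), which essentially formalizes the "negative policies prevail" and "assign priorities" ideas you mentioned.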
Academics in this field include:
Elisa Bertino from Purdue (DBLP)
Theo Dimitrakos from BT/Kent (DBLP)
Ludwig Seitz from SICS (DBLP)
NIST has also done extensive research in this space, which you can find on their dedicated websites:
Role-based access control
Attribute-based access control
I've been reviewing some example custom policies that revolve around account linking, which I would like to integrate into my own set of policies. I've found the Azure B2C documentation lacking when it comes to explaining the difference between an 'alternative security ID' and 'user identities'.
The account linking policies I've been reviewing interact with a user's collection of identities via a handful of claims transformations (e.g. CreateUserIdentity, AddItemToUserIdentityCollection, RemoveItemToUserIdentityCollection and GetIssuersFromUserIdentityCollectionTransformation). The only place these claims transformations appear to be documented is a random GitHub issue comment posted more than two years ago, and the sample account linking policies haven't been touched in a couple of years either.
On the other hand, the base policy from the default starter packs uses documented claims transformations to interact with a user's collection of identities (or at least to add an entry there when a user signs up via a social IdP). There are claims transformations documented here that match all the user identity ones above (e.g. CreateAlternativeSecurityId, AddItemToAlternativeSecurityIdCollection, RemoveAlternativeSecurityIdByIdentityProvider and GetIdentityProvidersFromAlternativeSecurityIdCollectionTransformation).
The distinction between these two concepts is really not clear to me. Why are there seemingly two ways (along with a parallel set of claims transformations) to interact with identities?
Can account linking be achieved using the documented alternative security claims transformations? This would go against what appears to be the recommendation to use userIdentities claims transformations for account linking, but using years-old samples that employ undocumented features doesn't fill me with much confidence.
Yes, you can use either.
They are essentially two different naming conventions that refer to the same base structure.
Refer to this and this.
How do companies like Facebook and Google implement privacy controls at scale? For example, Facebook has an audience selector that includes public, friends, "friends except...", only me, specific friends, and even custom. From an implementation and design standpoint, how do these companies handle this? Are they defining rule-based access controls, are they manually coding in these features, do they have a privacy model they use, or is it a hybrid approach? If anyone has links to publicly available design docs, conference talks, white papers, or even research papers, please feel free to share. Every time I try to search for how company "X" does privacy controls, I get the "business" talk on privacy, or access controls as they relate to data centers, which is not what I'm looking for.
In this Google patent they describe a "user privacy framework" which does all the things you mentioned.
It uses a database which stores rules and privacy levels for each user.
An authorization server manages this database and evaluates requests for user data.
If user A wants to access user B's data, the authorization server checks whether the request is allowed or whether it violates any rules or privacy levels.
The request is then answered or rejected accordingly.
See this flow chart from the patent:
Flow chart (Sorry, I am not allowed to post images yet)
So what are privacy levels and privacy rules?
Rules are conditions that need to be met when a user requests another user's information. I couldn't find an example in the patent, but I suspect a rule could be something like "Is user A blocked by user B?".
Privacy levels seem to be more general than rules. For example, the level "semi-public" allows another user to access the requested information if no rule forbids it.
The level "private" allows the information to be stored on the authorization server but forbids other users from accessing it.
The level "no access" forbids even the storage of the information on the authorization server.
Obviously I have no idea whether they really use this at large scale, but it is certainly a possible implementation, and it seems plausible to me to do it with databases and rule sets.
Hope this helps. Maybe you find even more patents which describe similar frameworks.
My team and I are handling hundreds of subscriptions that belong to different teams.
Many of them have different needs in terms of security, services to be used, etc., whereas we, as the central platform team, also make sure that everyone works with the same baseline (security, monitoring, automation, etc.).
We of course need to handle RBAC, and we use custom roles a lot. I was wondering whether there is a way to create a custom role based on another one, to benefit from "classic" inheritance.
For example, I could create a role named "basic_user" that contains a set of "Actions", an "advanced_user" role that has the "basic_user" accesses plus additional ones, and so on with "super_advanced_user".
I know that Microsoft has so far designed it the opposite way, allowing us to assign multiple roles to a given individual/group, but for internal design reasons we would like to stick to one role assignment per recipient (one AAD group containing all people according to their role).
Is this technically feasible/reproducible, or has anyone heard of such a feature? Or is it something we should not consider, for reasons you'd want to highlight?
The feature you would like to implement, as you described it, is not currently available, as you are already aware. However, you can request this feature directly via this link. It will be reviewed by the Microsoft engineering teams, and they will respond.
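In the meantime, one workaround sketch (this is not an Azure feature; it assumes you generate and maintain the custom role definitions yourself, e.g. from a deployment pipeline, and the specific actions below are only examples) is to compose the "Actions" lists in code so that "advanced_user" is always built from "basic_user":

```python
# Hypothetical sketch: emulate role "inheritance" by generating custom
# role definitions whose Actions lists are composed from a base role.
# Azure itself has no such inheritance; this only keeps your JSON in sync.
import json

basic_actions = [
    "Microsoft.Resources/subscriptions/resourceGroups/read",
    "Microsoft.Support/*",
]

advanced_actions = basic_actions + [
    "Microsoft.Compute/virtualMachines/read",
]

def role_definition(name, actions, scope):
    """Build a custom role definition document for the given scope."""
    return {
        "Name": name,
        "IsCustom": True,
        "Actions": actions,
        "NotActions": [],
        "AssignableScopes": [scope],
    }

scope = "/subscriptions/00000000-0000-0000-0000-000000000000"
print(json.dumps(role_definition("advanced_user", advanced_actions, scope), indent=2))
```

You still end up with independent role definitions in Azure, but the hierarchy lives in one place and stays consistent.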
What is the best practice for protecting private and confidential documents/information/messages/emails related to senior management in a network that is run by an IT department and IT staff?
As you know, IT staff have access to everything, so how can we be sure that highly classified reports and information are protected from them?
My experience suggests that the simplest approach is the best approach...
In general:
Restrict the access list to the intended audience and a small group of data custodians. This minimizes your "surface area." Remember that not everyone in IT needs access to everything.
Actively review the “access list” and revise as necessary, conducting regular security audits.
Mandate and maintain security awareness training for those who are granted access as users and as data administrators.
Enable logging on sensitive resources, with alerts for audit failures.
Refer to applicable standards for the industry your organization is part of (e.g. FERPA, HIPAA, PCI, SOX, etc.) to ensure that special requirements are met.
Ensure that you do not overlook the Personally Identifiable Information (PII) category of security.
Restrict physical access to the location (servers) where the data is stored.
Encrypt the files/disks on which the data is stored (a minimal file-encryption sketch follows this list).
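For that last point, a minimal Python sketch of file-level encryption using the third-party cryptography package (the document content is a stand-in, and key management, which is the real problem here, is deliberately out of scope):

```python
# Minimal encryption sketch using the "cryptography" package
# (pip install cryptography). Key management is deliberately omitted:
# the key must live somewhere IT administrators cannot reach, or the
# encryption buys you nothing.
from cryptography.fernet import Fernet

# Generate a key once and store it with the data custodians
# (e.g. in an HSM or a key vault they control).
key = Fernet.generate_key()
fernet = Fernet(key)

secret_report = b"Q3 board pack - draft, confidential"   # stand-in for a real document
token = fernet.encrypt(secret_report)                     # safe to store on IT-managed servers

# Only holders of the key can recover the plaintext:
assert fernet.decrypt(token) == secret_report
```

The point is that whoever administers the file server does not automatically hold the key; transparent or full-disk encryption alone does not protect against administrators of the running system.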
This is linked to this question, which seems to have been asked a while back: security implementation in a project that adheres to the basic principles of domain-driven design. Let me give an example.
Banking System:
Use case: a new bank deposit is being made and requires approval because it is the first deposit.
a. A clerk can auto-authorize if the deposit amount is <5000.
b. A manager can be of two types: bank manager or account manager. ONLY an account manager can authorize accounts that have a deposit >5000.
My concerns are as follows (please correct me if the concern itself is misplaced):
I am not sure where I should build the following logic: checking whether the logged-on user is authorized to do certain things, taking into account his title (in this case, account manager). Authorizing is a use case, but the security layer seems to need intimate knowledge of the domain object.
In general, authorization (not authentication): I know that role-based access control would help, but the question is "where", i.e. in which layer, and what the call flow should be. Should the UI layer call some security layer, or should the domain layer validate itself for all possible combinations?
Please help. It's very confusing.
Bump to see if this gets the experts' notice.
Cheers
Security is a cross-cutting concern which can affect all classes, methods and properties.
From a DDD perspective you would go with specifications and roles.
Where and how those specifications get implemented comes down to your architecture. You could go with aspects, you could go with in-line calls, events, etc.
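As a minimal sketch of what a specification-based check could look like for your deposit example (Python, all names made up; exactly which roles may handle small deposits is my assumption), the key idea is that the rule lives in the domain layer and the application layer simply asks it:

```python
# Hypothetical sketch of the deposit-approval rule as a domain
# specification. The application/service layer asks the specification;
# the UI never encodes the rule itself.
from dataclasses import dataclass

@dataclass
class User:
    name: str
    role: str          # "clerk", "bank_manager", "account_manager"

@dataclass
class Deposit:
    amount: float

class CanAuthorizeFirstDeposit:
    """Domain specification: who may authorize a first deposit."""
    def is_satisfied_by(self, user: User, deposit: Deposit) -> bool:
        if deposit.amount >= 5000:
            # only account managers above the threshold
            return user.role == "account_manager"
        # assumption: clerks and account managers handle small deposits
        return user.role in ("clerk", "account_manager")

# Application-layer usage:
spec = CanAuthorizeFirstDeposit()
print(spec.is_satisfied_by(User("amy", "clerk"), Deposit(4000)))             # True
print(spec.is_satisfied_by(User("bob", "bank_manager"), Deposit(9000)))      # False
print(spec.is_satisfied_by(User("cara", "account_manager"), Deposit(9000)))  # True
```

The UI then only needs to know whether the specification was satisfied; it never encodes the 5000 threshold or the role names itself.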
Here are some links I would check out regarding security and roles:
Security
Roles
RBAC