Grails + Securing Application

I'm working on a legacy Grails application.
I have a few tables like this:
User (id, name, enterprise_id)
Enterprise (id, name)
Asset (id, description, enterprise_id)
I want to validate that when a certain user wants to access an asset, it has the right enterprise_id (i.e., that the user belongs to the same enterprise as the asset).
For example, given John, a user from Microsoft, and Charles, a user from Oracle, only Charles should be able to access the Java Virtual Machine.
Enterprise
id  name
-------------
1   Oracle
2   Microsoft

Asset
id  description  enterprise_id
-------------------------------
1   Java VM      1
2   .NET         2

User
id  name     enterprise_id
---------------------------
1   John     2
2   Charles  1
I've been reading up on Spring Security, but it doesn't look like it can help me. All I see is user authentication, passwords, roles, etc. (of course, I could be wrong). Those things are already secured and working OK. For the moment I'm considering filters, which I can't make work, and rolling my own security (see this question), which doesn't seem right.
Any thoughts? Is Spring Security the way to go? Shiro?
Thanks in advance

You could implement this with spring-security-acl (which depends on spring-security-core).
Otherwise you could implement a two-phase approach (authentication + authorization) with a set of object-level authorization filters.
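For concreteness, here is a minimal, hedged sketch of what that object-level check could look like as a custom Spring Security PermissionEvaluator (part of spring-security-core). The User and Asset stand-ins below mirror the tables in the question and are assumptions for the sketch, as is the assumption that your authentication layer puts the domain user in the principal:

import java.io.Serializable;

import org.springframework.security.access.PermissionEvaluator;
import org.springframework.security.core.Authentication;

public class EnterprisePermissionEvaluator implements PermissionEvaluator {

    @Override
    public boolean hasPermission(Authentication auth, Object target, Object permission) {
        if (!(auth.getPrincipal() instanceof User) || !(target instanceof Asset)) {
            return false; // deny anything this evaluator does not understand
        }
        User user = (User) auth.getPrincipal();
        Asset asset = (Asset) target;
        // The actual rule: a user may only touch assets of their own enterprise.
        return user.enterpriseId != null && user.enterpriseId.equals(asset.enterpriseId);
    }

    @Override
    public boolean hasPermission(Authentication auth, Serializable targetId,
                                 String targetType, Object permission) {
        return false; // id-based checks would need a repository lookup first
    }

    // Stand-ins for the question's domain classes (assumptions, sketch only).
    static class User  { Long id; String name; Long enterpriseId; }
    static class Asset { Long id; String description; Long enterpriseId; }
}

Registered as the evaluator in your method-security configuration, this would let you guard a service or controller method with @PreAuthorize("hasPermission(#asset, 'read')") instead of scattering enterprise_id checks around.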

I'm using the Hibernate Filter plugin for this. There is also the MultiTenant plugin and its companion, the Falcone plugin.
What these do is basically add constraints to all DB queries, which is just what I think you are aiming for. A typical solution for you (with Hibernate Filter) would be to add this to the Asset domain class (changing the filter name for each new domain)...
static hibernateFilters = {
    assetEnterpriseFilter(condition: ':enterpriseId=enterprise_id', types: 'integer', default: true)
}
...and extract the HibernateFilterFilters class from the plugin to override it like this (setting the session variable as a parameter)...
class HibernateFilterFilters {
    def filters = {
        all(controller: '*', action: '*') {
            before = {
                def hibernateSession = grailsApplication.mainContext.sessionFactory.currentSession
                DefaultHibernateFiltersHolder.defaultFilters.each { name ->
                    hibernateSession.enableFilter(name).setParameter('enterpriseId',
                        session?.enterpriseId ? session.enterpriseId.toInteger() : new Integer(0))
                }
            }
            after = {
            }
            afterView = {
            }
        }
    }
}
...and make sure not to use enterprise_id = 0 in the DB.

Apache Shiro has access control built in, and there is a Grails plugin for it as well.
Authentication is the act of proving that someone is who they say they are - i.e. logging in to an application. Authorization is the process of controlling access to certain data or application features (controlling 'who' can do 'what').
Shiro has both of these concepts built in to its API and does them quite well - you can even control access to individual instances (for example, 'view' the 'user' with id 12345, etc). I highly recommend looking at the Grails plugin for Shiro as well as Shiro's distribution - it includes a few sample web applications (with and without Spring), and you can see how to use its access control - either with servlet filters for URL-based resource control or via annotations to protect individual methods.
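To make that concrete, here is a short, hedged illustration of an instance-level check applied to the asset scenario from the question. The "asset:view:<id>" permission string is a convention you would define and grant inside your own realm, and the controller class is illustrative:

import org.apache.shiro.SecurityUtils;
import org.apache.shiro.subject.Subject;

public class AssetController {

    public void viewAsset(long assetId) {
        Subject currentUser = SecurityUtils.getSubject();

        // Throws AuthorizationException unless the user's realm granted a
        // matching permission, e.g. "asset:view:1" or the wildcard "asset:*".
        currentUser.checkPermission("asset:view:" + assetId);

        // ...load and render the asset...
    }
}

If your realm grants each user permissions only for assets of their own enterprise, you get exactly the John/Charles behavior described in the question.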
HTH,
Les

Related

I wrote a Liferay module. How to make it configurable by administrators?

I have created a Liferay 7 module, and it works well.
Problem: In the Java source code I hard-coded something that administrators need to modify.
Question: What is the Liferay way to externalize settings? I don't mind if the server has to be restarted, but of course the ability to modify settings on a live running server (via Gogo Shell?) could be cool provided that these settings then survive server restarts.
More specifically, I have a module for which I would like to be able to configure an API key that looks like "3g9828hf928rf98" and another module for which I would like to configure a list of allowed structures that looks like "BASIC-WEB-CONTENT","EVENTS","INVENTORY".
Liferay utilizes the standard OSGi configuration mechanism. Documenting it fully here is quite a task, but it's well laid out in the documentation.
In short:
import aQute.bnd.annotation.metatype.Meta;

@Meta.OCD(id = "com.foo.bar.MyAppConfiguration")
public interface MyAppConfiguration {

    @Meta.AD(
        deflt = "blue",
        required = false
    )
    public String favoriteColor();

    @Meta.AD(
        deflt = "red|green|blue",
        required = false
    )
    public String[] validLanguages();

    @Meta.AD(required = false)
    public int itemsPerPage();
}
OCD stands for ObjectClassDefinition. It ties this configuration class/object to the configurable object through the id/pid.
AD is for AttributeDefinition and provides some hints for the configuration interface, which is auto-generated with the help of this meta type.
And when you don't like the appearance of the autogenerated UI, you "only" have to add localization keys for the labels that you see on screen (standard Liferay translation).
You'll find a lot more detail on OSGi configuration, for example on OSGi enRoute, though the examples I found are always a bit more complex than just reading the configuration.
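For completeness, here is a hedged sketch of a component that consumes the configuration above. ConfigurableUtil is Liferay's helper for mapping the OSGi properties map onto the typed interface; the component class itself is illustrative:

import java.util.Map;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Modified;

import com.liferay.portal.configuration.metatype.bnd.util.ConfigurableUtil;

@Component(configurationPid = "com.foo.bar.MyAppConfiguration")
public class MyAppComponent {

    @Activate
    @Modified
    protected void activate(Map<String, Object> properties) {
        // Re-read the typed configuration whenever an administrator saves
        // changes in System Settings; the values survive server restarts.
        _configuration = ConfigurableUtil.createConfigurable(
            MyAppConfiguration.class, properties);
    }

    public String favoriteColor() {
        return _configuration.favoriteColor();
    }

    private volatile MyAppConfiguration _configuration;
}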

ASP.NET MVC 5 custom RazorViewEngine for multiple portal structure

I set up my MVC 5 site by category, with controller, model, and view sub-folders in each category; i.e., root folders \Home and \Products would have these three sub-folders, as well as a root \Shared\Views folder. I followed a terrific article by Matthew Renz, Clean Architecture in ASP.NET MVC 5. This was done in part by creating a custom RazorViewEngine, specifically:
public CustomRazorViewEngine()
{
    ViewLocationFormats = new string[]
    {
        "~/{1}/Views/{0}.cshtml",
    };
    PartialViewLocationFormats = new string[]
    {
        "~/Shared/Views/{0}.cshtml"
    };
}
There aren't many changes beyond that. I was wondering if I could build on this idea and set up a website project with a \Portals root folder and sub-folders for each portal using some identifier (name or number) - similar to DNN. The changes to the custom Razor view engine code might look something like:
public CustomRazorViewEngine()
{
    ViewLocationFormats = new string[]
    {
        "~/Portals/{2}/{1}/Views/{0}.cshtml",
    };
    PartialViewLocationFormats = new string[]
    {
        "~/Portals/{2}/Shared/Views/{0}.cshtml"
    };
}
I am not sure where the values {0} and {1} come from, however. I could find a means of obtaining {2}, the portal website name. The relative paths for the rest of the site, such as \Content and \Scripts, I believe I could structure myself.
The purpose for this approach is to deliver to the client a solution in which common code can be reused to support a number of portals with unique skins and features. Thank you for your time and consideration and let me know if you have any questions.
John
These are placeholders in the location format string that the view engine replaces when resolving a view: {2} is the area, {1} is the controller, and {0} is the action.
You may also be interested to know that when using ASP.NET Core it's easy to get the standard Razor view engine to locate views in custom locations via a ViewLocationExpander, rather than needing to create a new view engine that inherits from the Razor view engine. I only mention this because you added the asp.net-core-mvc tag to your question.
Here is a stack overflow answer that shows how:
How to specify the view location in asp.net core mvc when using custom locations?

How to set Parse Installation Class security (class-level permissions and/or ACL)?

I'm developing a Parse App and currently checking the backend security. I'm a bit lost regarding the Installation Class permissions. It is (by default) readable and writable by everyone. Thus, any user could delete every object of the class.
My question is: is it protected by default like the User class? Or should I add ACL for every new registration to push notifications? Or change the class level permissions?
Many thanks for your help,
Parse defaults to public read/write access for everything outside of User to streamline development.
Security measures will vary from one app to another depending on use-case, but assuming that you have associated each Installation to a User, I would highly recommend applying an ACL which gives public read and limits writes to the specific user.
In case you are not already associating each Installation with a User, here's a nice piece of Cloud Code to take care of it for you.
Parse.Cloud.beforeSave(Parse.Installation, function(request, response) {
    Parse.Cloud.useMasterKey();
    if (request.user) {
        request.object.set('user', request.user);
    } else {
        request.object.unset('user');
    }
    response.success();
});
It's a good place to start by creating ACLs which provide public read and user-specific write access. That one step alone will drastically improve security.

Virus Scanning Uploaded files from Azure Web/Worker Role

We are designing an Azure website which will allow users to upload content (MP4, Docx, ... MS Office files) which can then be accessed.
Some video content we will encode to provide several differing quality formats before it is streamed (using Azure Media Services).
We need to add an intermediate step so we can scan uploaded files for potential virus risk. Is there functionality built into azure (or third party) which will allow us to call an API to scan content before processing it? We are ideally looking for an API rather than just a background service on a VM, so we can get feedback potentially for use in a web or worker role.
I had a quick look at Symantec Endpoint and Windows Defender, but I'm not sure these offer an API.
I have successfully done this using the open-source ClamAV. You don't specify what languages you are using, but as it's Azure I'll assume .NET.
There is a .Net wrapper that should provide the API that you are looking for:
https://github.com/tekmaven/nClam
Here is some sample code (note: this is copied directly from the nClam GitHub repo page and reproduced here just to protect against link rot):
using System;
using System.Linq;
using nClam;

class Program
{
    static void Main(string[] args)
    {
        var clam = new ClamClient("localhost", 3310);
        var scanResult = clam.ScanFileOnServer("C:\\test.txt");  //any file you would like!

        switch (scanResult.Result)
        {
            case ClamScanResults.Clean:
                Console.WriteLine("The file is clean!");
                break;
            case ClamScanResults.VirusDetected:
                Console.WriteLine("Virus Found!");
                Console.WriteLine("Virus name: {0}", scanResult.InfectedFiles.First().VirusName);
                break;
            case ClamScanResults.Error:
                Console.WriteLine("Woah an error occured! Error: {0}", scanResult.RawResult);
                break;
        }
    }
}
There are also APIs available for refreshing the virus definition database. All the necessary ClamAV files can be included in the deployment package and any configuration can be put into the service start-up code.
ClamAV is a good idea, especially now that 0.99 is about to be released with YARA rule support - it will make it really easy for you to write custom rules, and it lets ClamAV use the tons of good YARA rules in the open today.
Another route, and a bit of shameless plugging, is to check out scanii.com. It's a SaaS for malware/virus detection, and it integrates quite nicely with AWS and Azure.
There are a number of options to achieve this:
Firstly, you can use ClamAV as already mentioned. ClamAV doesn't always receive the best press for its virus databases, but as others have pointed out it's easy to use and is expandable.
You can also install a commercial scanner, such as AVG or Kaspersky. Many of these come with a C API that you can talk to directly, although getting access to this can often be expensive from a licensing point of view.
Alternatively you can make calls to the executable directly using something like the following to capture the output:
var proc = new Process {
    StartInfo = new ProcessStartInfo {
        FileName = "scanner.exe",
        Arguments = "arguments needed",
        UseShellExecute = false,
        RedirectStandardOutput = true,
        CreateNoWindow = true
    }
};

proc.Start();
while (!proc.StandardOutput.EndOfStream) {
    string line = proc.StandardOutput.ReadLine();
    // Inspect each line of the scanner's output here.
}
You would then need to parse the output to get the result and use it within your application.
Finally, there are now some commercial APIs available to do this kind of thing, such as attachmentscanner (disclaimer: I'm related to this product) or scanii. These will provide you with an API and a more scalable option to scan specific files and receive the response from at least one virus-checking engine.
There's a new option coming in Spring/Summer 2020: Advanced Threat Protection for Azure Storage includes malware reputation screening, which detects malware uploads using hash reputation analysis leveraging the power of Microsoft Threat Intelligence, including hashes for viruses, trojans, spyware, and ransomware. Note: hash reputation analysis cannot guarantee that every piece of malware will be detected.
https://techcommunity.microsoft.com/t5/Azure-Security-Center/Validating-ATP-for-Azure-Storage-Detections-in-Azure-Security/ba-p/1068131

Microsoft Unity - How to register connectionstring as a parameter to repository constructor when it can vary by client?

I am relatively new to IoC containers, so I apologize in advance for my ignorance.
My application is an ASP.NET 4.0 MVC app that uses the Entity Framework with a repository layer on top of that. It is a multi-tenant application, so the connection string that is used varies by the logged-in client.
The connection string is determined by a 'key' that gets passed in as part of the route which indicates the client. This route data is only present on the first request of the user's session.
The route looks kind of like this: http://{host}/login/dev/
where 'dev' indicates we are using the dev database.
Currently the IoC container is registering all dependencies in the global.asax Application_Start event handler and I have the 'key' hardcoded as follows:
var cnString = CommonServices.GetDBConnection("dev");
container.RegisterType<IRequestMgmtRecipientRepository, RequestMgmtRecipientRepository>(
    new InjectionConstructor(cnString));
Is there a way with Unity to dynamically register the repository based on the logged in client using the route data that is supplied initially?
Note: I am not manually resolving the repositories. They are getting constructed by the container when the controllers get instantiated.
I am stumped.
Thanks!
Quick assumption: you can use the host to identify your tenant.
The following article takes a slightly different approach: http://www.agileatwork.com/bolt-on-multi-tenancy-in-asp-net-mvc-with-unity-and-nhibernate-part-ii-commingled-data/. It's using NHibernate, but the idea is usable here.
Based on the above, this hacked-together code may work (I have not tried or compiled the following - I'm not much of a Unity user, more of a Windsor person :))
Container.RegisterType<IRequestMgmtRecipientRepository, RequestMgmtRecipientRepository>(
    new InjectionFactory(c =>
    {
        // You can also get the host via a static accessor:
        // HttpContext.Current.Request.Url.Host, if I remember correctly.
        var context = c.Resolve<HttpContextBase>();
        var host = context.Request.Headers["Host"] ?? context.Request.Url.Host;
        var connStr = CommonServices.GetDBConnection("dev_" + host); // assumed
        return new RequestMgmtRecipientRepository(connStr);
    }));
Scenario 2 (I do not think this was the case): if the client identifies the tenant rather than the host (i.e. http://host1), this suggests you would already need access to a database to look up the client information. In that case, the database which holds the client information will also need enough information to identify the tenant.
The issue with scenario 2 will arise around anonymous users: which tenant is being accessed?
Assuming scenario 2, the InjectionFactory approach should still work.
Hope this helps
