I am developing an application with Vaadin and I use Content-Security-Policy in my BootstrapListener. When I test my application with OWASP ZAP, it flags script-src 'unsafe-inline' as a medium-risk finding. When I remove 'unsafe-inline', my application doesn't work.
My code:
String csp = "";
String defaultSrc = "default-src 'none'";
String styleSrc = "style-src 'unsafe-inline' 'self'";
String fontSrc = "font-src 'self'";
String scriptSrc = "script-src 'unsafe-inline' 'unsafe-eval' 'self'";
String imgSrc = "img-src 'self'";
String connectSrc = "connect-src 'self'";
String frameAncestors = "frame-ancestors 'self'";
String formAction = "form-action 'self'";
csp = Arrays.asList(defaultSrc, styleSrc, fontSrc, scriptSrc, imgSrc, connectSrc, frameAncestors, formAction).stream().collect(Collectors.joining("; "));
As per the Vaadin documentation, using scriptSrc = "script-src 'unsafe-inline' 'unsafe-eval' 'self'"; is a known "limitation", an architectural choice by the devs that you can't change without major modifications to the framework:
The settings script-src 'unsafe-inline' 'unsafe-eval' and style-src
'unsafe-inline' are required during Vaadin application start, that is,
the bootstrap process. The bootstrap process that starts the
application loads the widget set which is the client-side engine part
of the application. This consists of precompiled JavaScript logic, for
example, for the communication protocol, DOM control, Buttons,
Layouts, etc., but not the application code. The widget set is a
static resource. After it is loaded, the client-side engine needs to
be started using JavaScript.eval().
Hence, these settings are architectural limitations in Vaadin, so that
the framework can start its client-side engine in the browser.
Reported as: Missing or insecure “Content-Security-Policy” header
Security-wise (XSS/code injection), what you can do (or may already be doing) is use the built-in escaping for outputs:
Div div = new Div();
// These are safe as they treat the content as plain text
div.setText("<b>This won't be bolded</b>");
div.getElement().setText("<b>This won't be bolded either</b>");
div.setTitle("<b>This won't be bolded either</b>");
// These are NOT safe
div.getElement().setProperty("innerHTML", "<b>This IS bolded</b>");
div.add(new Html("<b>This IS bolded</b>"));
new Checkbox().setLabelAsHtml("<b>This is bolded too</b>");
and sanitization:
String safeHtml = Jsoup.clean(dangerousText, Whitelist.relaxed());
new Checkbox().setLabelAsHtml(safeHtml);
Furthermore, there is a reason those keywords are marked "unsafe-": if there is a flaw in the framework, or you miss an escaping step, CSP cannot tell injected code apart from your own. You should always "tag" your own trusted scripts by putting them in external files or by using a nonce.
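A note on nonces: a nonce only helps if it is freshly generated for every response and then placed both in the header and in the nonce attribute of each inline <script> tag; a hard-coded value defeats its purpose. A minimal generation sketch (hypothetical CspNonce helper, not a Vaadin API):

```java
import java.security.SecureRandom;
import java.util.Base64;

// Hypothetical helper (not part of Vaadin): generates a fresh nonce per response.
public class CspNonce {
    private static final SecureRandom RANDOM = new SecureRandom();

    public static String newNonce() {
        byte[] bytes = new byte[16]; // 128 bits of randomness
        RANDOM.nextBytes(bytes);
        return Base64.getEncoder().encodeToString(bytes);
    }

    public static void main(String[] args) {
        String nonce = newNonce();
        // The same value must also appear in the nonce="" attribute of every
        // inline <script> the browser should be allowed to run.
        String scriptSrc = "script-src 'nonce-" + nonce + "' 'self'";
        System.out.println(scriptSrc);
    }
}
```

Because Vaadin's bootstrap emits its own inline scripts without any nonce attribute, sending a nonce in the header alone cannot replace 'unsafe-inline' here without support from the framework.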
So if I use this code
scriptSrc = "script-src 'nonce-rAnd0m' 'unsafe-eval' 'self'";
It doesn't work; I get this error:
Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'nonce-rAnd0m' 'unsafe-eval' 'self'". Either the 'unsafe-inline' keyword, a hash ('sha256-2+3KFFww9bjFUMmzU872aJ+b2DMgLMn/Hel8bO8Y9xg='), or a nonce ('nonce-...') is required to enable inline execution.
When using the ESLint rule no-use-before-define, the following Svelte component gets an ESLint error:
<script>
const someVariable = 'hello world'
</script>
{someVariable}
2:1 error 'someVariable' was used before it was defined no-use-before-define
Is this bad practice? Is 'someVariable' actually used before it's defined? (It doesn't look like it to me.)
If it's totally fine, is there a way to turn off the rule for this specific case?
I know it's possible to turn off ESLint rules on a per-file basis, but it would still be great to keep the rule on, to warn against the following code:
<script>
const someVariable = helloWorld
const helloWorld = 'hello world'
</script>
{someVariable}
If using TypeScript, you can use svelte-check instead of ESLint to check this rule. Since svelte-check is designed to understand Svelte syntax, it understands this pattern correctly. Then, you can just turn off the ESLint rule for Svelte files.
https://www.npmjs.com/package/svelte-check
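If you keep ESLint around for your plain JS/TS files, the rule can be switched off for Svelte files only through an override; a minimal .eslintrc.cjs sketch (the file name and the *.svelte matching are assumptions, adjust to your setup):

```javascript
// .eslintrc.cjs (sketch)
module.exports = {
    rules: {
        // keep the rule on everywhere else
        'no-use-before-define': 'error'
    },
    overrides: [
        {
            // the code extracted from Svelte components can trip this rule;
            // let svelte-check cover real violations in these files
            files: ['*.svelte'],
            rules: {
                'no-use-before-define': 'off'
            }
        }
    ]
};
```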
The current node is coming up null. I can't figure out how to make MvcSiteMapProvider resolve the node under this circumstance.
Here's the node it needs to match
<mvcSiteMapNode title="Policy" route="Details" typeName="Biz.ABC.ShampooMax.Policy" preservedRouteParameters="id" />
Here's the route:
routes.MapRoute(
"Details",
"Details/{id}",
new { Controller = "Object", Action = "Details" }
).RouteHandler = new ObjectRouteHandler();
The link that gets clicked:
http://localhost:36695/MyGreatWebSite/Details/121534455762071
It's hitting the route ok. Just the MvcSiteMapProvider.SiteMaps.Current.CurrentNode is null.
A null result for CurrentNode indicates the incoming request doesn't match any node in the SiteMap. In your case, there are four different problems that may be contributing to this:
The URL you are inputting http://localhost:36695/MyGreatWebSite/Details/121534455762071 does not (necessarily) match the URL pattern you specified in the route, "Details/{id}". It may if your site is hosted as an IIS application under IISROOT/MyGreatWebSite/.
Your mvcSiteMapNode doesn't specify a controller or action to match. route only acts as an additional criterion (meaning only the named route will be considered in the match); all of the route's parameters also need to be provided in order for there to be a match.
You are passing in a custom RouteHandler, which could alter how the route matches the URL. Without seeing the code from your ObjectRouteHandler, it is impossible to tell if or how that will affect how the route matches the URL.
You have a custom attribute, typeName in your mvcSiteMapNode configuration. Unless you have specified to ignore this attribute, it will also be required in the URL to match, i.e. http://localhost:36695/MyGreatWebSite/Details/121534455762071?typeName=Biz.ABC.ShampooMax.Policy.
I recommend against using a custom RouteHandler for the purpose of matching URLs. The effect of doing so makes your incoming routes (URLs into MVC) act differently than your outgoing routes (URLs generated to link to other pages). Since MvcSiteMapProvider uses both parts of the route, it will cause URL generation problems if you only change the incoming routes without also changing the outgoing routes to match. Instead, I recommend you subclass RouteBase, where you can control both sides of the route. See this answer for an example of a custom RouteBase subclass.
However, do note that conventional routing is pretty powerful out of the box and you probably don't need to subclass RouteBase for this simple scenario.
Solution
I arrived at the answer by adding the mvc sitemap provider project into my own solution and stepping through the mvc sitemap provider code to see why my node wasn't being matched. A few things had to be changed. I fixed it by doing the following:
Mvc.sitemap
<mvcSiteMapNode title="Policy" controller="Object" action="Details" typeName="Biz.ABC.ShampooMax.Policy" preservedRouteParameters="id" roles="*"/>
RouteConfig.cs
routes.MapRoute(
name: "Details",
url: "details/{id}",
defaults: new { controller = "Object", action = "Details", typeName = "*" }
).RouteHandler = new ObjectRouteHandler();
Now at first it didn't want to work like this, but I modified the provider like so:
RouteValueDictionary.cs (added wildcard to match value)
protected virtual bool MatchesValue(string key, object value)
{
return this[key].ToString().Equals(value.ToString(), StringComparison.OrdinalIgnoreCase) || value.ToString() == "*";
}
SiteMapNode.cs (changed requestContext.RouteData.Values)
/// <summary>
/// Sets the preserved route parameters of the current request to the routeValues collection.
/// </summary>
/// <remarks>
/// This method relies on the fact that the route value collection is request cached. The
/// values written are for the current request only, after which they will be discarded.
/// </remarks>
protected virtual void PreserveRouteParameters()
{
if (this.PreservedRouteParameters.Count > 0)
{
var requestContext = this.mvcContextFactory.CreateRequestContext();
var routeDataValues = requestContext.HttpContext.Request.RequestContext.RouteData.Values;// requestContext.RouteData.Values;
I think the second modification wasn't strictly necessary because my request context wasn't cached; it would have worked if it was. I didn't know how to get it cached.
It's the first modification to make route values honor a wildcard (*) that made it work. It seems like a hack and maybe there's a built in way.
Note
Ignoring the typeName attribute with:
web.config
<add key="MvcSiteMapProvider_AttributesToIgnore" value="typeName" />
makes another node break:
Mvc.sitemap
<mvcSiteMapNode title="Policies" url="~/Home/Products/HarvestMAX/Policy/List" productType="HarvestMax" type="P" typeName="AACOBusinessModel.AACO.HarvestMax.Policy" roles="*">
so that's why I didn't do that.
I write JScript and run it with the Windows Script Host. However, it seems to be missing Array.forEach().
['a', 'b'].forEach(function(e) {
WSH.Echo(e);
});
Fails with "test.js(66, 2) Microsoft JScript runtime error: Object doesn't support this property or method".
That can't be right? Does it really lack Array.forEach()? Do I really have to use one of the for-loop-variants?
JScript uses the JavaScript feature set as it existed in IE8. Even in Windows 10, the Windows Script Host is limited to JScript 5.7. This MSDN documentation explains:
Starting with JScript 5.8, by default, the JScript scripting engine supports the language feature set as it existed in version 5.7. This is to maintain compatibility with the earlier versions of the engine. To use the complete language feature set of version 5.8, the Windows Script interface host has to invoke IActiveScriptProperty::SetProperty.
... which ultimately means, since cscript.exe and wscript.exe have no switches allowing you to invoke that method, Microsoft advises you to write your own script host to unlock the Chakra engine.
There is a workaround, though. You can invoke the htmlfile COM object, force it to IE9 (or 10 or 11 or Edge) compatibility, then import whatever methods you wish -- including Array.forEach(), JSON methods, and so on. Here's a brief example:
var htmlfile = WSH.CreateObject('htmlfile');
htmlfile.write('<meta http-equiv="x-ua-compatible" content="IE=9" />');
// And now you can use htmlfile.parentWindow to expose methods not
// natively supported by JScript 5.7.
Array.prototype.forEach = htmlfile.parentWindow.Array.prototype.forEach;
Object.keys = htmlfile.parentWindow.Object.keys;
htmlfile.close(); // no longer needed
// test object
var obj = {
"line1" : "The quick brown fox",
"line2" : "jumps over the lazy dog."
}
// test methods exposed from htmlfile
Object.keys(obj).forEach(function(key) {
WSH.Echo(obj[key]);
});
Output:
The quick brown fox
jumps over the lazy dog.
There are a few other methods demonstrated in this answer -- JSON.parse(), String.trim(), and Array.indexOf().
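If pulling in the htmlfile COM object feels too heavy, you can also hand-roll the handful of methods you need. A minimal Array.prototype.forEach sketch that works in JScript 5.7 (simplified, not a fully spec-compliant shim):

```javascript
// Assign our own implementation; JScript 5.7 has no native forEach.
// (In shared code, guard this with a typeof check first.)
Array.prototype.forEach = function (callback, thisArg) {
    for (var i = 0; i < this.length; i++) {
        if (i in this) { // skip holes in sparse arrays
            callback.call(thisArg, this[i], i, this);
        }
    }
};

var seen = [];
['a', 'b'].forEach(function (e) {
    seen.push(e);
});
// seen is now ['a', 'b']
```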
I am using a library that (very selfishly, IMHO) assumes that the baseUrl will point to the company's CDN:
baseUrl: "http://cdn.wijmo.com/amd-js/"
At first I thought that I would just copy the contents of the above Url to my own folder (such as /scripts/wijmo/amd-js), but that doesn't work because the good folks at Wijmo hardcoded path references in their AMD define statements, such as this:
define(["./wijmo.widget"], function () { ... });
What the above means (if I understand things properly) is that if you have any other non-Wijmo AMD modules then you must either:
(a) place them under the amd-js path, perhaps in a sub-folder named "myScripts"
(b) use hard-coded RequireJS path references to your own AMDs, like this:
paths: {
"myAMD_1": "http://www.example.com/scripts/myScripts/myAMD_1",
"myAMD_2": "/scripts/myScripts/myAMD_2.js"
}
(a) works, but it means that the baseUrl cannot point to the Wijmo CDN: since I don't have access to the Wijmo CDN site, I must move the files published under amd-js to my own server.
(b) sort of works, and here is my problem: if I use the myAMD_1 syntax then all is well, but that doesn't let me test on my local development machine, which uses localhost. (I don't want to get into detecting which server I'm running on and customizing the paths value... I want the path to remain the same before and after I publish to my HTTP server.)
Now, according to the RequireJS documentation:
There may be times when you do want to reference a script directly and not conform to the "baseUrl + paths" rules for finding it. If a module ID has one of the following characteristics, the ID will not be passed through the "baseUrl + paths" configuration, and just be treated like a regular URL that is relative to the document:
* Ends in ".js".
* Starts with a "/".
* Contains an URL protocol, like "http:" or "https:".
When I try to end (terminate) my path reference with .js (as shown in myAMD_2 above), RequireJS doesn't find my AMD because it ends up looking for myAMD_2.js.js (notice the two .js suffixes). So it looks like RequireJS is not honoring what the docs describe as its path-resolution algorithm. With the .js suffix not working, I can't easily fix references to my own AMDs because I don't know for sure what server name or physical path structure they'll be published to--I really want to use relative paths.
Finally, I don't want to change Wijmo's AMD modules, not only because there are dozens of them, but also because I would need to re-apply my customizations each time they issue a Wijmo update.
So if my baseUrl has to point to a hard-coded Wijmo path then how can I use my own AMDs without placing them in a subfolder under the Wijmo path and without making any fixed-path or Url assumptions about where my own AMDs are published?
I can suggest a couple of approaches to consider here--both have some drawbacks, but can work.
The first approach is to provide a path for each and every Wijmo module that needs to be loaded. This will work, but you have touched on the obvious drawbacks of this approach in the description of the question: Wijmo has many modules that will need to be referenced, and maintaining the module list across updates in the future may be problematic.
If you can live with those drawbacks, here is what the RequireJS config would look like:
require.config({
paths: {
'wijmo.wijgrid': 'http://cdn.wijmo.com/amd-js/wijmo.wijgrid',
'wijmo.widget': 'http://cdn.wijmo.com/amd-js/wijmo.widget',
'wijmo.wijutil': 'http://cdn.wijmo.com/amd-js/wijmo.wijutil',
// ... List all relevant Wijmo modules here
}
});
require(['wijmo.wijgrid'], function() { /* ... */ });
The second approach is to initially configure the RequireJS baseUrl to load the Wijmo modules. Then once Wijmo modules have been loaded, re-configure RequireJS to be able to load your local app modules. The drawback of this approach is that all the Wijmo modules will need to be loaded up front, so you lose the ability to require Wijmo modules as needed within your own modules. This drawback will need to be balanced against the nastiness of listing out explicit paths for all the Wijmo modules as done in the first approach.
For example:
require.config({
baseUrl: 'http://cdn.wijmo.com/amd-js',
paths: {
// ... List minimal modules such as Jquery and Globalize as per Wijmo documentation
}
});
require(['wijmo.wijgrid'], function() {
require.config({
baseUrl: '.'
});
require(['main'], function() {
/* ... */
});
});
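One additional note on the myAMD_2.js.js lookup mentioned in the question: the "ends in .js" rule applies to module IDs passed directly to require(), not to values in the paths map. RequireJS always appends .js itself when it resolves a paths entry, so those entries must omit the extension. A sketch (the module names and locations are the question's, not verified):

```javascript
require.config({
    paths: {
        // paths values are module-ID prefixes; they are resolved relative to
        // baseUrl unless they start with "/" or contain a protocol, and
        // RequireJS appends ".js" itself -- never include the extension here
        "myAMD_1": "http://www.example.com/scripts/myScripts/myAMD_1",
        "myAMD_2": "/scripts/myScripts/myAMD_2"
    }
});
```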
I have a situation where I need to determine eligibility for one object to "ride" another. The rules for the vehicles are wildly confusing, and I would like to be able to change them without restarting or recompiling my project.
This works but basically makes my security friends convulse and speak in tongues:
class SweetRider{
String stuff
BigDecimal someNumber
BigDecimal anotherNumber
}
class SweetVehicle{
static hasMany=[constraintLinkers:VehicleConstraintLinker]
String vehicleName
Boolean canIRideIt(SweetRider checkRider){
    def checkList = VehicleConstraintLinker.findAllByVehicle(this)
    // every{} short-circuits and its result is returned from the method;
    // a bare "return" inside each{} only exits that iteration's closure
    return checkList.every{
        def theClosureObject = it.closureConstraint
        def iThinkINeedAShell = new GroovyShell()
        def checkerThing = iThinkINeedAShell.evaluate(theClosureObject.closureText)
        checkerThing(checkRider)
    }
}
}
class VehicleConstraintLinker{
static belongsTo = [closureConstraint:ConstraintByClosure, vehicle:SweetVehicle]
}
class ConstraintByClosure{
String humanReadable
String closureText
static hasMany = [vehicleLinkers:VehicleConstraintLinker]
}
So if I want to add the rule that you are only eligible for a certain vehicle if your "stuff" is "peggy" or "waffles" and your someNumber is greater than your anotherNumber all I have to do is this:
Make a new ConstraintByClosure with humanReadable = "peggy waffle some#>" (that's the human-readable explanation) and then add this string as the closureText:
{
checkRider->if(
["peggy","waffles"].contains(checkRider.stuff) &&
checkRider.someNumber > checkRider.anotherNumber ) {
return true
}
else {
return false
}
}
Then I just make a VehicleConstraintLinker to link it up and voila.
My question is this: Is there any way to restrict what the GroovyShell can do? Can I make it unable to change any files, globals or database data? Is this sufficient?
Be aware that denying access to java.io and java.lang.Runtime in their various guises is not sufficient. There are many core libraries with a whole lot of authority that a malicious coder could try to exploit so you need to either white-list the symbols that an untrusted script can access (sandboxing or capability based security) or limit what anything in the JVM can do (via the Java SecurityManager). Otherwise you are vulnerable to confused deputy attacks.
"Provide security sandbox for executing scripts" attempts to work with the GroovyClassLoader to provide sandboxing.
"Sandboxing Java / Groovy / Freemarker Code - Preventing execution of specific methods" discusses ways of sandboxing Groovy, but not specifically at evaluate boundaries.
"Groovy Scripts and JVM Security" talks about sandboxing Groovy scripts. I'm not a huge fan of the JVM security policy approach, since installing a security manager can affect a whole lot of other things in the JVM, but it does provide a way of intercepting access to files and runtime facilities.
I don't know that either of the first two schemes have been exposed to widespread attack so I would be leery of deploying them without a thorough attack review. The JVM security manager has been subjected to widespread attack in a browser, and suffered many failures. It can be a good defense-in-depth, so I would not rely on it as the sole line of defense for anything precious.