Groovy/Grails: is there a way to make .evaluate() at all safe? - security

I have a situation where I need to determine eligibility for one object to "ride" another. The rules for the vehicles are wildly confusing, and I would like to be able to change them without restarting or recompiling my project.
This works but basically makes my security friends convulse and speak in tongues:
class SweetRider {
    String stuff
    BigDecimal someNumber
    BigDecimal anotherNumber
}

class SweetVehicle {
    static hasMany = [constraintLinkers: VehicleConstraintLinker]
    String vehicleName

    Boolean canIRideIt(SweetRider checkRider) {
        def checkList = VehicleConstraintLinker.findAllByVehicle(this)
        // Note: a bare `return` inside .each{} only returns from that closure,
        // not from the method, so the results are aggregated with every() instead.
        return checkList.every {
            def theClosureObject = it.closureConstraint
            def iThinkINeedAShell = new GroovyShell()
            def checkerThing = iThinkINeedAShell.evaluate(theClosureObject.closureText)
            checkerThing(checkRider)
        }
    }
}
class VehicleConstraintLinker {
    static belongsTo = [closureConstraint: ConstraintByClosure, vehicle: SweetVehicle]
}

class ConstraintByClosure {
    String humanReadable
    String closureText
    static hasMany = [vehicleLinkers: VehicleConstraintLinker]
}
So if I want to add the rule that you are only eligible for a certain vehicle if your "stuff" is "peggy" or "waffles" and your someNumber is greater than your anotherNumber, all I have to do is this:
Make a new ConstraintByClosure with humanReadable = "peggy waffle some#>" (that's the human-readable explanation) and then add this string as the closureText:
{ checkRider ->
    if (["peggy", "waffles"].contains(checkRider.stuff) &&
        checkRider.someNumber > checkRider.anotherNumber) {
        return true
    } else {
        return false
    }
}
Then I just make a VehicleConstraintLinker to link it up and voila.
My question is this: Is there any way to restrict what the GroovyShell can do? Can I make it unable to change any files, globals or database data? Is this sufficient?

Be aware that denying access to java.io and java.lang.Runtime in their various guises is not sufficient. There are many core libraries with a whole lot of authority that a malicious coder could try to exploit, so you need to either white-list the symbols that an untrusted script can access (sandboxing or capability-based security) or limit what anything in the JVM can do (via the Java SecurityManager). Otherwise you are vulnerable to confused-deputy attacks.
"provide security sandbox for executing scripts" attempts to work with the GroovyClassLoader to provide sandboxing.
"Sandboxing Java / Groovy / Freemarker Code - Preventing execution of specific methods" discusses ways of sandboxing Groovy, but not specifically at evaluate boundaries.
"Groovy Scripts and JVM Security" talks about sandboxing Groovy scripts. I'm not a huge fan of the JVM security policy approach, since installing a security manager can affect a whole lot of other things in the JVM, but it does provide a way of intercepting access to files and runtime facilities.
I don't know that either of the first two schemes has been exposed to widespread attack, so I would be leery of deploying them without a thorough attack review. The JVM security manager has been subjected to widespread attack in the browser and has suffered many failures. It can be a good defense-in-depth, but I would not rely on it as the sole line of defense for anything precious.
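To make the white-listing approach concrete, here is a minimal sketch using Groovy's SecureASTCustomizer (available since Groovy 1.8; these setters were renamed to allow/deny variants in later Groovy versions). The allowed receiver types are illustrative, and the checks are purely static, so treat this as one layer of defense rather than the whole defense:
import org.codehaus.groovy.control.CompilerConfiguration
import org.codehaus.groovy.control.customizers.SecureASTCustomizer

def secure = new SecureASTCustomizer()
secure.with {
    closuresAllowed = true             // the constraints are closures
    methodDefinitionAllowed = false    // scripts may not define methods
    importsWhitelist = []              // no imports allowed at all
    starImportsWhitelist = []
    staticImportsWhitelist = []
    staticStarImportsWhitelist = []
    indirectImportCheckEnabled = true
    // types a constraint closure is allowed to call methods on (illustrative)
    receiversClassesWhiteList = [Object, String, BigDecimal, Boolean, List]
}
def config = new CompilerConfiguration()
config.addCompilationCustomizers(secure)
def shell = new GroovyShell(config)    // use this in place of new GroovyShell()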

Related

Why is the usage of util.inherits() discouraged?

According to the Node.js documentation:
Note: usage of util.inherits() is discouraged. Please use the ES6 class and extends keywords to get language level inheritance support. Also note that the two styles are semantically incompatible.
https://nodejs.org/api/util.html#util_util_inherits_constructor_superconstructor
The reason util.inherits is discouraged is that changing the prototype of an object should be avoided: most JavaScript engines optimize property access on the assumption that the prototype will not change, and changing it can lead to bad performance.
util.inherits relies on Object.setPrototypeOf to make this change, and the MDN documentation of that native method has this warning:
Warning: Changing the [[Prototype]] of an object is, by the nature of how modern JavaScript engines optimize property accesses, currently a very slow operation in every browser and JavaScript engine. In addition, the effects of altering inheritance are subtle and far-flung, and are not limited to the time spent in the Object.setPrototypeOf(...) statement, but may extend to any code that has access to any object whose [[Prototype]] has been altered.
Because this feature is a part of the language, it is still the burden on engine developers to implement that feature performantly (ideally). Until engine developers address this issue, if you are concerned about performance, you should avoid setting the [[Prototype]] of an object. Instead, create a new object with the desired [[Prototype]] using Object.create().
As the quote says, you should use the ES6 class and extends keywords to get language-level inheritance support instead of util.inherits, and that's exactly why its use is discouraged: there are now better alternatives that are part of the core language, that's all.
util.inherits comes from a time when these utilities were not part of the language, and defining your own inheritance tools required a lot of boilerplate.
Nowadays the language offers a valid alternative, and it no longer makes sense to use the ones provided by the library. Of course, this is true only as long as you plan to use ES6 - otherwise ignore that note and continue to use util.inherits.
To reply to your comment:
How is util.inherits() more complicated?
It's not a matter of being more or less complicated. Using a core language feature should always be preferred over a library-specific alternative, for obvious reasons.
util.inherits() is deprecated in newer versions of Node, so you should use the ES6 class and extends keywords to get language-level inheritance support instead.
The example below should make this clearer:
"use strict";
class Person {
constructor(fName, lName) {
this.firstName = fName;
this.lastName = lName;
}
greet() {
console.log("in a class fn..", this.firstName, "+ ", this.lastName);
}
}
class PoliceMan extends Person {
constructor(burgler) {
super("basava", "sk");
this.burgler = burgler;
}
}
let policeObj = new PoliceMan();
policeObj.greet();
Output: in a class fn.. basava + sk
Here the PoliceMan class inherits from the Person class, so a PoliceMan object can access the properties of Person; calling super() in the constructor initializes them.
This works just as util.inherits() would.
Happy coding!

Execute Spock Test in Different Environments

I have a Selenium test, which is executed with the help of Spock framework. In general it looks like this:
class SeleniumSpec extends Specification {
    URL remoteAddress   // Address of SE grid
    Capabilities caps   // Desired capabilities
    WebDriver driver    // Web driver

    def setup() {
        driver = new RemoteWebDriver(remoteAddress, caps)
    }

    def "some test"() {
        expect:
        driver.findElement(By.cssSelector("p.someParagraph")).text == 'Some text'
    }

    // other tests go here ...
}
The point here is that my specification describes the behavior of some component (in most cases, web views/pages). The methods are expected to implement business-related logic (something like 'click a button and expect a message in another field'); but I would also like to verify that the behavior is exactly the same in all browsers (capabilities).
To achieve this in an ideal world, I'd like a mechanism for specifying that a particular test class should be run several times with different parameters. For now, though, I only see the ability to apply data sets to a single method.
I have come up with only a few ideas so far (given my current knowledge of the Spock framework):
Use a list of drivers and execute each action over all list members, so that each call to 'driver' becomes a 'drivers.each { ... }' invocation. This approach, on the other hand, makes it hard to discover exactly which driver failed the test.
Use method parameters and data sets to initiate a fresh copy of the web driver on each iteration (sketched after this list). This approach seems more logical according to the Spock philosophy, but it requires the heavy operation of driver and web-application initialization to be performed every time. It also removes the ability to perform 'step-by-step' testing, since the state of the driver won't be preserved between test methods.
A combination of these approaches, where drivers are kept in a map and each test invocation names the exact driver to use.
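For the second idea, something like this data-driven sketch is what I have in mind (assuming Spock's @Unroll and Selenium's DesiredCapabilities; the unrolled names at least identify which browser failed):
@Unroll
def "shows the expected paragraph in #browserName"() {
    setup:
    def driver = new RemoteWebDriver(remoteAddress, caps)

    expect:
    driver.findElement(By.cssSelector("p.someParagraph")).text == 'Some text'

    cleanup:
    driver.quit()    // a fresh driver per iteration, so always shut it down

    where:
    browserName | caps
    'firefox'   | DesiredCapabilities.firefox()
    'chrome'    | DesiredCapabilities.chrome()
}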
I'd appreciate ideas from anybody who has met this case on how to properly organize the testing process, as well as other approaches or the pros and cons of those above. Considering another test tool would also be an option.
You could create an abstract BaseSpec which contains all your features, but do not set up the driver in that spec. Then create a sub-spec for each browser you want to test, e.g.:
class FirefoxSeleniumSpec extends BaseSeleniumSpec {
    def setupSpec() {
        driver = new FirefoxDriver(...)   // driver is declared in the base spec
    }
}
And then you can run all of the sub-specs to test all of the browsers.
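For example, the base spec could look like this (a sketch; the driver is declared @Shared so that setupSpec() in each sub-spec can assign it):
import org.openqa.selenium.By
import org.openqa.selenium.WebDriver
import spock.lang.Shared
import spock.lang.Specification

abstract class BaseSeleniumSpec extends Specification {
    @Shared WebDriver driver    // assigned by each browser-specific sub-spec

    def "some test"() {
        expect:
        driver.findElement(By.cssSelector("p.someParagraph")).text == 'Some text'
    }

    def cleanupSpec() {
        driver?.quit()          // shut the browser down after each sub-spec runs
    }
}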

ServiceStack: RESTful Resource Versioning

I've read the Advantages of message-based web services article and am wondering whether there is a recommended style/practice for versioning RESTful resources in ServiceStack? The different versions could render different responses or have different input parameters in the request DTO.
I'm leaning toward URL-based versioning (e.g. /v1/movies/{Id}), but I have seen other practices that set the version in the HTTP headers (e.g. Content-Type: application/vnd.company.myapp-v2).
I'm hoping for a way that works with the metadata page, but it's not a hard requirement, as I've noticed that simply using folder structure/namespacing works fine when rendering routes.
For example (this doesn't render right in the metadata page but performs properly if you know the direct route/url)
/v1/movies/{id}
/v1.1/movies/{id}
Code
namespace Samples.Movies.Operations.v1_1
{
    [Route("/v1.1/Movies", "GET")]
    public class Movies
    {
        ...
    }
}

namespace Samples.Movies.Operations.v1
{
    [Route("/v1/Movies", "GET")]
    public class Movies
    {
        ...
    }
}
and corresponding services...
public class MovieService : ServiceBase<Samples.Movies.Operations.v1.Movies>
{
    protected override object Run(Samples.Movies.Operations.v1.Movies request)
    {
        ...
    }
}

public class MovieService : ServiceBase<Samples.Movies.Operations.v1_1.Movies>
{
    protected override object Run(Samples.Movies.Operations.v1_1.Movies request)
    {
        ...
    }
}
Try to evolve (not re-implement) existing services
For versioning, you are going to be in for a world of hurt if you try to maintain different static types for different version endpoints. We initially started down this route, but as soon as you support your first version, the development effort to maintain multiple versions of the same service explodes: you need to maintain manual mappings between the different types, which easily leaks into maintaining multiple parallel implementations, each coupled to a different version's type - a massive violation of DRY. This is less of an issue for dynamic languages, where the same models can easily be re-used by different versions.
Take advantage of built-in versioning in serializers
My recommendation is not to explicitly version but take advantage of the versioning capabilities inside the serialization formats.
E.g. you generally don't need to worry about versioning with JSON clients, as the versioning capabilities of the JSON and JSV serializers are much more resilient.
Enhance your existing services defensively
With XML and DataContracts you can freely add and remove fields without making a breaking change. If you implement IExtensibleDataObject on your response DTOs, you also have the potential to access data that's not defined on the DTO. My approach to versioning is to program defensively so as not to introduce a breaking change; you can verify this is the case with integration tests using old DTOs. Here are some tips I follow:
Never change the type of an existing property - if you need it to be a different type, add another property and use the old/existing one to determine the version.
Program defensively: realize which properties won't exist for older clients, and don't make them mandatory.
Keep a single global namespace (only relevant for XML/SOAP endpoints)
I do this by using the [assembly] attribute in the AssemblyInfo.cs of each of your DTO projects:
[assembly: ContractNamespace("http://schemas.servicestack.net/types",
ClrNamespace = "MyServiceModel.DtoTypes")]
The assembly attribute saves you from manually specifying explicit namespaces on each DTO, i.e:
namespace MyServiceModel.DtoTypes
{
    [DataContract(Namespace = "http://schemas.servicestack.net/types")]
    public class Foo { .. }
}
If you want to use a different XML namespace than the default above you need to register it with:
SetConfig(new EndpointHostConfig {
WsdlServiceNamespace = "http://schemas.my.org/types"
});
Embedding Versioning in DTOs
Most of the time, if you program defensively and evolve your services gracefully, you won't need to know exactly what version a specific client is using, as you can infer it from the data that is populated. But in the rare cases where your service needs to tweak its behavior based on the specific version of the client, you can embed version information in your DTOs.
With the first release of your DTOs, you can happily create them without any thought of versioning:
class Foo {
    string Name;
}
But maybe for some reason the form/UI changed and you no longer want the client to use the ambiguous Name variable, and you also want to track the specific version the client is using:
class Foo {
    Foo() {
        Version = 1;
    }
    int Version;
    string Name;
    string DisplayName;
    int Age;
}
Later it is decided in a team meeting that DisplayName isn't good enough, and it should be split into different fields:
class Foo {
    Foo() {
        Version = 2;
    }
    int Version;
    string Name;
    string DisplayName;
    string FirstName;
    string LastName;
    DateTime? DateOfBirth;
}
So the current state is that you have 3 different client versions out, with existing calls that look like:
v1 Release:
client.Post(new Foo { Name = "Foo Bar" });
v2 Release:
client.Post(new Foo { Name="Bar", DisplayName="Foo Bar", Age=18 });
v3 Release:
client.Post(new Foo { FirstName = "Foo", LastName = "Bar",
DateOfBirth = new DateTime(1994, 01, 01) });
You can continue to handle these different versions in the same implementation (which will be using the latest v3 version of the DTOs) e.g:
class FooService : Service {
    public object Post(Foo request) {
        // from a v1 client (no Version property, so it defaults to 0):
        request.Version == 0
        request.Name == "Foo Bar"
        request.DisplayName == null
        request.Age == 0
        request.DateOfBirth == null
        // from a v2 client (its constructor sets Version = 1):
        request.Version == 1
        request.Name == null
        request.DisplayName == "Foo Bar"
        request.Age == 18
        request.DateOfBirth == null
        // from a v3 client (its constructor sets Version = 2):
        request.Version == 2
        request.Name == null
        request.DisplayName == null
        request.FirstName == "Foo"
        request.LastName == "Bar"
        request.Age == 0
        request.DateOfBirth == new DateTime(1994, 01, 01)
    }
}
Framing the Problem
The API is the part of your system that exposes its expression. It defines the concepts and the semantics of communicating in your domain. The problem comes when you want to change what can be expressed or how it can be expressed.
There can be differences in both the method of expression and what is being expressed. The first problem tends to be differences in tokens (first and last name instead of name). The second problem is expressing different things (the ability to rename oneself).
A long-term versioning solution will need to solve both of these challenges.
Evolving an API
Evolving a service by changing the resource types is a kind of implicit versioning: it uses the construction of the object to determine behavior. It works best when there are only minor changes to the method of expression (like the names). It does not work well for more complex changes to the method of expression or for changes to what can be expressed. Code tends to be scattered throughout.
Specific Versioning
When changes become more complex, it is important to keep the logic for each version separate. Even in mythz's example, he segregated the code for each version, but the code is still mixed together in the same methods. It is very easy for the code for different versions to start collapsing into each other, and it is likely to spread out. Getting rid of support for a previous version can be difficult.
Additionally, you will need to keep your old code in sync with any changes in its dependencies. If a database changes, the code supporting the old model will also need to change.
A Better Way
The best way I've found is to tackle the expression problem directly. Each time a new version of the API is released, the previous version is implemented on top of the new layer. This is generally easy because the changes are small.
It really shines in two ways: first, all the code handling the mapping is in one spot, so it is easy to understand or remove later; second, it doesn't require maintenance as new APIs are developed (the Russian-doll model).
The problem arises when the new API is less expressive than the old API. This is a problem that needs to be solved no matter how you keep the old version around; with this style it just becomes clear that there is a problem and what the solution to that problem is.
mythz's example rewritten in this style (as pseudocode) is:
namespace APIv3 {
    class FooService : RestServiceBase<Foo> {
        public object OnPost(Foo request) {
            var data = repository.getData();
            request.FirstName = data.firstName;
            request.LastName = data.lastName;
            request.DateOfBirth = data.dateOfBirth;
        }
    }
}

namespace APIv2 {
    class FooService : RestServiceBase<Foo> {
        public object OnPost(Foo request) {
            var v3Request = APIv3.FooService.OnPost(request);
            request.DisplayName = v3Request.FirstName + " " + v3Request.LastName;
            request.Age = (new DateTime() - v3Request.DateOfBirth).years;
        }
    }
}

namespace APIv1 {
    class FooService : RestServiceBase<Foo> {
        public object OnPost(Foo request) {
            var v2Request = APIv2.FooService.OnPost(request);
            request.Name = v2Request.DisplayName;
        }
    }
}
Each exposed object is clear. The same mapping code still needs to be written in both styles, but in the separated style only the mapping relevant to a type needs to be written. There is no need to explicitly map code that doesn't apply (which is just another potential source of error). The dependency on previous APIs stays static when you add future APIs or change the dependencies of the API layer. For example, if the data source changes, then only the most recent API (version 3) needs to change in this style; in the combined style, you would need to code the changes for each of the APIs supported.
One concern in the comments was the addition of types to the code base. This is not a problem, because these types are exposed externally anyway. Providing the types explicitly in the code base makes them easy to discover and isolate in testing, and it is much better for maintainability to be clear. Another benefit is that this method does not produce additional logic; it only adds additional types.
I am also trying to come up with a solution for this and was thinking of doing something like the following. (Based on a lot of Googling and Stack Overflow querying, so this is built on the shoulders of many others.)
First up, I don't want to debate whether the version should be in the URI or in a request header. There are pros and cons to both approaches, so I think each of us needs to use whatever meets our requirements best.
This is about how to design/architect the Java message objects and the resource implementation classes.
So let's get to it.
I would approach this in two parts: minor changes (e.g. 1.0 to 1.1) and major changes (e.g. 1.1 to 2.0).
Approach for minor changes
So let's say we go by the same example classes used by @mythz.
Initially we have
class Foo { string Name; }
We provide access to this resource as /V1.0/fooresource/{id}
In my use case, I use JAX-RS,
@Path("/{versionid}/fooresource")
public class FooResource {
    @GET
    @Path("/{id}")
    public Foo getFoo(@PathParam("versionid") String versionid,
                      @PathParam("id") String fooId) {
        Foo foo = new Foo();
        // setters, load data from persistence, handle business logic, etc.
        return foo;
    }
}
Now let’s say we add 2 additional properties to Foo.
class Foo {
    string Name;
    string DisplayName;
    int Age;
}
What I do at this point is annotate the properties with a @Version annotation:
class Foo {
    @Version("V1.0") string Name;
    @Version("V1.1") string DisplayName;
    @Version("V1.1") int Age;
}
Then I have a response filter that will, based on the requested version, return to the user only the properties that match that version. Note that, for convenience, any property that should be returned for all versions is simply left un-annotated, and the filter will return it irrespective of the requested version.
This is sort of like a mediation layer. What I have explained is a simplistic version and it can get very complicated, but I hope you get the idea.
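A sketch of what such a filter could look like, written here in Groovy against the JAX-RS 2.x ContainerResponseFilter API (the @Version annotation is the user-defined one from above, and the lexicographic version comparison is a naive placeholder):
import javax.ws.rs.container.ContainerRequestContext
import javax.ws.rs.container.ContainerResponseContext
import javax.ws.rs.container.ContainerResponseFilter
import javax.ws.rs.ext.Provider
// plus an import for the user-defined @Version annotation

@Provider
class VersionResponseFilter implements ContainerResponseFilter {
    void filter(ContainerRequestContext req, ContainerResponseContext res) {
        def requested = req.uriInfo.pathParameters.getFirst('versionid')  // e.g. "V1.0"
        def entity = res.entity
        entity?.class?.declaredFields?.each { field ->
            def ann = field.getAnnotation(Version)
            if (ann != null && ann.value() > requested) {   // naive string compare
                field.accessible = true
                field.set(entity, null)   // hide properties newer than the requested version
            }
        }
    }
}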
Approach for Major Version
Now this can get quite complicated when there are a lot of changes from one version to another. That is when we need to move to the second option.
Option 2 is essentially to branch off the codebase, make the changes on that branch, and host the two versions in different contexts. At this point we might have to refactor the codebase a bit to remove the version-mediation complexity introduced in the first approach (i.e. make the code cleaner); this would mainly affect the filters.
Note that this is just what I am thinking and haven't implemented yet, and I wonder whether it is a good idea.
Also, I was wondering whether there are good mediation engines/ESBs that could do this type of transformation without having to use filters, but I haven't seen any that is as simple as using a filter. Maybe I haven't searched enough.
I'm interested in the thoughts of others and whether this solution addresses the original question.

How do you forbid users from doing evil things in Groovy Scripts?

I'm planning to integrate the Groovy script engine into my game to give it nice moddability, but how do you prevent players from writing evil scripts, like ones that delete all the files on the C: drive?
Groovy imports libraries like java.io.File by default, so it would be pretty easy to do once someone decided to write such a script.
I guess I can't prevent users from writing something like while(1==1){}, but is there any way to at least keep them from deleting/modifying files or doing anything else dangerous to the PC?
There's a blog post by Cédric Champeau on customizing the Groovy compilation process; the second part shows how to use SecureASTCustomizer and CompilerConfiguration to limit what scripts can do, and it then has examples of defining your own AST checks for System.exit, etc.
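A sketch of that approach (API names per Groovy 2.x): the custom expression checker below rejects direct System.exit(...) calls at compile time. A determined user can still reach dangerous code indirectly, so combine this with white-listing rather than relying on it alone:
import org.codehaus.groovy.ast.expr.Expression
import org.codehaus.groovy.ast.expr.MethodCallExpression
import org.codehaus.groovy.control.CompilerConfiguration
import org.codehaus.groovy.control.customizers.SecureASTCustomizer

def secure = new SecureASTCustomizer()
secure.addExpressionCheckers({ Expression expr ->
    // authorize everything except a call that looks like System.exit(...)
    !(expr instanceof MethodCallExpression &&
      expr.objectExpression.text == 'System' &&
      expr.methodAsString == 'exit')
} as SecureASTCustomizer.ExpressionChecker)

def config = new CompilerConfiguration()
config.addCompilationCustomizers(secure)
def shell = new GroovyShell(config)
shell.evaluate('2 + 2')             // compiles and runs
shell.evaluate('System.exit(1)')    // rejected during compilation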
Look into the SecurityContext class.
The Groovy Web Console appears to have already solved this problem, because it won't execute something like System.exit(1). The source code is available on GitHub, so you can see how they did it.
If you're not sure where to start, I suggest getting in touch with the author, who should be able to point you in the right direction.
I know this is an old question. I'm posting this as it might help some people out there.
We needed to allow end users to upload Groovy scripts and execute them as part of a web application (that does a lot of other things). Our concern was that within these Groovy scripts, some users might attempt to read files from the file system, read system properties, call System.exit(), etc.
I looked into http://mrhaki.blogspot.com/2014/04/groovy-goodness-restricting-script.html, but that will not prevent an expert Groovy developer from bypassing the checks, as pointed out in other posts.
I then tried to get http://www.sdidit.nl/2012/12/groovy-dsl-executing-scripts-in-sandbox.html working, but setting the SecurityManager and Policy implementation at runtime did not work for me. I kept running into issues during app-server startup and web-page access. It seemed that by the time the Policy implementation took hold, it was too late, and "CodeSources" (in Java security speak) had already taken their access settings from the default Java policy file.
I then stumbled across the excellent white paper by Ted Neward (http://www.tedneward.com/files/Papers/JavaPolicy/JavaPolicy.pdf) that explained quite convincingly that the best approach (for my use case) was to set the Policy implementation on JVM startup (instead of dynamically later on).
Below is the approach that worked for me (that combines Rene's and Ted's approaches). BTW: We're using Groovy 2.3.10.
In the [JDK_HOME]/jre/lib/security/java.security file, set the "policy.provider" value to "com.yourcompany.security.MySecurityPolicy".
Create the MySecurityPolicy class:
import java.net.MalformedURLException;
import java.net.URL;
import java.security.AllPermission;
import java.security.CodeSource;
import java.security.PermissionCollection;
import java.security.Permissions;
import java.security.Policy;
import java.util.HashSet;
import java.util.Set;

public class MySecurityPolicy extends Policy {
    private final Set<URL> locations;

    public MySecurityPolicy() {
        try {
            locations = new HashSet<URL>();
            locations.add(new URL("file", "", "/groovy/shell"));
            locations.add(new URL("file", "", "/groovy/script"));
        } catch (MalformedURLException e) {
            throw new IllegalStateException(e);
        }
    }

    @Override
    public PermissionCollection getPermissions(CodeSource codeSource) {
        // Do not store these in static or instance variables; it won't work.
        // They're cached by the security infrastructure, so this is okay.
        PermissionCollection perms = new Permissions();
        if (!locations.contains(codeSource.getLocation())) {
            perms.add(new AllPermission());
        }
        return perms;
    }
}
Jar up MySecurityPolicy and drop the jar in the [JDK_HOME]/jre/lib/ext directory.
Add "-Djava.security.manager" to the JVM startup options. You do not need to provide a custom security manager. The default one works fine.
The "-Djava.security.manager" option enables Java Security Manager for the whole application. The application and all its dependencies will have "AllPermission" and will thereby be allowed to do anything.
Groovy scripts run under the "/groovy/shell" and "/groovy/script" "CodeSources". They're not necessarily physical directories on the file system. The code above does not give Groovy scripts any permissions.
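If you want to confirm which CodeSource a script actually runs under, a quick sketch:
def shell = new GroovyShell()
println shell.evaluate('this.class.protectionDomain.codeSource.location')
// should print something like: file:/groovy/shell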
Users could still do the following:
Thread.currentThread().interrupt();
while (true) {} (infinite loop)
You could prepend the following (dynamically at runtime) to the beginning of every script before passing it onto the Groovy shell for execution:
@ThreadInterrupt
import groovy.transform.ThreadInterrupt
@TimedInterrupt(5)
import groovy.transform.TimedInterrupt
These are explained at http://www.jroller.com/melix/entry/upcoming_groovy_goodness_automatic_thread
The first one handles "Thread.currentThread().interrupt()" a little more gracefully (but it doesn't prevent the user from interrupting the thread). Perhaps you could use AST transformations to prevent interrupts to some extent. In our case, it's not a big issue, as each Groovy script execution runs in its own thread, and if bad actors wish to kill their own thread, they can knock themselves out.
The second one prevents the infinite loop in that all scripts time out after 5 seconds. You can adjust the time.
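As a variation on prepending the annotations as text, the same transformations can be applied at compile time with an ASTTransformationCustomizer, so every script gets them without any string manipulation (a sketch; API per Groovy 2.x):
import groovy.transform.ThreadInterrupt
import groovy.transform.TimedInterrupt
import org.codehaus.groovy.control.CompilerConfiguration
import org.codehaus.groovy.control.customizers.ASTTransformationCustomizer

def config = new CompilerConfiguration()
config.addCompilationCustomizers(
    new ASTTransformationCustomizer(ThreadInterrupt),
    new ASTTransformationCustomizer(value: 5L, TimedInterrupt))  // 5-second cap
def shell = new GroovyShell(config)
shell.evaluate('while (true) { }')   // throws a TimeoutException after ~5 seconds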
Note that I noticed a performance degradation in the Groovy script execution time but did not notice a significant degradation in the rest of the web application.
Hope that helps.

Code Contracts and Auto Generated Files

When I enabled code contracts on my WPF control project, I ran into a problem with an auto-generated file created at compile time (XamlNamespace.GeneratedInternalTypeHelper). Note: the generated file is called GeneratedInternalTypeHelper.g.cs and is not the same as the GeneratedInternalTypeHelper.g.i.cs which several obsolete blog posts discuss.
I'm not exactly sure what its purpose is, but I am assuming it is important for some internal reflection to resolve XAML. The problem is that it does not have code contracts, nor is the code contract system smart enough to recognize it as an auto generated file. This leads to a bunch of errors from the static checker.
I tried searching for a solution to this problem, but it seems like nobody is developing WPF controls and using code contracts. I did come across an interesting attribute, ContractVerificationAttribute, which takes a boolean value to set whether the assembly or class is to be verified. This lets you decorate a class as not verified. Sadly, the GeneratedInternalTypeHelper is regenerated on every compile, so it is not possible to exclude just this one class. The inverse scenario is possible though: decorate the assembly as not verified and then opt in for every class.
To mitigate this obvious hack, I wanted a test like the following, which verifies that at least my own exposed classes are contract-verified:
[Fact]
public void AllAssemblyTypesAreDecoratedWithContractVerificationTrue()
{
    var assembly = typeof(someType).Assembly;
    var exposedTypes = assembly.GetTypes()
        .Where(t => !string.IsNullOrWhiteSpace(t.Namespace)
                    && t.Namespace.StartsWith("MyNamespace")
                    && !t.Name.StartsWith("<>"));
    var areAnyNotContractVerified = exposedTypes.Any(t =>
    {
        var verificationAttribute = t.GetCustomAttributes(typeof(ContractVerificationAttribute), true)
                                     .OfType<ContractVerificationAttribute>();
        // A type fails if it lacks the attribute or has it set to false.
        return !verificationAttribute.Any() || !verificationAttribute.First().Value;
    });
    Assert.False(areAnyNotContractVerified);
}
As you can see, it takes all the classes in the controls assembly and finds the ones in the company namespace which are not auto-generated anonymous types (<>WeirdClassName).
(I also need to exclude Resources and settings, but I hope you get the idea.)
I'm not loving the solution since there are ways of avoiding contract verification, but currently it's the best I can come up with. If anyone has a better solution, please let me know.
So you can treat this class exactly like you would treat any other "3rd party" class or library. Certain assumptions should hold at the interaction points with this generated class, so decorate your own code there with Contract.Assume(result != null) or similar:
var result = new GennedClass().GetSomeValue();
Contract.Assume(result != null);
What this does is translate into an assertion that is checked at run time, but it allows the static analyzer to reason about the rest of the code that you do control.
