MEF: Component authentication - security

I am building a Windows (Service) application that, in short, consists of a "bootstrapper" and an "engine" (an object that the bootstrapper loads and transfers control to, and which then performs the actual tasks of the application). The bootstrapper is a very basic startup routine with few features that are likely to change. But the engine itself could be subject to upgrades after installation, and I am implementing a mechanism so that it can upgrade itself - by contacting a "master server" and checking its version number against a "most current" version. If there is a newer version of the engine available, it will download it into a designated folder and call a method in the bootstrapper to "restart".
So, whenever the bootstrapper starts up, it uses MEF to "scan" the appropriate directories for implementations of IEngine, compares their bootstrapper compatibility numbers and picks the newest compatible engine version. Then it transfers control to the engine (which then, in turn, performs the update check etc). If there are no eligible IEngines - or MEF fails during composition - it falls back on a default, built-in implementation of IEngine.
This application will be running on a remote server (or several), and the whole rationale behind this is to keep manual application maintenance to a minimum (as in not having to uninstall/download new version/reinstall etc).
So, the problem: Since the bootstrapper effectively transfers program execution to a method on the IEngine object, a malicious IEngine implementation (or impersonator) that somehow found its way to the application's scanned folders could basically wreak total havoc on the server if it got loaded and was found to be the most eligible engine version.
I am looking for a mechanism to verify that the IEngine implementation is "authentic" - as in issued by a proper authority. I've been playing around with some home-brewed "solutions" (having IEngine expose a Validate function that is passed a "challenge" and has to return a proper "response" - in various ways, e.g. having the bootstrapper produce a random string that is encrypted and passed to the engine candidate, which then has to decrypt and modify the string, hash it, encrypt the hash and return it to the bootstrapper, which performs the same string modification on its own random string, hashes that and compares the hash to the decrypted response (hash) from the candidate, etc.), but I'm sure there are features in .NET to perform this kind of verification? I just looked at Strong Naming, but it seems it's not the best way for a system that will be dynamically loading yet unthought-of DLLs.
Input will be greatly appreciated.

Assemblies can be digitally signed with a private key. The result is called a strong named assembly.
When a strong named assembly is loaded, .NET automatically checks whether its signature matches the embedded public key. So when a strong named assembly has been loaded, you have the guarantee that the author possesses the private key that corresponds to that public key.
You can get the public key by calling Assembly.GetName().GetPublicKey() and then compare it to the expected one, i.e. yours.
You can scan over the plugin assemblies, create an AssemblyCatalog for each one with the right public key (rejecting the others), finally aggregating them into an AggregateCatalog and building a CompositionContainer with it.
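A minimal sketch of that filtering (the pluginDirectory path, the embedded trustedPublicKey array and the BuildContainer helper are illustrative names, not part of MEF):

    using System;
    using System.ComponentModel.Composition.Hosting;
    using System.IO;
    using System.Linq;
    using System.Reflection;

    // Sketch: build a container only from plugin assemblies whose public key
    // matches the key embedded in the bootstrapper.
    static CompositionContainer BuildContainer(string pluginDirectory, byte[] trustedPublicKey)
    {
        var catalogs = Directory.EnumerateFiles(pluginDirectory, "*.dll")
            .Select(path =>
            {
                try
                {
                    byte[] key = AssemblyName.GetAssemblyName(path).GetPublicKey();
                    return key != null && key.SequenceEqual(trustedPublicKey)
                        ? new AssemblyCatalog(path)
                        : null;   // unsigned, or signed with a different key: reject
                }
                catch (BadImageFormatException)
                {
                    return null;  // not a managed assembly
                }
            })
            .Where(catalog => catalog != null);

        return new CompositionContainer(new AggregateCatalog(catalogs));
    }

Note that this only inspects the assembly metadata to decide which catalogs to build; the actual signature verification is done by the runtime when the assembly is loaded.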
This is basically what Glenn Block also explained in this thread. (Best ignore the blog post linked there by Bnaya, his interpretation of StrongNameIdentityPermission is not correct.)
edit with responses to the wall of comments:
To get that public key, I make the console application output the public key byte array to somewhere. I embed the byte array in my host application, and subsequently use that to compare against the public keys of plugin candidates. Would that be the way to do it?
Yes, but there is a simpler way to extract the public key. Look at the -Tp option of sn.exe.
Does this mechanism automatically prevent a malicious plugin assembly from exposing a correct, but "faked" public key? As in, is there some mechanism to disqualify any assembly that is signed, but has a mismatch between its exposed public key and its internal private key, from being loaded/run at all?
As far as I know, the check happens automatically. A strong named assembly cannot be loaded (even dynamically) if its signature is wrong. Otherwise the strong name would be useless. To test this, you can open your strong named assembly in a hex editor, change something (like a character in a const string embedded in the assembly) and verify that the assembly can no longer be loaded.
I guess what I was referring to was something akin to the type of hack/crack described here:
http://www.dotnetmonster.com/Uwe/Forum.aspx/dotnet-security/407/Signed-assemblies-easily-cracked
and here: Link
[...snip more comments...]
However, this can - apparently - be bypassed by simple tampering (as shown in the first link, and explained more here): grimes.demon.co.uk/workshops/fusionWSCrackOne.htm
The "attacks" you refer to fall in three categories:
removing the strong name altogether. This does not break the authentication, the assembly will no longer have a public key and so you will reject it.
disabling the strong name check, which requires full access to the machine. If this was done by an attacker, then it would mean that the attacker already owns your machine. Any security mechanism would be meaningless in such a context. What we are actually defending against is an attacker between the machine and the source of the assemblies.
a real exploit made possible by a bug in .NET 1.1 that has since been fixed
Conclusion: strong names are suitable to use for authentication (at least since .NET 2.0)
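If you do not want to rely on the load-time check alone (an administrator can, for instance, register assemblies for strong-name verification skipping), you can also ask the CLR explicitly whether a file's strong name signature is valid before loading it. A sketch for the classic .NET Framework, P/Invoking the documented StrongNameSignatureVerificationEx function in mscoree.dll (not available on .NET Core; the wrapper class and method names are illustrative):

    using System.Runtime.InteropServices;

    static class StrongNameCheck
    {
        [DllImport("mscoree.dll", CharSet = CharSet.Unicode)]
        [return: MarshalAs(UnmanagedType.U1)]
        private static extern bool StrongNameSignatureVerificationEx(
            [MarshalAs(UnmanagedType.LPWStr)] string wszFilePath,
            [MarshalAs(UnmanagedType.U1)] bool fForceVerification,
            [MarshalAs(UnmanagedType.U1)] ref bool pfWasVerified);

        // Returns true only if the file carries a valid strong name signature,
        // forcing verification even if it would normally be skipped.
        public static bool HasValidStrongName(string assemblyPath)
        {
            bool wasVerified = false;
            return StrongNameSignatureVerificationEx(assemblyPath, true, ref wasVerified)
                   && wasVerified;
        }
    }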

I've written a blog post with source code for a catalog which only loads assemblies with keys that you specify: How to control who can write extensions for your MEF application

Related

GWT - Ensure SQL password is never handed to the client

I have a GWT project where the server needs to talk to an SQL database. It needs a password to do that. That password needs to be stored somewhere. I can think of three locations to store that password:
1. Right there in the call to DriverManager.getConnection.
2. A final String field somewhere.
3. A .properties file.
With cases 1 and 2, the scenario comes to mind that the source code is translated to JavaScript and sent to the client.
That would never happen intentionally since it only makes sense for the server to talk to the database and not the client, but it could happen accidentally. In case 1 GWT would probably complain that it can't deal with JDBC, but in case 2 the field might be in some Constants class that compiles just fine.
I don't have enough experience with GWT to know how .properties files are handled. E.g. files in the src\foo\server directory might not be included in the JavaScript that gets handed to the client, but someone might come along later and accidentally move the file somewhere else where it is included.
So how can I ensure that the password is never accidentally sent to the client?
Note that I don't care that the password is stored in plain-text, either in code or in a config file.
Edit:
Clarification of my current situation:
My TestModule.gwt.xml only contains <source path='client'/>. It does not contain <source path='shared'/> or <source path='server'/>!
I have shared configs and server-only configs (the server-only config would contain the password for the database, then).
In the TestScreen (which is a Composite that shows a button on the page) I can use the ServerConfig class and SharedConfig class from client code without any problems.
This is a problem since I (or someone else) might accidentally cause the class with the password to be translated to JS and sent to the client.
The database password should be stored in a properties file rather than somewhere in the code. Unlike the code, this properties file should not be committed to a version control system (git or similar). It should also be outside the web folders.
Moreover, it would be a huge security risk to use a public final static String to store a password. Public members are visible to all other classes, static means no instance is necessary to use it, and final means it won't change. In your code you are storing a String constant that is available to all instances of the class, and to other objects using the class. That is no good starting point when considering security risks, and it is not directly related to GWT. It would be like storing a lot of money in a bank with no walls or doors and then asking how one could make it safe.
As long as data stays on the server side, you're fine. By default, only the client and shared paths are specified for translatable code. If your server classes do not implement IsSerializable and are not explicitly specified as translatable code in your gwt.xml file, they won't be sent to the client.
You have more than one option here:
Use a separate classpath for client and server so that classes in the server are never referenced in the client. This can be done by following the recommended project structure where each of client/shared/server is a separate project; you can create such a project structure using https://github.com/tbroyer/gwt-maven-archetypes. When you use this, the build will most likely fail when anyone tries to depend on the server from the client, but there is still the possibility that someone will do something to make it work.
Use the @GwtIncompatible annotation on the class that holds the password, which means the class will never be transpiled to JS at all, and if it is referenced from the client side it will be a compilation error during the GWT compilation phase.
Never put the password in a source file at all; depend on an environment variable or some sort of password/key store that only exists on the server where you deploy the app, and that you can still set locally for development.
If the server types and members are still accessible, you have misconfigured the .gwt.xml file, as @Adam said - make sure that the server vs client vs shared packages all exist together in the same package as your .gwt.xml, and that no other .gwt.xml exists.
This is not a security feature, like you are treating it, but a "how do I get the code I actually need to do my work" issue - java bytecode doesn't have enough detail in it (generics are erased, and old versions of gwt actually used javadoc tags for more detail) to generate the sources. Generally speaking, if you don't have sources, you can't pass that Java to GWT and expect it to be used in producing JS.
There are at least two edge case exceptions to this. There are probably more, but these spots of weirdness usually only matter when trying to understand why GWT can't generate JS from some Java, whereas you are trying to leverage these limitations as security features.
Generators and linkers run in the JVM, so naturally they can function with just plain JVM bytecode while the compiler is running. It would be a weird case where you would care about this, but consider something like a generator which was trying to extract some kind of reflection information and provide it in a static format for the browser.
GWT uses JDT to read the classes to be compiled, and it loads up bytecode where possible to resolve some things - one of those things happens to include constants. A "static final" string or primitive can be read from bytecode in this way without needing to go to the original .java sources.
If you have content in your bytecode that must not be considered in any way when generating JS, it belongs in a separate classpath - generally speaking, you should always separate your client code from your server code into separate projects with separate classpaths. There may be at least one more project for "shared" code which both client and server need access to.
And finally, it is generally speaking considered a bad idea to put secrets of any kind in your project itself, whether in the code itself or properties files, but instead to make it part of the deployment or runtime environment.

How to preserve data when updating Java Card / GlobalPlatform applet?

How can I update a Java Card applet that contains data that need to be preserved across versions? As best as I can tell updating an applet is done by deleting it and then installing the new version, which seems like it would also delete any persistent data associated with the app.
As a concrete example, suppose I was writing an authentication and encryption applet. Version 1 of the applet would generate a key protected by the hardware on installation, and support signing messages, but not encrypting them. Suppose I then wanted to release a version 2 that also supported encryption, and could use the keys created by version 1. What would I need to do in version 1 and 2 in order to make that possible?
I'm open to solutions that use GlobalPlatform mechanics in addition to pure Java Card.
You need a second applet which owns all the objects you want to preserve across re-installation of the first applet. Let's call them the Storage applet and the Worker applet.
This means that every time the Worker applet needs to use resources from the Storage applet, it has to go through a Shareable interface. There is a penalty in code size, code maintainability and speed. I cannot think of another way to do this in Java Card or GlobalPlatform.
You can't.
Updating an application package while preserving the data associated with any applet instantiated from that package is not possible on current JavaCard smartcards.
For one, this has a rather simple technical reason:
Applets are Java (Card) objects instantiated from classes in the code base of the application package. Updating the code base would change the structure of classes and, consequently, the already instantiated applet objects would no longer match their class definitions. As a result, whenever an applet class in the application package changes, this also means that the respective applet instance needs to be re-created.
Since all applet data is, itself, organized as Java (Card) objects (such as objects instantiated from user-defined classes, arrays, or primitive type fields) that are stored as direct or indirect attributes of the applet instance, the applet instance is the root element (1) of all its associated data. Consequently, the necessity to delete and re-create that root instance also means that data stemming from that root element is deleted and re-initialized.
Of course, this could be overcome by making applets (and all child objects storing the applet data) serializable. That way, an applet could be serialized to an auxiliary storage area before updating its code base. After the update, the applet instance (and its object hierarchy) could then be re-created by de-serializing that data. However, allowing this would have a severe security implication: any managing instance could serialize (and, in the worst case, extract) that data.
This brings me to the second reason: In my opinion (though I was unable to find any authoritative resource on this), a central design principle of the Java Card platform and smartcards in general is to prevent extraction of sensitive data.
Consider the following example: You have an applet for digital signing (e.g. PIV, OpenPGP, or simply the applet you described in your question). The purpose of this applet is to securely store the secret key on a smartcard chip (dedicated hardware with protection features inhibiting physical extraction of the key material even if an attacker gains access to the physical card/chip). As a consequence, the applet securely generates its private key material on-chip. The secret private key may then be used for signing (and/or decryption), but it should never be allowed to leave the card (since this would open up for key/card duplication, keys leaking to attackers, etc.)
Now imagine that the smartcard runtime environment allows serialization of the applet data, including the secret private key. Even if the card would not provide any means to directly extract that serialized data, simply consider an attacker writing a new applet that has exactly the same structure as your own applet, except that it provides one additional method to extract the key material from the card. The attacker now creates an application package that indicates it is an update to your existing applet. Further assume that the attacker is able to load that application package onto the card. Now the attacker is able to update your applet and to break your original design goal (to make it impossible to extract the key from the card).
The design principle that overwriting an existing application/applet requires erasing the existing applet and all its data is also (somewhat) manifested in both the GlobalPlatform Card Specification and the Java Card Runtime Environment Specification:
GlobalPlatform Card Specification, Version 2.3, Oct. 2015:
When loading any application package with the INSTALL [for load] command, the OPEN needs to
"Check that the AID of the Load File is not already present in the GlobalPlatform Registry as an Executable Load File or Application."
When installing an applet instance with the INSTALL [for install] command, the OPEN needs to
"Check that the Application AID [...] is not already present in the GlobalPlatform Registry as an Application or Executable Load File"
Java Card 3 Platform, Runtime Environment Specification, Classic Edition, Version 3.0.4, Sept. 2011:
"The Java Card RE shall guarantee that an applet will not be deemed successfully installed in the following cases:
The applet package as identified by the package AID is already resident on the card.
The applet package contains an applet with the same Java Card platform name as that of another applet already resident on the card."
Moreover, the specification makes clear that applet removal must remove all objects owned by the applet instance:
"Applet instance deletion involves the removal of the applet object instance and the objects owned by the applet instance and associated Java Card RE structures."
Nevertheless, sometimes there might be valid reasons to make parts of an application updatable without wiping information such as secret keys, and there are some ways to overcome this:
Chip and card OS manufacturers probably do have ways to patch some functionality of the card operating system and runtime environment on existing cards. I can't cite any authoritative source for this assumption, though.
If you consider patch-/upgradability as a design goal from the beginning, you would certainly be able to design your application in a way that parts of it may be upgraded without losing associated data. You would typically split your application into multiple application packages that contain the different parts of your application functionality (also cf. Shuckey's answer). In the simplest form, one applet would contain and manage all the business logic, and another applet (it needs to be in a separate application package, though) would be responsible for storing sensitive persistable data. You could then use a Shareable interface to access the data stored in the data storage applet from the business logic applet.
However, you would certainly want to carefully craft the interface between the two instances. For example you would definitely not want to pass secret private keys around directly. Instead you would only want to share an interface to perform signing/decryption operations on data passed from the business logic applet to the data storage applet. Otherwise, the same extraction issues as indicated above would arise.
(1) This is not necessarily always true since data may (technically) be stored in static fields. In that case, the application package would be the root of these elements. However, doing this has a severe impact on security since access to static fields is not protected by applet isolation/the applet firewall. Cf. Java Card 3 Platform, Runtime Environment Specification, Classic Edition, Version 3.0.4, Sept. 2011, sect. 6.1.6:
"There is no runtime context check that can be performed when a class static field is accessed."
The GlobalPlatform Amendment H Card Executable Load File Update provides a solution for this issue.
(https://globalplatform.org/wp-content/uploads/2018/03/GPC_2.3_H_ELF_Upgrade_v1.1_PublicRelease.pdf). However, I don't know whether there is already a product on the market that implements this specification.

Can a running nodejs application cryptographically prove it is the same as published source code version?

Can a running nodejs program cryptographically prove that it is the same as a published source code version in a way that could not be tampered with?
Said another way, is there a way to ensure that the commands/code executed by a nodejs program are all and only the commands and code specified in a publicly disclosed repository?
The motivation for this question is the following: In an age of highly sophisticated hackers as well as pressure from government agencies for "backdoors" that allow them to snoop on private transactions and exchanges, can we ensure that an application has neither been hacked nor had a backdoor added?
As an example, consider an open source-based nodejs application like lesspass (lesspass/lesspass on github) which is used to manage passwords and available for use here (https://lesspass.com/#/).
Or an alternative program for a similar purpose encryptr (SpiderOak/Encryptr on github) with its downloadable version (https://spideroak.com/solutions/encryptr).
Is there a way to ensure that the versions available on their sites to download/use/install are running exactly the same code as is presented in the open source code?
Even if we have 100% faith in the integrity of the teams behind applications like these, how can we be sure they have not been coerced by anyone to alter the running/downloadable version of their program, to create a backdoor for example?
Thank you for your help with this important issue.
Sadly, no.
Simple as that.
The long version:
You are dealing with the outputs of a program, and want to ensure that the output is generated by a specific version of one specific program.
Let's check a few things:
Can an attacker predict the outputs of said program?
If we are talking about open source programs, yes: an attacker can predict what you are expecting to see and can even reproduce all underlying crypto checks against the original source code, or against all internal states of said program.
Imagine running the program inside a virtual machine with full debugging support, like firing events at certain points in the code, directly reading memory to extract cryptographic keys and so on. The attacker does not even have to modify the program to be able to keep copies of everything you do in plaintext.
So... even if you could cryptographically make sure that the code itself was not tampered with, it would be worth nothing: the environment itself could be designed to do something harmful, and as Maarten Bodewes wrote, in the end you need to trust something.
One could argue that a TPM could solve this, but I'm afraid of the world that leads to: in the end... you still have to trust something, like a manufacturer or, worse, a public office signing keys for TPMs... and as we know those would never... you hear?... never have other intentions than what's good for you... so basically you wouldn't win anything with a centralized TPM-based infrastructure.
You can do this cryptographically by having a runtime that checks signatures before running any code. Of course, you'd have to trust that runtime environment as well. Unless you have such an environment you're out of luck - that is, unless you do a full code review.
Furthermore, you can sign the build by placing a signature within the build system. The build system and developer access can in turn be audited. This is usually how secure development environments are built. But in the end you need to trust something.
If you're just afraid that a particular download is corrupted you can test against an official hash published at one or more trusted locations.

How could I start executables from within a modern browser?

I am responsible for our corporate application menu page (intranet only). It contains many links to resources (web pages and installed applications) and is tailored to the current user.
In the past, I have used an applet to allow installed applications to be started directly from the browser.
The corporate web is going through a revamp and I have been told to find a solution which requires no plugins of any kind.
My first attempt was to register a custom protocol handler. The menu provider contains definitions for all the links and application commands and each user has different rights. I could imagine that, when the menu is created for a user, the commands could be encoded and added as something like custom://base64encodedcommand. The handler would decode the command, perform checks and execute it.
This works well in IE, FF and Chrome. At the moment, we have mainly Windows workstations and it will be used only within the company intranet.
Is this a viable approach? Are there security issues?
Unfortunately, with any solution it is only possible to prove the existence of a vulnerability, never the lack thereof. But there are some necessary, though not sufficient, ways to make your system more resistant to attacks.
Currently you are base64 encoding the execution string. This adds absolutely nothing to security. Even if you chose some different method, this will only be security through obscurity, and can easily be reverse engineered by somebody with enough time.
What you can do is set up some sort of public-private key signing, so that you can sign each link with your own private key; a valid signature would mean that the link is allowed to be executed, while a link without a signature or with an invalid signature should not even be decoded.
So what you would have is custom://+base64link+separator+base64signature.
Things to keep in mind:
It is very important that only you (or a very select group of people) have access to the private key. This is the same as with any other public-private key system.
Not only should you not run the link if the signature is invalid, but you must not even decode it (thus you sign the base64 string, not the decoded command). Assume that it is an attack right away, and probably even inform the user of the fact.
And I repeat: while this can be considered necessary for security, it is not sufficient. So keep thinking of other possible attacks.
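Putting that together, the verification side (the registered protocol handler) might look roughly like the sketch below. This assumes a .NET handler, RSA signatures over SHA-256 (the RSA.VerifyData overload used here needs .NET 4.6 or later), and '|' as the separator (a character that can never occur in base64 text); the method and parameter names are illustrative:

    using System;
    using System.Security.Cryptography;
    using System.Text;

    // Sketch: verify the signature over the base64 text itself, and only
    // decode the command after the signature has been confirmed.
    static bool TryGetCommand(string uri, RSA trustedPublicKey, out string command)
    {
        command = null;
        // Expected form: custom://<base64link>|<base64signature>
        string payload = uri.Substring("custom://".Length);
        string[] parts = payload.Split('|');
        if (parts.Length != 2)
            return false; // malformed: treat as an attack, do not decode anything

        byte[] signedData = Encoding.ASCII.GetBytes(parts[0]); // the base64 text, not the decoded command
        byte[] signature = Convert.FromBase64String(parts[1]);

        bool valid = trustedPublicKey.VerifyData(signedData, signature,
            HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
        if (!valid)
            return false; // invalid signature: refuse, and consider informing the user

        command = Encoding.UTF8.GetString(Convert.FromBase64String(parts[0]));
        return true;
    }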

How can I add, delete, and update resources in CLR assemblies?

I've taken a long look around and can't find any information on altering managed resources in assemblies (note that I'm already familiar with Win32 resources and the APIs for altering those).
My application has resources that need to be updated by the end user, and the application will be distributed as a single executable (so I can't just use satellite assemblies).
I see a few possible workarounds, but they seem hackish:
The first is to use ILMerge: I'd create a new assembly in-memory which contains the new resources and use ILMerge to combine it with the original assembly to form the new program. The only downside is that resources cannot be updated or deleted.
The second is somewhat similar: There would be a .netmodule (emitted from the C# compiler) which is run through al.exe with the /embed switch to add the resources and form the new assembly. The downside being that none of the resources in the original assembly would be present.
I'm leaning towards the ILMerge option, but the terms on redistribution are ambiguous. The EULA makes no reference to redistribution rights (so I assume in this Negative Freedom society that it's permitted) yet the Microsoft Research page says redistribution is not permitted (but it's ambiguously worded, from what I can tell it might be referring to commercial redistribution, which wouldn't apply to me since this is a non-profit GPL project).
Thanks
IMHO, I don't think it is a good idea to do this anyway. If these resources are actually user data, even if there is a "preinstalled" set of them, they do not belong in an embedded resource.
Are your assemblies signed? You would have to re-sign them after changing them, so your private key would be exposed and everyone could sign your application. So it's not worth signing it, and you have a security risk anyway.
Move your resources to an external file. You can still embed the "predefined" resources. The first time your application starts, you create the external file and copy the embedded resources to the external file. If the external file exists, you don't care about the embedded resources anymore.
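A minimal sketch of that first-run copy (the resource name and target path are placeholders):

    using System.IO;
    using System.Reflection;

    // Sketch: on first run, copy the embedded "predefined" resource to an
    // external, writable file; afterwards only the external file is used.
    // GetManifestResourceStream returns null if the name does not match an
    // embedded resource, so the name must include the default namespace.
    static string EnsureExternalCopy(string resourceName, string targetPath)
    {
        if (!File.Exists(targetPath))
        {
            using (Stream embedded = Assembly.GetExecutingAssembly()
                                             .GetManifestResourceStream(resourceName))
            using (FileStream target = File.Create(targetPath))
            {
                embedded.CopyTo(target);
            }
        }
        return targetPath; // read and update this file from now on
    }

    // Example call (hypothetical names):
    // string path = EnsureExternalCopy("MyApp.Resources.defaults.xml", "defaults.xml");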
