How to preserve data when updating a Java Card / GlobalPlatform applet?

How can I update a Java Card applet that contains data that needs to be preserved across versions? As best I can tell, updating an applet is done by deleting it and then installing the new version, which seems like it would also delete any persistent data associated with the app.
As a concrete example, suppose I was writing an authentication and encryption applet. Version 1 of the applet would generate a key protected by the hardware on installation, and support signing messages, but not encrypting them. Suppose I then wanted to release a version 2 that also supported encryption, and could use the keys created by version 1. What would I need to do in version 1 and 2 in order to make that possible?
I'm open to solutions that use GlobalPlatform mechanics in addition to pure Java Card.

You need a second applet that owns all the objects you want to preserve across re-installation of the first applet. Let's call them the Storage applet and the Worker applet.
This means that every time the Worker applet needs to use resources from the Storage applet, it has to go through a Shareable interface. There is a penalty in code size and code maintainability, and a penalty in speed. I cannot think of another way to do this in Java Card or GlobalPlatform.
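A minimal sketch of that pattern might look like the following (all names and the choice of RSA signing are illustrative placeholders; the interface and the two applets would live in separate packages, and error handling plus access control on the client AID are omitted):

```java
import javacard.framework.AID;
import javacard.framework.APDU;
import javacard.framework.Applet;
import javacard.framework.Shareable;
import javacard.security.KeyBuilder;
import javacard.security.KeyPair;
import javacard.security.Signature;

// Shared interface (in its own library package): exposes operations, never raw key material.
public interface StorageShareable extends Shareable {
    short sign(byte[] inBuf, short inOff, short inLen, byte[] sigBuf, short sigOff);
}

// Storage applet: owns the key pair and survives updates of the Worker applet.
public class StorageApplet extends Applet implements StorageShareable {
    private KeyPair keyPair;
    private Signature sig;

    private StorageApplet() {
        keyPair = new KeyPair(KeyPair.ALG_RSA_CRT, KeyBuilder.LENGTH_RSA_2048);
        keyPair.genKeyPair();   // key is generated once, at installation time
        sig = Signature.getInstance(Signature.ALG_RSA_SHA_PKCS1, false);
        register();
    }

    public static void install(byte[] buf, short off, byte len) { new StorageApplet(); }

    public void process(APDU apdu) { /* no external commands needed for this sketch */ }

    // Hand out the shared interface; a real applet should check clientAID here.
    public Shareable getShareableInterfaceObject(AID clientAID, byte parameter) {
        return this;
    }

    public short sign(byte[] inBuf, short inOff, short inLen, byte[] sigBuf, short sigOff) {
        sig.init(keyPair.getPrivate(), Signature.MODE_SIGN);
        return sig.sign(inBuf, inOff, inLen, sigBuf, sigOff);
    }
}
```

On the Worker side, the interface is obtained with JCSystem.lookupAID() and JCSystem.getAppletShareableInterfaceObject(), cast to StorageShareable, and used for signing; when the Worker package is later deleted and re-installed, the Storage applet and its key remain untouched.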

You can't.
Updating an application package while preserving the data associated with any applet instantiated from that package is not possible on current Java Card smartcards.
For one, this has a rather simple technical reason:
Applets are Java (Card) objects instantiated from classes in the code base of the application package. Updating the code base would change the structure of classes and, consequently, the already instantiated applet objects would no longer match their class definitions. As a result, whenever an applet class in the application package changes, this also means that the respective applet instance needs to be re-created.
Since all applet data is itself organized as Java (Card) objects (such as objects instantiated from user-defined classes, arrays, or primitive-type fields) that are stored as direct or indirect attributes of the applet instance, the applet instance is the root element1 of all its associated data. Consequently, the necessity to delete and re-create that root instance also means that all data stemming from that root element is deleted and re-initialized.
Of course, this could be overcome by making applets (and all child objects storing the applet data) serializable. That way, an applet could be serialized to an auxiliary storage area before updating its code base. After the update, the applet instance (and its object hierarchy) could then be re-created by de-serializing that data. However, allowing this would have a severe security implication: any managing instance could serialize (and, in the worst case, extract) that data.
This brings me to the second reason: in my opinion (though I was unable to find any authoritative resource on this), a central design principle of the Java Card platform, and of smartcards in general, is to prevent extraction of sensitive data.
Consider the following example: You have an applet for digital signing (e.g. PIV, OpenPGP, or simply the applet you described in your question). The purpose of this applet is to securely store the secret key on a smartcard chip (dedicated hardware with protection features inhibiting physical extraction of the key material even if an attacker gains access to the physical card/chip). As a consequence, the applet securely generates its private key material on-chip. The secret private key may then be used for signing (and/or decryption), but it should never be allowed to leave the card (since this would open up for key/card duplication, keys leaking to attackers, etc.)
Now imagine that the smartcard runtime environment allows serialization of the applet data, including the secret private key. Even if the card would not provide any means to directly extract that serialized data, simply consider an attacker writing a new applet that has exactly the same structure as your own applet, except that it provides one additional method to extract the key material from the card. The attacker now creates an application package that indicates it is an update to your existing applet. Further assume that the attacker is able to load that application package onto the card. Now the attacker is able to update your applet and to break your original design goal (to make it impossible to extract the key from the card).
The design principle that overwriting an existing application/applet requires erasing the existing applet and all its data is also (somewhat) manifested in both the GlobalPlatform Card Specification and the Java Card Runtime Environment Specification:
GlobalPlatform Card Specification, Version 2.3, Oct. 2015:
When loading any application package with the INSTALL [for load] command, the OPEN needs to
"Check that the AID of the Load File is not already present in the GlobalPlatform Registry as an Executable Load File or Application."
When installing an applet instance with the INSTALL [for install] command, the OPEN needs to
"Check that the Application AID [...] is not already present in the GlobalPlatform Registry as an Application or Executable Load File"
Java Card 3 Platform, Runtime Environment Specification, Classic Edition, Version 3.0.4, Sept. 2011:
"The Java Card RE shall guarantee that an applet will not be deemed successfully installed in the following cases:
The applet package as identified by the package AID is already resident on the card.
The applet package contains an applet with the same Java Card platform name as that of another applet already resident on the card."
Moreover, the specification makes clear that applet removal must remove all objects owned by the applet instance:
"Applet instance deletion involves the removal of the applet object instance and the objects owned by the applet instance and associated Java Card RE structures."
Nevertheless, sometimes there might be valid reasons to make parts of an application updatable without wiping information such as secret keys, and there are some ways to overcome this:
Chip and card OS manufacturers probably do have ways to patch some functionality of the card operating system and runtime environment on existing cards. I can't cite any authoritative source for this assumption, though.
If you consider patch-/upgradability as a design goal from the beginning, you would certainly be able to design your application in a way that parts of it may be upgraded without losing associated data. You would typically split your application into multiple application packages that contain the different parts of your application functionality (also cf. Shuckey's answer). In the simplest form, one applet would contain and manage all the business logic, and another applet (it needs to be in a separate application package, though) would be responsible for storing sensitive persistent data. You could then use a Shareable interface to access the data stored in the data storage applet from the business logic applet.
However, you would certainly want to carefully craft the interface between the two instances. For example, you would definitely not want to pass secret private keys around directly. Instead, you would only want to share an interface to perform signing/decryption operations on data passed from the business logic applet to the data storage applet. Otherwise, the same extraction issues as indicated above would arise.
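As a rough illustration (the interface and method names are made up), the shared interface would expose the operations rather than the keys:

```java
import javacard.framework.Shareable;

// Sketch of the shape such an interface could take.
public interface SecretStoreShareable extends Shareable {
    // Avoid: a method that copies raw private key material to the caller's buffer.
    // short exportPrivateKey(byte[] outBuf, short outOff);

    // Prefer: perform the sensitive operation inside the data storage applet.
    short sign(byte[] inBuf, short inOff, short inLen, byte[] sigBuf, short sigOff);
    short decrypt(byte[] inBuf, short inOff, short inLen, byte[] outBuf, short outOff);
}
```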
1) This is not necessarily always true since data may (technically) be stored in static fields. In that case, the application package would be the root of these elements. However, doing this has a severe impact on security since access to static fields is not protected by applet isolation/the applet firewall. Cf. Java Card 3 Platform, Runtime Environment Specification, Classic Edition, Version 3.0.4, Sept. 2011, sect. 6.1.6:
"There is no runtime context check that can be performed when a class static field is accessed."

The GlobalPlatform Amendment H, Executable Load File Upgrade, provides a solution for this issue (https://globalplatform.org/wp-content/uploads/2018/03/GPC_2.3_H_ELF_Upgrade_v1.1_PublicRelease.pdf). However, I don't know whether there is already a product on the market that implements this specification.

Related

Windows equivalent of application-scoped Linux Wallet

In Linux, there's a KDE Wallet (and GNOME Wallet) application that stores passwords and other sensitive data. These wallets by default prevent accidental data access by applications other than the one that stored the data.
E.g. if the piece of data was stored by /bin/app1, then /bin/app2 won't have full access to that data, and the wallet will first ask the user whether they really want to allow /bin/app2 to access the data stored by /bin/app1.
I find this feature important for some aspects of local data security for an application I participate in.
On Windows, a somewhat analogous UX is provided by wincred.h, but, as I currently understand it, there are no per-application restrictions in it. It will provide data access to any application started by the current user, and thus provides less security than the application-scoped defaults of the Linux wallets.
Is there any way to achieve a similar application- (or vendor-) scoped security in Windows using only standard APIs?

Security threats to updating a java desktop application

I'm looking at security threats to my Java application when doing updates.
I'm also looking for ways to update my application, so that if an urgent update is needed it can be forced onto the user. What would be the security issues with these ways of updating?
You need to be more specific. What mechanism do you use to update your application?
A way to update your app is, for example, to replace single class files.
In general, you have to check the source of the update. A possible attacker could try to fake an update (class file) to get into the host. To counter this threat, you should sign your updates with a private key and use the corresponding public key to check whether the signature is valid. (Overall, you should sign your applications/JAR files; see Java Code Signing.)
Code signing is also useful if an attacker tries to trick the user into installing a manipulated update.
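For illustration, here is a minimal sketch of such a check using the standard java.security APIs. It assumes a detached signature file and a DER-encoded public key shipped with the application; the file locations and key handling are placeholders:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.KeyFactory;
import java.security.PublicKey;
import java.security.Signature;
import java.security.spec.X509EncodedKeySpec;

// Sketch: verify a detached RSA signature over a downloaded update file before applying it.
public final class UpdateVerifier {

    public static boolean isAuthentic(Path updateFile, Path signatureFile, byte[] publicKeyDer)
            throws Exception {
        // Public key bundled with the application (X.509/SubjectPublicKeyInfo encoding).
        PublicKey publicKey = KeyFactory.getInstance("RSA")
                .generatePublic(new X509EncodedKeySpec(publicKeyDer));

        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(publicKey);
        verifier.update(Files.readAllBytes(updateFile));

        // Apply the update only if this returns true.
        return verifier.verify(Files.readAllBytes(signatureFile));
    }
}
```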
If you use object serialization, you need to be aware of additional issues (see Object (De-)Serialization Vulnerabilities).
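If update data is deserialized, a deserialization filter (available since Java 9) can restrict which classes are accepted. A minimal sketch, with a placeholder package pattern:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.ObjectInputFilter;
import java.io.ObjectInputStream;

// Sketch (Java 9+): only allow classes from an expected package to be deserialized.
public final class SafeDeserializer {

    public static Object read(byte[] updateBytes) throws IOException, ClassNotFoundException {
        // "com.example.updates.*" is a placeholder; "!*" rejects everything else.
        ObjectInputFilter filter =
                ObjectInputFilter.Config.createFilter("com.example.updates.*;java.base/*;!*");

        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(updateBytes))) {
            in.setObjectInputFilter(filter); // rejected classes fail with InvalidClassException
            return in.readObject();
        }
    }
}
```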
Another Stack Overflow question about updating Java applications: How can I write a Java application that can update itself at runtime?

Two OwnerPIN objects in Java Card

I am working on a Java Card application where our requirement is to keep some static data and a balance on the card.
For security, I was thinking of creating two OwnerPIN objects. One object is for terminal authentication (i.e. the terminal needs to send 8 bytes of data to authenticate itself) and the other object is for user authentication (i.e. the user needs to enter a 4-digit PIN to authenticate themselves).
Only if both authentications succeed can the data be read or the balance updated.
Or is there any other advice on how to implement security on the card to avoid theft?
Also, is there any guideline for choosing proprietary class and instruction bytes during applet development?
For user authentication, the OwnerPIN is certainly one good way to go (there are alternatives of course, but OwnerPIN provides security features (e.g. tearing protection) that you would otherwise have to implement manually).
For terminal authentication, nothing should prevent you from using an approach based on an instance of OwnerPIN. However, depending on your security requirements, you might want to choose some form of mutual authentication instead of a simple PIN code. If the terminal simply sends a PIN code (especially if it does so in plain text), an attacker could intercept that PIN code while it is sent to a card and then use that discovered PIN code to create their own (malicious) terminal.
With regard to class and instruction bytes (and especially with regard to standard operations like PIN verification), I would suggest that you stick to standards. ISO/IEC 7816-4 defines instructions for many such standard operations.
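A minimal applet sketch combining both ideas might look like this (the try limits, PIN lengths, and the balance instruction byte are illustrative placeholders; only the VERIFY instruction, INS 0x20, follows ISO/IEC 7816-4):

```java
import javacard.framework.APDU;
import javacard.framework.Applet;
import javacard.framework.ISO7816;
import javacard.framework.ISOException;
import javacard.framework.OwnerPIN;

// Sketch: two OwnerPIN objects, both of which must be validated before the balance is readable.
public class WalletApplet extends Applet {

    private static final byte INS_VERIFY       = (byte) 0x20; // ISO/IEC 7816-4 VERIFY
    private static final byte INS_READ_BALANCE = (byte) 0x50; // placeholder, proprietary instruction

    private static final byte P2_USER_PIN     = (byte) 0x01;
    private static final byte P2_TERMINAL_PIN = (byte) 0x02;

    private OwnerPIN userPin;      // 4-digit user PIN
    private OwnerPIN terminalPin;  // 8-byte terminal code

    private WalletApplet() {
        userPin = new OwnerPIN((byte) 3, (byte) 4);
        terminalPin = new OwnerPIN((byte) 3, (byte) 8);
        // Initial PIN values would normally come from the installation parameters.
        register();
    }

    public static void install(byte[] buf, short off, byte len) { new WalletApplet(); }

    public void process(APDU apdu) {
        if (selectingApplet()) {
            return;
        }
        byte[] buffer = apdu.getBuffer();
        switch (buffer[ISO7816.OFFSET_INS]) {
            case INS_VERIFY:
                verify(apdu, buffer);
                break;
            case INS_READ_BALANCE:
                // Require both the terminal and the user to be authenticated.
                if (!userPin.isValidated() || !terminalPin.isValidated()) {
                    ISOException.throwIt(ISO7816.SW_SECURITY_STATUS_NOT_SATISFIED);
                }
                // ... send the balance ...
                break;
            default:
                ISOException.throwIt(ISO7816.SW_INS_NOT_SUPPORTED);
        }
    }

    private void verify(APDU apdu, byte[] buffer) {
        short len = apdu.setIncomingAndReceive();
        OwnerPIN pin = (buffer[ISO7816.OFFSET_P2] == P2_TERMINAL_PIN) ? terminalPin : userPin;
        if (!pin.check(buffer, ISO7816.OFFSET_CDATA, (byte) len)) {
            ISOException.throwIt(ISO7816.SW_SECURITY_STATUS_NOT_SATISFIED);
        }
    }
}
```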

How can I protect a key against other applications?

Setup
I have a SQLite database which has confidential user information.
This database may be replicated on other machines
I trust the user, but not other applications
The user has occasional access to a global server
Security Goals
No program other than the authorized one (mine) can access the SQLite database.
Breaking the security on one machine will NOT break the security on other machines
The system must be updatable (meaning that if some algorithm such as a specific key generation algorithm is shown to be flawed, it can be changed)
Proposed Design
Use an encrypted SQLite database storing the key within OS secure storage.
Problems
Any Windows hack will allow the person to access the key for all machines, which violates goal #2.
Notes
Similar to this method, if I store the key in the executable, breaking the security will compromise all systems.
Also, I have referenced Windows secure storage. While I will go with an OS-specific solution if I have to, I would prefer a non-OS-specific solution.
Any idea on how to meet the design goals?
I think you will need to use TPM hardware, e.g. via TBS (TPM Base Services) or something similar, to actually make a secure version of this. My understanding is that the TPM lets the application check that it is not being debugged or traced at a software level, and the operating system should prevent any other application from pretending to the TPM module that it is your application. I may be wrong, though.
You can use some kind of security-through-obscurity kludge, but it will be crackable with a debugger unless you use a TPM.

MEF: Component authentication

I am building a Windows (Service) application that, in short, consists of a "bootstrapper" and an "engine" (an object loaded by the bootstrapper, which transfers control to it, and then performs the actual tasks of the application). The bootstrapper is a very basic startup routine that has few features that are likely to change. But the engine itself could be subject to upgrades after installation, and I am implementing a mechanism so that it can upgrade itself - by contacting a "master server" and checking its version number against a "most current" version. If there is a newer version of the engine available, it will download it into a designated folder and call a method in the bootstrapper to "restart".
So, whenever the bootstrapper starts up, it uses MEF to "scan" the appropriate directories for implementations of IEngine, compares their bootstrapper compatibility numbers and picks the newest compatible engine version. Then it transfers control to the engine (which then, in turn, performs the update check etc). If there are no eligible IEngines - or MEF fails during composition - it falls back on a default, built-in implementation of IEngine.
This application will be running on a remote server (or several), and the whole rationale behind this is to keep manual application maintenance to a minimum (as in not having to uninstall/download new version/reinstall etc).
So, the problem: Since the bootstrapper effectively transfers program execution to a method on the IEngine object, a malicious IEngine implementation (or impersonator) that somehow found its way to the application's scanned folders could basically wreak total havoc on the server if it got loaded and was found to be the most eligible engine version.
I am looking for a mechanism to verify that the IEngine implementation is "authentic" - as in issued by a proper authority. I've been playing around with some home-brewed "solutions" (having IEngine expose a Validate function that is passed a "challenge" and has to return a proper "response" - in various ways, like having the bootstrapper produce a random string that is encrypted and passed to the engine candidate, which then has to decrypt and modify the string, then hash it, encrypt the hash and return it to the bootstrapper, which performs a similar string modification on its own random string, hashes that, and compares that hash to the decrypted response (hash) from the candidate, etc.), but I'm sure there are features in .NET to perform this kind of verification? I just looked at Strong Naming, but it seems it's not the best way for a system that will be dynamically loading yet-unthought-of DLLs.
Input will be greatly appreciated.
Assemblies can be digitally signed with a private key. The result is called a strong named assembly.
When a strong named assembly is loaded, .NET automatically checks whether its signature matches the embedded public key. So when a strong named assembly has been loaded, you have the guarantee that the author possesses the private key that corresponds to that public key.
You can get the public key by calling Assembly.GetName().GetPublicKey() and then compare it to the expected one, i.e. yours.
You can scan over the plugin assemblies, create an AssemblyCatalog for each one with the right public key (rejecting the others), finally aggregating them into an AggregateCatalog and building a CompositionContainer with it.
This is basically what Glenn Block also explained in this thread. (Best ignore the blog post linked there by Bnaya, his interpretation of StrongNameIdentityPermission is not correct.)
edit with responses to the wall of comments:
"To get that public key, I make the console application output the public key byte array to somewhere. I embed the byte array in my host application, and subsequently use that to compare against the public keys of plugin candidates. Would that be the way to do it?"
Yes, but there is a simpler way to extract the public key. Look at the -Tp option of sn.exe.
"Does this mechanism automatically prevent a malicious plugin assembly from exposing a correct, but 'faked', public key? As in, is there some mechanism to disqualify any assembly that is signed, but has a mismatch between its exposed public key and its internal private key, from being loaded/run at all?"
As far as I know, the check happens automatically. A strong named assembly cannot be loaded (even dynamically) if its signature is wrong. Otherwise the strong name would be useless. To test this, you can open your strong named assembly in a hex editor, change something (like a character in a const string embedded in the assembly) and verify that the assembly can no longer be loaded.
I guess what I was referring to was something akin to the type of hack/crack described here:
http://www.dotnetmonster.com/Uwe/Forum.aspx/dotnet-security/407/Signed-assemblies-easily-cracked
and here: Link
[...snip more comments...]
However, this can - apparently - be bypassed by simple tampering (as shown in the first link, and explained more here): grimes.demon.co.uk/workshops/fusionWSCrackOne.htm
The "attacks" you refer to fall in three categories:
removing the strong name altogether. This does not break the authentication: the assembly will no longer have a public key, and so you will reject it.
disabling the strong name check, which requires full access to the machine. If this was done by an attacker, then it would mean that the attacker already owns your machine. Any security mechanism would be meaningless in such a context. What we are actually defending against is an attacker between the machine and the source of the assemblies.
a real exploit made possible by a bug in .NET 1.1 that has since been fixed
Conclusion: strong names are suitable to use for authentication (at least since .NET 2.0)
I've written a blog post with source code for a catalog which only loads assemblies with keys that you specify: How to control who can write extensions for your MEF application
