Java Card applets, secure data transmission and Secure Channel

I want to write my applet in a way that its APDU commands and status words are not sent in the clear over the transmission channel between my card and my reader. In other words, I don't want the APDU commands and responses to be readable as plain text by third parties.
I think I have two options:
After selecting my applet on the card, for all the other commands, encrypt the data section of the APDU commands off-card and decrypt them on the card before analyzing them. Note that I can't encrypt the whole command with this approach, because the result might collide with a SELECT APDU command and the card's Security Domain (SD) would wrongly recognize it as a SELECT command. Is that right?
Its diagram: (image not included)
Using the SD Secure Channel: as far as I know, a secure channel means that all of the APDU commands and responses are transmitted in encrypted form, i.e. they are encrypted at the source (Security Domain / card reader) and decrypted at the destination (Security Domain / card reader). Is that right? As far as I know, the SD performs the cryptographic role in this mechanism, and the communication between my applet and the SD is in plain text (diagram below), right?
Its diagram: (image not included)
Is there any other way?
It seems that the first solution is not good enough, because:
I must implement it myself! :)
We can't hide all parts of the command and response from third parties (we can only hide the data).
Am I right?
Now, let's assume that I want to be sure that my applet works only with APDU commands that are transmitted through the secure channel. I think I have two options again:
Put the card into the SECURED state. Since the user can't communicate with the card using plain-text APDU commands in this state (right?), he must send the commands to my applet through the secure channel, right? If that is not correct, is there any way to force the SD to work with the secure channel only?
Keep the card in whatever life-cycle state it is in (for example OP_READY), but instead, on reception of any APDU command, check the CLA byte to see whether it was transmitted securely or not. (Is that possible? Is there any difference between the CLA byte of APDU commands that come through the secure channel and the others? Am I right?)
Is there any other way?
And finally the main question:
How can I use the SD to have secure communication with my applet? Since I thought I must use the GlobalPlatform classes (must I?), I took a look at its APIs. I found a method named getSecureChannel in the GPSystem class of the org.globalplatform package. Am I on the right track? Must I use this method?
I know that this may be too long to answer, but I'm sure it will clarify a lot of questions not only for me, but also for future readers.
I would appreciate anybody shedding some light on this issue for me.
And a sample applet would be even more appreciated.

Don't worry about secure channel communication in your applet. It's very simple if you use the GlobalPlatform APIs in your applet.
You don't need to think about lots of these questions; just write a secure channel applet and the security commands will be processed according to the security level defined in the command data.
Refer to the GP secure channel APIs:
http://www.win.tue.nl/pinpasjc/docs/apis/gp22/
And you should keep the card in the SECURED state.
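If you also want the applet itself to check that state, a minimal sketch (my addition, not part of the original answer; the status word returned is an arbitrary choice) using the GP API could look like this:
// Sketch only: refuse to work unless the card is in the SECURED life-cycle state.
// GPSystem.getCardState() and CARD_SECURED come from org.globalplatform.GPSystem.
if (GPSystem.getCardState() != GPSystem.CARD_SECURED) {
    ISOException.throwIt(ISO7816.SW_CONDITIONS_NOT_SATISFIED); // 0x6985
}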
And here is a sample applet for secure channel SCP02:
package secureChannel;
import javacard.framework.APDU;
import javacard.framework.Applet;
import javacard.framework.ISO7816;
import javacard.framework.ISOException;
import org.globalplatform.GPSystem;
import org.globalplatform.SecureChannel;
public class Scp02 extends Applet
{
final static byte INIT_UPDATE = (byte) 0x50;
final static byte EXT_AUTHENTICATE = (byte) 0x82;
final static byte STORE_DATA = (byte) 0xE2;
public static void install(byte[] bArray, short sOffset, byte bLength)
{
new Scp02().register(bArray, sOffset, bLength);
}
public void process(APDU apdu) throws ISOException
{
SecureChannel sc = GPSystem.getSecureChannel();
byte[] buffer = apdu.getBuffer();
short inlength = 0;
switch (buffer[ISO7816.OFFSET_INS])
{
case INIT_UPDATE:
case EXT_AUTHENTICATE:
inlength = apdu.setIncomingAndReceive();
sc.processSecurity(apdu);
break;
case STORE_DATA:
//Receive command data
inlength = apdu.setIncomingAndReceive();
inlength = sc.unwrap(buffer, (short) 0, inlength);
apdu.setOutgoingAndSend((short)0, inlength);
//Process data
break;
}
}
}

I'll answer in order:
Yes, for ISO/IEC 7816-4 only the data section is encrypted. The header is just protected by the authentication tag.
No, the Global Platform secure channel also just (optionally) encrypts the data. The integrity is over header and command data though.
No, the SECURED state is for GlobalPlatform only; you'll have to program this yourself using the on-card GP API. The GP API has methods to perform authentication, request the secure channel and retrieve the current state.
Correct, the CLA byte indicates whether the APDU uses secure messaging (not how it is secured, though). If the most significant bit of the CLA is zero, your secure channel must however be compliant with ISO/IEC 7816-4 (a small on-card check is sketched below).
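To illustrate, an applet that insists on an authenticated and encrypted GP secure channel could contain a check along these lines (a sketch of mine, meant to sit inside an applet like the sample above; getSecurityLevel(), AUTHENTICATED and C_DECRYPTION are defined in org.globalplatform.SecureChannel):
// Reject any command that did not arrive through an authenticated,
// encrypted GlobalPlatform secure channel.
private void requireEncryptedChannel(SecureChannel sc) {
    byte level = sc.getSecurityLevel();
    if ((byte) (level & SecureChannel.AUTHENTICATED) == 0
            || (byte) (level & SecureChannel.C_DECRYPTION) == 0) {
        ISOException.throwIt(ISO7816.SW_SECURITY_STATUS_NOT_SATISFIED);
    }
}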

For the sake of Google searchers: the code from Anurag Bajpai doesn't work without a slight modification since, as stated in the GP secure channel APIs, the applet is responsible for outputting any response data:
If response data is present, this data will be placed in the APDU buffer at offset ISO7816.OFFSET_CDATA. The return value indicates the length and the applet is responsible for outputting this data if necessary.
Hence, the corrected code is:
package secureChannel;
import javacard.framework.APDU;
import javacard.framework.Applet;
import javacard.framework.ISO7816;
import javacard.framework.ISOException;
import org.globalplatform.GPSystem;
import org.globalplatform.SecureChannel;
public class Scp02 extends Applet
{
final static byte INIT_UPDATE = (byte) 0x50;
final static byte EXT_AUTHENTICATE = (byte) 0x82;
final static byte STORE_DATA = (byte) 0xE2;
public static void install(byte[] bArray, short sOffset, byte bLength)
{
new Scp02().register(bArray, sOffset, bLength);
}
public void process(APDU apdu) throws ISOException
{
SecureChannel sc = GPSystem.getSecureChannel();
byte[] buffer = apdu.getBuffer();
short inlength = 0;
switch (buffer[ISO7816.OFFSET_INS])
{
case INIT_UPDATE:
case EXT_AUTHENTICATE:
inlength = apdu.setIncomingAndReceive();
short respLen = sc.processSecurity(apdu);
apdu.setOutgoingAndSend(ISO7816.OFFSET_CDATA, respLen);
break;
case STORE_DATA:
//Receive command data
inlength = apdu.setIncomingAndReceive();
inlength = sc.unwrap(buffer, (short) 0, inlength);
apdu.setOutgoingAndSend((short)0, inlength);
//Process data
break;
}
}
}

Please note that it is incorrect to call apdu.setIncomingAndReceive() before calling sc.processSecurity(apdu), as processSecurity() "is responsible for receiving the data field of commands that are recognized".
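In other words, the INIT UPDATE / EXTERNAL AUTHENTICATE branch can be reduced to something like the following sketch (my reading of that API note; the names are the same as in the applet above):
case INIT_UPDATE:
case EXT_AUTHENTICATE:
    // processSecurity() receives the command data itself, so setIncomingAndReceive()
    // is not called first; any response data is placed at ISO7816.OFFSET_CDATA.
    short respLen = sc.processSecurity(apdu);
    apdu.setOutgoingAndSend(ISO7816.OFFSET_CDATA, respLen);
    break;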


Best practice for creating a custom expandable protocol architecture? [closed]

Disclaimer: I have zero experience creating custom, larger-scale protocols.
I am about to start a new project for fun (preferably in Java), consisting of one Master-Server (MS), several smaller servers (SS) on the same network, and several clients. All three of those parties should be able to communicate information to each other.
Examples:
A Client 'logs in' to the MS.
The MS sends a client to an SS. (SS has to be started, MS sends IP/PORT of SS to Client and tells him to connect, SS waiting for Client to connect, ...)
SS and Client communicate information to each other (e.g. game server and client)
Most of my experience with custom protocols and packets on a larger scale comes from Minecraft servers (Spigot, etc.). Even when reading those servers' packet systems I still get a bit confused.
While researching this, most of the time I only found basic tutorials on how to create a TCP/UDP server-client model in various programming languages, which is not what I'm interested in.
What I want to Know:
I want to create my own protocol 'architecture', but I have no idea where to start. I want it to be very expandable, but not too complex.
Are there any common practices for creating good packets, i.e. "What should a packet message look like?"
A simple answer or a link recommendation could already help me quite a lot! I know it is a very broad question, however I need to start at some point.
Basically what you are describing is a proxy server.
For now, this is what has come to my mind. Let me know if you have any doubts so that I can address them by expanding this answer.
What is a proxy server?
A proxy server is a server that routes incoming traffic to other servers (internal or external) and acts as an intermediary between the client and the final server.
There are multiple approaches to your problem.
Approach 1: Nginx + JSON
In this case, I would recommend you use a proxy server like Nginx that uses the HTTP protocol. The information would then be transferred as JSON strings instead of raw binary packets, which would simplify the problem quite a lot.
For more info about NGINX:
Main website
Official docs
Nice youtube tutorial series for beginners.
For more info about JSON:
Working with JSON. Nice introduction by Mozilla.
Working with JSON in java with Jackson.
Approach 2: Making your own proxy server and using binary packets
For the proxy part, you could use Java sockets and a class that distributes the connections by reading an opening packet from the client in which it specifies the wanted destination. Then you would have two options:
Redirect the (Client-Proxy) socket streams to the (Proxy-WantedDestination) socket.
Tell the WantedDestination to open a connection to the client (ServerSocket on the client and Socket on the WantedDestination). This way, the WantedDestination opens a socket connection with the Client instead of the Client opening a connection with the WantedDestination.
The first method allows you to log all incoming and outgoing data. The second method allows you to keep the WantedDestination secure.
First method:
Client <--> Proxy <--> WantedDestination (2 Sockets)
Second method:
Step 1: Client <--> Proxy
Step 2: Proxy <--> WantedDestination
Step 3: Client <---------------> WantedDestination (1 socket)
How to structure packets
I usually structure packets in the following way:
Packet header
Packet length
Packet payload
Packet checksum
The packet header can be used to identify whether the packet is coming from your software and whether you are starting to read the data from the right position.
The packet length will indicate how many bytes the stream must read before trying to deserialize the packet into its wrapper class. Let's imagine that the header has a length of 2 bytes and that the length has a length of 3 bytes. Then if the length indicates that the packet is 30 bytes long, you will know that the end of the packet is (30 - 3 - 2) = 25 bytes away.
The packet payload will have a variable size and will contain some fixed size bytes at the beginning indicating the packet type. The packet type can be chosen arbitrarily. For example, you can determine that a packet of type (byte) 12 must be interpreted as a packet containing data about a pong match.
Finally, the packet checksum is computed over the bytes of the packet so that you can verify the integrity of the packet. Java already provides some checksum algorithms, such as CRC32. If Packet Checksum = CRC32(Packet Header, Packet Length, Packet Payload), then the data is not corrupted.
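As an illustration of this framing (a sketch of mine, not from the answer; the 2-byte header, 4-byte big-endian length and 8-byte CRC32 value stored as a long are assumptions chosen for simplicity), reading one packet from a stream could look like this:
import java.io.DataInputStream;
import java.io.IOException;
import java.util.zip.CRC32;

public final class PacketReader {
    private static final byte[] HEADER = {(byte) 0xAB, (byte) 0xBA};

    // Reads one packet laid out as header | length | payload | checksum and
    // returns its payload, or throws if the header or checksum do not match.
    public static byte[] readPayload(DataInputStream in) throws IOException {
        byte[] header = new byte[HEADER.length];
        in.readFully(header);
        for (int i = 0; i < HEADER.length; i++) {
            if (header[i] != HEADER[i]) {
                throw new IOException("Not one of our packets");
            }
        }
        int totalLength = in.readInt(); // whole packet, all fields included
        byte[] payload = new byte[totalLength - HEADER.length - 4 - 8];
        in.readFully(payload);
        long storedCrc = in.readLong();
        CRC32 crc = new CRC32();
        crc.update(header);
        crc.update(payload);
        if (crc.getValue() != storedCrc) {
            throw new IOException("Corrupted packet: checksum mismatch");
        }
        return payload;
    }
}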
In the end, a packet is a byte array that can be transmitted using Java input and output streams. Despite this, working directly with byte arrays can usually be difficult and frustrating, so I would recommend that you use a wrapper class to represent a packet and then extend that class to create other packets. For example:
package me.PauMAVA.DBAR.common.protocol;
import java.io.Serializable;
import java.util.Arrays;
import java.util.zip.CRC32;
import java.util.zip.Checksum;
import static me.PauMAVA.DBAR.common.util.ConversionUtils.*;
public abstract class Packet implements Serializable {
public static final byte[] DEFAULT_HEADER = new byte[]{(byte) 0xAB, (byte) 0xBA};
private final byte[] header;
private final byte packetType;
private byte[] packetParameter;
private byte[] packetData;
private byte[] packetCheckSum;
Packet(PacketType type, PacketParameter parameter) {
this(type, parameter, new byte[0]);
}
Packet(PacketType type, PacketParameter parameter, byte[] data) {
this.header = DEFAULT_HEADER;
this.packetType = type.getCode();
this.packetParameter = parameter.getData();
this.packetData = data;
recalculateChecksum();
}
public byte[] getParameterBytes() {
return packetParameter;
}
public PacketParameter getPacketParameter() {
return PacketParameter.getByData(packetParameter);
}
public byte[] getPacketData() {
return packetData;
}
public void setParameter(PacketParameter parameter) {
this.packetParameter = parameter.getData();
recalculateChecksum();
}
public void setPacketData(byte[] packetData) {
this.packetData = packetData;
recalculateChecksum();
}
public void recalculateChecksum() {
Checksum checksum = new CRC32();
checksum.update(header);
checksum.update(packetParameter);
checksum.update(packetType);
if (packetData.length > 0) {
checksum.update(packetData);
}
this.packetCheckSum = longToBytes(checksum.getValue());
}
public byte[] toByteArray() {
return concatArrays(header, new byte[]{packetType}, packetParameter, packetData, packetCheckSum);
}
}
And then a custom packet could be:
package me.PauMAVA.DBAR.common.protocol;
import java.nio.charset.StandardCharsets;
import static me.PauMAVA.DBAR.common.util.ConversionUtils.subArray;
public class PacketSendPassword extends Packet {
private String passwordHash;
public PacketSendPassword() {
super(PacketType.SEND_PASSWORD, PacketParameter.NO_PARAM);
}
public PacketSendPassword(String passwordHash) {
super(PacketType.SEND_PASSWORD, PacketParameter.NO_PARAM);
super.setPacketData(passwordHash.getBytes(StandardCharsets.UTF_8));
}
@Override
public byte[] serialize() {
return toByteArray();
}
@Override
public void deserialize(byte[] data) throws ProtocolException {
validate(data, PacketType.SEND_PASSWORD, PacketParameter.NO_PARAM);
PacketParameter packetParameter = PacketParameter.getByData(subArray(data, 3, 6));
if (packetParameter != null) {
super.setParameter(packetParameter);
}
byte[] passwordHash = subArray(data, 7, data.length - 9);
super.setPacketData(passwordHash);
this.passwordHash = new String(passwordHash, StandardCharsets.UTF_8);
}
public String getPasswordHash() {
return passwordHash;
}
}
Sending a packet over a stream would be as easy as:
byte[] buffer = packet.serialize();
dout.write(buffer);
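For completeness (an assumption of mine: dout is just a DataOutputStream wrapped around the socket's output stream, and the host and port below are placeholders):
import java.io.DataOutputStream;
import java.net.Socket;

Socket socket = new Socket("proxy.example.org", 25565);
DataOutputStream dout = new DataOutputStream(socket.getOutputStream());
dout.write(packet.serialize());
dout.flush();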
You can take a look at a small protocol that I developed for a Bukkit server auto reloader here.
Be advised that this method will require you to convert between different data types and byte arrays, so you will need a good understanding of numeric and character representation in binary.
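For example, the longToBytes helper used in the checksum code above could be implemented like this (hypothetical; the answer's actual ConversionUtils class is not shown):
import java.nio.ByteBuffer;

// Big-endian conversions between long and byte[].
static byte[] longToBytes(long value) {
    return ByteBuffer.allocate(Long.BYTES).putLong(value).array();
}

static long bytesToLong(byte[] bytes) {
    return ByteBuffer.wrap(bytes).getLong();
}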

Can't publish data from CC3100 + MSP430F5529 on PUBNUB

I followed the following tutorial: http://www.pubnub.com/blog/pubnub-streaming-texas-instruments-iot/
step by step, and I managed to compile the code and connect to my Wi-Fi access point.
I think I managed to connect to PubNub (the code prints "PubNub set up" on the terminal screen), but in the code there is no real verification that it was indeed set up.
I opened an account on PubNub and I named my channel "testing" (I named it the same in the code I uploaded - I checked that a million times) and when I go to the Dev Console and click on subscribe I can't see anything! I mean I can post messages through the Dev Console but what I really want to see are the messages from the CC3100.
I checked the UART terminal on my computer and I see the data being printed constantly so I know it is working.
I went over the tutorial again and again and I'm doing the same thing but it just doesn't work.
Any help would be appreciated!
What am I missing?
Thanks
First, to verify that your PubNub account is properly configured and your local Wi-Fi connectivity is working: are you able to publish messages from the dev console in one browser and receive them in the dev console in another browser (both using the same channel name, of course)? If that works, please send a message to help (at) pubnub (dot) com with your sub-key info and information about your project, and we will try to assist you in tracking down the issue.
This answer is posted really late. I admit I forgot about this post so I just decided to update it (a few years late though).
I started digging to try to see what the problem was, and I think I found it. First of all, I saw that PubNub.publish() wasn't working properly with the json_String because the json_String was 90% gibberish. So I erased most of the code that constructed the json_String (the part that inserts the analog values) and made it simpler.
I then also added a piece of code at the end which was needed for the client variable to behave properly; I took it from code used in an Arduino-based project using the CC3100.
Anyway, the new code is the one below and now it works FINE! I finally see all the input streaming on PubNub! Thanks a lot! :D
/*PubNub sample JSON-parsing client with WiFi support
This combines two sketches: the PubNubJson example of PubNub library
and the WifiWebClientRepeating example of the WiFi library.
This sample client will properly parse JSON-encoded PubNub subscription
replies using the aJson library. It will send a simple message, then
properly parsing and inspecting a subscription message received back.
This is achieved by integration with the aJson library. You will need
a version featuring Wiring Stream integration, that can be found
at http://github.com/pasky/aJson as of 2013-05-30.
Please refer to the PubNubJson example description for some important
notes, especially regarding memory saving on Arduino Uno/Duemilanove.
You can also save some RAM by not using WiFi password protection.
created 30 May 2013
by Petr Baudis
https://github.com/pubnub/pubnub-api/tree/master/arduino
This code is in the public domain.
*/
#include <SPI.h>
#include <WiFi.h>
#include <PubNub.h>
#include <aJSON.h>
static char ssid[] = "NetSSID_Name"; // your network SSID (name)
static char pass[] = "NetworkdPassword"; // your network password
static int keyIndex = 0; // your network key Index number (needed only for WEP)
const static char pubkey[] = "pub-c-51eb45ec-b647-44da-b2aa-9bf6b0b98705";
const static char subkey[] = "sub-c-7e78ed9c-991d-11e4-9946-02ee2ddab7fe";
const static char channel[] = "testing";
#define NUM_CHANNELS 4 // How many analog channels do you want to read?
const static uint8_t analog_pins[] = {23, 24, 25, 26}; // which pins are you reading?
void setup()
{
Serial.begin(9600);
Serial.println("Start WiFi");
WiFi.begin(ssid, pass);
while(WiFi.localIP() == INADDR_NONE) {
Serial.print(".");
delay(300);
}
Serial.println("WiFi set up");
PubNub.begin(pubkey, subkey);
Serial.println("PubNub set up");
delay(5000);
}
void loop()
{
WiFiClient *client;
// create JSON objects
aJsonObject *msg, *analogReadings;
msg = aJson.createObject();
aJson.addItemToObject(msg, "analogReadings", analogReadings = aJson.createObject());
// get latest sensor values then add to JSON message
/*for (int i = 0; i < NUM_CHANNELS; i++) {
String analogChannel = String(analog_pins[i]);
char charBuf[analogChannel.length()+1];
analogChannel.toCharArray(charBuf, analogChannel.length()+1);
int analogValues = analogRead(analog_pins[i]);
aJson.addNumberToObject(analogReadings, charBuf, analogValues);
}*/
// convert JSON object into char array, then delete JSON object
char *json_String = aJson.print(msg);
aJson.deleteItem(msg);
// publish JSON formatted char array to PubNub
Serial.print("publishing a message: ");
Serial.println(json_String);
Serial.println(channel);
client = PubNub.publish(channel, json_String);
free(json_String);
// check for a failed publish before using the returned client pointer
if (!client) {
Serial.println("publishing error");
delay(1000);
return;
}
client->stop();
delay(500);
}
//- See more at: http://www.pubnub.com/blog/pubnub-streaming-texas-instruments-iot/#sthash.tbQXMIzw.dpuf

Verify RFC 3161 trusted timestamp

In my build process, I want to include a timestamp from an RFC-3161-compliant TSA. At run time, the code will verify this timestamp, preferably without the assistance of a third-party library. (This is a .NET application, so I have standard hash and asymmetric cryptography functionality readily at my disposal.)
RFC 3161, with its reliance on ASN.1 and X.690 and whatnot, is not simple to implement, so for now at least, I'm using Bouncy Castle to generate the TimeStampReq (request) and parse the TimeStampResp (response). I just can't quite figure out how to validate the response.
So far, I've figured out how to extract the signature itself, the public cert, the time the timestamp was created, and the message imprint digest and nonce that I sent (for build-time validation). What I can't figure out is how to put this data together to generate the data that was hashed and signed.
Here's a rough idea of what I'm doing and what I'm trying to do. This is test code, so I've taken some shortcuts. I'll have to clean a couple of things up and do them the right way once I get something that works.
Timestamp generation at build time:
// a lot of fully-qualified type names here to make sure it's clear what I'm using
static void WriteTimestampToBuild(){
var dataToTimestamp = Encoding.UTF8.GetBytes("The rain in Spain falls mainly on the plain");
var hashToTimestamp = new System.Security.Cryptography.SHA1Cng().ComputeHash(dataToTimestamp);
var nonce = GetRandomNonce();
var tsr = GetTimestamp(hashToTimestamp, nonce, "http://some.rfc3161-compliant.server");
var tst = tsr.TimeStampToken;
var tsi = tst.TimeStampInfo;
ValidateNonceAndHash(tsi, hashToTimestamp, nonce);
var cms = tst.ToCmsSignedData();
var signer =
cms.GetSignerInfos().GetSigners()
.Cast<Org.BouncyCastle.Cms.SignerInformation>().First();
// TODO: handle multiple signers?
var signature = signer.GetSignature();
var cert =
tst.GetCertificates("Collection").GetMatches(signer.SignerID)
.Cast<Org.BouncyCastle.X509.X509Certificate>().First();
// TODO: handle multiple certs (for one or multiple signers)?
ValidateCert(cert);
var timeString = tsi.TstInfo.GenTime.TimeString;
var time = tsi.GenTime; // not sure which is more useful
// TODO: Do I care about tsi.TstInfo.Accuracy or tsi.GenTimeAccuracy?
var serialNumber = tsi.SerialNumber.ToByteArray(); // do I care?
WriteToBuild(cert.GetEncoded(), signature, timeString/*or time*/, serialNumber);
// TODO: Do I need to store any more values?
}
static Org.BouncyCastle.Math.BigInteger GetRandomNonce(){
var rng = System.Security.Cryptography.RandomNumberGenerator.Create();
var bytes = new byte[10]; // TODO: make it a random length within a range
rng.GetBytes(bytes);
return new Org.BouncyCastle.Math.BigInteger(bytes);
}
static Org.BouncyCastle.Tsp.TimeStampResponse GetTimestamp(byte[] hash, Org.BouncyCastle.Math.BigInteger nonce, string url){
var reqgen = new Org.BouncyCastle.Tsp.TimeStampRequestGenerator();
reqgen.SetCertReq(true);
var tsrequest = reqgen.Generate(Org.BouncyCastle.Tsp.TspAlgorithms.Sha1, hash, nonce);
var data = tsrequest.GetEncoded();
var webreq = WebRequest.CreateHttp(url);
webreq.Method = "POST";
webreq.ContentType = "application/timestamp-query";
webreq.ContentLength = data.Length;
using(var reqStream = webreq.GetRequestStream())
reqStream.Write(data, 0, data.Length);
using(var respStream = webreq.GetResponse().GetResponseStream())
return new Org.BouncyCastle.Tsp.TimeStampResponse(respStream);
}
static void ValidateNonceAndHash(Org.BouncyCastle.Tsp.TimeStampTokenInfo tsi, byte[] hashToTimestamp, Org.BouncyCastle.Math.BigInteger nonce){
if(!tsi.Nonce.Equals(nonce))
throw new Exception("Nonce doesn't match. Man-in-the-middle attack?");
var messageImprintDigest = tsi.GetMessageImprintDigest();
var hashMismatch =
messageImprintDigest.Length != hashToTimestamp.Length ||
Enumerable.Range(0, messageImprintDigest.Length).Any(i=>
messageImprintDigest[i] != hashToTimestamp[i]
);
if(hashMismatch)
throw new Exception("Message imprint doesn't match. Man-in-the-middle attack?");
}
static void ValidateCert(Org.BouncyCastle.X509.X509Certificate cert){
// not shown, but basic X509Chain validation; throw exception on failure
// TODO: Validate certificate subject and policy
}
static void WriteToBuild(byte[] cert, byte[] signature, string time/*or DateTime time*/, byte[] serialNumber){
// not shown
}
Timestamp verification at run time (client site):
// a lot of fully-qualified type names here to make sure it's clear what I'm using
static void VerifyTimestamp(){
var timestampedData = Encoding.UTF8.GetBytes("The rain in Spain falls mainly on the plain");
var timestampedHash = new System.Security.Cryptography.SHA1Cng().ComputeHash(timestampedData);
byte[] certContents;
byte[] signature;
string time; // or DateTime time
byte[] serialNumber;
GetDataStoredDuringBuild(out certContents, out signature, out time, out serialNumber);
var cert = new System.Security.Cryptography.X509Certificates.X509Certificate2(certContents);
ValidateCert(cert);
var signedData = MagicallyCombineThisStuff(timestampedHash, time, serialNumber);
// TODO: What other stuff do I need to magically combine?
VerifySignature(signedData, signature, cert);
// not shown: Use time from timestamp to validate cert for other signed data
}
static void GetDataStoredDuringBuild(out byte[] certContents, out byte[] signature, out string/*or DateTime*/ time, out byte[] serialNumber){
// not shown
}
static void ValidateCert(System.Security.Cryptography.X509Certificates.X509Certificate2 cert){
// not shown, but basic X509Chain validation; throw exception on failure
}
static byte[] MagicallyCombineThisStuff(byte[] timestampedhash, string/*or DateTime*/ time, byte[] serialNumber){
// HELP!
}
static void VerifySignature(byte[] signedData, byte[] signature, System.Security.Cryptography.X509Certificates.X509Certificate2 cert){
var key = (RSACryptoServiceProvider)cert.PublicKey.Key;
// TODO: Handle DSA keys, too
var okay = key.VerifyData(signedData, CryptoConfig.MapNameToOID("SHA1"), signature);
// TODO: Make sure to use the same hash algorithm as the TSA
if(!okay)
throw new Exception("Timestamp doesn't match! Don't trust this!");
}
As you might guess, where I think I'm stuck is the MagicallyCombineThisStuff function.
I finally figured it out myself. It should come as no surprise, but the answer is nauseatingly complex and indirect.
The missing pieces to the puzzle were in RFC 5652. I didn't really understand the TimeStampResp structure until I read (well, skimmed through) that document.
Let me describe in brief the TimeStampReq and TimeStampResp structures. The interesting fields of the request are:
a "message imprint", which is the hash of the data to be timestamped
the OID of the hash algorithm used to create the message imprint
an optional "nonce", which is a client-chosen identifier used to verify that the response is generated specifically for this request. This is effectively just a salt, used to avoid replay attacks and to detect errors.
The meat of the response is a CMS SignedData structure. Among the fields in this structure are:
the certificate(s) used to sign the response
an EncapsulatedContentInfo member containing a TSTInfo structure. This structure, importantly, contains:
the message imprint that was sent in the request
the nonce that was sent in the request
the time certified by the TSA
a set of SignerInfo structures, with typically just one structure in the set. For each SignerInfo, the interesting fields within the structure are:
a sequence of "signed attributes". The DER-encoded BLOB of this sequence is what is actually signed. Among these attributes are:
the time certified by the TSA (again)
a hash of the DER-encoded BLOB of the TSTInfo structure
an issuer and serial number or subject key identifier that identifies the signer's certificate from the set of certificates found in the SignedData structure
the signature itself
The basic process of validating the timestamp is as follows:
Read the data that was timestamped, and recompute the message imprint using the same hashing algorithm used in the timestamp request.
Read the nonce used in the timestamp request, which must be stored along with the timestamp for this purpose.
Read and parse the TimeStampResp structure.
Verify that the TSTInfo structure contains the correct message imprint and nonce.
From the TimeStampResp, read the certificate(s).
For each SignerInfo:
Find the certificate for that signer (there should be exactly one).
Verify the certificate.
Using that certificate, verify the signer's signature.
Verify that the signed attributes contain the correct hash of the TSTInfo structure
If everything is okay, then we know that all signed attributes are valid, since they're signed, and since those attributes contain a hash of the TSTInfo structure, then we know that's okay, too. We have therefore validated that the timestamped data is unchanged since the time given by the TSA.
Because the signed data is a DER-encoded BLOB (which contains a hash of the different DER-encoded BLOB containing the information the verifier actually cares about), there's no getting around having some sort of library on the client (verifier) that understands X.690 encoding and ASN.1 types. Therefore, I conceded to including Bouncy Castle in the client as well as in the build process, since there's no way I have time to implement those standards myself.
My code to add and verify timestamps is similar to the following:
Timestamp generation at build time:
// a lot of fully-qualified type names here to make sure it's clear what I'm using
static void WriteTimestampToBuild(){
var dataToTimestamp = ... // see OP
var hashToTimestamp = ... // see OP
var nonce = ... // see OP
var tsq = GetTimestampRequest(hashToTimestamp, nonce);
var tsr = GetTimestampResponse(tsq, "http://some.rfc3161-compliant.server");
ValidateTimestamp(tsq, tsr);
WriteToBuild("tsq-hashalg", Encoding.UTF8.GetBytes("SHA1"));
WriteToBuild("nonce", nonce.ToByteArray());
WriteToBuild("timestamp", tsr.GetEncoded());
}
static Org.BouncyCastle.Tsp.TimeStampRequest GetTimestampRequest(byte[] hash, Org.BouncyCastle.Math.BigInteger nonce){
var reqgen = new TimeStampRequestGenerator();
reqgen.SetCertReq(true);
return reqgen.Generate(TspAlgorithms.Sha1/*assumption*/, hash, nonce);
}
static Org.BouncyCastle.Tsp.TimeStampResponse GetTimestampResponse(Org.BouncyCastle.Tsp.TimeStampRequest tsq, string url){
// similar to OP
}
static void ValidateTimestamp(Org.BouncyCastle.Tsp.TimeStampRequest tsq, Org.BouncyCastle.Tsp.TimeStampResponse tsr){
// same as client code, see below
}
static void WriteToBuild(string key, byte[] value){
// not shown
}
Timestamp verification at run time (client site):
/* Just like in the OP, I've used fully-qualified names here to avoid confusion.
* In my real code, I'm not doing that, for readability's sake.
*/
static DateTime GetTimestamp(){
var timestampedData = ReadFromBuild("timestamped-data");
var hashAlg = Encoding.UTF8.GetString(ReadFromBuild("tsq-hashalg"));
var timestampedHash = System.Security.Cryptography.HashAlgorithm.Create(hashAlg).ComputeHash(timestampedData);
var nonce = new Org.BouncyCastle.Math.BigInteger(ReadFromBuild("nonce"));
var tsq = new Org.BouncyCastle.Tsp.TimeStampRequestGenerator().Generate(System.Security.Cryptography.CryptoConfig.MapNameToOID(hashAlg), timestampedHash, nonce);
var tsr = new Org.BouncyCastle.Tsp.TimeStampResponse(ReadFromBuild("timestamp"));
ValidateTimestamp(tsq, tsr);
// if we got here, the timestamp is okay, so we can trust the time it alleges
return tsr.TimeStampToken.TimeStampInfo.GenTime;
}
static void ValidateTimestamp(Org.BouncyCastle.Tsp.TimeStampRequest tsq, Org.BouncyCastle.Tsp.TimeStampResponse tsr){
/* This compares the nonce and message imprint and whatnot in the TSTInfo.
* It throws an exception if they don't match. This doesn't validate the
* certs or signatures, though. We still have to do that in order to trust
* this data.
*/
tsr.Validate(tsq);
var tst = tsr.TimeStampToken;
var timestamp = tst.TimeStampInfo.GenTime;
var signers = tst.ToCmsSignedData().GetSignerInfos().GetSigners().Cast<Org.BouncyCastle.Cms.SignerInformation>();
var certs = tst.GetCertificates("Collection");
foreach(var signer in signers){
var signerCerts = certs.GetMatches(signer.SignerID).Cast<Org.BouncyCastle.X509.X509Certificate>().ToList();
if(signerCerts.Count != 1)
throw new Exception("Expected exactly one certificate for each signer in the timestamp");
if(!signerCerts[0].IsValid(timestamp)){
/* IsValid only checks whether the given time is within the certificate's
* validity period. It doesn't verify that it's a valid certificate or
* that it hasn't been revoked. It would probably be better to do that
* kind of thing, just like I'm doing for the signing certificate itself.
* What's more, I'm not sure it's a good idea to trust the timestamp given
* by the TSA to verify the validity of the TSA's certificate. If the
* TSA's certificate is compromised, then an unauthorized third party could
* generate a TimeStampResp with any timestamp they wanted. But this is a
* chicken-and-egg scenario that my brain is now too tired to keep thinking
* about.
*/
throw new Exception("The timestamp authority's certificate is expired or not yet valid.");
}
if(!signer.Verify(signerCerts[0])){ // might throw an exception, might not ... depends on what's wrong
/* I'm pretty sure that signer.Verify verifies the signature and that the
* signed attributes contains a hash of the TSTInfo. It also does some
* stuff that I didn't identify in my list above.
* Some verification errors cause it to throw an exception, some just
* cause it to return false. If it throws an exception, that's great,
* because that's what I'm counting on. If it returns false, let's
* throw an exception of our own.
*/
throw new Exception("Invalid signature");
}
}
}
static byte[] ReadFromBuild(string key){
// not shown
}
I am not sure I understand why you want to rebuild the data structure signed in the response. Actually, if you want to extract the signed data from the time-stamp server response you can do this:
var tsr = GetTimestamp(hashToTimestamp, nonce, "http://some.rfc3161-compliant.server");
var tst = tsr.TimeStampToken;
var tsi = tst.TimeStampInfo;
var signature = // Get the signature
var certificate = // Get the signer certificate
var signedData = tsi.GetEncoded(); // Similar to tsi.TstInfo.GetEncoded();
VerifySignature(signedData, signature, certificate)
If you want to rebuild the data structure, you need to create a new Org.BouncyCastle.Asn1.Tsp.TstInfo instance (tsi.TstInfo is a Org.BouncyCastle.Asn1.Tsp.TstInfo object) with all elements contained in the response.
In RFC 3161 the signed data structure is defined as this ASN.1 sequence:
TSTInfo ::= SEQUENCE {
version INTEGER { v1(1) },
policy TSAPolicyId,
messageImprint MessageImprint,
-- MUST have the same value as the similar field in
-- TimeStampReq
serialNumber INTEGER,
-- Time-Stamping users MUST be ready to accommodate integers
-- up to 160 bits.
genTime GeneralizedTime,
accuracy Accuracy OPTIONAL,
ordering BOOLEAN DEFAULT FALSE,
nonce INTEGER OPTIONAL,
-- MUST be present if the similar field was present
-- in TimeStampReq. In that case it MUST have the same value.
tsa [0] GeneralName OPTIONAL,
extensions [1] IMPLICIT Extensions OPTIONAL }
Congratulations on getting that tricky protocol work done!
See also a Python client implementation at rfc3161ng 2.0.4.
Note that with the RFC 3161 TSP protocol, as discussed at Web Science and Digital Libraries Research Group: 2017-04-20: Trusted Timestamping of Mementos and other publications, you and your relying parties must trust that the Time-Stamping Authority (TSA) is operated properly and securely. It is of course very difficult, if not impossible, to really secure online servers like those run by most TSAs.
As also discussed in that paper, with comparisons to TSP, now that the world has a variety of public blockchains in which trust is distributed and (sometimes) carefully monitored, there are new trusted timestamping options (providing "proof of existence" for documents). For example see
OriginStamp - Trusted Timestamping with Bitcoin. The protocol is much much simpler, and they provide client code for a large variety of languages. While their online server could also be compromised, the client can check whether their hashes were properly embedded in the Bitcoin blockchain and thus bypass the need to trust the OriginStamp service itself.
One downside is that timestamps are only posted once a day, unless an extra payment is made. Bitcoin transactions have become rather expensive, so the service is looking at supporting other blockchains also to drive costs back down and make it cheaper to get more timely postings.
Update: check out Stellar and Keybase
For free, efficient, lightning-fast, widely-vetted timestamps, check out the Stellar blockchain protocol, and the STELLARAPI.IO service.

Javamail base64 decoding

I use JavaMail to get a message. When I get the message I have:
com.sun.mail.util.BASE64DecoderStream,
I know that it is part of a multipart message; in the source of the message I have
Content-Type: image/png; name=index_01.png
Content-Transfer-Encoding: base64
How do I decode this message?
Edit:
I have this code:
else if (mbp.getContent() instanceof BASE64DecoderStream){
InputStream is = null;
ByteArrayOutputStream os = null;
is = mbp.getInputStream();
os = new ByteArrayOutputStream(512);
int c = 0;
while ((c = is.read()) != -1) {
os.write(c);
}
System.out.println(os.toString());
}
And that code returns a strange string, for example:
Ř˙á?Exif??II*????????????˙ě?Ducky???????˙á)
com.sun.mail.util.BASE64DecoderStream is platform dependent. You cannot rely on that always being the type of class that handles base64 decoding.
Rather, the JavaMail APIs already support decoding for you:
// part is instanceof javax.mail.Part
ByteArrayOutputStream bos = new ByteArrayOutputStream();
part.getDataHandler().writeTo(bos);
String decodedContent = bos.toString();
What are you expecting when you read the contents of an image part? The image is stored in the message in an encoded format, but JavaMail is decoding the data before returning the bytes to you. If you store the bytes in a file, you can display the image with many image viewing/editing applications. If you want to display them with your Java program, you'll need to convert the bytes to an appropriate Java Image object using (e.g.) APIs in the java.awt.image package.
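For example, turning such a part into an image object could look like this (a sketch, assuming the part really holds image data and that javax.imageio / java.awt are available in your environment):
import java.awt.image.BufferedImage;
import java.io.InputStream;
import javax.imageio.ImageIO;
import javax.mail.Part;

// JavaMail hands back an already-decoded stream, so the PNG bytes can be
// passed straight to ImageIO to obtain an image object.
static BufferedImage readImagePart(Part part) throws Exception {
    try (InputStream in = part.getInputStream()) {
        return ImageIO.read(in);
    }
}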
That Sun base64 decoder is in an internal package and can be moved or renamed at any time without warning; it may also be missing in alternative Java runtimes, and access to these packages may be disabled. It is much better not to rely on it.
I would say use Base64 from Apache Commons instead; it should do the same. Hopefully you can rebuild and fix the source code.
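With commons-codec on the classpath, decoding is a one-liner (a hedged example; encodedString stands for your base64 text):
import org.apache.commons.codec.binary.Base64;

byte[] decoded = Base64.decodeBase64(encodedString);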

How to get information/data from platformRequest() in J2ME?

I want to implement a behavior similar to WhatsApp, where the user can upload an image. I tried opening the images in my app, but if the image is too large, I get an out-of-memory error.
To work around this, I'm forwarding the images to be opened in the phone's native image viewer using the platformRequest() method.
However, I want to know how WhatsApp modifies the phone's native image viewer to add a "Select" button with which the user selects the image he wants to upload. How is that information sent back to the J2ME application, and how is the image resized?
Edit:
I tried this in two different ways, both of which gave me the OOME.
At first, I tried the more direct method:
FileConnection fc = (FileConnection) Connector.open("file://localhost/" + currDirName + fileName);
if (!fc.exists()) {
throw new IOException("File does not exists");
}
InputStream fis = fc.openInputStream();
Image im = Image.createImage(fis);
fis.close();
When that didn't work, I tried a more "manual" approach, but that gave me an error as well.
FileConnection fc = (FileConnection) Connector.open("file://localhost/" + currDirName + fileName);
if (!fc.exists()) {
throw new IOException("File does not exists");
}
InputStream fis = fc.openInputStream();
ByteArrayOutputStream file = new ByteArrayOutputStream();
int c;
byte[] data = new byte[1024];
while ((c = fis.read(data)) != -1) {
file.write(data, 0, c);
}
byte[] fileData = null;
fileData = file.toByteArray();
fis.close();
fc.close();
file.close();
Image im = Image.createImage(fileData, 0, fileData.length);
When I call the createImage method, the out of memory error occurs in both cases.
This varies with the devices. An E72 gives me the error with 3MB images, while a newer device will give me the error with images larger than 10MBs.
MIDP 2 (JSR 118) does not have an API for that; you need to find another way to handle big images.
As for WhatsApp, it looks like they do not rely on MIDP for this functionality. If you check the Wikipedia page you'll note that they don't claim general Java ME as a supported platform, but instead list narrower platforms like Symbian, S40, BlackBerry, etc.
This most likely means that they implement "problematic features" like the one you're asking about using the platform-specific APIs of particular target devices, having essentially separate projects/releases for every platform listed.
If this feature is really necessary in your application, you likely will have to do something like this.
In this case, consider also encapsulating problematic features in a way that makes it easier to switch just part of your source code when building for different platforms. For example, Class.forName(String) can be used to load a platform-specific implementation depending on the target platform.
//...
Image getImage(String resourceName) throws Exception {
// ImageUtil is an interface with method getImage;
// Class.forName() only yields a Class object, so newInstance()
// is needed to obtain an actual ImageUtil implementation
ImageUtil imageUtil = (ImageUtil) Class.forName(
// get platform-specific implementation, eg
// "mypackage.platformspecific.s40.S40ImageUtil"
// "mypackage.platformspecific.bb.BBImageUtil"
// "mypackage.platformspecific.symbian.SymbianImageUtil"
"mypackage.platformspecific.s40.S40ImageUtil").newInstance();
return imageUtil.getImage(resourceName);
}
//...
