Unknown Error (6C 15) with setOutgoingLength in Java Card 2.2.1 - javacard

I wrote code for Java Card 2.2.1 and I test it with JCIDE.
I get an error in the method setOutgoingLength():
public void getoutput(APDU apdu) {
    byte[] buffer = apdu.getBuffer();
    byte[] hello = {'H','E','L','L','O',' ','W','O','R','L','D',' ','J','A','V','A',' ','C','A','R','D'};
    short le = apdu.setOutgoing();
    short totalBytes = (short) hello.length;
    Util.arrayCopyNonAtomic(hello, (short) 0, buffer, (short) 0, totalBytes);
    apdu.setOutgoingLength(totalBytes);
    apdu.sendBytes((short) 0, totalBytes);
}

6CXX means that your Le is not equal to the correct length of the response data (XX equals the length of the correct response data). 6C15 specifically means that the correct Le to use is 0x15.
What happened is that your Le field was 0x00 (which the card actually interprets as 256 decimal), but you passed totalBytes, which has a value of 0x15 (21 decimal), to apdu.setOutgoingLength(), and that is not equal to 256.
The correct APDU to send is 00 40 00 00 15
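For completeness, a minimal host-side sketch (javax.smartcardio, assuming the applet is already selected and dispatches INS 0x40 to getoutput as in the question) that sends the command with the matching Le of 0x15:
// Hedged sketch: terminal choice and the applet SELECT command are omitted.
CardTerminal terminal = TerminalFactory.getDefault().terminals().list().get(0);
Card card = terminal.connect("*");
CardChannel channel = card.getBasicChannel();
// CLA=00, INS=40, P1=P2=00, Le=0x15 (21 bytes: "HELLO WORLD JAVA CARD")
ResponseAPDU resp = channel.transmit(new CommandAPDU(0x00, 0x40, 0x00, 0x00, 0x15));
System.out.printf("SW=%04X data=%s%n", resp.getSW(), new String(resp.getData()));
card.disconnect(false);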

Related

Java Card OwnerPIN - APDU Command

I'm starting to learn Java Card and I'm reading sample code of a wallet, and there is an OwnerPIN in it.
Here's the part of the code, related to the pin and its verification:
OwnerPIN pin;

private myApplet(byte[] bArray, short bOffset, byte bLength) {
    // It is good programming practice to allocate
    // all the memory that an applet needs during
    // its lifetime inside the constructor
    pin = new OwnerPIN(PIN_TRY_LIMIT, MAX_PIN_SIZE);

    byte iLen = bArray[bOffset]; // aid length
    bOffset = (short) (bOffset + iLen + 1);
    byte cLen = bArray[bOffset]; // info length
    bOffset = (short) (bOffset + cLen + 1);
    byte aLen = bArray[bOffset]; // applet data length

    // The installation parameters contain the PIN
    // initialization value
    pin.update(bArray, (short) (bOffset + 1), aLen);
    register();
}
I'm having a little trouble understanding this code. I know that this is the part where the PIN is set according to the installation script:
0x80 0xB8 0x00 0x00 0xd 0xb 0x01 0x02 0x03 0x04 0x05 0x06 0x07 0x08 0x09 0x00 0x00 0x00 0x7F;
I can't understand what the value of the PIN will be after installing the applet.
The code shown is not enough to actually say anything about the given APDU.
This code sample though:
byte iLen = bArray[bOffset]; // aid length
bOffset = (short) (bOffset + iLen + 1);
byte cLen = bArray[bOffset]; // info length
bOffset = (short) (bOffset + cLen + 1);
byte aLen = bArray[bOffset]; // applet data length
is the default code for the applet's install method and could therefore be triggered by a GlobalPlatform INSTALL command. However, the given APDU is not a valid GlobalPlatform command at all.
From your code we cannot see the entry point of the APDU in the process method, but it probably works like this: the given data is an LV-encoded (Length/Value) list, so you first parse the length byte of the AID, save the length iLen, and advance bOffset to the next LV pair. In the end the value and length of the applet data are taken and fed into pin.update().
In the given APDU, the PIN is actually missing; parse the contents and lengths of the AID and info fields and you will see that the applet data bytes are missing.
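To make that concrete, here is a hedged walkthrough of how the LV parsing above lands on the data bytes of the given installation APDU (Lc is 0x0D, so the trailing 0x7F is the Le byte and not part of the data). The example PIN bytes mentioned at the end are an illustrative assumption, not part of the original command:
// The 13 data bytes of 80 B8 00 00 0D ... as the constructor sees them:
byte[] bArray = {
    (byte) 0x0B,                            // iLen: AID length = 11
    0x01, 0x02, 0x03, 0x04, 0x05, 0x06,
    0x07, 0x08, 0x09, 0x00, 0x00,           // the 11 AID bytes
    0x00                                    // cLen: info length = 0
    // aLen (applet data length) and the PIN bytes are simply not present,
    // so pin.update() has no real PIN value to copy.
};
// A command that actually sets a PIN would append something like
// 0x04, 0x31, 0x32, 0x33, 0x34 (aLen = 4, PIN = "1234"); these trailing
// bytes are an assumption for illustration only.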

why does i2cset send extra bytes?

I've been working with a PIC18F55K42 chip for a while. The PIC is set up as a slave and it receives bytes correctly. But I encountered a few problems.
For example, when I do:
i2cset -y 1 0x54 0x80 0x01
It looks correct on the controller side and I can see the address 0x80 (the data address) and the byte value 0x01.
When I send in block mode like:
i2cset -y 1 0x54 0x80 0x01 0x02 0x03 0x04 i
I see spurious bytes appearing on the controller. More precisely, it looks like this:
ADDRESS 80 6c 00 2f 01 02 03 04 STOP
At first I thought this had something to do with my controller and even tried digging into its clock settings. I used a Saleae logic analyzer too. There's nothing wrong with the controller or its setup. The only place left I can think of is the complex onion of driver layering done by Linux.
I'd like to know why Linux is sending the 3 extra bytes (6c 00 2f). Why does i2c_smbus_write_block_data send extra bytes, and how can it be avoided?
It's a bug in the i2cset implementation in Busybox. See miscutils/i2c_tools.c:
/* Prepare the value(s) to be written according to current mode. */
switch (mode) {
case I2C_SMBUS_BYTE_DATA:
    val = xstrtou_range(argv[3], 0, 0, 0xff);
    break;
case I2C_SMBUS_WORD_DATA:
    val = xstrtou_range(argv[3], 0, 0, 0xffff);
    break;
case I2C_SMBUS_BLOCK_DATA:
case I2C_SMBUS_I2C_BLOCK_DATA:
    for (blen = 3; blen < (argc - 1); blen++)
        block[blen] = xstrtou_range(argv[blen], 0, 0, 0xff);
    val = -1;
    break;
default:
    val = -1;
    break;
}
It should be block[blen - 3] = xstrtou_range(argv[blen], 0, 0, 0xff);. The bug results in 3 extra garbage bytes from the stack being sent, which explains the three extra bytes (6c 00 2f) you observed.
Use i2c_smbus_write_i2c_block_data for raw I2C transfers.
i2c_smbus_write_block_data transfers data using the SMBus block protocol.

Sending a byte [] over javacard apdu

I send a byte[] from the host application to the Java Card applet. But when I try to retrieve it as a byte[] via buffer[ISO7816.OFFSET_CDATA], I am told that I cannot convert byte to byte[]. How can I send a byte[] via a command APDU from the host application and retrieve it as a byte[] on the other end (the Java Card applet)? It appears that buffer[ISO7816.OFFSET_CDATA] returns a byte. See my comments for where the error occurs.
My idea is as follows:
The host application sends a challenge as a byte[] to be signed by the Java Card applet. Note that the signature requires the challenge to be a byte[]. The Java Card applet signs as follows:
private void sign(APDU apdu) {
    if (!pin.isValidated()) ISOException.throwIt(SW_PIN_VERIFICATION_REQUIRED);
    else {
        byte[] buffer = apdu.getBuffer();
        byte numBytes = buffer[ISO7816.OFFSET_LC];
        byte byteRead = (byte) (apdu.setIncomingAndReceive());
        if ((numBytes != 20) || (byteRead != 20))
            ISOException.throwIt(ISO7816.SW_WRONG_LENGTH);
        byte[] challenge = buffer[ISO7816.OFFSET_CDATA]; // error point: cannot convert from byte to byte[]
        byte[] output = new byte[64];
        short length = 64;
        short x = 0;
        Signature signature = Signature.getInstance(Signature.ALG_RSA_SHA_PKCS1, false);
        signature.init(privKey, Signature.MODE_SIGN);
        short sigLength = signature.sign(challenge, offset, length, output, x); // challenge must be a byte[]
        // This sequence of three methods sends the data contained in
        // 'output' with offset '0' and length 'output.length'
        // to the host application.
        apdu.setOutgoing();
        apdu.setOutgoingLength((short) output.length);
        apdu.sendBytesLong(output, (short) 0, (short) output.length);
    }
}
The challenge is sent by the host application as shown below:
byte[] card_signature = null;
SecureRandom random = SecureRandom.getInstance("SHA1PRNG");
byte[] bytes = new byte[20];
random.nextBytes(bytes);
CommandAPDU challenge;
ResponseAPDU resp3;
challenge = new CommandAPDU(IDENTITY_CARD_CLA, SIGN_CHALLENGE, 0x00, 0x20, bytes);
resp3 = c.transmit(challenge);
if (resp3.getSW() == 0x9000) {
    card_signature = resp3.getData();
    String s = DatatypeConverter.printHexBinary(card_signature);
    System.out.println("signature: " + s);
} else System.out.println("Challenge signature error " + resp3.getSW());
Generally, you send bytes over the APDU interface. A Java or Java Card byte[] is a construct that can hold those bytes. This is where the APDU buffer comes in: it is the byte array that holds the bytes sent over the APDU interface - or at least a portion of them after calling setIncomingAndReceive().
The challenge therefore is within the APDU buffer; instead of calling:
short sigLength = signature.sign(challenge, offset,length, output, x);
you can therefore simply call:
short sigLength = signature.sign(buffer, apdu.getOffsetCdata(), CHALLENGE_SIZE, buffer, START);
where CHALLENGE_SIZE is 20 and START is simply zero.
Then you can use:
apdu.setOutgoingAndSend(START, sigLength);
to send back the signed challenge.
If you need to keep the challenge for a later stage, then you should create a byte array in RAM using JCSystem.makeTransientByteArray() during construction of the applet and then use Util.arrayCopy() to move the byte values into that challenge buffer. However, since the challenge is generated by the off-card system, there doesn't seem to be any need for this. The off-card system should keep the challenge, not the card.
You should also not use ISO7816.OFFSET_CDATA anymore; it will not point at the correct offset if you use larger key sizes that generate larger signatures and therefore require extended length APDUs.
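Putting those pieces together, a minimal sketch of the corrected sign() method (assuming CHALLENGE_SIZE is 20 and that signature and privKey were created once in the applet's constructor rather than on every APDU):
private void sign(APDU apdu) {
    if (!pin.isValidated()) ISOException.throwIt(SW_PIN_VERIFICATION_REQUIRED);

    byte[] buffer = apdu.getBuffer();
    short bytesRead = apdu.setIncomingAndReceive();
    if (bytesRead != CHALLENGE_SIZE) ISOException.throwIt(ISO7816.SW_WRONG_LENGTH);

    // Sign the challenge straight out of the APDU buffer and write the
    // signature back to the start of that same buffer.
    signature.init(privKey, Signature.MODE_SIGN);
    short sigLength = signature.sign(buffer, apdu.getOffsetCdata(), CHALLENGE_SIZE,
            buffer, (short) 0);
    apdu.setOutgoingAndSend((short) 0, sigLength);
}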

What is meaning of the response status word 0x61xx from a smart card?

I wrote a Java Card applet that saves some data into the APDU buffer at offset ISO7816.OFFSET_CDATA and sends those bytes as a response.
Util.arrayCopy(Input_Data, (short)0, buffer, (short) ISO7816.OFFSET_CDATA, (short)Datalength);
apdu.setOutgoing();
apdu.setOutgoingLength((short)(DataLength) );
apdu.sendBytesLong(buffer, ISO7816.OFFSET_CDATA, (short)(DataLength));
I tested this in a simulator without any problem, but when I test it on a real smart card (a Java Card v2.2.1 manufactured by Gemalto), I get the status word 0x6180 as the response.
My command APDU is 00 40 00 00 80 Data, where the data has a length of 128 bytes, so I have 4+128 bytes in the buffer and the remaining (260-(4+128)) bytes are null.
Your simulator probably uses the T=1 transport protocol, but your real card does not. It uses the T=0 protocol, which means it can either receive data or send data in a single APDU, but not both.
Status word 0x6180 indicates there are 0x80 bytes to receive from the card. Generally, 61XX means XX bytes to receive.
How do you receive them? Well, there is a special APDU command called GET RESPONSE. You should send it each time you get a 61XX status word, using XX as the Le byte of your GET RESPONSE APDU:
APDU -> 61 XX
00 C0 00 00 XX -> your data 90 00
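Many PC/SC stacks and card frameworks issue GET RESPONSE for you automatically; if you drive the card manually, the exchange looks roughly like this sketch (javax.smartcardio, with channel and the data for the INS 0x40 command from your example assumed to exist already):
ResponseAPDU resp = channel.transmit(new CommandAPDU(0x00, 0x40, 0x00, 0x00, data));
if ((resp.getSW() & 0xFF00) == 0x6100) {
    int xx = resp.getSW() & 0xFF;      // number of response bytes waiting on the card
    // GET RESPONSE: CLA=00, INS=C0, P1=P2=00, Le=XX
    resp = channel.transmit(new CommandAPDU(0x00, 0xC0, 0x00, 0x00, xx));
}
byte[] responseData = resp.getData();  // SW should now be 90 00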
A few other notes on your code (a short sketch applying them follows the list):
Datalength vs DataLength - note the inconsistent capitalization in your code.
Copy your output data to offset 0 instead of ISO7816.OFFSET_CDATA.
Why do you cast DataLength to short each time? If it is already a short, do not cast. If it is a byte, you are casting it the wrong way, because an unsigned byte value >= 0x80 will be cast to a negative short. The correct cast from an unsigned byte to a short is (short) (DataLength & 0xFF).
Use setOutgoingAndSend whenever you can. It is much simpler.
Use arrayCopyNonAtomic instead of arrayCopy whenever you are not copying to a persistent array; the performance of arrayCopyNonAtomic is much better.
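A minimal sketch applying those notes (assuming Input_Data and DataLength come from the surrounding applet code and that DataLength is an unsigned byte value; this does not remove the GET RESPONSE step under T=0, it only cleans up the applet side):
byte[] buffer = apdu.getBuffer();
short len = (short) (DataLength & 0xFF);   // safe unsigned byte-to-short cast
Util.arrayCopyNonAtomic(Input_Data, (short) 0, buffer, (short) 0, len);
apdu.setOutgoingAndSend((short) 0, len);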

Why do I get nonstandard responses from the TPM through TBS?

I have a C++ program that does a basic TPM_GetCapability through TPM Base Services (TBS) and the Windows 7 SDK.
I've set up the program below:
int _tmain(int argc, _TCHAR* argv[])
{
    TBS_CONTEXT_PARAMS pContextParams;
    TBS_HCONTEXT hContext;
    TBS_RESULT rv;

    pContextParams.version = TBS_CONTEXT_VERSION_ONE;
    rv = Tbsi_Context_Create(&pContextParams, &hContext);
    printf("\n1 RESULT : %x STATUS : %x", rv, hContext);

    BYTE data[200] =
        {0, 0xc1,        /* TPM_TAG_RQU_COMMAND */
         0, 0, 0, 18,    /* blob length, bytes */
         0, 0, 0, 0x65,  /* TPM_ORD_GetCapability */
         0, 0, 0, 0x06,  /* TPM_CAP_VERSION */
         0, 0, 0, 0};    /* 0 bytes subcap */
    BYTE buf[4000];
    UINT32 len = 4000;

    rv = Tbsip_Submit_Command(hContext, 0, TBS_COMMAND_PRIORITY_NORMAL, data, 18, buf, &len);
    //CAPABILITY_RETURN* retVal = new CAPABILITY_RETURN(buf);
    //printf("\n2 Response Tag: %x Output Bytes: %x",tag,);
    printf("\n2 RESULT : %x STATUS : %x\n", rv, hContext);
    printBuf(buf, len);

    rv = Tbsip_Context_Close(hContext);
    printf("\n3 RESULT : %x STATUS : %x", rv, hContext);
    return 0;
}
My Return Buffer looks like:
00:C4:00:00:00:12:00:00:00:00:00:00:00:04:01:01:00:00
According to this doc, Section 7.1 TPM_GetCapability, I should get back the response structure defined there (tag, paramSize, returnCode, ordinal, respSize, resp).
Looking at my output buffer, I am getting TPM_TAG_RSP_COMMAND, a value of 18 for my paramSize, 0 for my TPM_RESULT, 0x...04 for the ordinal (not sure what this is supposed to mean), then 1,1,0,0 for my final bytes. I'm at a loss as to how to decipher this.
The answer to your question:
You don't get a nonstandard response.
The response is perfectly fine; there is nothing nonstandard in it. It looks exactly like what is defined in the spec.
The response's content resp is also what is to be expected. A standard-conforming TPM has to answer with 01 01 00 00 when asked for TPM_CAP_VERSION.
Why?
First of all: the line stating TPM_COMMAND_CODE ordinal is not part of the response.
It has no PARAM # and no PARAM SZ entry; it is only relevant for calculating the HMAC of the response.
So the response is the following:
00 C4 tag
00 00 00 12 paramSize
00 00 00 00 returnCode
00 00 00 04 respSize
01 01 00 00 resp
You asked for the capability TPM_CAP_VERSION. Here is what the spec says:
Value: 0x00000006
Capability Name: TPM_CAP_VERSION
Sub cap: Ignored
TPM_STRUCT_VER structure.
The major and minor version MUST indicate 1.1.
The firmware revision MUST indicate 0.0.
The use of this value is deprecated, new software SHOULD
use TPM_CAP_VERSION_VAL to obtain version and revision information
regarding the TPM.
So when you decode resp, which is a TPM_STRUCT_VER, you get the following:
typedef struct tdTPM_STRUCT_VER {
BYTE major; // ==> 1
BYTE minor; // ==> 1
BYTE revMajor; // ==> 0
BYTE revMinor; // ==> 0
} TPM_STRUCT_VER;
So 1.1 and 0.0, exactly according to specification.
