What exactly is the class byte in JavaCard?

I've started working with Java Cards and am trying to grasp the meaning of the CLA byte.
Reading section 5.4.1 (Class byte) of ISO/IEC 7816-4:
5.4.1 Class byte
According to table 8 used in conjunction with table 9, the class byte
CLA of a command is used to indicate to what extent the command and the response comply with this part of ISO/IEC 7816 and when applicable (see table 9), the format of secure messaging and the logical channel number.
So the CLA byte is used for indication, but indication of what exactly? The table and its description are rather difficult for a beginner. I understand that the most commonly used CLA bytes are 0x00, 0x80 and 0x84.
For example, reading the content of the table:
'0X': Structure and coding of command and response according to this part of ISO/IEC 7816 (for coding of 'X' see table 9)
'10' to '7F': RFU
'FF': Reserved for PTS
I understand that for proper development I should read the GlobalPlatform specification, the documentation for my exact card (mine is an NXP one) and other related materials, but I have to admit that the content is difficult to understand.
I expected something like the following (pseudo-table):
0x00 -> for reading streams from file system
0x01 -> for writing byte buffer to memory blocks
0x02 -> call AES/RSA methods

The class byte (CLA) is defined in ISO 7816-4. Its most significant bit indicates whether the command uses the interindustry class, and Java Card applets are expected to operate within this interindustry standard. GlobalPlatform is another specification, used to manage and maintain the card, and all of its commands have a class byte in the range 0x80 - 0x8F. Class byte 0xFF is used for communication with the card reader in some cases and is otherwise invalid for a card.
In the interindustry coding, the CLA byte serves three major functions:
Function 1: Chaining
Bit 5 = 1 signals that the current command is not the last command of a chain, meaning that multiple APDUs belong together and the card may therefore do additional processing.
Function 2: Secure Messaging
Bits 4 and 3 signal the secure-messaging status of the current command, i.e. whether the APDU is authenticated (e.g. MACed) and the data encrypted (e.g. with a block cipher). The command header is never encrypted.
Function 3: Logical Channel
Bits 2 and 1 identify the logical channel number. Logical channels are parallel communication interfaces to the card, so applet A can be selected on channel 0 and applet B on channel 1 while both applets keep their internal state (no RAM is reset). Most cards do not support logical channels, or you have to enable them explicitly.
The CLA byte is a typical trap for Java Card beginners, and it's usually best to leave it at 0x00 when you start.
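To make those three functions concrete, here is a minimal plain-Java sketch (deliberately not Java Card API code) that extracts the fields from an interindustry CLA value using the bit masks implied above; the example value 0x1C is just an illustration:
// Decode the interindustry CLA fields described above (ISO bit numbering b8..b1).
public final class ClaFields {
    public static void main(String[] args) {
        byte cla = (byte) 0x1C;                    // example: chaining set, SM indicated, channel 0
        boolean chaining = (cla & 0x10) != 0;      // b5: command chaining, this is not the last APDU
        int secureMessaging = (cla >> 2) & 0x03;   // b4-b3: secure messaging indication
        int channel = cla & 0x03;                  // b2-b1: logical channel number (0-3)
        System.out.println("chaining=" + chaining + " SM=" + secureMessaging + " channel=" + channel);
    }
}
If I remember correctly, newer Java Card versions also expose helpers such as APDU.isCommandChainingCLA() for exactly this, but decoding the byte by hand is a good way to understand it.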

Related

Decode weight in data packets from wireshark with bluetooth low energy

I've been doing some reverse engineering on different BLE-based devices, and I have a weight scale where I can't find a pattern to decode/interpret the weight value that I can see in the Android app. I was also able to get the services and characteristics of this device, but did not find a match with any SIG standard UUID from the Bluetooth site.
I'm using an nRF51 dongle to sniff data packets between the Android app and the weight scale, and I can see the BLE traffic, but there are several events during the communication whose relation I can't really work out, and I am not able to convert those values into a readable weight in kg or pounds.
My target value is 71.3 kg, as read from the weight scale app.
Let me show you what I get from the BLE sniffer.
First I see that the master is sending notification/indication requests to the handles 0x0009 (notify), 0x000c (indication) and 0x000f (notify), one in each characteristic of one service.
Then I start to see notification/indication values mixed with write commands. At the end of the communication I see some packets that I feel are the ones with the weight scale data and BMI.
Packets number 574, 672 and 674 in the image.
So that gives us the following candidates:
1st. packet_number_574 = '000002c9070002'
2nd. packet_number_672 = '420001000000005ed12059007f02c9011d01f101'
3rd. packet_number_674 = '42018c016b00070237057d001a01bc001d007c'
The 1st packet looks more like a counter/RT/clock than a real measurement because of how the data behaves during the communication exchange, so I feel this one is not a real option.
The 2nd and 3rd look more like real candidates. I have split them and converted them to decimal values and have not found a relation, even when combining bytes in case these values are floating-point data types. So my real question is: am I missing something that you might see in this information? Do you know if there is a relation between these data packets and my target?
Thank you for taking the time to read this; any help would be appreciated!
EDIT:
I have a Python script that lets me check the services and their characteristics hierarchy, plus some useful data like properties, handles and descriptors.
Service 'fff0' (0000fff0-0000-1000-8000-00805f9b34fb):
Characteristic 'fff1' (0000fff1-0000-1000-8000-00805f9b34fb):
Handle: 8 (8)
Readable: False
Properties: WRITE NOTIFY
Descriptor: Descriptor <Client Characteristic Configuration> (handle 0x9; uuid 00002902-0000-1000-8000-00805f9b34fb)
Characteristic 'fff2' (0000fff2-0000-1000-8000-00805f9b34fb):
Handle: 11 (b)
Readable: False
Properties: WRITE NO RESPONSE INDICATE
Descriptor: Descriptor <Client Characteristic Configuration> (handle 0xc; uuid 00002902-0000-1000-8000-00805f9b34fb)
Characteristic 'fff3' (0000fff3-0000-1000-8000-00805f9b34fb):
Handle: 14 (e)
Readable: False
Properties: NOTIFY
Descriptor: Descriptor <Client Characteristic Configuration> (handle 0xf; uuid 00002902-0000-1000-8000-00805f9b34fb)
These are the characteristics related to the notifications and indications that I see in Wireshark. I think packet number 574 (whose characteristic has only a notify property) is more important than I thought.
I solved it by myself.
This post gave me the idea to take the value (2 bytes) and multiply it by 0.1; that way I could get the weight.
In bytes I could look for my target value without the decimal point, 713, which in hex is 0x02c9.
If we look at the packet number 574:
000002c9070002 and split it 00:00:02:c9:07:00:02
I could see that the 3rd and 4th bytes (02 c9) match this pattern!
The only thing required is to combine these bytes and multiply the decimal value: 713 x 0.1 = 71.3. I ran different tests and found that this pattern is constant, so I feel it is accurate for my solution. Hope this helps someone in the future.
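For reference, here is the same decoding as a small Java sketch; the offset (3rd and 4th bytes) and the 0.1 scale factor are just what was reverse-engineered above, not a documented format:
// Decode the weight from a packet like 000002c9070002: big-endian 16-bit value at offset 2, scaled by 0.1.
public final class ScaleDecoder {
    static double weightKg(byte[] payload) {
        int raw = ((payload[2] & 0xFF) << 8) | (payload[3] & 0xFF);  // 0x02c9 = 713
        return raw * 0.1;
    }

    public static void main(String[] args) {
        byte[] packet574 = {0x00, 0x00, 0x02, (byte) 0xc9, 0x07, 0x00, 0x02};
        System.out.printf("%.1f kg%n", weightKg(packet574));         // prints 71.3 kg
    }
}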

Linux SMBus Block transaction types

The latest SMBus spec, V3.0 (20 Dec 2014), shows only one type of block write/read (excluding the block process call):
6.5.7 Block Write/Read
Write: Address(Wr), Command, Count = N, Byte 1, Byte 2, ..., Byte N [, PEC]
Read: Address(Wr), Command, Address(Rd), Count = N, Byte 1, Byte 2, ..., Byte N [, PEC]
However, looking at the Linux user-space interface, there are 3 block transaction types to use with ioctl I2C_SMBUS from uapi/linux/i2c.h:
#define I2C_SMBUS_BLOCK_DATA 5
#define I2C_SMBUS_I2C_BLOCK_BROKEN 6
#define I2C_SMBUS_I2C_BLOCK_DATA 8
Following the code under drivers/i2c/*, it delegates to smbus_xfer/master_xfer (if emulated) in i2c_algorithm, which is specific to an adapter/device.
1. Do all these transaction types end up following the block wire spec for SMBus 3.0?
2. And how would I decide which one I need to use?
I am creating a Java JNA interface on Raspbian GNU/Linux 10 (buster).
Do all these transaction types end up following the block wire spec for SMBus 3.0?
At the moment I'm writing this answer, the I2C module within the Linux kernel still doesn't support SMBus 3.0/3.1; it implements SMBus 2.0 communication.
As for the 3 types: this can't be answered in general, and I would guess no. To learn how these commands work, look into the kernel driver sources. For example, I2C_SMBUS_I2C_BLOCK_BROKEN gets converted into I2C_SMBUS_I2C_BLOCK_DATA with the following comment:
/* Convert old I2C block commands to the new
convention. This preserves binary compatibility. */
Whether I2C_SMBUS_I2C_BLOCK_DATA follows the block data protocol is up to the user. The command which does enforce the protocol is I2C_SMBUS_BLOCK_DATA.
And how would I decide which one I need to use?
If you just want to use the block protocol, then use I2C_SMBUS_BLOCK_DATA.
If you want more control, or want to overcome restrictions of SMBus 2.0, you'll have to use I2C_SMBUS_I2C_BLOCK_DATA. Though in that case you will likely have to move to constructing SMBus messages manually, as I2C_SMBUS_I2C_BLOCK_DATA still keeps you quite restricted: you get one more byte of maximum message length, but it's still far from 255.
If you're writing against the SMBus 1 spec, use I2C_SMBUS_I2C_BLOCK_BROKEN where appropriate.
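Since the question mentions JNA, here is a rough, untested sketch of driving the I2C_SMBUS ioctl from Java with JNA 5.x. The constants and struct layout follow uapi/linux/i2c.h and linux/i2c-dev.h; the device path /dev/i2c-1, slave address 0x50 and command code 0x00 are placeholders you would replace:
import com.sun.jna.Library;
import com.sun.jna.Native;
import com.sun.jna.NativeLong;
import com.sun.jna.Pointer;
import com.sun.jna.Structure;
import java.util.Arrays;
import java.util.List;

public class SmbusBlockRead {
    // From uapi/linux/i2c-dev.h and uapi/linux/i2c.h
    static final int I2C_SLAVE = 0x0703;            // set target address
    static final int I2C_SMBUS = 0x0720;            // SMBus-level transfer
    static final byte I2C_SMBUS_READ = 1;
    static final int I2C_SMBUS_BLOCK_DATA = 5;      // the type that enforces the SMBus block protocol
    static final int I2C_SMBUS_BLOCK_MAX = 32;

    // union i2c_smbus_data, sized for its largest member: block[I2C_SMBUS_BLOCK_MAX + 2]
    public static class I2cSmbusData extends Structure {
        public byte[] block = new byte[I2C_SMBUS_BLOCK_MAX + 2];
        protected List<String> getFieldOrder() { return Arrays.asList("block"); }
    }

    // struct i2c_smbus_ioctl_data { __u8 read_write; __u8 command; __u32 size; union i2c_smbus_data *data; }
    public static class I2cSmbusIoctlData extends Structure {
        public byte read_write;
        public byte command;
        public int size;
        public Pointer data;
        protected List<String> getFieldOrder() {
            return Arrays.asList("read_write", "command", "size", "data");
        }
    }

    public interface CLib extends Library {
        CLib INSTANCE = Native.load("c", CLib.class);
        int open(String path, int flags);
        int ioctl(int fd, NativeLong request, Object... args);
        int close(int fd);
    }

    public static void main(String[] args) {
        int fd = CLib.INSTANCE.open("/dev/i2c-1", 2 /* O_RDWR */);
        CLib.INSTANCE.ioctl(fd, new NativeLong(I2C_SLAVE), new NativeLong(0x50)); // placeholder address

        I2cSmbusData data = new I2cSmbusData();
        data.write();

        I2cSmbusIoctlData msg = new I2cSmbusIoctlData();
        msg.read_write = I2C_SMBUS_READ;
        msg.command = 0x00;                          // placeholder register/command code
        msg.size = I2C_SMBUS_BLOCK_DATA;
        msg.data = data.getPointer();
        msg.write();

        if (CLib.INSTANCE.ioctl(fd, new NativeLong(I2C_SMBUS), msg.getPointer()) == 0) {
            data.read();
            int count = data.block[0] & 0xFF;        // block[0] = byte count N, block[1..N] = payload
            System.out.println("Block read returned " + count + " bytes");
        }
        CLib.INSTANCE.close(fd);
    }
}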

What determines the SAK of a Mifare device?

I have a Mifare fob and a magic Mifare Classic card. When I fully clone the fob onto the card, the SAK found from the card is 0x88, despite a SAK of 0x08 on the fob.
If I change the sixth byte of block 0 on the card from 0x88 to 0x08, the SAK changes accordingly. However, the fob holds a value of 0x88 at that position whilst reporting a SAK of 0x08. So, what determines the SAK such that two cards with supposedly identical data can report different values for it?
I had the same problem. I had:
An original TAG (mifare 1k)
A Chinese gen1a TAG (Mifare 1K) with block 0 rewritable.
From the Rfid Research Group documentation we can find this information:
MIFARE Classic block0:
11223344440804006263646566676869
^^^^^^^^ UID
^^ BCC
^^ SAK(*)
^^^^ ATQA
^^^^^^^^^^^^^^^^ Manufacturer data
(*) some cards have a different SAK in their anticollision and in block0: +0x80 in the block0 (e.g. 08->88, 18->98)
So usually the SAK is determined by byte #6 of block 0.
But as specified in the doc, some cards have a different SAK in their anticollision than in block 0.
So unfortunately my Chinese gen1a TAG was unable to reproduce the behaviour of my original TAG. Also, a gen1a TAG accepts magic commands, which means a backdoor exists on those tags and you can read or write a block without using the access keys; this backdoor is now well known and some readers can detect it.
The solution was to use a gen2 TAG, aka a CUID card, with block 0 rewritable. This TAG reports a SAK of 0x08 by default, which did not change even when I rewrote byte #6 of block 0.
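To make the byte positions concrete, here is a small Java sketch that parses the block 0 layout quoted above and checks that the BCC is the XOR of the four UID bytes (which holds for 4-byte-UID Mifare Classic):
// Parse a 16-byte Mifare Classic block 0 using the layout quoted above.
public final class Block0 {
    public static void main(String[] args) {
        byte[] b0 = hex("11223344440804006263646566676869");
        byte bcc = (byte) (b0[0] ^ b0[1] ^ b0[2] ^ b0[3]);     // BCC = XOR of the 4 UID bytes
        System.out.printf("UID  = %02x%02x%02x%02x%n", b0[0], b0[1], b0[2], b0[3]);
        System.out.printf("BCC  = %02x (computed %02x)%n", b0[4], bcc);
        System.out.printf("SAK  = %02x (byte #6 of block 0)%n", b0[5]);
        System.out.printf("ATQA = %02x%02x%n", b0[6], b0[7]);
    }

    static byte[] hex(String s) {
        byte[] out = new byte[s.length() / 2];
        for (int i = 0; i < out.length; i++)
            out[i] = (byte) Integer.parseInt(s.substring(2 * i, 2 * i + 2), 16);
        return out;
    }
}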
The SAK byte identifies the manufacturer code and product code:
0x08 would be an NXP Mifare Classic 1K, and
0x88 would be an Infineon Mifare Classic 1K.
You would need to clarify with your card supplier which one they sell.
Source: http://nfc-tools.org/index.php?title=ISO14443A

spi_write_then_read with variant register size

As I understand it, the term "word length" (spi_bits_per_word) in SPI defines the CS (chip select) active time.
It therefore seems that the Linux driver will function correctly when dealing with simple SPI protocols which keep the word size constant.
But how can we deal with SPI protocols which use different word sizes as part of the protocol?
For example, CS needs to stay active while sending a 9-bit SPI word and then reading 8 bits or 24 bits (the length of the register read differs each time, depending on the register).
How can we implement that using spi_write_then_read?
Do we need to set one bits_per_word size for sending and then another bits_per_word for receiving?
Regards,
Ran
"word length" means number of bits you can send in one transaction. It doesn't defines the CS (chip select) active time. You can keep it active for whatever time you want(least is for word-length).
SPI has got some format. You cannot randomly read-write whatever number of bits you want.Most of SPI supports 4-bit, 8-bit, 16-bit and 32-bit mode. If the given mode doesn't satisfy your requirement then you need to break your requirement. For eg:- To read 24-bit data, we need to use 8-bit word-length transfer for 3 times.
Generally SPI is fullduplex means it will read at same time it will write.
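Here is a sketch of that "break the access into word-sized transfers" idea, assuming your stack gives you some 8-bit full-duplex transfer primitive; the Spi8 interface below is a hypothetical stand-in, not a Linux API:
// Hypothetical 8-bit full-duplex transfer primitive (stand-in for whatever your driver exposes).
interface Spi8 {
    int transferByte(int out);      // shift out one byte, return the byte shifted in
}

final class RegisterAccess {
    // Read a 24-bit register as three 8-bit transfers while CS stays asserted.
    static int readRegister24(Spi8 spi, int readOpcode) {
        spi.transferByte(readOpcode);                // clock out the read command/address
        int value = 0;
        for (int i = 0; i < 3; i++) {
            value = (value << 8) | (spi.transferByte(0x00) & 0xFF);
        }
        return value;
    }
}
A 9-bit command would similarly have to be packed into a supported word length, or, if the controller and driver allow it, bits_per_word can be set per transfer rather than per device.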

How can I convert a Bluetooth 16 bit service UUID into a 128 bit UUID?

All assigned services only state the 16 bit UUID. How can I determine the 128 bit counterpart if I have to specify the service in that format?
From Service Discovery Protocol Overview I know that 128 bit UUIDs are based on a so called "BASE UUID" which is also stated there:
00000000-0000-1000-8000-00805F9B34FB
But how do I create a 128 bit UUID from the 16 bit counterpart? Probably some of the 0 digits have to be replaced, but which and how?
This can be found in the Bluetooth 4.0 Core spec Vol. 3 - Core System. See the list of adopted specs.
Part B, covering the Service Discovery Protocol (SDP), explains in Chapter 2.5.1 "Searching for Services / UUID" how to calculate the UUID:
The full 128-bit value of a 16-bit or 32-bit UUID may be computed by a simple arithmetic operation.
128_bit_value = 16_bit_value * 2^96 + Bluetooth_Base_UUID
128_bit_value = 32_bit_value * 2^96 + Bluetooth_Base_UUID
A 16-bit UUID may be converted to 32-bit UUID format by zero-extending the 16-bit value to 32-bits. An equivalent method is to add the 16-bit UUID value to a zero-valued 32-bit UUID.
Note that, in another section, there's a handy mnemonic:
Or, to put it more simply, the 16-bit Attribute UUID replaces the x's in the following:
0000xxxx-0000-1000-8000-00805F9B34FB
In addition, the 32-bit Attribute UUID replaces the x's in the following:
xxxxxxxx-0000-1000-8000-00805F9B34FB
The same equations apply to attribute UUIDs; see Part F, covering the Attribute Protocol (ATT), Chapter 3.2.1 "Protocol Requirements / Basic Concepts". 32-bit attribute UUIDs are first specified in the Bluetooth Core 4.1 spec.
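In Java, for example, that arithmetic amounts to placing the 16-bit value into the 0000xxxx field of the Base UUID (0x180D, the Heart Rate service, is used here purely as an example):
import java.util.UUID;

// Expand a 16-bit Bluetooth UUID to the full 128-bit form using the Bluetooth Base UUID.
public final class BtUuid {
    private static final long BASE_MSB = 0x0000000000001000L;   // 00000000-0000-1000-...
    private static final long BASE_LSB = 0x800000805F9B34FBL;   // ...-8000-00805F9B34FB

    static UUID fromShort(int uuid16) {
        // uuid16 * 2^96 + Bluetooth_Base_UUID: the 16-bit value lands in the 0000xxxx field
        return new UUID(BASE_MSB | ((long) (uuid16 & 0xFFFF) << 32), BASE_LSB);
    }

    public static void main(String[] args) {
        System.out.println(fromShort(0x180D));   // 0000180d-0000-1000-8000-00805f9b34fb
    }
}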
