Handling of faulty ISO7816 APDUs

Are there any specifications in the Java Card API, RE or VM specs as to how the card must react to faulty ISO7816-4 APDUs (provided that such malformed APDUs are passed to the card at all)?
Are there different requirements for the APDU handling of applets?
If I were to send e.g. a (faulty) 3-byte long first interindustry APDU to the card/applet, who should detect/report this error? Who would detect/report a first interindustry APDU containing a bad Lc length field?

No, there is no generic specification that defines how to handle malformed APDUs.
In general you should always return a status word that lies in a valid ISO 7816-3/4 range; which one depends entirely on the context. On error conditions you should always try to throw an ISOException with a logical status word, and you should avoid ever returning 6F00, the status word produced when the Applet.process() method exits with an exception other than an ISOException. The most common (though not all) ISO status words are defined in the ISO7816 interface.
Unfortunately, ISO 7816-4 only provides some hints as to which status words may be expected. On the other hand, unless the error is very specific (e.g. an incorrect PIN), there is not much a terminal can do when it receives a status word for a syntactically incorrect APDU (it is unlikely to be able to fix an incorrect APDU command data field). Any specific status words should be defined by higher-level protocols; ISO 7816-4 itself can only be used as a (rotten) foundation for other protocols. No clear rules for handling syntactic errors (wrong length) or semantic errors (wrong PIN) have been defined.
With regard to malformed APDUs: 3-byte APDUs won't be received by the applet at all. APDUs with an incorrect Lc byte may be received, although it would be more logical for such an error to affect the transport layer, so that it either times out waiting for more data or discards the spurious bytes. It cannot hurt to check and return a wrong-length error, but if you decide to continue, use the values returned by APDU.getIncomingLength() or APDU.setIncomingAndReceive() as the final value for Nc.
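As a minimal sketch of that check (a fragment of an Applet subclass using the javacard.framework API; it assumes the command data fits into the APDU buffer in one call, so extended-length commands and receiveBytes() loops are left out):

    public void process(APDU apdu) {
        byte[] buf = apdu.getBuffer();
        // Read the command data and compare it against the declared Lc (Nc)
        short received = apdu.setIncomingAndReceive(); // bytes placed in the buffer
        short declared = apdu.getIncomingLength();     // Nc as declared by Lc
        if (received != declared) {
            // A mismatch on a short APDU usually means the transport layer already
            // went wrong; report it explicitly instead of processing garbage.
            ISOException.throwIt(ISO7816.SW_WRONG_LENGTH); // 0x6700
        }
        // ... dispatch on buf[ISO7816.OFFSET_INS] ...
    }

This keeps the error inside an ISOException, so the card answers with a meaningful status word rather than 6F00.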

Related

Unused bytes by protobuf implementation (for limiter implementation)

I need to transfer data over a serial port. In order to ensure integrity of the data, I want a small envelope protocol around each protobuf message. I thought about the following:
message type (1 byte)
message size (2 bytes)
protobuf message (N bytes)
(checksum; optional)
The message type will mostly be a mapping to the messages defined in the proto files. However, if a message gets corrupted or some bytes are lost, the message size will no longer be correct and all subsequent bytes cannot be interpreted anymore. One way to solve this would be to introduce delimiters between messages, but for that I need to choose something that is not used by protobuf. Is there a byte sequence that is never used by any protobuf message?
I also thought about a different approach. If the master finds out that packets are corrupted, it should reset the communication to a clean start. For that I want the master to send a RESTART command to the slave. The slave should answer with an ACK and then start sending complete messages again. All bytes received between RESTART and ACK are to be discarded by the master. I want to encode ACK and RESTART as special messages, but with that approach I face the same problem: I need to find byte sequences for ACK and RESTART that are not used by any protobuf message.
Maybe I am also taking the wrong approach - feel free to suggest other approaches to deal with lost bytes.
Is there a byte sequence that is never used by any protobuf message?
No; it is a binary serializer and the payload can contain arbitrary binary data (especially in the bytes type), so you cannot use sentinel values. A length prefix is fine (your "message size" header), and a checksum may be a pragmatic option. Alternatively, you could impose an artificial sentinel to follow each message (perhaps a GUID chosen per connection as part of the initial handshake), and use that to double-check that everything looks correct.
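A rough illustration of that sentinel idea (the 16-byte marker and the class are made up for the example; nothing here is part of protobuf itself):

    import java.security.SecureRandom;
    import java.util.Arrays;

    final class SentinelFraming {
        // Agreed once during the handshake and appended after every message.
        final byte[] sentinel = new byte[16];

        SentinelFraming() {
            new SecureRandom().nextBytes(sentinel); // master generates it, slave receives it
        }

        // After reading the declared number of payload bytes, the next 16 bytes
        // on the wire must equal the sentinel; anything else means the stream is
        // out of sync and a RESTART/resync should be triggered.
        boolean framingLooksOk(byte[] trailer) {
            return Arrays.equals(trailer, sentinel);
        }
    }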
One way to help recover packet synchronization after a rare problem is to put a synchronization word at the beginning of each message and use the checksum to check for valid messages.
This means that you put a constant value, e.g. 0x12345678, before your message type field. If a message then fails the checksum check, you can recover by searching for the next 0x12345678 in your data.
Even though that value could occasionally occur in the middle of a message, it doesn't matter much: the checksum check will very probably catch that there isn't a real message at that position, and you can keep searching forward until you find the next marker.
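A sketch of that recovery loop (the frame layout used here, sync word + type + size + payload + CRC-32, is only an example; the CRC comes from java.util.zip):

    import java.util.zip.CRC32;

    final class Resync {
        // Constant marker placed before every frame.
        static final byte[] SYNC = {0x12, 0x34, 0x56, 0x78};

        // Index of the next sync word at or after 'from', or -1 if none is found.
        static int findSync(byte[] buf, int from) {
            outer:
            for (int i = from; i + SYNC.length <= buf.length; i++) {
                for (int j = 0; j < SYNC.length; j++) {
                    if (buf[i + j] != SYNC[j]) continue outer;
                }
                return i;
            }
            return -1;
        }

        // True if a complete frame with a valid CRC-32 starts at 'start'.
        // Assumed layout: sync(4) type(1) size(2, big-endian) payload(size)
        // crc32(4, big-endian), with the CRC computed over type + size + payload.
        static boolean frameValid(byte[] buf, int start) {
            int headerLen = SYNC.length + 3;                 // sync + type + size
            if (start + headerLen + 4 > buf.length) return false;
            int size = ((buf[start + 5] & 0xFF) << 8) | (buf[start + 6] & 0xFF);
            int crcOff = start + headerLen + size;           // where the CRC field starts
            if (crcOff + 4 > buf.length) return false;       // frame not fully received yet
            CRC32 crc = new CRC32();
            crc.update(buf, start + SYNC.length, 3 + size);  // type + size + payload
            long stored = ((long) (buf[crcOff] & 0xFF) << 24) | ((buf[crcOff + 1] & 0xFF) << 16)
                        | ((buf[crcOff + 2] & 0xFF) << 8) | (buf[crcOff + 3] & 0xFF);
            return crc.getValue() == stored;
        }
    }

After a failed check the receiver simply calls findSync(buf, badFrameStart + 1) and tries frameValid again until a valid frame is found or the buffer is exhausted.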

Can somebody shed light on what this strange DHT response means?

Sometimes I receive strange responses like this from other nodes. The transaction id matches my request's transaction id, as does the remote IP, so I tend to believe that the node responded with this, but it looks like a sort of mix of a response and a request:
d1:q9:find_node1:rd2:id20:.éV0özý.?tj­N.?.!2:ip4:DÄ.^7:nodes.v26:.ï?M.:iSµLW.Ðä¸úzDÄ.^æCe1:t2:..1:y1:re
Worst of all, it is malformed. Look at 7:nodes.v: it means that nodes.v is added to the dictionary as a key, whereas it is supposed to be 5:nodes. So I'm lost. What is it?
The internet and remote nodes are unreliable or buggy. You have to code defensively; do not assume that everything you receive will be valid.
Remote peers might:
send invalid bencoding; discard those, don't even try to recover.
send truncated messages; usually not recoverable unless it happens to be the very last e of the root dictionary.
omit mandatory keys; you can either ignore those messages or return an error message.
send messages containing corrupted data.
include unknown keys beyond the mandatory ones; this is not an error, just treat them as if they weren't there, for the sake of forward compatibility.
actually be attackers trying to fuzz your implementation or use you as a DoS amplifier.
I also suspect that some really shoddy implementations are based on whatever string type their programming language provides and handle the encoding incorrectly, instead of using arrays of uint8 as bencoding demands. There's nothing that can be done about those; ignore them or occasionally send an error message.
Specified dictionary keys are usually ASCII-mappable, but this is not a requirement. E.g. there are some tracker response types that actually use random binary data as dictionary keys.
Here are a few examples of junk I'm seeing[1] that even fails bdecoding:
d1:ad2:id20:�w)��-��t����=?�������i�&�i!94h�#7U���P�)�x��f��YMlE���p:q9Q�etjy��r7�:t�5�����N��H�|1�S�
d1:e�����������������H#
d1:ad2:id20:�����:��m�e��2~�����9>inm�_hash20:X�j�D��nY��-������X�6:noseedi1ee1:q9:get_peers1:t2:�=1:v4:LT��1:y1:qe
d1:ad2:id20:�����:��m�e��2~�����9=inl�_hash20:X�j�D��nY���������X�6:noseedi1ee1:q9:get_peers1:t2:�=1:v4:LT��1:y1:qe
d1:ad2:id20:�����:��m�e��2~�����9?ino�_hash20:X�j�D��nY���������X�6:noseedi1ee1:q9:get_peers1:t2:�=1:v4:LT��1:y1:qe
[1] Character count preserved; all non-printable, ASCII-incompatible bytes replaced with the Unicode replacement character.
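Junk like this can be rejected up front with a purely syntactic check before any keys are looked up. A minimal sketch (it treats everything as raw bytes, as bencoding demands, and deliberately ignores semantic rules such as leading zeros or key ordering; a production version should also bound the recursion depth):

    final class BencodeCheck {

        // True if 'b' is exactly one complete bencoded value with no trailing junk.
        static boolean isCompleteValue(byte[] b) {
            try {
                return skipValue(b, 0) == b.length;
            } catch (RuntimeException e) {   // any structural error means: drop the message
                return false;
            }
        }

        // Returns the index just past the value starting at 'i'; throws on malformed input.
        private static int skipValue(byte[] b, int i) {
            if (i >= b.length) throw new IllegalArgumentException("truncated");
            byte c = b[i];
            if (c == 'i') {                              // integer: i<digits>e
                int j = i + 1;
                if (j < b.length && b[j] == '-') j++;
                int digitsStart = j;
                while (j < b.length && b[j] >= '0' && b[j] <= '9') j++;
                if (j == digitsStart || j >= b.length || b[j] != 'e')
                    throw new IllegalArgumentException("bad integer");
                return j + 1;
            }
            if (c == 'l' || c == 'd') {                  // list or dictionary: ...e
                int j = i + 1;
                while (j < b.length && b[j] != 'e') {
                    j = skipValue(b, j);                 // list element, or dictionary key
                    if (c == 'd') j = skipValue(b, j);   // dictionary value
                }
                if (j >= b.length) throw new IllegalArgumentException("unterminated");
                return j + 1;
            }
            if (c >= '0' && c <= '9') {                  // byte string: <len>:<bytes>
                int j = i;
                long len = 0;
                while (j < b.length && b[j] >= '0' && b[j] <= '9') {
                    len = len * 10 + (b[j] - '0');
                    j++;
                }
                if (j >= b.length || b[j] != ':') throw new IllegalArgumentException("bad string");
                long end = j + 1 + len;
                if (end > b.length) throw new IllegalArgumentException("truncated string");
                return (int) end;
            }
            throw new IllegalArgumentException("unexpected byte");
        }
    }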

Is the FCS field in an 802.11 message mandatory?

From the 802.11 spec, the FCS field seems to be mandatory, but I do see this field missing in some wifi traffic.
What I'm trying to do is decode 802.11 messages in my program.
If the FCS field is optional, how do I determine whether it's present, given that the length of the FrameBody part may be variable?
[Update]
The snapshot shows the parsing result of the capture mesh.pcap from the Wireshark SampleCaptures website.
You can see that there is no FCS field in the parsing result.
Unfortunately, certain firmware strips the FCS while other firmware does not. You would think that the presence of this field would be indicated in the MPDU layer where it resides; however, it is indicated in the radiotap header, which means the radiotap parser has to share information with the MPDU parser. To get to it, you have to 'unpack' the flags portion of the radiotap header and '&' it with 0x10.
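A sketch of that check (it handles the common layout only: extended present bitmaps and an optional TSFT field before Flags; a full radiotap parser has to honour every present bit and its alignment rules):

    final class RadiotapFcs {

        // True if the radiotap Flags field says the frame ends with a 4-byte FCS.
        static boolean fcsAtEnd(byte[] pkt) {
            // radiotap header: version(1) pad(1) len(2, LE) it_present(4, LE)
            int present = le32(pkt, 4);
            if ((present & (1 << 1)) == 0) {
                return false;                      // no Flags field present at all
            }
            int offset = 8;                        // first byte after the fixed header
            int word = present;
            while ((word & 0x80000000) != 0) {     // skip any extended present bitmaps
                word = le32(pkt, offset);
                offset += 4;
            }
            if ((present & 1) != 0) {              // TSFT (u64, 8-byte aligned) precedes Flags
                offset = (offset + 7) & ~7;
                offset += 8;
            }
            int flags = pkt[offset] & 0xFF;
            return (flags & 0x10) != 0;            // 0x10 = "FCS at end" flag
        }

        private static int le32(byte[] b, int off) {
            return (b[off] & 0xFF) | ((b[off + 1] & 0xFF) << 8)
                 | ((b[off + 2] & 0xFF) << 16) | ((b[off + 3] & 0xFF) << 24);
        }
    }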

Best way to store data using APDUs?

I have a bunch of records in my off-card application and I want to save them all on the Java Card.
The question is:
What is the best way of transferring data to Java Card?
Should I transfer the data record by record (each one in its own APDU) or send all the records in just one APDU?
Of course I know about the size limitation of an APDU, and I'm using extended APDUs in order to send all the data in just one extended APDU of more than 255 bytes.
Security-wise it does not matter much whether you send your data in one extended length APDU or in multiple normal APDUs. It is, however, much better to send unrelated information using separate APDUs; this makes your application much more modular. Note that if you send related information using separate APDUs, you may need to keep state between those APDUs for validation purposes (e.g. you may have to process either none or all of them, or require that the APDUs arrive in a specific order).
Furthermore, ISO 7816-4 only defines 2-byte status words to send back to the sender, e.g. 6A80 to indicate any error in the command data. This means that it is impossible to tell from the status word alone which of the records caused the failure.
Finally, there are certainly still readers and software out there that have issues handling extended length APDUs. So if your software is going to be used by other parties, you may want to stick to normal length APDUs.
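As a rough illustration of the record-by-record approach with state kept between APDUs (a hypothetical applet fragment; the INS value, the record sizes and the use of P1 as a record index are all made up for the example):

    // Hypothetical constants for the example
    private static final byte  INS_PUT_RECORD = (byte) 0xD0;
    private static final short RECORD_COUNT   = 8;
    private static final short RECORD_SIZE    = 32;

    private final byte[]    records  = new byte[(short) (RECORD_COUNT * RECORD_SIZE)];
    private final boolean[] received = new boolean[RECORD_COUNT];

    // Called from process() when buf[ISO7816.OFFSET_INS] == INS_PUT_RECORD
    private void putRecord(APDU apdu) {
        byte[] buf = apdu.getBuffer();
        short idx = (short) (buf[ISO7816.OFFSET_P1] & 0xFF);
        if (idx >= RECORD_COUNT) {
            ISOException.throwIt(ISO7816.SW_INCORRECT_P1P2);
        }
        short len = apdu.setIncomingAndReceive();
        if (len != RECORD_SIZE) {
            ISOException.throwIt(ISO7816.SW_WRONG_LENGTH);
        }
        Util.arrayCopyNonAtomic(buf, ISO7816.OFFSET_CDATA,
                records, (short) (idx * RECORD_SIZE), RECORD_SIZE);
        received[idx] = true;  // only commit/use the data once every flag is set
    }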

error detection/correction/recovery in serial protocols

I have some designing to do for a serial protocol and am running into some questions that I figure must have been considered elsewhere.
So I'm wondering if there are some recommendations for best practices in designing serial protocols. (Please either state a fact that is easily verifiable, or cite a reputable source if you make a claim.) General recommendations for websites/books are also welcome.
In particular I have to deal with issues like
parsing a stream of bytes into packets
verifying a packet is correct (easy with a CRC, for instance)
identifying reasonable types of errors that can occur (e.g. in a point-to-point serial stream, sporadic single bit errors, and dropped series of bytes, are both likely, but extra phantom bytes are unlikely; whereas with a record stored in flash memory or on a disk drive the types of errors that predominate are different)
error correction or recovery (if I detect an error in a packet, can I correct it? If not, can I resync to the boundary of the next packet?)
how to make variable-length packets robust to error correction / recovery.
Any suggestions?
Packet delimiting
For syncing to packet boundaries, typically you have a byte or byte sequence that identifies the packet boundary, which cannot occur within the packet itself. If the packet data happens to contain that identifier, then you have to "escape" (aka byte stuff) it.
Examples:
PPP Encapsulation
Consistent Overhead Byte Stuffing (COBS), or maybe COBS/R, which encodes data packets so no zero bytes are present, thus you can use zero bytes for packet delimiting (a minimal encoder is sketched after this list)
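For illustration, a minimal COBS encoder (encode only; a real implementation also needs the matching decoder, and the delimiting zero byte is sent separately after each encoded frame):

    import java.util.Arrays;

    final class Cobs {

        // COBS-encodes 'in' so that the result contains no zero bytes.
        static byte[] encode(byte[] in) {
            byte[] out = new byte[in.length + in.length / 254 + 2];
            int codeIdx = 0;   // where the current block's code byte will be written
            int outIdx  = 1;   // next write position for data bytes
            int code    = 1;   // length of the current block, including its code byte
            for (byte b : in) {
                if (b != 0) {
                    out[outIdx++] = b;
                    if (++code == 0xFF) {           // block full (254 data bytes)
                        out[codeIdx] = (byte) code;
                        codeIdx = outIdx++;
                        code = 1;
                    }
                } else {
                    out[codeIdx] = (byte) code;     // a zero byte closes the current block
                    codeIdx = outIdx++;
                    code = 1;
                }
            }
            out[codeIdx] = (byte) code;
            return Arrays.copyOf(out, outIdx);
        }
    }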
Packet verification
Various options are:
Checksum
Adler-32
Fletcher
CRC (the more bits the better the check)
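As an example of one of the lighter options from the list above, a Fletcher-16 checksum (a CRC catches more error patterns, but the shape of the verification step is the same):

    final class Fletcher {

        // Fletcher-16 over data[off .. off+len), returned as (sum2 << 8) | sum1.
        static int fletcher16(byte[] data, int off, int len) {
            int sum1 = 0;
            int sum2 = 0;
            for (int i = 0; i < len; i++) {
                sum1 = (sum1 + (data[off + i] & 0xFF)) % 255;
                sum2 = (sum2 + sum1) % 255;
            }
            return (sum2 << 8) | sum1;
        }
    }

The sender appends the two checksum bytes to the packet; the receiver recomputes the value over the received bytes and compares.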
Error correction etc
Good questions. I've not had much experience with that.
Have you considered FEC (Forward Error Correction)?
This technique is very often used in physical-layer communication protocols such as WDM (Wavelength Division Multiplexing) / OTN (Optical Transport Network).

Resources