Why does my SIM card return 6985 to the EXTERNAL AUTHENTICATE APDU command? - javacard

I have a Javacard-based SIM Card with the following specification:
D:\>gp -i
# GlobalPlatformPro 325fe84
# Running on Windows 8.1 6.3 amd64, Java 1.8.0_20 by Oracle Corporation
Unlimited crypto policy is NOT installed!
IIN: <Censored by OP>
CIN: <Censored by OP>
Card Data:
Tag 6: 1.2.840.114283.1
-> Global Platform card
Tag 60: 1.2.840.114283.2.2.2
-> GP Version: 2.2
Tag 63: 1.2.840.114283.3
Tag 64: 1.2.840.114283.4.0
-> GP SCP80 i=00
Tag 64: 1.2.840.114283.4.2.21
-> GP SCP02 i=15
Tag 65: 1.2.840.114283.5.4
Tag 66: 1.3.6.1.4.1.42.2.110.1.2
-> JavaCard v2
Card Capabilities:
[WARN] GPKeyInfo - Access and Usage not parsed: 01180100
[WARN] GPKeyInfo - Access and Usage not parsed: 01140100
[WARN] GPKeyInfo - Access and Usage not parsed: 01480100
[WARN] GPKeyInfo - Access and Usage not parsed: 01180100
[WARN] GPKeyInfo - Access and Usage not parsed: 01140100
[WARN] GPKeyInfo - Access and Usage not parsed: 01480100
Version: 32 (0x20) ID: 1 (0x01) type: DES3 length: 16
Version: 32 (0x20) ID: 2 (0x02) type: DES3 length: 16
Version: 32 (0x20) ID: 3 (0x03) type: DES3 length: 16
Version: 33 (0x21) ID: 1 (0x01) type: DES3 length: 16
Version: 33 (0x21) ID: 2 (0x02) type: DES3 length: 16
Version: 33 (0x21) ID: 3 (0x03) type: DES3 length: 16
Version: 34 (0x22) ID: 1 (0x01) type: DES3 length: 16
Version: 34 (0x22) ID: 2 (0x02) type: DES3 length: 16
Version: 34 (0x22) ID: 3 (0x03) type: DES3 length: 16
Version: 35 (0x23) ID: 1 (0x01) type: DES3 length: 16
Version: 35 (0x23) ID: 2 (0x02) type: DES3 length: 16
Version: 35 (0x23) ID: 3 (0x03) type: DES3 length: 16
Version: 1 (0x01) ID: 1 (0x01) type: DES3 length: 16
Version: 1 (0x01) ID: 2 (0x02) type: DES3 length: 16
Version: 1 (0x01) ID: 3 (0x03) type: DES3 length: 16
Version: 2 (0x02) ID: 1 (0x01) type: DES3 length: 16
Version: 2 (0x02) ID: 2 (0x02) type: DES3 length: 16
Version: 2 (0x02) ID: 3 (0x03) type: DES3 length: 16
Version: 3 (0x03) ID: 1 (0x01) type: DES3 length: 16
Version: 3 (0x03) ID: 2 (0x02) type: DES3 length: 16
Version: 3 (0x03) ID: 3 (0x03) type: DES3 length: 16
Version: 4 (0x04) ID: 1 (0x01) type: DES3 length: 16
Version: 4 (0x04) ID: 2 (0x02) type: DES3 length: 16
Version: 4 (0x04) ID: 3 (0x03) type: DES3 length: 16
Version: 5 (0x05) ID: 1 (0x01) type: DES3 length: 16
Version: 5 (0x05) ID: 2 (0x02) type: DES3 length: 16
Version: 5 (0x05) ID: 3 (0x03) type: DES3 length: 16
Version: 6 (0x06) ID: 1 (0x01) type: DES3 length: 16
Version: 6 (0x06) ID: 2 (0x02) type: DES3 length: 16
Version: 6 (0x06) ID: 3 (0x03) type: DES3 length: 16
Version: 7 (0x07) ID: 1 (0x01) type: DES3 length: 16
Version: 7 (0x07) ID: 2 (0x02) type: DES3 length: 16
Version: 7 (0x07) ID: 3 (0x03) type: DES3 length: 16
Version: 8 (0x08) ID: 1 (0x01) type: DES3 length: 16
Version: 8 (0x08) ID: 2 (0x02) type: DES3 length: 16
Version: 8 (0x08) ID: 3 (0x03) type: DES3 length: 16
Warning: no keys given, defaulting to 404142434445464748494A4B4C4D4E4F
When I attempt mutual authentication with it, I receive the 69 85 (Conditions of use not satisfied) error status word:
D:\>python mutual_auth.py
Connected to Card with ATR = 3B9F95803FC6A08031E073FE211B670110B26094101401
---> 00 A4 04 00 08 A0 00 00 00 03 00 00 00
<--- 6F 10 84 08 A0 00 00 00 03 00 00 00 A5 04 9F 65 01 FF 90 00
---> 80 50 00 00 08 37 CD BA 7B B4 57 B5 1B
<--- 00 00 C6 D8 6A 1C B2 02 14 13 20 02 00 00 71 90 98 C2 77 8A 07 3D 4A 4B F1 4D D4 FB 90 00
:: Calculated "Session Keys" based on host and card challenges:
Session ENC: 43cc9d7949a13e83d22626400645c4c143cc9d7949a13e83
Session MAC: 4abaaa3864d8fbf2ae0ac430c550ef564abaaa3864d8fbf2
Session DEK: e1fbe0ccb299f3dcf756308f94fa4fb5e1fbe0ccb299f3dc
:: Card cryptogram verified successfully.
---> 84 82 00 00 10 22 66 0D BB EF 34 74 D3 11 43 98 00 F6 15 B9 ED
<--- 69 85
Error: Failed to Mutual Authenticate!
What is wrong with the external authenticate command?
Note 1: I can do a successful mutual authentication with different Java Cards (which are also SCP02 i=15), which means the tool creates the session keys and MAC values correctly; but when I attempt mutual authentication with my SIM cards, I receive 6985.
Note 2: The card cryptogram is correct based on the generated session keys.
Note 3: I even tried C-MAC on unmodified APDU (i=1A), but nothing changed.
Verification of session keys:
Based on the INITIALIZE UPDATE APDU command and its response, we have:
Host Challenge: 37 CD BA 7B B4 57 B5 1B
key_diversification_data : 00 00 C6 D8 6A 1C B2 02 14 13
key_info : 20 02
sequence_counter : 00 00
card_challenge : 71 90 98 C2 77 8A
card_cryptogram : 07 3D 4A 4B F1 4D D4 FB
Let's verify the correctness of the generated session ENC key with the above data.
Based on the key_info data, our SIM card uses SCP02. In SCP02 we have:
card_auth_data = host_challenge + sequence_counter + card_challenge + 8000000000000000
==> card_auth_data = 37CDBA7BB457B51B0000719098C2778A8000000000000000
card_cryptogram = 3des_cbc_enc(card_auth_data, ZERO_IV)[-8:]
As you can see above, the generated card cryptogram equals the value that the card returned in the INITIALIZE UPDATE response, so the session ENC key has the correct value.
Let's generate host cryptogram for External Authenticate command:
host_auth_data = sequence_counter + card_challenge + host_challenge + 8000000000000000
==> host_auth_data = 0000719098C2778A37CDBA7BB457B51B8000000000000000
host_cryptogram = 3des_cbc_enc(host_auth_data, ZERO_IV)[-8:]
As you can see, the generated host_cryptogram is equal to the value which I sent to the card.
So the only possible remaining problem is the MAC value in the EXTERNAL AUTHENTICATE command. Let's assume the session MAC key is generated correctly (we can't verify its value with the information provided in the question, and I don't want to expose my card's static MAC key). Is there any other possible origin for the issue?
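For reference, the construction of the two authentication blocks described above can be sketched in Python. This only rebuilds the data to be encrypted; the 3DES-CBC step itself needs the derived session ENC key and a crypto library, which is left out here:

```python
# Sketch of the SCP02 cryptogram input construction described above.
# Only the data blocks are rebuilt; the 3DES-CBC encryption over them
# (whose last 8-byte block is the cryptogram) is not performed here.

ISO_PADDING = bytes.fromhex("8000000000000000")  # 0x80 followed by zeros

def card_auth_data(host_challenge: bytes, sequence_counter: bytes,
                   card_challenge: bytes) -> bytes:
    """Input block whose last 3DES-CBC cipher block is the card cryptogram."""
    return host_challenge + sequence_counter + card_challenge + ISO_PADDING

def host_auth_data(host_challenge: bytes, sequence_counter: bytes,
                   card_challenge: bytes) -> bytes:
    """Input block whose last 3DES-CBC cipher block is the host cryptogram."""
    return sequence_counter + card_challenge + host_challenge + ISO_PADDING

# Values taken from the INITIALIZE UPDATE exchange above:
host = bytes.fromhex("37CDBA7BB457B51B")
seq = bytes.fromhex("0000")
card = bytes.fromhex("719098C2778A")

print(card_auth_data(host, seq, card).hex())
print(host_auth_data(host, seq, card).hex())
```

Running this reproduces exactly the two 24-byte hex blocks shown in the verification above.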

Reading your question carefully, I see that you are using a SIM card.
SIM cards usually do not expose SCP02/SCP03, because they are not meant to be used by end users in a card reader, so SCP02/SCP03 is disabled. For remote updates the MNO just uses SCP80 over SMS, BIP, or CAT_TP, which is a different secure channel protocol. The reported SCP02 can be a false positive; some SIM cards are not very spec-compliant.
Also, even if your card does support SCP02, are you sure that you have the SCP02 keys? MNOs / SIM card vendors usually distribute only the KIC/KID for the SCP80 protocol in their output files.


Read bytes of inode (in the inode table) of newly created file in EXT4

I would like to read the raw bytes of an inode in the inode table. I am able to read the inode of the root directory, but not of a newly created file. Here is some information about the FS:
Filesystem UUID: db0769b9-cbcf-4882-a283-79ccb7fa3134
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent 64bit flex_bg sparse_super large_file huge_file dir_nlink extra_isize metadata_csum
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 4161536
Block count: 16645376
Reserved block count: 832268
Free blocks: 12277640
Free inodes: 3875167
First block: 0
Block size: 4096
Fragment size: 4096
Group descriptor size: 64
Reserved GDT blocks: 1024
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
Flex block group size: 16
Filesystem created: Wed Feb 2 22:45:08 2022
Last mount time: Sun May 29 22:48:32 2022
Last write time: Sun May 29 22:48:32 2022
Mount count: 11
Maximum mount count: -1
Last checked: Wed Feb 2 22:45:08 2022
Check interval: 0 (<none>)
Lifetime writes: 124 GB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 32
Desired extra isize: 32
Journal inode: 8
Step 1: Check if I can read the bytes of the root directory
I first successfully read the bytes of the root directory (inode 2) with the following commands:
Get the inode number of the root directory. This returns 2, as expected:
stat /
Find the start block of inode 2
debugfs -R "imap <2>" /dev/sda5
this returns the following:
Inode 2 is part of block group 0
located at block 1065, offset 0x0100
Now I'm going to use this information to find the bytes on the device. 1065 is the block number, a block is 4 KB in size, and the offset is 256 (0x0100):
hexdump -C -n 256 -s $((1065*4096+256)) /dev/sda5
partial result is:
00429100 ed 41 00 00 00 10 00 00 24 dc 93 62 60 c7 93 62 |.A......$..b`..b|
00429110 60 c7 93 62 00 00 00 00 00 00 15 00 08 00 00 00 |`..b............|
...
After inspecting these values, I concluded that this is indeed the correct data of the root directory.
Step 2: Perform the above steps with a newly created inode
Now I'm repeating the above steps with a newly created inode, which gives me only 0-valued bytes.
Create file and obtain inode number
touch test.txt
echo "my content!" > test.txt
ls -li test.txt
When I then perform the steps from above (debugfs etc.), I get the following result:
1010af300 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
1010af400
Potential causes
Do I have to remount the FS? Is that why the inode table is empty for the specific inode?
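As a sanity check on the arithmetic in Step 1, here is a small Python sketch (block size and inode size are taken from the dumpe2fs output above; the block/offset pair is what debugfs's imap reported for inode 2):

```python
# Recompute the byte offset used in the hexdump command above.
# Inputs come from the debugfs ("imap <2>") and dumpe2fs output.

BLOCK_SIZE = 4096   # "Block size: 4096"
INODE_SIZE = 256    # "Inode size: 256"

def inode_byte_offset(block: int, offset_in_block: int) -> int:
    """Absolute byte position of an inode, given debugfs's block/offset pair."""
    return block * BLOCK_SIZE + offset_in_block

# Inode 2: "located at block 1065, offset 0x0100"
pos = inode_byte_offset(1065, 0x0100)
print(hex(pos))  # matches the 00429100 address shown by hexdump
```

So the addressing itself is consistent; the question is why the on-disk inode table has not yet been updated for the new file.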

Sockets programming: DIG DNS Query Messages: Incorrect header length?

RFC Reference
I am working on a project which involves sockets programming and interpreting the output from DIG DNS queries.
I'm using RFC 1035 as my reference. Although this is quite old now (1987) as far as I can tell from later RFCs (for example 8490) the DNS headers are still the same.
https://www.rfc-editor.org/rfc/rfc1035
Code Overview: IPv6 TCP query
I have written a short program in C which reads from a IPv6 TCP socket. I send data to this socket using DIG. (My program simply reads all data it sees on a socket, and prints it to stdout.)
Note that there are two unusual things here:
Firstly the use of IPv6
Secondly the use of TCP (DNS messages are often UDP)
Here is the command used:
dig @::1 -p 8053 duckduckgo.com +tcp
I am running dig version DiG 9.16.13-Debian, on Debian Testing. (cera 2021-May)
Output, Discussion and Question
Here is the hexadecimal and printable character output which is read from the socket:
Hex:
00 37 61 78 01 20 00 01 00 00 00 00 00 01 0A 64 75 63 6B 64 75 63 6B 67 6F 03 63 6F 6D 00 00 01 00 01 00 00 29 10 00 00 00 00 00 00 0C 00 0A 00 08 00 7A 4* 48 2C 16 0* 33
Char:
00 7 61 x 01 20 00 01 00 00 00 00 00 01 0A d u c k d u c k g o 03 c o m 00 00 01 00 01 00 00 ) 10 00 00 00 00 00 00 0C 00 0A 00 08 00 z 4* H , 16 0* 33
If non-printable characters are encountered, the hex value is printed instead.
Although this is a fairly long stream of data, the question relates to the length of the header.
According to RFC 1035, the length of the header should be 12 bytes.
4.1.1. Header section format
The header contains the following fields:
1 1 1 1 1 1
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
| ID |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|QR| Opcode |AA|TC|RD|RA| Z | RCODE |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
| QDCOUNT |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
| ANCOUNT |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
| NSCOUNT |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
| ARCOUNT |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
The header is followed by a QUESTION SECTION. The question section begins with a single byte which specifies the length.
Inspecting the data stream above, we see that the byte at offset 12 has a value of 0. I repeat it below with offset numbers to make it clear. The data is in the middle row, the row above and below are byte offsets.
0 1 2 3 4 5 6 7 8 9 10 11 <- byte 12
00 37 61 78 01 20 00 01 00 00 00 00 00 01 0A 64 75 63 6B ...
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 <- byte 15
This clearly doesn't make any sense.
Looking again at the stream, we can see that "duckduckgo" is preceded by the byte 0A. This is 10 in decimal and corresponds to the 10 characters of "duckduckgo". The string is followed by the byte 03, which corresponds to the 3 bytes of "com".
The offset of the byte 0A is 15. Not 12.
I must have misunderstood the RFC specification. But what have I misunderstood? Does the header itself start at a different offset to what I think it is? (Byte zero.) Or is there perhaps some padding between the end of the header and the beginning of the first question section?
Existing Question on this site:
Comments: The below link states that there is no padding. This is the only answer on this question. The question is about DNS responses rather than queries, and does not ask about the header section of a query. (Although information from one should presumably apply to the other, but possibly does not.)
Do DNS messages pad names to an even number of bytes?
Comments: The below link asks about the best way to build a data structure to handle DNS data. Additionally, the answer notes that one has to be careful about network byte order and machine byte order. I am already aware of this and I use ntohs() to convert from network byte order to x86_64 byte order before printing information to stdout. This is not the problem and does not explain why I see information about the dns query starting at byte 15 instead of 12, when the header should be a fixed size of 12 bytes.
Implementing a DNS Query in c++ according to RFC 1035
Thanks to @SteffenUllrich who prompted the solution for this in the comments.
RFC 1035 4.2.2 states
4.2.2. TCP usage
Messages sent over TCP connections use server port 53 (decimal). The
message is prefixed with a two byte length field which gives the message
length, excluding the two byte length field. This length field allows
the low-level processing to assemble a complete message before beginning
to parse it.
I had removed the 2-byte field at the start of my struct at some point.
This is what the structure looks like with the 2 byte length field re-enabled.
struct __attribute__((__packed__)) dns_header
{
unsigned short ID;
union
{
unsigned short FLAGS;
struct
{
unsigned short QR : 1;
unsigned short OPCODE : 4;
unsigned short AA : 1;
unsigned short TC : 1;
unsigned short RD : 1;
unsigned short RA : 1;
unsigned short Z : 3;
unsigned short RCODE : 4;
};
};
unsigned short QDCOUNT;
unsigned short ANCOUNT;
unsigned short NSCOUNT;
unsigned short ARCOUNT;
};
struct __attribute__((__packed__)) dns_struct_tcp
{
unsigned short length; // length excluding 2 bytes for length field
struct dns_header header;
};
For example: I received a TCP packet of 53 bytes; the value of length is set to 51.
To read data into this struct:
memcpy(&dnsdata, buf, sizeof(struct dns_struct_tcp));
To interpret this data (since it is stored in network byte order):
void dns_header_print(FILE *file, const struct dns_header *header)
{
fprintf(file, "ID: %u\n", ntohs(header->ID));
char str_FLAGS[8 * sizeof(unsigned short) + 1];
str_FLAGS[8 * sizeof(unsigned short)] = '\0';
print_binary_16_fixed_width(str_FLAGS, header->FLAGS);
fprintf(file, "FLAGS: %s\n", str_FLAGS);
fprintf(file, "FLAGS: QOP ATRRZZZR \n");
fprintf(file, " RCODEACDA CODE\n");
fprintf(file, "QDCOUNT: %u\n", ntohs(header->QDCOUNT));
fprintf(file, "ANCOUNT: %u\n", ntohs(header->ANCOUNT));
fprintf(file, "NSCOUNT: %u\n", ntohs(header->NSCOUNT));
fprintf(file, "ARCOUNT: %u\n", ntohs(header->ARCOUNT));
}
Note that the flags are unchanged, since each field of the flags is less than 8 bits in length. However, on x86_64 systems unsigned short is stored in little-endian format, hence ntohs() is used to convert data from big-endian (network) byte order to little-endian (host) byte order.
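The same parse can be checked directly against the captured bytes. Here is a short Python sketch (independent of the C structs) that reads the 2-byte TCP length prefix, the 12-byte header, and the first QNAME label from the unmasked prefix of the hex stream shown in the question:

```python
import struct

# First 25 bytes of the TCP stream from the question: a 2-byte length
# prefix, the 12-byte DNS header, then the first QNAME label.
data = bytes.fromhex("00376178012000010000000000010A6475636B6475636B676F")

(msg_len,) = struct.unpack("!H", data[:2])             # 0x0037 = 55
ident, flags, qd, an, ns, ar = struct.unpack("!6H", data[2:14])

label_len = data[14]                                    # 0x0A = 10
label = data[15:15 + label_len].decode("ascii")

print(msg_len, hex(ident), qd, an, ns, ar, label)
```

The question section therefore does start at offset 12 of the DNS message; it is the whole message that starts at offset 2 of the TCP stream.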

How does JWT implement RS256 signature verification in Node.js

I was trying to understand the internal working of how JWT performs RS256 signature verification. The signature algorithm works in the following basic steps:
Hash the original data
Encrypt the hash with RSA private key
And for verification it follows the following steps:
Decrypt using RSA public key
Match the hash generated from decryption with the SHA256 hash of original message.
While trying to test this on the following JWT:
eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiYWRtaW4iOnRydWUsImlhdCI6MTUxNjIzOTAyMn0.POstGetfAytaZS82wHcjoTyoqhMyxXiWdR7Nn7A29DNSl0EiXLdwJ6xC6AfgZWF1bOsS_TuYI3OG85AmiExREkrS6tDfTQ2B3WXlrr-wp5AokiRbz3_oB4OxG-W9KcEEbDRcZc0nH3L7LzYptiy1PtAylQGxHTWZXtGz4ht0bAecBgmpdgXMguEIcoqPJ1n3pIWk_dUZegpqx0Lka21H6XxUTxiy8OcaarA8zdnPUnV6AmNP3ecFawIFYdvJB_cm-GvpCSbr8G8y_Mllj8f4x9nBH8pQux89_6gUY618iYv7tuPWBFfEbLxtF2pZS6YC1aSfLQxeNe8djT9YjpvRZA
I found that the hash obtained from the signature contains some extra characters.
E.g., the SHA-256 of the original message for the above JWT, hex-encoded, is
8041fb8cba9e4f8cc1483790b05262841f27fdcb211bc039ddf8864374db5f53
but the hash obtained from the signature of the above JWT after decryption is
3031300d0609608648016503040201050004208041fb8cba9e4f8cc1483790b05262841f27fdcb211bc039ddf8864374db5f53
which has the extra bytes 3031300d060960864801650304020105000420 in front of the hash.
What do these characters represent and shouldn't the hash obtained from message and signature be identical?
rfc7518 3.3 defines the JWS algorithms RS256, RS384, and RS512:
This section defines the use of the RSASSA-PKCS1-v1_5 digital
signature algorithm as defined in Section 8.2 of RFC 3447 [RFC3447]
(commonly known as PKCS #1), using SHA-2 [SHS] hash functions.
and rfc3447 8.2 defines RSASSA-PKCS1-v1_5:
RSASSA-PKCS1-v1_5 combines the RSASP1 and RSAVP1 primitives with the
EMSA-PKCS1-v1_5 encoding method. ....
where EMSA-PKCS1-v1_5 is defined in rfc3447 9.2 as:
1. Apply the hash function to the message M to produce a hash value
H:
H = Hash(M).
If the hash function outputs "message too long," output "message
too long" and stop.
2. Encode the algorithm ID for the hash function and the hash value
into an ASN.1 value of type DigestInfo (see Appendix A.2.4) with
the Distinguished Encoding Rules (DER), where the type DigestInfo
has the syntax
DigestInfo ::= SEQUENCE {
digestAlgorithm AlgorithmIdentifier,
digest OCTET STRING
}
The first field identifies the hash function and the second
contains the hash value. Let T be the DER encoding of the
DigestInfo value (see the notes below) and let tLen be the length
in octets of T.
3. If emLen < tLen + 11, output "intended encoded message length too
short" and stop.
4. Generate an octet string PS consisting of emLen - tLen - 3 octets
with hexadecimal value 0xff. The length of PS will be at least 8
octets.
5. Concatenate PS, the DER encoding T, and other padding to form the
encoded message EM as
EM = 0x00 || 0x01 || PS || 0x00 || T.
6. Output EM. [added: which is then modexp'ed with d by RSASP1 to
sign, or matched to the value modexp'ed with e by RSAVP1 to verify]
Notes.
1. For the six hash functions mentioned in Appendix B.1, the DER
encoding T of the DigestInfo value is equal to the following:
MD2: (0x)30 20 30 0c 06 08 2a 86 48 86 f7 0d 02 02 05 00 04
10 || H.
MD5: (0x)30 20 30 0c 06 08 2a 86 48 86 f7 0d 02 05 05 00 04
10 || H.
SHA-1: (0x)30 21 30 09 06 05 2b 0e 03 02 1a 05 00 04 14 || H.
SHA-256: (0x)30 31 30 0d 06 09 60 86 48 01 65 03 04 02 01 05 00
04 20 || H.
SHA-384: (0x)30 41 30 0d 06 09 60 86 48 01 65 03 04 02 02 05 00
04 30 || H.
SHA-512: (0x)30 51 30 0d 06 09 60 86 48 01 65 03 04 02 03 05 00
04 40 || H.
The prefix you discovered corresponds exactly to that specified in Note 1 for the encoding in step 2 of a DigestInfo structure for a SHA-256 hash, as expected.
Note rfc3447=PKCS1v2.1 has been superseded by rfc8017=PKCS1v2.2, but the only relevant change in this area is the addition of the SHA512/224 and SHA512/256 hashes, which JWS doesn't use.
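The DigestInfo encoding from step 2 can be reproduced with the standard library alone. A short sketch (the message here is an arbitrary example, not the JWT's signing input):

```python
import hashlib

# DER prefix of DigestInfo for SHA-256, as listed in Note 1 above.
SHA256_DIGESTINFO_PREFIX = bytes.fromhex(
    "3031300d060960864801650304020105000420"
)

def digest_info_sha256(message: bytes) -> bytes:
    """T = DER(DigestInfo) for SHA-256: fixed 19-byte prefix || 32-byte hash."""
    return SHA256_DIGESTINFO_PREFIX + hashlib.sha256(message).digest()

t = digest_info_sha256(b"example message")
print(t.hex())   # the fixed prefix, then the raw SHA-256 hash
```

This is exactly the 51-byte value the asker recovered from the signature: the 19-byte ASN.1 prefix followed by the 32-byte digest.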
Describing signing and verifying as 'encrypting' and 'decrypting' the hash (really, the encoding aka padding of the hash) is considered obsolete. It was used originally, decades ago, and only for RSA, not other signature algorithms, because of the mathematical similarity between the modexp operations used for encrypting/decrypting and for signing/verifying; but thinking of these as the same or interchangeable was found to result in system implementations that were vulnerable and broken. In particular see rfc3447 5.2:
The main mathematical operation in each primitive is
exponentiation, as in the encryption and decryption primitives of
Section 5.1. RSASP1 and RSAVP1 are the same as RSADP and RSAEP
except for the names of their input and output arguments; they are
distinguished as they are intended for different purposes.
nodejs uses this obsolete terminology because it uses OpenSSL which via its predecessor SSLeay dates back to the early 1990s when this mistake was still common.
However, that isn't really a programming/development issue and is more on topic for crypto.SX and security.SX; see some of the links I collected at https://security.stackexchange.com/questions/159282/can-openssl-decrypt-the-encrypted-signature-in-an-amazon-alexa-request#159289 .

Chip EMV - Getting AFL for every smart card

Continue from: EMV Reading PAN Code
I'm working in C, so I don't have the Java tools and the functions that automatically parse the response of an APDU command.
I want to read all types of smart cards.
I have to parse the response of a GET PROCESSING OPTIONS command and get the AFL (Application File Locator) of every card.
I have three cards with three different situations:
A) HelloBank: 77 12 82 2 38 0 94 c 10 2 4 1 18 1 1 0 20 1 1 0 90
B) PayPal: 77 12 82 2 39 0 94 c 18 1 1 0 20 1 1 0 28 1 3 1 90
C) PostePay: 80 a 1c 0 8 1 1 0 18 1 2 0 90
Case A)
I've got three different AFL: 10 2 4 1, 18 1 1 0, 20 1 1 0
So I send 00 B2 SFI P2 00 where SFI was 10>>3 (10 was first byte of first AFL) and P2 was SFI<<3|4 and this way I got the correct PAN Code of my card.
Case B)
I've got three different AFL: 18 1 1 0, 20 1 1 0, 28 1 3 1.
So I sent 00 B2 SFI P2 00 built in the same way as case A, but I got the response 6A 83 for every AFL entry.
Case C)
I've got two different AFL entries: 8 1 1 0, 18 1 2 0, but I cannot parse them automatically because the response does not use the same tag as the previous responses.
If I use those AFL it worked and I can get the PAN Code of the card.
How can I make an universal way to read the correct AFL and how can I make the correct command with those AFL?
Here is the decoding of the AFL:
You will normally get the AFL as a multiple of 4 bytes. Divide your complete AFL into chunks of 4 bytes. Let's take an example of one chunk:
AABBCCDD
AA -> SFI (Decoding is described below)
BB -> First Record under this SFI
CC -> Last Record under this SFI
DD -> Record involved for Offline Data Authentication (Not for your use for the moment)
Taking your example 10 02 04 01 18 01 01 00 20 01 01 00
The chunks are 10 02 04 01, 18 01 01 00, 20 01 01 00
10 02 04 01 -->
Taking the 1st byte, 10: 00010000. Take the first 5 bits from the MSB --> 00010, which is 2: so this is SFI 2
Taking the 2nd byte, 02: the first record under SFI 2 is 02
Taking the 3rd byte, 04: the last record under SFI 2 is 04
The 4th byte is excluded from the explanation since it is not needed here
Summary: SFI 2 contains records 2 to 4
How the READ RECORD command is formed:
APDU structure: CLA INS P1 P2 LE
CLA: 00
INS: B2
P1 (record number): 02 (since the first record in this SFI is 02)
P2 (SFI): represent the SFI in 5 binary digits (00010) and append 100 at the end: 00010100, which is 14 in hex. So P2 is 14.
LE: 00
APDU to read SFI 2 record 2: 00 B2 02 14 00
APDU to read SFI 2 record 3: 00 B2 03 14 00
APDU to read SFI 2 record 4: 00 B2 04 14 00
Now if you try to read record 5: since this record is not present, you will get SW 6A83 in this case.
Use the same procedure for all chunks to identify the available records and SFIs.
By this mechanism you can write a function to parse the AFL.
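The procedure above can be sketched in Python. Note on the "universal way" part of the question: per EMV Book 3, cases A and B are format-2 GPO responses (template 77, AFL under tag 94), while case C is a format-1 response (template 80, where the 2-byte AIP is followed directly by the AFL), so the AFL bytes must be located differently per template; once isolated, the chunk decoding is identical:

```python
# Sketch of the AFL decoding described above: split the AFL into 4-byte
# chunks, extract the SFI and record range, and build READ RECORD APDUs.

def parse_afl(afl: bytes):
    """Yield (sfi, first_record, last_record, oda_records) per 4-byte chunk."""
    for i in range(0, len(afl), 4):
        sfi_byte, first, last, oda = afl[i:i + 4]
        yield sfi_byte >> 3, first, last, oda

def read_record_apdu(sfi: int, record: int) -> bytes:
    """00 B2 <record> <(SFI << 3) | 4> 00"""
    return bytes([0x00, 0xB2, record, (sfi << 3) | 0x04, 0x00])

# AFL from case A: 10 02 04 01, 18 01 01 00, 20 01 01 00
afl = bytes.fromhex("100204011801010020010100")
for sfi, first, last, _ in parse_afl(afl):
    for rec in range(first, last + 1):
        print(read_record_apdu(sfi, rec).hex())
```

For case A's first chunk this produces 00B2021400, 00B2031400, 00B2041400, matching the APDUs listed above.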

Broadcasting message in Bluetooth low energy mode

I would like to know how to broadcast a message in BLE (Bluetooth Low Energy) mode.
The behavior is just like iBeacon on a Macintosh.
As far as I know, Windows (7 or 8) does not support this function, but Linux does.
Could anyone guide me to a way to achieve this in Linux? Command line or code are both fine.
The idea is that an x86 Linux machine broadcasts a message like "I am a laptop", and I can use another device (phone/computer, etc.) to receive it.
Thanks for your help.
You can use the BlueZ stack to advertise a BLE device in Linux. See this question for the basics of how to do this:
Use BlueZ Stack As A Peripheral (Advertiser)
Depending on what you want to advertise, you need to figure out the format of the bytes in the advertisement. Here is an example of how you can use BlueZ to transmit the open-source AltBeacon advertisement format: https://github.com/RadiusNetworks/altbeacon-reference/blob/master/altbeacon_transmit
Step 0:
(If you have a Mac, download mactsAsBeacon to verify.)
Download an iBeacon scanner app on your Android/iOS mobile phone.
Step 1:
Here is my shell script:
#!/bin/bash
set -x
export BLUETOOTH_DEVICE=hci0
#sudo hcitool -i hcix cmd <OGF> <OCF> <No. Significant Data Octets> <iBeacon Prefix> <UUID> <Major> <Minor> <Tx Power> <Placeholder Octets>
#OGF = Operation Group Field = Bluetooth Command Group = 0x08
#OCF = Operation Command Field = HCI_LE_Set_Advertising_Data = 0x0008
#No. Significant Data Octets (Max of 31) = 1E (Decimal 30)
#iBeacon Prefix (Always Fixed) = 02 01 1A 1A FF 4C 00 02 15
export OGF="0x08"
export OCF="0x0008"
export IBEACONPROFIX="02 01 1A 1A FF 4C 00 02 15"
#export UUID="92 77 83 0A B2 EB 49 0F A1 DD 7F E3 8C 49 2E DE"
export UUID="B9 40 7F 30 F5 F8 46 6E AF F9 25 55 6B 57 FE 6D"
export MAJOR="01 02"
export MINOR="03 04"
export POWER="C5 00"
sudo hciconfig $BLUETOOTH_DEVICE up
sudo hciconfig $BLUETOOTH_DEVICE noleadv
sudo hciconfig $BLUETOOTH_DEVICE noscan
sudo hciconfig $BLUETOOTH_DEVICE leadv 3
sudo hcitool -i $BLUETOOTH_DEVICE cmd 0x08 0x0008 $IBEACONPROFIX $UUID $MAJOR $MINOR $POWER
#sudo hciconfig $BLUETOOTH_DEVICE leadv 3
Step 2:
Run this script, and the iBeacon scanner on your mobile phone will find the Linux iBeacon transmitter.
If you want to turn off the broadcasting:
sudo hciconfig hci0 noleadv
export IBEACONPROFIX="02 01 1A 1A FF 4C 00 02 15"
is correct but can be further divided into Bluetooth HCI data plus Apple proprietary data:
3 bytes of Flags as per Supplement to the Bluetooth Core Specification
02 : length (1)
01 : type "Flag"
1A : value of flags
followed by vendor proprietary data
1A : length of proprietary payload (1), here 0x1A == 26: 5 bytes iBeacon header + 21 iBeacon payload data
FF : indicator of proprietary data (1)
4C 00: company ID (2), Apple
02 : iBeacon type
15 : iBeacon data length (1), 0x15 == 21: 16 bytes UUID, 2 bytes major, 2 bytes minor, 1 byte TX power
https://www.bluetooth.com/specifications/assigned-numbers/company-identifiers/
https://www.bluetooth.com/specifications/bluetooth-core-specification/
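The field breakdown above can be checked by decoding the payload the script transmits in Python (a sketch; the trailing 00 in the script's POWER variable is a placeholder octet and is omitted here):

```python
# Decode the advertising payload set by the hcitool command above,
# following the field breakdown in this answer.
adv = bytes.fromhex(
    "02011A"                            # Flags AD structure: len 02, type 01, value 1A
    "1AFF4C000215"                      # len 1A, type FF (manufacturer), Apple 004C, iBeacon 02, len 15
    "B9407F30F5F8466EAFF925556B57FE6D"  # 16-byte proximity UUID
    "0102"                              # major
    "0304"                              # minor
    "C5"                                # measured TX power (two's complement)
)

flags_len, flags_type, flags = adv[0], adv[1], adv[2]
mfr_len, mfr_type = adv[3], adv[4]
company = int.from_bytes(adv[5:7], "little")   # company IDs are little-endian on air
beacon_type, beacon_len = adv[7], adv[8]
uuid = adv[9:25].hex()
major = int.from_bytes(adv[25:27], "big")
minor = int.from_bytes(adv[27:29], "big")
tx_power = int.from_bytes(adv[29:30], "big", signed=True)
print(company, uuid, major, minor, tx_power)
```

Decoding confirms the lengths add up: the manufacturer structure's 0x1A == 26 bytes cover the type byte, company ID, iBeacon header, and the 21-byte iBeacon payload.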
