How to construct 9F10 tag?

I received the following IAD after processing the GPO command. My question is: how is the 9F10 EMV tag constructed? Here is the value.
06010A03A020000F04000000000000000000006232E4F9
I am required to send only the CVR portion to the acquiring switch.

Looking at the cryptogram version, I assume this is from a Visa card. The TLV is 9F10 17 06010A03A020000F04000000000000000000006232E4F9, correct?
17 is the total length of the data
06 is the length of the Visa discretionary data (the six bytes that follow)
01 is the derivation key index
0A is the cryptogram version number (10 decimal in this case)
03 is the length of the CVR
A02000 is the CVR itself
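For illustration, here is a minimal Java sketch that splits an IAD with the layout assumed above (Visa-style: discretionary-data length, DKI, CVN, CVR length, CVR) and returns just the CVR bytes. The class and helper names are made up for this example.

import java.util.Arrays;

public class VisaIadCvr {

    // Parse a hex string into bytes (helper for this sketch only).
    static byte[] hex(String s) {
        byte[] out = new byte[s.length() / 2];
        for (int i = 0; i < out.length; i++)
            out[i] = (byte) Integer.parseInt(s.substring(2 * i, 2 * i + 2), 16);
        return out;
    }

    // Extract the CVR assuming the Visa-style layout described above:
    // byte 0 = discretionary-data length, byte 1 = DKI, byte 2 = CVN,
    // byte 3 = CVR length, bytes 4.. = CVR.
    static byte[] extractCvr(byte[] iad) {
        int cvrLen = iad[3] & 0xFF;
        return Arrays.copyOfRange(iad, 4, 4 + cvrLen);
    }

    public static void main(String[] args) {
        byte[] iad = hex("06010A03A020000F04000000000000000000006232E4F9");
        byte[] cvr = extractCvr(iad);
        StringBuilder sb = new StringBuilder();
        for (byte b : cvr) sb.append(String.format("%02X", b));
        System.out.println("CVR = " + sb);   // prints A02000
    }
}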

From EMV 4.3 Book 3, Common Core Definitions, Application Specification, November 2011, page 206, C7.2:
The CVR has a fixed length of 5 bytes (10 hexadecimal characters), occupying bytes 4 to 8 (inclusive) of the Issuer Application Data, EMV tag 9F10.
The first 3 bytes of 9F10 are the following:
b1 the length
b2 the derivation key index
b3 the cryptogram version number
Note, however, that the format of this field varies between schemes; the Visa-style breakdown above, for example, places a 3-byte CVR at bytes 5-7, while the CCD format uses a 5-byte CVR at bytes 4-8.
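For contrast with the Visa-style sketch above, a CCD-format CVR is simply the fixed 5-byte slice at bytes 4 to 8 of the IAD. A tiny method (which could sit next to extractCvr in the hypothetical class above) would be:

// CCD format: the CVR is the fixed 5-byte field at bytes 4-8 (1-based) of tag 9F10.
static byte[] extractCcdCvr(byte[] iad) {
    return java.util.Arrays.copyOfRange(iad, 3, 8);   // 0-based indices 3..7
}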

Related

JAVACARD possible to change ATQB response

I am using JC30M48CR Type B Javacard and JCIDE for compiling.
I searched the whole forum to find out whether it is possible to change the ATQB response on Java Card. However, all the topics are about changing the ATR, and the setATRHistBytes() method always returns false.
May I know whether it is possible to customise the ATQB? For example, the request code for ISO 14443 Type B is 05 00 00, and the ATQB response is 50 00 00 00 00 D1 03 86 0C 00 80 80.
Thanks
No, because Java Card does not give you control over the protocol at that level. Moreover, the historical bytes are not applicable to Type B cards; you'd need an ATR-specific file in the root to be able to communicate the historical bytes, because they are simply not present in the ISO/IEC 14443 Type B protocol.
If the communication parameters can be set at all, that functionality is OS specific. So in general - if you're big enough - you can have chips delivered with special settings. You may also be able to set the parameters yourself through an OS-provided initialization application on the chip. Those options are all vendor specific.
Of course the vendors do not want to let just any applet change the communication parameters. For the historical bytes, the Java Card Forum compromised on only allowing the default selected applet to change them (instead of using a specific INSTALL for INSTALL flag or other authentication measures).
In short: contact your supplier and ask for the user manual.
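For the historical-bytes part (not the ATQB), a minimal Java Card sketch of how an applet would typically attempt it is shown below. It uses the GlobalPlatform GPSystem.setATRHistBytes() call already mentioned in the question; the applet, AID handling, and byte values are only illustrative, and the call is expected to return false unless the applet is the default selected applet on a platform that allows it.

import javacard.framework.APDU;
import javacard.framework.Applet;
import javacard.framework.ISO7816;
import javacard.framework.ISOException;
import org.globalplatform.GPSystem;

// Illustrative applet (hypothetical): on any command APDU it tries to replace the
// ATR historical bytes via the GlobalPlatform API. Expect false unless this applet
// is the default selected applet and the OS permits the change.
public class HistBytesApplet extends Applet {

    private static final byte[] HIST = { (byte) 0xDE, (byte) 0xAD, (byte) 0xBE, (byte) 0xEF };

    public static void install(byte[] bArray, short bOffset, byte bLength) {
        new HistBytesApplet().register();
    }

    public void process(APDU apdu) {
        if (selectingApplet()) {
            return; // nothing special on SELECT
        }
        boolean ok = GPSystem.setATRHistBytes(HIST, (short) 0, (byte) HIST.length);
        if (!ok) {
            // Report failure (e.g. not the default selected applet) with SW 6985.
            ISOException.throwIt(ISO7816.SW_CONDITIONS_NOT_SATISFIED);
        }
    }
}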

Sending image directly to Epson Projector, trouble decoding jpeg image

I have an Epson EX5220 that doesn't have a Linux driver and have been trying to work out the communication over Wi-Fi. I can connect and send images captured in packet traces from a Windows machine with the driver, but I cannot create an acceptable image of my own. Here is where the problem lies:
In the data send, a JPEG image is sent with a header attached, like this:
00:00:00:01:00:00:00:00:02:70:01:a0:00:00:00:07:90:80:85:00
00:00:00:04 - Number of jpeg images being sent (only the first header)
00:00 - X offset
00:00 - Y offset
02:70 - Width of Jpeg image (624 in this case)
01:a0 - Height of Jpeg image (416 in this case)
00:00:00:07:90 - Unknown (perhaps a version number)
80:85:00 - (What I'm after) Some count of data units?
Following the header is a normal JPEG image; if I strip off that header, I can view the image. Here is a screenshot of a partial capture with the 3 bytes highlighted:
I have found what seems to be a baseline by setting those last three bytes to 80:85:00. Anything less and the image will not project. Also, the smallest image size I can send to the projector is 3 wide by 1 high, which correlates with my first two images shown below.
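For reference, a small Java sketch that decodes the captured 20-byte header according to the field layout guessed above (the field meanings follow the OP's interpretation, not any published Epson documentation):

public class EpsonHeaderDecode {

    // Read a big-endian unsigned value of 'len' bytes starting at 'off'.
    static long be(byte[] h, int off, int len) {
        long v = 0;
        for (int i = 0; i < len; i++) v = (v << 8) | (h[off + i] & 0xFF);
        return v;
    }

    public static void main(String[] args) {
        // The captured header from the question.
        byte[] h = {
            0x00, 0x00, 0x00, 0x01,              // image count (per the guess above)
            0x00, 0x00,                          // X offset
            0x00, 0x00,                          // Y offset
            0x02, 0x70,                          // width  (624)
            0x01, (byte) 0xa0,                   // height (416)
            0x00, 0x00, 0x00, 0x07, (byte) 0x90, // unknown / version?
            (byte) 0x80, (byte) 0x85, 0x00       // the mysterious last 3 bytes
        };
        System.out.println("count  = " + be(h, 0, 4));
        System.out.println("xoff   = " + be(h, 4, 2));
        System.out.println("yoff   = " + be(h, 6, 2));
        System.out.println("width  = " + be(h, 8, 2));
        System.out.println("height = " + be(h, 10, 2));
    }
}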
Here are some examples:
1a - All white (RGB565) image 1024x768 - filesize 12915 - 4 blocks
2a - Color (RGB565) image 1024x768 - filesize 58577 - only 3 blocks
I then changed the 3 bytes to 80:b5:00 (incremented the middle one by 0x30).
1b - All white (RGB565) image 1024x768 - filesize 12915 - 22 full rows and 4 blocks.
2b - Color (RGB565) image 1024x768 - filesize 58577 - 7 rows and 22 blocks.
So it seems that the 3 bytes have something to do with data units. I've read a lot about JPEG and am still digesting much of it, but I think that if I knew what was required to calculate data units, I'd find my mystery 3 bytes.
ADDITIONAL INFO:
The projector only supports the use of RGB565 jpeg images inside the data send.
(Posted on behalf of the OP).
I was able to solve this, but I would like to know why this works. Here is the formula for those last 3 bytes:
int iSize = bImage.length;                   // size of the raw JPEG data in bytes
baHeader[17] = (byte) ((iSize) | 0x80);      // bits 0-6 of the size, high bit set
baHeader[18] = (byte) ((iSize >> 7) | 0x80); // bits 7-13 of the size, high bit set
baHeader[19] = (byte) ((iSize >> 14));       // bits 14-21 of the size, high bit clear
I got fed up with messing with it, so I just looked at several images, wrote down all the file sizes and the magic bytes, converted everything to binary, and hammered away at ANDing, ORing, and bit-shifting until I forced a formula that worked. I would like to know if this is related to calculating JPEG data units. I'm still researching JPEG, but it's not simple stuff!
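In other words, those three bytes appear to encode the JPEG payload length as a little-endian base-128 value (7 bits per byte, with the high bit set on the first two bytes). Here is a hedged Java sketch that builds the whole 20-byte header from the width, height, and JPEG size, combining the field layout above with that size formula; the field meanings are still the OP's reverse-engineered guesses.

public class EpsonHeaderBuild {

    // Build the 20-byte header described above for one JPEG frame.
    static byte[] buildHeader(int imageCount, int xOff, int yOff,
                              int width, int height, int jpegSize) {
        byte[] h = new byte[20];
        h[0] = (byte) (imageCount >> 24);
        h[1] = (byte) (imageCount >> 16);
        h[2] = (byte) (imageCount >> 8);
        h[3] = (byte)  imageCount;
        h[4] = (byte) (xOff >> 8);    h[5]  = (byte) xOff;
        h[6] = (byte) (yOff >> 8);    h[7]  = (byte) yOff;
        h[8] = (byte) (width >> 8);   h[9]  = (byte) width;
        h[10] = (byte) (height >> 8); h[11] = (byte) height;
        // Bytes 12-16: unknown constant observed in the captures (possibly a version field).
        h[12] = 0x00; h[13] = 0x00; h[14] = 0x00; h[15] = 0x07; h[16] = (byte) 0x90;
        // Bytes 17-19: JPEG size as little-endian base-128 (7 bits per byte,
        // continuation bit set on the first two bytes), per the OP's formula.
        h[17] = (byte) ( jpegSize       | 0x80);
        h[18] = (byte) ((jpegSize >> 7) | 0x80);
        h[19] = (byte) ( jpegSize >> 14);
        return h;
    }

    public static void main(String[] args) {
        byte[] header = buildHeader(1, 0, 0, 624, 416, 12915);
        StringBuilder sb = new StringBuilder();
        for (byte b : header) sb.append(String.format("%02x:", b));
        System.out.println(sb.substring(0, sb.length() - 1));
    }
}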
It looks like you're misinterpreting how the SOS marker works. Here are the bytes you show in one of your examples:
SOS = 00 0C 03 01 00 02 11 03 11 00 3F 00 F9 FE
This erroneously includes two bytes of compressed data (F9 FE) in the SOS. The length of 12 (00 0C) includes the 2 length bytes themselves, so there are really only 10 bytes of data for this marker segment.
The 00 byte before the F9 FE is the "successive approximation" field, used for progressive JPEG images. It's actually a pair of 4-bit fields.
The bytes that you see varying between images are really the first 2 compressed data bytes (which encode the DC value for the first MCU).
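To make the marker parsing concrete, here is a small Java sketch that walks a JPEG byte array, prints each marker segment, and stops at SOS, treating the 2-byte length as including itself (payload = length - 2). This is generic JPEG parsing, not anything Epson-specific.

public class JpegMarkerWalk {

    // Walk JPEG marker segments up to (and including) SOS (0xFFDA).
    // Assumes only length-bearing markers appear before SOS, which is the usual case.
    static void listMarkers(byte[] jpeg) {
        int pos = 2;                                  // skip SOI (FF D8), which has no length
        while (pos + 4 <= jpeg.length) {
            int marker = ((jpeg[pos] & 0xFF) << 8) | (jpeg[pos + 1] & 0xFF);
            int length = ((jpeg[pos + 2] & 0xFF) << 8) | (jpeg[pos + 3] & 0xFF);
            System.out.printf("marker %04X, segment length %d (payload %d bytes)%n",
                              marker, length, length - 2);
            if (marker == 0xFFDA) {                   // SOS: compressed data follows the segment
                System.out.println("compressed entropy-coded data starts at offset "
                                   + (pos + 2 + length));
                return;
            }
            pos += 2 + length;                        // 2 marker bytes + the whole segment
        }
    }

    public static void main(String[] args) {
        // SOI followed by the SOS segment from the answer above, then two compressed bytes.
        byte[] sample = {
            (byte) 0xFF, (byte) 0xD8,
            (byte) 0xFF, (byte) 0xDA, 0x00, 0x0C, 0x03, 0x01, 0x00, 0x02,
            0x11, 0x03, 0x11, 0x00, 0x3F, 0x00,
            (byte) 0xF9, (byte) 0xFE
        };
        listMarkers(sample);
    }
}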

Advice Required response from EMV card?

I am trying to communicate with a SAM which is apparently implemented according to the EMV specifications. The developer only refers me to the EMV books whenever I ask a question. After limping through the EMV card and terminal specifications, I finally managed to send the commands one after the other and get to the GENERATE AC command with CDOL1. My command looks like this (CDOL1):
80AE40001D0000000000010000000000000364000000000003640B300E001234567800
And here's the card's response:
802B08003280DBD8B5E81B4AF5065B0E038420000000000000000000000F000000000000000000000000000000
Now, am I reading it correctly? The ADVICE REQUIRED bit is set to 1, correct? If that is the case, what happens now? This SAM is supposed to work offline with only a PIN and no online connectivity requirements.
Your GENERATE AC command has P1 = 40, so the terminal is requesting a Transaction Certificate (an offline transaction).
The response shows the card replying in format 1: a primitive data object with tag 80.
The response contains:
1 - Cryptogram Information Data (1 byte)
08
2 - Application Transaction Counter (2 bytes)
0032
3 - Application Cryptogram (8 bytes)
80DBD8B5E81B4AF5
4 - Issuer Application Data (32 bytes)
065B0E038420000000000000000000000F000000000000000000000000000000
The CID byte indicates which type of cryptogram the card returned; here the value is 08, an AAC, meaning the transaction is declined. In 08 the two most significant bits are 00 (AAC) and bit 4 is the advice-required flag, so yes, the card is also requesting advice.
The CID reveals what kind of Application Cryptogram is returned, and it can optionally carry an advice message if the transaction is to be rejected.
For more about advice messages and how they are processed between card and terminal, see EMV Book 2 and Book 3 (6.3.7, Card Action Analysis).
That is what your command response is indicating. I hope it helps; if you have any other query, please share.
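Here is a minimal Java sketch that splits a format 1 GENERATE AC response (tag 80) into the fields listed above and checks the CID bits; the field offsets follow the breakdown in this answer, and the class and helper names are made up for the example.

import java.util.Arrays;

public class GenAcResponseParse {

    static byte[] hex(String s) {
        byte[] out = new byte[s.length() / 2];
        for (int i = 0; i < out.length; i++)
            out[i] = (byte) Integer.parseInt(s.substring(2 * i, 2 * i + 2), 16);
        return out;
    }

    static String hexStr(byte[] b) {
        StringBuilder sb = new StringBuilder();
        for (byte x : b) sb.append(String.format("%02X", x));
        return sb.toString();
    }

    public static void main(String[] args) {
        // Format 1 response: tag 80, length, then CID | ATC | AC | IAD.
        byte[] rsp = hex("802B08003280DBD8B5E81B4AF5"
                       + "065B0E038420000000000000000000000F000000000000000000000000000000");
        int len = rsp[1] & 0xFF;                       // 0x2B = 43 bytes of value
        byte[] value = Arrays.copyOfRange(rsp, 2, 2 + len);

        int cid = value[0] & 0xFF;
        byte[] atc = Arrays.copyOfRange(value, 1, 3);
        byte[] ac  = Arrays.copyOfRange(value, 3, 11);
        byte[] iad = Arrays.copyOfRange(value, 11, value.length);

        String[] types = { "AAC (declined)", "TC (approved offline)", "ARQC (go online)", "RFU" };
        System.out.println("Cryptogram type : " + types[(cid >> 6) & 0x03]);
        System.out.println("Advice required : " + (((cid & 0x08) != 0) ? "yes" : "no"));
        System.out.println("ATC             : " + hexStr(atc));
        System.out.println("AC              : " + hexStr(ac));
        System.out.println("IAD             : " + hexStr(iad));
    }
}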

generate AC cryptogram manually

I am trying to generate the AC manually. I have a tool that generates the AC, but I want to do it on my own to understand the algorithm.
My calculation works for a Discover card but fails for MasterCard. As I understand it, the data used to generate the AC depends on tag 8C (CDOL1), which we provide to the card with the GENERATE AC command, plus the AIP and ATC.
The AIP and ATC are accessed internally by the ICC.
The data used to generate the AC is:
the data part of the GENERATE AC command + the value of tag 82 + the value of tag 9F36 + 80 + optional 00 bytes to make the length a multiple of 8.
This is my logic; it may be that I am using the wrong data to calculate the AC, which is why I get a different result than my test tool.
Terminal Supplied Data
Amount, Authorised - 000000000201
Amount, Other - 000000000000
Terminal Country Code - 0826 - United Kingdom
Terminal Verification Results - 00 00 00 00 00
Transaction Currency Code - 0826 - Pound Sterling
Transaction Date - 15 04 28
Transaction Type - 00 - Goods and Services
Unpredictable Number - 30 90 1B 6A
Terminal Type - 23 - Attended, offline only. Operated by Merchant
Data Authentication Code - 00 00
ICC Dynamic Number - 1E AB C1 26 F8 54 99 76
CVM Results - 00 00 00
Gen AC Using CDOL1
80 AE 40 00 2B 0000000002010000000000000826000000000008261504280030901B6A2300001EABC126F8549976000000
This command returns tag 9F26 (the Application Cryptogram).
The data I used for the calculation is:
0000000002010000000000000826000000000008261504280030901B6A2300001EABC126F85499760000003800000180 [ data is multiple of 8]
Here 3800 is the AIP, 0001 is the ATC, and 80 is the padding (EMV padding method 2). This is my logic. Can anybody tell me where I should focus in order to generate the same AC as my tool?
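Putting the above into code may help pin down where the difference comes from. Here is a minimal Java sketch, under the assumption that the cryptogram is the MAC construction EMV Book 2 describes (ISO/IEC 9797-1 MAC Algorithm 3 with DES and padding method 2) computed over the CDOL1 data plus AIP and ATC. The 16-byte session key below is only a placeholder; deriving the real MasterCard session key is exactly where the schemes differ, as the answer below explains.

import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.util.Arrays;

public class AcSketch {

    static byte[] hex(String s) {
        byte[] out = new byte[s.length() / 2];
        for (int i = 0; i < out.length; i++)
            out[i] = (byte) Integer.parseInt(s.substring(2 * i, 2 * i + 2), 16);
        return out;
    }

    // ISO/IEC 9797-1 padding method 2: append 0x80, then 0x00 up to a multiple of 8 bytes.
    static byte[] pad(byte[] data) {
        byte[] out = Arrays.copyOf(data, (data.length / 8 + 1) * 8);
        out[data.length] = (byte) 0x80;
        return out;
    }

    // ISO/IEC 9797-1 MAC Algorithm 3 ("retail MAC"): single-DES CBC with the left key
    // half over all blocks, then DES-decrypt with the right half and DES-encrypt with
    // the left half. The result is an 8-byte MAC.
    static byte[] retailMac(byte[] key16, byte[] padded) throws Exception {
        SecretKeySpec kl = new SecretKeySpec(Arrays.copyOfRange(key16, 0, 8), "DES");
        SecretKeySpec kr = new SecretKeySpec(Arrays.copyOfRange(key16, 8, 16), "DES");
        Cipher des = Cipher.getInstance("DES/ECB/NoPadding");

        byte[] chain = new byte[8];
        for (int off = 0; off < padded.length; off += 8) {
            for (int i = 0; i < 8; i++) chain[i] ^= padded[off + i];
            des.init(Cipher.ENCRYPT_MODE, kl);
            chain = des.doFinal(chain);
        }
        des.init(Cipher.DECRYPT_MODE, kr);
        chain = des.doFinal(chain);
        des.init(Cipher.ENCRYPT_MODE, kl);
        return des.doFinal(chain);               // 8-byte Application Cryptogram candidate
    }

    public static void main(String[] args) throws Exception {
        // CDOL1 data from the command above, then AIP (3800) and ATC (0001).
        byte[] input = hex("0000000002010000000000000826000000000008261504280030901B6A"
                         + "2300001EABC126F8549976000000" + "3800" + "0001");
        byte[] sessionKey = hex("0123456789ABCDEFFEDCBA9876543210");   // placeholder, not a real key
        byte[] ac = retailMac(sessionKey, pad(input));
        StringBuilder sb = new StringBuilder();
        for (byte b : ac) sb.append(String.format("%02X", b));
        System.out.println("AC = " + sb);
    }
}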
MasterCard Application Cryptogram (AC) generation is more complicated than for other card schemes.
The card can use different ICC Session Key Derivation (SKD) methods:
MasterCard Proprietary SKD, which involves the Application Transaction Counter (ATC) and the Unpredictable Number (UN);
the EMV2000 method, which involves only the ATC - see EMV 4.0, Book 2;
the EMV Common Session Key (CSK) method, which involves only the ATC - see EMV 4.2, Book 2.
The data objects (DOs) can come with different sets of additional or modified values:
the Card Verification Results (CVR) can be 6 or 4 bytes;
the offline counters from the Issuer Application Data (IAD) can be included;
the last online ATC value can be included.
The method and data variant in use can be determined from the Cryptogram Version Number (CVN) and the Application Control bits. The CVN is a sub-field of the IAD, tag 9F10.
The detailed information is proprietary and available to MasterCard members.
To learn more, take a look at "M/Chip Card Application Cryptographic Algorithms" and the M/Chip Card Application references.

How to properly format a raw DNS query?

I'm creating a Lua library to help with sending and receiving DNS requests and am currently reading the DNS protocol RFC, but I'm not sure how to properly format the request. For instance, do I have to specify how long the message is? How do I do that?
I understand from my Wireshark inspection that I'm also supposed to include options afterwards. I also see a 0x00 in the response; does this mean that I simply have to zero-terminate the request name before adding in the values?
The section I'm specifically talking about is 4.1.3 of the RFC.
Some notes: I tested this with a personal server and got these values in the query section: 06 61 6c 69 73 73 61 04 69 6e 66 6f 00. The 00 in particular is what interests me; it's highlighted in Wireshark, which suggests it's significant. I assume it means that the values are null-terminated, and then the options for the type and class follow?
When section 4.1.3 refers to a "NAME", it's referring back to the definition in section 3.1, which specifies that a domain name consists of a sequence of labels, each of which consists of a length octet followed by that many octets. The final label is always the root zone, which has a zero-length name and thus consists only of a length octet containing zero. So yes, the whole name is terminated with a zero octet, but it's not "zero-terminated" in the usual C string sense.
Note also that only the lower six bits of each length octet carry the length; the uppermost two bits are flags.
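Here is a minimal Java sketch of the QNAME encoding described above: the dotted name becomes length-prefixed labels ending with the zero-length root label, followed by the 2-byte QTYPE and QCLASS. The name and type values are only examples.

import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class DnsQuestionEncode {

    // Encode a dotted domain name as DNS labels: each label is a length octet
    // (at most 63, since the top two bits are reserved for flags) followed by
    // its characters, and the sequence ends with a single zero octet (the root label).
    static byte[] encodeName(String name) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (String label : name.split("\\.")) {
            byte[] bytes = label.getBytes(StandardCharsets.US_ASCII);
            if (bytes.length > 63) throw new IllegalArgumentException("label too long");
            out.write(bytes.length);
            out.write(bytes, 0, bytes.length);
        }
        out.write(0);                              // root label terminates the name
        return out.toByteArray();
    }

    // Build a question entry: QNAME, then 2-byte QTYPE and 2-byte QCLASS (big-endian).
    static byte[] encodeQuestion(String name, int qtype, int qclass) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] qname = encodeName(name);
        out.write(qname, 0, qname.length);
        out.write(qtype >> 8);  out.write(qtype & 0xFF);
        out.write(qclass >> 8); out.write(qclass & 0xFF);
        return out.toByteArray();
    }

    public static void main(String[] args) {
        // Example from the question: "alissa.info" -> 06 61 6c 69 73 73 61 04 69 6e 66 6f 00
        byte[] q = encodeQuestion("alissa.info", 1, 1);   // QTYPE=A, QCLASS=IN
        StringBuilder sb = new StringBuilder();
        for (byte b : q) sb.append(String.format("%02x ", b));
        System.out.println(sb.toString().trim());
    }
}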
