How to write a TLV[TAG_MAX_ATTEMPTS]? - apdu

I am using a smartcard with a JCOP 4.7 Java Card applet on it. I want to create an authentication object, for example a UserID. The writeUserID command has the following arguments:
CLA, INS, P1, P2, Lc, TLV[TAG_POLICY], TLV[TAG_MAX_ATTEMPTS], TLV[TAG_1], TLV[TAG_2].
I know a TLV consists of TAG, LENGTH and VALUE fields. My question is: how do I form the bytes for TLV[TAG_MAX_ATTEMPTS] if I want to set the maximum number of attempts to 3 for the UserID that I am creating?
[TAG_MAX_ATTEMPTS] has the value 0x12, and the applet description document says it takes a 2-byte maximum number of attempts. In this case, what would be the bytes for TLV[TAG_MAX_ATTEMPTS] in my APDU?
For example, I know that TLV[TAG_1] is a 4-byte object identifier, so its corresponding bytes are "41047FFF0001", where "41" is the tag value for [TAG_1], "04" is the length, and "7FFF0001" is the 4-byte object identifier.
As per my understanding, I am sending "12020003", where "12" is the tag value of [TAG_MAX_ATTEMPTS], "02" is the length, and "0003" is the 2-byte value.
When I pass this value in my APDU, I get the status word "6985", which means "conditions not satisfied".
Can someone tell me where I am going wrong?
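For reference, here is a minimal Python sketch of the short-form TLV encoding described above; it reproduces the "12020003" bytes from the question (the tlv helper is hypothetical and assumes single-byte tags and lengths):
def tlv(tag, value):
    # Short-form TLV: one tag byte, one length byte, then the value bytes.
    return bytes([tag, len(value)]) + value

TAG_MAX_ATTEMPTS = 0x12                          # tag from the applet description document
max_attempts = tlv(TAG_MAX_ATTEMPTS, (3).to_bytes(2, 'big'))
print(max_attempts.hex().upper())                # prints 12020003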

Related

Generating Alternative Initial Value while wrapping keys with AES

I am following the instructions at https://datatracker.ietf.org/doc/html/rfc5649#section-3 and I have gotten to the point where I need to generate the LSB(32, A) part of the Alternative Initial Value (AIV). I am using NodeJS with buffers to implement the algorithm. My understanding is that 32 bits corresponds to buffer.length == 4, in other words a buffer of length 4 is the 32 bits referenced in the article. I have converted the key to a buffer and padded it with 8 - (length % 8) zero octets, as indicated in the article. The thing I have not been able to figure out is getting the value of the 32-bit MLI. How do I get the MLI? I just know it is the Message Length Indicator, but that is all I know about it.
Example:
const key = Buffer.from('base64 key', 'base64');
const kek = Buffer.from('A65959A6', 'hex');
Here I have only MSB(32, A) but not LSB(32, A). How do I get that value, and is there anything I am doing wrong? Please help; I have already spent a lot of time trying to figure this out.
Scenario: let's say my key is 75 bytes long, so I have to pad it with 5 octets for its length to be a multiple of 8. How do I generate LSB(32, A) in this case?
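For what it's worth, here is a sketch in Python (rather than Node.js) of how the AIV can be assembled per RFC 5649 section 3: LSB(32, A) is the MLI, i.e. the length of the unpadded key in octets encoded as a 32-bit big-endian integer.
key = bytes(75)                          # placeholder: a 75-byte key
mli = len(key).to_bytes(4, 'big')        # LSB(32, A) = MLI = 0x0000004B for 75 octets
aiv = bytes.fromhex('A65959A6') + mli    # AIV = MSB(32, A) || LSB(32, A)
pad_len = (8 - len(key) % 8) % 8         # 5 zero octets for a 75-byte key
padded_key = key + bytes(pad_len)
print(aiv.hex().upper())                 # A65959A60000004B
In Node.js terms, the same 4-byte MLI can be produced with Buffer.alloc(4) followed by writeUInt32BE(key.length, 0).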

Hexadecimal string with float number to float

I am working in Python 3.6.
I receive the string '3F8E353F' over serial communication. It represents the float value 1.111. How can I convert this string to a float?
Thank you
Ah yes. Since this is 32 bits, parse it into an int first, pack that into 4 bytes, then unpack those bytes as a float:
import struct
x = '3F8E353F'
struct.unpack('f', struct.pack('i', int(x, 16)))
On my system this gives:
>>> x='3F8E353F'
>>> struct.unpack('f',struct.pack('i',int(x,16)))
(1.1109999418258667,)
>>>
Very close to the expected value. However, this can give 'backwards' results depending on the endianness (byte order) of your system. Some systems store their bytes least significant byte first, others most significant byte first. See the struct documentation for the byte-order characters ('<', '>', '!', '=') that let you specify this explicitly in the format string.
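As a small illustration (assuming the hex string is the big-endian IEEE 754 encoding of the value), the byte order can be made explicit instead of relying on the platform default:
import struct
raw = bytes.fromhex('3F8E353F')
struct.unpack('>f', raw)[0]   # big-endian:    1.1109999418258667
struct.unpack('<f', raw)[0]   # little-endian: a different value entirely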
I used struct.unpack('f', struct.pack('i', int(x, 16))) to convert a hex value to a float, but for negative values I got the error below:
struct.error: argument out of range
To resolve this I used the code below, which converts the hex value 0xc395aa3d to the float value -299.33 (approximately). Packing with the unsigned 'I' format instead of the signed 'i' makes it work for both positive and negative values.
import struct
x = 0xc395aa3d
struct.unpack('f', struct.pack('I', x))
Another way is to use bytes.fromhex()
import struct
hexstring = '3F8E353F'
struct.unpack('!f', bytes.fromhex(hexstring))[0]
# answer: 1.1109999418258667
Note: The form '!' is available for those poor souls who claim they can’t remember whether network byte order is big-endian or little-endian (from struct docs).

VBA Byte Array to String

Apologies if this question has been previously answered; I was unable to find an explanation. I've created a script in VBScript to encrypt a user input and match it against an already encrypted password. I ran into some issues along the way and managed to reduce them to the following.
I have a byte array (1 To 2) with the values (16, 1). I am then assigning the array to a string as per below:
Dim bytArr(1 To 2) As Byte
Dim output As String
bytArr(1) = 16
bytArr(2) = 1
output = bytArr
Debug.Print output
The output I get is Ð (Eth) ASCII Value 208. Could someone please explain how the byte array is converted to this character?
In VBA, byte arrays are special because, unlike arrays of other datatypes, a string can be directly assigned to a byte array (and vice versa). VBA strings are Unicode (UTF-16) strings, so when one assigns a string to a byte array it stores two bytes for each character;
although the glyphs look alike, they are different characters; see charmap:
Ð is the Unicode character 'LATIN CAPITAL LETTER ETH' (U+00D0), shown in charmap for the DOS Western European character set (CP850) at 0xD1, i.e. decimal 209;
Đ is the Unicode character 'LATIN CAPITAL LETTER D WITH STROKE' (U+0110), shown in charmap for the Windows Central European character set (Windows-1250) at 0xD0, i.e. decimal 208.
Putting the above statements together, and keeping in mind the endianness (byte order) of the computer architecture: Intel x86 processors are little-endian, so the byte array (0x10, 0x01) is the same as the Unicode string containing U+0110.
The two characters then get conflated in a flagrant case of mojibake. For proof, use the Asc and AscW functions as follows: Debug.Print output, Asc(output), AscW(output), with different console code pages, e.g. under chcp 852 and chcp 1250.
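For illustration only, the same interpretation can be reproduced outside VBA; this Python sketch decodes the two bytes as UTF-16LE and shows where the two look-alike glyphs sit in the code pages mentioned above:
raw = bytes([0x10, 0x01])        # bytArr(1) = 16, bytArr(2) = 1
s = raw.decode('utf-16-le')      # VBA strings are UTF-16LE, so this is 'Đ' (U+0110)
print(s, hex(ord(s)))            # Đ 0x110
print(s.encode('cp1250'))        # b'\xd0' -- Đ is byte 0xD0 in Windows-1250
print(b'\xd1'.decode('cp850'))   # Ð      -- byte 0xD1 is Ð in DOS code page 850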

what does it mean to convert octet strings to nonnegative integers in RSA?

I am trying to implement RSA PKCS #1 based on this spec
http://www.emc.com/emc-plus/rsa-labs/pkcs/files/h11300-wp-pkcs-1v2-2-rsa-cryptography-standard.pdf
However, I am not sure what the purpose of OS2IP on page 9 is. Assume my message is the integer 258 and my private key is e. Also assume we don't do any other formatting besides OS2IP.
So I will convert 258 into an octet string and store it in char buf[2] = {0x02, 0x01}. Now, before I compute the exponentiation 258^e, I need to call OS2IP to reverse the byte order and save it to buf_new[2] = {0x01, 0x02}, since 0x0102 = 258.
However, if I initially stored 258 as buf[2] = {0x01, 0x02}, then there is no need to call OS2IP, correct? Or is it the convention that I have to store it as {0x02, 0x01}?
OS2IP interprets an octet string as the big-endian representation of a non-negative integer (its inverse, I2OSP, encodes a non-negative integer into big-endian octets).
However, if I initially stored 258 as buf[2] = {0x01, 0x02}. Then there is no need to call OS2IP, correct?
That is correct. {0x01, 0x02} is already the big-endian encoding of 258, although depending on the length you chose (if it is not 2) you might be missing the leading zeros.
or is this the convention that I have to save it into {0x02, 0x01}?
I don't understand your question :/
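To make the two primitives concrete, here is a minimal Python sketch of OS2IP and its inverse I2OSP as defined in PKCS #1 (the function names follow the spec; the examples are illustrative):
def os2ip(octets):
    # Octet String to Integer Primitive: big-endian octets -> non-negative integer.
    return int.from_bytes(octets, 'big')

def i2osp(x, length):
    # Integer to Octet String Primitive: pads with leading zero octets to 'length' bytes.
    return x.to_bytes(length, 'big')

assert os2ip(bytes([0x01, 0x02])) == 258
assert i2osp(258, 2) == bytes([0x01, 0x02])
assert i2osp(258, 4) == bytes([0x00, 0x00, 0x01, 0x02])   # leading zeros when length > 2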

Check whether a specific bit is set in Python 3

I have two bytes:
b'T'
and
b'\x40' (only bit #6 is set)
I need to perform a check on the first byte to see if bit #6 is set. For example, on [A-Za-9] it would be set, but on some characters it would not be set.
if (b'T' & b'\x40') != 0:
    print("set")
does not work ...
Byte values, when indexed, give integer values. Use that to your advantage:
value = b'T'
if value[0] & 0x40:
    print('set')
You cannot use the & operator on bytes, but it works just fine on integers.
See the documentation on the bytes type:
While bytes literals and representations are based on ASCII text, bytes objects actually behave like immutable sequences of integers, with each value in the sequence restricted such that 0 <= x < 256[.]
…
Since bytes objects are sequences of integers (akin to a tuple), for a bytes object b, b[0] will be an integer[.]
Note that non-zero numbers always test as true in a boolean context, so there is no need to explicitly test for != 0 here either.
You are looking for the ord built-in function, which converts single-character strings (byte or unicode) to the corresponding numeric codepoint.
if ord(b'T') & 0x40:
print("set")
