MIFARE card memory space?

What is the net memory space remaining in a MIFARE Classic 1K card, considering that keys and access bits take 16 bytes per sector and that the unique ID (UID) and manufacturer data take 16 bytes per card?

MIFARE Classic 1K consists of 16 sectors. One sector consists of 4 blocks (sector trailer + 3 data blocks). Each block consists of 16 bytes.
This gives 16 Sectors * 4 Blocks * 16 Bytes = 1024 Bytes.
The actually usable data area depends on how you want to use the card:
You use only one key per sector (key A); you use the unused parts of the sector trailers for data storage; you don't use a MIFARE application directory (MAD):
The first block of the first sector is always reserved (UID/manufacturer data) and cannot be used to store user data.
6 bytes of each sector trailer are reserved for key A. 3 bytes of each sector trailer are reserved for the access conditions. The remaining 7 bytes of the sector trailer can be used to store user data.
Thus, you can store 1 Sector * (2 Blocks * 16 Bytes + 1 Block * 7 Bytes) + 15 Sectors * (3 Blocks * 16 Bytes + 1 Block * 7 Bytes) = 864 Bytes.
You use two keys per sector (key A and key B); you use the unused parts of the sector trailers for data storage; you don't use a MIFARE application directory (MAD):
12 bytes of each sector trailer are reserved for keys A and B. 3 bytes of each sector trailer are reserved for the access conditions. The remaining byte of the sector trailer can be used to store user data.
Thus, you can store 1 Sector * (2 Blocks * 16 Bytes + 1 Block * 1 Byte) + 15 Sectors * (3 Blocks * 16 Bytes + 1 Block * 1 Byte) = 768 Bytes.
You use two keys per sector (key A and key B); you don't use the unused parts of the sector trailers for data storage; you don't use a MIFARE application directory (MAD):
Thus, you can store 1 Sector * 2 Blocks * 16 Bytes + 15 Sectors * 3 Blocks * 16 Bytes = 752 Bytes.
You use two keys per sector (key A and key B); you use the unused parts of the sector trailers for data storage; you use a MIFARE application directory (MAD):
The data blocks and the general purpose byte (remaining byte in the sector trailer) of the first sector are reserved for the MAD.
The general purpose byte in the other sectors can be used.
Thus, you can store 15 Sectors * (3 Blocks * 16 Bytes + 1 Block * 1 Byte) = 735 Bytes.
You use two keys per sector (key A and key B); you use NXP's NDEF data mapping to transport an NDEF message:
The MAD is used to assign sectors to the NDEF application.
NDEF data can only be stored in the 3 data blocks of each NDEF sector.
The NDEF message is wrapped in an NDEF TLV structure (1 byte for the tag 0x03, three bytes to indicate a length of more than 254 bytes).
Thus, you can store an NDEF message of up to 15 Sectors * 3 Blocks * 16 Bytes - 4 Bytes = 716 Bytes. Such an NDEF message could have a maximum payload of 716 Bytes - 1 Byte - 1 Byte - 4 Bytes = 710 Bytes (when using an NDEF record with TNF unknown: 1 header byte, 1 type length byte, 4 payload length bytes).
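If you want to double-check these figures, here is a small Python sketch that reproduces the arithmetic above. It is plain arithmetic, not tied to any MIFARE library, and the capacity() function is made up purely for illustration:

# Sanity check of the MIFARE Classic 1K figures above.
# Assumptions: 16 sectors, 4 blocks per sector, 16 bytes per block;
# block 0 (UID/manufacturer data) is never available for user data.
BLOCK = 16        # bytes per block
DATA_BLOCKS = 3   # data blocks per sector

def capacity(keys=2, use_trailer=True, use_mad=False):
    spare = (16 - 6 * keys - 3) if use_trailer else 0  # free bytes per sector trailer
    sector0 = 0 if use_mad else 2 * BLOCK + spare      # sector 0 loses block 0 (or is reserved for the MAD)
    others = DATA_BLOCKS * BLOCK + spare               # sectors 1..15
    return sector0 + 15 * others

print(capacity(keys=1))                     # 864
print(capacity(keys=2))                     # 768
print(capacity(keys=2, use_trailer=False))  # 752
print(capacity(keys=2, use_mad=True))       # 735
print(15 * DATA_BLOCKS * BLOCK - 4)         # 716 (NDEF TLV overhead subtracted)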

Related

What is the internal format of .Xauthority file?

Well, the subject says it all.
I have searched a lot but unfortunately found nothing. Is there some document describing this format, or does the structure need to be extracted from the xauth source files?
Probably not exactly what you are looking for, but putting in an answer just for the formatting.
The .Xauthority file is an array of structures:
typedef struct xauth {
    unsigned short family;          /* protocol family, e.g. 256 = FamilyLocal */
    unsigned short address_length;
    char *address;                  /* host address */
    unsigned short number_length;
    char *number;                   /* display number, as a string */
    unsigned short name_length;
    char *name;                     /* authorization name, e.g. MIT-MAGIC-COOKIE-1 */
    unsigned short data_length;
    char *data;                     /* authorization data (the cookie itself) */
} Xauth;
You would probably still need to be able to decode each entry -- if nothing else by slogging through the source: Xauth.h
For example:
$ od -xc --endian=big .Xauthority | more
0000000 0100 0007 6d61 7869 6d75 7300 0130 0012
001 \0 \0 \a m a x i m u s \0 001 0 \0 022
0000020 4d49 542d 4d41 4749 432d 434f 4f4b 4945
M I T - M A G I C - C O O K I E
0000040 2d31 0010 c0ac 9e9c ee82 ef59 f406 b7f9
- 1 \0 020 300 254 236 234 356 202 357 Y 364 006 267 371
0000060 b745 254e 0100 0007 6d61 7869 6d75 7300
267 E % N 001 \0 \0 \a m a x i m u s \0
The first short is 0x0100, indicating the family.
The next short is 0x0007, the length of the address.
The next 7 bytes are the address: maximus.
The next short is 0x0001, the length of the display number.
The next byte is 0x30, ASCII '0', the display number.
The next short is 0x0012, decimal 18, the length of the name.
The next 18 bytes are the name: MIT-MAGIC-COOKIE-1.
The next short is 0x0010, decimal 16, the length of the data.
And the next 16 bytes are the data: 0xc0ac through 0x254e.
Then it starts over with the next entry.
Here are some documents for your reference.
Cookie-based access (the .Xauthority file) follows the Inter-Client Exchange (ICE) Protocol and is implemented in the Inter-Client Exchange Library; you will find more format details in the appendix sections.
For example, Appendix B describes the common MIT-MAGIC-COOKIE-1 authentication method.
The correct specification is in the documentation of the Xau library.
The .Xauthority file is a binary file consisting of a sequence of entries
in the following format:
2 bytes Family value (second byte is as in protocol HOST)
2 bytes address length (always MSB first)
A bytes host address (as in protocol HOST)
2 bytes display "number" length (always MSB first)
S bytes display "number" string
2 bytes name length (always MSB first)
N bytes authorization name string
2 bytes data length (always MSB first)
D bytes authorization data string
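To illustrate that layout, here is a minimal Python sketch that walks an .Xauthority file by hand, following the specification quoted above (a 2-byte big-endian length before each counted field). It is not based on libXau, and the function name read_xauth and the tuple layout are made up for this example:

# Minimal sketch: parse .Xauthority entries by hand.
import os
import struct

def read_xauth(path=os.path.expanduser("~/.Xauthority")):
    entries = []
    with open(path, "rb") as f:
        data = f.read()
    pos = 0
    while pos < len(data):
        family, = struct.unpack_from(">H", data, pos)   # 2-byte family, MSB first
        pos += 2
        fields = []
        for _ in range(4):                              # address, number, name, data
            length, = struct.unpack_from(">H", data, pos)
            pos += 2
            fields.append(data[pos:pos + length])
            pos += length
        address, number, name, auth_data = fields
        entries.append((family, address, number, name, auth_data.hex()))
    return entries

for entry in read_xauth():
    print(entry)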

Converting Base64 to Hex confusion

I was working on a problem for converting base64 to hex and the problem prompt said as an example:
3q2+7w== should produce deadbeef
But if I do that manually, using the base64 digit set ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/ I get:
3 110111
q 101010
2 110110
+ 111110
7 111011
w 110000
As a binary string:
110111 101010 110110 111110 111011 110000
grouped into fours:
1101 1110 1010 1101 1011 1110 1110 1111 0000
to hex
d e a d b e e f 0
So shouldn't it be deadbeef0 and not deadbeef? Or am I missing something here?
Base64 is meant to encode bytes (8 bits each).
Your base64 string has 6 characters plus 2 padding chars (=), so you could theoretically encode 6 * 6 bits = 36 bits, which would equal 9 4-bit hex digits. But in fact you must think in bytes, and then you only have 4 bytes (32 bits) of significant information. The remaining 4 bits (the extra '0') must be ignored.
You can calculate the number of insignificant bits as:
y : insignificant bits
x : number of base64 characters (without padding)
y = (x*6) mod 8
So in your case:
y = (6*6) mod 8 = 4
So you have 4 insignificant bits at the end that you need to ignore.
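You can confirm both the decoded value and the bit count with Python's standard base64 module (just one convenient way to check):

# Decode the example and show that only 4 whole bytes come out;
# the 4 extra bits from the manual 6-bit grouping are discarded.
import base64

raw = base64.b64decode("3q2+7w==")
print(raw.hex())         # deadbeef
print(len(raw) * 8)      # 32 significant bits

x = 6                    # base64 characters without padding
print((x * 6) % 8)       # 4 insignificant bits, as per the formula above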

Why do I get an `Incorrect Padding` Error while trying to decode my base32 string?

I get an Incorrect padding error while trying to decode a BASE32 string in python using the base64.b32decode() function. I think I have my padding correct. Where have I gone wrong?
import base64
my_string=b'SOMESTRING2345'
print(my_string)
print("length : "+str(len(my_string)))
print("length % 8 : "+str(len(my_string)%8))
p_my_string = my_string+b'='*(8-(len(my_string)%8))
print("\nPadded:\n"+str(p_my_string))
print("length : "+str(len(p_my_string)))
b32d = base64.b32decode(p_my_string)
print("\nB32 decode : " + str(b32d))
print("length : " + str(len(b32d)))
Running this code gets me
b'SOMESTRING2345'
length : 14
length % 8 : 6
Padded:
b'SOMESTRING2345=='
length : 16
---------------------------------------------------------------------------
Error Traceback (most recent call last)
<ipython-input-2-9fe7cf88581a> in <module>()
10 print("length : "+str(len(p_my_string)))
11
---> 12 b32d = base64.b32decode(p_my_string)
13 print("\nB32 decode : " + str(b32d))
14 print("length : " + str(len(b32d)))
/opt/anaconda3/lib/python3.6/base64.py in b32decode(s, casefold, map01)
244 decoded[-5:] = last[:-4]
245 else:
--> 246 raise binascii.Error('Incorrect padding')
247 return bytes(decoded)
248
Error: Incorrect padding
However, if I change my_string to b'SOMESTRING23456', I get the code working perfectly with the output -
b'SOMESTRING23456'
length : 15
length % 8 : 7
Padded:
b'SOMESTRING23456='
length : 16
B32 decode : b'\x93\x98IN(i\xb5\xbew'
length : 9
There are no legal 14-character base32 strings. Any remainder beyond a multiple of 8 can only be 2, 4, 5, or 7 characters long, so the padding must always be 6, 4, 3 or 1 = characters; any other length is invalid. Since a remainder of 6 characters is not a legal base32 encoding, the b32decode() function can't do anything but reject it, no matter how many = padding characters you add.
A base32 character encodes 5 bits and a byte is always 8 bits long. That means that you don’t need padding for inputs of a multiple of 5 bytes (5 times 8 == 40 bits, which can be encoded cleanly in 8 characters).
Any remainder over a multiple of 5 bytes is encoded as follows:
1 byte = 8 bits: 2 characters (10 bits)
2 bytes = 16 bits: 4 characters (20 bits)
3 bytes = 24 bits: 5 characters (25 bits)
4 bytes = 32 bits: 7 characters (35 bits)
14 characters would hold 70 bits, which is 8 bytes (64 bits) with 6 bits to spare, so character 14 would carry no meaning!
So for any base32 string with a remainder of 1, 3, or 6 characters you will always get an Incorrect padding exception, regardless of how many = characters you add.
Note that the last character in a remainder encodes a limited number of bits, so it is also going to fall in a specific range. For 2 characters (encoding 1 byte), the second character only encodes 3 bits, with the last 2 bits left at 0, so only A, E, I, M, Q, U, Y and 4 are possible (every 4th character of the base32 alphabet, A-Z plus 2-7). With 4 characters the last character represents just one bit, so only A and Q are legal. 5 characters leaves 1 redundant bit, so every second character can be used (A, C, E, etc.), and for 7 characters and 3 redundant bits, every 8th character (A, I, Q, Y).
A decoder can choose to accept any base32 character at that last position and simply keep only the bits that are actually needed, so for 2 characters a B or 7 or any of the other invalid characters can still lead to a successful decode. But then there is no difference between AA, AB, AC and AD: all 4 will only use the top 3 bits of the second character, and all 4 sequences decode to the hex value 0x00.
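If you want to see these rules in action, a short Python sketch (using nothing beyond the standard base64 module) shows which remainder lengths b32decode() accepts:

# Try every remainder length from 1 to 8 characters, padded up to a
# multiple of 8; only remainders of 2, 4, 5 or 7 (and full groups of 8) decode.
import base64
import binascii

for n in range(1, 9):
    padded = "A" * n + "=" * (-n % 8)
    try:
        decoded = base64.b32decode(padded)
        print(n, "chars ->", len(decoded), "byte(s)")
    except binascii.Error as exc:
        print(n, "chars ->", exc)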

mDDR chips used in Beagleboard xM

I'm analysing the X-Loader settings for the POP mDDR on the Beagleboard xM.
The amount of mDDR POP memory in the BB xM is 512MB (according to the Manual).
More precisely the Micron variant: 256MB on CS0 + 256MB on CS1 = 512MB total.
The bus width is 32 bits, this can be verified in the SDRC_MCFG_p register settings in the X-Loader.
The type of memory used is the MT46H128M32L2KQ-5 as mentioned in this group:
https://groups.google.com/forum/#!topic/beagleboard/vgrq2bOxXrE
Reading the data sheet of that memory, the 32-bit configuration with the maximum capacity is 16 Meg x 32 x 4 = 64 Meg x 32.
So 64 MB is not 256 MB; 128 MB would be feasible, but only with a 16-bit bus width, and even then we are still not at 256 MB.
The guy in the group mentioned above says that the memory is 4Gb, but the data sheet says that it is 2Gb.
My question:
How can 512MB be achieved by using 2 memory chips of the above type and 32 bit bus width?
Thanks in advance for your help.
Martin
According to the datasheet, the MT46H128M32L2KQ-5 has the following configuration:
MT46H128M32L2 – 16 Meg x 32 x 4 Banks x 2
16 Meg x 32 x 4 Banks x 2 = 4096 Meg (bits, not bytes)
4096 Meg (bits) / 8 = 512 MB (Megabytes)
More from the datasheet:
The 2Gb Mobile low-power DDR SDRAM is a high-speed CMOS, dynamic
random-access memory containing 2,147,483,648 bits.
Each of the x32’s 536,870,912-bit banks is organized as 16,384 rows by 1024
columns by 32 bits. (p. 8)
So, if you multiply the number of rows by the number of columns by the number of bits (as specified in the datasheet), you get the size of a bank in bits. Bank size = 16384 x 1024 x 32 = 16 Meg x 32 = 536870912 bits.
Next, you need to multiply the bank size (in bits) by the number of banks in the chip: chip size = 536870912 x 4 = 2147483648 bits.
In order to get the result in bytes, you have to divide it by 8:
chip size (bytes) = 2147483648 bits / 8 = 268435456
In order to get the result in megabytes, you have to divide it by 1024 x 1024:
chip size = 268435456 / 1024 / 1024 = 256 MB
This is a dual LPDDR chip internally organized as 2 x 256 MB (it has two chip selects, CS0# and CS1#), as specified in the datasheet. The single package contains two memory chips inside, 256 MB each. For the BeagleBoard this chip must be configured as two memories of 256 MB each in order to get 512 MB. So, you have to set up CS0 as 256 MB and CS1 as 256 MB.
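The same arithmetic, spelled out as a short Python check using only the figures quoted from the datasheet:

# MT46H128M32L2: 16,384 rows x 1,024 columns x 32 bits per bank,
# 4 banks per 2Gb device, two devices in the package (one per chip select).
rows, cols, width, banks, devices = 16384, 1024, 32, 4, 2

bank_bits = rows * cols * width      # 536870912 bits
chip_bits = bank_bits * banks        # 2147483648 bits = 2 Gbit
chip_bytes = chip_bits // 8          # 268435456 bytes
chip_mb = chip_bytes // (1024 * 1024)

print(chip_mb)            # 256 MB behind each chip select
print(chip_mb * devices)  # 512 MB total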

Finding File With Fixed File Size (>0) in Unix/Linux

I have a list of files that looks like this
4 -rw-r--r-- 1 neversaint hgc0746 53 May 1 10:37 SRX016372-SRR037477.est_count
4 -rw-r--r-- 1 neversaint hgc0746 53 May 1 10:34 SRX016372-SRR037478.est_count
4 -rw-r--r-- 1 neversaint hgc0746 53 May 1 10:41 SRX016372-SRR037479.est_count
0 -rw-r--r-- 1 neversaint hgc0746 0 Apr 27 11:16 SRX003838-SRR015096.est_count
0 -rw-r--r-- 1 neversaint hgc0746 0 Apr 27 11:32 SRX004765-SRR016565.est_count
What I want to do is to find files that have a size of exactly 53. But why does this command fail?
$ find . -name "*.est_count" -size 53 -print
It works well, though, if I just want to find files of size 0 with this command:
$ find . -name "*.est_count" -size 0 -print
You need to suffix the size 53 with 'c'. As per find's manpage:
-size n[cwbkMG]
    File uses n units of space. The following suffixes can be used:
    `b'  for 512-byte blocks (this is the default if no suffix is used)
    `c'  for bytes
    `w'  for two-byte words
    `k'  for Kilobytes (units of 1024 bytes)
    `M'  for Megabytes (units of 1048576 bytes)
    `G'  for Gigabytes (units of 1073741824 bytes)
    The size does not count indirect blocks, but it does count blocks
    in sparse files that are not actually allocated. Bear in mind that
    the `%k' and `%b' format specifiers of -printf handle sparse files
    differently. The `b' suffix always denotes 512-byte blocks and
    never 1 Kilobyte blocks, which is different to the behaviour of -ls.
-size n[ckMGTP]
    True if the file's size, rounded up, in 512-byte blocks is n. If n
    is followed by a c, then the primary is true if the file's size is
    n bytes (characters). Similarly if n is followed by a scale
    indicator then the file's size is compared to n scaled as:
    k    kilobytes (1024 bytes)
    M    megabytes (1024 kilobytes)
    G    gigabytes (1024 megabytes)
    T    terabytes (1024 gigabytes)
    P    petabytes (1024 terabytes)
You need to use -size 53c.
This is what I get on Mac OS 10.5:
> man find
...
-size n[c]
True if the file's size, rounded up, in 512-byte blocks is n. If n
is followed by a c, then the primary is true if the file's size is n
bytes (characters).
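If you would rather do the same check from a script than from the shell, a Python equivalent (standard library only; nothing here is specific to .est_count files) might look like this:

# Print every *.est_count file below the current directory whose size
# is exactly 53 bytes -- the script counterpart of
#   find . -name "*.est_count" -size 53c -print
from pathlib import Path

for path in Path(".").rglob("*.est_count"):
    if path.is_file() and path.stat().st_size == 53:
        print(path)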
