I have 8 bytes of hex values in a cell, as below, in little-endian format:
00 00 08 04 22 00 40 00
With TEXTSPLIT I can get the individual hex values into an array:
= TEXTSPLIT(A1, , " ")
00
00
08
04
22
00
40
00
Is there an Excel formula I can use to grab the values from the array in reverse order, to get the output below?
00
40
00
22
04
08
00
00
I don't want to use LEFT, MID, or RIGHT extractors, as I want to create a generic formula that works on all data types.
For this very specific case you could use =TRIM(CONCAT(MID(" "&A1,SEQUENCE(8,,22,-3),3))) but to be more generic, try:
Formula in A2:
=TEXTJOIN(" ",,SORTBY(TEXTSPLIT(A1,," "),ROW(1:8),-1))
I suppose you can make this even more generic for any string you split on space:
=LET(r,TEXTSPLIT(A1,," "),TEXTJOIN(" ",,SORTBY(r,SEQUENCE(ROWS(r)),-1)))
Note this is almost an exact copy of this question, where you could also use the INDEX() technique shown by @ScottCraner:
=MID(SUBSTITUTE(A1, " ", ""), SEQUENCE(1, LEN(SUBSTITUTE(A1, " ", ""))/2, LEN(SUBSTITUTE(A1, " ", ""))-1, -2), 2)
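Outside Excel, the same operation is just splitting on whitespace and reversing the resulting array. A minimal sketch in Rust (the helper name is mine, for illustration only):

```rust
// Reverse the order of space-separated hex byte groups,
// e.g. little-endian "00 00 08 04 22 00 40 00"
// becomes big-endian "00 40 00 22 04 08 00 00".
fn reverse_byte_groups(s: &str) -> String {
    s.split_whitespace().rev().collect::<Vec<_>>().join(" ")
}

fn main() {
    let le = "00 00 08 04 22 00 40 00";
    println!("{}", reverse_byte_groups(le));
    // -> 00 40 00 22 04 08 00 00
}
```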
I'm new to Rust and I've been trying to understand how it stores enums in memory.
I already know Rust implements tagged unions to represent enums.
From what I've understood, this is what I should see in memory:
An incremental tag (1 byte)
As many bytes as the largest field
Some padding bytes (if needed) for alignment purposes
Consider the following piece of code:
enum MyEnum {
    A(u8, u8),
    B(u16),
    C(bool),
    D,
}
fn main() {
    let v = vec![
        MyEnum::D,
        MyEnum::A(3, 2),
        MyEnum::B(10),
        MyEnum::C(true),
    ];
}
This is what I see inside actual memory:
03 00 00 00
00 03 02 FF
01 F0 0A 00
02 01 00 00
My explanation:
First row => TAG = 03 && VALUE = 3 null bytes
Second row => TAG = 00 && VALUE = (03, 02) && PADDING = 1 byte (I guess padding doesn't necessarily have to be a NULL byte)
Third row => TAG = 01 && PADDING = 1 byte && VALUE = 0A 00 (little-endian memory)
Fourth row => TAG = 02 && VALUE = 01 (true) && PADDING = 2 bytes
What I don't understand:
I don't quite understand the third row's layout: why does it have a padding byte right after the tag? Shouldn't it be at the end?
It becomes even worse if I add a 32-bit field to the enum.
Second example with 32-bit field:
enum MyEnum {
    A(u8, u8),
    B(u16),
    C(bool),
    D,
    E(u32),
}
fn main() {
    let v = vec![
        MyEnum::D,
        MyEnum::A(3, 2),
        MyEnum::B(10),
        MyEnum::C(true),
        MyEnum::E(12949),
    ];
}
This is what I see inside actual memory:
03 00 00 00 00 00 00 00
00 03 02 00 00 00 00 00
01 FF 0A 00 FF FF FF FF
02 01 7F FF FF 7F 00 00
04 00 00 00 95 32 00 00
What I don't understand:
Why doesn't the 32-bit value (0x3295 = 12949) start from the end like the 16-bit value in the previous example? Why is there padding right after the tag (1 byte) and right after the number (2 bytes)?
In your last example, the value 12949 actually sits in the last four bytes: 95 32 00 00 in little-endian (0x95 + 0x32 × 256 = 12949).
This is a 4-byte word, so it is aligned to an address that is a multiple of 4.
The value 10 is stored in a 2-byte word, so it is aligned to an address that is a multiple of 2.
If it were placed just after the tag, this field's alignment would not be 2.
The whole enum is probably aligned to a large enough power of 2 to guarantee the alignment of the various fields it contains, just by adding the required padding.
That's why the enum grows from 4 bytes to 8 bytes when you add the last field.
If the whole enum is already aligned to a multiple of 4, and the first byte is used by the discriminant, then we need to skip 3 bytes in order to find the next multiple of 4.
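You can check the sizes and alignments described above directly. A minimal sketch, with the caveat that the default repr(Rust) layout is unspecified; these exact numbers are simply what current compilers produce:

```rust
use std::mem::{align_of, size_of};

// The enum from the first example: largest field is u16.
enum Small {
    A(u8, u8),
    B(u16),
    C(bool),
    D,
}

// The enum from the second example: adding E(u32) raises the alignment to 4.
enum WithU32 {
    A(u8, u8),
    B(u16),
    C(bool),
    D,
    E(u32),
}

fn main() {
    // Tag (1 byte) + padding (1 byte) + u16 payload = 4 bytes, align 2.
    println!("Small: size {}, align {}", size_of::<Small>(), align_of::<Small>());
    // Tag (1 byte) + padding (3 bytes) + u32 payload = 8 bytes, align 4.
    println!("WithU32: size {}, align {}", size_of::<WithU32>(), align_of::<WithU32>());
}
```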
I'm trying to convert string data and write it into a file as bytes.
I have already tried something, but instead of seeing 00 in the hexdump, I'm seeing 0x30 in the file, which is the ASCII code for the character '0'.
Here is what I wrote:
local data = "000000010000000100000004000000080000000100000000"
for i = 1, #data, 2 do
    file:write(tonumber(data:sub(i, i+1)))
end
io.close(file)
When I do hexdump of the file I'm getting this:
0000000 30 30 30 31 30 30 30 31 30 30 30 34 30 30 30 38
0000010 30 30 30 31 30 30 30 30
0000018
Expected is:
0000000 00 00 00 01 00 00 00 01 00 00 00 04 00 00 00 08
0000010 00 00 00 01 00 00 00 00
0000018
You want to use string.char in one way:
local data = "000000010000000100000004000000080000000100000000"
for i = 1, #data, 2 do
    file:write(string.char(tonumber(data:sub(i, i+1), 16)))
end
io.close(file)
or another:
local data = string.char(0,0,0,1,0,0,0,1,0,0,0,4,0,0,0,8,0,0,0,1,0,0,0,0)
file:write(data)
io.close(file)
Note that strings in Lua may contain any bytes you want including null bytes. See Values and Types.
Hint: Use string.char to convert numbers to bytes:
file:write(string.char(tonumber(data:sub(i,i+1))))
If the string contains hexadecimal, use tonumber(..., 16).
I know there is already an existing topic with the same name, Convert a hex string to base64 in an excel function. However, I am not receiving the result I expect from the VBA provided in that thread. This website gives the correct answer: https://cryptii.com/pipes/base64-to-hex
It looks like there are 12 unnecessary characters in the result produced by Hex2Base64. How can I make it work like the website?
I know there is a simple workaround, putting something like =LEFT(cell;20) into another cell, but how is it possible to fix the current VBA?
So I have:
HEX: 01 00 00 00 05 00 00 00 00 E3 07 04 00 0F 00
On the website I receive BASE64: AQAAAAUAAAAA4wcEAA8A
By solution provided, in already existing thread, I receive BASE64: AQAAAAUAAAAA4wcEAA8AAAAAAAAAAA==
Function Hex2Base64(ByVal strHex)
    Dim arrBytes
    If Len(strHex) Mod 2 <> 0 Then
        strHex = Left(strHex, Len(strHex) - 1) & "0" & Right(strHex, 1)
    End If
    With CreateObject("Microsoft.XMLDOM").createElement("objNode")
        .DataType = "bin.hex"
        .Text = strHex
        arrBytes = .nodeTypedValue
    End With
    With CreateObject("Microsoft.XMLDOM").createElement("objNode")
        .DataType = "bin.base64"
        .nodeTypedValue = arrBytes
        Hex2Base64 = .Text
    End With
End Function
The problem is the input parameter: a hex string with spaces. Removing the spaces gives the expected result:
Debug.Print Hex2Base64(Replace("01 00 00 00 05 00 00 00 00 E3 07 04 00 0F 00", " ",""))
I'm working to get useful data (such as the PAN, expiry date...) from a VISA credit card using a list of AIDs, and I got stuck.
I have been able to access all the data manually, following this tutorial: http://www.openscdp.org/scripts/tutorial/emv/reademv.html
>>00 A4 04 00 07 A0 00 00 00 03 10 10 00
In ASCII:
<<o<EM>„<BEL> <0><0><0><ETX><DLE><DLE>¥<SO>P<EOT>VISA¿<FF><ENQ>ŸM<STX><VT><LF><0>
In Hexadecimal:
<<6F 19 84 07 A0 00 00 00 03 10 10 A5 0E 50 04 56 49 53 41 BF 0C 05 9F 4D 02 0B 0A 90 00
After that I used:
>>33 00 B2 01 0C 00 //sfi1, rec1
...
...
>>33 00 B2 10 FC 00 //sfi31, rec16
I continued with the tutorial and learned that the proper way to obtain the data was using GPO (Get Processing Options) command. And tried that next:
>>80 A8 00 00 0D 83 0B 00 00 00 00 00 00 00 00 00 00 00 00 // pdol = 83 0B 00 00 00 00 00 00 00 00 00 00 00, which is supposed to be the correct one for VISA.
<< 69 85
So the condition of use is not satisfied.
>> 80 A8 00 00 02 83 00 00 // pdol = 83 00, which should work with every non-VISA card
<< 80 0E 3C 00 08 01 01 00 10 01 04 00 18 01 03 01 90 00
This response looks correct to me, as it starts with 80 and ends with 90 00, but I am not able to identify the AFL, which I think would make it possible to determine the PAN, expiry date... Can somebody help me?
The FCI that you received in response to the select command (00 A4 0400 07 A0000000031010 00) decodes to
6F 19 (File Control Information (FCI) Template)
84 07 (Dedicated File (DF) Name)
A0000000031010
A5 0E (File Control Information (FCI) Proprietary Template)
50 04 (Application Label)
56495341 ("VISA")
BF0C 05 (File Control Information (FCI) Issuer Discretionary Data)
9F4D 02 (Log Entry)
0B0A (SFI = 11; # of records = 10)
This FCI does not include a PDOL (processing options data list). Consequently, you need to assume a default value for the PDOL (which is an empty list for your card type), so the PDOL-related data field in the GET PROCESSING OPTIONS command must be empty:
83 00
Where 0x83 is the tag for PDOL-related data and 0x00 is a length of zero bytes.
Thus, the correct GPO command is (as you already found out):
80 A8 0000 02 8300 00
You got the response
800E3C00080101001001040018010301 9000
This decodes to
80 0E (Response Message Template Format 1)
3C00 (Application Interchange Profile)
08010100 10010400 18010301 (Application File Locator)
Consequently, the Application File Locator contains the following three entries:
08010100: SFI = 1, first record = 1, last record = 1, records involved in offline data authentication = 0
10010400: SFI = 2, first record = 1, last record = 4, records involved in offline data authentication = 0
18010301: SFI = 3, first record = 1, last record = 3, records involved in offline data authentication = 1
Consequently, you can read those records with the READ RECORD commands:
00 B2 010C 00
00 B2 0114 00
00 B2 0214 00
00 B2 0314 00
00 B2 0414 00
00 B2 011C 00
00 B2 021C 00
00 B2 031C 00
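The mapping from AFL entries to READ RECORD commands can be sketched in code (a cross-language illustration; the function name is mine). Each 4-byte AFL entry carries the SFI in the top five bits of its first byte, and READ RECORD encodes P2 as (SFI << 3) | 4:

```rust
// Build the list of READ RECORD APDUs implied by an AFL.
// Each AFL entry is 4 bytes: SFI (top 5 bits of byte 0), first record,
// last record, and the number of records used in offline data authentication.
fn read_record_apdus(afl: &[u8]) -> Vec<[u8; 5]> {
    let mut apdus = Vec::new();
    for entry in afl.chunks_exact(4) {
        let sfi = entry[0] >> 3;
        let (first, last) = (entry[1], entry[2]);
        for rec in first..=last {
            // READ RECORD: CLA=00 INS=B2 P1=record P2=(SFI << 3) | 4 Le=00
            apdus.push([0x00, 0xB2, rec, (sfi << 3) | 0x04, 0x00]);
        }
    }
    apdus
}

fn main() {
    // The AFL from the GPO response above: 08010100 10010400 18010301
    let afl = [0x08, 0x01, 0x01, 0x00, 0x10, 0x01, 0x04, 0x00, 0x18, 0x01, 0x03, 0x01];
    for apdu in read_record_apdus(&afl) {
        let hex: Vec<String> = apdu.iter().map(|b| format!("{:02X}", b)).collect();
        println!("{}", hex.join(" "));
    }
}
```

Running this on the AFL above reproduces the eight READ RECORD commands listed.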
I have a 256-bit number, but written in little-endian:
<Buffer 21 a2 bc 03 6d 18 2f 11 f5 5a bd 5c b4 32 a2 7b 22 79 7e 53 9b cb 44 5b 0e 00 00 00 00 00 00 00>
How can I correctly print it as a hexadecimal value?
buf.toString('hex')
buf.toString('hex').split("").reverse().join("") gives 0x00000000000000e0b544bcb935e79722b72a234bc5dba55f11f281d630cb2a12 instead of 0x000000000000000e5b44cb9b537e79227ba232b45cbd5af5112f186d03bca221
You can use match instead of split to get an array of the two character groups. Then you can reverse the array and join it.
buf.toString('hex').match(/.{2}/g).reverse().join("")
Actually, Buffer objects support the reverse() method, and it may be better to use it before converting to a hex string (note that reverse() mutates the buffer in place):
buf.reverse().toString('hex')
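The pitfall in the question is reversing individual hex characters instead of whole bytes. The same byte-wise reversal, sketched in Rust (the helper name is mine):

```rust
// Reverse the byte order (not the hex digits) and format each byte
// as two lowercase hex digits, turning a little-endian byte buffer
// into a big-endian hex string.
fn le_bytes_to_be_hex(bytes: &[u8]) -> String {
    bytes.iter().rev().map(|b| format!("{:02x}", b)).collect()
}

fn main() {
    // First eight bytes of the buffer from the question.
    let buf = [0x21, 0xa2, 0xbc, 0x03, 0x6d, 0x18, 0x2f, 0x11];
    println!("0x{}", le_bytes_to_be_hex(&buf));
    // -> 0x112f186d03bca221
}
```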