I am taking a hex dump of the same file on two machines (Mac & Linux), but the byte order looks different in each. Am I doing something wrong?
Hexdump output from OSX
➜ samples hexdump 1500_77807e3eb2eeacd9ac870c24103f5b_fno.bin
0000000 02 20 00 01 01 0b 1a 02 59 5a 2a 2a 31 32 33 00
0000010 40 00 05 4a fa a4 77 00 00 1c 28 40 28 02 17 ef
0000020 6a 36 00 e0 4c 01 b8 f5 32 20 a0 00 07 01 d6 00
0000030 02 00 00 c3 1f 00 01 40 1f 18 67 8f 00 2f 26 43
0000040 2d 40 96 30 89 40 11 4b a0 ff 28 34 00 35 00 02
0000050 02 58 40 bd 25 40 ec 40 01 01 40 a6 26 20 61 34
0000060 40 2d c2 5f 16 00 03 27 00 2d 11 be 00 4b 5d 0c
0000070 c0 bd 2c 40 2d 4d a0 be 06 72 40 2f 52 00 05 7e
0000080 00 96 5d 57 a0 5c 80 2d 5c be 04 b0 5d 61 bc 03
0000090 00 00 41 1c f7 a0 80 1f 05 c5 70 7d 40 c2 ce 2a
00000a0 e9 00 0e 2c dd 00 0e 24 ef 00 0e b7 f0 83 54 61
00000b0 d7 03 bb 3c 43 57 03 c6 c7 be 99 84 60 44 36 03
00000c0 54 41 d7 03 b0 e2 a2 96 00 e1 00 2e af 5c bc 43
00000d0 b7 03 ad 6d c0 5c 41 4e 9f 6c 28 00 2e 9c c9 28
00000e0 00 2e b9 43 dc 41 1d b9 c2 c4 43 b7 03 ba 51 27
00000f0 03 e7 03 bb af a0 2e 0e a6 5d dc bc 43 17 d0 a0
0000100 40 80 1f d3 30 80 c3 54 62 cc 62 ac c0 0c 11 00
0000110 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
*
00005d0 00 00 00 00 00 00 00 00 00 00 00 00
00005dc
Hexdump output of the same file from an Ubuntu server, SSHed into from OSX
hexdump 1500_77807e3eb2eeacd9ac870c24103f5b_fno.bin
0000000 2002 0100 0b01 021a 5a59 2a2a 3231 0033
0000010 0040 4a05 a4fa 0077 1c00 4028 0228 ef17
0000020 366a e000 014c f5b8 2032 00a0 0107 00d6
0000030 0002 c300 001f 4001 181f 8f67 2f00 4326
0000040 402d 3096 4089 4b11 ffa0 3428 3500 0200
0000050 5802 bd40 4025 40ec 0101 a640 2026 3461
0000060 2d40 5fc2 0016 2703 2d00 be11 4b00 0c5d
0000070 bdc0 402c 4d2d bea0 7206 2f40 0052 7e05
0000080 9600 575d 5ca0 2d80 be5c b004 615d 03bc
0000090 0000 1c41 a0f7 1f80 c505 7d70 c240 2ace
00000a0 00e9 2c0e 00dd 240e 00ef b70e 83f0 6154
00000b0 03d7 3cbb 5743 c603 bec7 8499 4460 0336
00000c0 4154 03d7 e2b0 96a2 e100 2e00 5caf 43bc
00000d0 03b7 6dad 5cc0 4e41 6c9f 0028 9c2e 28c9
00000e0 2e00 43b9 41dc b91d c4c2 b743 ba03 2751
00000f0 e703 bb03 a0af 0e2e 5da6 bcdc 1743 a0d0
0000100 8040 d31f 8030 54c3 cc62 ac62 0cc0 0011
0000110 0000 0000 0000 0000 0000 0000 0000 0000
*
Even the first word is different: 02 20 vs. 2002.
It looks like you're running into two things at once: your OSX hexdump is printing single bytes, while your Linux hexdump defaults to 2-byte words, and those words are displayed in host (little-endian) byte order, which is why 02 20 shows up as 2002.
To get identical output in both environments, provide an explicit format string:
hexdump -e '"%07_ax " 16/1 " %02x" "\n"' 1500_77807e3eb2eeacd9ac870c24103f5b_fno.bin
The format string "%07_ax " 16/1 " %02x" "\n" prints a 7-digit hex offset, then 16 one-byte units as two hex digits each, then a newline.
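If you just need a byte-for-byte dump that matches everywhere, POSIX od sidesteps hexdump's word-size default entirely. A sketch (the printf line recreates the first four bytes of the file from the question as a stand-in):

```shell
# Stand-in for the sample file: its first four bytes, 02 20 00 01.
printf '\002\040\000\001' > sample.bin
# One byte per column, hex offsets; identical output on macOS and Linux,
# because od prints bytes in file order, not host-endian words.
od -A x -t x1 sample.bin
```

Since od is specified by POSIX, this works unmodified on both systems without any format-string escaping.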
Related
Suppose I create a simple PNG with:
convert -size 1x1 canvas:red red.png
Then run the command identify on it. It tells me the ColorSpace of the image is sRGB, but there seems to be NO indication of this inside the file. In fact, running
$ hexdump -C red.png
00000000 89 50 4e 47 0d 0a 1a 0a 00 00 00 0d 49 48 44 52 |.PNG........IHDR|
00000010 00 00 00 01 00 00 00 01 01 03 00 00 00 25 db 56 |.............%.V|
00000020 ca 00 00 00 04 67 41 4d 41 00 00 b1 8f 0b fc 61 |.....gAMA......a|
00000030 05 00 00 00 20 63 48 52 4d 00 00 7a 26 00 00 80 |.... cHRM..z&...|
00000040 84 00 00 fa 00 00 00 80 e8 00 00 75 30 00 00 ea |...........u0...|
00000050 60 00 00 3a 98 00 00 17 70 9c ba 51 3c 00 00 00 |`..:....p..Q<...|
00000060 06 50 4c 54 45 ff 00 00 ff ff ff 41 1d 34 11 00 |.PLTE......A.4..|
00000070 00 00 01 62 4b 47 44 01 ff 02 2d de 00 00 00 07 |...bKGD...-.....|
00000080 74 49 4d 45 07 e5 01 0d 17 04 37 80 ef 04 02 00 |tIME......7.....|
00000090 00 00 0a 49 44 41 54 08 d7 63 60 00 00 00 02 00 |...IDAT..c`.....|
000000a0 01 e2 21 bc 33 00 00 00 25 74 45 58 74 64 61 74 |..!.3...%tEXtdat|
000000b0 65 3a 63 72 65 61 74 65 00 32 30 32 31 2d 30 31 |e:create.2021-01|
000000c0 2d 31 33 54 32 33 3a 30 34 3a 35 35 2b 30 30 3a |-13T23:04:55+00:|
000000d0 30 30 2d af d4 01 00 00 00 25 74 45 58 74 64 61 |00-......%tEXtda|
000000e0 74 65 3a 6d 6f 64 69 66 79 00 32 30 32 31 2d 30 |te:modify.2021-0|
000000f0 31 2d 31 33 54 32 33 3a 30 34 3a 35 35 2b 30 30 |1-13T23:04:55+00|
00000100 3a 30 30 5c f2 6c bd 00 00 00 00 49 45 4e 44 ae |:00\.l.....IEND.|
00000110 42 60 82 |B`.|
00000113
does not provide a clue, as far as I can tell.
I understand that identifying the ColorSpace of an image, that does not contain that information, is a very hard problem -- see one proposed solution looking at the histogram of colors here.
So how does identify, from the ImageMagick suite, determine the ColorSpace of this image?
It is common, but not standardized, to assume that an image without an embedded or sidecar ICC profile, and without an explicit encoding description, is encoded according to IEC 61966-2-1:1999, i.e. the sRGB specification.
This is just a bug in ImageMagick. You can use exiftool to check whether the sRGB (rendering intent) chunk is present. In this case, it is not.
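You can also check for the chunk without exiftool: PNG chunk names are four ASCII letters embedded in the file, so grepping a hex dump for them shows which are present. A quick-and-dirty sketch (the printf line writes a stand-in file containing the same chunk names as red.png, since the real file is not reproduced here; a name that straddles hexdump's 16-byte line break would be missed):

```shell
# Stand-in with the same chunk names as red.png (not a valid PNG,
# just enough for the demonstration).
printf 'IHDR gAMA cHRM PLTE IDAT IEND' > red.png
# Grep the dump for colour-related chunk names. For the real red.png
# this prints only gAMA and cHRM -- no sRGB or iCCP -- which is why
# identify has to assume a colorspace.
hexdump -C red.png | grep -oE 'sRGB|iCCP|gAMA|cHRM'
```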
Gamma 2.2 is not sRGB, so ImageMagick is wrong here. This is a common problem on Wikipedia: all SVG images converted to PNG have this, and it destroys the colours. See: https://phabricator.wikimedia.org/T26768
We will have to reencode all images on Wikipedia, since we use ImageMagick. Sigh.
I'm going to delete an existent key from my card's ISD. To do so I sent a DELETE Key APDU command with corresponding KeyID and KeyVersion to the ISD after a successful Mutual Authentication as below:
--> 00 A4 04 00 08 A0 00 00 01 51 00 00 00
<-- 6F 5B 84 08 A0 00 00 01 51 00 00 00 A5 4F 73 49 06 07 2A 86 48 86 FC 6B 01 60 0B 06 09 2A 86 48 86 FC 6B 02 02 02 63 09 06 07 2A 86 48 86 FC 6B 03 64 0B 06 09 2A 86 48 86 FC 6B 04 02 55 65 0B 06 09 2A 86 48 86 FC 6B 02 01 03 66 0C 06 0A 2B 06 01 04 01 2A 02 6E 01 03 9F 65 01 FF 90 00
--> 80 50 00 00 08 79 71 01 3C 63 9D 72 A3
<-- 00 00 90 30 09 0A 90 72 3D A3 01 02 00 00 60 AD 80 68 C2 A1 79 AE B9 E4 4A 4D B7 99 90 00
--> 84 82 00 00 10 AB E9 10 5B 60 7C DE C6 9C DC 15 E0 DA 9B 81 44
<-- 90 00
--> 80 E4 00 00 06 D0 01 01 D2 01 71
<-- 6A 80
As you can see above, I received the 6A 80 status word, which means Wrong Data. I have tried the same command and data with a different card, and it successfully returned the 90 00 status word.
So:
What is wrong with this card, and how can I delete this key?
Is there any way to list all existing keys on the card? As far as I know, the GET DATA APDU command with tag 66 (Key Information Template) does not return a list of all available keys.
Some cards simply do not support it. As an alternative, you can rotate the keys to a random value.
The tag for key templates is 00E0, which you can use with GET DATA. E.g. GPShell provides the command get_key_information_templates -keyTemplate index; use 0 as the index. Its output is a more readable list.
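For reference, here is how the failing command decodes, following the GlobalPlatform Card Specification coding of DELETE (the values are taken from the trace above):

```
80 E4 00 00 06   CLA=80, INS=E4 (DELETE), P1=00, P2=00, Lc=06
   D0 01 01      tag D0: Key Identifier     = 01
   D2 01 71      tag D2: Key Version Number = 71
```

So the 6A 80 means this particular card objects to the D0/D2 TLVs in the data field, even though another card accepts the identical command.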
I want to make a PCap analyzer script that can detect what traffic is what from a pcap file.
The general idea is: HTTP (x10), DNS (x5), HTTPS (x20).
As you can see, the majority of the traffic is HTTPS, and I want to be able to pull that from the pcap packet data and pass it to another section of my analyzer script.
I don't have a clue which NPM packages I could use. I have looked into pcap-parser, a 9+ year old package, which only provides packet.data and packet.header.
I'm losing all hope of finishing this script; I've tried every resource I could find, and even researched uploading the pcap to an API to extract the info I want, to no avail.
Example of packet.header
{
timestampSeconds: 1606145597,
timestampMicroseconds: 444357,
capturedLength: 60,
originalLength: 60
}
Example of packet.data (Buffer)
<Buffer 01 00 5e 7f ff fa 34 29 8f 99 09 70 08 00 45 00 00 a5 a4 76 00 00 04 11 10 f3 0a c8 06 1d ef ff ff fa ed 0c 07 6c 00 91 17 56 4d 2d 53 45 41 52 43 48 ... 129 more bytes>
<Buffer ff ff ff ff ff ff 34 29 8f 99 09 6e 08 06 00 01 08 00 06 04 00 01 34 29 8f 99 09 6e 0a c8 06 e6 00 00 00 00 00 00 0a c8 06 de 00 00 00 00 00 00 00 00 ... 10 more bytes>
<Buffer e0 55 3d 5e 95 a0 40 ec 99 d3 06 fd 08 00 45 00 05 6b a7 ed 40 00 80 06 00 00 0a 91 a6 ce 34 ef cf 64 e9 9f 01 bb a2 30 72 ed d9 06 6d cc 80 18 02 00 ... 1351 more bytes>
<Buffer 40 ec 99 d3 06 fd e0 55 3d 5e 95 a0 08 00 45 00 00 34 72 2d 40 00 70 06 e2 e3 34 ef cf 64 0a 91 a6 ce 01 bb e9 9f d9 06 6d cc a2 30 14 19 80 10 1b 25 ... 16 more bytes>
<Buffer e0 55 3d 5e 95 a0 40 ec 99 d3 06 fd 08 00 45 00 00 34 05 b4 40 00 80 06 00 00 0a 91 a6 ce 17 d9 8a 6c e9 a8 01 bb f0 0d cc ed 00 00 00 00 80 02 fa f0 ... 16 more bytes>
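Since pcap-parser already hands you each frame as a raw Buffer (the packet.data examples above), a little manual header parsing is enough to bucket traffic by well-known port, without any extra package. A sketch under stated assumptions: it handles plain Ethernet II + IPv4 frames with no VLAN tags, and classify() and its labels are my own naming, not part of any library.

```javascript
// Classify one raw Ethernet frame (a pcap-parser packet.data Buffer)
// by well-known TCP/UDP port.
function classify(buf) {
  // EtherType at offset 12 must be 0x0800 (IPv4).
  if (buf.length < 34 || buf.readUInt16BE(12) !== 0x0800) return 'non-IPv4';
  const ihl = (buf[14] & 0x0f) * 4;   // IPv4 header length in bytes
  const proto = buf[14 + 9];          // 6 = TCP, 17 = UDP
  if (proto !== 6 && proto !== 17) return 'other';
  const l4 = 14 + ihl;                // start of the TCP/UDP header
  if (buf.length < l4 + 4) return 'truncated';
  const ports = [buf.readUInt16BE(l4), buf.readUInt16BE(l4 + 2)];
  if (ports.includes(53))  return 'DNS';
  if (ports.includes(443)) return 'HTTPS';
  if (ports.includes(80))  return 'HTTP';
  return 'other';
}

// Usage with pcap-parser (API as documented by that package):
// const parser = require('pcap-parser').parse('capture.pcap');
// const counts = {};
// parser.on('packet', p => {
//   const k = classify(p.data);
//   counts[k] = (counts[k] || 0) + 1;
// });
// parser.on('end', () => console.log(counts));
```

Note that this only matches default ports; HTTPS on a non-standard port, or any non-IP traffic, falls into 'other'.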
I use 'socat TCP4-LISTEN:8080,fork EXEC:./bashttpd' as an HTTP server. When I try to receive an image file from a client, some bytes are removed and my image is corrupted.
correct:
01b0 0a 89 50 4e 47 0d 0a 1a 0a 00 00 00 0d 49 48 44 ..PNG........IHD
01c0 52 00 00 07 80 00 00 04 38 08 02 00 00 00 67 b1 R.......8.....g.
01d0 56 14 00 00 00 09 70 48 59 73 00 00 11 b0 00 00 V.....pHYs......
incorrect: (socat -> read line -> xxd)
00000000: 89 50 4e 47 0d 0a .PNG..
00000000: 1a 0a ..
00000000: 0d 49 48 44 52 07 80 04 38 08 02 67 b1 56 14 09 .IHDR...8..g.V..
How can I solve this problem?
Thanks
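The bytes are most likely being lost in the read line step, not in socat: bash's read splits input at newlines (note how the first incorrect line stops right at the 0d 0a of the PNG signature) and cannot store NUL bytes in a variable (note the missing 00s in the third line). A sketch of reading the body with an exact byte count via dd instead; the length value is hardcoded here, but in bashttpd it would come from the request's Content-Length header:

```shell
# In the real handler this comes from the Content-Length header.
CONTENT_LENGTH=5
# dd copies raw bytes verbatim, so NULs and CR/LF survive intact,
# unlike `read`, which splits on newlines and drops NUL bytes.
printf 'ab\000cd' | dd bs=1 count="$CONTENT_LENGTH" of=upload.bin 2>/dev/null
```

The resulting upload.bin contains all five bytes, including the embedded NUL, unmangled.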
I'm looking at socket.io packets and they are TCP. When I review the payload, I see what looks like encrypted data. Where and how is socket.io encrypting the messages that pass through the socket? Is it really secure? This is a VM running with requests over http.
For example, I see
0000 bc ec 23 c3 64 6a 00 15 5d 01 59 06 08 00 45 08 ..#.dj..].Y...E.
0010 00 58 68 cc 40 00 40 06 4e 6c c0 a8 01 05 c0 a8 .Xh.@.@.Nl......
0020 01 0a 00 16 c6 51 15 15 7f 44 69 87 60 58 50 18 .....Q...Di.`XP.
0030 0b 2e 6c ae 00 00 8b 6f 92 7f b9 1b c2 d6 54 60 ..l....o......T`
0040 5e 24 65 2a 0c d6 87 90 fd 87 63 30 9d 69 11 26 ^$e*......c0.i.&
0050 4d 75 8c 7b 5e b2 ad 47 12 9d 05 d0 7c 3b 7c 9e Mu.{^..G....|;|.
0060 b1 0d a0 b7 f1 88 ......