How "readelf --hex-dump" gives little endian output - hexdump

I am using readelf --hex-dump=<section name> <elf> to get a hex dump of a specific section. However, the result is shown as big-endian DWORDs. Is there a native way to make the output little-endian?
Hex dump of section 'PrgCode':
0x1ffe0000 b0b54ef2 0010cef2 00000168 490507d5 ..N........hI...
0x1ffe0010 4ff48061 c0f88010 bff34f8f bff36f8f O..a......O...o.
0x1ffe0020 0021c5f2 2d410a68 4cf22050 cdf62810 .!..-A.hL. P..(.
0x1ffe0030 92040246 5ebf4cf2 20524a60 4df62812 ...F^.L. RJ`M.(.
0x1ffe0040 4a604ff6 ff728a60 0b6843f0 200323f0 J`O..r.`.hC. .#.
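As far as I know, readelf has no switch to change the byte order of its --hex-dump output; it simply prints the bytes in the order they appear in the file. One workaround is to post-process the dump and reverse each 4-byte word. A minimal sketch in Python (the column layout is assumed to match the sample above, and the script name swap_words.py is my own):

import re
import sys

HEX_WORD = re.compile(r"^[0-9a-fA-F]{8}$")

def swap_word(word):
    # Reverse the byte order of an 8-hex-digit word: 'b0b54ef2' -> 'f24eb5b0'.
    return word[6:8] + word[4:6] + word[2:4] + word[0:2]

# Usage: readelf --hex-dump=PrgCode your.elf | python3 swap_words.py
for line in sys.stdin:
    fields = line.split()
    # Data lines look like: 0x1ffe0000 b0b54ef2 0010cef2 00000168 490507d5 ..N...
    if len(fields) >= 2 and fields[0].startswith("0x"):
        swapped = [swap_word(f) if HEX_WORD.match(f) else f for f in fields[1:5]]
        print(fields[0], *swapped, " ".join(fields[5:]))
    else:
        sys.stdout.write(line)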

Related

Japanese Unicode: Convert radical to regular character code

How can I convert Japanese radical characters into their "regular" kanji character counterparts?
For instance, the character for the radical fire is ⽕ (with a Unicode value of 12117)
And the regular character is 火 (with a Unicode value of 28779)
EDIT:
To clarify, the reason I think I need this is that I would like to obtain the stroke information for each radical by using the kanjivg data set. However (I need to look into this further), I'm not sure whether kanjivg has stroke data for the radical characters, but it definitely has stroke data for the regular kanji characters.
The language I'm working with is Java, but I assume the conversion would be similar in any language.
Using RADKFILE for this was a neat idea (@Paul), but I don't think it uses Kangxi radicals: it's encoded in EUC-JP, and unless my browser (or GitHub) is silently converting between Kangxi and kanji code points, the list only contains non-Kangxi characters as far as Unicode is concerned.
The Unicode range for Kangxi radicals is on this Wikipedia page: Unicode/Character reference/2000-2FFF (bottom).
Somebody has created a mapping between them: Kanji to Kangxi Radical remapping tables. I did not check the correctness but when you convert the code points to characters you can see if they look the same. Here's how you do it in Java: Creating Unicode character from its number
Here is the list in CSV for convenience (kanji,radical); a small loading sketch follows the list:
0x4E00,0x2F00
0x4E28,0x2F01
0x4E36,0x2F02
0x4E3F,0x2F03
0x4E59,0x2F04
0x4E85,0x2F05
0x4E8C,0x2F06
0x4EA0,0x2F07
0x4EBA,0x2F08
0x513F,0x2F09
0x5165,0x2F0A
0x516B,0x2F0B
0x5182,0x2F0C
0x5196,0x2F0D
0x51AB,0x2F0E
0x51E0,0x2F0F
0x51F5,0x2F10
0x5200,0x2F11
0x529B,0x2F12
0x52F9,0x2F13
0x5315,0x2F14
0x531A,0x2F15
0x5338,0x2F16
0x5341,0x2F17
0x535C,0x2F18
0x5369,0x2F19
0x5382,0x2F1A
0x53B6,0x2F1B
0x53C8,0x2F1C
0x53E3,0x2F1D
0x56D7,0x2F1E
0x571F,0x2F1F
0x58EB,0x2F20
0x5902,0x2F21
0x590A,0x2F22
0x5915,0x2F23
0x5927,0x2F24
0x5973,0x2F25
0x5B50,0x2F26
0x5B80,0x2F27
0x5BF8,0x2F28
0x5C0F,0x2F29
0x5C22,0x2F2A
0x5C38,0x2F2B
0x5C6E,0x2F2C
0x5C71,0x2F2D
0x5DDB,0x2F2E
0x5DE5,0x2F2F
0x5DF1,0x2F30
0x5DFE,0x2F31
0x5E72,0x2F32
0x5E7A,0x2F33
0x5E7F,0x2F34
0x5EF4,0x2F35
0x5EFE,0x2F36
0x5F0B,0x2F37
0x5F13,0x2F38
0x5F50,0x2F39
0x5F61,0x2F3A
0x5F73,0x2F3B
0x5FC3,0x2F3C
0x6208,0x2F3D
0x6236,0x2F3E
0x624B,0x2F3F
0x652F,0x2F40
0x6534,0x2F41
0x6587,0x2F42
0x6597,0x2F43
0x65A4,0x2F44
0x65B9,0x2F45
0x65E0,0x2F46
0x65E5,0x2F47
0x66F0,0x2F48
0x6708,0x2F49
0x6728,0x2F4A
0x6B20,0x2F4B
0x6B62,0x2F4C
0x6B79,0x2F4D
0x6BB3,0x2F4E
0x6BCB,0x2F4F
0x6BD4,0x2F50
0x6BDB,0x2F51
0x6C0F,0x2F52
0x6C14,0x2F53
0x6C34,0x2F54
0x706B,0x2F55
0x722A,0x2F56
0x7236,0x2F57
0x723B,0x2F58
0x723F,0x2F59
0x7247,0x2F5A
0x7259,0x2F5B
0x725B,0x2F5C
0x72AC,0x2F5D
0x7384,0x2F5E
0x7389,0x2F5F
0x74DC,0x2F60
0x74E6,0x2F61
0x7518,0x2F62
0x751F,0x2F63
0x7528,0x2F64
0x7530,0x2F65
0x758B,0x2F66
0x7592,0x2F67
0x7676,0x2F68
0x767D,0x2F69
0x76AE,0x2F6A
0x76BF,0x2F6B
0x76EE,0x2F6C
0x77DB,0x2F6D
0x77E2,0x2F6E
0x77F3,0x2F6F
0x793A,0x2F70
0x79B8,0x2F71
0x79BE,0x2F72
0x7A74,0x2F73
0x7ACB,0x2F74
0x7AF9,0x2F75
0x7C73,0x2F76
0x7CF8,0x2F77
0x7F36,0x2F78
0x7F51,0x2F79
0x7F8A,0x2F7A
0x7FBD,0x2F7B
0x8001,0x2F7C
0x800C,0x2F7D
0x8012,0x2F7E
0x8033,0x2F7F
0x807F,0x2F80
0x8089,0x2F81
0x81E3,0x2F82
0x81EA,0x2F83
0x81F3,0x2F84
0x81FC,0x2F85
0x820C,0x2F86
0x821B,0x2F87
0x821F,0x2F88
0x826E,0x2F89
0x8272,0x2F8A
0x8278,0x2F8B
0x864D,0x2F8C
0x866B,0x2F8D
0x8840,0x2F8E
0x884C,0x2F8F
0x8863,0x2F90
0x897E,0x2F91
0x898B,0x2F92
0x89D2,0x2F93
0x8A00,0x2F94
0x8C37,0x2F95
0x8C46,0x2F96
0x8C55,0x2F97
0x8C78,0x2F98
0x8C9D,0x2F99
0x8D64,0x2F9A
0x8D70,0x2F9B
0x8DB3,0x2F9C
0x8EAB,0x2F9D
0x8ECA,0x2F9E
0x8F9B,0x2F9F
0x8FB0,0x2FA0
0x8FB5,0x2FA1
0x9091,0x2FA2
0x9149,0x2FA3
0x91C6,0x2FA4
0x91CC,0x2FA5
0x91D1,0x2FA6
0x9577,0x2FA7
0x9580,0x2FA8
0x961C,0x2FA9
0x96B6,0x2FAA
0x96B9,0x2FAB
0x96E8,0x2FAC
0x9751,0x2FAD
0x975E,0x2FAE
0x9762,0x2FAF
0x9769,0x2FB0
0x97CB,0x2FB1
0x97ED,0x2FB2
0x97F3,0x2FB3
0x9801,0x2FB4
0x98A8,0x2FB5
0x98DB,0x2FB6
0x98DF,0x2FB7
0x9996,0x2FB8
0x9999,0x2FB9
0x99AC,0x2FBA
0x9AA8,0x2FBB
0x9AD8,0x2FBC
0x9ADF,0x2FBD
0x9B25,0x2FBE
0x9B2F,0x2FBF
0x9B32,0x2FC0
0x9B3C,0x2FC1
0x9B5A,0x2FC2
0x9CE5,0x2FC3
0x9E75,0x2FC4
0x9E7F,0x2FC5
0x9EA5,0x2FC6
0x9EBB,0x2FC7
0x9EC3,0x2FC8
0x9ECD,0x2FC9
0x9ED1,0x2FCA
0x9EF9,0x2FCB
0x9EFD,0x2FCC
0x9F0E,0x2FCD
0x9F13,0x2FCE
0x9F20,0x2FCF
0x9F3B,0x2FD0
0x9F4A,0x2FD1
0x9F52,0x2FD2
0x9F8D,0x2FD3
0x9F9C,0x2FD4
0x9FA0,0x2FD5
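If you save the table above to a file, turning it into a lookup table is just a matter of parsing the hex code points. A small sketch, shown in Python for brevity (the file name radicals.csv is my own); the same idea applies in Java via Character.toChars:

# Build a Kangxi-radical -> kanji lookup from the CSV above
# (radicals.csv is a hypothetical file name for the list saved as-is).
kangxi_to_kanji = {}
with open("radicals.csv", encoding="ascii") as f:
    for line in f:
        kanji_hex, radical_hex = line.strip().split(",")
        kangxi_to_kanji[chr(int(radical_hex, 16))] = chr(int(kanji_hex, 16))

print(kangxi_to_kanji["⽕"])  # -> 火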
It's not entirely clear why you want this, but one possible way to do it is with Jim Breen's radkfile, which maps radicals to associated kanji and the reverse. Combine that with some heuristics and Breen's kanjidic file (to the extent that these resources are reliable), and you can pretty easily generate a mapping. Here's an example in Python, using the cjktools library, which has Python wrappers for these things.
from cjktools.resources.radkdict import RadkDict
from cjktools.resources.kanjidic import Kanjidic

def make_rad_to_kanji_dict():
    rdict = RadkDict()
    kdict = Kanjidic()
    # Get all the radicals where there are kanji made up entirely of the one
    # radical - the ones we want are a subset of those
    tmp = ((rads[0], kanji) for kanji, rads in rdict.items()
           if len(rads) == 1)
    # All the ones with the same number of strokes - should be all the ones that
    # are homographs
    out = {rad: kanji for rad, kanji in tmp
           if (kanji in kdict and
               kdict[kanji].stroke_count == rdict.radical_to_stroke_count[rad])}
    return out

RAD_TO_KANJI_DICT = make_rad_to_kanji_dict()

if __name__ == "__main__":
    print(RAD_TO_KANJI_DICT['⽕'])
You can iterate through the file it generates and output a static mapping pretty easily. There may be existing homograph lists for that sort of thing, but I don't know of any. radkdict only has 128 kanji consisting of exactly 1 radical, so it is also a simple matter to just enumerate all of those and manually check which ones match your criteria.
Note: I looked through the list of entries that are caught by the "consisting of exactly one radical" heuristic but skipped by the "has the same stroke count" check; it seems that '老' (radical) -> '老' (kanji) and '刈' (radical) -> '刈' (kanji) are the only ones that, for whatever reason, don't get caught. Here is a CSV generated with this method.

Dumpbin output meaning?

If I do something like this:
dumpbin myexe.exe
I get output similar to:
Dump of file myexe.exe
File Type: EXECUTABLE IMAGE
Summary
21000 .data
1000 .gfids
3C9000 .rdata
4F000 .reloc
B4000 .rsrc
325000 .text
1000 .tls
The second column (.data, .gfids, .rdata, ...) shows the section names.
But what is the first column? The section size?
This value is actually the aligned section size.
If you do dumpbin /headers myexe.exe, you will get a more verbose output. For example, dumpbin C:\Windows\explorer.exe on my system produces:
Dump of file c:\Windows\explorer.exe
File Type: EXECUTABLE IMAGE
Summary
4000 .data
1000 .didat
1000 .imrsiv
18000 .pdata
7B000 .rdata
6000 .reloc
1EA000 .rsrc
1C5000 .text
The output of dumpbin /headers C:\Windows\explorer.exe contains the following for the .text section (... = lines omitted):
...
SECTION HEADER #1
.text name
1C4737 virtual size
1000 virtual address (0000000140001000 to 00000001401C5736)
1C4800 size of raw data
400 file pointer to raw data (00000400 to 001C4BFF)
0 file pointer to relocation table
0 file pointer to line numbers
0 number of relocations
0 number of line numbers
60000020 flags
Code
Execute Read
...
It also reports a section alignment of 1000 under OPTIONAL HEADER VALUES.
As you can see, the virtual size of the .text section is actually 1C4737; rounded up to that 1000 alignment, it becomes 1C5000, as reported by /summary (the default option for dumpbin).
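The rounding is just "round the virtual size up to the next multiple of the section alignment". A quick sketch of that arithmetic (values taken from the dump above; the helper name is my own):

def align_up(size, alignment):
    # Round size up to the next multiple of alignment (a power of two).
    return (size + alignment - 1) & ~(alignment - 1)

virtual_size = 0x1C4737      # .text "virtual size" from dumpbin /headers
section_alignment = 0x1000   # "section alignment" from OPTIONAL HEADER VALUES

print(hex(align_up(virtual_size, section_alignment)))  # 0x1c5000, matching /summary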

How to write a string as Unicode bytes in Python?

When I type '你' in my editor and save it as test-unicode.txt in Unicode mode, then open it with xxd g:\\test-unicode.txt, I get:
0000000: fffe 604f ..`O
1. fffe indicates little endian.
2. The Unicode code point of 你 is 0x4F60.
I want to write 你 as 604f or 4f60 in the file.
output=open("g://test-unicode.txt","wb")
str1="你"
output.write(str1)
output.close()
error:
TypeError: 'str' does not support the buffer interface
When I change it to the following, there is no error.
output=open("g://test-unicode.txt","wb")
str1="你"
output.write(str1.encode())
output.close()
When I open it with xxd g:\\test-unicode.txt, I get:
0000000: e4bd a0 ...
How can I write 604f or 4f60 into my file, the same way the Microsoft editor does when saving in Unicode format?
"Unicode" as an encoding is actually UTF-16LE.
with open("g:/test-unicode.txt", "w", encoding="utf-16le") as output:
output.write(str1)
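Note that encoding="utf-16le" writes only the character bytes; the ff fe at the start of the editor's file is a byte order mark, which you can add yourself (or let Python's "utf-16" codec add for you). A minimal sketch of both variants:

str1 = "你"

# Explicit: write the UTF-16LE byte order mark, then the UTF-16LE bytes (60 4f).
with open("g:/test-unicode.txt", "wb") as output:
    output.write(b"\xff\xfe")
    output.write(str1.encode("utf-16le"))

# Or let the "utf-16" codec write the BOM itself; the byte order follows the
# platform, which is little-endian on typical Windows/x86 machines.
with open("g:/test-unicode.txt", "w", encoding="utf-16") as output:
    output.write(str1)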

using SVM for binary classification

I am using SVM-light for binary classification, and I am using SVM in learning mode.
I have my train.dat file ready, but when I run this command, instead of creating the model file, it writes the following to the terminal:
my command:
./svm_learn example1/train.dat example1/model
output:
Scanning examples...done
Reading examples into memory...Feature numbers must be larger or equal to 1!!!
: Success
LINE: -1 0:1.0 6:1.0 16:1.0 18:1.0 28:1.0 29:1.0 31:1.0 48:1.0 58:1.0 73:1.0 82:1.0 93:1.0 95:1.0 106:1.0 108:1.0 118:1.0 121:1.0 122:1.0151:1.0 164:1.0 167:1.0 169:1.0 170:1.0 179:1.0 190:1.0 193:1.0 220:1.0 237:1.0250:1.0 252:1.0 267:1.0 268:1.0 269:1.0 278:1.0 283:1.0 291:1.0 300:1.0 305:1.0320:1.0 332:1.0 336:1.0 342:1.0 345:1.0 348:1.0 349:1.0 350:1.0 368:1.0 370:1.0384:1.0 390:1.0 394:1.0 395:1.0 396:1.0 397:1.0 400:1.0 401:1.0 408:1.0 416:1.0427:1.0 433:1.0 435:1.0 438:1.0 441:1.0 446:1.0 456:1.0 471:1.0 485:1.0 510:1.0523:1.0 525:1.0 526:1.0 532:1.0 540:1.0 553:1.0 567:1.0 568:1.0 581:1.0 583:1.0604:1.0 611:1.0 615:1.0 616:1.0 618:1.0 623:1.0 624:1.0 626:1.0 651:1.0 659:1.0677:1.0 678:1.0 683:1.0 690:1.0 694:1.0 699:1.0 713:1.0 714:1.0 720:1.0 722:1.0731:1.0 738:1.0 755:1.0 761:1.0 763:1.0 768:1.0 776:1.0 782:1.0 792:1.0 817:1.0823:1.0 827:1.0 833:1.0 834:1.0 838:1.0 842:1.0 848:1.0 851:1.0 863:1.0 867:1.0890:1.0 900:1.0 903:1.0 923:1.0 935:1.0 942:1.0 946:1.0 947:1.0 949:1.0 956:1.0962:1.0 965:1.0 968:1.0 983:1.0 986:1.0 987:1.0 990:1.0 998:1.0 1007:1.0 1014:1.0 1019:1.0 1022:1.0 1024:1.0 1029:1.0 1030:1.01032:1.0 1047:1.0 1054:1.0 1063:1.0 1069:1.0 1076:1.0 1085:1.0 1093:1.0 1098:1.0 1108:1.0 1109:1.01116:1.0 1120:1.0 1133:1.0 1134:1.0 1135:1.0 1138:1.0 1139:1.0 1144:1.0 1146:1.0 1148:1.0 1149:1.01161:1.0 1165:1.0 1169:1.0 1170:1.0 1177:1.0 1187:1.0 1194:1.0 1212:1.0 1214:1.0 1239:1.0 1243:1.01251:1.0 1257:1.0 1274:1.0 1278:1.0 1292:1.0 1297:1.0 1304:1.0 1319:1.0 1324:1.0 1325:1.0 1353:1.01357:1.0 1366:1.0 1374:1.0 1379:1.0 1392:1.0 1394:1.0 1407:1.0 1412:1.0 1414:1.0 1419:1.0 1433:1.01435:1.0 1437:1.0 1453:1.0 1463:1.0 1464:1.0 1469:1.0 1477:1.0 1481:1.0 1487:1.0 1506:1.0 1514:1.01519:1.0 1526:1.0 1536:1.0 1549:1.0 1551:1.0 1553:1.0 1561:1.0 1569:1.0 1578:1.0 1603:1.0 1610:1.01615:1.0 1617:1.0 1625:1.0 1638:1.0 1646:1.0 1663:1.0 1666:1.0 1672:1.0 1681:1.0 1690:1.0 1697:1.01699:1.0 1706:1.0 1708:1.0 1717:1.0 1719:1.0 1732:1.0 1737:1.0 1756:1.0 1766:1.0 1771:1.0 1789:1.01804:1.0 1805:1.0 1808:1.0 1814:1.0 1815:1.0 1820:1.0 1824:1.0 1832:1.0 1841:1.0 1844:1.0 1852:1.01861:1.0 1875:1.0 1899:1.0 1902:1.0 1904:1.0 1905:1.0 1917:1.0 1918:1.0 1919:1.0 1921:1.0 1926:1.01934:1.0 1937:1.0 1942:1.0 1956:1.0 1965:1.0 1966:1.0 1970:1.0 1971:1.0 1980:1.0 1995:1.0 2000:1.02009:1.0 2010:1.0 2012:1.0 2015:1.0 2018:1.0 2022:1.0 2047:1.0 2076:1.0 2082:1.0 2095:1.0 2108:1.02114:1.0 2123:1.0 2130:1.0 2133:1.0 2141:1.0 2142:1.0 2143:1.0 2148:1.0 2157:1.0 2160:1.0 2162:1.02170:1.0 2195:1.0 2199:1.0 2201:1.0 2202:1.0 2205:1.0 2211:1.0 2218:1.0
I don't know what to do.
P.S. When I make my train.dat much shorter, everything works fine!
Thank you
From what I could interpret from the log, your training set has an issue.
The first few characters of the training row that has the issue are:
-1 0:1.0 6:1.0
The issue is not with the size but with the feature indexing. You are starting your feature indices at 0 (0:1.0), whereas SVM-light requires all feature indices to be greater than or equal to 1.
Change the indexing to start at 1 and it should work fine.
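For completeness, here is a small re-indexing sketch (not part of SVM-light itself; train_fixed.dat is a name I made up) that shifts every feature index in train.dat up by one so the file becomes 1-based:

# Shift 0-based feature indices to 1-based in an SVM-light data file.
# Assumes each line looks like "<label> <index>:<value> ..." with no # comments.
with open("train.dat") as src, open("train_fixed.dat", "w") as dst:
    for line in src:
        fields = line.split()
        if not fields:
            dst.write(line)
            continue
        label, features = fields[0], fields[1:]
        shifted = ["{}:{}".format(int(f.partition(":")[0]) + 1, f.partition(":")[2])
                   for f in features]
        dst.write(" ".join([label] + shifted) + "\n")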

Can anyone tell me what the encoding of this string is? It's meant to be base64

cpxSR2bnPUihaNxIFFA8Sc+8gUnWuJxJi8ywSW5ju0npWrFJHW2MSZAeMklcZ71IjrBySF2ci0gdecRI0vD/SM4ZF0m1ZSJJBY8bSZJl/0intaxIlQJBSPdY3EdBLM9Hp4wLSOK8Nki8L1pIoglxSAvNbkjHg0VIDlv7R6B2Y0elCqVGFWuVRgagAkdxHTdHELxRR9i2VkdyEUlHU84kRzTS2kalKFxG
This is a string from an XML file produced by my mass spectrometer. I am trying to write a program to load two such files, subtract one set of values from the other, and write the results to a new file. According to the specification for the .mzML format, the numerical data is supposed to be base64-encoded. I can't convert this data string to anything legible using any of the many online base64 converters, or using Notepad++ and the MIME toolkit's base64 converter.
The string, in the context of the results file, looks like this:
<binaryDataArray encodedLength="224">
<cvParam cvRef="MS" accession="MS:1000515" name="intensity array" unitAccession="MS:1000131" unitName="number of counts" unitCvRef="MS"/>
<cvParam cvRef="MS" accession="MS:1000521" name="32-bit float" />
<cvParam cvRef="MS" accession="MS:1000576" name="no compression" />
<binary>cpxSR2bnPUihaNxIFFA8Sc+8gUnWuJxJi8ywSW5ju0npWrFJHW2MSZAeMklcZ71IjrBySF2ci0gdecRI0vD/SM4ZF0m1ZSJJBY8bSZJl/0intaxIlQJBSPdY3EdBLM9Hp4wLSOK8Nki8L1pIoglxSAvNbkjHg0VIDlv7R6B2Y0elCqVGFWuVRgagAkdxHTdHELxRR9i2VkdyEUlHU84kRzTS2kalKFxG</binary>
I can't proceed until I can work out what format this encoding is meant to be!
Thanks in advance for any replies.
You can use this trivial program to convert it to plaintext:
#include <stdio.h>

/* Read raw 4-byte values from stdin and print them one per line.
   Assumes IEEE-754 single-precision floats in host byte order
   (little-endian on x86, matching the vendor spec). */
int main(void)
{
    float f;
    while (fread(&f, 1, 4, stdin) == 4)
        printf("%f\n", f);
    return 0;
}
I compiled this to "floatdecode" and used this command:
echo "cpxSR2bnPUihaNxIFFA8Sc+8gUnWuJxJi8ywSW5ju0npWrFJHW2MSZAeMklcZ71IjrBySF2ci0gdecRI0vD/SM4ZF0m1ZSJJBY8bSZJl/0intaxIlQJBSPdY3EdBLM9Hp4wLSOK8Nki8L1pIoglxSAvNbkjHg0VIDlv7R6B2Y0elCqVGFWuVRgagAkdxHTdHELxRR9i2VkdyEUlHU84kRzTS2kalKFxG" | base64 -d | ./floatdecode
Output is:
53916.445312
194461.593750
451397.031250
771329.250000
1062809.875000
1283866.750000
1448337.375000
1535085.750000
1452893.125000
1150371.625000
729577.000000
387898.875000
248514.218750
285922.906250
402376.906250
524166.562500
618908.875000
665179.312500
637168.312500
523052.562500
353709.218750
197642.328125
112817.929688
106072.507812
142898.609375
187123.531250
223422.937500
246822.531250
244532.171875
202255.109375
128694.109375
58230.625000
21125.322266
19125.541016
33440.023438
46877.441406
53692.062500
54966.843750
51473.445312
42190.324219
28009.101562
14090.161133
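The same decode can be done in a few lines of Python with base64 and struct, which also makes the little-endian, 32-bit-float assumption explicit (this sketch reads the <binary> string from stdin, since it is too long to repeat here):

import base64
import struct
import sys

# Paste or pipe in the <binary> string from the question.
encoded = sys.stdin.read().strip()
raw = base64.b64decode(encoded)

# Per the spec: 32-bit IEEE-754 floats, little-endian ("<" + "f" in struct).
values = struct.unpack("<{}f".format(len(raw) // 4), raw)
for v in values:
    print(v)   # first value: 53916.445..., matching the output above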
There is also a Java Base64 decoder, with options to decompress should you need it.
Vendor spec indicated "32-bit float" = IEEE-754 and specified little-endian.
Schmidt's converter shows the bit pattern for IEEE-754.
One more Notepad++ step lets you look at the hex codes, using the TextFX plugin (after the Base64 decode you already did): select the text, then TextFX > TextFX Convert > Convert text to Hex-32. That shows the hex codes:
000000000 72 9C 52 47 66 E7 3D 48- ... 6E 63 BB 49 |rœRGfç=H¡hÜHP
Little-endian: 47529C72 converts (via Schmidt) as shown above by David.
You can access such data from mzML files in Python through pymzML, a Python interface to mzML files.
http://pymzml.github.com/
