I'm practicing reverse engineering on an IL2CPP Unity project.
Things I've done:
got the APK
used Apktool to extract the files
opened libunity.so with Ghidra (IDA works too)
I then found a weird block of instructions like:
004ac818 f4 0f 1e f8 str x20,[sp, #local_20]!
004ac81c f3 7b 01 a9 stp x19,x30,[sp, #local_10]
004ac820 e1 03 1f 2a mov w1,wzr
004ac824 77 b5 00 94 bl FUN_004d9e00
I followed bl FUN_004d9e00 and found:
FUN_004d9e00
004d9e00 6e ?? 6Eh n
004d9e01 97 ?? 97h
004d9e02 85 ?? 85h
004d9e03 60 ?? 60h `
004d9e04 6d ?? 6Dh m
But here's the thing: the bytes at FUN_004d9e00 don't decode to valid instructions. How can libunity.so still work properly?
Perhaps there is a relocation for address 0x004ac824? In that case the dynamic linker would patch the instruction when libunity.so is loaded, and it would end up calling a different address (possibly in a different shared library).
I'm porting a game server to NodeJS. The problem is that I'm not familiar at all with packets and how to build them. After reading through the NodeJS docs, I think that I've structured my response the way the client expects, but the client doesn't seem to respond well to what my server is sending.
Hoping someone can confirm whether it's something in my server response that doesn't match the documentation requirements.
The client expects the following response from the TCP socket:
Packet Build
BYTE[1] cmd (0xA8)
BYTE[2] total length of this packet
BYTE[1] System Info Flag (0x5D)
BYTE[2] # of servers
(Repeat as needed for each server)
BYTE[2] server index (0-based)
BYTE[32] server name
BYTE percent full
BYTE timezone
BYTE[4] server IP to ping
Here is my NodeJS interpretation of the docs.
/** Build response header */
const length = 45
serverResponse = Buffer.alloc(length)
serverResponse.fill(0xA8, 0)
serverResponse.fill(Buffer.alloc(2, length), 1)
// Last fill had a buffer size of 2, so our next offset considers that
serverResponse.fill(0x5d,3)
serverResponse.fill(Buffer.alloc(2, 1),4)
/** Build response server list */
serverResponse.fill(Buffer.alloc(2, 0),5) /* 2 Bytes (server index, 0-based) */
// Last fill had a buffer size of 2, so our next offset considers that
serverResponse.fill(
Buffer.alloc(32, Buffer.from('Heres your server')),
7) /* 32 bytes (Server name) */
serverResponse.fill(Buffer.alloc(1, 9), 39) /* 1 Bytes (% Full) */
/**
* Trying -12 - 12 range divided by (60 * 60)). #see
* https://github.com/Sphereserver/Source/blob/0be2bc1d2e16659239460495b9819eb8dcfd39ed/src/graysvr/CServRef.cpp#L42
*/
serverResponse.fill(Buffer.alloc(1, -5 / (60 * 60)), 40) /* 1 Byte (Timezone) */
serverResponse.fill(Buffer.from([0,0,0,0]), 41) /** IP Address */
This outputs:
<Buffer a8 2d 2d 5d 01 00 00 48 65 72 65 73 20 79 6f 75 72 20 73 65 72 76 65 72 48 65 72 65 73 20 79 6f 75 72 20 73 65 72 76 09 00 00 00 00 00>
Edit: I've discovered the following that may help me along.
NodeJS equivalents
/**
* BYTE 8-bit unsigned buf.writeUInt8()
* SBYTE 8-bit signed buf.writeInt8()
* BOOL 8-bit boolean (0x00=False, 0xFF=True) buf.fill(0x00) || buf.fill(0xFF)
 * CHAR 8-bit single ASCII character Buffer.from('Text', 'ascii') - Make 8-bit?
 * UNI 16-bit single unicode character Buffer.from('A', 'utf16le') - Correct?
* SHORT 16-bit signed buf.writeInt16BE() - #see https://www.reddit.com/r/node/comments/9hob2u/buffer_endianness_little_endian_or_big_endian_how/
* USHORT 16-bit unsigned buf.writeUInt16BE() - #see https://www.reddit.com/r/node/comments/9hob2u/buffer_endianness_little_endian_or_big_endian_how/
* INT 32-bit signed buf.writeInt32BE - #see https://www.reddit.com/r/node/comments/9hob2u/buffer_endianness_little_endian_or_big_endian_how/
* UINT 32-bit unsigned buf.writeUInt32BE - #see https://www.reddit.com/r/node/comments/9hob2u/buffer_endianness_little_endian_or_big_endian_how/
*/
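Building on the write* equivalents above, here is a minimal sketch of the same packet built with explicit Buffer.write* calls instead of fill(). Note that fill(value, offset) repeats the value from offset to the end of the buffer, which is why later fields in the original attempt kept overwriting each other. Also, the quoted layout sums to 46 bytes for one server (6 header bytes plus a 40-byte entry), not 45. The field order follows the layout in the question; big-endian byte order and the signed timezone byte are my assumptions — swap to the LE variants if the client disagrees.

```javascript
// Assumption: multi-byte fields are big-endian; the `servers` shape is mine.
const HEADER_SIZE = 6;  // cmd(1) + length(2) + flag(1) + count(2)
const ENTRY_SIZE = 40;  // index(2) + name(32) + full(1) + tz(1) + ip(4)
const servers = [
  { index: 0, name: 'Heres your server', percentFull: 9, timezone: -5, ip: [0, 0, 0, 0] },
];

const packet = Buffer.alloc(HEADER_SIZE + ENTRY_SIZE * servers.length); // zero-filled
packet.writeUInt8(0xA8, 0);              // cmd
packet.writeUInt16BE(packet.length, 1);  // total length of this packet
packet.writeUInt8(0x5D, 3);              // System Info Flag
packet.writeUInt16BE(servers.length, 4); // # of servers

servers.forEach((srv, i) => {
  const off = HEADER_SIZE + i * ENTRY_SIZE;
  packet.writeUInt16BE(srv.index, off);          // server index (0-based)
  packet.write(srv.name, off + 2, 32, 'ascii');  // name, zero-padded to 32 bytes
  packet.writeUInt8(srv.percentFull, off + 34);  // percent full
  packet.writeInt8(srv.timezone, off + 35);      // timezone (assumed signed)
  Buffer.from(srv.ip).copy(packet, off + 36);    // server IP to ping
});
```

Because Buffer.alloc zero-fills, the name field is automatically padded with NULs, so no explicit padding step is needed.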
I have a memory leak in a .NET web service application. Upon a suggestion, I was able to analyze the dump file. I suspect it is a native memory leak, but I'm unable to figure out the root cause of the issue. I have followed the steps mentioned in the link.
Here's what I have so far:
address summary
--- Usage Summary ---------------- RgnCount ----------- Total Size -------- %ofBusy %ofTotal
Heap 328 4a256000 ( 1.159 GB) 69.38% 57.93%
<unknown> 1253 1b64b000 ( 438.293 MB) 25.63% 21.40%
Free 246 151fa000 ( 337.977 MB) 16.50%
Native Heap
0:000> !heap -s
Heap Flags Reserv Commit Virt Free List UCR Virt Lock Fast
(k) (k) (k) (k) length blocks cont. heap
-----------------------------------------------------------------------------
001b0000 00000002 1036480 1024552 1036480 411 745 68 5 0 LFH
00010000 00008000 64 4 64 2 1 1 0 0
Heap 001b0000 is using more than 1 GB
Allocation info
0:000> !heap -stat -h 001b0000
heap # 001b0000
group-by: TOTSIZE max-display: 20
size #blocks total ( %) (percent of total busy bytes)
4e24 3a24 - 11bf2510 (28.82)
1001f e89 - e8ac297 (23.61)
Filtering 4e24
0:000> !heap -flt s 4e24
_HEAP # 1b0000
HEAP_ENTRY Size Prev Flags UserPtr UserSize - state
01fa4810 09c6 0000 [00] 01fa4818 04e24 - (busy)
01fa9640 09c6 09c6 [00] 01fa9648 04e24 - (busy)
01fae470 09c6 09c6 [00] 01fae478 04e24 - (busy)
There are a ton of busy blocks:
0:000> dc 01fb32a0 L 2000
01fb32a0 b87718ff 0c12a52e 6f4d3c00 656c7564 ..w......<Module
01fb32b0 6977003e 4d2e796c 4d6b636f 6c75646f >.wily.MockModul
01fb32c0 65440065 6c756166 79442074 696d616e e.Default Dynami
01fb32d0 6f4d2063 656c7564 6c697700 67412e79 c Module.wily.Ag
01fb32e0 00746e65 2e6d6f63 796c6977 6573692e ent.com.wily.ise
01fb32f0 7261676e 65722e64 74736967 49007972 ngard.registry.I
01fb3300 69676552 79727473 76726553 00656369 RegistryService.
01fb3310 6f63736d 62696c72 73795300 006d6574 mscorlib.System.
01fb3320 656a624f 50007463 79786f72 67655249 Object.ProxyIReg
01fb3330 72747369 72655379 65636976 6f4d4e00 istryService.NMo
01fb3340 49006b63 6f766e49 69746163 61486e6f ck.IInvocationHa
01fb3350 656c646e 695f0072 636f766e 6f697461 ndler._invocatio
01fb3360 6e61486e 72656c64 73795300 2e6d6574 nHandler.System.
01fb3370 6c666552 69746365 4d006e6f 6f687465 Reflection.Metho
01fb3380 666e4964 6d5f006f 6f687465 666e4964 dInfo._methodInf
01fb3390 70614d6f 766e4900 00656b6f 2e6d6f63 oMap.Invoke.com.
01fb33a0 796c6977 6573692e 7261676e 74752e64 wily.isengard.ut
01fb33b0 742e6c69 00656572 65726944 726f7463 il.tree.Director
01fb33c0 74615079 646e4168 72746e45 65520079 yPathAndEntry.Re
01fb33d0 74736967 6e457972 00797274 72657571 gistryEntry.quer
01fb33e0 746e4579 73656972 6d6f6300 6c69772e yEntries.com.wil
01fb33f0 73692e79 61676e65 6f2e6472 696f676e y.isengard.ongoi
01fb3400 7571676e 00797265 65755141 6f4e7972 ngquery.AQueryNo
01fb3410 69666974 69746163 72006e6f 73696765 tification.regis
01fb3420 4f726574 696f676e 7551676e 00797265 terOngoingQuery.
01fb3430 2e6d6f63 796c6977 6573692e 7261676e com.wily.isengar
01fb3440 6f702e64 666f7473 65636966 736f5000 d.postoffice.Pos
01fb3450 66664f74 53656369 69636570 72656966 tOfficeSpecifier
01fb3460 72694400 6f746365 61507972 61006874 .DirectoryPath.a
01fb3470 6e456464 00797274 45746567 7972746e ddEntry.getEntry
01fb3480 6c656400 45657465 7972746e 74656700 .deleteEntry.get
01fb3490 44627553 63657269 69726f74 2e007365 SubDirectories..
01fb34a0 726f7463 74632e00 0000726f 00000000 ctor..ctor......
01fb34b0 00000000 00000000 00000000 00000000 ...............
I'm not sure if I'm on the right path of cause analysis.
That <Module> at the beginning is a sign of a dynamically generated assembly.
Load the SOS extension using .loadby sos clr (for a dump from the current machine) or .cordll -ve -u -l if you are debugging someone else's dump (this doesn't work well in the old WinDbg 6.x, but works well in WinDbg from the Windows Development Kit 8 and above).
Execute !eeheap and check the Module Thunk heaps section. It should contain thousands of records:
--------------------------------------
Module Thunk heaps:
Module 736b1000: Size: 0x0 (0) bytes.
Module 004f2ed4: Size: 0x0 (0) bytes.
...
<thousands of similar lines>
...
Total size: Size: 0x0 (0) bytes.
In my case it was assemblies generated for serialization by MS XmlSerializer class that took all the memory:
00000000`264b7640 00 00 3e 00 ce 01 00 00 00 00 00 00 00 3c 4d 6f ..>..........<Mo
00000000`264b7650 64 75 6c 65 3e 00 4d 69 63 72 6f 73 6f 66 74 2e dule>.Microsoft.
00000000`264b7660 47 65 6e 65 72 61 74 65 64 43 6f 64 65 00 52 65 GeneratedCode.Re
00000000`264b7670 66 45 6d 69 74 5f 49 6e 4d 65 6d 6f 72 79 4d 61 fEmit_InMemoryMa
00000000`264b7680 6e 69 66 65 73 74 4d 6f 64 75 6c 65 00 6d 73 63 nifestModule.msc
00000000`264b7690 6f 72 6c 69 62 00 53 79 73 74 65 6d 2e 53 65 63 orlib.System.Sec
I could avoid this leak by using only a single instance of XmlSerializer for each type.
In your case it seems that something else (wily.MockModule) is generating assemblies, so a different solution might be required.
I have a .bin file that contains slopes and intercepts. I'm using Fortran to read the values and I'm getting different values on machines running AIX and Linux. I believe the Linux data to be accurate. Does this have something to do with stack size or endianness?
For example, AIX max value is: 0.3401589687E+39 while Linux max value is: 6.031288
program read_bin_files
REAL :: slope(2500,1250)
INTEGER :: recl=2500*1250*4
OPEN(UNIT=8, FILE='MODIS_AVHRR_years_slope.bin', ACTION='READ', ACCESS='direct', FORM='unformatted', RECL=recl, IOSTAT=iostat)
READ(unit=8, REC = 1, IOSTAT = iostat) slope
print *, "Max slope value is:", maxval(slope)
CLOSE(8)
end
AIX runs (these days) on POWER CPUs, which are usually big-endian, whereas Linux usually runs on x86 processors, which are little-endian. So you are correct to suspect that endianness may be the problem. You report that the result of running this program
program read_bin_files
INTEGER*4 :: slope(2500,1250)
INTEGER :: recl=2500*1250*4
OPEN(UNIT=8, FILE='MODIS_AVHRR_years_slope.bin', ACTION='READ', &
ACCESS='direct', FORM='unformatted', RECL=recl)
READ(unit=8, REC = 1) slope
DO i = 1, 10
WRITE(*, '(Z8.8)') slope(1, i)
END DO
CLOSE(8)
end
is the following. ("AIX" and "Linux" are in quotes in the column headers because it's the CPU that matters here, not the operating system.)
"Linux" | "AIX"
------------+------------
3E C2 61 8F | 8F 61 C2 3E
3E F5 64 52 | 52 64 F5 3E
BC F3 E0 7E | 7E E0 F3 BC
BF B9 71 0D | 0D 71 B9 BF
3E F5 B9 73 | 73 B9 F5 3E
3F 29 3C 2F | 2F 3C 29 3F
3E DC C2 09 | 09 C2 DC 3E
3F 66 86 89 | 89 86 66 3F
3E 5B 91 A9 | A9 91 5B 3E
3F 67 73 25 | 25 73 67 3F
In each row, the right-hand half is the mirror image of the left-hand half. That demonstrates that the issue is endianness. What we still don't know is which byte order is correct. The answer to that question will almost certainly be "the byte order used by the CPU that ran the program that generated the file."
If you are using GNU Fortran, the CONVERT specifier to OPEN should solve the problem, provided you can figure out which way around the data is supposed to be interpreted. However, I think that's an extension. In the general case, I don't know enough FORTRAN to tell you what to do.
If you have control over the process generating these data files, you can avoid the entire problem in the future by switching both sides to a self-describing data format, such as HDF.
Your AIX machine is likely a big-endian RISC system and your Linux machine is likely a PC or other Intel platform. Just convert the endianness.
I use these procedures for 4 byte and 8 byte variables (use iso_fortran_env in the module):
elemental function SwapB32(x) result(res)
real(real32) :: res
real(real32),intent(in) :: x
character(4) :: bytes
integer(int32) :: t
real(real32) :: rbytes, rt
equivalence (rbytes, bytes)
equivalence (t, rt)
rbytes = x
t = ichar(bytes(4:4),int32)
t = ior( ishftc(ichar(bytes(3:3),int32),8), t )
t = ior( ishftc(ichar(bytes(2:2),int32),16), t )
t = ior( ishftc(ichar(bytes(1:1),int32),24), t )
res = rt
end function
elemental function SwapB64(x) result(res)
real(real64) :: res
real(real64),intent(in) :: x
character(8) :: bytes
integer(int64) :: t
real(real64) :: rbytes, rt
equivalence (rbytes, bytes)
equivalence (t, rt)
rbytes = x
t = ichar(bytes(8:8),int64)
t = ior( ishftc(ichar(bytes(7:7),int64),8), t )
t = ior( ishftc(ichar(bytes(6:6),int64),16), t )
t = ior( ishftc(ichar(bytes(5:5),int64),24), t )
t = ior( ishftc(ichar(bytes(4:4),int64),32), t )
t = ior( ishftc(ichar(bytes(3:3),int64),40), t )
t = ior( ishftc(ichar(bytes(2:2),int64),48), t )
t = ior( ishftc(ichar(bytes(1:1),int64),56), t )
res = rt
end function
usage:
SLOPE = SwapB32(SLOPE)
There are other ways. Some compilers support the non-standard OPEN(..., CONVERT='big_endian', ...) and some have command-line options like -fconvert=big-endian.
That elemental function SwapB64 is elegant, and good to have for these issues.
Alternatively, try the CONVERT specifier with 'big_endian', 'little_endian', etc.
(Personally I would try both.)
!OPEN(UNIT=8, FILE='MODIS_AVHRR_years_slope.bin', ACTION='READ', ACCESS='direct', FORM='unformatted', RECL=recl, IOSTAT=iostat)
OPEN(UNIT=8, FILE='MODIS_AVHRR_years_slope.bin', CONVERT='big_endian', ACTION='READ', ACCESS='direct', FORM='unformatted', RECL=recl, IOSTAT=iostat)
Background: I'm using node.js to get the volume setting from a device via serial connection. I need to obtain this data as an integer value.
I have the data in a buffer ('buf'), and am using readInt16BE() to convert to an int, as follows:
console.log( buf )
console.log( buf.readInt16BE(0) )
Which gives me the following output as I adjust the external device:
<Buffer 00 7e>
126
<Buffer 00 7f>
127
<Buffer 01 00>
256
<Buffer 01 01>
257
<Buffer 01 02>
258
Problem: All looks well until we reach 127, then we take a jump to 256. Maybe it's something to do with signed and unsigned integers - I don't know!
Unfortunately I have very limited documentation about the external device, I'm having to reverse engineer it! Is it possible it only sends a 7-bit value? Hopefully there is a way around this?
Regarding a solution - I must also be able to convert back from int to this format!
Question: How can I create a sequential range of integers when 7F seems to be the largest value my device sends, which causes a big jump in my integer scale?
Thanks :)
127 is the maximum value of a signed 8-bit integer. If the value is overflowing into the next byte at 128, it is safe to assume you are not being sent a 16-bit value, but rather 2 signed 8-bit values, and that reading the buffer as a 16-bit integer is incorrect.
I would start by using the first byte as a multiplier of 128 and add the second byte, this will give the series you are seeking.
buf = Buffer.from([0, 127]) // <Buffer 00 7f>
buf.readInt8(0) * 128 + buf.readInt8(1)
>127
buf = Buffer.from([1, 0]) // <Buffer 01 00>
buf.readInt8(0) * 128 + buf.readInt8(1)
>128
buf = Buffer.from([1, 1]) // <Buffer 01 01>
buf.readInt8(0) * 128 + buf.readInt8(1)
>129
The way to get back is to divide by 128, round it down to the nearest integer for the first byte, and the second byte contains the remainder.
i = 129
buf = Buffer.from([Math.floor(i / 128), i % 128])
// <Buffer 01 01>
Needed to treat the data as two signed 8-bit values. As per #forrestj the solution is to do:
valueInt = buf.readInt8(0) * 128 + buf.readInt8(1)
We can also convert the int value into the original format by doing the following:
byte1 = Math.floor(valueInt / 128)
byte2 = valueInt % 128
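The two directions above can be wrapped up as a pair of helpers for reuse. The function names are mine; since each byte stays below 0x80, readUInt8 and readInt8 behave identically here, and I use the unsigned variant:

```javascript
// Decode two 7-bit-per-byte bytes into one integer (first byte is the
// multiplier of 128, second byte is the remainder).
function decode7bit(buf) {
  return buf.readUInt8(0) * 128 + buf.readUInt8(1);
}

// Encode an integer back into the device's two-byte format.
function encode7bit(value) {
  return Buffer.from([Math.floor(value / 128), value % 128]);
}

console.log(decode7bit(Buffer.from([0x01, 0x02]))); // 130
console.log(encode7bit(130));                       // <Buffer 01 02>
```

A quick round trip (encode7bit(decode7bit(buf)) and vice versa) is a convenient sanity check when reverse engineering the device's range.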