FASM - IsWow64Process - Checking Boolean Result - 64-bit

I am trying to determine if the PC my app is running on is x64 or x86.
Here is my current code:
format PE GUI 4.0
include "Win32A.Inc"
entry start
section '.idata' import data readable writeable
library kernel32,"KERNEL32.DLL",user32,"USER32.DLL"
import kernel32,\
IsWow64Process,"IsWow64Process",\
GetCurrentProcess,"GetCurrentProcess",\
ExitProcess,"ExitProcess"
import user32,\
MessageBox,"MessageBoxA"
section '.data' data readable writeable
hProcess dd ?
hResult dd ?
section '.code' code readable executable
start:
invoke GetCurrentProcess
mov [hProcess],eax
invoke IsWow64Process,hProcess,hResult
cmp [hResult],1
je Is64
cmp [hResult],0
je Is32
invoke ExitProcess,0
Is64:
invoke MessageBox,0,'64','AR',0
invoke ExitProcess,0
Is32:
invoke MessageBox,0,'32','AR',0
invoke ExitProcess,0
It simply crashes upon execution.
What is the proper way to check the value of a boolean? Am I doing that part correctly?
Thanks for any help solving this issue.

To be able to declare inline strings, you need to include the extended headers:
include "Win32AX.Inc"
or else '64' and the other string literals will be interpreted as constants.
You're also passing the address of hProcess instead of its value. IsWow64Process expects the handle itself (but a pointer for the output BOOL), so the call should be:
invoke IsWow64Process,[hProcess],hResult

Related

Put a breakpoint in shared library with GEF GDB

I want to put a breakpoint at a specific offset in a Linux shared library (at offset 0x1234 inside libTest.so) while debugging with GEF GDB, and I want to set it from a GDB script.
If I run vmmap libTest.so I get
[ Legend: Code | Heap | Stack ]
Start End Offset Perm Path
0x00000012a1XXX 0x00000012a3XXX 0x00000000000000 r-x /lib/libTest.so
0x00000012a3XXX 0x00000012a5XXX 0x000000000XXXX --- /lib/libTest.so
0x00000012aXXXX 0x00000012aXXXX 0x000000000XXXX r-- /lib/libTest.so
0x00000012aXXXX 0x00000012aXXXX 0x000000000XXXX rw- /lib/libTest.so
The first line is where the code is located, so I want to put a breakpoint at 0x00000012a1XXX+0x1234.
For now I put this code in a GDB script:
python
out = gdb.execute('vmmap libTest.so', False, True)
addr_base = out.split('\n')[2].split(' ')[0]
gdb.execute('b *' + hex(int(addr_base, 16) + 0x1234))
end
But that is fragile code.
Is there any simple way?
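One way to keep the script manageable is to isolate the address arithmetic in a small helper that can be tested outside GDB. A minimal sketch (the helper name is mine), assuming the first whitespace-separated field of a vmmap row is the mapping's start address:

```python
def breakpoint_address(vmmap_line, offset):
    # First field of a vmmap row is the start address of the mapping.
    base = int(vmmap_line.split()[0], 16)
    return hex(base + offset)
```

Inside the GDB script you would then call something like gdb.execute('b *' + breakpoint_address(row, 0x1234)).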

How to make "readelf --hex-dump" give little-endian output

I am using readelf --hex-dump=<section name> <elf> to get a hex dump of a specific section. However, each DWORD in the result reads as big-endian. Is there a native way to make the output little-endian?
Hex dump of section 'PrgCode':
0x1ffe0000 b0b54ef2 0010cef2 00000168 490507d5 ..N........hI...
0x1ffe0010 4ff48061 c0f88010 bff34f8f bff36f8f O..a......O...o.
0x1ffe0020 0021c5f2 2d410a68 4cf22050 cdf62810 .!..-A.hL. P..(.
0x1ffe0030 92040246 5ebf4cf2 20524a60 4df62812 ...F^.L. RJ`M.(.
0x1ffe0040 4a604ff6 ff728a60 0b6843f0 200323f0 J`O..r.`.hC. .#.
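readelf prints the section bytes in file order, grouped four to a column, so on little-endian data each group reads byte-reversed. To view a group as a little-endian DWORD you can reverse its bytes yourself; a minimal Python sketch (the helper name is mine):

```python
def dword_as_little_endian(hex_group):
    # "b0b54ef2" (bytes in file order) -> "f24eb5b0" (the DWORD's value)
    return bytes.fromhex(hex_group)[::-1].hex()
```

For example, the first group of the dump above, b0b54ef2, is the DWORD 0xf24eb5b0.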

Driver is unable to get I/O DMA Adapter

My driver for a PCI bus-master device without scatter/gather capability calls IoGetDmaAdapter(), but fails with an access violation (0xFFFFFFFFC0000005), which causes a BSOD.
Here's how I set it up:
RtlZeroMemory(&deviceDescription, sizeof(DEVICE_DESCRIPTION));
deviceDescription.Master = TRUE; // this is a bus-master device without scatter/gather ability
deviceDescription.Dma32BitAddresses = TRUE; // this device is unable to perform 64-bit addressing
deviceDescription.InterfaceType = InterfaceTypeUndefined;
KdBreakPoint();
deviceDescription.Version = DEVICE_DESCRIPTION_VERSION2;
IoGetDmaAdapter(deviceObject, &deviceDescription, &fakeRegs);
Here's my Windows kernel debugging session:
MyDriver!AllocateHardWareResource+0x313:
fffff803`319626a3 488b8424e8000000 mov rax,qword ptr [rsp+0E8h]
MyDriver!AllocateHardWareResource+0x324:
fffff803`319626b4 488d442478 lea rax,[rsp+78h]
MyDriver!AllocateHardWareResource+0x34d:
fffff803`319626dd 8b442450 mov eax,dword ptr [rsp+50h]
MyDriver!AllocateHardWareResource+0x358:
fffff803`319626e8 c684248900000001 mov byte ptr [rsp+89h],1
MyDriver!AllocateHardWareResource+0x360:
fffff803`319626f0 c784248000000002000000 mov dword ptr [rsp+80h],2
MyDriver!AllocateHardWareResource+0x36b:
fffff803`319626fb 4c8d44244c lea r8,[rsp+4Ch]
KDTARGET: Refreshing KD connection
KDTARGET: Refreshing KD connection
*** Fatal System Error: 0x0000007e
(0xFFFFFFFFC0000005,0x0000000000000000,0xFFFF9400DE25D4B8,0xFFFF9400DE25CCF0)
WARNING: This break is not a step/trace completion.
The last command has been cleared to prevent
accidental continuation of this unrelated event.
Check the event, location and thread before resuming.
Break instruction exception - code 80000003 (first chance)
A fatal system error has occurred.
Before the crash at Guard_Dispatch_iCall_NOP, I see the following call-stack:
HalpGetCacheCoherency + 6D
HalGetAdapterV2 + A8
IoGetDmaAdapter + C0
IoGetDmaAdapter + C0
IoGetDmaAdapter + C0
My Call-Site
I checked that Physical Device Object has the same address as originally provided to my AddDevice handler.
How should I ask politely to avoid a "Sorry Dave, I can't do that" from Windows Kernel I/O Manager?
When my driver calls IoGetDmaAdapter(), it receives two interface queries via IRP_MN_QUERY_INTERFACE: GUID_BUS_INTERFACE_STANDARD and GUID_DMA_CACHE_COHERENCY_INTERFACE.
GUID_DMA_CACHE_COHERENCY_INTERFACE is new in Windows 10 / Server 2016.
A GUID_DMA_CACHE_COHERENCY_INTERFACE query should normally be passed down to the next driver in the stack. I was making the mistake of completing it with a success status, when I should have left it alone.

Reading console output from mplayer to parse track's position/length

When you run mplayer, it will display the playing track's position and length (among some other information) through, what I'd assume is, stdout.
Here's a sample output from mplayer:
MPlayer2 2.0-728-g2c378c7-4+b1 (C) 2000-2012 MPlayer Team
Cannot open file '/home/pi/.mplayer/input.conf': No such file or directory
Failed to open /home/pi/.mplayer/input.conf.
Cannot open file '/etc/mplayer/input.conf': No such file or directory
Failed to open /etc/mplayer/input.conf.
Playing Bomba Estéreo - La Boquilla [Dixone Remix].mp3.
Detected file format: MP2/3 (MPEG audio layer 2/3) (libavformat)
[mp3 @ 0x75bc15b8]max_analyze_duration 5000000 reached
[mp3 @ 0x75bc15b8]Estimating duration from bitrate, this may be inaccurate
[lavf] stream 0: audio (mp3), -aid 0
Clip info:
album_artist: Bomba Estéreo
genre: Latin
title: La Boquilla [Dixone Remix]
artist: Bomba Estéreo
TBPM: 109
TKEY: 11A
album: Unknown
date: 2011
Load subtitles in .
Selected audio codec: MPEG 1.0/2.0/2.5 layers I, II, III [mpg123]
AUDIO: 44100 Hz, 2 ch, s16le, 320.0 kbit/22.68% (ratio: 40000->176400)
AO: [pulse] 44100Hz 2ch s16le (2 bytes per sample)
Video: no video
Starting playback...
A: 47.5 (47.4) of 229.3 (03:49.3) 4.1%
The last line (A: 47.5 (47.4) of 229.3 (03:49.3) 4.1%) is what I'm trying to read but, for some reason, it's never received by the Process.OutputDataReceived event handler.
Am I missing something? Is mplayer using some non-standard way of outputting the "A:" line to the console?
Here's the code in case it helps:
Public Overrides Sub Play()
player = New Process()
player.EnableRaisingEvents = True
With player.StartInfo
.FileName = "mplayer"
.Arguments = String.Format("-ss {1} -endpos {2} -volume {3} -nolirc -vc null -vo null ""{0}""",
tmpFileName,
mTrack.StartTime,
mTrack.EndTime,
100)
.CreateNoWindow = False
.UseShellExecute = False
.RedirectStandardOutput = True
.RedirectStandardError = True
.RedirectStandardInput = True
End With
AddHandler player.OutputDataReceived, AddressOf DataReceived
AddHandler player.ErrorDataReceived, AddressOf DataReceived
AddHandler player.Exited, Sub() KillPlayer()
player.Start()
player.BeginOutputReadLine()
player.BeginErrorReadLine()
waitForPlayer.WaitOne()
KillPlayer()
End Sub
Private Sub DataReceived(sender As Object, e As DataReceivedEventArgs)
If e.Data Is Nothing Then Exit Sub
If e.Data.Contains("A: ") Then
' Parse the data
End If
End Sub
Apparently, the only solution is to run mplayer in "slave" mode, as explained here: http://www.mplayerhq.hu/DOCS/tech/slave.txt
In this mode we can send commands to mplayer (via stdin) and the response (if any) will be sent via stdout.
Here's a very simple implementation that displays mplayer's current position (in seconds):
using System;
using System.Threading;
using System.Diagnostics;
using System.Collections.Generic;
namespace TestMplayer {
class MainClass {
private static Process player;
public static void Main(string[] args) {
String fileName = "/home/pi/Documents/Projects/Raspberry/RPiPlayer/RPiPlayer/bin/Electronica/Skrillex - Make It Bun Dem (Damian Marley) [Butch Clancy Remix].mp3";
player = new Process();
player.EnableRaisingEvents = true;
player.StartInfo.FileName = "mplayer";
player.StartInfo.Arguments = String.Format("-slave -nolirc -vc null -vo null \"{0}\"", fileName);
player.StartInfo.CreateNoWindow = false;
player.StartInfo.UseShellExecute = false;
player.StartInfo.RedirectStandardOutput = true;
player.StartInfo.RedirectStandardError = true;
player.StartInfo.RedirectStandardInput = true;
player.OutputDataReceived += DataReceived;
player.Start();
player.BeginOutputReadLine();
player.BeginErrorReadLine();
Thread getPosThread = new Thread(GetPosLoop);
getPosThread.Start();
}
private static void DataReceived(object o, DataReceivedEventArgs e) {
Console.Clear();
Console.WriteLine(e.Data);
}
private static void GetPosLoop() {
do {
Thread.Sleep(250);
player.StandardInput.Write("get_time_pos" + Environment.NewLine);
} while(!player.HasExited);
}
}
}
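For reference, get_time_pos answers on stdout with a line of the form ANS_TIME_POSITION=47.5 (per slave.txt). A minimal parser for that reply, sketched in Python with a helper name of my own choosing:

```python
def parse_time_pos(line):
    # Reply format for mplayer's slave-mode get_time_pos command,
    # e.g. "ANS_TIME_POSITION=47.5"; returns None for any other line.
    prefix = "ANS_TIME_POSITION="
    if line.startswith(prefix):
        return float(line[len(prefix):])
    return None
```

Filtering stdout through something like this lets you ignore the regular status output and react only to slave-mode replies.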
I found the same problem with another application that works in a similar way (dbPowerAmp). In my case, the process wrote its stdout buffer in Unicode, so I had to set StandardOutputEncoding and StandardErrorEncoding to Unicode before I could start reading.
Your problem seems to be the same: if "A" cannot be found in output that clearly contains an "A", that probably means the characters differ when read with the encoding you are currently using.
So try setting the proper encoding when reading the process output; try setting these to Unicode:
ProcessStartInfo.StandardOutputEncoding
ProcessStartInfo.StandardErrorEncoding
Using "read" instead of "readline", and treating the input as binary, will probably fix your problem.
First off, yes, mplayer slave mode is probably what you want. However, if you're determined to parse the console output, it is possible.
Slave mode exists for a reason, and if you're half serious about using mplayer from within your program, it's worth a little time to figure out how to properly use it. That said, I'm sure there's situations where the wrapper is the appropriate approach. Maybe you want to pretend that mplayer is running normally, and control it from the console, but secretly monitor file position to resume it later? The wrapper might be easier than translating all of mplayers keyboard commands into slave mode commands?
Your problem is likely that you're trying to use "readline" from within python on an endless line. That line of output contains \r instead of \n as the line separator, so readline will treat it as a single endless line. sed also fails this way, but other commands (such as grep) treat \r as \n under some circumstances.
Handling of \r is inconsistent and can't be relied on. For instance, my version of grep treats \r as \n when matching if the output is a console, and uses \n to separate the output; but if the output is a pipe, it treats \r as any other character.
For instance:
mplayer TMBG-Older.mp3 2>/dev/null | tr '\r' '\n' | grep "^A: " | sed 's/^A: *\([0-9.]*\) .*/\1/' | tail -n 1
I'm using "tr" here to force it to '\n', so other commands in the pipe can deal with it in a consistent manner.
This pipeline of commands outputs a single line, containing ONLY the ending position in seconds, with decimal point. But if you remove the "tr" command from this pipe, bad things happen. On my system, it shows only "0.0" as the position, as "sed" doesn't deal well with the '\r' line separators, and ALL the position updates are treated as the same line.
I'm fairly sure python doesn't handle \r well either, and that's likely your problem. If so, using "read" instead of "readline" and treating it like binary is probably the correct solution.
There are other problems with this approach, though. Buffering is a big one: ^C causes this command to output nothing, and mplayer must quit gracefully to show anything at all, as pipelines buffer things and buffers get discarded on SIGINT.
If you really wanted to get fancy, you could probably cat several input sources together, tee the output several ways, and REALLY write a wrapper around mplayer. But it would be a wrapper that's fragile and complicated, and that might break every time mplayer is updated, a user does something unexpected, the name of the file being played contains something weird, or a SIGSTOP or SIGINT arrives. And probably other things that I haven't thought of.
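The normalization the tr stage performs can also be done in code, if you are capturing the output yourself. A minimal Python sketch (the helper name is mine) that normalizes \r to \n and extracts the last reported position:

```python
import re

def latest_position(raw_output):
    # mplayer rewrites its status line in place using '\r';
    # normalize to '\n' so each "A:" update becomes its own line.
    lines = raw_output.replace('\r', '\n').splitlines()
    for line in reversed(lines):
        m = re.match(r'A:\s*([0-9.]+)', line.strip())
        if m:
            return float(m.group(1))
    return None
```

For example, on captured output ending in "A: 47.5 (47.4) of 229.3 (03:49.3) 4.1%" this returns 47.5.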

Can anyone tell me what the encoding of this string is? It's meant to be base64

cpxSR2bnPUihaNxIFFA8Sc+8gUnWuJxJi8ywSW5ju0npWrFJHW2MSZAeMklcZ71IjrBySF2ci0gdecRI0vD/SM4ZF0m1ZSJJBY8bSZJl/0intaxIlQJBSPdY3EdBLM9Hp4wLSOK8Nki8L1pIoglxSAvNbkjHg0VIDlv7R6B2Y0elCqVGFWuVRgagAkdxHTdHELxRR9i2VkdyEUlHU84kRzTS2kalKFxG
This is a string from an XML file produced by my mass spectrometer. I am trying to write a program to load two such files, subtract one set of values from the other, and write the results to a new file. According to the specification for the .mzML format, the numerical data is supposed to be base64-encoded. I can't convert this data string to anything legible using any of the many online base64 converters, or using Notepad++ with the MIME Tools plugin's base64 converter.
The string, in the context of the results file, looks like this:
<binaryDataArray encodedLength="224">
<cvParam cvRef="MS" accession="MS:1000515" name="intensity array" unitAccession="MS:1000131" unitName="number of counts" unitCvRef="MS"/>
<cvParam cvRef="MS" accession="MS:1000521" name="32-bit float" />
<cvParam cvRef="MS" accession="MS:1000576" name="no compression" />
<binary>cpxSR2bnPUihaNxIFFA8Sc+8gUnWuJxJi8ywSW5ju0npWrFJHW2MSZAeMklcZ71IjrBySF2ci0gdecRI0vD/SM4ZF0m1ZSJJBY8bSZJl/0intaxIlQJBSPdY3EdBLM9Hp4wLSOK8Nki8L1pIoglxSAvNbkjHg0VIDlv7R6B2Y0elCqVGFWuVRgagAkdxHTdHELxRR9i2VkdyEUlHU84kRzTS2kalKFxG</binary>
I can't proceed until I can work out what format this encoding is meant to be!
Thanks in advance for any replies.
The string is valid base64, but it decodes to raw 32-bit floats rather than text. You can use this trivial program to print the decoded values:
#include <stdio.h>

/* Read 4-byte floats from stdin and print them
   (assumes a little-endian host, e.g. x86). */
int main(void)
{
    float f;
    while (fread(&f, 1, 4, stdin) == 4)
        printf("%f\n", f);
    return 0;
}
I compiled this to "floatdecode" and used this command:
echo "cpxSR2bnPUihaNxIFFA8Sc+8gUnWuJxJi8ywSW5ju0npWrFJHW2MSZAeMklcZ71IjrBySF2ci0gdecRI0vD/SM4ZF0m1ZSJJBY8bSZJl/0intaxIlQJBSPdY3EdBLM9Hp4wLSOK8Nki8L1pIoglxSAvNbkjHg0VIDlv7R6B2Y0elCqVGFWuVRgagAkdxHTdHELxRR9i2VkdyEUlHU84kRzTS2kalKFxG" | base64 -d | ./floatdecode
Output is:
53916.445312
194461.593750
451397.031250
771329.250000
1062809.875000
1283866.750000
1448337.375000
1535085.750000
1452893.125000
1150371.625000
729577.000000
387898.875000
248514.218750
285922.906250
402376.906250
524166.562500
618908.875000
665179.312500
637168.312500
523052.562500
353709.218750
197642.328125
112817.929688
106072.507812
142898.609375
187123.531250
223422.937500
246822.531250
244532.171875
202255.109375
128694.109375
58230.625000
21125.322266
19125.541016
33440.023438
46877.441406
53692.062500
54966.843750
51473.445312
42190.324219
28009.101562
14090.161133
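The same decode can be sketched in Python, assuming the "32-bit float" / "no compression" cvParams and little-endian byte order (the helper name is mine):

```python
import base64
import struct

def decode_float_array(b64_text):
    # mzML binary arrays: base64 over packed little-endian 32-bit floats
    raw = base64.b64decode(b64_text)
    return struct.unpack('<%df' % (len(raw) // 4), raw)
```

Feeding it the first bytes of the string above reproduces the first two values of the C program's output (53916.445312, 194461.59375).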
Yet another option is a Java Base64 decoder, with options to uncompress should you need it.
The vendor spec indicated "32-bit float" = IEEE-754 and specified little-endian.
Schmidt's online converter shows the bit pattern for IEEE-754.
One more step, with the Notepad++ TextFX plugin (after the Base64 decode you already did), lets you look at the hex codes:
select the text
TextFX > TextFX Convert > Convert text to Hex-32
This shows:
"000000000 72 9C 52 47 66 E7 3D 48- ... 6E 63 BB 49 |rœRGfç=H¡hÜHP
Little-endian: 47529C72 converts (via Schmidt) as shown above by David.
You can access such data from mzML files in Python through pymzML, a python interface to mzML files.
http://pymzml.github.com/
