How to get pixels from window using Direct2D - visual-c++

My problem is getting at the pixels in a window; I can't find a way to do this. I'm using standard Windows functions and Direct2D (not DirectDraw).
I am using standard initialization of new window:
WNDCLASS wc;
wc.style = CS_OWNDC;
wc.lpfnWndProc = WndProc;
wc.cbClsExtra = 0;
wc.cbWndExtra = 0;
wc.hInstance = hInstance;
wc.hIcon = LoadIcon(NULL, IDI_APPLICATION);
wc.hCursor = LoadCursor(NULL, IDC_ARROW);
wc.hbrBackground = (HBRUSH)(6);
wc.lpszMenuName = 0;
wc.lpszClassName = L"WINDOW";
RegisterClass(&wc);
hWnd = CreateWindow(L"WINDOW", L"Game", WS_OVERLAPPEDWINDOW,100,100,1024,768,NULL,NULL,hInstance,NULL);
Then I create an ID2D1Factory object and draw the bitmaps in the window:
HWND hWnd = NULL;
srand((unsigned int)time((time_t)NULL));
ID2D1Factory* factory = NULL;
ID2D1HwndRenderTarget* rt = NULL;
CoInitializeEx(NULL,COINIT_MULTITHREADED);
My_CreateWindow(&hWnd, hInstance);
My_CreateFactory(&hWnd, factory, rt);
D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED,&factory);
factory->CreateHwndRenderTarget(RenderTargetProperties(), HwndRenderTargetProperties(hWnd,SizeU(800,600)), &rt);
// tons of code
rt->BeginDraw();
rt->DrawBitmap(background[0], RectF(0, 0, 800, 600));
rt->DrawBitmap(GIFHorse[(int)iterator], RectF(0 + MyX - 100, 0 + MyY - 100, 100 + MyX, 100 + MyY));
rt->DrawBitmap(Star[(int)iterator], RectF(0 + starX, 0 + starY, 50 + starX, 50 + starY));
rt->EndDraw();
I need to detect hits (I am trying to make a simple game), which is why I need access to all the pixels in the window.
I'm considering other algorithms, but they are harder to implement and I want to start with something easier.
I know about the GetPixel() function, but I can't work out what HDC I should pass to it.

There is no simple and easy way to get to D2D pixels.
Simple collision detection can be implemented using geometries: if you have the outlines of your objects as ID2D1Geometry objects, you can compare them using ID2D1Geometry::CompareWithGeometry.
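For illustration, here is a minimal sketch of that approach (not from the original answer): two placeholder circular outlines are built with CreateEllipseGeometry and tested with CompareWithGeometry. The shapes, radii, and the helper name Collides are assumptions; in a real game you would build geometries that match your sprites.

// Hedged sketch: approximate each object's outline with an ellipse geometry and
// ask Direct2D how the two outlines relate. "factory" is the ID2D1Factory from above.
bool Collides(ID2D1Factory* factory,
              D2D1_POINT_2F centerA, float radiusA,
              D2D1_POINT_2F centerB, float radiusB)
{
    ID2D1EllipseGeometry* a = NULL;
    ID2D1EllipseGeometry* b = NULL;
    factory->CreateEllipseGeometry(D2D1::Ellipse(centerA, radiusA, radiusA), &a);
    factory->CreateEllipseGeometry(D2D1::Ellipse(centerB, radiusB, radiusB), &b);

    D2D1_GEOMETRY_RELATION relation = D2D1_GEOMETRY_RELATION_UNKNOWN;
    // Both geometries are already in render-target space, so the transform is identity.
    a->CompareWithGeometry(b, D2D1::Matrix3x2F::Identity(), &relation);

    a->Release();
    b->Release();

    // Anything other than DISJOINT (or UNKNOWN) means the outlines touch or overlap.
    return relation == D2D1_GEOMETRY_RELATION_OVERLAP
        || relation == D2D1_GEOMETRY_RELATION_CONTAINS
        || relation == D2D1_GEOMETRY_RELATION_IS_CONTAINED;
}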
If you insist on pixels, one way to access them is through the DC obtainable via IDXGISurface1::GetDC.
In this case you need to use the DXGI surface render target and the swap chain instead of the hwnd render target.
Step by step:
D3D11CreateDeviceAndSwapChain gives you the swap chain.
IDXGISwapChain::GetBuffer gives you the DXGI surface.
ID2D1Factory::CreateDxgiSurfaceRenderTarget gives you the render target.
IDXGISurface1::GetDC gives you the DC. Read the remarks on this page!
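Putting those four steps together, a rough and untested sketch might look like the following. Error handling is omitted, the variable names are placeholders rather than the question's, and you should still read the GetDC remarks before relying on it.

// Create a device and a GDI-compatible BGRA swap chain for the window.
DXGI_SWAP_CHAIN_DESC scd = {};
scd.BufferCount = 1;
scd.BufferDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;     // BGRA is required for D2D interop
scd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
scd.OutputWindow = hWnd;
scd.SampleDesc.Count = 1;
scd.Windowed = TRUE;
scd.Flags = DXGI_SWAP_CHAIN_FLAG_GDI_COMPATIBLE;        // needed for IDXGISurface1::GetDC

ID3D11Device* device = NULL;
ID3D11DeviceContext* context = NULL;
IDXGISwapChain* swapChain = NULL;
D3D11CreateDeviceAndSwapChain(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL,
    D3D11_CREATE_DEVICE_BGRA_SUPPORT, NULL, 0, D3D11_SDK_VERSION,
    &scd, &swapChain, &device, NULL, &context);

// Back buffer as a DXGI surface, then a Direct2D render target on top of it.
IDXGISurface1* surface = NULL;
swapChain->GetBuffer(0, __uuidof(IDXGISurface1), (void**)&surface);

D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
    D2D1_RENDER_TARGET_TYPE_DEFAULT,
    D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_IGNORE),
    0.0f, 0.0f, D2D1_RENDER_TARGET_USAGE_GDI_COMPATIBLE);

ID2D1RenderTarget* dxgiRT = NULL;
factory->CreateDxgiSurfaceRenderTarget(surface, &props, &dxgiRT);

// Draw as before, then read pixels from the DC between drawing and Present.
dxgiRT->BeginDraw();
// ... DrawBitmap calls ...
dxgiRT->EndDraw();

HDC dc = NULL;
surface->GetDC(FALSE, &dc);          // FALSE keeps the rendered contents
// COLORREF c = GetPixel(dc, x, y);  // per-pixel reads go here
surface->ReleaseDC(NULL);            // must be released before any further drawing
swapChain->Present(1, 0);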

Related

Clicking a moving object in a game

I have made some very simple bots for some web-based games and I want to move on to games that require more advanced features.
I have used pyautogui to bot in web-based games, and that has been easy because all the images are static (not moving). But when I want to click something that moves around in a game, such as a character or a creature, pyautogui is not very effective because it looks for pixels/colors that match exactly.
Can you suggest any references, libraries, or functions that can detect a model or character even while it is moving?
Here is an example of something I'd like to click on:
Moving creature Gif image
Thanks.
I noticed the image you linked to is a GIF of a mob from World of Warcraft.
As a hobby I have been designing bots for MMOs on and off over the past few years.
There are no specific Python libraries I'm aware of that will let you do what you're asking; however, taking WoW as an example...
If you are using Windows as your OS, you will be using Windows API calls to manipulate your game's target process (here wow.exe).
There are two primary approaches to this:
1) Out of process - you do everything by reading memory values from known offsets and respond by using the Windows API to simulate mouse and/or keyboard input (your choice); a minimal input-simulation sketch follows this list.
1a) I will quickly mention that although for most modern games it is not an option (due to built-in anti-cheating code), you can also manipulate the game by writing directly to memory. In WAR (Warhammer Online), when it was still live, I made a grind bot that wrote to memory wherever possible, as they had not enabled PunkBuster to protect the game from this. WoW is protected by the infamous "Warden."
2) DLL injection - WoW has a built-in API written in Lua. As a result, over the years many hobbyist programmers and hackers have taken apart the binary to reveal its inner workings. You might check out the Memory Editing forum on ownedcore.com if you want to work with WoW. Many have shared the known offsets in the binary where one can hook into Lua functions, and as a result perform in-game actions directly and also tap into needed information. Some have even shared their own DLLs.
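For the input half of approach 1, here is a minimal Win32 sketch (in C++, unlike the C# snippet below, and not part of the shared ownedcore code) of simulating a left click at a screen position:

#include <windows.h>

// Hedged sketch: move the real cursor, then press and release the left button there.
void ClickAt(int screenX, int screenY)
{
    SetCursorPos(screenX, screenY);

    INPUT inputs[2] = {};
    inputs[0].type = INPUT_MOUSE;
    inputs[0].mi.dwFlags = MOUSEEVENTF_LEFTDOWN;
    inputs[1].type = INPUT_MOUSE;
    inputs[1].mi.dwFlags = MOUSEEVENTF_LEFTUP;
    SendInput(2, inputs, sizeof(INPUT));
}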
You specifically mentioned clicking in-game 3d objects. I will close by sharing with you a snippet shared on ownedcore that allows one to do just this. This example encompasses use of both memory offsets and in-game function calls:
using System;
using SlimDX;
namespace VanillaMagic
{
public static class Camera
{
internal static IntPtr BaseAddress
{
get
{
var ptr = WoW.hook.Memory.Read<IntPtr>(Offsets.Camera.CameraPtr, true);
return WoW.hook.Memory.Read<IntPtr>(ptr + Offsets.Camera.CameraPtrOffset);
}
}
private static Offsets.CGCamera cam => WoW.hook.Memory.Read<Offsets.CGCamera>(BaseAddress);
public static float X => cam.Position.X;
public static float Y => cam.Position.Y;
public static float Z => cam.Position.Z;
public static float FOV => cam.FieldOfView;
public static float NearClip => cam.NearClip;
public static float FarClip => cam.FarClip;
public static float Aspect => cam.Aspect;
private static Matrix Matrix
{
get
{
var bCamera = WoW.hook.Memory.ReadBytes(BaseAddress + Offsets.Camera.CameraMatrix, 36);
var m = new Matrix();
m[0, 0] = BitConverter.ToSingle(bCamera, 0);
m[0, 1] = BitConverter.ToSingle(bCamera, 4);
m[0, 2] = BitConverter.ToSingle(bCamera, 8);
m[1, 0] = BitConverter.ToSingle(bCamera, 12);
m[1, 1] = BitConverter.ToSingle(bCamera, 16);
m[1, 2] = BitConverter.ToSingle(bCamera, 20);
m[2, 0] = BitConverter.ToSingle(bCamera, 24);
m[2, 1] = BitConverter.ToSingle(bCamera, 28);
m[2, 2] = BitConverter.ToSingle(bCamera, 32);
return m;
}
}
public static Vector2 WorldToScreen(float x, float y, float z)
{
var Projection = Matrix.PerspectiveFovRH(FOV * 0.5f, Aspect, NearClip, FarClip);
var eye = new Vector3(X, Y, Z);
var lookAt = new Vector3(X + Matrix[0, 0], Y + Matrix[0, 1], Z + Matrix[0, 2]);
var up = new Vector3(0f, 0f, 1f);
var View = Matrix.LookAtRH(eye, lookAt, up);
var World = Matrix.Identity;
var WorldPosition = new Vector3(x, y, z);
var ScreenPosition = Vector3.Project(WorldPosition, 0f, 0f, WindowHelper.WindowWidth, WindowHelper.WindowHeight, NearClip, FarClip, World*View*Projection);
return new Vector2(ScreenPosition.X, ScreenPosition.Y - 20f);
}
}
}
If the mob's colors are reasonably easy to distinguish from the background, you can use pyautogui's pixel matching.
import pyautogui
screen = pyautogui.screenshot()
# Use this to scan the area of the screen where the mob appears.
(R, G, B) = screen.getpixel((x, y))
# Compare to mob color
If colors vary you can use color tolerance:
pyautogui.pixelMatchesColor(x, y, (R, G, B), tolerance=5)

Frame buffer texture data update using DirectX

I am trying my hand at the DirectX 11 template in VS 2015 in VC++. I am using:
D3D11_MAPPED_SUBRESOURCE with Map and Unmap to update the texture.
Now I have a separate file in my project where I read pixels and need to upload them to this texture.
I am using a struct to hold the texture data :
struct Frames {
    int text_Width;
    int text_height;
    unsigned int text_Sz;
    unsigned char* text_Data;
};
I want to know how I can use this struct from a separate file to upload the texture data in my DirectX-based spinning cube file.
You don't mention what format the data is, which is essential to knowing how to do this, but let's assume your text_Data points to an array of R8G8B8A8 data (i.e. each pixel is 32-bits with 8-bits each of Red, Green, Blue, and Alpha in that order from LSB to MSB). If so, it would look like:
Frames f = ...; // your structure
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = UINT(f.text_Width);
desc.Height = UINT(f.text_height);
desc.MipLevels = desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
desc.CPUAccessFlags = 0;
desc.MiscFlags = 0;
D3D11_SUBRESOURCE_DATA initData = {};
initData.pSysMem = f.text_Data;
initData.SysMemPitch = UINT( 4 * f.text_Width );
initData.SysMemSlicePitch = UINT( f.text_Sz );
Microsoft::WRL::ComPtr<ID3D11Texture2D> pTexture;
HRESULT hr = d3dDevice->CreateTexture2D( &desc, &initData, &pTexture );
if (FAILED(hr))
...
Note this is covered on MSDN in the How to use Direct3D 11 topics, although the sample code style there is a little dated.
Take a look at the DirectX Tool Kit for DirectX 11 and the tutorials in particular. There's no reason to write your own loader when you can just use DDSTextureLoader or WICTextureLoader.
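Since the question mentions updating the texture each frame with Map/Unmap, here is a minimal sketch of that path. It assumes the texture was created with D3D11_USAGE_DYNAMIC and D3D11_CPU_ACCESS_WRITE (the desc above uses D3D11_USAGE_DEFAULT, which cannot be mapped) and that d3dContext is the immediate context; both names are placeholders.

// Hedged sketch: per-frame update of a dynamic R8G8B8A8 texture from the Frames struct.
D3D11_MAPPED_SUBRESOURCE mapped = {};
HRESULT hr = d3dContext->Map(pTexture.Get(), 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
if (SUCCEEDED(hr))
{
    const unsigned char* src = f.text_Data;
    unsigned char* dst = static_cast<unsigned char*>(mapped.pData);
    const size_t rowBytes = size_t(f.text_Width) * 4;   // 4 bytes per R8G8B8A8 pixel
    for (int y = 0; y < f.text_height; ++y)
    {
        // Copy row by row: the mapped RowPitch is usually wider than the tight row size.
        memcpy(dst + size_t(y) * mapped.RowPitch, src + size_t(y) * rowBytes, rowBytes);
    }
    d3dContext->Unmap(pTexture.Get(), 0);
}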

vkCreateWin32SurfaceKHR not writing to surface

I'm trying to get a simple test of Vulkan working. I've been following the LunarG tutorials, but ran into the problem that vkCreateWin32SurfaceKHR seems to do nothing. Namely, surface is not being written to. The function vkCreateWin32SurfaceKHR returns 0, so it isn't reporting a failure. Any help is appreciated.
// create window
sdlWindow = SDL_CreateWindow(APP_SHORT_NAME, SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, width, height, 0);
struct SDL_SysWMinfo wmInfo;
SDL_VERSION(&wmInfo.version);
SDL_GetWindowWMInfo(sdlWindow, &wmInfo);
hWnd = wmInfo.info.win.window;
hInstance = GetModuleHandle(NULL);
// create a surface attached to the window
VkWin32SurfaceCreateInfoKHR surface_info = {};
surface_info.sType = VK_STRUCTURE_TYPE_WIN32_SURFACE_CREATE_INFO_KHR;
surface_info.pNext = NULL;
surface_info.hinstance = hInstance;
surface_info.hwnd = hWnd;
sanity(!vkCreateWin32SurfaceKHR(inst, &surface_info, NULL, &surface));
Sascha Willems correctly identified that I was not requesting the extensions necessary to create a surface. I changed my code to request extensions as shown below, and now everything works as expected.
// create an instance
vector<char*> enabledInstanceExtensions;
enabledInstanceExtensions.push_back(VK_KHR_SURFACE_EXTENSION_NAME);
enabledInstanceExtensions.push_back(VK_KHR_WIN32_SURFACE_EXTENSION_NAME);
#ifdef VALIDATE_VULKAN
enabledInstanceExtensions.push_back("VK_EXT_debug_report");
#endif
vector<char*> enabledInstanceLayers;
#ifdef VALIDATE_VULKAN
enabledInstanceLayers.push_back("VK_LAYER_LUNARG_standard_validation");
#endif
VkInstanceCreateInfo inst_info = {};
inst_info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
inst_info.pNext = NULL;
inst_info.flags = 0;
inst_info.pApplicationInfo = &app_info;
inst_info.enabledExtensionCount = (uint32_t)enabledInstanceExtensions.size();
inst_info.ppEnabledExtensionNames = enabledInstanceExtensions.data();
inst_info.enabledLayerCount = (uint32_t)enabledInstanceLayers.size();
inst_info.ppEnabledLayerNames = enabledInstanceLayers.data();
sanity(!vkCreateInstance(&inst_info, NULL, &instance));
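As an aside (not part of the original fix), a short sketch of verifying up front that the instance actually exposes the requested extensions, so a missing one fails loudly instead of silently:

// Hedged sketch: enumerate the available instance extensions and check each requested name.
uint32_t count = 0;
vkEnumerateInstanceExtensionProperties(NULL, &count, NULL);
std::vector<VkExtensionProperties> available(count);
vkEnumerateInstanceExtensionProperties(NULL, &count, available.data());

for (const char* wanted : enabledInstanceExtensions)
{
    bool found = false;
    for (const auto& ext : available)
        if (strcmp(ext.extensionName, wanted) == 0) { found = true; break; }
    if (!found)
        printf("missing instance extension: %s\n", wanted);
}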
Besides what Joe added in his answer, I will also note that vkCreateWin32SurfaceKHR() does not fail when given invalid arguments; it still returns VK_SUCCESS. I'm not sure whether this is also the case on other platforms.
By invalid arguments I am referring to the two most important members of the VkWin32SurfaceCreateInfoKHR structure: hinstance and hwnd.
So pay close attention to those two members; this tricked me a few times.
I'm not sure why it returns VK_SUCCESS when given invalid arguments; there may be internal reasons for it.

Memory leak in CoreGraphics while switching sprite texture. Code inside

EDIT: I tried using the push/pop approach, but now it crashes.
I have a feeling that what I'm attempting to do is way off. Is there any way to just get Core Graphics output showing up on the screen? I need something that can be updated every frame, like drawing a line between two points that are always moving around.
Even if someone knows of a completely different alternative, I'll try it.
in .h
CGContextRef context;
in .m
in init method
int width = 100;
int height = 100;
void *buffer = calloc(1, width * height * 4);
context = CreateBitmapContextWithData(width, height, buffer);
CGContextSetRGBFillColor(context, 1, 1, 1, 1);
CGContextAddRect(context, CGRectMake(0, 0, width, height));
CGContextFillPath(context);
CGImageRef image = CGBitmapContextCreateImage(context);
hud_sprite = [CCSprite spriteWithCGImage:image key:@"hud_image1"];
free(buffer);
free(image);
hud_sprite.anchorPoint = CGPointMake(0, 0);
hud_sprite.position = CGPointMake(0, 0);
[self addChild:hud_sprite z:100];
in a method I call when I want to update it.
int width = 100;
int height = 100;
UIGraphicsPushContext(context);
CGContextClearRect(context, CGRectMake(0, 0, width, height)); //<-- crashes here. bad access...
CGContextSetRGBFillColor(context, random_float(0, 1),random_float(0, 1),random_float(0, 1), .8);
CGContextAddRect(context, CGRectMake(0, 0, width, height));
CGContextFillPath(context);
CGImageRef image = CGBitmapContextCreateImage(context);
UIGraphicsPopContext();
//CGContextRelease(ctx);
[[CCTextureCache sharedTextureCache] removeTextureForKey:@"hud_image1"];
[hud_sprite setTexture:[[CCTextureCache sharedTextureCache] addCGImage:image forKey:@"hud_image1"]];
free(image);
You are calling UIGraphicsPushContext(context). You must balance this with UIGraphicsPopContext(). Since you're not calling UIGraphicsPopContext(), you are leaving context on UIKit's graphics context stack, so it never gets deallocated.
Also, you are calling UIGraphicsBeginImageContext, which creates a new graphics context that you later release (correctly) by calling UIGraphicsEndImageContext. But you never use this context. You would access the context by calling UIGraphicsGetCurrentContext, but you never call that.
UPDATE
Never call free on a Core Foundation object!
You are getting a CGImage (which is a Core Foundation object) with this statement:
CGImageRef image = CGBitmapContextCreateImage(context);
Then later you are calling free on it:
free(image);
You must never do that.
Go read the Memory Management Programming Guide for Core Foundation. When you are done with a Core Foundation object, and you have ownership of it (because you got it from a Create function or a Copy function), you must release it with a Release function. In this case you can use either CFRelease or CGImageRelease:
CGImageRelease(image);
Furthermore, you are allocating buffer using calloc, then passing it to CreateBitmapContextWithData (which I guess is your wrapper for CGBitmapContextCreateWithData), and then freeing buffer. But context keeps a pointer to the buffer. So after you free(buffer), context has a dangling pointer. That's why you're crashing. You cannot free buffer until after you have released context.
The best way to handle this is to let CGBitmapContextCreateWithData take care of allocating the buffer itself by passing NULL as the first (data) argument. There is no reason to allocate the buffer yourself in this case.
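For illustration, a minimal sketch of that (the color space and bitmap-info choices here are assumptions; adapt them to your pixel format):

// Hedged sketch: Core Graphics owns the backing store because data is NULL,
// so there is no separate buffer to free and no dangling pointer.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, width, height,
                                         8,    // bits per component
                                         0,    // bytes per row: 0 lets CG pick the pitch
                                         colorSpace,
                                         kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

CGContextSetRGBFillColor(ctx, 1, 1, 1, 1);
CGContextFillRect(ctx, CGRectMake(0, 0, width, height));

CGImageRef image = CGBitmapContextCreateImage(ctx);
// ... hand image to CCTextureCache / the sprite here ...
CGImageRelease(image);   // release with CGImageRelease, never free()
CGContextRelease(ctx);   // releasing the context also frees its pixel buffer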

Freetype2 failing under WoW64

I built a TTF-to-D3D-texture function using FreeType2 (2.3.9) to generate grayscale maps from the fonts. It works great under native Win32; however, on WoW64 it just explodes (well, FT_Done and FT_Load_Glyph do). From some debugging, it seems to be a problem with HeapFree as called by free from FT_Free.
I know it should work, as games like WCIII, which to the best of my knowledge use FreeType2, run fine. This is my code, stripped of the D3D code (which causes no problems on its own):
FT_Face pFace = NULL;
FT_Error nError = 0;
FT_Byte* pFont = static_cast<FT_Byte*>(ARCHIVE_LoadFile(pBuffer,&nSize));
if((nError = FT_New_Memory_Face(pLibrary,pFont,nSize,0,&pFace)) == 0)
{
FT_Set_Char_Size(pFace,nSize << 6,nSize << 6,96,96);
for(unsigned char c = 0; c < 95; c++)
{
if(!FT_Load_Glyph(pFace,FT_Get_Char_Index(pFace,c + 32),FT_LOAD_RENDER))
{
FT_Glyph pGlyph;
if(!FT_Get_Glyph(pFace->glyph,&pGlyph))
{
LOG("GET: %c",c + 32);
FT_Glyph_To_Bitmap(&pGlyph,FT_RENDER_MODE_NORMAL,0,1);
FT_BitmapGlyph pGlyphMap = reinterpret_cast<FT_BitmapGlyph>(pGlyph);
FT_Bitmap* pBitmap = &pGlyphMap->bitmap;
const size_t nWidth = pBitmap->width;
const size_t nHeight = pBitmap->rows;
//add to texture atlas
}
}
}
}
else
{
FT_Done_Face(pFace);
delete pFont;
return FALSE;
}
FT_Done_Face(pFace);
delete pFont;
return TRUE;
}
ARCHIVE_LoadFile returns blocks allocated with new.
As a secondary question, I would like to render a font using pixel sizes. I came across FT_Set_Pixel_Sizes, but I'm unsure whether this stretches the font to fit the size or bounds it to a size. What I would like to do is render all the glyphs at, say, 24px (MS Word size here), then turn them into a signed distance field in a 32px area.
Update
After much fiddling, I got a test app to work, which leads me to think the problems arise from threading, as my code runs in a secondary thread. I have compiled FreeType into a static lib using the multithreaded DLL runtime; my app uses the multithreaded libs. I'm going to see if I can set up a multithreaded test.
I also updated to 2.4.4 to see if the problem was a known but already-fixed bug; it didn't help, however.
Update 2
After some more fiddling, it turns out I wasn't using the correct lib for 2.4.4. After fixing that, the test app works 100%, but the main app still crashes when FT_Done_Face is called; it still seems to be a crash in the Windows heap management. Is it possible that there is a bug in FreeType2 that makes it blow up under user-created threads?
