Device screen size scaling in Apportable

I'm trying to convert an iPhone app that ships different asset sets for different screen sizes (base iPhone 320x480, retina iPhone 640x960, iPad 768x1024, and retina iPad) into an Android app that selects among these assets based on the resolution of the device.
The code uses the iPad/iPhone idioms, and Apportable overrides the UIDevice methods for this via its VerdeConfigIsTablet() method. It is very unclear how this is done. Is there any good resource explaining how each resolution is assigned and scaled?
Thanks

See the Apportable UIScreen docs.
Also, potentially useful is [[UIScreen mainScreen] bounds]:
(gdb) p [UIScreen mainScreen]
$2 = (struct objc_object *) 0x6acd5490
(gdb) p [$2 bounds]
$3 = {origin = {x = 0, y = 0}, size = {width = 800, height = 1205}}

Related

Clicking a moving object in a game

I have made some very simple bots for some web-based games and I wanted to move on to other games that require some more advanced features.
I have used pyautogui to bot in web-based games, and it has been easy because all the images are static (not moving). But when I want to click something in a game that is moving, such as a character or a creature running around, pyautogui is not really efficient because it looks for pixels/colors that are exactly the same.
Can you suggest any references, libraries, or functions that can detect a model or character even though the character is moving?
Here is an example of something I'd like to click on:
Moving creature Gif image
Thanks.
I noticed the image you linked to is a gif of a mob from world of warcraft.
As a hobby I have been designing bots for MMO's on and off over the past few years.
There are no specific Python libraries that I'm aware of that will allow you to do what you're asking; however, taking WoW as an example...
If you are using Windows as your OS, you will be using Windows API calls to manipulate your game's target process (here wow.exe).
There are two primary approaches to this:
1) Out of process - you do everything by reading memory values from known offsets and respond by using the Windows API to simulate mouse and/or keyboard input (your choice; see the input-simulation sketch after the code below).
1a) I will quickly mention that although for most modern games it is not an option (due to built-in anti-cheating code), you can also manipulate the game by writing directly to memory. In WAR (Warhammer Online), when it was still live, I made a grind bot that wrote to memory whenever possible, as they had not enabled PunkBuster to protect the game from this. WoW is protected by the infamous "Warden."
2) DLL injection - WoW has a built-in API written in Lua. As a result, over the years, many hobbyist programmers and hackers have taken apart the binary to reveal its inner workings. You might check out the Memory Editing forum on ownedcore.com if you want to work with WoW. Many have shared the known offsets in the binary where one can hook into Lua functions and, as a result, perform in-game actions directly and also tap into needed information. Some have even shared their own DLLs.
You specifically mentioned clicking in-game 3D objects. I will close by sharing with you a snippet shared on ownedcore that allows one to do just this. This example encompasses use of both memory offsets and in-game function calls:
using System;
using SlimDX;

namespace VanillaMagic
{
    public static class Camera
    {
        internal static IntPtr BaseAddress
        {
            get
            {
                var ptr = WoW.hook.Memory.Read<IntPtr>(Offsets.Camera.CameraPtr, true);
                return WoW.hook.Memory.Read<IntPtr>(ptr + Offsets.Camera.CameraPtrOffset);
            }
        }

        private static Offsets.CGCamera cam => WoW.hook.Memory.Read<Offsets.CGCamera>(BaseAddress);

        public static float X => cam.Position.X;
        public static float Y => cam.Position.Y;
        public static float Z => cam.Position.Z;
        public static float FOV => cam.FieldOfView;
        public static float NearClip => cam.NearClip;
        public static float FarClip => cam.FarClip;
        public static float Aspect => cam.Aspect;

        private static Matrix Matrix
        {
            get
            {
                var bCamera = WoW.hook.Memory.ReadBytes(BaseAddress + Offsets.Camera.CameraMatrix, 36);
                var m = new Matrix();
                m[0, 0] = BitConverter.ToSingle(bCamera, 0);
                m[0, 1] = BitConverter.ToSingle(bCamera, 4);
                m[0, 2] = BitConverter.ToSingle(bCamera, 8);
                m[1, 0] = BitConverter.ToSingle(bCamera, 12);
                m[1, 1] = BitConverter.ToSingle(bCamera, 16);
                m[1, 2] = BitConverter.ToSingle(bCamera, 20);
                m[2, 0] = BitConverter.ToSingle(bCamera, 24);
                m[2, 1] = BitConverter.ToSingle(bCamera, 28);
                m[2, 2] = BitConverter.ToSingle(bCamera, 32);
                return m;
            }
        }

        public static Vector2 WorldToScreen(float x, float y, float z)
        {
            var Projection = Matrix.PerspectiveFovRH(FOV * 0.5f, Aspect, NearClip, FarClip);
            var eye = new Vector3(X, Y, Z);
            var lookAt = new Vector3(X + Matrix[0, 0], Y + Matrix[0, 1], Z + Matrix[0, 2]);
            var up = new Vector3(0f, 0f, 1f);
            var View = Matrix.LookAtRH(eye, lookAt, up);
            var World = Matrix.Identity;
            var WorldPosition = new Vector3(x, y, z);
            var ScreenPosition = Vector3.Project(WorldPosition, 0f, 0f, WindowHelper.WindowWidth, WindowHelper.WindowHeight, NearClip, FarClip, World * View * Projection);
            return new Vector2(ScreenPosition.X, ScreenPosition.Y - 20f);
        }
    }
}
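To connect this with approach (1): once something like WorldToScreen above has produced screen coordinates, the out-of-process route delivers the click by simulating input through the Windows API. A minimal Win32/C++ sketch (not part of the ownedcore snippet; ClickAtScreenPoint is an illustrative name):
// Minimal Win32 sketch: move the cursor to a screen point and click it.
// MOUSEEVENTF_ABSOLUTE expects coordinates normalized to the 0..65535 range.
#include <windows.h>

void ClickAtScreenPoint(int x, int y)
{
    const int screenW = GetSystemMetrics(SM_CXSCREEN);
    const int screenH = GetSystemMetrics(SM_CYSCREEN);

    INPUT inputs[3] = {};

    // Move the cursor to the target point.
    inputs[0].type = INPUT_MOUSE;
    inputs[0].mi.dx = (x * 65535) / screenW;
    inputs[0].mi.dy = (y * 65535) / screenH;
    inputs[0].mi.dwFlags = MOUSEEVENTF_MOVE | MOUSEEVENTF_ABSOLUTE;

    // Press and release the left mouse button.
    inputs[1].type = INPUT_MOUSE;
    inputs[1].mi.dwFlags = MOUSEEVENTF_LEFTDOWN;
    inputs[2].type = INPUT_MOUSE;
    inputs[2].mi.dwFlags = MOUSEEVENTF_LEFTUP;

    SendInput(3, inputs, sizeof(INPUT));
}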
If the mob's colors are somewhat easy to differentiate from the background, you can use pyautogui pixel matching.
import pyautogui
screen = pyautogui.screenshot()
# Use this to scan the area of the screen where the mob appears.
(R, G, B) = screen.getpixel((x, y))
# Compare to mob color
If colors vary you can use color tolerance:
pyautogui.pixelMatchesColor(x, y, (R, G, B), tolerance=5)

Frame buffer texture data update using DirectX

I am trying my hand at the DirectX 11 template in VS 2015 (VC++). I am using
a D3D11_MAPPED_SUBRESOURCE with Map and Unmap to update a texture.
Now I have a separate file in my project where I am reading pixels, and I need to upload them to this texture.
I am using a struct to hold the texture data:
struct Frames {
    int text_Width;
    int text_height;
    unsigned int text_Sz;
    unsigned char* text_Data;
};
I want to know how I can use this struct from a separate file to upload the texture data in my DirectX-based spinning cube file.
You don't mention what format the data is in, which is essential to knowing how to do this, but let's assume your text_Data points to an array of R8G8B8A8 data (i.e. each pixel is 32 bits, with 8 bits each of red, green, blue, and alpha, in that order from LSB to MSB). If so, it would look like:
Frames f = ...; // your structure
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = UINT(f.text_Width);
desc.Height = UINT(f.text_height);
desc.MipLevels = desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
desc.CPUAccessFlags = 0;
desc.MiscFlags = 0;
D3D11_SUBRESOURCE_DATA initData = {};
initData.pSysMem = f.text_Data;
initData.SysMemPitch = UINT( 4 * f.text_Width );
initData.SysMemSlicePitch = UINT( f.text_Sz );
Microsoft::WRL::ComPtr<ID3D11Texture2D> pTexture;
HRESULT hr = d3dDevice->CreateTexture2D( &desc, &initData, &pTexture );
if (FAILED(hr))
...
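Since you mentioned Map and Unmap: if you need to update the texture contents after creation (for example every frame), here is a minimal sketch, assuming the texture was instead created with D3D11_USAGE_DYNAMIC and CPUAccessFlags = D3D11_CPU_ACCESS_WRITE, that d3dContext is your immediate context, and that text_Data is tightly packed RGBA as above (needs <cstring> for memcpy):
D3D11_MAPPED_SUBRESOURCE mapped = {};
HRESULT hrMap = d3dContext->Map(pTexture.Get(), 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
if (SUCCEEDED(hrMap))
{
    const unsigned char* src = f.text_Data;
    auto dst = static_cast<unsigned char*>(mapped.pData);
    const size_t rowBytes = size_t(4) * size_t(f.text_Width); // tightly packed RGBA rows
    // Copy row by row: the driver's RowPitch can be larger than the packed row size.
    for (int row = 0; row < f.text_height; ++row)
    {
        memcpy(dst + size_t(row) * mapped.RowPitch, src + size_t(row) * rowBytes, rowBytes);
    }
    d3dContext->Unmap(pTexture.Get(), 0);
}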
Note this is covered on MSDN in the How to use Direct3D 11 topics, although the sample code style there is a little dated.
Take a look at the DirectX Tool Kit for DirectX 11 and the tutorials in particular. There's no reason to write your own loader when you can just use DDSTextureLoader or WICTextureLoader.

Direct3D Window->Bounds.Width/Height differs from real resolution

I noticed a strange behaviour with Direct3D while doing this tutorial.
The dimensions I am getting from the Window object differ from the resolution configured in Windows. There I set 1920x1080, but the width and height from the Window object are 1371x771.
CoreWindow^ Window = CoreWindow::GetForCurrentThread();
// set the viewport
D3D11_VIEWPORT viewport = { 0 };
viewport.TopLeftX = 0;
viewport.TopLeftY = 0;
viewport.Width = Window->Bounds.Width; //should be 1920, actually is 1371
viewport.Height = Window->Bounds.Height; //should be 1080, actually is 771
I am developing on an Alienware 14; maybe this causes the problem, but I could not find any answers yet.
CoreWindow sizes, pointer locations, etc. are not expressed in pixels. They are expressed in Device Independent Pixels (DIPS). To convert to/from pixels you need to use the Dots Per Inch (DPI) value.
inline int ConvertDipsToPixels(float dips) const
{
return int(dips * m_DPI / 96.f + 0.5f);
}
inline float ConvertPixelsToDips(int pixels) const
{
return (float(pixels) * 96.f / m_DPI);
}
m_DPI comes from DisplayInformation::GetForCurrentView()->LogicalDpi and you get the DpiChanged event when and if it changes.
See DPI and Device-Independent Pixels for more details.
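For example, a minimal C++/CX sketch, assuming a display at 140% scaling (LogicalDpi of 134.4, which is roughly what turns the 1371x771 DIPs above back into 1920x1080):
using namespace Windows::Graphics::Display;
using namespace Windows::UI::Core;

CoreWindow^ window = CoreWindow::GetForCurrentThread();
float dpi = DisplayInformation::GetForCurrentView()->LogicalDpi; // 134.4 at 140% scaling

// Same formula as ConvertDipsToPixels above: pixels = DIPs * DPI / 96.
int pixelWidth = int(window->Bounds.Width * dpi / 96.f + 0.5f);   // 1371 DIPs -> ~1920 pixels
int pixelHeight = int(window->Bounds.Height * dpi / 96.f + 0.5f); // 771 DIPs  -> ~1080 pixels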
You should take a look at the Direct3D UWP Game templates on GitHub, and check out how this is handled in Main.cpp.

Am I doing something wrong, or do Intel graphics cards suck so bad?

I have
VGA compatible controller: Intel Corporation 82G33/G31 Express Integrated Graphics Controller (rev 10) on Ubuntu 10.10 Linux.
I'm rendering one VBO statically per frame. The VBO has 30,000 triangles, with 3 lights and one texture, and I'm getting 15 FPS.
Are Intel cards really that bad, or am I doing something wrong?
The drivers are the standard open-source drivers from Intel.
My code:
void init() {
    glGenBuffersARB(4, vbos);

    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[0]);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(GLfloat) * verticesNum * 3, vertXYZ, GL_STATIC_DRAW_ARB);

    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[1]);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(GLfloat) * verticesNum * 4, colorRGBA, GL_STATIC_DRAW_ARB);

    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[2]);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(GLfloat) * verticesNum * 3, normXYZ, GL_STATIC_DRAW_ARB);

    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[3]);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(GLfloat) * verticesNum * 2, texXY, GL_STATIC_DRAW_ARB);
}
void draw() {
    glPushMatrix();
    const Vector3f O = ps.getPosition();
    glScalef(scaleXYZ[0], scaleXYZ[1], scaleXYZ[2]);
    glTranslatef(O.x() - originXYZ[0], O.y() - originXYZ[1], O.z() - originXYZ[2]);

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[0]);
    glVertexPointer(3, GL_FLOAT, 0, 0);

    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[1]);
    glColorPointer(4, GL_FLOAT, 0, 0);

    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[2]);
    glNormalPointer(GL_FLOAT, 0, 0);

    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[3]);
    glTexCoordPointer(2, GL_FLOAT, 0, 0);

    texture->bindTexture();
    glDrawArrays(GL_TRIANGLES, 0, verticesNum);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0); // disabling VBO

    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    glPopMatrix();
}
EDIT: maybe it's not clear: the initialization is in a different function and is only called once.
A few hints:
With that number of vertices you should interleave the arrays. Vertex caches usually don't hold more than about 1000 entries. Interleaving the data of course implies that the data is held by a single VBO.
Using glDrawArrays is suboptimal if there are a lot of shared vertices, which is likely the case for a (static) terrain. Instead draw using glDrawElements (see the sketch after these hints). You can also use the index array to implement some cheap LOD.
Experiment with the number of indices per batch given to glDrawElements. Try batches of at most 2^14, 2^15 or 2^16 indices. This is again to relieve cache pressure.
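A minimal sketch of the first two hints (one interleaved VBO plus a 16-bit index buffer drawn with glDrawElements), using the same legacy ARB/client-state API as your code; the Vertex struct and the initInterleaved/drawInterleaved functions are illustrative names, not from your project:
/* Needs <stddef.h> for offsetof in addition to the GL headers. */
typedef struct {
    GLfloat xyz[3];
    GLfloat rgba[4];
    GLfloat normal[3];
    GLfloat uv[2];
} Vertex;

GLuint vbo, ibo;

void initInterleaved(const Vertex* vertices, GLsizei vertexCount,
                     const GLushort* indices, GLsizei indexCount) {
    glGenBuffersARB(1, &vbo);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(Vertex) * vertexCount, vertices, GL_STATIC_DRAW_ARB);

    glGenBuffersARB(1, &ibo);
    glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, ibo);
    glBufferDataARB(GL_ELEMENT_ARRAY_BUFFER_ARB, sizeof(GLushort) * indexCount, indices, GL_STATIC_DRAW_ARB);
}

void drawInterleaved(GLsizei indexCount) {
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
    glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, ibo);

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    /* All attributes come from the same buffer; the stride is sizeof(Vertex). */
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), (void*) offsetof(Vertex, xyz));
    glColorPointer(4, GL_FLOAT, sizeof(Vertex), (void*) offsetof(Vertex, rgba));
    glNormalPointer(GL_FLOAT, sizeof(Vertex), (void*) offsetof(Vertex, normal));
    glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), (void*) offsetof(Vertex, uv));

    /* 16-bit indices keep each batch at or below 2^16 vertices. */
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0);

    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);

    glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, 0);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);
}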
Oh, and regarding these lines at the end of your draw():
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
I think you meant the last two to be glDisableClientState.
Make sure your system has OpenGL acceleration enabled:
$ glxinfo | grep rendering
direct rendering: Yes
If you get 'no', then you don't have OpenGL acceleration.
Thanks for the answers.
Yeah, I have direct rendering on, according to glxinfo. In glxgears I get something like 150 FPS, and games like Warzone or Glest work fast enough. So the problem is probably in my code.
I'll buy a real graphics card eventually anyway, but I wanted my game to work on integrated graphics cards too; that's why I posted this question.

How to get screen DPI (Linux, Mac) programmatically?

I need to know the active screen DPI on Linux and Mac OS. I think Xlib might be useful on Linux, but I can't find a way to get the current DPI.
I want this information to get the real screen size in inches.
Thanks in advance!
On a mac, use CGDisplayScreenSize to get the screen size in millimeters.
In X on Linux, call XOpenDisplay() to get the Display, then use DisplayWidthMM() and DisplayHeightMM() together with DisplayWidth() and DisplayHeight() to compute the DPI.
On the Mac, there's almost certainly a more native API to use than X. Mac OS X does not run X11 by default; it has a native windowing environment.
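For the Mac side, a minimal sketch of the CGDisplayScreenSize approach mentioned above (the 25.4 mm-per-inch conversion mirrors the X11 code below; the result is only as accurate as the physical size the display reports):
#include <ApplicationServices/ApplicationServices.h>

/* Compute the main display's DPI from its pixel dimensions and physical size. */
static void get_dpi_mac(double *x, double *y)
{
    CGDirectDisplayID display = CGMainDisplayID();
    CGSize sizeMM = CGDisplayScreenSize(display); /* physical size in millimeters */

    if (sizeMM.width > 0.0 && sizeMM.height > 0.0) {
        *x = (double) CGDisplayPixelsWide(display) * 25.4 / sizeMM.width;
        *y = (double) CGDisplayPixelsHigh(display) * 25.4 / sizeMM.height;
    } else {
        /* CGDisplayScreenSize returns 0x0 when the physical size is unknown. */
        *x = *y = 72.0;
    }
}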
I cobbled this together from xdpyinfo...
Compile with: gcc -Wall -o getdpi getdpi.c -lX11
/* Get dots per inch */
static void get_dpi(int *x, int *y)
{
    double xres, yres;
    Display *dpy;
    char *displayname = NULL;
    int scr = 0; /* Screen number */

    if ((NULL == x) || (NULL == y)) { return; }

    dpy = XOpenDisplay(displayname);
    if (NULL == dpy) { return; } /* could not connect to the X server */

    /*
     * There are 2.54 centimeters to an inch; so there are 25.4 millimeters.
     *
     *     dpi = N pixels / (M millimeters / (25.4 millimeters / 1 inch))
     *         = N pixels / (M inch / 25.4)
     *         = N * 25.4 pixels / M inch
     */
    xres = ((double) DisplayWidth(dpy, scr)) * 25.4 / ((double) DisplayWidthMM(dpy, scr));
    yres = ((double) DisplayHeight(dpy, scr)) * 25.4 / ((double) DisplayHeightMM(dpy, scr));

    *x = (int) (xres + 0.5);
    *y = (int) (yres + 0.5);

    XCloseDisplay(dpy);
}
You can use NSScreen to get the dimensions of the attached display(s) in pixels, but this won't give you the physical size/PPI of the display and in fact I don't think there are any APIs that will be able to do this reliably.
You can ask a window for its resolution like so:
NSDictionary* deviceDescription = [window deviceDescription];
NSSize resolution = [[deviceDescription objectForKey:NSDeviceResolution] sizeValue];
This will currently give you an NSSize of {72,72} for all screens, no matter what their actual PPI. The only thing that makes this value change is adjusting the scaling factor in the Quartz Debug utility, or Apple ever turning on resolution-independent UI. You can obtain the current scale factor by calling:
[[NSScreen mainScreen] userSpaceScaleFactor];
If you really must know the exact resolution (and I'd be interested to know why you think you do), you could create a screen calibration routine and have the user measure a line on-screen with an actual physical ruler. Crude, yes, but it will work.
Here's a platform-independent way to get the screen DPI:
// Written in Java
import java.awt.Toolkit;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.border.EmptyBorder;
public final class DpiTest {
    public static void main(String[] args) {
        JFrame frame = new JFrame("DPI");
        JLabel label = new JLabel("Current Screen DPI: " + Toolkit.getDefaultToolkit().getScreenResolution());
        label.setBorder(new EmptyBorder(20, 20, 20, 20));
        frame.add(label);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.pack();
        frame.setVisible(true);
    }
}
You can download a compiled jar of this from here. After downloading, java -jar dpi.jar will show you the DPI.
