I need to know the active screen DPI on Linux and Mac OS. I think on Linux Xlib might be useful, but I can't find a way to get the correct DPI.
I want this information to get the real screen size in inches.
Thanks in advance!
On a mac, use CGDisplayScreenSize to get the screen size in millimeters.
In X on Linux, call XOpenDisplay() to get the Display, then use DisplayWidthMM() and DisplayHeightMM() together with DisplayWidth() and DisplayHeight() to compute the DPI.
On the Mac, there's almost certainly a more native API to use than X. Mac OS X does not run X Window by default; it has a native windowing environment.
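For what it's worth, here is a minimal sketch of that approach (my own example, not from the original answer): CGDisplayScreenSize reports millimeters and CGDisplayPixelsWide/CGDisplayPixelsHigh report pixels, so the DPI falls out directly. Note that the reported physical size comes from the display's EDID and can be missing or wrong on some monitors.
#include <ApplicationServices/ApplicationServices.h>
#include <stdio.h>
/* Compute the main display's DPI from its physical size.
 * Compile with: gcc -o macdpi macdpi.c -framework ApplicationServices */
int main(void)
{
    CGDirectDisplayID display = CGMainDisplayID();
    CGSize sizeMM = CGDisplayScreenSize(display); /* physical size in millimeters */
    if (sizeMM.width <= 0.0 || sizeMM.height <= 0.0) {
        fprintf(stderr, "Display did not report a physical size\n");
        return 1;
    }
    double dpiX = (double)CGDisplayPixelsWide(display) * 25.4 / sizeMM.width;
    double dpiY = (double)CGDisplayPixelsHigh(display) * 25.4 / sizeMM.height;
    printf("%.1f x %.1f DPI\n", dpiX, dpiY);
    return 0;
}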
I cobbled this together from xdpyinfo...
Compile with: gcc -Wall -o getdpi getdpi.c -lX11
#include <stdio.h>
#include <X11/Xlib.h>
/* Get dots per inch */
static void get_dpi(int *x, int *y)
{
    double xres, yres;
    Display *dpy;
    char *displayname = NULL;
    int scr = 0; /* Screen number */

    if ((NULL == x) || (NULL == y)) { return; }

    dpy = XOpenDisplay(displayname);
    if (NULL == dpy) { return; } /* no X display available */

    /*
     * There are 2.54 centimeters to an inch; so there are 25.4 millimeters.
     *
     * dpi = N pixels / (M millimeters / (25.4 millimeters / 1 inch))
     *     = N pixels / (M inch / 25.4)
     *     = N * 25.4 pixels / M inch
     */
    xres = ((((double) DisplayWidth(dpy, scr)) * 25.4) /
            ((double) DisplayWidthMM(dpy, scr)));
    yres = ((((double) DisplayHeight(dpy, scr)) * 25.4) /
            ((double) DisplayHeightMM(dpy, scr)));

    *x = (int) (xres + 0.5);
    *y = (int) (yres + 0.5);

    XCloseDisplay(dpy);
}
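To turn the snippet into the standalone program that gcc line expects, a trivial main() along these lines (my addition) will do:
int main(void)
{
    int x = 0, y = 0;
    get_dpi(&x, &y);
    printf("%d x %d DPI\n", x, y);
    return 0;
}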
You can use NSScreen to get the dimensions of the attached display(s) in pixels, but this won't give you the physical size/PPI of the display, and in fact I don't think there are any APIs that will let you do this reliably.
You can ask a window for its resolution like so:
NSDictionary* deviceDescription = [window deviceDescription];
NSSize resolution = [[deviceDescription objectForKey:NSDeviceResolution] sizeValue];
This will currently give you an NSSize of {72,72} for all screens, no matter what their actual PPI. The only thing that makes this value change is changing the scaling factor in the Quartz Debug utility, or if Apple ever turns on resolution-independent UI. You can obtain the current scale factor by calling:
[[NSScreen mainScreen] userSpaceScaleFactor];
If you really must know the exact resolution (and I'd be interested to know why you think you do), you could create a screen calibration routine and have the user measure a line on-screen with an actual physical ruler. Crude, yes, but it will work.
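The arithmetic behind such a calibration routine is simple; here is a minimal sketch (entirely my own illustration, with made-up example numbers; the UI for drawing the line and collecting the measurement is up to you):
#include <cstdio>

// Given a line drawn lineLengthPx pixels long on screen and the user's
// ruler measurement of that same line in millimeters, recover the true PPI.
static double calibratedPpi(double lineLengthPx, double measuredMM)
{
    return lineLengthPx / (measuredMM / 25.4); // 25.4 mm per inch
}

int main()
{
    // e.g. a 500 px line that the user measures as 110 mm
    std::printf("%.1f PPI\n", calibratedPpi(500.0, 110.0)); // ~115.5 PPI
    return 0;
}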
Here's a platform-independent way to get the screen DPI:
// Written in Java
import java.awt.Toolkit;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.border.EmptyBorder;

public final class DpiTest {
    public static void main(String[] args) {
        JFrame frame = new JFrame("DPI");
        JLabel label = new JLabel("Current Screen DPI: " + Toolkit.getDefaultToolkit().getScreenResolution());
        label.setBorder(new EmptyBorder(20, 20, 20, 20));
        frame.add(label);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.pack();
        frame.setVisible(true);
    }
}
You can download a compiled jar of this from here. After downloading, java -jar dpi.jar will show you the DPI.
Related
I'm trying to create a randomly generated "planet" (circle), and I want the areas of water, land and foliage to be decided by Perlin noise, or something similar. Currently I have this (pseudo)code:
for (int radius = 0; radius < circleRadius; radius++) {
    for (float theta = 0; theta < TWO_PI; theta += 0.1) {
        float x = radius * cosine(theta);
        float y = radius * sine(theta);
        int colour = whateverFunctionIMake(x, y);
        setPixel(x, y, colour);
    }
}
Not only does this not work (there are "gaps" in the circle because of precision issues), it's incredibly slow. Even if I increase the resolution by changing the increment to 0.01, it still has missing pixels and is even slower: I get 10 fps on my mediocre computer using Java (I know, not the best) with an increment of 0.01, which is certainly not acceptable for a game.
How might I achieve a similar result whilst being much less computationally expensive?
Thanks in advance.
Why not use:
(x-x0)^2 + (y-y0)^2 <= r^2
so simply:
int x0=?,y0=?,r=?; // your planet position and size
int x,y,xx,rr,col;
for (rr=r*r,x=-r;x<=r;x++)
    for (xx=x*x,y=-r;y<=r;y++)
        if (xx+(y*y)<=rr)
        {
            col = whateverFunctionIMake(x, y);
            setPixel(x0+x, y0+y, col);
        }
All on integers, with no floating point or other slow operations, and no gaps ... Do not forget to seed the random generator used by your coloring function ...
[Edit1] some more stuff
Now if you want speed, you need direct pixel access (on most platforms Pixels, SetPixel, PutPixels etc. are slow, because they perform a lot of extra work like range checking, color conversions, etc.). Once you have direct pixel access, or render into your own array/image, you need to add clipping against the screen (so you do not have to check for each pixel whether it is inside the screen) to avoid access violations when your circle overlaps the screen edge.
As mentioned in the comments, you can get rid of the x*x and y*y inside the loop by updating the previous value (as both x and y are only incrementing). For more info about it see:
32bit SQRT in 16T without multiplication
the math is like this:
(x+1)^2 = (x+1)*(x+1) = x^2 + 2x + 1
so instead of xx = x*x we just do xx += x+x+1 before x is incremented, or xx += x+x-1 after x has been incremented.
When put all together I got this:
void circle(int x,int y,int r,DWORD c)
{
    // my Pixel access
    int **Pixels=Main->pyx;       // Pixels[y][x]
    int xs=Main->xs;              // resolution
    int ys=Main->ys;
    // circle
    int sx,sy,sx0,sx1,sy0,sy1;    // [screen]
    int cx,cy,cx0,    cy0;        // [circle]
    int rr=r*r,cxx,cyy,cxx0,cyy0; // [circle^2]
    // BBOX + screen clip
    sx0=x-r; if (sx0>=xs) return; if (sx0< 0) sx0=0;
    sy0=y-r; if (sy0>=ys) return; if (sy0< 0) sy0=0;
    sx1=x+r; if (sx1< 0) return; if (sx1>=xs) sx1=xs-1;
    sy1=y+r; if (sy1< 0) return; if (sy1>=ys) sy1=ys-1;
    cx0=sx0-x; cxx0=cx0*cx0;
    cy0=sy0-y; cyy0=cy0*cy0;
    // render
    for (cxx=cxx0,cx=cx0,sx=sx0;sx<=sx1;sx++,cxx+=cx,cx++,cxx+=cx)
        for (cyy=cyy0,cy=cy0,sy=sy0;sy<=sy1;sy++,cyy+=cy,cy++,cyy+=cy)
            if (cxx+cyy<=rr)
                Pixels[sy][sx]=c;
}
This renders a circle with radius 512 px in ~35 ms, which is a ~23.5 Mpx/s fill rate on my setup (AMD A8-5500 3.2 GHz, Win7 64-bit, single-threaded VCL/GDI 32-bit app built with BDS2006 C++). Just change the direct pixel access to the style/API you use ...
[Edit2]
To measure speed on x86/x64 you can use the RDTSC asm instruction. Here is some ancient C++ code I used ages ago (in a 32-bit environment without native 64-bit types):
double _rdtsc()
{
    LARGE_INTEGER x; // unsigned 64bit integer variable from windows.h I think
    DWORD l,h;       // standard unsigned 32 bit variables
    asm {
        rdtsc
        mov l,eax
        mov h,edx
    }
    x.LowPart=l;
    x.HighPart=h;
    return double(x.QuadPart);
}
It returns the number of clocks your CPU has elapsed since power up. Beware that you should account for overflows, as on fast machines a 32-bit counter overflows in seconds. Also each core has a separate counter, so set the affinity to a single CPU. On CPUs with a variable clock, heat up the CPU with some computation before measuring. To convert clocks to time, just divide by the CPU clock frequency. To obtain it, just do this:
t0=_rdtsc();
Sleep(250); // 250 ms, so multiply by 4 to get clocks per second
t1=_rdtsc();
fcpu = (t1-t0)*4;
and measurement:
t0=_rdtsc();
// ... measured stuff ...
t1=_rdtsc();
time = (t1-t0)/fcpu;
If t1 < t0, you overflowed and you need to add a constant to the result or measure again. Also the measured process must take less than the overflow period. To enhance precision, take the OS timing granularity into account (Sleep is not exact). For more info see:
Measuring Cache Latencies
Cache size estimation on your system? setting affinity example
Negative clock cycle measurements with back-to-back rdtsc?
I have made some very simple bots for some web-based games and I wanted to move on to other games which require some more advanced features.
I have used pyautogui to bot in web-based games, and it has been easy because all the images are static (not moving). But when I want to click something that is moving in a game, such as a character or a creature running around, pyautogui is not really efficient because it looks for pixels/colors that are exactly the same.
Can anyone suggest references, libraries, or functions that can detect a model or character even while the character is moving?
Here is an example of something I'd like to click on:
Moving creature Gif image
Thanks.
I noticed the image you linked to is a gif of a mob from World of Warcraft.
As a hobby I have been designing bots for MMO's on and off over the past few years.
There are no specific Python libraries that I'm aware of that will allow you to do what you're asking; however, taking WoW as an example...
If you are using Windows as your OS, you will be using Windows API calls to manipulate your game's target process (here wow.exe).
There are two primary approaches to this:
1) Out of process - you do everything by reading memory values from known offsets and respond by using the Windows API to simulate mouse and/or keyboard input (your choice); a minimal sketch of this approach follows this list.
1a) I will quickly mention that although for most modern games it is not an option (due to built-in anti-cheating code), you can also manipulate the game by writing directly to memory. In WAR (Warhammer Online), when it was still live, I made a grind bot that wrote to memory whenever possible, as they had not enabled PunkBuster to protect the game from this. WoW is protected by the infamous "Warden."
2) DLL Injection - WoW has a built-in API written in Lua. As a result, over the years many hobbyist programmers and hackers have taken apart the binary to reveal its inner workings. You might check out the Memory Editing Forum on ownedcore.com if you want to work with WoW. Many have shared the known offsets in the binary where one can hook into Lua functions, and as a result perform in-game actions directly and also tap into needed information. Some have even shared their own DLLs.
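To make approach 1 concrete, here is a minimal, hypothetical C++ sketch of the out-of-process pattern: read one value with ReadProcessMemory, then simulate a click with SendInput. The process id, the offset, and the click position are placeholders you would have to reverse-engineer per game; none of them come from WoW.
#include <windows.h>
#include <cstdio>

int main()
{
    // Placeholder pid and offset - find these yourself (Task Manager, a memory scanner, etc.)
    DWORD pid = 1234;
    uintptr_t someKnownOffset = 0xDEADBEEF;

    HANDLE proc = OpenProcess(PROCESS_VM_READ, FALSE, pid);
    if (!proc) { std::printf("OpenProcess failed\n"); return 1; }

    // Read a single float from the known offset in the target process.
    float value = 0.0f;
    SIZE_T bytesRead = 0;
    if (ReadProcessMemory(proc, (LPCVOID)someKnownOffset, &value, sizeof(value), &bytesRead))
        std::printf("value at offset: %f\n", value);
    CloseHandle(proc);

    // Simulate a left click at absolute screen coordinates (x, y).
    // SendInput wants coordinates normalized to the 0..65535 range.
    int x = 800, y = 450;
    INPUT in[3] = {};
    in[0].type = INPUT_MOUSE;
    in[0].mi.dx = x * 65535 / GetSystemMetrics(SM_CXSCREEN);
    in[0].mi.dy = y * 65535 / GetSystemMetrics(SM_CYSCREEN);
    in[0].mi.dwFlags = MOUSEEVENTF_MOVE | MOUSEEVENTF_ABSOLUTE;
    in[1] = in[0]; in[1].mi.dwFlags = MOUSEEVENTF_LEFTDOWN;
    in[2] = in[0]; in[2].mi.dwFlags = MOUSEEVENTF_LEFTUP;
    SendInput(3, in, sizeof(INPUT));
    return 0;
}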
You specifically mentioned clicking in-game 3d objects. I will close by sharing with you a snippet shared on ownedcore that allows one to do just this. This example encompasses use of both memory offsets and in-game function calls:
using System;
using SlimDX;

namespace VanillaMagic
{
    public static class Camera
    {
        internal static IntPtr BaseAddress
        {
            get
            {
                var ptr = WoW.hook.Memory.Read<IntPtr>(Offsets.Camera.CameraPtr, true);
                return WoW.hook.Memory.Read<IntPtr>(ptr + Offsets.Camera.CameraPtrOffset);
            }
        }

        private static Offsets.CGCamera cam => WoW.hook.Memory.Read<Offsets.CGCamera>(BaseAddress);

        public static float X => cam.Position.X;
        public static float Y => cam.Position.Y;
        public static float Z => cam.Position.Z;
        public static float FOV => cam.FieldOfView;
        public static float NearClip => cam.NearClip;
        public static float FarClip => cam.FarClip;
        public static float Aspect => cam.Aspect;

        private static Matrix Matrix
        {
            get
            {
                var bCamera = WoW.hook.Memory.ReadBytes(BaseAddress + Offsets.Camera.CameraMatrix, 36);
                var m = new Matrix();
                m[0, 0] = BitConverter.ToSingle(bCamera, 0);
                m[0, 1] = BitConverter.ToSingle(bCamera, 4);
                m[0, 2] = BitConverter.ToSingle(bCamera, 8);
                m[1, 0] = BitConverter.ToSingle(bCamera, 12);
                m[1, 1] = BitConverter.ToSingle(bCamera, 16);
                m[1, 2] = BitConverter.ToSingle(bCamera, 20);
                m[2, 0] = BitConverter.ToSingle(bCamera, 24);
                m[2, 1] = BitConverter.ToSingle(bCamera, 28);
                m[2, 2] = BitConverter.ToSingle(bCamera, 32);
                return m;
            }
        }

        public static Vector2 WorldToScreen(float x, float y, float z)
        {
            var Projection = Matrix.PerspectiveFovRH(FOV * 0.5f, Aspect, NearClip, FarClip);
            var eye = new Vector3(X, Y, Z);
            var lookAt = new Vector3(X + Matrix[0, 0], Y + Matrix[0, 1], Z + Matrix[0, 2]);
            var up = new Vector3(0f, 0f, 1f);
            var View = Matrix.LookAtRH(eye, lookAt, up);
            var World = Matrix.Identity;
            var WorldPosition = new Vector3(x, y, z);
            var ScreenPosition = Vector3.Project(WorldPosition, 0f, 0f, WindowHelper.WindowWidth, WindowHelper.WindowHeight, NearClip, FarClip, World * View * Projection);
            return new Vector2(ScreenPosition.X, ScreenPosition.Y - 20f);
        }
    }
}
If the mob's colors are reasonably easy to differentiate from the background, you can use pyautogui pixel matching.
import pyautogui
screen = pyautogui.screenshot()
# Use this to scan the area of the screen where the mob appears.
(R, G, B) = screen.getpixel((x, y))
# Compare to mob color
If the colors vary, you can use color tolerance:
pyautogui.pixelMatchesColor(x, y, (R, G, B), tolerance=5)
I'm currently writing a Processing sketch that needs to access multiple audio inputs, but Processing only allows access to the default line in. I have tried getting Lines straight from the Java Mixer (accessed within Processing), but I still only get the signal from whichever line is currently set to default on my machine.
I've started looking at sending the sound via OSC from SuperCollider, as recommended here. However, since I'm very new to SuperCollider and their documentation and support is more focused on generating sound than on accessing inputs, my next step will probably be to play around with Beads and Jack, as suggested here.
Does anyone have (1) other suggestions, or (2) concrete examples of getting multiple inputs from either SuperCollider or Beads/Jack to Processing?
Thank you in advance!
Edit: The sound will be used to power custom music visualizations (think the iTunes visualizer, but much more song-specific). We have this working with multiple mp3s; now what I need is to be able to get a float[] buffer from each mic. Hoping to have 9 different mics, though we'll settle for 4 if that is more doable.
For hardware, at this point, we are just using mics and XLR to USB cables. (Have considered a pre-amp, but so far this has been sufficient.) I am currently on Windows, but I think that we will ultimately switch to a Mac.
Here was my attempt with just Beads (it works fine for the laptop, since I read that one first, but the headset buffer contains all 0's; if I swap them and read the headset first, the headset buffer is correct but the laptop's contains all 0's):
// Requires the Beads library for Processing (AudioContext, JavaSoundAudioIO, UGen)
import beads.*;

AudioContext headsetAudioCon, laptopAudioCon;
UGen headsetMic, laptopMic;
float[] headsetBuffer, laptopBuffer;

void setup() {
  size(512, 400);

  JavaSoundAudioIO headsetAudioIO = new JavaSoundAudioIO();
  JavaSoundAudioIO laptopAudioIO = new JavaSoundAudioIO();

  headsetAudioIO.selectMixer(5);
  headsetAudioCon = new AudioContext(headsetAudioIO);

  laptopAudioIO.selectMixer(4);
  laptopAudioCon = new AudioContext(laptopAudioIO);

  headsetMic = headsetAudioCon.getAudioInput();
  laptopMic = headsetAudioCon.getAudioInput();
} // setup()

void draw() {
  background(100, 0, 75);

  laptopMic.start();
  laptopMic.calculateBuffer();
  laptopBuffer = laptopMic.getOutBuffer(0);
  for (int j = 0; j < laptopBuffer.length - 1; j++)
  {
    println("laptop; " + j + ": " + laptopBuffer[j]);
    line(j, 200+laptopBuffer[j]*50, j+1, 200+laptopBuffer[j+1]*50);
  }
  laptopMic.kill();

  headsetMic.start();
  headsetMic.calculateBuffer();
  headsetBuffer = headsetMic.getOutBuffer(0);
  for (int j = 0; j < headsetBuffer.length - 1; j++)
  {
    println("headset; " + j + ": " + headsetBuffer[j]);
    line(j, 50+headsetBuffer[j]*50, j+1, 50+headsetBuffer[j+1]*50);
  }
  headsetMic.kill();
} // draw()
My attempt at adding Jack contains this line:
ac = new AudioContext(new AudioServerIO.Jack(), 44100, new IOAudioFormat(44100, 16, 4, 4));
but I get the error:
Jun 22, 2016 9:17:24 PM org.jaudiolibs.beads.AudioServerIO$1 run
SEVERE: null
org.jaudiolibs.jnajack.JackException: Can't find native library
at org.jaudiolibs.jnajack.Jack.getInstance(Jack.java:428)
at org.jaudiolibs.audioservers.jack.JackAudioServer.initialise(JackAudioServer.java:102)
at org.jaudiolibs.audioservers.jack.JackAudioServer.run(JackAudioServer.java:86)
at org.jaudiolibs.beads.AudioServerIO$1.run(Unknown Source)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.UnsatisfiedLinkError: Unable to load library 'jack': Native library (win32-x86-64/jack.dll) not found in resource path ([file:/C:/Users/...etc...)
And when I'm in Jack, I don't see my mic (which seems like a huge red flag to me, though I am completely new to Jack). Should this AudioContext show up as an input in Jack? Or vice versa: should I find my mic there first and then get it from Jack into Processing?
(Forgive my inexperience, and thank you again! My lack of knowledge in Jack makes me wonder if I should revisit SuperCollider instead...)
I had the same issue a few years ago and I used a combination of JACK, JNAJack and Beads. You can follow this Beads Google Group thread for more details.
At the time I had to use this version of Beads (2012-04-23), but those changes have probably made it into the main project by now.
For reference, here is the basic class I used:
import java.util.Arrays;

import org.jaudiolibs.beads.AudioServerIO;

import net.beadsproject.beads.analysis.featureextractors.FFT;
import net.beadsproject.beads.analysis.featureextractors.PowerSpectrum;
import net.beadsproject.beads.analysis.segmenters.ShortFrameSegmenter;
import net.beadsproject.beads.core.AudioContext;
import net.beadsproject.beads.core.AudioIO;
import net.beadsproject.beads.core.UGen;
import net.beadsproject.beads.ugens.Gain;
import processing.core.PApplet;

public class BeadsJNA extends PApplet {

    AudioContext ac;
    ShortFrameSegmenter sfs;
    PowerSpectrum ps;

    public void setup() {
        // defining audio context with 6 inputs and 6 outputs - adjust this based on your sound card / JACK setup
        ac = new AudioContext(new AudioServerIO.Jack(), 512, AudioContext.defaultAudioFormat(6, 6));

        // getting 4 audio inputs (channels 1,2,3,4)
        UGen microphoneIn = ac.getAudioInput(new int[]{1, 2, 3, 4});

        Gain g = new Gain(ac, 1, 0.5f);
        g.addInput(microphoneIn);
        ac.out.addInput(g);

        println("no. of inputs: " + ac.getAudioInput().getOuts());

        // test: get some FFT power spectrum data from the output
        sfs = new ShortFrameSegmenter(ac);
        sfs.addInput(ac.out);

        FFT fft = new FFT();
        sfs.addListener(fft);

        ps = new PowerSpectrum();
        fft.addListener(ps);

        ac.out.addDependent(sfs);
        ac.start();
    }

    public void draw() {
        background(255);
        float[] features = ps.getFeatures();
        if (features != null) {
            for (int x = 0; x < width; x++) {
                int featureIndex = (x * features.length) / width;
                int barHeight = Math.min((int) (features[featureIndex] * height), height - 1);
                line(x, height, x, height - barHeight);
            }
        }
    }

    public static void main(String[] args) {
        PApplet.main(BeadsJNA.class.getSimpleName());
    }
}
On other Linux machines using the FBDEV drivers (Raspberry Pi, etc.), I could mmap the /dev/fb0 device and directly create a BMP file that saved what was on the screen.
Now, I am trying to do the same thing with DRM on a TI Sitara AM57XX (Beagleboard X-15). The code that used to work with FBDEV is shown below.
This mmap no longer seems to work with DRM. I'm using a very simple Qt5 application with the Qt platform linuxfb plugin. It draws just fine into /dev/fb0 and shows on the screen properly; however, I cannot read back from /dev/fb0 with a memory-mapped pointer and save an image of the screen to a file. The saved image comes out garbled.
Code:
#ifdef FRAMEBUFFER_CAPTURE
    repaint();
    QCoreApplication::processEvents();

    // Setup framebuffer to desired format
    struct fb_var_screeninfo var;
    struct fb_fix_screeninfo finfo;
    memset(&finfo, 0, sizeof(finfo));
    memset(&var, 0, sizeof(var));

    /* Get variable screen information. Variable screen information
     * gives information like size of the image, bits per pixel,
     * virtual size of the image etc. */
    int fbFd = open("/dev/fb0", O_RDWR);
    int fdRet = ioctl(fbFd, FBIOGET_VSCREENINFO, &var);
    if (fdRet < 0) {
        qDebug() << "Error opening /dev/fb0!";
        close(fbFd);
        return -1;
    }

    if (ioctl(fbFd, FBIOPUT_VSCREENINFO, &var) < 0) {
        qDebug() << "Error setting up framebuffer!";
        close(fbFd);
        return -1;
    } else {
        qDebug() << "Success setting up framebuffer!";
    }

    // Get fixed screen information
    if (ioctl(fbFd, FBIOGET_FSCREENINFO, &finfo) < 0) {
        qDebug() << "Error getting fixed screen information!";
        close(fbFd);
        return -1;
    } else {
        qDebug() << "Success getting fixed screen information!";
    }

    //int screensize = var.xres * var.yres * var.bits_per_pixel / 8;
    //int screensize = var.yres_virtual * finfo.line_length;
    //int screensize = finfo.smem_len;
    int screensize = finfo.line_length * var.yres_virtual;

    qDebug() << "Framebuffer size is: " << var.xres << var.yres << var.bits_per_pixel << screensize;

    int linuxFbWidth = var.xres;
    int linuxFbHeight = var.yres;
    int location = (var.xoffset) * (var.bits_per_pixel/8) +
                   (var.yoffset) * finfo.line_length;

    // Perform memory mapping of linux framebuffer
    char* frameBufferMmapPixels = (char *)mmap(0, screensize, PROT_READ | PROT_WRITE, MAP_SHARED, fbFd, 0);
    assert(frameBufferMmapPixels != MAP_FAILED);

    QImage toSave((uchar*)frameBufferMmapPixels, linuxFbWidth, linuxFbHeight, QImage::Format_ARGB32);
    toSave.save("/usr/bin/test.bmp");

    sync();
#endif
Here is the output of the code when it runs:
Success setting up framebuffer!
Success getting fixed screen information!
Framebuffer size is: 800 480 32 1966080
Here is the output of fbset showing the pixel format:
mode "800x480"
geometry 800 480 800 480 32
timings 0 0 0 0 0 0 0
accel true
rgba 8/16,8/8,8/0,8/24
endmode
root@am57xx-evm:~#
finfo.line_length gives the size of the actual physical scan line in bytes. It is not necessarily the same as screen width multiplied by pixel size, as scan lines may be padded.
However the QImage constructor you are using assumes no padding.
If xoffset is zero, it should be possible to construct a QImage directly from the framebuffer data using a constructor with the bytesPerLine argument. Otherwise there are two options:
- allocate a separate buffer and copy only the visible portion of each scanline into it
- create an image from the entire buffer (including the padding) and then crop it
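For the direct-construction case, a sketch (adapting the question's own variables; untested on your board): the QImage constructor that takes a bytesPerLine argument lets you pass the real scanline stride so the padding is honored.
// frameBufferMmapPixels, var and finfo come from the code in the question.
QImage toSave((uchar*)frameBufferMmapPixels,
              var.xres,          // visible width in pixels
              var.yres,          // visible height in pixels
              finfo.line_length, // bytes per scanline, including any padding
              QImage::Format_ARGB32);
toSave.save("/usr/bin/test.bmp");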
If you're using DRM, then /dev/fb0 might point to an entirely different buffer (not the currently visible one) or have a different format.
fbdev is really just a legacy interface for software that hasn't been ported to DRM/KMS yet, and it only has very limited mode-setting capabilities.
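If you want to capture what is actually on screen under DRM, the more reliable route is to ask DRM which framebuffer the CRTC is scanning out and map that. A rough libdrm sketch (my own, untested on the AM57XX; it assumes a dumb buffer and sufficient privileges, since drmModeGetFB needs DRM master or root and the mapping ioctl only works for dumb buffers; link with -ldrm):
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Find the first active CRTC, look up the framebuffer it is scanning out,
 * and mmap it. GPU-rendered (non-dumb) buffers need a different path. */
int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);
    drmModeRes *res = drmModeGetResources(fd);
    for (int i = 0; res && i < res->count_crtcs; i++) {
        drmModeCrtc *crtc = drmModeGetCrtc(fd, res->crtcs[i]);
        if (!crtc || !crtc->buffer_id) { drmModeFreeCrtc(crtc); continue; }

        drmModeFB *fb = drmModeGetFB(fd, crtc->buffer_id);
        printf("fb: %ux%u pitch %u bpp %u\n", fb->width, fb->height, fb->pitch, fb->bpp);

        struct drm_mode_map_dumb map;
        memset(&map, 0, sizeof(map));
        map.handle = fb->handle;
        if (drmIoctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &map) == 0) {
            size_t size = (size_t)fb->pitch * fb->height;
            void *pixels = mmap(0, size, PROT_READ, MAP_SHARED, fd, map.offset);
            /* ... write out fb->height lines of fb->pitch bytes each ... */
            munmap(pixels, size);
        }
        drmModeFreeFB(fb);
        drmModeFreeCrtc(crtc);
        break;
    }
    drmModeFreeResources(res);
    return 0;
}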
BTW: which kernel are you using? Hopefully not that ancient and broken TI vendor kernel ...
I noticed a strange behaviour with Direct3D while doing this tutorial.
The dimensions I am getting from the Window object differ from the configured resolution of Windows. Windows is set to 1920x1080, but the width and height from the Window object are 1371x771.
CoreWindow^ Window = CoreWindow::GetForCurrentThread();
// set the viewport
D3D11_VIEWPORT viewport = { 0 };
viewport.TopLeftX = 0;
viewport.TopLeftY = 0;
viewport.Width = Window->Bounds.Width; //should be 1920, actually is 1371
viewport.Height = Window->Bounds.Height; //should be 1080, actually is 771
I am developing on an Alienware 14, maybe this causes this problem, but I could not find any answers, yet.
CoreWindow sizes, pointer locations, etc. are not expressed in pixels. They are expressed in Device Independent Pixels (DIPs). To convert to/from pixels you need to use the Dots Per Inch (DPI) value. Your numbers are consistent with 140% display scaling: 1371 * 1.4 ≈ 1920 and 771 * 1.4 ≈ 1080.
inline int ConvertDipsToPixels(float dips) const
{
    return int(dips * m_DPI / 96.f + 0.5f);
}

inline float ConvertPixelsToDips(int pixels) const
{
    return (float(pixels) * 96.f / m_DPI);
}
m_DPI comes from DisplayInformation::GetForCurrentView()->LogicalDpi, and you get the DpiChanged event when and if it changes.
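For instance, a minimal C++/CX sketch of caching and refreshing m_DPI (the member and handler wiring here are my own, not from the template):
using namespace Windows::Foundation;
using namespace Windows::Graphics::Display;

// Cache the logical DPI once, then keep it current via DpiChanged.
DisplayInformation^ displayInfo = DisplayInformation::GetForCurrentView();
m_DPI = displayInfo->LogicalDpi;

displayInfo->DpiChanged += ref new TypedEventHandler<DisplayInformation^, Platform::Object^>(
    [this](DisplayInformation^ sender, Platform::Object^ args)
    {
        m_DPI = sender->LogicalDpi; // recompute any pixel-derived sizes here
    });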
See DPI and Device-Independent Pixels for more details.
You should take a look at the Direct3D UWP Game templates on GitHub, and check out how this is handled in Main.cpp.