GLFW/opengl pixel ratio - geometry

How do I determine the pixel ratio (i.e. to compensate for non-square pixels) using GLFW?
There is glfwGetMonitorPhysicalSize, but there seems to be only a function for retrieving the monitor of a full-screen window, which is not what I need/have.
Should I even care about pixel geometry?
Experiments have shown that two of my monitors have pixel ratios of about 0.998 or 0.996 (y/x). But using the NVIDIA settings I can distort that (i.e. by switching between fill-screen and aspect modes). Is there any expectation that apps should detect and compensate for such distortions?
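For reference, the underlying computation is just the monitor's physical size divided by its pixel dimensions. A minimal sketch using the pyGLFW bindings, assuming the primary monitor is the one of interest (the field names follow the C API structures; verify them against your binding version):

import glfw

glfw.init()
monitor = glfw.get_primary_monitor()                   # assumption: primary monitor
mode = glfw.get_video_mode(monitor)                    # current video mode (resolution in pixels)
width_mm, height_mm = glfw.get_monitor_physical_size(monitor)

pixel_w = width_mm / mode.size.width                   # physical width of one pixel (mm)
pixel_h = height_mm / mode.size.height                 # physical height of one pixel (mm)
pixel_ratio = pixel_h / pixel_w                        # y/x pixel aspect ratio
glfw.terminate()

This does not answer the harder part of the question (finding the right monitor for a windowed application); it only illustrates the ratio computation once a monitor handle is available.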

Related

opencv2: Circle detection not detecting the obvious ones

Problem
I'm trying to use opencv2 to detect PlayStation Move Motion Controllers in still images. To increase the contrast between the orbs and the backgrounds, I modify the input image so that, for each channel, the brightness range from the channel's mean level to 96 above it is stretched to full scale; when converting to grayscale I then take the per-pixel maximum across channels instead of the default weighted transform, since some orbs are saturated but not "bright" (a sketch of this preprocessing is included below).
However, my best attempts at tuning the parameters do not work well: the detector ranks circles that aren't there above the obvious ones.
What can I do to improve the accuracy of the detection? What other improvements or algorithms do you think I could use?
Samples
In order of best to worst:
2 Wands, 1 Wand detected (showing both detected circles)
2 Wands, 1 Wand detected with many nonexistent circles (showing top 4 circles)
1 Wand (against a dark background), 6 total circles, the lowest-ranked of which is the correct one (showing all 6 circles)
1 Wand (against a dark background), 44 total circles detected, none of which are that Wand (showing all 44 circles)
I am using this function call:
cv2.HoughCircles(img_gray, cv2.HOUGH_GRADIENT,
                 dp=1, minDist=24, param1=90, param2=25,
                 minRadius=2, maxRadius=48)
All images are resized and cropped to 640x480 (the resolution of the PS3 Eye). No blur is performed.
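For reference, here is my reading of that preprocessing as a sketch (the 96-level stretch window comes from the description above; the function name, clipping, and file name are made up):

import cv2
import numpy as np

def stretch_and_flatten(img_bgr):
    # Per channel, map [mean, mean + 96] onto [0, 255] and clip everything outside that window.
    out = np.empty_like(img_bgr)
    for c in range(3):
        ch = img_bgr[:, :, c].astype(np.float32)
        out[:, :, c] = np.clip((ch - ch.mean()) * (255.0 / 96.0), 0, 255).astype(np.uint8)
    # "Grayscale" = per-pixel maximum over the channels instead of the usual weighted sum.
    return out.max(axis=2)

img_gray = stretch_and_flatten(cv2.imread("move_controller.jpg"))   # hypothetical file name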
I think Hough circles are the wrong approach for you, as you are not really looking for circles; you are looking for circular areas with strong intensity. Use e.g. blob detection instead; here is a guide:
https://www.learnopencv.com/blob-detection-using-opencv-python-c/
For the blob detection, you need to set the parameters so that it picks out a suitably high-intensity, roughly circular area.
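A rough sketch of what that could look like with OpenCV's SimpleBlobDetector; every parameter value here is a guess to be tuned on your images:

import cv2

params = cv2.SimpleBlobDetector_Params()
params.filterByColor = True
params.blobColor = 255             # look for bright blobs (the default is dark blobs)
params.filterByCircularity = True
params.minCircularity = 0.6
params.filterByArea = True
params.minArea = 20
params.maxArea = 7000
params.filterByConvexity = False
params.filterByInertia = False

detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(img_gray)   # img_gray: your max-channel grayscale image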
As the other user said, Hough circles aren't the best approach here, because the Hough transform looks for perfect circles only, whereas your target is "circular" but not a circle (due to motion blur, light bleed/reflection, noise, etc.).
I suggest converting the image to HSV, then filtering by hue/color and intensity to get a binary mask instead of using grayscale directly (that will help remove background and noise and limit the search area).
Then use findContours() (faster than blob detection) and keep contours with high circularity and an expected size/area range, and maybe check solidity as well; a sketch of this pipeline appears below.
area = cv2.contourArea(contour)
perimeter = cv2.arcLength(contour,True)
circularity = 4*np.pi*area / (perimeter**2)
solidity = area/cv2.contourArea(cv2.convexHull(contour))
Your biggest problem will be the orb contour merging with the background due to low contrast, so maybe an adaptive threshold could help.
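A sketch of that pipeline; the HSV bounds are placeholders for a bright pink/magenta orb, every threshold is a guess, and the findContours call uses the OpenCV 4.x return signature:

import cv2
import numpy as np

img = cv2.imread("frame.png")                               # hypothetical input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

lower = np.array([140, 60, 120])                            # placeholder hue/saturation/value bounds
upper = np.array([175, 255, 255])
mask = cv2.inRange(hsv, lower, upper)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
orb_candidates = []
for contour in contours:
    area = cv2.contourArea(contour)
    perimeter = cv2.arcLength(contour, True)
    if perimeter == 0 or not (10 < area < 7000):            # size gate; limits are guesses
        continue
    circularity = 4 * np.pi * area / (perimeter ** 2)
    solidity = area / cv2.contourArea(cv2.convexHull(contour))
    if circularity > 0.7 and solidity > 0.9:
        orb_candidates.append(contour)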

scaling glyphs in data units (not screen units)

I am plotting both wedges and triangles on the same figure. The wedges scale up as I zoom in (I like this), but the triangles do not (I wish they did), presumably because wedges are sized in data units (via the radius property) and triangles in screen units (via the size property).
Is it possible to switch the triangles to data units, so everything scales up during zoom in?
I am using bokeh version 0.12.4 and python 3.5.2 (both installed via Anaconda).
Markers (e.g. Triangle) are really meant for use as "scatter" plot markers. With the exception of Circle, they only accept screen dimensions (pixels) for size. If you need triangular regions that scale with data-space range changes, your options are to use patch or patches to draw the triangles as polygons (either one at a time, or "vectorized", respectively).
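A minimal sketch of the patches approach; the vertex coordinates are made up, but since they are in data units the triangles rescale on zoom:

from bokeh.plotting import figure, show

p = figure()
# Each sublist holds one triangle's x- (or y-) vertices, in data units.
xs = [[1.0, 2.0, 1.5], [3.0, 4.0, 3.5]]
ys = [[1.0, 1.0, 2.0], [2.0, 2.0, 3.0]]
p.patches(xs, ys, fill_color="navy", fill_alpha=0.5, line_color="black")
show(p)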

What is the real definition of resolution?

I read everywhere that resolution is defined by the number of pixels on a screen.
But imagine 1000 x 1000 pixels on a screen the size of 20 skyscrapers versus 999 x 999 pixels on a box of matches: the skyscraper-sized screen would look 'low-res' and the matchbox screen 'high-res', even though they have nearly the same number of pixels. Instinctively, I would say that the box of matches screen is higher resolution than the skyscrapers screen.
Am I wrong to say this? Is resolution definitely defined by the total number of pixels instead of the dots per inch?
Indeed, in the context of displays, the term resolution says nothing about pixel density. As stated in Wikipedia's article on Display Resolution:
The term "display resolution" is usually used to mean pixel dimensions, the number of pixels in each dimension (e.g. 1920 × 1080), which does not tell anything about the pixel density of the display on which the image is actually formed: broadcast television resolution properly refers to the pixel density, the number of pixels per unit distance or area, not total number of pixels. In digital measurement, the display resolution would be given in pixels per inch (PPI)
The definition of resolution varies with the context; everything has its own unit of measurement.
When you talk about screens (monitors), a screen has pixels, not dots, which is why its resolution is measured in pixels.
When you talk about printing or video, it is all about dots per inch. In your case, the box of matches is not a screen; it is printed paper.
For example, you may have heard people quote DPI (not resolution) when scanning documents.
So don't confuse yourself with a definition of resolution that is meant for a different context.

Why do we use the term DPI for matters involving images on computers

I'm told that DPI and points are no longer relevant terminology for graphical displays on computer screens and mobile devices, yet we use the term "High DPI Aware", and in Windows you can set various DPI levels (96, 120, 144, 192).
Here is my understanding of the various terms that are used in displaying images on computer monitors and devices:
DPI = number of dots in one linear inch. But DPI refers to printers and printed images.
Resolution = the number of pixels that make up a picture, whether it is printed on paper or displayed on a computer screen. Higher resolution provides the capability to display more detail. Higher DPI = higher resolution; however, resolution does not refer to physical size, it refers to the number of pixels in each dimension.
DPI Awareness = an app takes the DPI setting into account, making it possible for an application to behave as if it knew the real size of the pixels.
Points and Pixels: (There are 72 points per inch.)
At 300 DPI, there are 300 pixels per inch. So 4.16 Pixels = 1 point.
At 96 DPI there are 1.33 pixels in one point.
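The conversions above reduce to one formula, pixels per point = DPI / 72; a trivial check in plain Python:

def pixels_per_point(dpi):
    # 72 points per inch, so pixels per point = DPI / 72.
    return dpi / 72.0

print(pixels_per_point(300))   # 4.1666... pixels per point
print(pixels_per_point(96))    # 1.3333... pixels per point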
Is there a nice way to "crisply" describe the relationship between DPI, PPI, Points, and Resolution?
You are correct that DPI refers to the maximum amount of detail per unit of physical length.
Computer screens are devices that have a physical size, so we speak of the number of pixels per inch they have. Traditionally this value has been around 80 PPI, but now it can be up to 400 PPI.
The notion of "High DPI Aware" (e.g. Retina) is based on the fact that physical screen sizes don't change much over time (for example, there have been 10-inch tablets for more than a decade), but the number of pixels we pack into the screens is increasing. Because the size isn't increasing, it means the density - or the PPI - must be increasing.
Now, when we want to display an image on a screen that has more pixels than an older screen, we can either:
Map the old pixels 1:1 onto the new screen. The physical image is smaller due to the increased density. People start to complain about how small the icons and text are.
Stretch the old image and fill in the extra details. The physical image is the same size, but now there are more pixels to represent the content. For example, this results in font curves being smoother and photographs showing more fine details.
The term DPI (Dots Per Inch) to refer to device or image resolution came into common use well before the invention of printers that could print multiple dots per pixel. I remember using it in the 1970's. The term PPI was invented later to accommodate the difference, but the old usage still lingers in places such as Windows which was developed in the 1980's.
The DPI assigned in Windows rarely corresponds to the actual PPI of the screen. It's merely a way to specify the intended scaling of elements such as fonts.
DPI vs. resolution – What’s the difference?
The acronym dpi stands for dots per inch. Similarly, ppi stands for pixels per inch. So, why have two different acronyms for measuring roughly the same thing? Because there is a key difference between the two and if you don’t understand this difference it can have a negative impact on your digital signage project.
Part of the confusion between the two terms stems from the fact that many people who use them are lazy and tend to use the terms interchangeably. The simplest way of thinking about them is that one is digital (ppi) and describes what you see on the computer screen, and the other is physical (dpi), for example how an image appears when you print it out on a piece of paper.
I suggest you check this in-depth article on the technicalities of this topic:
https://blog.viewneo.com/blog/72-dpi-resolution-vs-300-dpi-for-digital-solutions/

Terrain tile scale in case of tilted camera

I am working on a 3D terrain visualization tool right now. The surface is logically covered with square tiles.
Suppose I want to draw a picture on these tiles. The level of detail for the picture has to be selected according to the current camera scale, which is calculated for each tile individually.
In the case of a vertical camera (no tilt, i.e. the camera looks perpendicularly at the ground), all tiles have the same scale, namely the camera height above the ground divided by the focal length.
In the accompanying figure (not reproduced here), the red triangle is the camera with no tilt, BG is the camera height above the ground and EG is the focal length; then scale = AC/DF = BG/EG.
But if the camera has tilt (i.e. the pitch angle isn't 0), then the scale changes from tile to tile (even from point to point).
So I wonder: is there any method to produce a reasonable scale for each tile in that case?
There may be (there almost surely is) a more straightforward solution, but what you could do is regular world to screen coordinate conversion.
You just take the coordinates of the tile's bounding points and calculate which pixels on the screen they project to (with floating-point precision, of course). From this, I believe you can calculate the "scale" you mention.
This is applicable to any point or set of points in the world space.
Here is a tutorial on how to do this "by hand".
If you are rendering the tiles with OpenGL or DirectX, you can do this much easier.
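A rough NumPy sketch of that idea; the view-projection matrix, viewport size, and tile corner coordinates are assumed to come from your renderer, and all names are made up:

import numpy as np

def world_to_screen(p_world, view_proj, viewport_w, viewport_h):
    # Standard pipeline: world -> clip -> NDC -> pixel coordinates.
    clip = view_proj @ np.append(p_world, 1.0)
    ndc = clip[:3] / clip[3]
    x = (ndc[0] * 0.5 + 0.5) * viewport_w
    y = (1.0 - (ndc[1] * 0.5 + 0.5)) * viewport_h
    return np.array([x, y])

def tile_edge_scale(corner_a, corner_b, view_proj, viewport_w, viewport_h):
    # Scale for one tile edge: world-space length divided by its projected length in pixels.
    a = np.asarray(corner_a, dtype=float)
    b = np.asarray(corner_b, dtype=float)
    pa = world_to_screen(a, view_proj, viewport_w, viewport_h)
    pb = world_to_screen(b, view_proj, viewport_w, viewport_h)
    return np.linalg.norm(b - a) / np.linalg.norm(pb - pa)   # world units per pixel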

Resources