How does Chrome know what a centimeter is?

As stated in the question title, where do Chrome, other web browsers and programs like MS Paint get the information about how many pixels are needed to make up a physical centimeter?
As far as I know, such programs can't just assume a nominal pixel size, as monitors with the same resolution can differ greatly in their active screen size. So for this to work, the display has to tell the computer not only how many pixels it has but also how big (in terms of physical size) its active area is.
I couldn't find any information about this question on the Internet, but my personal guess would be that the OS extracts this information from the EDID of the monitor.
But does anyone here know the answer to this question for sure?
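For reference, the EDID block does carry such a field: bytes 21 and 22 of the base block hold the maximum horizontal and vertical image size in centimeters. Below is a minimal sketch of reading it on Linux; the /sys path is illustrative and depends on your GPU and connector.

```python
# Sketch: read a monitor's reported physical size from its EDID blob on Linux.
# The /sys path below is illustrative; it depends on your GPU and connector.

def physical_size_cm(edid: bytes):
    """Return (width_cm, height_cm) from the base EDID block."""
    if len(edid) < 128 or edid[:8] != b"\x00\xff\xff\xff\xff\xff\xff\x00":
        raise ValueError("not a valid EDID block")
    # Bytes 21 and 22 hold the maximum image size in centimeters.
    return edid[21], edid[22]

if __name__ == "__main__":
    with open("/sys/class/drm/card0-HDMI-A-1/edid", "rb") as f:
        width_cm, height_cm = physical_size_cm(f.read())
    print(f"Reported active area: {width_cm} cm x {height_cm} cm")
```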

As chiliNUT pointed out in the comments, physical units in browsers don't necessarily match physical units in reality, but are instead tied to the size of a pixel (e.g. 1 in = 96 px) as stated in the link below:
https://developer.mozilla.org/en-US/docs/Web/CSS/length#Absolute_length_units
Absolute length units
Absolute length units represent a physical measurement when the physical properties of the output medium are known, such as for print layout. This is done by anchoring one of the units to a physical unit, and then defining the others relative to it. The anchor is done differently for low-resolution devices, such as screens, versus high-resolution devices, such as printers.
For low-dpi devices, the unit px represents the physical reference pixel; other units are defined relative to it. Thus, 1in is defined as 96px, which equals 72pt. The consequence of this definition is that on such devices, dimensions described in inches (in), centimeters (cm), or millimeters (mm) don't necessarily match the size of the physical unit with the same name.
For high-dpi devices, inches (in), centimeters (cm), and millimeters (mm) are the same as their physical counterparts. Therefore, the px unit is defined relative to them (1/96 of 1 inch).
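To make the anchoring concrete, here is a small sketch of the px-anchored conversion described above (1 in = 96 px, 1 in = 2.54 cm, 1 in = 72 pt):

```python
# Sketch: CSS absolute lengths on a px-anchored ("low-dpi") device.
# 1in = 96px, 1in = 2.54cm = 25.4mm, 1in = 72pt, 1pc = 12pt.

CSS_PX_PER_UNIT = {
    "px": 1.0,
    "in": 96.0,
    "cm": 96.0 / 2.54,
    "mm": 96.0 / 25.4,
    "pt": 96.0 / 72.0,
    "pc": 96.0 / 6.0,     # 1pc = 12pt = 16px
}

def css_to_px(value: float, unit: str) -> float:
    return value * CSS_PX_PER_UNIT[unit]

# 1cm of CSS is always ~37.8px, regardless of how large a real
# centimeter is on your particular monitor.
print(css_to_px(1, "cm"))   # 37.795...
```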

Related

How do I get base-space transform from CGContext?

The offset for CGContext.setShadow has to be specified in base-space:
offset - Specifies a translation in base-space.
(from https://developer.apple.com/documentation/coregraphics/cgcontext/1455205-setshadow)
What is this "base-space"?
Semi-related docs have this explanation:
The drawing (user) coordinate system. This coordinate system is used when you issue drawing commands.
The view coordinate system (base space). This coordinate system is a fixed coordinate system relative to the view.
The (physical) device coordinate system. This coordinate system represents pixels on the physical screen.
(from: https://developer.apple.com/library/archive/documentation/2DDrawing/Conceptual/DrawingPrintingiOS/GraphicsDrawingOverview/GraphicsDrawingOverview.html)
This makes sense. However, how do I get the transform of this base-space? There is CGContext.userSpaceToDeviceSpaceTransform, but that seems to be the transform from user->physical. How do I get from user->base or base->physical?
I believe that base-space is equivalent to user-space with an identity transform matrix. In Apple's documentation, figure 1-1, they show base-space with the origin in the upper-left and an identity transform (the indicated pixel at (3, 5) is 3 to the right and 5 down, as expected).
Thus, the shadow offset is in untransformed units. This is probably convenient, as you most likely want the shadow offset to stay the same regardless of the scale factor. (If you scale a piece of vector clip art in a PowerPoint presentation, you want the shadow to be offset the same no matter how far you scale it up.)
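If that interpretation is right, the missing transforms follow from ordinary affine composition. A conceptual sketch in plain Python (not the Core Graphics API; it assumes base-space really is the context's identity-CTM space, and the example matrices are made up):

```python
import numpy as np

# A 2D affine transform as a 3x3 matrix acting on column vectors [x, y, 1].
def affine(a, b, c, d, tx, ty):
    return np.array([[a, c, tx],
                     [b, d, ty],
                     [0.0, 0.0, 1.0]])

# Suppose we know user->device (what userSpaceToDeviceSpaceTransform reports)
# and base->device (the same transform captured while the CTM is still the
# identity, e.g. right after the context was created).
base_to_device = affine(2, 0, 0, 2, 0, 0)      # e.g. a 2x backing scale
user_to_device = affine(3, 0, 0, 3, 20, 20)    # backing scale plus user scaling

# Then user->base = (base->device)^-1 . (user->device),
# and base->device is already the "base->physical" transform.
user_to_base = np.linalg.inv(base_to_device) @ user_to_device
print(user_to_base)    # 1.5x scale with a (10, 10) translation, in this example
```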

What dimensional units are used in PyQt4?

When using "setMinimumHeight(...)/setMinimumWidth(...)" what units are the arguments in? I'm not turning up anything online, the book I bought doesn't address it and based on my experiments the units certainly aren't pixels. Thanks in advance.
Those parameters are measured in pixels, but there are other things at play here that are unfortunately harder to deal with and may be complicating your measurements.
Take a look at the following two images. The resolution of my screen remains at 3840x2160 but the "Scale Factor" that Windows suggests varies between 100% and 250%.
[Screenshots: the same GUI at Scale Factor = 100% and at Scale Factor = 250%]
The ruler has actually changed size, which could give you the impression that the size policy of these widgets isn't expressed in pixels. Note that the size of each widget starts at the grey, not at the blue. Additionally, even though Qt maintains the size of the widget in pixels independently of Windows' "Scale Factor", the same can't be said for the label in the center, which does change in size depending on the scaling.
I don't know exactly how you are taking your measurements, what the GUI is, or what your display settings are, but all of those can contribute to the confusion around sizing in Qt.
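A quick way to convince yourself that these calls are in pixels is to compare them with the logical DPI that Qt reports. A minimal PyQt4 sketch (the sizes and behaviour under different scale factors are just what I'd expect, so treat it as a starting point):

```python
import sys
from PyQt4 import QtGui

app = QtGui.QApplication(sys.argv)

w = QtGui.QWidget()
w.setMinimumWidth(200)    # 200 pixels
w.setMinimumHeight(100)   # 100 pixels
w.show()

# logicalDpiX()/logicalDpiY() are what Qt uses to convert point-based font
# sizes to pixels; they follow the OS "Scale Factor", while the widget's
# minimum size set above stays in pixels.
print("logical DPI:", w.logicalDpiX(), "x", w.logicalDpiY())
print("minimum size:", w.minimumWidth(), "x", w.minimumHeight())

sys.exit(app.exec_())
```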

Why do we use the term DPI for matters involving images on computers

I'm told that DPI and points are no longer relevant in terminology involving graphical displays on computer screens and mobile devices, yet we use the term "High DPI Aware", and in Windows you can set various DPI levels (96, 120, 144, 192).
Here is my understanding of the various terms that are used in displaying images on computer monitors and devices:
DPI = number of dots in one linear inch. But DPI refers to printers and printed images.
Resolution = the number of pixels that make up a picture, whether it is printed on paper or displayed on a computer screen. Higher resolution provides the capability to display more detail. Higher DPI = higher resolution; however, resolution does not refer to size, it refers to the number of pixels in each dimension.
DPI Awareness = an app takes the DPI setting into account, making it possible for an application to behave as if it knew the real size of the pixels.
Points and Pixels: (There are 72 points per inch.)
At 300 DPI, there are 300 pixels per inch. So about 4.17 pixels = 1 point.
At 96 DPI there are 1.33 pixels in one point.
Is there a nice way to "crisply" describe the relationship between DPI, PPI, Points, and Resolution?
You are correct that DPI refers to the maximum amount of detail per unit of physical length.
Computer screens are devices that have a physical size, so we speak of the number of pixels per inch they have. Traditionally this value has been around 80 PPI, but now it can be up to 400 PPI.
The notion of "High DPI Aware" (e.g. Retina) is based on the fact that physical screen sizes don't change much over time (for example, there have been 10-inch tablets for more than a decade), but the number of pixels we pack into the screens is increasing. Because the size isn't increasing, it means the density - or the PPI - must be increasing.
Now when we want to display an image on a screen that has more pixels than an older screen, we can either:
Map the old pixels 1:1 onto the new screen. The physical image is smaller due to the increased density. People start to complain about how small the icons and text are.
Stretch the old image and fill in the extra details. The physical image is the same size, but now there are more pixels to represent the content. For example, this results in font curves being smoother and photographs showing more fine details.
The term DPI (Dots Per Inch) to refer to device or image resolution came into common use well before the invention of printers that could print multiple dots per pixel. I remember using it in the 1970s. The term PPI was invented later to accommodate the difference, but the old usage still lingers in places such as Windows, which was developed in the 1980s.
The DPI assigned in Windows rarely corresponds to the actual PPI of the screen. It's merely a way to specify the intended scaling of elements such as fonts.
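A compact way to pin the relationships down (a sketch; the "scale" figure is just the Windows DPI value divided by the 96 DPI baseline):

```python
# Sketch: relationships between DPI/PPI, points, and pixels.
# Points per inch is fixed at 72; Windows' reference DPI is 96.

POINTS_PER_INCH = 72.0
BASELINE_DPI = 96.0

def px_per_point(dpi):
    return dpi / POINTS_PER_INCH

def scale_percent(dpi):
    return 100.0 * dpi / BASELINE_DPI

for dpi in (96, 120, 144, 192):
    print(f"{dpi:>3} DPI: {px_per_point(dpi):.2f} px/pt, "
          f"Windows scale {scale_percent(dpi):.0f}%")

#  96 DPI: 1.33 px/pt, Windows scale 100%
# 120 DPI: 1.67 px/pt, Windows scale 125%
# 144 DPI: 2.00 px/pt, Windows scale 150%
# 192 DPI: 2.67 px/pt, Windows scale 200%
```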
DPI vs. resolution – What’s the difference?
The acronym dpi stands for dots per inch. Similarly, ppi stands for pixels per inch. So, why have two different acronyms for measuring roughly the same thing? Because there is a key difference between the two and if you don’t understand this difference it can have a negative impact on your digital signage project.
Part of the confusion between the two terms stems from the fact that many people who use them are lazy and tend to use them interchangeably. The simplest way of thinking about them is that one is digital (ppi) and represents what you see on the computer screen, while the other is physical (dpi), for example how an image appears when you print it out on a piece of paper.
I suggest you check this in-depth article on the technicalities of this topic:
https://blog.viewneo.com/blog/72-dpi-resolution-vs-300-dpi-for-digital-solutions/

Number of polygons in a 3D object and the rendering workload?

Is there any relation (preferably an equation) between the number of polygons in a 3D object and the rendering workload? I want to see how much the rendering workload would be increased if for instance the number of polygons doubles.
There is no clear connection between the arbitrary number of polygons and the mythical "workload".
See the following samples:
You render a cube with 6 faces composed of 12 triangles. You get, say, 1000 fps (without vsync). When you tessellate the cube into 120 triangles, most likely the fps counter remains at 1000.
You render a single fullscreen-sized quad with a heavy fragment shader that does a lot of calculation. You get 0.5 fps (or more, but I hope you get the point).
Another extreme: you are rendering a thousand similar cubes, each with a different texture. The render state changes will take most of the time, not the actual rendering.
So, polygons may cover different screen areas and they may not be rendered within a single primitive batch. If you're talking about one big vertex array with a large number of polygons, then for certain scenarios the performance change should be roughly linear. "Roughly" because the video card and the drivers clip the invisible polys and perform early-out tests for each pixel being rendered.
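To make the "roughly linear in one scenario" point concrete, here's a back-of-envelope cost model (a sketch with made-up constants, not how a real GPU schedules work; the point is only which term dominates):

```python
# Toy per-frame cost model: vertex work scales with triangle count,
# fragment work with shaded pixels, and submission with draw calls.

def frame_cost(n_triangles, shaded_pixels, n_draw_calls,
               vertex_cost=1.0, fragment_cost=0.05, draw_call_cost=5000.0):
    return (n_triangles * 3 * vertex_cost      # vertex processing
            + shaded_pixels * fragment_cost    # fragment shading
            + n_draw_calls * draw_call_cost)   # state changes / submission

# Tiny cube covering the whole screen: doubling triangles barely matters.
print(frame_cost(12, 2_000_000, 1), frame_cost(24, 2_000_000, 1))

# Dense mesh that is small on screen: cost is roughly linear in triangle count.
print(frame_cost(1_000_000, 50_000, 1), frame_cost(2_000_000, 50_000, 1))
```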
Could you define 'workload'? – Erno
Well, I mean the computational work. I want to see how much overhead (for the GPU, CPU, memory, ...) would be added. Actually, I want to infer the energy usage of the device. – user1196937
If the actual question is a comparison of energy usage: you will have to pick specific configurations and test them. Energy usage varies greatly from GPU to GPU and from machine to machine.
Some GPU manufacturers give very detailed information on the performance of their processors, but when you want to compare them you will need an actual machine.

Contact area size in MultitouchSupport private framework

I've been playing around with the Carbon MultitouchSupport private framework and I've been able to retrieve various types of data.
Among these, each contact seems to have a size and is also described by an ellipse (angle, minor axis, major axis). However, I haven't been able to identify the frame of reference used for the size and for the minor and major axes.
If anybody has been able to find it out, I'm interested in your information.
Thanks in advance
I've been using the framework for two years now, and I've found that the ellipse is not in standard units (e.g. inches, millimeters). You can approximate millimeters by doubling the values you get for the ellipse.
Here's how I derived the ellipse information.
First, my best guess for how it works is that it's close to Synaptics "units per mm": http://ccdw.org/~cjj/l/docs/ACF126.pdf But since Apple has not released any of that information for developers, I'm relying on information that I print to the console.
You may get slightly different values based on the dimensions of the device (e.g. native trackpad vs magic mouse) you're using with the MultiTouchSupport.framework. This might also be caused by the differences in the surface (magic mouse is curved).
The code on http://www.steike.com/code/multitouch/ has a parameter called mm. This gives you the raw (non-normalized) position and velocity for the device.
Based on the width's observed min & max values from mm (-47.5, 52.5), the trackpad is ~100 units wide (~75 units the other way). The trackpad is about 100 mm wide by 80 mm deep. But no, it's not a direct unit-to-millimeter translation. I think the parameter being named 'mm' may just be a coincidence.
My forearm can cover about 90% of the surface of the trackpad. After laying it across the trackpad, the output reads about 58 units wide by 36 units long, with a size of 55. If you double the units you get 116 by 72, which is really close to 100 mm by 80 mm. So that's why I say to just double the units to approximate millimeters. I've done this with my forearm the other way and with my palm, and the approximations still seem to work.
The size of 55 doesn't seem to coincide with the values of the ellipse. I'm inclined to believe that the ellipse is an approximation of the contact's surface dimensions and that size is the actual surface area (probably in decimeters).
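In code, the heuristic above amounts to nothing more than this (a sketch; the axis values are the forearm numbers from the experiment above, and the 2.0 factor is the empirical guess, not a documented constant):

```python
# Sketch of the "double it" heuristic described above. The ellipse axes
# come out of MultitouchSupport in its own device units; doubling them
# only roughly approximates millimeters, per the experiments above.

def approx_ellipse_mm(major_axis_units, minor_axis_units):
    MM_PER_UNIT = 2.0   # empirical guess, not a documented constant
    return major_axis_units * MM_PER_UNIT, minor_axis_units * MM_PER_UNIT

print(approx_ellipse_mm(58, 36))   # roughly (116, 72), close to 100 mm x 80 mm
```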
Sorry there's no straight answer (this is after all a reverse engineering project) but maybe this information can help you find the answer yourself.
(Note: I'd be curious to know what you're working on.)
