Why do we use the term DPI for matters involving images on computers (resolution)?

I'm told that DPI and points are no longer relevant terminology for graphical displays on computer screens and mobile devices, yet we use the term "High DPI Aware", and in Windows you can set various DPI levels (96, 120, 144, 192).
Here is my understanding of the various terms that are used in displaying images on computer monitors and devices:
DPI = number of dots in one linear inch. But DPI refers to printers and printed images.
Resolution = the number of pixels that make up a picture, whether it is printed on paper or displayed on a computer screen. Higher resolution provides the capability to display more detail. Higher DPI = higher resolution; however, resolution does not refer to physical size, it refers to the number of pixels in each dimension.
DPI Awareness = an app takes the DPI setting into account, making it possible for an application to behave as if it knew the real size of the pixels.
Points and Pixels: (There are 72 points per inch.)
At 300 DPI, there are 300 pixels per inch. So about 4.17 pixels = 1 point.
At 96 DPI there are 1.33 pixels in one point.
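Expressed as a quick Python sketch, the conversion above is just one formula:

```python
def points_to_pixels(points, dpi):
    """1 inch = 72 points, so one point spans dpi / 72 pixels."""
    return points * dpi / 72.0

print(round(points_to_pixels(1, 300), 2))  # 4.17 pixels per point at 300 DPI
print(round(points_to_pixels(1, 96), 2))   # 1.33 pixels per point at 96 DPI
```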
Is there a nice way to "crisply" describe the relationship between DPI, PPI, Points, and Resolution?

You are correct that DPI refers to the maximum amount of detail per unit of physical length.
Computer screens are devices that have a physical size, so we speak of the number of pixels per inch they have. Traditionally this value has been around 80 PPI, but now it can be up to 400 PPI.
The notion of "High DPI Aware" (e.g. Retina) is based on the fact that physical screen sizes don't change much over time (for example, there have been 10-inch tablets for more than a decade), but the number of pixels we pack into the screens is increasing. Because the size isn't increasing, the density, or PPI, must be increasing.
Now when we want to display an image on a screen that has more pixels than an older screen, we can either:
Map the old pixels 1:1 onto the new screen. The physical image is smaller due to the increased density. People start to complain about how small the icons and text are.
Stretch the old image and fill in the extra details. The physical image is the same size, but now there are more pixels to represent the content. For example, this results in font curves being smoother and photographs showing more fine details.
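To make the two options concrete, here is a back-of-the-envelope Python sketch; the 100 px icon and the 96/192 PPI densities are illustrative, not from any particular device:

```python
def physical_inches(pixels, ppi):
    """Physical length covered by a run of pixels at a given density."""
    return pixels / ppi

icon_px = 100
print(round(physical_inches(icon_px, 96), 2))       # 1.04 in on a 96 PPI screen
# Option 1: map 1:1 onto a 192 PPI screen; the icon shrinks to half size.
print(round(physical_inches(icon_px, 192), 2))      # 0.52 in
# Option 2: stretch to 200 px; same physical size, but more detail per inch.
print(round(physical_inches(icon_px * 2, 192), 2))  # 1.04 in
```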

The term DPI (dots per inch) to refer to device or image resolution came into common use well before the invention of printers that could print multiple dots per pixel; I remember using it in the 1970s. The term PPI was coined later to accommodate the difference, but the old usage still lingers in places such as Windows, which was developed in the 1980s.
The DPI assigned in Windows rarely corresponds to the actual PPI of the screen. It's merely a way to specify the intended scaling of elements such as fonts.
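For instance, the standard Windows DPI settings from the question work out to simple scale factors relative to the 96 DPI baseline, as this small sketch shows:

```python
# Windows DPI settings expressed as scale factors (96 DPI = 100%).
for dpi in (96, 120, 144, 192):
    print(dpi, "->", "{:.0%}".format(dpi / 96))
# 96 -> 100%, 120 -> 125%, 144 -> 150%, 192 -> 200%
```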

DPI vs. resolution – What’s the difference?
The acronym dpi stands for dots per inch. Similarly, ppi stands for pixels per inch. So, why have two different acronyms for measuring roughly the same thing? Because there is a key difference between the two, and if you don't understand it, it can have a negative impact on your digital signage project.
Part of the confusion between the two terms stems from the fact that many people who use them are lazy and tend to use the terms interchangeably. The simplest way of thinking about them is that one is digital (ppi) and represents what you see on the computer screen, while the other is physical (dpi), for example how an image appears when you print it out on a piece of paper.
I suggest you check this in-depth article on the technicalities of this topic:
https://blog.viewneo.com/blog/72-dpi-resolution-vs-300-dpi-for-digital-solutions/


How does Chrome know what a centimeter is?

As stated in the question title, where do Chrome, other web browsers and programs like MS Paint get the information about how many pixels are needed to make up a physical centimeter?
As far as I know, such programs can't just assume a nominal pixel size, as monitors with the same resolution differ greatly in their active screen size. So for this to work, the display has to tell the computer not only how many pixels it has but also how big (in terms of physical size) its active area is.
I couldn't find any information about this question on the Internet, but my personal guess would be that the OS extracts this information from the EDID of the monitor.
But does anyone here know the answer to this question for sure?
As chiliNUT pointed out in the comments, physical units in browsers don't necessarily match physical units in reality, but are instead tied to the size of a pixel (e.g. 1 in = 96 px) as stated in the link below:
https://developer.mozilla.org/en-US/docs/Web/CSS/length#Absolute_length_units
Absolute length units
Absolute length units represent a physical measurement when the physical properties of the output medium are known, such as for print layout. This is done by anchoring one of the units to a physical unit, and then defining the others relative to it. The anchor is done differently for low-resolution devices, such as screens, versus high-resolution devices, such as printers.
For low-dpi devices, the unit px represents the physical reference pixel; other units are defined relative to it. Thus, 1in is defined as 96px, which equals 72pt. The consequence of this definition is that on such devices, dimensions described in inches (in), centimeters (cm), or millimeters (mm) don't necessarily match the size of the physical unit with the same name.
For high-dpi devices, inches (in), centimeters (cm), and millimeters (mm) are the same as their physical counterparts. Therefore, the px unit is defined relative to them (1/96 of 1 inch).
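To illustrate the gap between CSS pixels and real pixels, here is a rough comparison in Python; the monitor numbers (1920 horizontal pixels, 531 mm active width) are made up for the example:

```python
horizontal_pixels = 1920
active_width_mm = 531            # hypothetical EDID-reported active width

real_px_per_cm = horizontal_pixels / (active_width_mm / 10.0)
css_px_per_cm = 96 / 2.54        # CSS anchors 1in = 96px and 1in = 2.54cm

print(round(real_px_per_cm, 1))  # 36.2 physical pixels per centimeter
print(round(css_px_per_cm, 1))   # 37.8 CSS pixels per centimeter
```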

What dimensional units are used in PyQt4?

When using "setMinimumHeight(...)/setMinimumWidth(...)", what units are the arguments in? I'm not turning up anything online, the book I bought doesn't address it, and based on my experiments the units certainly aren't pixels. Thanks in advance.
Those parameters are measured in pixels, but there are other things at play here as well that unfortunately are harder to deal with and may be complicating your measurements.
Take a look at the following two images. The resolution of my screen remains at 3840x2160 but the "Scale Factor" that Windows suggests varies between 100% and 250%.
[Screenshot: the window at Scale Factor = 100%]
[Screenshot: the same window at Scale Factor = 250%]
The ruler has actually changed size, which could give you the impression that the size policy of these widgets isn't equivalent to the pixel size. Note that the size of each of these widgets starts at the grey, not at the blue. Additionally, even though Qt maintains the size of the widget in pixels independently of Windows' "Scale Factor", the same can't be said for the label in the center, which does change in size depending on the scaling.
I don't know exactly how you are taking your measurements, what the GUI is, or what your display setting is, but those all can contribute to the confusion around sizing in Qt.
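As a quick sanity check, here is a minimal PyQt4 sketch showing the minimum-size calls taking plain pixel values (the label text and sizes are arbitrary):

```python
import sys
from PyQt4 import QtGui

app = QtGui.QApplication(sys.argv)

button = QtGui.QPushButton("At least 200 x 50 px")
button.setMinimumWidth(200)   # argument is in pixels
button.setMinimumHeight(50)   # likewise pixels
button.show()

sys.exit(app.exec_())
```

With Windows scaling at 100%, the button should measure 200 x 50 physical pixels on screen; at other scale factors, the effects shown in the screenshots above muddy the picture.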

What is the real definition of resolution?

I read everywhere that resolution is defined by the number of pixels on a screen.
But if you imagine 1000 x 1000 pixels on a screen the size of 20 skyscrapers and compare it to 999 x 999 pixels on a box of matches, the skyscraper screen would look 'low-res' and the box-of-matches screen 'high-res'. Instinctively, I would say that the box-of-matches screen is the higher-resolution one.
Am I wrong to say this? Is resolution definitely defined by the total number of pixels instead of the dots per inch?
Indeed, in the context of displays, the term resolution says nothing about pixel density. As stated in Wikipedia's article on Display Resolution:
The term "display resolution" is usually used to mean pixel dimensions, the number of pixels in each dimension (e.g. 1920 × 1080), which does not tell anything about the pixel density of the display on which the image is actually formed: broadcast television resolution properly refers to the pixel density, the number of pixels per unit distance or area, not total number of pixels. In digital measurement, the display resolution would be given in pixels per inch (PPI)
The definition of resolution varies with context; everything has its own unit of measurement.
When you talk about screens (monitors), a screen has pixels, not dots, which is why its resolution is measured in pixels.
When you talk about printing or video, it is all about dots per inch. In your case, the box of matches is not a screen; it is printed paper.
For example, you might have heard people quote DPI (not resolution) when scanning documents.
So don't confuse yourself with a definition of resolution that is meant for a different context.

Contact area size in MultitouchSupport private framework

I've been playing around with the Carbon multitouch support private framework and I've been able to retrieve various types of data.
Among these, each contact seems to have a size and is also described by an ellipsoid (angle, minor axis, major axis). However, I haven't been able to identify the frame of reference used for the size and the minor and major axes.
If anybody has been able to find it out, I'm interested in your information.
Thanks in advance
I've been using the framework for two years now and I've found that the ellipse is not in standard units (e.g. inches, millimeters). You could approximate millimeters by doubling the values you get for the ellipse.
Here's how I derived the ellipse information.
First, my best guess for how it works is that it's close to Synaptics "units per mm": http://ccdw.org/~cjj/l/docs/ACF126.pdf But since Apple has not released any of that information for developers, I'm relying on information that I print to the console.
You may get slightly different values based on the dimensions of the device (e.g. native trackpad vs magic mouse) you're using with the MultiTouchSupport.framework. This might also be caused by the differences in the surface (magic mouse is curved).
The code on http://www.steike.com/code/multitouch/ has a parameter called mm. This gives you the raw (non-normalized) position and velocity for the device.
Based on the observed min & max width values from mm (-47.5, 52.5), the trackpad is ~100 units wide (~75 units the other way). The physical trackpad is about 100 mm wide by 80 mm deep. But no, it's not a direct unit-to-millimeter translation; I think the parameter being named 'mm' may have just been a coincidence.
My forearm can cover about 90% of the surface of the trackpad. After laying it across the trackpad, the output reads about 58 units wide by 36 units long, with a size of 55. If you double those units you get 116 by 72, which is really close to 100 mm by 80 mm. So that's why I say just double the units to approximate millimeters. I've done this with my forearm the other way, and with my palm, and the approximations still seem to work.
The size of 55 doesn't seem to coincide with the values of ellipse. I'm inclined to believe that ellipse is an approximation of the surface dimensions and size is the actual surface area (probably in decimeters).
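If you just want a number to work with, the doubling heuristic above can be written as a one-liner; the function name is mine, not part of MultiTouchSupport.framework:

```python
def ellipse_units_to_mm(units):
    """Approximate millimeters by doubling the raw ellipse value."""
    return units * 2.0

print(ellipse_units_to_mm(58))  # 116.0 mm, vs. the ~100 mm trackpad width
print(ellipse_units_to_mm(36))  # 72.0 mm, vs. the ~80 mm trackpad depth
```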
Sorry there's no straight answer (this is after all a reverse engineering project) but maybe this information can help you find the answer yourself.
(Note: out of curiosity, what are you working on?)

Smallest recommended button size

Is there a recommended smallest button size under normal conditions?
By "recommended" I mean prescribed by some document like:
Apple HCI Guidelines
Windows UX Guidelines
or some ISO standard..
By "normal" conditions I mean:
desktop/office use
standard 96dpi monitor resolution
mouse/touchpad for pointing (no touchscreen)
non-disabled or visually impaired users
standard "theme" (no large fonts/icons)
Microsoft's UX Guide for Windows 7 and Vista recommends:
"Make click targets at least 16x16 pixels so that they can be easily clicked by any input device. For touch, the recommended minimum control size is 23x23 pixels (13x13 DLUs)." where"A dialog unit (DLU) is a device-independent metric where one horizontal dialog unit equals one-fourth of the average character width for the current font and one vertical dialog unit equals one-eighth of the character height for the current font. Because characters are roughly twice as high as they are wide, a horizontal DLU is roughly the same size as a vertical DLU, but it's important to realize that DLUs are not a square unit."
You may also want to look up Fitts' Law, which models the time needed to hit a target as a function of the target's distance and size. That can help mathematically determine the trade-offs of different button sizes.
Well, I try to make important/common mouse targets as large as possible without looking bad: something like 20 pixels (assuming 96 DPI) in height, and as much width as needed to accommodate the label. If the button has no label, which is very rare, I've found it's actually comfortable to have an aspect like 20w/50h (with the icon on top, not centered), since the mouse is easier to move horizontally. So it's also good to keep buttons in the same row.
In addition to what MsLis suggested, the UX Guide also recommends a minimum width of 75 pixels specifically for command buttons.
UX Guide - Recommended sizing and spacing
