What is the real definition of resolution?

I read everywhere that resolution is defined by the number of pixels on a screen.
But if you imagine 1000 x 1000 pixels on a screen the size of 20 skyscrapers and compare it to 999 x 999 pixels on a box of matches, the resolution would make the skyscrapers screen look 'low-res' and the box of matches screen look 'high-res'. Instinctively, I would say that the box of matches screen is higher resolution than the skyscrapers screen.
Am I wrong to say this? Is resolution definitely defined by the total number of pixels instead of the dots per inch?

Indeed, in the context of displays, the term resolution says nothing about pixel density. As stated in Wikipedia's article on Display Resolution:
The term "display resolution" is usually used to mean pixel dimensions, the number of pixels in each dimension (e.g. 1920 × 1080), which does not tell anything about the pixel density of the display on which the image is actually formed: broadcast television resolution properly refers to the pixel density, the number of pixels per unit distance or area, not total number of pixels. In digital measurement, the display resolution would be given in pixels per inch (PPI)

The definition of resolution varies with context; everything has its own unit of measurement.
When you talk about screens (monitors), a screen is made of pixels, not dots, which is why its resolution is expressed in pixels.
When you talk about printing or video, it is all about dots per inch. In your case, the matchbox is not a screen; it is printed paper.
For example, you may have heard people quote DPI (not resolution) when scanning documents.
So don't confuse yourself with a definition of resolution that is meant for a different context.

Related

Text Display Implementation Across Multiple Platforms

I have been scouring the internet for weeks trying to figure out exactly how text (such as what you are reading right now) is displayed to the screen.
Results have been shockingly sparse.
I've come across the concepts of rasterization, bitmaps, vector graphics, etc. What I don't understand is how the underlying implementation works so uniformly across all systems (Windows, Linux, etc.) in a way we as humans understand. Is there a specification defined somewhere? Is the implementation code open source and viewable by the general public?
My understanding as of right now, is this:
Create a font with an external drawing program, one character at a time
Add these characters into a font file that is understood by language-specific libraries
These characters are then read from the file as needed by the GPU and displayed to the screen in a linear fashion as defined by the parenting code.
Additionally, if characters are defined in a font file such as 'F, C, ..., Z', how are vector graphics (which rely on a set of coordinate points) supported? Without coordinate points, rasterization would seem the only option for size changes.
This is about as far as my assumptions/research goes.
If you are familiar with this topic and can provide a detailed answer that may be useful to myself and other readers, please answer at your discretion. I find it fascinating just how much code we take for granted that is remarkably complicated under the hood.
The following provides an overview (leaving out lots of gory details):
Two key components for display of text on the Internet are (i) character encoding and (ii) fonts.
Character encoding is a scheme by which characters, such as the Latin capital letter "A", are assigned a representation as a byte sequence. Many different character encoding schemes have been devised in the past. Today, what is almost ubiquitously used on the Internet is Unicode. Unicode assigns each character to a code point, which is an integer value; e.g., Unicode assigns LATIN CAPITAL LETTER A to the code point 65, or 41 in hexadecimal. By convention, Unicode code points are referred to using four to six hex digits with "U+" as a prefix. So, LATIN CAPITAL LETTER A is assigned to U+0041.
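As a quick illustration of code points versus byte sequences, a minimal Python sketch:

    # A character, its Unicode code point, and one possible byte encoding of it.
    ch = "A"
    print(ord(ch))                 # 65 -> code point U+0041
    print(hex(ord(ch)))            # 0x41
    print("é".encode("utf-8"))     # b'\xc3\xa9': one code point, but a 2-byte UTF-8 sequence
    print(chr(0x0041))             # 'A': back from the code point to the character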
Fonts provide the graphical data used to display text. There have been various font formats created over the years. Today, what is ubiquitously used on the Internet are fonts that follow the OpenType spec (which is an extension of the TrueType font format created back around 1991).
What you see presented on the screen are glyphs. An OpenType font contains data for the glyphs, and also a table that maps Unicode code points to corresponding glyphs. More precisely, the character-to-glyph mapping (or 'cmap') table maps Unicode code points to glyph IDs. The code points are defined by Unicode; the glyph IDs are a font-internal implementation detail, and are used to look up the glyph descriptions and related data in other tables.
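If you want to poke at that mapping yourself, the third-party fontTools library can read the 'cmap' table of any TrueType/OpenType font. A rough sketch (the font file name is just a placeholder):

    from fontTools.ttLib import TTFont   # third-party: pip install fonttools

    font = TTFont("SomeFont.otf")           # placeholder path to any OpenType/TrueType font
    cmap = font.getBestCmap()               # code point -> glyph name, built from the 'cmap' table
    glyph_name = cmap[0x0041]               # LATIN CAPITAL LETTER A
    glyph_id = font.getGlyphID(glyph_name)  # the font-internal glyph ID
    print(glyph_name, glyph_id)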
Glyphs in an OpenType font can be defined as bitmaps, or (far more common) as vector outlines (Bezier curves). There is an assumed coordinate grid for the glyph descriptions. The vector outlines, then, are defined as an ordered list of coordinate pairs for Bezier curve control points. When text is displayed, the vector outline is scaled onto a display grid, based on the requested text size (e.g., 10 point) and pixel sizing on the display. A rasterizer reads the control point data in the font, scales as required for the display grid, and generates a bitmap that is put onto the screen at an appropriate position.
One extra detail about displaying the rasterized bitmap: most operating systems or apps will use some kind of filtering to give glyphs a smoother and more legible appearance. For example, a grayscale anti-alias filter will set display pixels at the edges of glyphs to a gray level, rather than pure black or pure white, to make edges appear smoother when the scaled outline doesn't align exactly to the physical pixel boundaries—which is most of the time.
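As a toy model of grayscale anti-aliasing: for black text on a white background, a pixel's gray level can be derived from how much of that pixel the scaled outline covers. This is only a sketch of the idea, not any particular rasterizer's algorithm:

    # coverage: fraction of the pixel's area covered by the glyph outline (0.0 .. 1.0)
    def pixel_value(coverage, foreground=0, background=255):
        # Linear blend: fully covered -> black, uncovered -> white, partial -> gray.
        return round(background + (foreground - background) * coverage)

    print(pixel_value(1.0))   # 0   (inside the glyph)
    print(pixel_value(0.0))   # 255 (outside the glyph)
    print(pixel_value(0.4))   # 153 (an edge pixel appears gray)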
I mentioned "at an appropriate position". The font has metric (positioning) information for the font as a whole and for each glyph.
The font-wide metrics will include a recommended line-to-line distance for lines of text, and the placement of the baseline within each line. These metrics are expressed in the units of the font's glyph design grid; the baseline corresponds to y=0 within the grid. To start a line, the (0,0) design grid position is aligned to where the baseline meets the edge of a text container within the page layout, and the first glyph is positioned.
The font also has glyph metrics. One of the glyph metrics is an advance width for each given glyph. So, when the app is drawing a line of text, it has a starting "pen position" at the start of the line, as described above. It then places the first glyph on the line accordingly, and advances the pen position by the amount of that first glyph's advance width. It then places the second glyph using the new pen position, and advances again. And so on as glyphs are placed along the line.
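A bare-bones sketch of that pen-advance loop; the advance widths below are made up, and for simplicity glyphs are keyed by character rather than by glyph ID:

    # Hypothetical advance widths in font design units, purely for illustration.
    advance_widths = {"H": 1450, "i": 560, "!": 600}

    def place_glyphs(text, start_x=0, start_y=0):
        """Return (glyph, x, y) placements along one line of left-to-right text."""
        pen_x, pen_y = start_x, start_y
        placements = []
        for glyph in text:
            placements.append((glyph, pen_x, pen_y))   # draw the glyph at the current pen position
            pen_x += advance_widths[glyph]             # advance the pen by that glyph's advance width
        return placements

    print(place_glyphs("Hi!"))
    # [('H', 0, 0), ('i', 1450, 0), ('!', 2010, 0)]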
There are (naturally) more complexities in laying out lines of text. What I described above is sufficient for English text displayed in a basic text editor. More generally, display of a line of text can involve substitution of the default glyphs with certain alternate glyphs; this is needed, for example, when displaying Arabic text so that characters appear cursively connected. OpenType fonts contain a "glyph substitution" (or 'GSUB') table that provides details for glyph substitution actions. In addition, the positioning of glyphs can be adjusted for various reasons; for example, to position a diacritic glyph correctly over a letter. OpenType fonts contain a "glyph positioning" ('GPOS') table that provides the position adjustment data. Operating system platforms and browsers today support all of this functionality so that Unicode-encoded text for many different languages can be displayed using OpenType fonts.
Addendum on glyph scaling:
Within the font, a grid is set up with a certain number of units per em. This is set by the font designer. For example, the designer might specify 1000 units per em, or 2048 units per em. The glyphs in the font and all the metric values (glyph advance width, default line-to-line distance, etc.) are all set in font design grid units.
How does the em relate to what content authors set? In a word processing app, you typically set text size in points. In the printing world, a point is a well defined unit for length, approximately but not quite 1/72 of an inch. In digital typography, points are defined as exactly 1/72 of an inch. Now, in a word processor, when you set text size to, say, 12 points, that really means 12 points per em.
So, for example, suppose a font is designed using 1000 design units per em. And suppose a particular glyph is exactly 1 em wide (e.g., an em dash); in terms of the design grid units, it will be exactly 1000 units wide. Now, suppose the text size is set to 36 points. That means 36 points per em, and 36 points = 1/2", so the glyph will print exactly 1/2" wide.
When the text is rasterized, it's done for a specific target device, that has a certain pixel density. A desktop display might have a pixel (or dot) density of 96 dpi; a printer might have a pixel density of 1200 dpi. Those are relative to inches, but from inches you can get to points, and for a given text size, you can get to ems. You end up with a certain number of pixels per em based on the device and the text size. So, the rasterizer takes the glyph outline defined in font design units per em, and scales it up or down for the given number of pixels per em.
For example, suppose a font is designed using 1000 units per em, and a printer is 1000 dpi. If text is set to 72 points, that's 1" per em, and the font design units will exactly match the printer dots. If the text is set to 12 points, then the rasterizer will scale down so that there are 6 font design units per printer dot.
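The scaling the rasterizer applies can be written down directly. A small sketch of the arithmetic in the two printer examples above (1000 design units per em, 1000 dpi printer):

    def pixels_per_em(point_size, dpi):
        # 1 point = 1/72 inch in digital typography, and the text size is points per em.
        return point_size / 72.0 * dpi

    def design_units_per_pixel(units_per_em, point_size, dpi):
        return units_per_em / pixels_per_em(point_size, dpi)

    # 72 pt text on a 1000 dpi printer: 1000 pixels per em, so 1 design unit per dot.
    print(design_units_per_pixel(1000, 72, 1000))   # 1.0

    # 12 pt text on the same printer: the rasterizer scales down to 6 design units per dot.
    print(design_units_per_pixel(1000, 12, 1000))   # 6.0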
At that point, the details in the glyph outline might not align to whole units in the device grid. The rasterizer needs to decide which pixels/dots get ink and which do not. The font can include "hints" that affect the rasterizer behaviour. The hints might ensure that certain font details stay aligned, or the hints might be instructions to move a Bezier control point by a certain amount based on the current pixels-per-em.
For more details, see Digitizing Letterform Designs and Font Engine from Apple's TrueType Reference Manual, which goes into lots of detail.

What dimensional units are used in PyQt4?

When using "setMinimumHeight(...)/setMinimumWidth(...)" what units are the arguments in? I'm not turning up anything online, the book I bought doesn't address it and based on my experiments the units certainly aren't pixels. Thanks in advance.
Those parameters are measured in pixels, but there are other things at play here as well that are unfortunately harder to deal with and may be complicating your measurements.
Take a look at the following two images. The resolution of my screen remains at 3840x2160 but the "Scale Factor" that Windows suggests varies between 100% and 250%.
Scale Factor = 100%
Scale Factor = 250%
The ruler has actually changed size, which could give you the impression that the size policy of these widgets isn't equivalent to the pixel size. Note that the size of each of these widgets starts at the grey, not at the blue. Additionally, even though Qt maintains the size of the widget in pixels independently of Windows' "Scale Factor", the same can't be said for the label in the center, which does change in size depending on the scaling.
I don't know exactly how you are taking your measurements, what the GUI is, or what your display setting is, but those all can contribute to the confusion around sizing in Qt.
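For reference, a minimal PyQt4 sketch: the values passed to setMinimumWidth/setMinimumHeight are plain pixel counts, and whatever the OS does with its own scale factor is a separate matter.

    import sys
    from PyQt4 import QtGui

    app = QtGui.QApplication(sys.argv)

    w = QtGui.QWidget()
    w.setMinimumWidth(400)    # pixels
    w.setMinimumHeight(300)   # pixels
    print(w.minimumWidth(), w.minimumHeight())   # 400 300

    w.show()
    sys.exit(app.exec_())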

Userforms resize snaps to arbitrary grid size

I'm trying to design a UserForm in Excel 2010, and I'm running into a very annoying stumbling block as I try to move things around, resize them, and align them so the form looks appealing.
Unfortunately, different mechanisms are snapping to differently sized grids. For example, drawing a box onto the grid snaps it to multiples of 6, which is the default option found under Tools > Options > General > Grid units. Resizing these objects snaps them to a seemingly arbitrary grid size of approximately 7.2 units.
I need these units to match up so I'm not constantly fighting myself getting these grids to function. I don't care what the actual number ends up being, I just need them to be the same. While I'm able to change the grid size, it must be a whole number, which the arbitrary grid is not.
The problem is your units: points vs. pixels.
6 points × 4/3 pixels per point = 8 pixels, a nice whole number of pixels, so all is good.
Your approximately 7.2 was, I suspect, really 7.3333333.
Say you set 11 points: at 4/3 pixels per point that becomes 14.6666 pixels, which gets rounded to 15 pixels; converting back at 3/4 points per pixel gives 11.25 points, no longer the value you entered.
Points being stored as a Single while pixels are stored as an Integer is probably what breaks the math.
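Here is the suspected round trip written out, a sketch assuming the usual 96 dpi Windows setting, where 1 point = 4/3 pixels:

    # Points are stored as a floating-point value; pixels end up as whole numbers.
    PIXELS_PER_POINT = 4.0 / 3.0    # 96 dpi / 72 points per inch

    def points_to_pixels(points):
        return round(points * PIXELS_PER_POINT)   # snapped to an integer pixel

    def pixels_to_points(pixels):
        return pixels / PIXELS_PER_POINT

    print(points_to_pixels(6))                     # 8  -> round-trips cleanly back to 6.0 points
    print(pixels_to_points(points_to_pixels(11)))  # 11 pt -> 15 px -> 11.25 pt, no longer what you typed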

DirectWrite'ing glyphs such that the em square has a specific size

I'm working on an application that renders music notation. The musical symbols are specified in regular font files, which use the convention that the height of the em square corresponds to the height of a regular five-line staff of music. For example, the glyph for a note head is approximately 0.25 em high, the distance between two lines of the staff.
When it comes to rendering, I use a coordinate system in which 4 units corresponds to the height of a five-line staff of music. Therefore, I need to render glyphs such that the em square ends up rendered 4 units high. However DirectWrite only allows specifying text size in device independent pixels (DIPs) and I'm confused about how to juggle between the coordinate systems. There are two parts to this:
From a given font size in DIPs I can compute a height in physical pixels, but what is mapped to that height? The em square or some other design-space metric?
What if I'm using some arbitrary transformation matrix? How do I specify DIPs in order to get meaningful values in the coordinate system I am using?
And for good measure:
If I get this to work, is this going to mess up font hinting because my DIP values don't have a clear relationship to physical pixels?
After some more experimentation and research, I have come to the following conclusions.
The font size specifies the size of the EM square as drawn. Drawing at 12 DIPs means that the EM square is scaled to use 12 DIPs of vertical space.
The top Y coordinate of the layoutRect parameter of the ID2D1RenderTarget::DrawText function is mapped to the top of the font's ascent (for the first line of text).
The identity matrix gives a coordinate system in which (0, 0) is the top-left and (width, height), as retrieved from ID2D1RenderTarget::GetSize, is the bottom-right, in DIPs. This means that, for any transformation matrix, the font size should be expressed in the same units as the render target's coordinate system: a vertical line of 42 units will be as tall as the em square of text drawn with a font size of 42 units.
I was unable to find information about the effect of arbitrary transformations on font hinting, however.
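For completeness, the DIP-to-physical-pixel relationship underlying point 1, as a small sketch (Direct2D/DirectWrite define a DIP as 1/96 of a logical inch):

    def dips_to_pixels(dips, dpi):
        # 1 DIP = 1/96 inch, so the scale from DIPs to physical pixels is dpi / 96.
        return dips * dpi / 96.0

    print(dips_to_pixels(12, 96))    # 12.0 -> at 96 dpi, 1 DIP == 1 physical pixel
    print(dips_to_pixels(12, 144))   # 18.0 -> at 150% scaling the em square spans more pixels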

Why do we use the term DPI for matters involving images on computers

I'm told that DPI and Points are no longer relevant in terminology involving graphical displays on computer screens and mobile devices, yet we use the term "High DPI Aware", and in Windows you can set various DPI levels (96, 120, 144, 192).
Here is my understanding of the various terms that are used in displaying images on computer monitors and devices:
DPI = number of dots in one linear inch. But DPI refers to printers and printed images.
Resolution = the number of pixels that make up a picture, whether it is printed on paper or displayed on a computer screen. Higher resolution provides the capability to display more detail. Higher DPI = higher resolution; however, resolution does not refer to size, it refers to the number of pixels in each dimension.
DPI Awareness = an app takes the DPI setting into account, making it possible for an application to behave as if it knew the real size of the pixels.
Points and Pixels: (There are 72 points per inch.)
At 300 DPI, there are 300 pixels per inch. So roughly 4.17 pixels = 1 point.
At 96 DPI there are 1.33 pixels in one point.
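Both of those figures come from the same one-line conversion, sketched here:

    def pixels_per_point(dpi):
        # 72 points per inch, so pixels per point = dpi / 72.
        return dpi / 72.0

    print(pixels_per_point(300))   # 4.1666... pixels per point
    print(pixels_per_point(96))    # 1.3333... pixels per point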
Is there a nice way to "crisply" describe the relationship between DPI, PPI, Points, and Resolution?
You are correct that DPI refers to the maximum amount of detail per unit of physical length.
Computer screens are devices that have a physical size, so we speak of the number of pixels per inch they have. Traditionally this value has been around 80 PPI, but now it can be up to 400 PPI.
The notion of "High DPI Aware" (e.g. Retina) is based on the fact that physical screen sizes don't change much over time (for example, there have been 10-inch tablets for more than a decade), but the number of pixels we pack into the screens is increasing. Because the size isn't increasing, it means the density - or the PPI - must be increasing.
Now, when we want to display an image on a screen that has more pixels than an older screen, we can either:
Map the old pixels 1:1 onto the new screen. The physical image is smaller due to the increased density. People start to complain about how small the icons and text are.
Stretch the old image and fill in the extra details. The physical image is the same size, but now there are more pixels to represent the content. For example, this results in font curves being smoother and photographs showing more fine details.
The term DPI (Dots Per Inch) to refer to device or image resolution came into common use well before the invention of printers that could print multiple dots per pixel. I remember using it in the 1970's. The term PPI was invented later to accommodate the difference, but the old usage still lingers in places such as Windows which was developed in the 1980's.
The DPI assigned in Windows rarely corresponds to the actual PPI of the screen. It's merely a way to specify the intended scaling of elements such as fonts.
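In other words, the Windows DPI setting is really a scale factor relative to the 96 dpi baseline. A small sketch, using the standard preset values mentioned in the question:

    BASELINE_DPI = 96

    def scale_factor(windows_dpi):
        return windows_dpi / BASELINE_DPI

    for dpi in (96, 120, 144, 192):
        print(dpi, f"{scale_factor(dpi):.0%}")   # 100%, 125%, 150%, 200%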
DPI vs. resolution – What’s the difference?
The acronym dpi stands for dots per inch. Similarly, ppi stands for pixels per inch. So, why have two different acronyms for measuring roughly the same thing? Because there is a key difference between the two and if you don’t understand this difference it can have a negative impact on your digital signage project.
Part of the confusion between the two terms stems from the fact that many people who use them are lazy and tend to use the terms interchangeably. The simplest way of thinking about them is that one is digital (ppi) and represents what you see on the computer screen and the other is physical (dpi) for example, how an image appears when you print it out on a piece of paper.
I suggest you check this in-depth article discussing the technical details of this topic.
https://blog.viewneo.com/blog/72-dpi-resolution-vs-300-dpi-for-digital-solutions/
