Smallest recommended button size - user-experience

Is there a recommended smallest button size under normal conditions?
By "recommended" I mean prescribed by some document like:
Apple HCI Guidelines
Windows UX Guidelines
or some ISO standard.
By "normal" conditions I mean:
desktop/office use
standard 96dpi monitor resolution
mouse/touchpad for pointing (no touchscreen)
users without disabilities or visual impairments
standard "theme" (no large fonts/icons)

Microsoft's UX Guide for Windows 7 and Vista recommends:
"Make click targets at least 16x16 pixels so that they can be easily clicked by any input device. For touch, the recommended minimum control size is 23x23 pixels (13x13 DLUs)." where"A dialog unit (DLU) is a device-independent metric where one horizontal dialog unit equals one-fourth of the average character width for the current font and one vertical dialog unit equals one-eighth of the character height for the current font. Because characters are roughly twice as high as they are wide, a horizontal DLU is roughly the same size as a vertical DLU, but it's important to realize that DLUs are not a square unit."
You may also want to look up Fitts' Law, which models the time needed to acquire a target as a function of the distance to the target and the target's size. That can help you reason mathematically about the trade-offs between different button sizes.
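As a rough illustration, here is a small Python sketch of the Shannon formulation of Fitts' Law; the device constants a and b below are placeholder values (real ones have to be measured empirically for a given pointing device and user group).

```python
import math

def fitts_time(distance, width, a=0.1, b=0.15):
    """Estimated movement time (seconds) using the Shannon formulation
    of Fitts' Law: T = a + b * log2(D / W + 1).

    a and b are placeholder device constants; real values come from
    empirical studies of a specific pointing device.
    """
    return a + b * math.log2(distance / width + 1)

# Compare two target sizes at the same 400-pixel pointing distance:
print(fitts_time(400, 16))   # 16 px minimum click target
print(fitts_time(400, 23))   # 23 px minimum touch target
```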

Well, I try to make important/common mouse targets as large as possible without looking bad: about 20 pixels high (assuming 96 DPI), and as wide as needed to accommodate the label. If the button has no label, which is very rare, I've found it's actually comfortable to use an aspect like 20w/50h (with the icon on top, not centered), since the mouse is easier to move horizontally. So it's also good to keep such buttons in the same row.

In addition to what MsLis suggested, the UX Guide also recommends a minimum width of 75 pixels specifically for command buttons.
UX Guide - Recommended sizing and spacing

Related

Text Display Implementation Across Multiple Platforms

I have been scouring the internet for weeks trying to figure out exactly how text (such as what you are reading right now) is displayed to the screen.
Results have been shockingly sparse.
I've come across the concepts of rasterization, bitmaps, vector graphics, etc. What I don't understand is how the underlying implementation works so uniformly across all systems (Windows, Linux, etc.) in a way we as humans understand. Is there a specification defined somewhere? Is the implementation code open source and viewable by the general public?
My understanding as of right now, is this:
Create a font with an external drawing program, one character at a time
Add these characters into a font file that is understood by language-specific libraries
These characters are then read from the file as needed by the GPU and displayed to the screen in a linear fashion as defined by the parenting code.
Additionally, if characters are defined in a font file such as 'F, C, ..., Z', how are vector graphics (which rely on a set of coordinate points) supported? Without coordinate points, rasterization would seem the only option for size changes.
This is about as far as my assumptions/research goes.
If you are familiar with this topic and can provide a detailed answer that may be useful to myself and other readers, please answer at your discretion. I find it fascinating just how much code we take for granted that is remarkably complicated under the hood.
The following provides an overview (leaving out lots of gory details):
Two key components for display of text on the Internet are (i) character encoding and (ii) fonts.
Character encoding is a scheme by which characters, such as the Latin capital letter "A", are assigned a representation as a byte sequence. Many different character encoding schemes have been devised in the past. Today, what is almost ubiquitously used on the Internet is Unicode. Unicode assigns each character to a code point, which is an integer value; e.g., Unicode assigns LATIN CAPITAL LETTER A to the code point 65, or 41 in hexadecimal. By convention, Unicode code points are referred to using four to six hex digits with "U+" as a prefix. So, LATIN CAPITAL LETTER A is assigned to U+0041.
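As a quick illustration of the character-to-code-point mapping (Python works directly with Unicode code points):

```python
# LATIN CAPITAL LETTER A: decimal code point, U+ notation, and back again
print(ord("A"))              # 65
print(f"U+{ord('A'):04X}")   # U+0041
print(chr(0x0041))           # A
```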
Fonts provide the graphical data used to display text. There have been various font formats created over the years. Today, what is ubiquitously used on the Internet are fonts that follow the OpenType spec (which is an extension of the TrueType font format created back around 1991).
What you see presented on the screen are glyphs. An OpenType font contains data for the glyphs, and also a table that maps Unicode code points to corresponding glyphs. More precisely, the character-to-glyph mapping (or 'cmap') table maps Unicode code points to glyph IDs. The code points are defined by Unicode; the glyph IDs are a font-internal implementation detail, and are used to look up the glyph descriptions and related data in other tables.
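As an illustrative sketch, the open-source fontTools library can be used to peek at this table; the font path below is only a placeholder.

```python
from fontTools.ttLib import TTFont

# Placeholder path; any OpenType/TrueType font file will do.
font = TTFont("SomeFont.ttf")

# getBestCmap() returns a dict mapping Unicode code points to glyph names;
# the numeric glyph IDs sit behind those names.
cmap = font.getBestCmap()
glyph_name = cmap[0x0041]            # glyph used for U+0041
print(glyph_name, font.getGlyphID(glyph_name))
```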
Glyphs in an OpenType font can be defined as bitmaps or (far more commonly) as vector outlines (Bezier curves). There is an assumed coordinate grid for the glyph descriptions. The vector outlines, then, are defined as an ordered list of coordinate pairs for Bezier curve control points. When text is displayed, the vector outline is scaled onto a display grid, based on the requested text size (e.g., 10 point) and the pixel sizing on the display. A rasterizer reads the control point data in the font, scales it as required for the display grid, and generates a bitmap that is put onto the screen at an appropriate position.
One extra detail about displaying the rasterized bitmap: most operating systems or apps will use some kind of filtering to give glyphs a smoother and more legible appearance. For example, a grayscale anti-alias filter will set display pixels at the edges of glyphs to a gray level, rather than pure black or pure white, to make edges appear smoother when the scaled outline doesn't align exactly to the physical pixel boundaries—which is most of the time.
I mentioned "at an appropriate position". The font has metric (positioning) information for the font as a whole and for each glyph.
The font-wide metrics will include a recommended line-to-line distance for lines of text, and the placement of the baseline within each line. These metrics are expressed in the units of the font's glyph design grid; the baseline corresponds to y=0 within the grid. To start a line, the (0,0) design grid position is aligned to where the baseline meets the edge of a text container within the page layout, and the first glyph is positioned.
The font also has glyph metrics. One of the glyph metrics is an advance width for each given glyph. So, when the app is drawing a line of text, it has a starting "pen position" at the start of the line, as described above. It then places the first glyph on the line accordingly, and advances the pen position by the amount of that first glyph's advance width. It then places the second glyph using the new pen position, and advances again. And so on as glyphs are placed along the line.
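Here is a minimal sketch of that pen-position loop, again using fontTools for illustration (the font path is a placeholder, and positions are left in font design units rather than scaled to pixels):

```python
from fontTools.ttLib import TTFont

font = TTFont("SomeFont.ttf")        # placeholder path
cmap = font.getBestCmap()
metrics = font["hmtx"].metrics       # glyph name -> (advance width, left side bearing)

pen_x = 0                            # starting pen position, in font design units
for ch in "Hello":
    glyph_name = cmap[ord(ch)]
    advance, lsb = metrics[glyph_name]
    # A real renderer would rasterize the glyph at pen_x here.
    print(f"{ch!r}: place at x={pen_x}, advance by {advance}")
    pen_x += advance
```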
There are (naturally) more complexities in laying out lines of text. What I described above is sufficient for English text displayed in a basic text editor. More generally, display of a line of text can involve substitution of the default glyphs with certain alternate glyphs; this is needed, for example, when displaying Arabic text so that characters appear cursively connected. OpenType fonts contain a "glyph substitution" (or 'GSUB') table that provides details for glyph substitution actions. In addition, the positioning of glyphs can be adjusted for various reasons; for example, to position a diacritic glyph correctly over a letter. OpenType fonts contain a "glyph positioning" ('GPOS') table that provides the position adjustment data. Operating system platforms and browsers today support all of this functionality so that Unicode-encoded text for many different languages can be displayed using OpenType fonts.
Addendum on glyph scaling:
Within the font, a grid is set up with a certain number of units per em. This is set by the font designer. For example, the designer might specify 1000 units per em, or 2048 units per em. The glyphs in the font and all the metric values (glyph advance width, default line-to-line distance, etc.) are expressed in these font design grid units.
How does the em relate to what content authors set? In a word processing app, you typically set text size in points. In the printing world, a point is a well defined unit for length, approximately but not quite 1/72 of an inch. In digital typography, points are defined as exactly 1/72 of an inch. Now, in a word processor, when you set text size to, say, 12 points, that really means 12 points per em.
So, for example, suppose a font is designed using 1000 design units per em. And suppose a particular glyph is exactly 1 em wide (e.g., an em dash); in terms of the design grid units, it will be exactly 1000 units wide. Now, suppose the text size is set to 36 points. That means 36 points per em, and 36 points = 1/2", so the glyph will print exactly 1/2" wide.
When the text is rasterized, it's done for a specific target device, that has a certain pixel density. A desktop display might have a pixel (or dot) density of 96 dpi; a printer might have a pixel density of 1200 dpi. Those are relative to inches, but from inches you can get to points, and for a given text size, you can get to ems. You end up with a certain number of pixels per em based on the device and the text size. So, the rasterizer takes the glyph outline defined in font design units per em, and scales it up or down for the given number of pixels per em.
For example, suppose a font is designed using 1000 units per em, and a printer is 1000 dpi. If text is set to 72 points, that's 1" per em, and the font design units will exactly match the printer dots. If the text is set to 12 points, then the rasterizer will scale down so that there are 6 font design units per printer dot.
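The scaling arithmetic above can be written out as a small sketch; the numbers mirror the examples in the text.

```python
def pixels_per_em(point_size, dpi):
    # In digital typography a point is exactly 1/72 inch,
    # so pixels per em = points per em * pixels per point.
    return point_size * dpi / 72.0

units_per_em = 1000   # font design grid chosen by the font designer

# 72 pt text on a 1000 dpi printer: 1000 pixels per em,
# so one design unit maps exactly onto one printer dot.
print(units_per_em / pixels_per_em(72, 1000))   # 1.0

# 12 pt text on the same printer: ~166.7 pixels per em,
# i.e. 6 font design units per printer dot.
print(units_per_em / pixels_per_em(12, 1000))   # 6.0
```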
At that point, the details in the glyph outline might not align to whole units in the device grid. The rasterizer needs to decide which pixels/dots get ink and which do not. The font can include "hints" that affect the rasterizer behaviour. The hints might ensure that certain font details stay aligned, or the hints might be instructions to move a Bezier control point by a certain amount based on the current pixels-per-em.
For more details, see Digitizing Letterform Designs and Font Engine from Apple's TrueType Reference Manual, which go into lots of detail.

What dimensional units are used in PyQt4?

When using "setMinimumHeight(...)/setMinimumWidth(...)", what units are the arguments in? I'm not turning up anything online, the book I bought doesn't address it, and based on my experiments the units certainly aren't pixels. Thanks in advance.
Those parameters are measured in pixels, but there are other things at play here as well that are unfortunately harder to deal with and may be complicating your measurements.
Take a look at the following two images. The resolution of my screen remains at 3840x2160 but the "Scale Factor" that Windows suggests varies between 100% and 250%.
Scale Factor = 100%
Scale Factor = 250%
The ruler has actually changed size, which could give you the impression that the sizes of these widgets aren't really expressed in pixels. Note that the size of each of these widgets starts at the grey, not at the blue. Additionally, even though Qt maintains the size of the widget in pixels independently of Windows' "Scale Factor", the same can't be said for the label in the center, which does change in size depending on the scaling.
I don't know exactly how you are taking your measurements, what the GUI is, or what your display settings are, but all of those can contribute to the confusion around sizing in Qt.
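For reference, here is a minimal PyQt4 sketch; the calls take pixel values (how large those pixels look physically then depends on the display scaling described above). The 75x23 size is just a plausible example, borrowed from the Windows UX Guide figures mentioned earlier.

```python
import sys
from PyQt4 import QtGui

app = QtGui.QApplication(sys.argv)

button = QtGui.QPushButton("OK")
# Arguments are in pixels: 75 px wide, 23 px high.
button.setMinimumWidth(75)
button.setMinimumHeight(23)
button.show()

sys.exit(app.exec_())
```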

How to control KML icon drawing order, top to bottom

I'm displaying many overlapping icons in a Google Earth tour. I'd like to control (or at least understand) the order in which the icons are drawn (which one shows on "top"). Thanks!
P.S. Approaches that did not work: using gx:drawOrder (it applies to overlays, but not to icons); using AnimatedUpdate to establish the order chronologically; using the order in which the placemarks are introduced to establish their drawing order.
Apparently Google Earth draws features in groups by type: polygons, then ground overlays, followed by lines and point data, and drawOrder is applied only within a group. ScreenOverlays are drawn last, so they are always on top.
If you define gx:drawOrder or drawOrder on a collection of features, it only applies to features of the same type (polygons relative to other polygons), not between different types.
That is the behavior if the features are clamped to ground. If features are at different altitudes then lower altitude layers are drawn first.
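Given that behaviour, one workaround (just a sketch, not an officially documented mechanism) is to place overlapping icons at slightly different absolute altitudes so the ones you want on top are drawn last; the coordinates below are arbitrary example values.

```python
def placemark(name, lon, lat, altitude):
    """Return a KML Placemark whose point sits at the given altitude.

    Since lower-altitude features are drawn first, nudging an icon's
    altitude upward pushes it towards the top of the stack.
    """
    return f"""
    <Placemark>
      <name>{name}</name>
      <Point>
        <altitudeMode>absolute</altitudeMode>
        <coordinates>{lon},{lat},{altitude}</coordinates>
      </Point>
    </Placemark>"""

# Icons listed from bottom of the stack to top, spaced 1 m apart.
names = ["bottom icon", "middle icon", "top icon"]
print("\n".join(placemark(n, -122.08, 37.42, 100 + i) for i, n in enumerate(names)))
```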
Note that the tilt angle affects the size of the icon: as the tilt approaches 90 degrees, the icon gets smaller. The icon is at its largest when viewed straight down with a 0 degree tilt angle.

Why do we use the term DPI for matters involving images on computers

I'm told that DPI and Points are no longer relevant in terminology involving graphical displays on computer screens and mobile devices yet we use the term "High DPI Aware" and in Windows you can set the various DPI levels (96, 120, 144, 192).
Here is my understanding of the various terms that are used in displaying images on computer monitors and devices:
DPI = number of dots in one linear inch. But DPI refers to printers and printed images.
Resolution = the number of pixels that make up a picture, whether it is printed on paper or displayed on a computer screen. Higher resolution provides the capability to display more detail. Higher DPI means higher resolution; however, resolution does not refer to size, it refers to the number of pixels in each dimension.
DPI Awareness = an app takes the DPI setting into account, making it possible for an application to behave as if it knew the real size of the pixels.
Points and Pixels: (There are 72 points per inch.)
At 300 DPI, there are 300 pixels per inch, so roughly 4.17 pixels = 1 point.
At 96 DPI, there are 1.33 pixels in one point.
Is there a nice way to "crisply" describe the relationship between DPI, PPI, Points, and Resolution?
You are correct that DPI refers to the maximum amount of detail per unit of physical length.
Computer screens are devices that have a physical size, so we speak of the number of pixels per inch they have. Traditionally this value has been around 80 PPI, but now it can be up to 400 PPI.
The notion of "High DPI Aware" (e.g. Retina) is based on the fact that physical screen sizes don't change much over time (for example, there have been 10-inch tablets for more than a decade), but the number of pixels we pack into the screens is increasing. Because the size isn't increasing, it means the density - or the PPI - must be increasing.
Now, when we want to display an image on a screen that has more pixels than an older screen, we can either:
Map the old pixels 1:1 onto the new screen. The physical image is smaller due to the increased density. People start to complain about how small the icons and text are.
Stretch the old image and fill in the extra details. The physical image is the same size, but now there are more pixels to represent the content. For example, this results in font curves being smoother and photographs showing more fine details.
The term DPI (Dots Per Inch) to refer to device or image resolution came into common use well before the invention of printers that could print multiple dots per pixel. I remember using it in the 1970's. The term PPI was invented later to accommodate the difference, but the old usage still lingers in places such as Windows which was developed in the 1980's.
The DPI assigned in Windows rarely corresponds to the actual PPI of the screen. It's merely a way to specify the intended scaling of elements such as fonts.
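A sketch of that scaling arithmetic, using the standard 72 points per inch and the Windows DPI levels mentioned in the question:

```python
def points_to_pixels(points, dpi):
    # 72 points per inch, so the pixel count scales with the DPI setting.
    return points * dpi / 72.0

for dpi in (96, 120, 144, 192):
    print(dpi, points_to_pixels(12, dpi))
# 12 pt text renders as 16, 20, 24, and 32 pixels respectively.
```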
DPI vs. resolution – What’s the difference?
The acronym dpi stands for dots per inch. Similarly, ppi stands for pixels per inch. So, why have two different acronyms for measuring roughly the same thing? Because there is a key difference between the two and if you don’t understand this difference it can have a negative impact on your digital signage project.
Part of the confusion between the two terms stems from the fact that many people who use them are lazy and tend to use the terms interchangeably. The simplest way of thinking about them is that one is digital (ppi) and represents what you see on the computer screen and the other is physical (dpi) for example, how an image appears when you print it out on a piece of paper.
I suggest you check this in-depth article on the technical side of this topic:
https://blog.viewneo.com/blog/72-dpi-resolution-vs-300-dpi-for-digital-solutions/

Contact area size in MultitouchSupport private framework

I've been playing around with the Carbon MultitouchSupport private framework and I've been able to retrieve various types of data.
Among these, each contact seems to have a size and is also described by an ellipse (angle, minor axis, major axis). However, I haven't been able to identify the frame of reference used for the size and the minor and major axes.
If anybody has been able to figure it out, I'd be interested in that information.
Thanks in advance
I've been using the framework for two years now, and I've found that the ellipse is not in standard units (e.g. inches, millimeters). You can approximate millimeters by doubling the values you get for the ellipse.
Here's how I derived the ellipse information.
First, my best guess for how it works is that it's close to Synaptics "units per mm": http://ccdw.org/~cjj/l/docs/ACF126.pdf But since Apple has not released any of that information for developers, I'm relying on information that I print to the console.
You may get slightly different values based on the dimensions of the device (e.g. native trackpad vs magic mouse) you're using with the MultiTouchSupport.framework. This might also be caused by the differences in the surface (magic mouse is curved).
The code on http://www.steike.com/code/multitouch/ has a parameter called mm. This gives you the raw (non-normalized) position and velocity for the device.
Based on the observed minimum and maximum width values from mm (-47.5, 52.5), the trackpad is ~100 units wide (~75 units the other way). The physical trackpad is about 100 mm wide by 80 mm deep. But no, it's not a direct unit-to-millimeter translation. I think the parameter being named 'mm' may just be a coincidence.
My forearm can cover about 90% of the surface of the trackpad. After laying it across the trackpad, the output will read to about 58 units wide by 36 units long, with a size of 55. If you double the units you get 116 by 72 which is really close to 100mm by 80mm. So that's why I say just double the units to approximate the millimeters. I've done this with my forearm the other way and with my palm and the approximations still seem to work.
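A trivial helper encoding that rule of thumb (the factor of 2 is only an empirical approximation derived from the measurements above, not anything documented by Apple):

```python
def contact_size_to_mm(width_units, length_units, scale=2.0):
    """Approximate a contact's dimensions in millimeters.

    The MultiTouchSupport values are in undocumented units; empirically,
    doubling them lands close to millimeters on a MacBook trackpad.
    """
    return width_units * scale, length_units * scale

print(contact_size_to_mm(58, 36))   # forearm example: (116.0, 72.0), close to the 100 x 80 mm pad
```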
The size of 55 doesn't seem to coincide with the values of ellipse. I'm inclined to believe that ellipse is an approximation of the surface dimensions and size is the actual surface area (probably in decimeters).
Sorry there's no straight answer (this is after all a reverse engineering project) but maybe this information can help you find the answer yourself.
(Note: I'd be curious to know what you're working on.)
