What dimensional units are used in PyQt4? - pyqt

When using "setMinimumHeight(...)/setMinimumWidth(...)" what units are the arguments in? I'm not turning up anything online, the book I bought doesn't address it and based on my experiments the units certainly aren't pixels. Thanks in advance.

Those parameters are measured in pixels, but there are other things at play here that are unfortunately harder to deal with and may be complicating your measurements.
Take a look at the following two images. The resolution of my screen remains at 3840x2160, but the "Scale Factor" that Windows suggests varies between 100% and 250%.
Scale Factor = 100%
Scale Factor = 250%
The ruler has actually changed size, which could give you the impression that the sizes of these widgets aren't specified in pixels. Note that the size of each of these widgets starts at the grey, not at the blue. Additionally, even though Qt maintains the size of the widget in pixels independently of Windows' "Scale Factor", the same can't be said for the label in the center, which does change in size depending on the scaling.
I don't know exactly how you are taking your measurements, what the GUI is, or what your display settings are, but all of those can contribute to the confusion around sizing in Qt.
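If it helps to see it concretely, here is a minimal PyQt4 sketch (the 300x200 values are arbitrary). With Windows scaling at 100%, the window should measure exactly 300x200 screen pixels; at higher scale factors Windows may scale what you actually see, as described above, even though Qt still reports the same pixel values.

    import sys
    from PyQt4 import QtGui

    app = QtGui.QApplication(sys.argv)

    window = QtGui.QWidget()
    # Both setters take plain integers interpreted as pixels.
    window.setMinimumWidth(300)   # at least 300 px wide
    window.setMinimumHeight(200)  # at least 200 px tall
    window.resize(100, 100)       # clamped up to the 300x200 minimum
    window.show()

    sys.exit(app.exec_())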

Related

Setting a virtual resolution in Wayland

Is it possible to set a virtual resolution for a screen, meaning increasing its resolution beyond its native resolution (say, I've got a 1920x1080 screen, can I use it as if it were a 3840x2160 screen)?
With X it was easy, just xrandr --scale 2x2, but with wayland I can't seem to find a way to do it...
The goal is to set up a multi-screen environment with one good screen and one bad screen; I need to double the resolution of the bad screen so that windows appear about the same size on both screens.
I've read somewhere about multi-screen scaling, but couldn't find more information about it.
Thank you for your help
If you are using the Weston compositor, you can specify the "scale" factor in the weston.ini file under the [output] section; please refer to http://manpages.ubuntu.com/manpages/bionic/man5/weston.ini.5.html
scale=factor
The scaling multiplier applied to the entire output, in support of high resolution ("HiDPI" or "retina") displays, that roughly corresponds to the pixel ratio of the display's physical resolution to the logical resolution. Applications that do not support high resolution displays typically appear tiny and unreadable. Weston will scale the output of such applications by this multiplier, to make them readable. Applications that do support their own output scaling can draw their content in high resolution, in which case they avoid compositor scaling. Weston will not scale the output of such applications, and they are not affected by this multiplier.
An integer, 1 by default, typically configured as 2 or higher when needed, denoting the scaling multiplier for the output.
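A minimal [output] section might look like the following (the output name here is only an example; the names of your actual outputs appear in Weston's log):

    [output]
    name=HDMI-A-1
    scale=2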

Why do we use the term DPI for matters involving images on computers

I'm told that DPI and points are no longer relevant terminology for graphical displays on computer screens and mobile devices, yet we use the term "High DPI Aware", and in Windows you can set various DPI levels (96, 120, 144, 192).
Here is my understanding of the various terms that are used in displaying images on computer monitors and devices:
DPI = number of dots in one linear inch. But DPI refers to printers and printed images.
Resolution = the number of pixels that make up a picture, whether it is printed on paper or displayed on a computer screen. Higher resolution provides the capability to display more detail. Higher DPI = higher resolution; however, resolution does not refer to size, it refers to the number of pixels in each dimension.
DPI Awareness = an app takes the DPI setting into account, making it possible for an application to behave as if it knew the real size of the pixels.
Points and Pixels: (There are 72 points per inch.)
At 300 DPI, there are 300 pixels per inch. So roughly 4.17 pixels = 1 point.
At 96 DPI there are 1.33 pixels in one point.
Is there a nice way to "crisply" describe the relationship between DPI, PPI, Points, and Resolution?
You are correct that DPI refers to the maximum amount of detail per unit of physical length.
Computer screens are devices that have a physical size, so we speak of the number of pixels per inch they have. Traditionally this value has been around 80 PPI, but now it can be up to 400 PPI.
The notion of "High DPI Aware" (e.g. Retina) is based on the fact that physical screen sizes don't change much over time (for example, there have been 10-inch tablets for more than a decade), but the number of pixels we pack into the screens is increasing. Because the size isn't increasing, it means the density - or the PPI - must be increasing.
Now when we want to display an image on a screen that has more pixels than an older screen, we can either:
Map the old pixels 1:1 onto the new screen. The physical image is smaller due to the increased density. People start to complain about how small the icons and text are.
Stretch the old image and fill in the extra details. The physical image is the same size, but now there are more pixels to represent the content. For example, this results in font curves being smoother and photographs showing more fine details.
The term DPI (Dots Per Inch) to refer to device or image resolution came into common use well before the invention of printers that could print multiple dots per pixel. I remember using it in the 1970s. The term PPI was invented later to accommodate the difference, but the old usage still lingers in places such as Windows, which was developed in the 1980s.
The DPI assigned in Windows rarely corresponds to the actual PPI of the screen. It's merely a way to specify the intended scaling of elements such as fonts.
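To make the arithmetic above concrete, here is a small Python sketch of the usual conversions (it assumes the standard convention of 72 points per inch; the function names are just for illustration):

    POINTS_PER_INCH = 72.0

    def pixels_per_point(dpi):
        """How many pixels one point covers at the given DPI."""
        return dpi / POINTS_PER_INCH

    def points_to_pixels(points, dpi):
        """Convert a length in points to pixels at the given DPI."""
        return points * dpi / POINTS_PER_INCH

    print(pixels_per_point(96))       # ~1.33 pixels per point
    print(pixels_per_point(300))      # ~4.17 pixels per point
    print(points_to_pixels(12, 96))   # a 12 pt font is 16 px tall at 96 DPI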
DPI vs. resolution – What’s the difference?
The acronym dpi stands for dots per inch. Similarly, ppi stands for pixels per inch. So, why have two different acronyms for measuring roughly the same thing? Because there is a key difference between the two, and if you don't understand this difference it can have a negative impact on your digital signage project.
Part of the confusion between the two terms stems from the fact that many people who use them are lazy and tend to use them interchangeably. The simplest way of thinking about them is that one is digital (ppi) and represents what you see on the computer screen, while the other is physical (dpi), for example how an image appears when you print it out on a piece of paper.
I suggest you check this in-depth article on the technical details of this topic.
https://blog.viewneo.com/blog/72-dpi-resolution-vs-300-dpi-for-digital-solutions/

Random string from randomly placed circles

I have this fun idea for a project I'd like to do, but I'm not really sure about the math part of it. Here is the idea:
Make a plastic card that would simulate a 9-finger multitouch gesture when it is held against a capacitive screen
Based on the "9 finger" placement, determine some sort of unique string and use it as an encryption/decryption key for an app
This way I could just open an app, touch the screen with the card, and it would get authorized.
But here's the problem:
It shouldn't matter where you place the card on a screen, because the card would be pretty small to fit various screen sizes
The rectangle in which we can randomly position the 9 "fingers" would optimally be 4.5cm x 3cm
The "finger" itself is only recognized as a touch if it is about a 6mm circle (not sure if this can be made smaller)
I figured we could find the left-top "finger" and get every other "finger's" X and Y difference from it. Then concatenate the resulting numbers into a string and use it as a decryption/encryption key. So basically:
key = concat(X2 - X1, Y2 - Y1, X3 - X1, Y3 - Y1, ...)
But I think such an approach would have very few possible combinations (given a relatively small card size and a relatively big "finger") and one could easily write a program to generate all possible combinations and break the key in no time. Am I right about this? If so, how could I improve this?
Thanks for your thoughts
UPDATE 1: I actually tried it out on iOS. The result is not promising, since the "fingers" get detected differently each time. The distance between them varies significantly (by as much as 40 pixels!). So I guess this is not as easy as I expected, since the OS seems to detect the touch differently each time for the same two circles.
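For reference, a rough Python sketch of the concatenation idea from the question (the coordinates are made up, and sorting the offsets is an added assumption so that the key doesn't depend on detection order; in practice the offsets would also need to be quantized and rotation-corrected to survive the jitter described in the update):

    import hashlib

    # Hypothetical detected touch points in screen pixels, one per "finger".
    touches = [(412, 310), (450, 338), (401, 365), (470, 300), (430, 390),
               (490, 360), (455, 415), (415, 440), (495, 420)]

    # Use the topmost (leftmost on ties) touch as the origin finger.
    origin = min(touches, key=lambda p: (p[1], p[0]))

    # Offsets of every other touch relative to that origin.
    offsets = sorted((x - origin[0], y - origin[1])
                     for (x, y) in touches if (x, y) != origin)

    # Concatenate the offsets and hash them into a fixed-length key.
    raw = ",".join("%d,%d" % off for off in offsets)
    key = hashlib.sha256(raw.encode()).hexdigest()
    print(key)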
Your question is lacking some relevant information: how far apart need the circles be so that the system can still distinguish them? What resolution can you realistically expect for the circle centers? And by “6mm circle”, do you mean 6mm diameter or radius (or even circumference)?
Lacking details, I'll make some pretty rough approximations. I'll start by requiring that two of the circles be placed in opposite corners of the card. That way, you can find them by looking for the pair with maximal distance, and from that compute the orientation and size of the card and correct for it. This leaves 7 fingers to be placed randomly. I'll assume 1mm resolution, and restrict myself to a 45×30mm area, which means 39×24 = 936 positions per circle, for a total of 936^7 ≈ 6.3×10^20 ≈ 2^69 combinations. OK, this does not exclude overlapping circles, but since the card is still rather sparsely covered, that shouldn't amount to too much. I'd say 64 bits of entropy (i.e. 2^64 possible combinations) should be reasonable even if you enforce non-overlapping circles. If you can really detect the circle centers with the required resolution, that is. This should be sufficient security for most applications: far better than 8-letter passwords, but worse than the symmetric keys usually used for e.g. AES.
Since all of this depends very much on the resolution, it might be worthwhile to investigate that aspect first. Usually you'll get pixel coordinates for your finger positions, but it would be expecting too much to assume that you'd always get the pixel coordinate closest to the center of your circle. So you might start by writing a small application which draws a 6mm circle and records the coordinates it receives. Then place a 6mm artificial circle on the drawn one a large number of times. Look at how far the recorded positions differ from the center of the circle. Take the maximum of those differences, perhaps after removing outliers. I'd add a pixel or two to that, to account for rounding errors due to the rotation of the card. Then turn that pixel count back into a metric length. This is the resolution you can expect. You might have to do this for several devices. If you do perform these experiments, let me know what you find and I'll update my answer accordingly.
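As a back-of-the-envelope check of the entropy estimate above, in Python (same assumptions as in the answer: 45×30mm usable area, 6mm circles, 1mm resolution, two corner circles reserved for orientation):

    import math

    card_w, card_h = 45, 30   # usable area in mm
    circle_d = 6              # touch circle diameter in mm
    resolution = 1            # assumed detection resolution in mm
    free_fingers = 7          # two corners are fixed to recover orientation

    # Positions available to each circle center (ignoring overlaps).
    positions = ((card_w - circle_d) // resolution) * ((card_h - circle_d) // resolution)
    combinations = positions ** free_fingers

    print(positions)               # 936
    print(combinations)            # ~6.3e20
    print(math.log2(combinations)) # ~69 bits of entropy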

Contact area size in MultitouchSupport private framework

I've been playing around with the MultitouchSupport private framework and I've been able to retrieve various types of data.
Among these, each contact seems to have a size and is also described by an ellipse (angle, minor axis, major axis). However, I haven't been able to identify the frame of reference used for the size and the minor and major axes.
If anybody has been able to figure it out, I'm interested in your information.
Thanks in advance
I've been using the framework for two years now and I've found that the ellipse is not in standard units (e.g. inches, millimeters). You could approximate millimeters by doubling the values you get for the ellipse.
Here's how I derived the ellipse information.
First, my best guess for how it works is that it's close to Synaptics "units per mm": http://ccdw.org/~cjj/l/docs/ACF126.pdf But since Apple has not released any of that information for developers, I'm relying on information that I print to the console.
You may get slightly different values based on the dimensions of the device (e.g. native trackpad vs. Magic Mouse) you're using with the MultiTouchSupport.framework. This might also be caused by differences in the surface (the Magic Mouse is curved).
The code on http://www.steike.com/code/multitouch/ has a parameter called mm. This gives you the raw (non-normalized) position and velocity for the device.
Based on the width's observed min & max values from mm (-47.5, 52.5), the trackpad is ~100 units wide (and ~75 units the other way). The trackpad is physically about 100mm wide by 80mm deep, but no, it's not a direct unit-to-millimeter translation. I think the parameter being named 'mm' may have just been a coincidence.
My forearm can cover about 90% of the surface of the trackpad. After laying it across the trackpad, the output reads about 58 units wide by 36 units long, with a size of 55. If you double the units you get 116 by 72, which is really close to 100mm by 80mm. So that's why I say just double the units to approximate the millimeters. I've done this with my forearm the other way and with my palm, and the approximations still seem to work.
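A tiny sketch of that doubling heuristic in Python (the function and its arguments are hypothetical; the raw axis values come from whatever you read out of MultiTouchSupport.framework):

    def ellipse_units_to_mm(major_axis, minor_axis):
        """Approximate the contact ellipse in millimeters by doubling
        the raw, unitless values reported by the framework."""
        return major_axis * 2.0, minor_axis * 2.0

    # Example: a contact reported as ~58 x 36 units comes out as ~116 x 72 mm,
    # close to the ~100 x 80 mm physical size of the trackpad.
    print(ellipse_units_to_mm(58, 36))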
The size of 55 doesn't seem to coincide with the values of ellipse. I'm inclined to believe that ellipse is an approximation of the surface dimensions and size is the actual surface area (probably in decimeters).
Sorry there's no straight answer (this is after all a reverse engineering project) but maybe this information can help you find the answer yourself.
(Note: I'd like to know what you're working on?)

Smallest recommended button size

Is there a recommended smallest button size under normal conditions?
By "recommended" I mean prescribed by some document like:
Apple HCI Guidelines
Windows UX Guidelines
or some ISO standard.
By "normal" conditions I mean:
desktop/office use
standard 96dpi monitor resolution
mouse/touchpad for pointing (no touchscreen)
non-disabled or visually impaired users
standard "theme" (no large fonts/icons)
Microsoft's UX Guide for Windows 7 and Vista recommends:
"Make click targets at least 16x16 pixels so that they can be easily clicked by any input device. For touch, the recommended minimum control size is 23x23 pixels (13x13 DLUs)." where"A dialog unit (DLU) is a device-independent metric where one horizontal dialog unit equals one-fourth of the average character width for the current font and one vertical dialog unit equals one-eighth of the character height for the current font. Because characters are roughly twice as high as they are wide, a horizontal DLU is roughly the same size as a vertical DLU, but it's important to realize that DLUs are not a square unit."
You may also want to look up Fitts' Law, which calculates the time necessary to complete an action as a function of the target size. That can help mathematically determine the trade-offs of different button sizes.
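For reference, Fitts' Law in its common Shannon formulation predicts movement time as MT = a + b * log2(D/W + 1), where D is the distance to the target and W its width. A minimal sketch (the a and b constants below are placeholders; in practice they are fitted to measured user data):

    import math

    def fitts_movement_time(distance, width, a=0.1, b=0.15):
        """Predicted time in seconds to acquire a target of the given
        width at the given distance; a and b are device/user constants."""
        return a + b * math.log2(distance / width + 1)

    # Shrinking the target increases the predicted acquisition time:
    print(fitts_movement_time(distance=400, width=23))  # 23 px target
    print(fitts_movement_time(distance=400, width=16))  # 16 px target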
Well, I try to make important/common mouse targets as large as possible without looking bad: something around 20 pixels (assuming 96 DPI) in height, and as much width as needed to accommodate labels. If the button has no label, which is very rare, I found it's actually comfortable to have an aspect like 20w/50h (with the icon on top, not centered), since the mouse is easier to move horizontally. So it's also good to keep such buttons in the same row.
In addition to what MsLis suggested, the UX Guide also recommends a minimum width of 75 pixels specifically for Command Buttons.
UX Guide - Recommended sizing and spacing
