Designing an MFC App That Will Work on All Resolutions? - visual-c++

I'm currently designing my first ever GUI for Windows. I'm using MFC and Visual Studio 2008. The monitor I have been designing my program on has 1680x1050 native resolution. If I compile and send my program to one of my coworkers to run on their computer (generally a laptop running at 1024x768), my program will not fit on their screen.
I have been trying to read up on how to design an MFC application so that it will run on all resolutions, but I keep finding misleading information. Everywhere I look it seems that DLUs are supposed to resize your application for you, and that the only time you should run into problems is when you have an actual bitmap whose resolution you need to worry about. But if this is the case, why will my program no longer fit on my screen when I set my monitor to a lower resolution? Instead of my program "shrinking" to take up the same amount of screen real estate that it uses at 1680x1050, it gets huge and grainy.
The "obvious" solution here is to set my resolution to 1024x768 and redesign my program to fit on the screen. Except that I've already squished everything on my dialogs as much as possible to try and get my program to fit on screen running at 1024x768. My dialog fonts are set to Microsoft Sans Serif 8 but still appear huge (much larger than 8 points) when running at 1024x768.
I know there HAS to be a way to make my program keep the same scaling... right? Or is this the wrong way to approach the problem? What is the correct/standard way to go about designing an MFC program so that it can run on many resolutions, say 800x600 and up?

I assume your application GUI is dialog based (the main window is a dialog)?
In that case you have a problem, because, as you discovered, MFC has no support for resizing a dialog correctly. Your options are:
Redesign your GUI to use an SDI or MDI interface.
Use a dialog resize extension. There are many available; for some very good suggestions see this question. Other options are this one and this one.
Don't use MFC. wxWidgets has much better support for dialog resizing.
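To give an idea of what that last option looks like, here is a minimal wxWidgets 3.x sketch (my own illustration, not code from the answer): layout is described with sizers, so controls reflow automatically when the dialog is resized.

#include <wx/wx.h>

// Minimal dialog whose text box grows with the window and whose OK button
// stays pinned to the bottom-right - no manual WM_SIZE handling needed.
class MyDialog : public wxDialog {
public:
    MyDialog() : wxDialog(NULL, wxID_ANY, "Sizer demo",
                          wxDefaultPosition, wxDefaultSize,
                          wxDEFAULT_DIALOG_STYLE | wxRESIZE_BORDER)
    {
        wxBoxSizer* top = new wxBoxSizer(wxVERTICAL);
        top->Add(new wxTextCtrl(this, wxID_ANY, "", wxDefaultPosition,
                                wxDefaultSize, wxTE_MULTILINE),
                 1, wxEXPAND | wxALL, 8);          // proportion 1 = takes the spare space
        top->Add(new wxButton(this, wxID_OK, "OK"),
                 0, wxALIGN_RIGHT | wxALL, 8);     // fixed size, stays bottom-right
        SetSizerAndFit(top);
    }
};

class MyApp : public wxApp {
public:
    virtual bool OnInit() { MyDialog dlg; dlg.ShowModal(); return false; }
};
wxIMPLEMENT_APP(MyApp);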

MFC is only a thin wrapper over the Windows API. They both make an assumption which is hardly ever true: if you have a higher resolution screen, you'll adjust the DPI or font size in Windows to get larger characters. Most of the time, a larger screen size means a larger physical monitor, or a laptop where you want to squeeze as much information into a small screen as possible; people value more information over greater detail. Thus the assumption fails.
If you can't squeeze your entire UI into the smallest size screen you need to support, you'll have to find another way to make it smaller. Without knowing anything about your UI, I might suggest using tabs to group the controls into pages.
I've had good luck making my windows resizable, so that people with larger screens can see more information at once. You need to do this the hard way: respond to the WM_SIZE message sent to the window and decide which controls should grow and which should simply move.
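For example, a minimal sketch of such a handler, assuming the dialog resource has a resizing border, with a hypothetical list control IDC_LIST that should stretch and an OK button that should stay pinned to the bottom-right corner (control IDs and sizes here are made up):

// Requires ON_WM_SIZE() in the dialog's message map.
void CMyDialog::OnSize(UINT nType, int cx, int cy)
{
    CDialog::OnSize(nType, cx, cy);

    CWnd* pList = GetDlgItem(IDC_LIST);   // hypothetical control that should grow
    CWnd* pOk   = GetDlgItem(IDOK);       // button that should only move
    if (pList == NULL || pOk == NULL)
        return;                           // controls not created yet (early WM_SIZE)

    const int margin = 7;
    const int btnW = 75, btnH = 23;

    // Stretch the list over the client area, leaving room for the button row.
    pList->MoveWindow(margin, margin,
                      cx - 2 * margin,
                      cy - 3 * margin - btnH);

    // Keep the OK button glued to the bottom-right corner (move only, no resize).
    pOk->SetWindowPos(NULL,
                      cx - margin - btnW, cy - margin - btnH,
                      0, 0, SWP_NOZORDER | SWP_NOSIZE | SWP_NOACTIVATE);
}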

There is no automatic way to resize the content of your dialogs when the resolution changes. So, you need to set some boundaries.
Option 1.
If you are developing your app for customers, pick one minimum resolution (like 1024x768) and redesign your dialogs so that everything fits. Maybe break some of them up into several, or use a tab strip control.
Option 2.
Create separate dialog forms for each resolution you'd like to support, but use the same class to handle them. At runtime, detect the resolution and use the appropriate form (see the sketch after this list).
Option 3.
Write your own resizing functionality, so that the user can adjust the size of your dialogs to their liking.
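A rough sketch of Option 2, assuming a dialog-based app with two hypothetical dialog templates, IDD_MAIN_SMALL and IDD_MAIN_LARGE, both handled by the same CMainDlg class (both templates must use the same control IDs so the shared DoDataExchange keeps working):

// In the dialog class, forward the chosen template ID to CDialog:
CMainDlg::CMainDlg(UINT nIDTemplate, CWnd* pParent /*=NULL*/)
    : CDialog(nIDTemplate, pParent)
{
}

// In InitInstance(), pick the template from the primary screen size:
BOOL CMyApp::InitInstance()
{
    CWinApp::InitInstance();

    int cx = ::GetSystemMetrics(SM_CXSCREEN);
    int cy = ::GetSystemMetrics(SM_CYSCREEN);

    // Use the compact layout below 1280x1024, the full layout otherwise.
    UINT nTemplate = (cx < 1280 || cy < 1024) ? IDD_MAIN_SMALL : IDD_MAIN_LARGE;

    CMainDlg dlg(nTemplate);
    m_pMainWnd = &dlg;
    dlg.DoModal();
    return FALSE;   // dialog-based app: exit once the dialog closes
}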

Related

XML layout in Android that supports different screen sizes

How should we set up the XML layout in Android so that it supports different screen sizes?
I tried using wrap_content and match_parent, but it's not working properly. Please guide me on this.
Thanks in advance.
The comment about Supporting Multiple Screens is definitely a good starting place! By default your XML does support different screen sizes.
Although the system performs scaling and resizing to make your application work on different screens, you should make the effort to optimize your application for different screen sizes and densities. In doing so, you maximize the user experience for all devices and your users believe that your application was actually designed for their devices—rather than simply stretched to fit the screen on their devices.
However, like it says, you need to optimize it. This refers to images, or a completely different XML layout per screen size/orientation. Does this help any?
If you need something a little more specific to your situation you'll need to provide more information.

Making Software ready for Retina Display - Why is this necessary?

Now that the new Macbook Pro is coming out with a Retina Display, there are a lot of resources out there on how to make Mac apps and now even websites "Retina Display Friendly". Even Google is updating Chrome for Retina Display...
Why is this necessary at all? From what I understand, Retina Display is just a higher resolution screen. Right?
I thought when you develop GUIs for desktop software and websites, you are developing something that is supposed to work and scale properly at virtually any resolution... When you resize an app's window, or display it on a higher- or lower-resolution display, it is supposed to scale and display properly.
So why are these people coming out with guides on how to make something look good on a Retina Display? Shouldn't it already look fine by default? Is there something about Retina Display that I'm not understanding?
And for the record, I'm not talking about the iPhone 4 Retina display. Most iOS devs make their apps with fixed-position elements since they know the screens won't change size/shape. So I understand the importance of developing an app to look good on the iPhone 4/4S vs the 3G/3GS.
With the Retina display, apps don't actually scale as if the window were simply being resized; all the controls are drawn at twice the size. If an app were scaled normally, without enlarging all the controls, you would barely see anything, because everything would be too small. It's the same difference between a Retina and a lower-resolution display as the one between the iPhone 3GS and the iPhone 4.
An example:
These images are actually the same size, just the pixel densities differ.
And here's how it looks not properly scaled (using some app to disable proper scaling):
http://cloudmancer.com/images/trueretina.jpg
I thought when you develop GUIs for desktop software and websites, you are developing something that is supposed to work and scale properly with virtually any resolution... When you resize an app's window, or display it on a higher or lower resolution display, it is supposed to scale and display properly (StackOverflow, for example, uses a 960px-wide container).
From a web developer's standpoint, you are often asked to develop fixed-width websites (normally ranging from 940 to 1000 pixels wide), and they don't scale at all. There are a lot of websites like this, and many apps just aren't designed to increase in size.
Also, apps that do grow in size usually expect that a bigger resolution also means a bigger screen, so they simply stretch the main application panels and are done with it.
Now, consider static elements, like a 150x50 button that says 'Click me'. This button is not intended to become bigger and is perfectly acceptable on a regular 1440x900 display. Now the Retina screen comes in with its 2880x1800 resolution. The app sees the resolution change, but thinks "Hey, that user must have a huge screen", so it keeps the button at the same pixel size.
The problem is that because both resolutions apply to the same 15" screen, the button now appears at a fraction of its original physical size. Depending on the user's vision, they might not be able to read the text on it, and might have a hard time clicking it, depending on the mouse settings.
To fix that problem, Apple and Microsoft used two different solutions:
Microsoft decided to tell the app the display had a 2880x1800 resolution, but that the user wanted everything scaled to 200%. This means that, if an app does not follow the guidelines, it will look smaller. Many apps simply ignore the DPI settings (though this might change with Windows 8);
Apple decided to report to apps that the resolution of the monitor was 1440x900, but that it could display higher-resolution elements if asked to. This means that apps existing before the new Retina settings will appear to be the same size as before for the end user (with added benefits like crisper text if they use the default Apple APIs), but that they can decide to provide high-DPI images that will look much better on the display.
Both solutions require apps to be aware that the display is high-DPI ('Retina'), but the way Apple handled it means the static websites and apps mentioned earlier will keep looking just fine, except they won't have super-crisp, high-resolution images to use. And, to opt in to the Retina features, they have to provide 200x200 images for a 100x100 canvas, for example, and Apple will take care of the rest.
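To make the Windows side of this concrete, here is a minimal Win32 sketch (my own illustration, not code from either vendor) of opting in to DPI awareness and scaling a fixed-size element by the reported DPI; it assumes Windows Vista or later for SetProcessDPIAware():

#include <windows.h>

int ScaledButtonWidth()
{
    // Opt out of DPI virtualization so Windows reports the real resolution
    // instead of rendering the app at 96 DPI and stretching the result.
    SetProcessDPIAware();

    HDC hdc  = GetDC(NULL);
    int dpiX = GetDeviceCaps(hdc, LOGPIXELSX);   // 96 at 100% scaling, 192 at 200%
    ReleaseDC(NULL, hdc);

    const int baseWidth = 150;                   // width designed for 96 DPI
    return MulDiv(baseWidth, dpiX, 96);          // e.g. 300 px at 200% scaling
}

The point is that nothing scales by itself: the app has to ask for the DPI and multiply its own metrics, or accept being rendered at 96 DPI and stretched by the system.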

How do I write an panel task bar in FLTK for use on Linux systems

I need to write a small application in C/C++ to implement a panel task bar like thing to display information along the top of a desktop window (specifically an xorg desktop on a Linux system). I need to avoid bloat and steep learning curves for the GUI programming.
My research is pointing me at GTK+/GTKmm or FLTK. It looks like FLTK is probably the simpler of the two to get to grips with and the most likely to provide a small, clean package with minimal dependencies. So I've based my research on FLTK so far.
I've been doing some reading and am struggling to find out how to write a basic program that will create a narrow undecorated window that covers the width of a monitor in such a way that maximising other applications would not obscure it. The FLTK tutorials I have found so far (including the FLTK documentation) only implement standard windows with borders that can be moved around the screen.
I'd like to start by writing a simple program in FLTK (or GTK+/GTKmm) that creates a 20-pixel-deep bar across the width of the screen containing a "hello world" message. The bar's area would be reserved outside the area that other programs can access, so that maximising another application would not hide the "hello world" message. I think this has something to do with the _NET_WM_STRUT_PARTIAL property, but I can't find information about this in FLTK.
Doing this is partially to understand how to write a simple GUI program and partially to solve a specific need that I have.
I'm looking for any help/guidance to put me in the right direction to get started. Many thanks.
starfry, it is not a trivial task, I believe. The problem is that your desktop (say GNOME 2/Metacity) has reserved that space, and paints its panel in the area where you want your bar.
If you really want your tray-bar applet to be based on FLTK, then you would have to "embed" it in a (GNOME) applet. It was long ago that I did a similar thing with an SDL application, but I am afraid I have forgotten how to do it. The first thing that comes to mind is to somehow get the XID from the GNOME applet, pass it to the FLTK part, and then let FLTK do the rest...
Sure, you may use another desktop, like KDE, i3 or IceWM, but they ALL have their own ways of dealing with the tray bar (there is no standard for it!), so, pardon my "French", it is going to be a PITA to support all environments...
If I were on GNOME, I would write the applet entirely using GNOME/GTK. Forget FLTK in that case. That is my recommendation. If you target KDE, then do it using the KDE/Qt libraries (a Plasma widget would be what to look for).
However, if you still want to use FLTK, start with the fltk::draw_into() function (it is probably called fl_draw_into() in FLTK 1.x), fltk::xid() and related functions.
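Regarding the _NET_WM_STRUT_PARTIAL idea from the question: it is an EWMH window property, so FLTK itself knows nothing about it, but you can set it through Xlib on the window FLTK creates (via fl_xid()). A rough, untested sketch for FLTK 1.x follows; whether the space is actually reserved depends on your window manager honouring EWMH struts.

// build (typically): g++ bar.cxx $(fltk-config --cxxflags --ldflags) -lX11
#include <FL/Fl.H>
#include <FL/Fl_Window.H>
#include <FL/Fl_Box.H>
#include <FL/x.H>          // fl_xid(), fl_display
#include <X11/Xlib.h>
#include <X11/Xatom.h>

int main(int argc, char** argv)
{
    const int bar_h = 20;
    int screen_w = Fl::w();                 // screen width in pixels

    Fl_Window win(0, 0, screen_w, bar_h, "bar");
    win.border(0);                          // undecorated window
    Fl_Box label(0, 0, screen_w, bar_h, "hello world");
    win.end();
    win.show(argc, argv);                   // must be shown before fl_xid() is valid

    Display* dpy = fl_display;
    Window   xid = fl_xid(&win);

    // Mark the window as a dock so the WM keeps it above normal windows.
    Atom type = XInternAtom(dpy, "_NET_WM_WINDOW_TYPE", False);
    Atom dock = XInternAtom(dpy, "_NET_WM_WINDOW_TYPE_DOCK", False);
    XChangeProperty(dpy, xid, type, XA_ATOM, 32, PropModeReplace,
                    (unsigned char*)&dock, 1);

    // Reserve bar_h pixels along the top edge: left, right, top, bottom,
    // then start/end coordinates for each edge (12 CARDINAL values).
    long strut[12] = {0, 0, bar_h, 0,  0, 0,  0, 0,  0, screen_w - 1,  0, 0};
    Atom strutp = XInternAtom(dpy, "_NET_WM_STRUT_PARTIAL", False);
    XChangeProperty(dpy, xid, strutp, XA_CARDINAL, 32, PropModeReplace,
                    (unsigned char*)strut, 12);

    return Fl::run();
}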

Touch Screen Running Windows CE

I'm starting my first project that runs on a 7 inch touch screen running Windows CE 6.0 (and NETCF 3.5).
The touch screen doesn't respond to touch too well when I use my finger. The only way for me to navigate around is by using a stylus (or similar).
Since I've never worked with Windows CE or a resistive touch screen, I'm not sure if I should expect to be able to use my finger, or if the stylus method is, essentially, the only way to effectively navigate around - or maybe I have a touch screen that simply isn't that good.
If you have experience with WinCE running on a touch screen, do you find that a stylus is the only way to go?
A resistive touchscreen can certainly provide feedback for a finger - I've even configured them for hands-in-gloves. It sounds like the touchscreen driver is tossing out the data samples it's getting from the panel and the key for you is going to be to figure out why.
In my experience there are really two primary reasons for your samples to be ignored.
The driver has been configured with too tight a tolerance.
Sensitivity is often a configurable item - maybe through a recompile of the OS, maybe through the registry; it depends on how your OEM implemented it. Check with the OEM and see if you can adjust it.
The panel has too much noise, causing your samples to get tossed.
This one is easy to check. Drag a selection rectangle on the desktop with a stylus and hold the end point down (don't lift the stylus). Is it steady, or does it "wiggle" a lot at the final point? If so, you have noise. Grounding the panel usually helps, but it could be a hardware issue. I've done rolling-average work in touchpanel drivers to help smooth this out (see the sketch at the end of this answer), but you then have to fight hysteresis.
Be aware that larger touchscreens have different resistivity properties than smaller ones, so if you just swapped panels from small to large, it's quite possible that the output range difference of the panels is not workable with the current driver settings. Again, some OEMs provide the ability to adjust these settings.
So can it work with a finger? Well there's nothing in the physical theory that would prevent using a finger. In fact if you can't use a finger, there's something wrong. Will it work in reality? Check with your OEM.
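To illustrate the rolling-average smoothing mentioned under point 2 (my own sketch, not driver code from any particular BSP), a small moving-average filter over the last few raw samples can look like this; reset it on every pen-down so stale samples from the previous touch don't drag the new points:

#include <stddef.h>

// Moving average over the last FILTER_DEPTH raw touch samples. Smooths jitter,
// but adds lag (the "hysteresis" fight mentioned above), so keep the depth small.
#define FILTER_DEPTH 4

typedef struct {
    int    x[FILTER_DEPTH];
    int    y[FILTER_DEPTH];
    size_t count;      // how many samples we have so far
    size_t next;       // ring-buffer write position
} TouchFilter;

void TouchFilter_Reset(TouchFilter* f) { f->count = 0; f->next = 0; }

void TouchFilter_Add(TouchFilter* f, int rawX, int rawY,
                     int* smoothX, int* smoothY)
{
    f->x[f->next] = rawX;
    f->y[f->next] = rawY;
    f->next = (f->next + 1) % FILTER_DEPTH;
    if (f->count < FILTER_DEPTH)
        f->count++;

    long sumX = 0, sumY = 0;
    for (size_t i = 0; i < f->count; ++i) {
        sumX += f->x[i];
        sumY += f->y[i];
    }
    *smoothX = (int)(sumX / (long)f->count);
    *smoothY = (int)(sumY / (long)f->count);
}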

Colour blindness simulator

Like any responsible developer, I'd like to make sure that the sites I produce are accessible to the widest possible audience, and that includes the significant fraction of the population with some form of colour blindness.
There are many websites which offer to filter a URL you feed them, either by rendering a picture or by filtering all content. However, both approaches seem to fail when rendering even moderately complex layouts, so I'd be interested in finding a client-side approach.
The ideal solution would be a system filter over the whole screen that can be used to test any program. The next best thing would be a browser plugin.
I came across Color Oracle and thought it might help. Here is the short description:
Color Oracle is a colorblindness simulator for Windows, Mac and Linux. It takes the guesswork out of designing for color blindness by showing you in real time what people with common color vision impairments will see.
Color Oracle is great, but another option is KMag, which is part of KDE in Linux. It's ostensibly a screen magnifier, but can simulate protanopia, deuteranopia, tritanopia and achromatopsia.
It differs from Color Oracle by requiring an additional window in which to display the re-coloured image, but an advantage is that one can modify the underlying image at the same time as previewing the simulation.
Here is a screenshot showing the original figure on the left, and the KMag window on the right, simulating protanopia.
Here's a link to a website that simulates various kinds of color blindness:
http://www.vischeck.com/
They let you check URLs and screenshots against three different types of color blindness (URL checking is a bit dated, though; the image check works better).
I'd encourage everyone to check their applications, by the way. Seeing your own app through others' eyes may be an eye-opener (pun intended).
I know this is a quite old question, but I've recently found an interesting solution to transparently simulate color blindness.
When working with Linux, you can simulate color blindness using the Color Filter plugin for Compiz. It comes with profiles for deuteranopia and protanopia and changes the colors of the whole screen in real time.
It's very nice because it works transparently in all applications (even within YouTube videos), but it will only work where Compiz is available, e.g. only under Linux.
Here's an article that has some guidelines for optimizing UI for color blind users:
Particletree » Be Kind to the Color Blind
It contains a link to another article with the kind of tools you were asking for:
10 colour contrast checking tools to improve the accessibility of your design | 456 Berea Street
A great paper that explains a conversion that preserves color differences is:
Detail Preserving Reproduction of Color Images for Monochromats and Dichromats (PDF)
I haven't implemented the filter, but I plan to when I have some more free time.
I found Colour Simulations easy to use on Windows 10. This software can apply a color-blind filter to a part of the screen or the whole screen, and what's great is that it lets me interact with my PC normally even when the filter covers the whole screen. It runs quite slowly on my 4K screen with an integrated graphics card, though.
