I am sorry if my question is a bit vague; I am trying to understand where to look for my problem. I have a regression test suite that captures and compares the screen. It seems like whenever we do some kind of library upgrade, the regression tests fail. Our font settings are the same. The difference would be something like a graphics card (driver) upgrade, a window manager upgrade, or just a third-party library upgrade (for example, the Qt library). To a human tester the fonts look almost identical, but a pixel-to-pixel comparison shows that the snapshots are different. Does anyone have insight into how the fonts are rendered?
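For reference, our check is essentially an exact pixel-by-pixel comparison. A tolerance-based comparison we could switch to might look like the following minimal C sketch (the raw RGB buffer layout and the threshold values here are assumptions, not our actual harness):

    /* Sketch: count pixels whose channels differ by more than `tolerance`,
     * so small anti-aliasing shifts after a library upgrade don't fail the
     * whole test. Assumes both images are packed 24-bit RGB of equal size. */
    #include <stdlib.h>

    double diff_ratio(const unsigned char *a, const unsigned char *b,
                      size_t width, size_t height, int tolerance)
    {
        size_t differing = 0, total = width * height;
        for (size_t i = 0; i < total; i++) {
            int dr = abs((int)a[3*i]     - (int)b[3*i]);
            int dg = abs((int)a[3*i + 1] - (int)b[3*i + 1]);
            int db = abs((int)a[3*i + 2] - (int)b[3*i + 2]);
            if (dr > tolerance || dg > tolerance || db > tolerance)
                differing++;
        }
        return (double)differing / (double)total;
    }

    /* Example policy: pass if fewer than 0.5% of pixels moved by more than 8. */
    /* if (diff_ratio(ref, cur, w, h, 8) < 0.005) { ... test passes ... } */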
Graphics rendering on Linux is a proper mess. While Linux is about as old as Windows, Linux started by adopting the old X11 window system. This was one of the oldest GUI systems in the world, and it shows - the API is beyond horrible. As a result, lots and lots of libraries were stacked on top of X11 to make it workable, with varying degrees of compatibility.
To make things worse, X11 was never just a single implementation; there were competing X11 implementations. Linux chiefly used XFree86, which later became Xorg. And because that's not confusing enough, recent developments have added alternatives to X11 (notably Wayland), which provide backwards-compatibility interfaces to X11. Some of the GUI libraries on top of X11 are aware of these new systems and may now use the new interfaces.
So you basically have a pretty fragile stack, and any library with a decent programming model sits on shaky foundations. It's no wonder that changing any part may suddenly cause different rendering, possibly even the selection of entirely new rendering paths.
Windows is a bit better, but it too is old and has some competing GUI libraries. The reason why it's better is probably threefold: there's a single party in control of all the interfaces (Microsoft), that party was aware of the bad X11 design from the start (and so avoided the beginner mistakes), and Microsoft has far more resources to spend.
But still, both Linux and Windows have had to evolve to support Unicode and the much larger fonts it brought, 24-bit color, high-DPI screens, LCD screens with subpixel rendering, GPU acceleration, etc. And it has been hard for both to drop old interfaces.
Can anyone please help me understand what resources the system uses to render fonts in the browser - e.g. graphics card, memory, etc.? Sorry for the vague question, but I am facing a problem here. We have a custom font that gets rendered differently by the same version of Safari on different systems. For instance, on some systems the font appears a bit bolder and, because of that, takes up more space when displayed. All the systems have the same resolution.
Safari uses Quartz to render fonts on OS X. I'm not sure what it uses on Windows, but it seems it doesn't use the Windows rendering engine for that. Quartz utilizes some of the graphics card's resources, but it's still mostly software rendering.
Quartz has a number of rasterization options that can influence font appearance. One of the most prominent is font smoothing. Here's an article about font smoothing on the support forums. That said, I don't think it should change the amount of space the rasterized text actually takes.
You may have problems with the fonts themselves. Maybe different systems have different versions of the font you use, and that changes the way it looks. Or your font doesn't get loaded on some systems and Safari falls back to a substitute font. There may be many reasons; it's hard to tell what exactly causes your problems from your description alone.
I especially hear it from advocates of "business" notebooks manufactured by IBM/Lenovo, HP, and Dell (maybe) that "business users do not need quality screens". They stick in the worst possible LCDs out there (even if high-resolution) and dare to sell that crap. You can't even distinguish hue variations like light yellow vs. light grey.
I really don't get it - do all of you agree that the color reproduction of a developer's display is irrelevant, and that even a grayscale display would do?
I understand most developers work with text, but... at times there is some design work to be done which is not doable on cheap LCDs.
And besides - wouldn't you enjoy fresh, saturated colors even in a development environment? Bright, cheerful icons on menus? Isn't it better to sit in a sunny office with green trees and flowers outside the window than in a garage with dark colors and weak artificial lighting?
P.S. Inspired by the topic about keyboards: Keyboard for programmers
The question about displays and developers has interested me for a very long time.
Even though I don't need a high-quality screen, I appreciate the difference, and like esnoeijs said, an occasion will arise where I'll need to critique some graphic design work, and then a quality monitor will make a difference.
I think, "developer" is too broad to give a precise answer.
If you are a code-crafter writing programs that read text and emit text, without any need to make colors look nice, then yes, you really can go with a monochrome screen. You need black as a background, white as the foreground, and some reverse video to highlight matching braces. In this particular case, I would value high resolution far more than color, since it is usually about seeing more code (and especially more things about and around the current piece of code: documentation, tests, a quick interpreter loop, some research paper, you name it).
If you are a developer just learning a language and you have an editor with syntax highlighting, then color is a massive, massive usability leap. I would not want to miss the ability to display keywords in bright pink, strings in bright cyan, and similar things (all on a black background).
If you are a frontend designer, it is a completely different story: you will need a high-quality display with good color reproduction. You do not need the best one possible, but your display should at least be able to display the colors your regular users will see, so you do not put in green because you wanted blue while your users see yellow (or other nonsense).
If you use tools that require colors in order to encode information, color is crucial, because otherwise you might be unable to see that additional information.
...
So I think most programmers do not need ridiculous color-display abilities, even though a good solid color display is helpful most of the time, because they need to work on some frontend or want to learn some language.
HTH,
Tetha
Better-quality color monitors can come in handy in a lot of ways. The first way that comes to mind is if you are using a code development tool that can highlight keywords, as Zend does.
I once spent half a day trying to add zebra striping to a table in my company's webapp that already had it because both my screen and QA's screen were unable to display the different colors of the zebra stripes (they rendered as the same color). Likewise, I once had my boss ask me to change the color of part of an icon, and to me it made the icon look like a uniform blue, but on his much better monitor, you could clearly see both shades of blue and it looked really nice... it was hard to make that edit without being able to see what I was doing.
I guess the developers in my company end up doing some design work in addition to real dev. I do spend most of my time in the shell though, so aside from the constant flickering that gives me headaches (yes, it's an LCD), a low-quality monitor is OK.
I'm a developer, but being in webdev land I've picked up enough design knowledge to be critical about it, so I mostly try to get Samsung screens with a good colour range.
With a good monitor, you can adjust it to your liking.
Personally, I have a $700 Fujitsu Siemens monitor bought (AFAIK) in 2000 and a $340 BenQ bought in 2005, and I prefer coding on the first monitor, as I don't have to crank up the brightness (reducing headaches) and can still see everything I want to see (subpixel-hinted 6-point fonts, subtle variations in syntax highlighting, etc.).
At least one author would disagree. He ranked color accuracy on four notebooks:
Lenovo ThinkPad W700
IBM/Lenovo ThinkPad T60
Dell Inspiron Mini 9
Apple late-2008 MacBook Pro 15 inch
I'm less picky about the actual monitor I have and more picky that I have two monitors that are exactly the same model and use the same video connector.
As a web developer, it can be frustrating to have colors that don't match because one of your monitors is VGA and the other is DVI.
Possibly the sort of "business user" who works on invoices all day does not need a very good display, but anyone who works on anything whose appearance counts - from software developers to business users who need to make PowerPoint presentations - does.
If you are a hardcore terminal+vim user like me, then color quality and fidelity are almost irrelevant, except for the quality of blue (which I use in some situations, like directory names), which tends to be too faint to see against my black background. Nothing that cannot be fixed with some tinkering, though, but I am used to blue.
That said, I actually have a couple of things to say about the new screen on the unibody MacBook. The glossy finish is a real pain - so annoying. And the color fidelity is very low. I spent an evening trying to understand why, on a gradient from light green to white, I had a pinkish stripe. It turns out the pink is an artifact of the MacBook screen; another screen does not show the issue. On the plus side, the LED backlight is very powerful and nice, making the colors very vibrant.
This is to say that color fidelity is fundamental if you use color-intensive tools like Eclipse (which communicates a lot through different shades of color), and of course for web frontend development. If you just need a terminal and vim running, I don't think color fidelity makes a real difference, once you have a comfortable setup with low reflections and good contrast.
(note: it's been a few years since I've shopped for a monitor. this may be out of date)
I find it interesting that nobody has really defined "quality" yet, other than to say more vibrant colors. Generally, LCD panels fall into one of two tracks:
Good color/image reproduction (S-IPS panels and similar)
Good response time (TN panels)
I consider S-IPS and similar panels a must for development for one crucial reason: viewing angle. The image doesn't change colors or do other weird things as your angle to the screen changes. Very important for collaboration.
At the high end of this scale are monitors that are designed to perform well with color calibration. Most developers won't need anything that fancy.
TN panels are decent for gaming, movies, and other things featuring fast motion. They are optimized for pixel response time, and it's usually the main feature touted for these panels. Many cheaper panels are going to be of this variety.
In a monitor, I look for four things:
panel type (S-IPS or similar)
brightness (no more than 300 cd/m²)
dot pitch (for good text, go with a small dot pitch: 0.27 mm is too big)
good contrast/ light leakage/ etc. (how black is black, and how uniform)
Although I love S-IPS panels, I must admit that any LCD monitor that can meet criteria 2-4 above would be a good choice, even if it's a cheaper TN panel.
It depends what you're doing.
If you're processing images, yes, a good "quality" monitor is important - but it's equally (or more) important to have it set up correctly and calibrated.
If you're doing web design, having a decent monitor is important, but again only if it's set up correctly (contrast/brightness/colour balance).
If you're just "writing code", having a monitor your eyes like is important; the colour reproduction isn't. A monochrome monitor might be stretching it - syntax highlighting is nice - but even vim and its 16 colours is "enough".
The term "quality" is also a bit "it depends"... CRTs have far better colour reproduction than TFTs, but I wouldn't recommend them (I always found reading text on them difficult, and they are hard to find, bulky, and generally deprecated now).
For web design, pretty much any monitor will be fine as long as it's not a 10-year-old CRT with a broken red gun. Again, as long as it's set up correctly, most monitors are capable of displaying colour "well enough".
For "writing code", I think size/resolution/number of screens is more important than colour reproduction, as shown by most answers to any of these questions.
Like any responsible developer, I'd like to make sure that the sites I produce are accessible to the widest possible audience, and that includes the significant fraction of the population with some form of colour blindness.
There are many websites which offer to filter a URL you feed them, either by rendering a picture or by filtering all the content. However, both approaches seem to fail when rendering even moderately complex layouts, so I'd be interested in finding a client-side approach.
The ideal solution would be a system filter over the whole screen that can be used to test any program. The next best thing would be a browser plugin.
I came across Color Oracle and thought it might help. Here is the short description:
Color Oracle is a colorblindness simulator for Windows, Mac and Linux. It takes the guesswork out of designing for color blindness by showing you in real time what people with common color vision impairments will see.
Color Oracle is great, but another option is KMag, which is part of KDE in Linux. It's ostensibly a screen magnifier, but can simulate protanopia, deuteranopia, tritanopia and achromatopsia.
It differs from Color Oracle by requiring an additional window in which to display the re-coloured image, but an advantage is that one can modify the underlying image at the same time as previewing the simulation.
Here is a screenshot showing the original figure on the left, and the KMag window on the right, simulating protanopia.
Here's a link to a website that simulates various kinds of color blindness:
http://www.vischeck.com/
They let you check URLs and screenshots against three different types of color blindness (URL checking is a bit dated, though; the image check works better).
I'd encourage everyone to check their applications, by the way. Seeing your own app through others' eyes may be an eye-opener (pun intended).
I know this is a quite old question, but I've recently found an interesting solution to transparently simulate color blindness.
When working with Linux, you can simulate color blindness using the Color Filter plugin for Compiz. It comes with profiles for deuteranopia and protanopia and changes the colors of the whole screen in real time.
It's very nice because it works transparently in all applications (even within YouTube videos), but it will only work where Compiz is available, i.e. only under Linux.
Here's an article that has some guidelines for optimizing UI for color blind users:
Particletree » Be Kind to the Color Blind
It contains a link to another article with the kind of tools you were asking for:
10 colour contrast checking tools to improve the accessibility of your design | 456 Berea Street
A great paper that explains a conversion that preserves color differences is:
Detail Preserving Reproduction of Color Images for Monochromats and Dichromats (PDF)
I haven't implemented the filter, but I plan to when I have some more free time.
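In the meantime, here is a minimal sketch of just the monochromat (achromatopsia) part - replacing every pixel with its Rec. 601 luminance. Proper dichromat simulation needs the LMS-space transform described in the paper; this is only the trivial end of it:

    /* Sketch: simulate total color blindness (achromatopsia) by converting a
     * packed 24-bit RGB buffer to grayscale using Rec. 601 luma weights.
     * Dichromat simulation (protanopia etc.) requires an LMS-space matrix
     * transform and is not covered here. */
    #include <stddef.h>

    void simulate_achromatopsia(unsigned char *rgb, size_t pixel_count)
    {
        for (size_t i = 0; i < pixel_count; i++) {
            unsigned char r = rgb[3*i], g = rgb[3*i + 1], b = rgb[3*i + 2];
            unsigned char y = (unsigned char)(0.299 * r + 0.587 * g + 0.114 * b);
            rgb[3*i] = rgb[3*i + 1] = rgb[3*i + 2] = y;
        }
    }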
I found Colour Simulations easy to use on Windows 10. This software can apply a color-blind filter to part of the screen or the whole screen. What's great is that it lets me interact with my PC normally, as if the filter weren't there, even in fullscreen mode. It runs quite slowly on my 4K screen with an integrated graphics card, though.
I'm interested in learning about the different layers of abstraction available for making graphical applications.
I see a lot of terms thrown around: At the highest level of abstraction, I hear about things like C#, .NET, pyglet and pygame. Further down, I hear about DirectX and OpenGL. Then there's DirectDraw, SDL, the Win32 API, and still other multi-platform libraries like WxWidgets.
How can I get a good sense of where one of these layers ends and where the next one begins? What is the "lowest possible level" way of creating a window in Windows, in C? What about C++? (A code sample would be divine.) What about in X11? Are the Windows implementations of OpenGL and DirectX built on top of the Win32 API? Where can I begin to learn about these things?
There's another question on SO where Programming Windows is suggested. What about for Linux? Is there an equivalent book?
I'm aware that this is very low-level, and that there are many friendlier tools available, but I would like to at least learn the basics of what's going on beneath the surface. As much as I'd like to begin slinging windows and vectors right off the bat, starting with something like pygame is too high-level for me; I really need to make the full conceptual circuit of how you draw stuff on a computer.
I will certainly appreciate suggestions for books and resources, but I think it would be stupendously cool if the answers to this question filled up with lots of different ways to get to "Hello world" with different approaches to graphics programming. C? C++? Using OpenGL? Using DirectX? On Windows XP? On Ubuntu? Maybe I'm asking for too much.
The lowest level would be the graphics card's video RAM. When the computer first starts, the graphics card is typically set to the 80x25 character legacy mode.
You can write text via a BIOS-provided interrupt at this point. You can also change the foreground and background color, from a palette of 16 distinct colors. You can access ports/registers to change the display mode. At this point you could, say, load a different font into display memory and still use the 80x25 mode (OS installers usually do this), or you can go ahead and enable VGA/SVGA. It's quite complicated; that's what drivers are for.
Once the card is in the 'higher' mode, you change what's on screen by writing to the memory mapped to the video card. Pixels are stored row by row, usually with some padding at the end of each line (pixels that aren't mapped to the screen) which you have to compensate for. But yes, you could copy the pixels of an image in memory directly to the screen.
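As a concrete illustration - a sketch that assumes a 16-bit real-mode DOS compiler such as Turbo C running under DOSBox, not something a modern OS will let you do directly:

    /* Switch to VGA mode 13h (320x200, 256 colors) via the BIOS, then plot
     * pixels by writing straight into video memory at segment A000h. */
    #include <dos.h>

    void set_mode_13h(void)
    {
        union REGS r;
        r.x.ax = 0x0013;            /* BIOS int 10h: set video mode 13h */
        int86(0x10, &r, &r);
    }

    void put_pixel(int x, int y, unsigned char color)
    {
        unsigned char far *vga = (unsigned char far *)MK_FP(0xA000, 0);
        vga[y * 320 + x] = color;   /* one byte per pixel, row-major, no padding in this mode */
    }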
For things like DirectX and OpenGL, rather than writing directly to the screen, commands are sent to the graphics card and it updates the screen itself. Commands like "Hey you, draw this image I've loaded into VRAM here, here and here" or "Draw these triangles with this transformation matrix..." take a fraction of the time compared to pushing pixels one by one. The CPU will thank you.
DirectX/OpenGL is a programmer-friendly library for sending those commands to the card, with all the supporting functions to help you get it done smoothly. A more direct approach would only be unproductive.
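To make that concrete, here is a minimal sketch of the command-based model using legacy OpenGL - the GLUT toolkit is just an assumption to keep the windowing boilerplate short (compile with something like -lGL -lglut):

    /* Draw one colored triangle: we only describe the vertices; the card
     * does the rasterization. */
    #include <GL/glut.h>

    static void display(void)
    {
        glClear(GL_COLOR_BUFFER_BIT);
        glBegin(GL_TRIANGLES);                       /* "draw these triangles..." */
        glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
        glColor3f(0.0f, 1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
        glColor3f(0.0f, 0.0f, 1.0f); glVertex2f( 0.0f,  0.5f);
        glEnd();
        glutSwapBuffers();
    }

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
        glutInitWindowSize(640, 480);
        glutCreateWindow("triangle");
        glutDisplayFunc(display);
        glutMainLoop();                              /* hands control to GLUT */
        return 0;
    }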
SDL is an abstraction layer, so without bothering to read up on it I'd guess it has different ways of working on each system. On one it might use semi-direct screen writing, on another Direct3D, etc. - whatever's fastest, as long as the code stays cross-platform.
Then there are GDI/GDI+ and the X Window System. They're designed specifically to draw windows. Originally they drew using the pixel-by-pixel method (which was good enough, because they only had to redraw when a button was pressed or a window moved, etc.), but now they use Direct3D/OpenGL for accelerated drawing (and special effects). Optimizations depend on the versions and implementations of these libraries.
So if you want the most power and speed, DirectX/OpenGL is the way to go. SDL is certainly useful for getting the most from a cross-platform environment and integrates with OpenGL anyway. The windowing system comes last, but don't underestimate it - especially with the stuff Microsoft has been coming up with lately.
Michael Abrash's Graphics Programming 'Black Book' is a great place to start. Plus you can download it for free!
If you really want to start at the bottom then drawing a line is the most basic operation. Computer graphics is simply about filling in pixels on a grid (screen), so you need to work out which pixels to fill in to get a line that goes from (x0,y0) to (x1,y1).
Check out Bresenham's algorithm to get a feel for what is involved.
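For a feel of the integer-only arithmetic involved, here is a minimal sketch (put_pixel is a hypothetical plotting helper, not a standard function):

    /* Bresenham's line algorithm, all octants, integers only. */
    #include <stdlib.h>

    void put_pixel(int x, int y, unsigned char color); /* hypothetical plotter */

    void draw_line(int x0, int y0, int x1, int y1, unsigned char color)
    {
        int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
        int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
        int err = dx + dy;                           /* running error term */

        for (;;) {
            put_pixel(x0, y0, color);
            if (x0 == x1 && y0 == y1)
                break;
            int e2 = 2 * err;
            if (e2 >= dy) { err += dy; x0 += sx; }   /* step horizontally */
            if (e2 <= dx) { err += dx; y0 += sy; }   /* step vertically */
        }
    }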
Being a good graphics and image-processing programmer doesn't require this low-level knowledge, but I do hate being clueless about the insides of what I'm using. I see two ways to chase this: top-down or bottom-up.
Top-down is a matter of following how the action traces from a high-level graphics operation, such as drawing a circle, down to the hardware. Get to know OpenGL well. Then the source to Mesa (free!) provides a peek at how OpenGL can be implemented in software. The source to Xorg would be next, first to see how the action goes from API calls through the client side to the X server. Finally, you dive into a device driver that interfaces with the hardware.
Bottom-up: build your own graphics hardware. Think of ways it could connect to a computer - how to handle massive numbers of pixels through a few byte-sized registers, how DMA would work. Write a device driver, and try designing a graphics library that might be useful for app programmers.
The bottom-up way is how I learned, years ago when it was still possible with slow 8-bit microprocessors. The direct experience with circuitry and hardware/software interfacing gave me a good appreciation of the difficult design decisions - e.g. whether to paint rectangles with clever hardware, in the device driver, or at a higher level. None of this is of practical everyday value, but it provided a foundation for understanding newer technology.
See the Open GPU Documentation section:
http://developer.amd.com/documentation/guides/Pages/default.aspx
HTH
On MS Windows it is easy: you use what the API provides, whether the standard Windows programming API or the DirectX family of APIs; they are well documented.
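For instance, the bare Win32 way of creating a window in C looks roughly like this minimal sketch (assuming an ANSI build; MSVC or MinGW will compile it):

    /* Register a window class, create the window, and run the message loop. */
    #include <windows.h>

    static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
    {
        if (msg == WM_DESTROY) {            /* window closed: leave the loop */
            PostQuitMessage(0);
            return 0;
        }
        return DefWindowProc(hwnd, msg, wp, lp);
    }

    int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev, LPSTR cmd, int show)
    {
        WNDCLASS wc = {0};
        wc.lpfnWndProc   = WndProc;
        wc.hInstance     = hInst;
        wc.lpszClassName = "HelloClass";
        RegisterClass(&wc);

        HWND hwnd = CreateWindow("HelloClass", "Hello, world",
                                 WS_OVERLAPPEDWINDOW,
                                 CW_USEDEFAULT, CW_USEDEFAULT, 640, 480,
                                 NULL, NULL, hInst, NULL);
        ShowWindow(hwnd, show);

        MSG msg;
        while (GetMessage(&msg, NULL, 0, 0) > 0) {  /* pump messages */
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        return 0;
    }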
In an X Window System environment you use whatever X11 libraries are provided. If you want to understand the principles behind windowing on X, I suggest you program directly against Xlib, never mind that many others tell you not to; it will really help you understand graphics and windowing under X. You can read the documentation on X programming (Google for it). (After this exercise you would appreciate the higher-level libraries!)
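A comparable minimal Xlib sketch (again only a rough assumption of mine; compile with something like cc hello_x11.c -lX11):

    /* Connect to the X server, create and map a window, then read events. */
    #include <X11/Xlib.h>
    #include <stdio.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);          /* connect to the X server */
        if (!dpy) {
            fprintf(stderr, "cannot open display\n");
            return 1;
        }

        int screen = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                                         0, 0, 640, 480, 1,
                                         BlackPixel(dpy, screen),
                                         WhitePixel(dpy, screen));
        XSelectInput(dpy, win, ExposureMask | KeyPressMask);
        XMapWindow(dpy, win);                       /* make it visible */

        for (;;) {                                  /* event loop */
            XEvent ev;
            XNextEvent(dpy, &ev);
            if (ev.type == KeyPress)
                break;                              /* quit on any key press */
        }

        XCloseDisplay(dpy);
        return 0;
    }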
Apart from the above, the absolute lowest level you can go (excluding chip level) is to call the interrupts that switch to the various graphics modes available - there are several - and then write to the screen buffers; but for this you would have to use assembler, as anything else would be too slow. Going this way will not be portable at all.
Another post mentions Abrash's Black Book - an excellent resource.
Edit: As for books on programming Linux: it is a community thing; there are many HOWTOs around. Also find a forum, join it, and as long as you act civilized you will get all the help you could ever need.
Right off the bat, I'd say "you're asking too much." From what little experience I've had, I would recommend reading some tutorials or getting a book on either DirectX or OpenGL to start out. To go any lower than that would be pretty complex. Most of the books I've seen on OpenGL or DirectX have pretty good introductions that explain what the functions/classes do.
Once you get the hang of one of these, you could always dig in to the libraries to see what exactly they're doing to go lower.
Or, if you really, absolutely MUST learn the LOWEST level... read the book in the above post.
libX11 is the lowest-level library for X11. I believe OpenGL/DirectX talk to the driver/hardware directly (or emulate unsupported operations), so they would be the lowest-level libraries there.
If you want to start with very low level programming, look for x86 assembly code for VGA and fire up a copy of dosbox or similar.
The Vulkan API gives you very low-level access to most, if not all, features of the GPU, both computational and graphical. It works on AMD and NVIDIA GPUs (though not all of them).
You can also use CUDA, but it only works on NVIDIA GPUs and gives access to computational features only - no video output.