Color System - Computer Graphics

Suppose that when you power on a computer, its monitor displays predominantly variations of black and magenta. You suspect that one of the three RGB wires in your monitor cable is no longer properly connected. Which one is it? Show your reasoning.

Related

ICC Color profiles able to match Pantone, RAL, etc

I'm trying to calibrate/adjust my screen color configuration to come closer to a specific paint color, like RAL.
I'm using Encycolorpedia, which works great for determining what color something is and even the deltas between specific paints. However, my screen doesn't come close to the real-life situation. I received a color wheel and, oh my gosh, what a huge difference. So ordering paint online is an absolute no-no.
The big question is how to close the gap between Adobe RGB colors and paint colors like (Sikkens) RAL or Pantone. Something within the ICC profile of the monitor (which is a Samsung)?
I know there are colorimeter "spiders" to calibrate screens, but I really wonder if there's an online database of corrections or calibrated settings for monitors?
You cannot do it. And possibly you do not have a correct understanding of ICC, colour profiles, and RAL.
If you want to calibrate your screen, you should have a good screen (I recommend a hardware-calibrated one, so you will not lose colours) and a calibration device. Then you will have a well-calibrated monitor which displays the correct colours. You use the ICC file to tell the screen (or graphics card) how to handle colours, and to tell your programs which range of colours they can use.
An ICC profile just tells a screen how to convert numbers (colours). An AdobeRGB profile will not make your screen show AdobeRGB colours; it will just transform the colours so that you get the "numbers" expected by a 100% accurate AdobeRGB screen (which never exists, so it is better to use device-specific profiles). If your screen is not 100% AdobeRGB, it will display some colours in an unexpected way. Our eyes adapt to colours, so for a single person this is not a huge problem, but if you are doing a magazine with 15 graphic editors, the reader wants consistency (there is no time for eyes to adapt for every image).
But then you get to Pantone and RAL: these are a different kind of colour description (really, forget RGB for such colours: you need a spectral distribution). They are for real objects, which are seen under different lighting conditions (illuminants), so an object can be seen as different colours (as RGB) while being the same colour (as paint/dye).
And Pantone and RAL are discrete colours (enumerated colours, not homogeneously distributed). For screens we just use a LUT or a 3D LUT, i.e. a simple matrix conversion from received colour numbers to displayed colour numbers; anything more complex is not something a screen can do quickly enough (60 times per second, for every pixel).
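As a rough sketch of the kind of per-pixel correction such a LUT plus matrix can apply (the curves and matrix values are placeholders; in reality they come from calibration, and nothing below is an actual profile):

    #include <array>
    #include <cstdint>

    // Per-channel 1D LUTs plus a 3x3 matrix: the cheap, fixed correction that
    // display hardware (or the graphics card) can apply to every pixel of every
    // frame. The contents would come from calibration; these are placeholders.
    struct DisplayCorrection {
        std::array<uint8_t, 256> lutR{}, lutG{}, lutB{};   // 1D curve per channel
        std::array<std::array<double, 3>, 3> matrix{};     // small colour mix/rotation
    };

    std::array<uint8_t, 3> Correct(const DisplayCorrection& c, std::array<uint8_t, 3> rgb) {
        // 1) per-channel curve
        double r = c.lutR[rgb[0]] / 255.0;
        double g = c.lutG[rgb[1]] / 255.0;
        double b = c.lutB[rgb[2]] / 255.0;
        // 2) 3x3 matrix mix, clamped back to the displayable range
        std::array<uint8_t, 3> out{};
        for (int i = 0; i < 3; ++i) {
            double v = c.matrix[i][0] * r + c.matrix[i][1] * g + c.matrix[i][2] * b;
            if (v < 0.0) v = 0.0; else if (v > 1.0) v = 1.0;
            out[i] = static_cast<uint8_t>(v * 255.0 + 0.5);
        }
        return out;
    }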
Finally: screens are a very different technology from physical objects. There are some screens which create the same feeling (and which also seem more "opaque"), but this is not something we can reach with standard screens (or even most good wide-gamut screens). And screens are bad at yellows and yellow-greens (I'm thinking of the various RAL colours used for emergency purposes); those are out of reach for most monitors. A comparison with a printed proof is necessary (and you need a [frequently] calibrated printer).
Note: usually you can get drivers for your monitor (look online for your monitor's manuals and drivers [and look only at the manufacturer's website]). These usually include a "driver" which is just an ICC profile, but it is only a generic "standard" profile. Monitors change over time (from cold [at start-up] to warm, and also over long periods of use), and different batches may differ (especially if produced in different places, which is common for very common monitors). If you use the manufacturer's profile, you should get better colours. If you calibrate yourself, you will have much better colours, but as I wrote, it is probably not possible to get good/very good matching colours.

What is the point of having metric mapping modes like MM_LOMETRIC and MM_LOENGLISH?

Page 47 of the book Programming with MFC (Second Edition) by Jeff Prosise (Chapter 2: Drawing in a Window) has the following statement.
One thing to keep in mind when you use the metric mapping modes is that on display screens, 1 logical inch usually doesn't equal 1 physical inch. In other words, if you draw a line that's 100 units long in the MM_LOENGLISH mapping mode, the line probably won't be exactly 1 inch long.
My question is: if Windows cannot give any guarantee about the physical dimensions of things we draw using the metric mapping modes, then what is the point of having such mapping modes? Are metric mapping modes relevant only for printers, and completely irrelevant for monitors?
With modern monitors and digital ports like HDMI/DisplayPort, can't the Windows OS get the physical dimensions of the screen, making it possible to draw things using physical dimensions (inches rather than pixels; note that the current resolution of the monitor is already known to the OS)?
One of the ideas behind the logical inch is that the viewing distance to a monitor was typically larger than the distance to a printed page, so it made sense to have the default logical inch on a typical monitor be a bit larger than a physical inch, especially in an era when WYSIWYG was taking off. Rather than putting all of the burden of adjusting for device resolution on the application, the logical inch lets the WYSIWYG application developer think in terms of distances and sizes on the printed page and not have to work in pixels or dots, which varied widely from device to device (and especially from monitor to printer).
Another issue was that, with the relatively limited resolutions of early monitors, it just wasn't practical to show legible text as small as typically printed text. For example, text was commonly printed at 6 lines per inch. At typical monitor resolutions of roughly 72 pixels per inch, this meant about 12 pixels per line (72 ÷ 6), which really limits font design and legibility (especially before anti-aliased and sub-pixel rendered text was practical). Making the logical inch default to 120-130% of an actual inch (on a typical monitor of the era) means lines of text would be about 16 pixels high, making typographic niceties like serifs and italics more tenable (though still not pretty).
Also keep in mind that the user controls the logical inch and could very well set the logical inch so that it matches the physical inch if that suited their needs.
The logical units are still useful today, even as monitors have resolutions approaching those of older laser printers. Consider designing slides for a presentation that will be projected and also printed as handouts. The projection size is a function of the projector's optics and its distance from the screen. There's no way, even with two-way communication between the OS and the display device, for the OS to determine the actual physical size (nor would it be useful for most applications).
I'm not a CSS expert, but it's my understanding that even when working in CSS's px units, you're working in a logical unit that may not be exactly the size of a physical pixel. It's supposed to take into account the actual resolution of the device and the typical viewing distance, allowing web designers to make the same 96-per-inch assumption that native application developers had long been using.
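To make the logical-inch point concrete in GDI terms, here is a minimal sketch (assuming a valid window HDC, e.g. from BeginPaint). In MM_LOENGLISH one logical unit is 0.01 inch, so a 100-unit line is one logical inch, which maps to GetDeviceCaps(hdc, LOGPIXELSX) device pixels (usually 96) rather than to a measured physical inch on the panel:

    // Sketch: draw a "one logical inch" line in MM_LOENGLISH (Win32 GDI).
    #include <windows.h>

    void DrawLogicalInch(HDC hdc)
    {
        // MM_LOENGLISH: 1 logical unit = 0.01 inch, positive y points up,
        // so coordinates below the origin are negative.
        SetMapMode(hdc, MM_LOENGLISH);

        MoveToEx(hdc, 100, -100, nullptr);
        LineTo(hdc, 200, -100);            // 100 units = 1 logical inch long

        // The "inch" is really this many device pixels (typically 96),
        // not a physical inch measured on the monitor.
        int logicalDpiX = GetDeviceCaps(hdc, LOGPIXELSX);
        (void)logicalDpiX;
    }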

How does an operating system draw windows on the screen?

I realized after many years of using and programming computers that the stack of software that actually draws on the screen is mostly a mystery to me.
I have worked on some embedded LCD GUI applications and I think that provides some clues as to a simplified stack but the whole picture for something like the Windows operating system is still murky.
From what I know:
Lowest level 0 is electronic hardware (integrated circuits) that provide a digital interface to turn a pixel on the screen a certain color or grey scale shade. The interface is documented in data sheets so you know how to toggle the digital lines to turn any pixel the way you want it.
Next level 1 is a hardware driver. This usually abstracts the hardware into a common interface. Something like SetPixel() etc.
Next, level 2 is a 2D/3D graphics library (of which I have limited widget/single-screen experience). The lower levels seem to provide a buffer or range of memory that represents the pixels on the screen. The graphics library abstracts this so you can call functions like DrawText("text", 10, 10, "font") and it will set the pixels for you in the right way.
The next level would be the magic of the OS. The windows/buttons/forms/WPF/etc. are created in memory and then routed to the appropriate driver, while also being directed to a certain part of the screen?
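A rough sketch of what levels 1-2 might look like over a plain memory framebuffer (the types and names here are made up, purely to make the layering concrete):

    #include <cstdint>
    #include <vector>

    // Level 1-ish: a dumb framebuffer the "driver" exposes.
    // Hypothetical layout: 32-bit XRGB pixels, row-major.
    struct Framebuffer {
        int width, height;
        std::vector<uint32_t> pixels;   // what eventually reaches the panel

        Framebuffer(int w, int h)
            : width(w), height(h), pixels(static_cast<size_t>(w) * h, 0) {}

        void SetPixel(int x, int y, uint32_t xrgb) {
            if (x >= 0 && x < width && y >= 0 && y < height)
                pixels[static_cast<size_t>(y) * width + x] = xrgb;
        }
    };

    // Level 2-ish: a graphics-library primitive built only on SetPixel().
    // A real DrawText() would rasterize glyphs the same way, pixel by pixel.
    void FillRect(Framebuffer& fb, int x, int y, int w, int h, uint32_t xrgb) {
        for (int row = y; row < y + h; ++row)
            for (int col = x; col < x + w; ++col)
                fb.SetPixel(col, row, xrgb);
    }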
But how does something like Windows really work?
I would assume that the GPU fits between level 0 and level 1. The GPU drives the pixels on the display directly, and now the level 1 drivers are a GPU driver. There are more functions available to enable the added functionality a GPU provides. (What would this be, though? Does the OS pass an array of triangles in 3D space, and the GPU processes this into a 3D perspective view and then puts it on the screen?)
The biggest mystery to me, though, is when you get into the windows part of things. You can have SketchUp, Visual Studio and an FPS game all running at the same time and be able to switch between them, or in some cases tile them on the screen or have them spread across multiple screens. How is this tracked and rendered? Each of these would have to be running in the background, and the OS would have to say which graphics pipe should be connected to which part of the screen. How would Windows say this part of the screen is a 3D game and this part is a 2D WPF app, etc.?
On top of all that, you have DirectX used in one application and Qt in another. I remember having multiple games or apps running that use the same technology, so how would that work? From what I can see you would have Application -> Graphics library (DirectX, WPF, etc.) -> Frame buffer -> Windows director (where and what part of the screen should this frame buffer be scaled to) -> Driver?
In the end it is just bits toggling to indicate which pixel should be what color but it is one hell of a lot of toggling bits along the way to get there.
If I fire up Visual Studio and create a basic WPF app, what is going on in the background when I drop a button on the screen and hit start? I have used the VS designer to drop it on, created it in XAML, and I have even manually drawn things pixel by pixel in an embedded system, but what happens in between, the so-called meat of this sandwich?
I have used Android, iOS, Windows and Linux, and it seems to be common functionality, but I have never seen or heard an explanation of the "how" behind what I outline above; I only have a slightly educated guess.
Is anyone able to shed some light on how this works?
VGA
Assuming x86, VGA memory is mapped at a standard video buffer address in the lowest 1 MiB (0x000B8000 for text mode and 0x000A0000 for graphics mode). There are also many VGA registers that control the behaviour of the card. There were two widely used video modes: mode 0x12 (16-color 640x480) and mode 0x13 (256-color 320x200). Mode 0x12 involved switching planes (blue, green, red, intensity) with VGA registers, while mode 0x13 involved a 256-color palette which can be modified using VGA registers.
Normally, an OS relying on VGA would set the mode using BIOS while booting, or write to the appropriate VGA registers at runtime (if it knows what it is doing). To draw to the screen, the video driver would either simply write to the video memory (mode 0x13) or combine that with writing to VGA registers too (mode 0x12).
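A minimal freestanding sketch of what a video driver's write to these legacy windows looks like (this assumes the addresses are identity-mapped and accessible, i.e. a real-mode or kernel/boot environment, not a normal user-space process):

    #include <cstdint>

    // Legacy VGA memory windows (only reachable from a kernel/boot environment).
    static volatile uint16_t* const kTextMem  = reinterpret_cast<volatile uint16_t*>(0xB8000);
    static volatile uint8_t*  const kGraphMem = reinterpret_cast<volatile uint8_t*>(0xA0000);

    // Text mode: each 80x25 cell is (attribute << 8) | ASCII character.
    void PutChar(int col, int row, char c, uint8_t attr) {
        kTextMem[row * 80 + col] =
            static_cast<uint16_t>((attr << 8) | static_cast<uint8_t>(c));
    }

    // Mode 0x13: 320x200, one byte per pixel, the byte is a palette index.
    void PutPixel(int x, int y, uint8_t paletteIndex) {
        kGraphMem[y * 320 + x] = paletteIndex;
    }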
Most cards in use today are still (partly) VGA compatible.
VBE
Some years later, VESA invented "VESA BIOS Extensions", which was a standard interface for video cards and allowed higher resolutions and greater color depths. The video memory was exposed through two different ways: banked mode and linear framebuffer. The banked mode would expose some small portion of the video memory to a low address (0x000A0000) and the video driver would need to switch banks almost each time the screen is to be updated. The linear framebuffer is a much more convenient solution, which would map the entire video memory to a non-standard high address.
During boot, an OS would call the VBE interface to query the supported modes and set the most convenient one, or it would bypass the VBE interface and write directly to the needed video hardware registers (if it knows what it is doing). In either case, banked mode or linear framebuffer, the video driver then writes to the memory address at which the video memory is mapped.
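A sketch of a pixel write through a linear framebuffer, assuming the OS has already obtained the framebuffer address, pitch (bytes per scanline) and a 32-bits-per-pixel mode from the VBE mode information (the structure and field names below are illustrative, not the actual VBE layout):

    #include <cstdint>

    // Filled in from the VBE mode information at boot (names are illustrative).
    struct LinearFramebuffer {
        volatile uint8_t* base;   // address the video memory is mapped at
        uint32_t pitch;           // bytes per scanline (may exceed width * 4)
        uint32_t width, height;
    };

    // Assumes 32 bits per pixel; real code must check the mode's colour masks.
    void PutPixel(const LinearFramebuffer& fb, uint32_t x, uint32_t y, uint32_t xrgb) {
        if (x >= fb.width || y >= fb.height) return;
        auto* row = reinterpret_cast<volatile uint32_t*>(fb.base + y * fb.pitch);
        row[x] = xrgb;
    }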
Most cards in use today are still (partly) VBE compatible.
Modern video interfaces
More modern video interfaces usually aren't documented as widely as VGA and/or VBE. However, the video memory is still mapped at some address, and hardware registers and/or a buffer contain modifiable information about the behaviour of the graphics card. The difference is that the interfaces aren't standardised any more, and nowadays an advanced OS requires different drivers for each graphics card.

How to make colours on one screen look the same as another

Given two separate computers, how could one ensure that colours are being displayed roughly the same on each screen?
I.e., one screen might have 50% more brightness than another, so colours appear duller on one screen. One artist on one computer might be seeing the pictures differently from another; it's important that they are seeing the same levels.
Is there some sort of calibration technique you can do via software? Any techniques? Or is a hardware solution the only way?
If you are talking about lab-critical calibration (that is, the colours on one monitor need to exactly match the colours on another, and both need to match an external reference as closely as possible) then a hardware colorimeter (with its own appropriate software and test targets) is the only solution. Software solutions can only get you so far.
The technique you described is a common software-only solution, but it only sets the gamma curves on a single device. There is no control over the absolute brightness and contrast; you are merely ensuring that solid colours match their dithered equivalents. That's usually done after setting the brightness and contrast so that black is as black as it can be and white is as white as it can be, while you can still distinguish not-quite-black from black and not-quite-white from white. Each monitor, then, will be optimized for its own maximum colour gamut, but it will not necessarily match any other monitor in the shop (even monitors that are the same make and model will show some variation due to manufacturing tolerances and age/use). A hardware colorimeter will (usually) generate a custom colour profile for the device under test as it is at the time of testing, and there is generally an end-to-end solution built into the product (so your scanner, printer, and monitor are all as closely matched as they can be).
You will never get to an absolute end-to-end match in a complete system, but hardware will get you as close as you can get. Software alone can only get you to a local maximum for the device it's calibrating, independent of any other device.
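The usual output of that kind of visual gamma matching is a per-channel lookup table loaded into the video card. A minimal sketch of building such a ramp (the gamma values are whatever your measurement and target happen to be; on Windows a calibration tool would hand three such tables to SetDeviceGammaRamp):

    #include <array>
    #include <cmath>
    #include <cstdint>

    // Build a 256-entry, 16-bit gamma ramp for one channel.
    // measuredGamma: what the visual test says the display currently does.
    // targetGamma:   what you want it to behave like (e.g. 2.2).
    std::array<uint16_t, 256> BuildGammaRamp(double measuredGamma, double targetGamma) {
        std::array<uint16_t, 256> ramp{};
        const double exponent = targetGamma / measuredGamma;     // correction exponent
        for (int i = 0; i < 256; ++i) {
            double v = std::pow(i / 255.0, exponent);             // corrected 0..1 value
            ramp[i] = static_cast<uint16_t>(v * 65535.0 + 0.5);   // scale to 16-bit entry
        }
        return ramp;
    }
    // A calibration tool would load three such ramps (R, G, B) into the
    // video card LUT, e.g. via SetDeviceGammaRamp() on Windows.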
What you need to investigate are color profiles.
Wikipedia has some good articles on this:
https://en.wikipedia.org/wiki/Color_management
https://en.wikipedia.org/wiki/ICC_profile
The basic thing you need is the color profile of the display on which the color was seen. Then, with the color profile of display #2, you can take the original color and convert it into a color that will look as close as possible (depending on what colors the second display device can actually represent).
Color profiles are platform independent and many modern frameworks support them directly.
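Conceptually, that conversion goes through a device-independent space: linearize with display #1's tone curve, map to XYZ with display #1's matrix, then go back through display #2's inverse matrix and tone curve. A rough sketch, assuming simple matrix/gamma-style profiles (the matrices and gamma values would be read from the two ICC profiles; nothing is hard-coded here):

    #include <array>
    #include <cmath>

    using Vec3 = std::array<double, 3>;
    using Mat3 = std::array<std::array<double, 3>, 3>;

    // Parameters that would come from each display's (matrix/gamma style) ICC profile.
    struct DisplayProfile {
        Mat3   rgbToXYZ;   // linear RGB -> CIE XYZ for this display
        Mat3   xyzToRGB;   // the inverse matrix
        double gamma;      // simple power-law tone curve
    };

    static Vec3 Mul(const Mat3& m, const Vec3& v) {
        return { m[0][0]*v[0] + m[0][1]*v[1] + m[0][2]*v[2],
                 m[1][0]*v[0] + m[1][1]*v[1] + m[1][2]*v[2],
                 m[2][0]*v[0] + m[2][1]*v[1] + m[2][2]*v[2] };
    }

    // Convert a 0..1 RGB colour as seen on display #1 into the RGB values that
    // display #2 should be given so it looks as close as possible.
    Vec3 ConvertBetweenDisplays(Vec3 rgb, const DisplayProfile& src, const DisplayProfile& dst) {
        for (double& c : rgb) c = std::pow(c, src.gamma);    // undo source tone curve
        Vec3 xyz = Mul(src.rgbToXYZ, rgb);                   // device-independent XYZ
        Vec3 out = Mul(dst.xyzToRGB, xyz);                   // into destination primaries
        for (double& c : out) {
            if (c < 0.0) c = 0.0; else if (c > 1.0) c = 1.0; // crude gamut clipping
            c = std::pow(c, 1.0 / dst.gamma);                // re-apply destination curve
        }
        return out;
    }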
You may be interested in reading about how Apple has dealt with this issue:
Color Programming Topics
https://developer.apple.com/library/archive/documentation/Cocoa/Conceptual/DrawColor/DrawColor.html
You'd have to allow or ask the individual users to calibrate their monitors. But there's enough variation across monitors - particularly between models and brands - that trying to implement a "silver bullet" solution is basically impossible.
As @Matt Ball observes, calibrating your monitors is what you are trying to do. Here's one way to do it without specialised hardware or software. For "roughly the same", visual calibration against a reference image is likely to be adequate.
Getting multiple monitors of varying quality/brand/capabilities to render a given image the same way is simply not possible.
If you have complete control over the monitor, video card, calibration hardware/software, and lighting used, then you have a shot. But that's only if you are in complete control of the desktop and the environment.
Assuming you are just accounting for LCDs, they are built with different types of panels with a host of different capabilities. Brightness is just one factor (albeit a big one). Another is simply the number of colors they are capable of rendering.
Beyond that, there is the environment that the monitor is in. Even assuming the same brand of monitor and the same calibration points, a person will perceive a different color if an overhead fluorescent is used versus an incandescent placed next to the monitor itself. At one place I worked, we had to shut off all the overheads and provide exact lamp placement for the graphic artists. Picky picky. ;)
I assume that you have no control over the hardware used, each user has a different brand and model monitor.
You also have no control over operating system color profiles.
An extravagant solution would be to display a test picture or pattern, and ask your users to take a picture of it using their mobile or webcam.
Download the picture to the computer, and check whether its levels are valid or too out of range.
This will also ensure the ambient light at the office is appropriate.
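A rough sketch of the "check whether its levels are valid" step, assuming the photographed test picture has been decoded into an 8-bit RGB buffer (the thresholds are purely illustrative):

    #include <cstddef>
    #include <cstdint>

    // Very rough sanity check on a photographed test pattern: if the darkest
    // patches don't read dark or the brightest don't read bright, the screen
    // (or the room lighting) is badly off. Thresholds are illustrative.
    bool LevelsLookReasonable(const uint8_t* rgb, size_t pixelCount) {
        uint8_t minV = 255, maxV = 0;
        for (size_t i = 0; i < pixelCount * 3; ++i) {
            if (rgb[i] < minV) minV = rgb[i];
            if (rgb[i] > maxV) maxV = rgb[i];
        }
        return minV < 40 && maxV > 215;
    }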

Achieving Colour Consistency Across Different Monitors

I have an SWF file with only vector illustrations in it (no bitmaps). Is there a way to improve colour consistency across different monitors?
Colour management is a very complex topic and the more I read about it the more confused I become. There's this thing called ICC profiles which are supposed to convert colours into device independent color spaces, but of what use is that?
ICC profiles provide a way to map the colors that your monitor thinks it's showing (the bitmap/image, or other graphics) to what it is actually outputting on the panel. Using software that supports these profiles, you can get more consistent colors.
The basic flow is this:
A program reads the graphics file
The program uses the ICC profile to compensate for your monitor's inadequacies
When you change monitor, you change the ICC profile to match the new monitor
When you print, you use a different ICC profile suitable for the printer, to compensate for the printer's inadequacies
This is meant to make sure that the colors on screen match the printed paper and is generally not something that scales beyond artsy stuff.
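In practice the "compensate using the profile" step is normally delegated to a colour-management library rather than done by hand. A sketch using LittleCMS (lcms2), assuming you have ICC profiles for two monitors saved to disk (the file names here are made up):

    #include <lcms2.h>
    #include <cstdint>

    // Transform one 8-bit RGB pixel from the colour space described by the
    // first monitor's profile into the values the second monitor needs to
    // show (roughly) the same colour. File names are placeholders.
    bool ConvertOnePixel(const uint8_t in[3], uint8_t out[3]) {
        cmsHPROFILE src = cmsOpenProfileFromFile("monitor1.icc", "r");
        cmsHPROFILE dst = cmsOpenProfileFromFile("monitor2.icc", "r");
        if (!src || !dst) {
            if (src) cmsCloseProfile(src);
            if (dst) cmsCloseProfile(dst);
            return false;
        }

        cmsHTRANSFORM xform = cmsCreateTransform(src, TYPE_RGB_8,
                                                 dst, TYPE_RGB_8,
                                                 INTENT_PERCEPTUAL, 0);
        bool ok = false;
        if (xform) {
            cmsDoTransform(xform, in, out, 1);   // 1 pixel
            cmsDeleteTransform(xform);
            ok = true;
        }
        cmsCloseProfile(src);
        cmsCloseProfile(dst);
        return ok;
    }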
If you want consistency among your own monitors you would "just" have to calibrate them and configure the profiles for your monitors. I don't know how to do this, but my guess is that Adobe has pretty good docs about it.
If you want something like consistent colors in, say, a Flash game across different users, I don't think that is possible. In any case, it would be the client's job to manage the ICC profile and the Flash player's job to support the compensation.
In any case, the part about adjusting the monitor settings before doing the calibration is because this changes the color reproduction of the display, so if you change the settings you will have to re-calibrate the display.
The problem is you have no control over your user's monitor (type, make, age, adjustment).
ICC profiles are designed to interpret between a real-world device (like a camera, monitor or printer) and an independent working colour space (see here for an explanation).
Flash 10 "supports ICC-profiles" only in the sense that you can specify whether or not flash should adjust it's colours according to the local ICC profile (chosen by the user to suit their monitor). So the most you can do is set stage.colorCorrection = ColorCorrection.ON; (and it won't work for Unix or Linux).
Otherwise, you could consider making the colours shown in your SWF file user-configurable: users can then adjust things to their own liking, perhaps via some form of colour calibration.

Resources