QGIS Reset Zero Position on ColourRamp for Positive and Negative Values

I'm using the Turbo colour ramp over several layers, each layer representing a different time stage of a tsunami moving away from an island. Because the heights of the tsunami waves are at different elevations in each layer, the colour ramp slides up or down from sea level in each layer to reflect this change in elevations. How do I set the colour ramp parameters so that the sea-level elevation is the same colour for each layer?


Raytracing and Computer Graphics. Color perception functions

Summary
This is a question about how to map light intensity values, as calculated in a raytracing model, to color values perceived by humans. I have built a ray tracing model, and found that including the inverse square law in the calculation of light intensities produces graphical results which I believe are unintuitive. I think this is partly to do with the limited range of brightness values available in 8-bit color images, but more likely that I should not be using a linear map between light intensity and pixel color.
Background
I developed a recent interest in creating computer graphics with raytracing techniques.
A basic raytracing model might work something like this (a minimal sketch in code follows the list):
Calculate ray vectors from the center of the camera (eye) in the direction of each screen pixel to be rendered
Perform vector collision tests with all objects in the world
If collision, make a record of the color of the object at the point where the collision occurs
Create a new vector from the collision point to the nearest light
Multiply the color of the light by the color of the object
This creates reasonable, but flat looking images, even when surface normals are included in the calculation.
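As a rough illustration of those steps, here is a minimal sketch (Python with numpy; the single hard-coded sphere, the light and all names are purely illustrative):

import numpy as np

WIDTH, HEIGHT = 64, 48
eye = np.array([0.0, 0.0, 0.0])
sphere_center = np.array([0.0, 0.0, -5.0])
sphere_radius = 1.0
sphere_color = np.array([1.0, 0.2, 0.2])
light_pos = np.array([5.0, 5.0, 0.0])
light_color = np.array([1.0, 1.0, 1.0])

def hit_sphere(origin, direction):
    # Return the distance along the (unit) direction to the sphere, or None.
    oc = origin - sphere_center
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - sphere_radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 0 else None

image = np.zeros((HEIGHT, WIDTH, 3))
for y in range(HEIGHT):
    for x in range(WIDTH):
        # 1. Ray from the eye through the current pixel on a simple image plane.
        u = (x - WIDTH / 2) / WIDTH
        v = (y - HEIGHT / 2) / WIDTH
        direction = np.array([u, v, -1.0])
        direction /= np.linalg.norm(direction)
        # 2. Collision test (only one object in this sketch).
        t = hit_sphere(eye, direction)
        if t is None:
            continue
        # 3. Record the collision point and the surface normal there.
        point = eye + t * direction
        normal = (point - sphere_center) / sphere_radius
        # 4. Vector from the collision point to the light.
        to_light = light_pos - point
        to_light /= np.linalg.norm(to_light)
        # 5. Multiply the light color by the object color, weighted by the normal.
        shade = max(np.dot(normal, to_light), 0.0)
        image[y, x] = sphere_color * light_color * shade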
Model Extensions
My interest was in trying to extend this model by including the distance into the light calculations.
If an object is lit by a light at distance d, then if the object is moved a distance 2d from the light source the illumination intensity is reduced by a factor of 4. This is the inverse square law.
It doesn't apply to all light models. (For example, light arriving from an infinitely distant source has an intensity independent of the object's position.)
From playing around with my code I have found that this inverse square law doesn't produce the realistic lighting I was hoping for.
For example, I built some initial objects for a model of a room/scene, to test things out.
There are some objects at a distance of 3-5 from the camera.
There are walls which form a boundary for the room, and I have placed them at distances of order 10 to 100 from the camera.
There are some lights, distance of order 10 from the camera.
What I have found is this:
If the boundary of the room is more than distance 10 from the camera, the color values are very dim.
If the boundary of the room is a distance 100 from the camera it is completely invisible.
This doesn't match up with what I would expect intuitively. It makes sense mathematically, as I am using a linear function to translate between color intensity and RGB pixel values.
Discussion
Moving an object from a distance 10 to a distance 100 reduces the color intensity by a factor of (100/10)^2 = 100. Since pixel RGB colors are in the range of 0 - 255, clearly a factor of 100 is significant and would explain why an object at distance 10 moved to distance 100 becomes completely invisible.
However, I suspect that the human perception of color is non-linear in some way, and I assume this is a problem which has already been solved in computer graphics. (Otherwise raytracing engines wouldn't work.)
My guess would be there is some kind of color perception function which describes how absolute light intensities should be mapped to human perception of light intensity / color.
Does anyone know anything about this problem or can point me in the right direction?
If an object is lit by a light at distance d, then if the object is moved a distance 2d from the light source the illumination intensity is reduced by a factor of 4. This is the inverse square law.
The physical quantity you're describing here is not intensity, but radiant flux. For a discussion of radiometric concepts in the context of ray tracing, see Chapter 5.4 of Physically Based Rendering.
If the boundary of the room is more than distance 10 from the camera, the color values are very dim.
If the boundary of the room is a distance 100 from the camera it is completely invisible.
The inverse square law can be a useful first approximation for point lights in a ray tracer (before more accurate lighting models are implemented). The key point is that the law - radiant flux falling off by the square of the distance - applies only to the light from the point source to a surface, not to the light that's then reflected from the surface to the camera.
In other words, moving the camera back from the scene shouldn't reduce the brightness of the objects in the rendered image; it should only reduce their size.
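A minimal sketch of that point, assuming a single point light and linear RGB values (the function and its parameter names are illustrative only):

import numpy as np

def shade(point, normal, albedo, light_pos, light_power):
    # Shade a surface point lit by one point light.
    # The inverse-square falloff uses only the light-to-surface distance;
    # the camera-to-surface distance never enters the calculation, so moving
    # the camera back leaves the shaded color unchanged.
    to_light = light_pos - point
    dist_sq = float(np.dot(to_light, to_light))
    to_light = to_light / np.sqrt(dist_sq)
    cos_theta = max(float(np.dot(normal, to_light)), 0.0)
    return albedo * light_power * cos_theta / dist_sq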

gnuplot scale plot function to same height

I am drawing distribution curves of three different datasets.
They have different means and standard deviations, and thus different curves. However, when plotted in the same graph the curves come out at very different heights.
I use the normal curve function:
std_b=0.1674
mu_b=.6058
mu_j=0.8955
std_j=0.0373
mu_s=0.9330
std_s=0.0240
normal(x,mu,sd) = (1/(sd*sqrt(2*pi)))*exp(-(x-mu)**2/(2*sd**2))
plot normal(x,mu_b,std_b) w boxes title "Boolean",\
normal(x,mu_j,std_j) w boxes title "Jaccard",\
normal(x,mu_s,std_s) w boxes title "Sorensen"
However, the scale of the curves is off, as seen from the difference on the Y axis.
How can I scale each plot function, so that they are all at the same Y height?
In general, you can't.
These are probability density functions, which means that they must be positive and they must have an area of exactly 1 under the curve (the formal definition is a little more technical, but that is the statistics 101 definition). Because of that, when you make the curve less spread out (which is what the standard deviation is measuring), in order to preserve the area, you must make the peak in the middle higher.
If it helps to visualize it, think of a finite distribution in the shape of an isosceles triangle.
Both the purple and green triangles form perfectly valid probability distributions. In the case of the purple distribution, it has a base of length 10 (from 0 to 10) and a height of 1/5, giving an area of 1. If I want to make it cover a smaller range (which again is basically what the standard deviation is doing in your normal curves), I push the sides together (in this case a length of 6 - from 2 to 8), but in order to preserve the area of 1, I have to make the triangle taller (in this case a height of 1/3). If I kept the same height, I would have less than an area of 1.
In your normal distributions, the y height is controlled by the scale factor in front of the exponential function. Getting rid of that, or setting it to be the same for each curve, will make them have the same height, but they will no longer be probability distributions, as the area will not be 1. In general, for a normal distribution, the smaller the standard deviation, the taller the peak.
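As a quick numeric check of those peak heights (Python rather than gnuplot, using the standard deviations from the question):

import math

# The peak of a normal density is 1 / (sd * sqrt(2*pi)).
for name, sd in [("Boolean", 0.1674), ("Jaccard", 0.0373), ("Sorensen", 0.0240)]:
    peak = 1.0 / (sd * math.sqrt(2.0 * math.pi))
    print(name, round(peak, 2))
# Prints roughly: Boolean 2.38, Jaccard 10.7, Sorensen 16.62

Multiplying each plotted function by its own sd*sqrt(2*pi) therefore rescales every peak to exactly 1, at the cost of the curves no longer being densities.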

What units should I use for lighting in a 3D rendering engine?

In reading academic papers on rendering, graphics processing, lighting, etc..., I am finding a wide variety of units mentioned and used.
For example, Bruneton's Atmospheric Scattering paper seems to use candelas per square meter (cd/m^2), representing luminance. However, Jensen's Night Model uses watts per square meter (W/m^2), implying irradiance. Other papers mention irradiance, luminance, illuminance, etc., with seemingly no common representation of the lighting calculations used. How then can one even be sure that, in implementing all of these papers, the calculations will "play well" together?
To add to the confusion, most papers on adaptive tonemapping seem to forgo units altogether, merely recommending that pixels (referred to as luminance) be rendered in a log format (decibels). A decibel is useless without a reference intensity/power.
This raises the question: what unit does a single pixel represent? When I calculate the "average luminance" of a scene by averaging the log-brightness of the pixels, what exactly am I calculating? The term "luminance" itself implies an area being illuminated and a solid angle for the source. This leads to two more questions: "What is the solid angle of the point source?" "What is the area of a pixel?"
My question is thus,
What units should lighting in a 3d graphics engine be represented in to allow for proper, calibrated brightness control across a wide variety of light sources, from faint starlight to bright sunlight, and how does this unit relate to the brightness of individual pixels?
Briefly: luminance, measured in candela per square meter (cd/m^2), is the appropriate unit.
Less briefly: computer graphics is usually concerned with what things should look like to people. The units that describe this are:
"luminous flux" is measured in lumens, or lm, which are defined proportional to the total radiated power (in watts) of light at a particular wavelength.
"luminous intensity" is measured in candela, or cd, which can be defined as lumens per steradian (lm/sr).
Intuitively, when you spread the same amount of energy over a larger area, it becomes proportionately less bright. This yields two more quantities:
"irradiance" is the luminous flux per unit area. It is measured in lm/m^2, and is proportional to W/m^2.
"radiance" is the luminous intensity per unit area. It is measured in cd/m^2, or lm/(sr.m^2).
Now, to answer your questions:
Each pixel represents a finite amount of solid angle from the camera, which is measured in steradians. In the context of your question, the relevant area is the area of the object being rendered.
The luminance (measured in cd/m^2) represents surface brightness, and has the unique property that it is invariant along any unobstructed path of observation (which makes it the most appropriate quantity for a rendering engine). The color of each pixel represents the average luminance over the solid angle occupied by that pixel.
Note that, by definition, a point source doesn't occupy any solid angle; its luminance is technically infinite. The illuminance it produces is finite, though, and technically it should only contribute a finite (if potentially large) effect to a given pixel value. In any case, if you want to directly render point sources, you will need to treat them differently from area sources, and deal with the problem that quantities other than luminance are not invariant over a given ray...
When Jensen et al's paper "A Physically-Based Night Sky Model" uses an irradiance-related W/m^2 in a table of various sources of illumination, I would guess that their intention was to describe their relative contribution as averaged over the entire night sky, as abstracted from any visual details.
Finally, note that truly physically based models need to integrate over the observable spectrum in order to evaluate brightness and color. Thus, such models must start out with a distribution of watts over visible wavelengths, and use the standard human colorimetric model to evaluate lumens.
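For the lumen part of that last point, luminous flux is obtained by weighting the spectral power distribution with the CIE luminous efficiency function V(lambda) and scaling by 683 lm/W. A minimal sketch (the Gaussian below is only a crude stand-in for the tabulated CIE data):

import numpy as np

wavelengths = np.arange(380.0, 781.0, 1.0)   # visible range, 1 nm bins

# Crude stand-in for the CIE photopic luminous efficiency function V(lambda):
# a Gaussian peaking at 555 nm. Real code should use the tabulated CIE values.
V = np.exp(-0.5 * ((wavelengths - 555.0) / 45.0) ** 2)

# Hypothetical spectral radiant flux of a source, in W/nm (flat here).
phi_e = np.full_like(wavelengths, 0.01)

# Luminous flux in lumens: 683 lm/W at the peak of V(lambda); 1.0 is the bin width in nm.
phi_v = 683.0 * np.sum(V * phi_e) * 1.0
print(phi_v)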
The SI unit for brightness (luminance) is the candela per square metre, so if you're wanting to represent actual physical quantities it would be hard to argue against using that. As for how this unit relates to the brightness of an individual pixel: that would be a function of the brightness at that part of the illumination source represented in the pixel's viewing area, combined with contributions from elsewhere in the scene as calculated by the engine; presumably this would vary completely depending on the renderer.

DirectWrite'ing glyphs such that the em square has a specific size

I'm working on an application that renders music notation. The musical symbols are specified in regular font files, which use the convention that the height of the em square corresponds to the height of a regular five-line staff of music. For example, the glyph for a note head is approximately 0.25 em high, the distance between two lines of the staff.
When it comes to rendering, I use a coordinate system in which 4 units correspond to the height of a five-line staff of music. Therefore, I need to render glyphs such that the em square ends up rendered 4 units high. However, DirectWrite only allows specifying text size in device independent pixels (DIPs), and I'm confused about how to juggle between the coordinate systems. There are two parts to this:
From a given font size in DIPs I can compute a height in physical pixels, but what is mapped to that height? The em square or some other design-space metric?
What if I'm using some arbitrary transformation matrix? How do I specify DIPs in order to get meaningful values in the coordinate system I am using?
And for good measure:
If I get this to work, is this going to mess up font hinting because my DIP values don't have a clear relationship to physical pixels?
After some more experimentation and research, I have come to the following conclusions.
The font size specifies the size of the EM square as drawn. Drawing at 12 DIPs means that the EM square is scaled to use 12 DIPs of vertical space.
The top Y coordinate of the layoutRect parameter of the ID2D1RenderTarget::DrawText function is mapped to the top of the font's ascent (for the first line of text).
The identity matrix gives a coordinate system in which (0, 0) is the top-left and (width, height), as retrieved from ID2D1RenderTarget::GetSize, is the bottom-right, in DIPs. This means that, for any transformation matrix, the font size unit should match the unit of the render target's coordinate system: a vertical line of 42 units will be as high as the EM square drawn with a font size of 42 units.
I was unable to find information about the effect of arbitrary transformations on font hinting, however.
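Putting the first conclusion into numbers (the glyph height and units-per-em figures below are purely illustrative; a real font's values come from its design metrics):

# Hypothetical glyph: a note head 512 design units tall in a font with
# 2048 design units per em, drawn with the "font size" set to 4 coordinate units.
design_units_per_em = 2048
glyph_height_design_units = 512
em_size = 4.0   # font size expressed in the render target's units

rendered_height = glyph_height_design_units / design_units_per_em * em_size
print(rendered_height)   # 1.0 unit, i.e. one staff space of the 4-unit staff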

Differentiate table line from big letters

I'm doing some graphics processing, and I have logic in which I take a bitmap of edges and disregard all table edges, keeping only the letters. E.g.
0000000000
0111111110
0100000010
0102220010
0100200010
0100200010
0100000010
0111111110
0000000000
0 - background color
1 - ignored edges
2 - edges I need
My logic is simple: if a number of continuous pixels exceeds a certain threshold, e.g. 20 pixels of continuous edges, it is considered a line and disregarded.
My problem is that with big font sizes, letters such as H and T will definitely exceed the threshold. Please advise whether there is a better way, or additional logic I need to implement, in order to separate table lines from letters.
[update] Additional consideration: performance. This logic will be used during touch movement (dragging); it will be called a lot of times, so it needs to be fast.
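As a rough sketch of the run-length idea described above (plain Python over a list-of-lists bitmap; any non-zero value is treated as an edge pixel, and all names are illustrative):

def long_horizontal_runs(bitmap, threshold):
    # Yield (row, start_col, length) for horizontal runs of edge pixels
    # that are at least `threshold` pixels long.
    for y, row in enumerate(bitmap):
        run_start = None
        for x, value in enumerate(list(row) + [0]):   # sentinel closes a trailing run
            if value and run_start is None:
                run_start = x
            elif not value and run_start is not None:
                if x - run_start >= threshold:
                    yield y, run_start, x - run_start
                run_start = None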
If table lines are guaranteed to be thin, then ignore thick lines. However, if the lines in your application are generated by edge detection (whose output is always 1 pixel thin), then connected-component analysis will be needed.
Basically, the "thickness" refers to thickness measured from an edge profile:
00000000100000000 This line has thickness 1
00000011111000000 This line has thickness 5. However, this cannot occur in the output of edge detection, because edge detection algorithms are specifically designed to remove this condition.
00000000111111111 This is a transition from black to white.
Table lines usually have small thickness. Large fonts usually have transition from black to white because their thickness is larger than the edge profile window.
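A rough sketch of that thin-versus-thick test (plain Python over the same kind of bitmap; the names and the thickness limit are illustrative):

def vertical_thickness(bitmap, x, y):
    # Count the consecutive 'on' pixels in column x around row y.
    top = y
    while top > 0 and bitmap[top - 1][x]:
        top -= 1
    bottom = y
    while bottom < len(bitmap) - 1 and bitmap[bottom + 1][x]:
        bottom += 1
    return bottom - top + 1

def is_table_line(bitmap, y, start_col, length, max_thickness=2):
    # Treat a long horizontal run as a table line only if it is also thin.
    columns = range(start_col, start_col + length)
    return all(vertical_thickness(bitmap, x, y) <= max_thickness for x in columns)

Combined with the run-length check from the question, a run is then discarded only when it is both long and thin, which is the case for table lines but not for the thick strokes of large letters.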
