VTK - explain simple code

Can you explain the following simple code to me?
VolumeScalarOpacity->AddPoint(0.0, 0.0);
VolumeScalarOpacity->AddPoint(0.25, 0.0);
VolumeScalarOpacity->AddPoint(1.0, 0.1);
and
VolumeGradientOpacity->AddPoint(0.0, 0.0);
VolumeGradientOpacity->AddPoint(1.0, 0.0);
VolumeGradientOpacity->AddPoint(90.0, 0.1);
VolumeGradientOpacity->AddPoint(900.0, 0.5);
where VolumeScalarOpacity and VolumeGradientOpacity are of type vtkPiecewiseFunction ... I can't find these methods explained anywhere ...
I'm struggling to render a CT volume ... thank you.

A vtkPiecewiseFunction defines a 1D piecewise function. See this excerpt from the class documentation: vtkPiecewiseFunction Documentation
Defines a piecewise function mapping. This mapping allows the addition
of control points, and allows the user to control the function between
the control points. A piecewise hermite curve is used between control
points, based on the sharpness and midpoint parameters. A sharpness of
0 yields a piecewise linear function and a sharpness of 1 yields a
piecewise constant function. The midpoint is the normalized distance
between control points at which the curve reaches the median Y value.
The midpoint and sharpness values specified when adding a node are
used to control the transition to the next node (the last node's
values are ignored). Outside the range of nodes, the values are 0 if
Clamping is off, or the nearest node point if Clamping is on. Using
the legacy methods for adding points (which do not have Sharpness and
Midpoint parameters) will default to Midpoint = 0.5 (halfway between
the control points) and Sharpness = 0.0 (linear).
You seem to be using it for volume visualisation, and the code uses the legacy form of AddPoint. VolumeScalarOpacity controls the opacity assigned to the volume's scalar values: your code creates a function that evaluates to 0.0 for scalars from 0.0 up to 0.25 and then rises linearly to 0.1 for scalars from 0.25 up to 1.0.
If clamping is on, values bigger than 1.0 map to 0.1; otherwise they map to 0.0. VolumeGradientOpacity works the same way, except that its input is the gradient magnitude of the scalars rather than the scalar value itself, so it is typically used to emphasise boundaries in the data.
If you have trouble visualising your data, make sure your piecewise function has meaningful values across the entire scalar range of your data. Also make sure your opacity values are reasonable: 0.1 is not much and you may not see what you expect. Experiment with the values until they suit your needs.
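As a concrete illustration, here is a minimal sketch using VTK's Python bindings (the NIfTI reader and file name are placeholders for however you actually load your CT data) that spreads the opacity points over the data's real scalar range instead of hard-coding 0.0 to 1.0:

import vtk

reader = vtk.vtkNIFTIImageReader()          # placeholder: use whatever reader fits your data
reader.SetFileName("ct.nii")
reader.Update()
lo, hi = reader.GetOutput().GetScalarRange()

scalar_opacity = vtk.vtkPiecewiseFunction()
scalar_opacity.AddPoint(lo, 0.0)                     # background: fully transparent
scalar_opacity.AddPoint(lo + 0.25 * (hi - lo), 0.0)  # lower quarter of the range: still transparent
scalar_opacity.AddPoint(hi, 0.8)                     # densest material: mostly opaque
scalar_opacity.ClampingOn()                          # out-of-range scalars keep the end values

volume_property = vtk.vtkVolumeProperty()
volume_property.SetScalarOpacity(scalar_opacity)

CT data is usually stored in Hounsfield-like units spanning hundreds or thousands of scalar values, so opacity points placed only between 0.0 and 1.0 cover almost none of that range, which could explain an apparently empty rendering.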

Related

SVG: Bezier Curves that start at a different point than the origin point used to compute the path

I'm trying to draw a path from one foreignObject to another.
I'd like the path to be oriented / set according to the centre of each object, but only start some distance away from the object. For example, here are straight-lined paths from one object to two different objects: notice that the starting point for the paths is not the same; rather, it has been adjusted to lie on the line connecting the two objects.
If the path is a straight line, this is easy enough to achieve: simply start and end
the path at a displacement of Δr along the straight line defined by the centre points of the objects.
However, I'm not sure how one would achieve this, in the case of a Bezier curve (quadratic or cubic).
If it were possible to make part of the path transparent (i.e. set the stroke colour for different parts of the path), then one could just use the centre points and set the first Δs px to transparent; however, I'm not aware of any way of doing this.
Alternatively, if it were possible to set the start and end points of a path independently of the points used to compute the path, this would address all cases (linear, Bézier quadratic or cubic).
Another option might be to use the dash-array property, but this would require knowing the length of the path. e.g. if the path length is S, then setting the dash-array to x-calc(S-2x)-x would also achieve the desired result.
Is there any way of achieving this programmatically?
I don't mind legwork, so even just pointers in the right direction would be appreciated.
Here's an idea: use the de Casteljau algorithm twice to trim off the beginning and the end portions of your curve.
Say you were asked to evaluate a cubic Bézier curve defined by the control points C_{0,0}, C_{1,0}, C_{2,0} and C_{3,0} at a particular parameter t between 0 and 1. (I assume that the parameter interval of the curve is [0,1] and I give the control points such strange names in anticipation of what follows. Have a look at the Wikipedia article if you work with a curve degree different from 3.)
You would proceed as follows:
for j = 1, ..., 3
    for i = 0, ..., 3 - j
        C_{i, j} = (1 - t) * C_{i, j-1} + t * C_{i+1, j-1}
The point C_{0, 3} is the value of your curve at the parameter value t. The whole thing is much easier to understand with a picture (I took t=0.5):
However, the algorithm gives you more information. For instance, the control points C_{0,0}, C_{0,1}, C_{0,2} and C_{0,3} are the control polygon of a curve which is equal to your curve restricted to the interval [0, t]; similarly, C_{0,3}, C_{1,2}, C_{2,1} and C_{3,0} give you a Bézier curve equal to your curve restricted to [t, 1]. This means that you can use the de Casteljau algorithm to divide your curve in two at a prescribed parameter value.
In your case, you would:
Start with the curve you show in the bottom picture.
Use the de Casteljau algorithm to split your curve at a parameter t_0 close to 0 (I would start with t_0 = 0.1 and see what happens).
Throw away the part defined on [0, t_0] and keep only that defined on [t_0, 1].
Take a parameter t_1 close to 1 and split the remaining part from step 3.
Keep the beginning and throw away the (short) end.
Note that this way you will be splitting your curve according to parameter values and not based on its length. If your curves are similar in shape, this is not a problem, but if they differ significantly, you might have to invest some effort in finding suitable values of t_0 and t_1 programmatically.
Another issue is the choice of t_1. I suppose that due to symmetry, you would want to split your curve into [0, t_0], [t_0, 1 - t_0], [1 - t_0, 1]. Taking t_1 = 1 - t_0 would not do, because t_1 refers to the parameter interval of the result of step 3, and that is again [0, 1]! Instead, you need 1 - t_0 re-mapped into that new interval, which gives t_1 = (1 - 2 * t_0) / (1 - t_0).
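Here is a compact sketch of both operations in Python (nothing library-specific; control points are plain coordinate tuples, so a cubic is a list of four of them):

def de_casteljau_split(points, t):
    # one run of de Casteljau at parameter t; returns the control points of the
    # [0, t] part and of the [t, 1] part of the curve
    left, right = [], []
    while points:
        left.append(points[0])
        right.append(points[-1])
        points = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
                  for p, q in zip(points, points[1:])]
    return left, right[::-1]

def trim(points, t0, t1):
    # keep only the part of the curve on the original parameter interval [t0, t1]
    _, rest = de_casteljau_split(points, t0)       # throw away [0, t0]
    t1_local = (t1 - t0) / (1 - t0)                # re-map t1 into the new [0, 1]
    kept, _ = de_casteljau_split(rest, t1_local)   # throw away [t1, 1]
    return kept

# e.g. trim([(0, 0), (10, 40), (60, 40), (80, 0)], 0.1, 0.9)
# keeps the middle of the curve (in parameter, not arc length).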
A basic solution using layers:
Layer 1: paths;
Layer 2: elements coloured the same as the background which are slightly larger than the elements layered on top of them (for the emoji, I used circles, for text, I used the same text with a larger pt-size);
Layer 3: the elements themselves.
Pros & Cons
Pros:
quick;
easy;
Cons:
can't use markers (because they are masked by the background-coloured objects);
may require some tinkering to find the most appropriate type of background element;
Here is a screenshot of a sample output:

Can camera.lookAt function be calculated from an angle and an axis of rotation, given a target point and the camera forward?

I am trying to understand three.js's camera.lookAt function, and to implement my own.
I'm using eye = camera position, target = target look-at point, and up is always (0, 1, 0). A friend proposed that the obvious way to rotate a camera to look at a point in space would be to get the desired forward by calculating target - eye, compute the angle between the camera's forward vector (0, 0, -1) and the desired forward (using the atan2 method described in these answers), and use that as the angle of rotation. I would find the axis of rotation by computing the cross product of the forward vector and the desired forward, and then use a function like setFromAxisAngle to get the resulting quaternion.
Tried to draw it here:
Would this work in theory? When testing it against the canonical lookAt method, which uses eye, up, and target and does z = (eye - target), x = Cross(up, z), y = Cross(x, z) -- (also, why is it eye - target instead of target - eye?) -- I see small (< 0.1) differences.
I personally think the implementation of three.js's Matrix4.lookAt() method is somewhat confusing. It's also in the wrong class; it should be placed in Matrix3. Anyway, a clearer implementation can be found in MathGeoLib, a C++ library for linear algebra and geometry manipulation for computer graphics.
https://github.com/juj/MathGeoLib/blob/ae6dc5e9b1ec83429af3b3ba17a7d61a046d3400/src/Math/float3x3.h#L167-L198
https://github.com/juj/MathGeoLib/blob/ae6dc5e9b1ec83429af3b3ba17a7d61a046d3400/src/Math/float3x3.cpp#L617-L678
A lookAt() method should first build an orthonormal linear basis A (localRight, localUp, localForward) for the object's local space. Then it builds an orthonormal linear basis B (worldRight, worldUp, targetDirection) for the desired target orientation. The primary task of lookAt() is to map from basis A to B. This is done by multiplying m1 (basis B) with the inverse of m2 (basis A). Since m2 is orthonormal, its inverse is computed by a simple transpose.
m1.makeBasis( worldRight, worldUp, targetDirection );
m2.makeBasis( localRight, localUp, localForward );
this.multiplyMatrices( m1, m2.transpose() );
this refers to an instance of a 3x3 matrix class.
I suggest you carefully study the well-documented C++ code in order to understand each single step of the method.
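If it helps, here is a rough numpy sketch of that basis-mapping idea (not the actual three.js or MathGeoLib code; the degenerate case where up is parallel to the view direction is not handled):

import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def look_at_rotation(eye, target,
                     up=np.array([0.0, 1.0, 0.0]),
                     local_forward=np.array([0.0, 0.0, -1.0]),
                     local_up=np.array([0.0, 1.0, 0.0])):
    # Basis B: the desired orientation in world space.
    target_direction = normalize(np.asarray(target, float) - np.asarray(eye, float))
    world_right = normalize(np.cross(up, target_direction))
    world_up = np.cross(target_direction, world_right)

    # Basis A: the object's local orientation (camera looks down -Z).
    local_right = np.cross(local_up, local_forward)

    m1 = np.column_stack([world_right, world_up, target_direction])  # basis B
    m2 = np.column_stack([local_right, local_up, local_forward])     # basis A
    # Map basis A onto basis B; m2 is orthonormal, so its inverse is its transpose.
    return m1 @ m2.T

# Example: R = look_at_rotation(np.array([0.0, 0.0, 5.0]), np.zeros(3));
# R @ local_forward then points from the eye towards the target.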

How to tell if an xyY color lies within the CIE 1931 gamut?

I'm trying to plot the CIE 1931 color gamut using math.
I take an xyY color with Y fixed to 1.0, then vary x and y from 0.0 to 1.0.
If I plot the resulting colors as an image (ie. the pixel at (x,y) is my xyY color converted to RGB) I get a pretty picture with the CIE 1931 color gamut somewhere in the middle of it, like this:
xyY from 0.0 to 1.0:
Now I want the classic tongue-shaped image so my question is: How do I cull pixels outside the range of the CIE 1931 color gamut?
ie. How can I tell if my xyY color is inside/outside the CIE 1931 color range?
I happened upon this question while searching for a slightly different but related issue, and what immediately caught my eye is the rendering at the top. It's identical to the rendering I had produced a few hours earlier, and trying to figure out why it didn't make sense is, in part, what led me here.
For readers: the rendering is what results when you convert from {x ∈ [0, 1], y ∈ [0, 1], Y = 1} to XYZ, convert that color to sRGB, and then clamp the individual components to [0, 1].
At first glance, it looks OK. At second glance, it looks off... it seems less saturated than expected, and there are visible transition lines at odd angles. Upon closer inspection, it becomes clear that the primaries aren't smoothly transitioning into each other. Much of the range, for example, between red and blue is just magenta—both R and B are 100% for almost the entire distance between them. When you then add a check to skip drawing any colors that have an out-of-range component, instead of clamping, everything disappears. It's all out-of-gamut. So what's going on?
I think I've got this one small part of colorimetry at least 80% figured out, so I'm setting this out, greatly simplified, for the edification of anyone else who might find it interesting or useful. I also try to answer the question.
(⚠️ Before I begin, an important note: valid RGB display colors in the xyY space can be outside the boundary of the CIE 1931 2° Standard Observer. This isn't the case for sRGB, but it is the case for Display P3, Rec. 2020, CIE RGB, and other wide gamuts. This is because the three primaries need to add up to the white point all by themselves, and so even monochromatic primaries must be incredibly, unnaturally luminous compared to the same wavelength under equivalent illumination.)
Coloring the chromaticity diagram
The xy chromaticity diagram isn't just a slice through xyY space. It's intrinsically two dimensional. A point in the xy plane represents chromaticity apart from luminance, so to the extent that a color is shown there, it is meant to represent, as well as possible, only the chromaticity, not any specific color. Normally the colors shown are the brightest, most saturated colors for that chromaticity, or whatever's closest in the display's color space, but that's an arbitrary design decision.
Which is to say: to the extent that there are illustrative colors drawn they're necessarily fictitious, in much the same way that coloring an electoral map is purely a matter of data visualization: a convenience to aid comprehension. It's just that, in this case, we're using colors to visualize one aspect of colorimetry, so it's super easy to conflate the two things.
(Image credit: Michael Horvath)
The falsity, and necessity thereof, of the colors becomes obvious when we consider the full 3D shape of the visible spectrum in the xyY space. The classic spectral locus ("horse shoe") can easily be seen to be the base of a quasi-Gibraltian volume, widest at the spectral locus and narrowing to a summit (the white point) at {Y = 1}. If viewed as a top-down projection, then colors located on and near the spectral locus would be very dark (although still the brightest possible color for that chromaticity), and would grow increasingly luminous towards the center. If viewed as a slice of the xyY volume, through a particular value of Y, the colors would be equally luminous but would grow brighter overall and the shape of the boundary would shrink, again unevenly, with increasing Y, until it disappeared entirely. So far as I can tell, neither of these possibilities sees much, if any, practical use, interesting though they may be.
Instead, the diagram is colored inside out: the gamut being plotted is colored with maximum intensities (each primary at its brightest, and then linear mixtures in the interior) and out-of-gamut colors are projected from the inner gamut triangle to the spectral locus. This is annoying because you can't simply use a matrix transformation to turn a point on the xy plane into a sensible color, but in terms of actually communicating useful and somewhat accurate information it seems, unfortunately, to be unavoidable.
(To clarify: it is actually possible to move a single chromaticity point into the sRGB space, and color the chromaticity diagram pixel-by-pixel with the most brightly saturated sRGB colors possible—it's just more complicated than a simple matrix transformation. To do so, first move the three-coordinate xyz chromaticity into sRGB. Then clamp any negative values to 0. Finally, scale the components uniformly such that the maximum component value is 1. Be aware this can be much slower than plotting the whitepoint and the primaries and then interpolating between them, depending on your rendering method and the efficiency of your data representations and their operations.)
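A sketch of that per-pixel procedure (Python/numpy, using the standard XYZ-to-linear-sRGB matrix; gamma encoding is left out for brevity):

import numpy as np

# XYZ -> linear sRGB (D65)
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

def chromaticity_to_display_rgb(x, y):
    # treat (x, y) as a chromaticity, move it into sRGB, clamp negatives,
    # then scale so the largest component is 1 (the brightest colour of that chromaticity)
    xyz = np.array([x, y, 1.0 - x - y])     # z = 1 - x - y
    rgb = XYZ_TO_SRGB @ xyz
    rgb = np.clip(rgb, 0.0, None)           # out-of-gamut components clamped to 0
    m = rgb.max()
    return rgb / m if m > 0 else rgb        # normalise so the max component is 1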
Drawing the spectral locus
The most straightforward way to get the characteristic horseshoe shape is just to use a table of the empirical data.
(http://cvrl.ioo.ucl.ac.uk/index.htm, scroll down for the "historical" datasets that will most closely match other sources intended for the layperson. Their too-clever icon scheme for selecting data is that a dotted-line icon is for data sampled at 5nm, a solid line icon is for data sampled at 1nm.)
Construct a path with the points as vertices (you might want to trim some off the top, I cut it back to 700nm, the CIERGB red primary), and use the resulting shape as a mask. With 1nm samples, a polyline should be smooth enough for near any resolution: there's no need for fitting bezier curves or whatnot.
(Note: only every 5th point shown for illustrative purposes.)
If all we want to do is draw the standard horse shoe bounded by the triangle {x = 0, y = 0}, {0, 1}, and {1, 0} then that should suffice. Note that we can save rendering time by skipping any coordinates where x + y >= 1. If we want to do more complex things, like plot the changing boundary for different Y values, then we're talking about the color matching functions that define the XYZ space.
Color matching functions
(Image credit: User:Acdx - Own work, CC BY-SA 4.0)
The ground truth for the XYZ space is in the form of three functions that map spectral power distributions to {X, Y, Z} tristimulus values. A lot of data and calculations went into constructing the XYZ space, but it all gets baked into these three functions, which uniquely determine the {X, Y, Z} values for a given spectrum of light. In effect, what the functions do is define 3 imaginary primary colors, which can't be created with any actual light spectrum, but can be mixed together to create perceptible colors. Because they can be mixed, every non-negative point in the XYZ space is meaningful mathematically, but not every point corresponds to a real color.
The functions themselves are actually defined as lookup tables, not equations that can be calculated exactly. The Munsell Color Science Laboratory (https://www.rit.edu/science/munsell-color-lab) provides 1nm resolution samples: scroll down to "Useful Color Data" under "Educational Resources." Unfortunately, it's in Excel format. Other sources might provide 5nm data, and anything more precise than 1nm is probably a modern reconstruction which might not commute with the 1931 space.
(For interest: this paper—http://jcgt.org/published/0002/02/01/—provides analytic approximations with error within the variability of the original human subject data, but they're mostly intended for specific use cases. For our purposes, it's preferable, and simpler, to stick with the empirically sampled data.)
The functions are referred to as x̅, y̅, and z̅ (or x bar, y bar, and z bar.) Collectively, they're known as the CIE 1931 2 Degree Standard Observer. There's a separate 1964 standard observer constructed from a wider 10 degree field-of-view, with minor differences, which can be used instead of the 1931 standard observer, but which arguably creates a different color space. (The 1964 standard observer shouldn't be confused with the separate CIE 1964 color space.)
To calculate the tristimulus values, you take the inner product of (1) the spectrum of the color and (2) the color matching function. This just means that every point (or sample) in the spectrum is multiplied by the corresponding point (or sample) in the color matching function, which serves to reweight the data. Then, you take the integral (or summation, more accurately, since we're dealing with discrete samples) over the whole range of visible light ([360nm, 830nm].) The functions are normalized so that they have equal area under their curves, so an equal energy spectrum (the sampled value for every wavelength is the same) will have {X = Y = Z}. (FWIW, the Munsell Color Lab data are properly normalized, but they sum to 106 and change, for some reason.)
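In code the whole computation reduces to a weighted sum. A Python sketch (the xbar, ybar, zbar arrays are assumed to be loaded from one of the 1nm tables mentioned above, with the spectrum sampled on the same wavelengths):

import numpy as np

def tristimulus(spectrum, xbar, ybar, zbar):
    # inner product of the spectrum with each colour matching function,
    # normalised so that an equal-energy spectrum gives Y = 1
    k = 1.0 / ybar.sum()
    X = k * np.sum(spectrum * xbar)
    Y = k * np.sum(spectrum * ybar)
    Z = k * np.sum(spectrum * zbar)
    return X, Y, Z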
Taking another look at that 3D plot of the xyY space, we notice again that the familiar spectral locus shape seems to be the shape of the volume at {Y = 0}, i.e. where those colors are actually black. This now makes some sort of sense, since they are monochromatic colors, and their spectrums should consist of a single point, and thus when you take the integral over a single point you'll always get 0. However, that then raises the question: how do they have chromaticity at all, since the other two functions should also be 0?
The simplest explanation is that Y at the base of the shape is actually ever-so-slightly greater than zero. The use of sampling means that the spectrums for the monochromatic sources are not taken to be instantaneous values. Instead, they're narrow bands of the spectrum near their wavelengths. You can get arbitrarily close to instantaneous and still expect meaningful chromaticity, within the bounds of precision, so the limit as the sampling bandwidth goes to 0 is the ideal spectral locus, even if it disappears at exactly 0. However, the spectral locus as actually derived is just calculated from the single-sample values for the x̅, y̅, and z̅ color matching functions.
That means that you really just need one set of data—the lookup tables for x̅, y̅, and z̅. The spectral locus can be computed from each wavelength by just dividing x̅(wl) and y̅(wl) by x̅(wl) + y̅(wl) + z̅(wl).
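Reusing the same arrays (and numpy import) as in the snippet above, the locus is just a per-wavelength normalisation:

def spectral_locus(xbar, ybar, zbar):
    # chromaticity (x, y) of each monochromatic wavelength, straight from the CMF samples
    s = xbar + ybar + zbar
    return np.column_stack([xbar / s, ybar / s])   # shape (N, 2)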
(Image credit: Apple, screenshot from ColorSync Utility)
Sometimes you'll see a plot like this, with a dramatically arcing, rainbow-colored line swooping up and around the plot, and then back down to 0 at the far red end of the spectrum. This is just the y̅ function plotted along the spectral locus, scaled so that y̅ = Y. Note that this is not a contour of the 3D shape of the visible gamut. Such a contour would be well inside the spectral locus through the blue-green range, when plotted in 2 dimensions.
Delineating the visible spectrum in XYZ space
The final question becomes: given these three color matching functions, how do we use them to decide if a given {X, Y, Z} is within the gamut of human color perception?
Useful fact: you can't have luminosity by itself. Any real color will also have a non-zero value for one or both of the other functions. We also know Y by definition has a range of [0, 1], so we're really only talking about figuring whether {X, Z} is valid for a given Y.
Now the question becomes: what spectrums (simplified for our purposes: an array of 471 values, either 0 or 1, for the wavelengths [360nm, 830nm], band width 1nm), when weighted by y̅, will sum to Y?
The XYZ space is additive, like RGB, so any non-monochromatic light is equivalent to a linear combination of monochromatic colors at various intensities. In other words, any point inside of the spectral locus can be created by some combination of points situated exactly on the boundary. If you took the monochromatic CIE RGB primaries and just added up their tristimulus values, you'd get white, and the spectrum of that white would just be the spectrum of the three primaries superimposed, a thin band at the wavelength for each primary.
It follows, then, that every possible combination of monochromatic colors is within the gamut of human vision. However, there's a ton of overlap: different spectrums can produce the same perceived color. This is called metamerism. So, while it might be impractical to enumerate every possible individually perceptible color or spectrums that can produce them, it's actually relatively easy to calculate the overall shape of the space from a trivially enumerable set of spectrums.
What we do is step through the gamut wavelength-by-wavelength, and, for that given wavelength, we iteratively sum ever-larger slices of the spectrum starting from that point, until we either hit our Y target or run out of spectrum. You can picture this as going around a circle, drawing progressively larger arcs from one starting point and plotting the center of the resulting shape—when you get to an arc that is just the full circle, the centers coincide, and you get white, but until then the points you plot will spiral inward from the edge. Repeat that from every point on the circumference, and you'll have points spiraling in along every possible path, covering the gamut. You can actually see this spiraling in effect, sometimes, in 3D color space plots.
In practice, this takes the form of two loops, the outer loop going from 360 to 830, and the inner loop going from 1 to 470. In my implementation, what I did for the inner loop is save the current and last summed values, and once the sum exceeds the target I use the difference to calculate a fractional number of bands and push the outer loop's counter and that interpolated width onto an array, then break out of the inner loop. Interpolating the bands greatly smooths out the curves, especially in the prow.
Once we have the set of spectrums of the right luminance, we can calculate their X and Z values. For that, I have a higher order summation function that gets passed the function to sum and the interval. From there, the shape of the gamut on the chromaticity diagram for that Y is just the path formed by the derived {x, y} coordinates, as this method only enumerates the surface of the gamut, without interior points.
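A compressed sketch of that double loop (plain Python/numpy; cmf is assumed to be the 1nm colour matching table described earlier, as an (N, 4) array of wavelength, x̅, y̅, z̅ rows, and conveniences like the higher-order summation helper are omitted):

import numpy as np

def gamut_slice(cmf, Y_target):
    xbar, ybar, zbar = cmf[:, 1], cmf[:, 2], cmf[:, 3]
    k = 1.0 / ybar.sum()                 # equal-energy spectrum -> Y = 1
    points = []
    n = len(cmf)
    for start in range(n):               # outer loop: starting wavelength
        y_acc = 0.0
        for end in range(start, n):      # inner loop: ever-larger bands
            y_prev = y_acc
            y_acc += k * ybar[end]
            if y_acc >= Y_target:
                # interpolate a fractional final band so the sum hits Y_target exactly
                frac = (Y_target - y_prev) / (y_acc - y_prev)
                X = k * (xbar[start:end].sum() + frac * xbar[end])
                Z = k * (zbar[start:end].sum() + frac * zbar[end])
                s = X + Y_target + Z
                points.append((X / s, Y_target / s))
                break
        # bands that run out of spectrum before reaching Y_target contribute no point
    return points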
In effect, this is a simpler version of what libraries like the one mentioned in the accepted answer do: they create a 3D mesh via exhaustion of the continuous spectrum space and then interpolate between points to decide if an exact color is inside or outside the gamut. Yes, it's a pretty brute-force method, but it's simple, speedy, and effective enough for demonstrative and visualization purposes. Rendering a 20-step contour plot of the overall shape of the chromaticity space in a browser is effectively instantaneous, for instance, with nearly perfect curves.
There are a couple of places where a lack of precision can't be entirely smoothed over: in particular, two corners near orange are clipped. This is due to the shapes of the lines of partial sums in this region being a combination of (1) almost perfectly horizontal and (2) having a hard cusp at the corner. Since the points exactly at the cusp aren't at nice even values of Y, the flatness of the contours is more a problem because they're perpendicular to the mostly-vertical line of the cusp, so interpolating points to fit any given Y will be most pessimum in this region. Another problem is that the points aren't uniformly distributed, being concentrated very near to the cusp: the clipping of the corner corresponds to situations where an outlying point is interpolated. All these issues can clearly be seen in this plot (rendered with 20nm bins for clarity but, again, more precision doesn't eliminate the issue):
Conclusion
Of course, this is the sort of highly technical and pitfall-prone problem (PPP) that is often best outsourced to a quality 3rd party library. Knowing the basic techniques and science behind it, however, demystifies the entire process and helps us use those libraries effectively, and adapt our solutions as needs change.
You could use Colour and the colour.is_within_visible_spectrum definition:
>>> import numpy as np
>>> from colour import is_within_visible_spectrum
>>> is_within_visible_spectrum(np.array([0.3205, 0.4131, 0.51]))
array(True, dtype=bool)
>>> a = np.array([[0.3205, 0.4131, 0.51],
... [-0.0005, 0.0031, 0.001]])
>>> is_within_visible_spectrum(a)
array([ True, False], dtype=bool)
Note that this definition expects CIE XYZ tristimulus values, so you would have to convert your CIE xyY colourspace values to XYZ by using the colour.xyY_to_XYZ definition.
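For example (placeholder values; assumes the imports above plus the colour package itself):
>>> import colour
>>> XYZ = colour.xyY_to_XYZ(np.array([0.3205, 0.4131, 0.51]))
>>> is_within_visible_spectrum(XYZ)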

texture mapping (u,v) values

Here is an excerpt from Peter Shirley's Fundamentals of Computer Graphics:
11.1.2 Texture Arrays
We will assume the two dimensions to be mapped are called u and v.
We also assume we have an nx × ny image that we use as the texture.
Somehow we need every (u,v) to have an associated color found from the
image. A fairly standard way to make texturing work for (u,v) is to
first remove the integer portion of (u,v) so that it lies in the unit
square. This has the effect of "tiling" the entire uv plane with
copies of the now-square texture. We then use one of the three
interpolation strategies to compute the image color for the
coordinates.
My question is: what is the integer portion of (u,v)? I thought u,v satisfy 0 <= u,v <= 1.0. If there is an integer portion, shouldn't we be dividing u,v by the texture image width and height to get the normalized u,v values?
UV values can be less than 0 or greater than 1. The reason for dropping the integer portion is that UV values use the fractional part when indexing textures, where (0,0), (0,1), (1,0) and (1,1) correspond to the texture's corners. Allowing UV values to go beyond 0 and 1 is what enables the "tiling" effect to work.
For example, if you have a rectangle whose corners are indexed with the UV points (0,0), (0,2), (2,0), (2,2), and assuming the texture is set to tile the rectangle, then four copies of the texture will be drawn on that rectangle.
The meaning of a UV value's integer part depends on the wrapping mode. In OpenGL, for example, there are at least three wrapping modes (a rough sketch of each follows the list):
GL_REPEAT - The integer part is ignored and has no meaning. This is what allows textures to tile when UV values go beyond 0 and 1.
GL_MIRRORED_REPEAT - The fractional part is mirrored if the integer part is odd.
GL_CLAMP_TO_EDGE - Values greater than 1 are clamped to 1, and values less than 0 are clamped to 0.
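A rough illustration of how each mode folds an arbitrary coordinate back into [0, 1] (simplified Python; the real GL behaviour also involves half-texel details that are ignored here):

import math

def wrap_repeat(u):
    # GL_REPEAT: keep only the fractional part, so 2.3 and 0.3 sample the same spot
    return u - math.floor(u)

def wrap_mirrored_repeat(u):
    # GL_MIRRORED_REPEAT: mirror the fractional part when the integer part is odd
    f = u - math.floor(u)
    return 1.0 - f if int(math.floor(u)) % 2 else f

def wrap_clamp_to_edge(u):
    # GL_CLAMP_TO_EDGE: clamp to the [0, 1] range
    return min(max(u, 0.0), 1.0)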
Peter O's answer is excellent. I want to add a high-level point: the coordinate systems used in graphics are a convention that people just stick to as a de facto standard; there's no law of nature here and it is arbitrary (but a decent standard, thank goodness). I think one reason texture mapping is often confusing is that the arbitrariness of this standard isn't obvious. The convention is that the image has a de facto coordinate system on the unit square [0,1]^2. Give me a (u,v) on the unit square and I will tell you a point in the image (for example, (0.2,0.3) is 20% to the right and 30% up from the bottom-left corner of the image). But what if you give me a (u,v) that is outside [0,1]^2, like (22.7, -13.4)? Some rule is used to map that back onto [0,1]^2, and the GL modes described are just various useful hacks to deal with that case.

What happens in the rasterizer stage?

I want to use Direct3D 11 to blend several images from multiple views into one texture, so I do multiple projections in the vertex shader and geometry shader stages; one projection's result is stored in SV_Position, the others in POSITION0, POSITION1 and so on. These positions are then used to sample the images.
Then, at the pixel shader stage, the value in SV_Position is typically something like (307.5, 87.5) because it is in screen space. Since the render target is 500x500, the UV for sampling is (0.615, 0.175), which is correct. But a value in POSITION0 looks like (0.1312, 0.370): it is vertically flipped and offset, so I have to do (0.5 + x, 0.5 - y), and even then the projection is distorted and only roughly matches.
What does the rasterizer stage do to SV_Position?
The rasterizer stage expects the coordinates in SV_Position to be normalized device coordinates. In this space X and Y values between -1.0 and +1.0 cover the whole output target, with Y going "up". That way you do not have to care about the exact output resolution in the shaders.
So as you realized, before a pixel is written to the target another transformation is performed. One that inverts the Y axis, scales X and Y and moves the origin to the top left corner.
In Direct3D11 the parameters for this transformation can be controlled through the ID3D11DeviceContext::RSSetViewports method.
If you need pixel coordinates for your other positions in the pixel shader, you have to do that transformation yourself. To access the output resolution in the shader, bind it as a shader constant, for example.
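A small sketch of the transformation (just the math, written out in Python; parameter names mirror the D3D11_VIEWPORT fields), together with the manual equivalent you would apply to an interpolated POSITIONn value:

def ndc_to_pixel(x_ndc, y_ndc, width, height, top_left_x=0.0, top_left_y=0.0):
    # X: [-1, 1] -> [TopLeftX, TopLeftX + Width]
    px = top_left_x + (x_ndc * 0.5 + 0.5) * width
    # Y: [-1, 1] -> [TopLeftY + Height, TopLeftY]; note the flipped axis
    py = top_left_y + (-y_ndc * 0.5 + 0.5) * height
    return px, py

# Manual equivalent for an interpolated POSITIONn value p (after dividing by p.w):
#   u = p.x * 0.5 + 0.5
#   v = -p.y * 0.5 + 0.5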
