Best 2D integration with Direct3D 10?

I have an application that, to this point, is mostly DirectX 10 3D graphics. Now we are wanting to generate some real-time 2D images to display on a portion of the screen. The 2D graphics will be generated by using raw data coming in from an external piece of hardware - each value will be translated into a pixel color and position.
I have looked at Direct2D integration options, but I don't see a straightforward way to draw to a surface pixel by pixel; it seems geared toward drawing common lines and shapes.
I believe it would also be possible to do this with just Direct3D (no use of Direct2D), using a simple vertex for each pixel and drawing it with an orthographic projection.
Does using Direct2D improve efficiency at all when generating real-time 2D graphics? I don't plan to use the shape tools, so that is not really a selling point for me. Are there any other common practices for generating 2D graphics within a Direct3D application that would work better?
Any pointers would be greatly appreciated. Thank you.

The normal approach for drawing 2D graphics (UI etc.) in a D3D application is to store any bitmap image data you need in textures and draw textured 2D quads to display it. This gives you hardware acceleration of scaling, rotation, filtering, etc. for free.
If you want pixel accurate reproduction of your source images you'll need to ensure you take the screen resolution into account and draw your quads with the correct vertex positions. This is easier in Direct3D 10 than it was in Direct3D 9.
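As a rough illustration of that last point, a minimal sketch of mapping a pixel-space rectangle to clip-space quad corners in Direct3D 10 could look like this (the struct and function names are just illustrative; it assumes the usual convention that clip-space x runs from -1 at the left to +1 at the right and y from -1 at the bottom to +1 at the top):

struct QuadCorners { float left, top, right, bottom; };   // clip-space values

// Convert a rectangle given in pixels (top-left origin, y grows downward)
// into clip-space corners (y grows upward).
QuadCorners PixelRectToClipSpace(float x, float y, float width, float height,
                                 float screenWidth, float screenHeight)
{
    QuadCorners q;
    q.left   = (x / screenWidth) * 2.0f - 1.0f;
    q.right  = ((x + width) / screenWidth) * 2.0f - 1.0f;
    q.top    = 1.0f - (y / screenHeight) * 2.0f;
    q.bottom = 1.0f - ((y + height) / screenHeight) * 2.0f;
    return q;
}

Unlike Direct3D 9 there is no half-pixel offset to compensate for, which is one reason this is easier in Direct3D 10.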

The answer to your question is textured 2D quads.
Simply create a vertex buffer with the following vertices:
(0.0f, 0.0f, 0.0f, 0.0f, 1.0f),
(0.0f, 1.0f, 0.0f, 0.0f, 0.0f),
(1.0f, 1.0f, 0.0f, 1.0f, 0.0f),
(1.0f, 0.0f, 0.0f, 1.0f, 1.0f)
Vertex layout
(Position x, Position y, Position z, Texture Coord x, Texture Coord y).
And the appropriate index buffer.
Then simply bind the appropriate texture as a shader resource, along with a matrix to translate/rotate/scale the quad as you want. In the vertex shader, transform the vertex positions using the matrix; in the pixel shader, sample the shader resource containing the texture.
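A minimal sketch of setting up those buffers and a matching input layout in C++ with Direct3D 10 might look like this (it assumes you already have an ID3D10Device*; error handling is omitted and the function name is just illustrative):

#include <d3d10.h>

struct QuadVertex { float x, y, z, u, v; };   // position + texture coordinate

static const QuadVertex quadVertices[4] =
{
    { 0.0f, 0.0f, 0.0f, 0.0f, 1.0f },
    { 0.0f, 1.0f, 0.0f, 0.0f, 0.0f },
    { 1.0f, 1.0f, 0.0f, 1.0f, 0.0f },
    { 1.0f, 0.0f, 0.0f, 1.0f, 1.0f },
};
static const UINT quadIndices[6] = { 0, 1, 2, 0, 2, 3 };   // two clockwise triangles

// Input layout matching the vertex structure above (validated against the
// vertex shader's input signature when the layout object is created).
static const D3D10_INPUT_ELEMENT_DESC quadLayout[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0,  0, D3D10_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, 12, D3D10_INPUT_PER_VERTEX_DATA, 0 },
};

void CreateQuadBuffers(ID3D10Device* device, ID3D10Buffer** vb, ID3D10Buffer** ib)
{
    D3D10_BUFFER_DESC vbDesc = {};
    vbDesc.ByteWidth = sizeof(quadVertices);
    vbDesc.Usage     = D3D10_USAGE_DEFAULT;
    vbDesc.BindFlags = D3D10_BIND_VERTEX_BUFFER;
    D3D10_SUBRESOURCE_DATA vbData = { quadVertices, 0, 0 };
    device->CreateBuffer(&vbDesc, &vbData, vb);

    D3D10_BUFFER_DESC ibDesc = {};
    ibDesc.ByteWidth = sizeof(quadIndices);
    ibDesc.Usage     = D3D10_USAGE_DEFAULT;
    ibDesc.BindFlags = D3D10_BIND_INDEX_BUFFER;   // bind with DXGI_FORMAT_R32_UINT
    D3D10_SUBRESOURCE_DATA ibData = { quadIndices, 0, 0 };
    device->CreateBuffer(&ibDesc, &ibData, ib);
}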
If you want to update the texture simply Map() it, update whatever pixels you want, Unmap() it and bind it again to the pixel shader.
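A minimal sketch of that update path, assuming the texture was created with D3D10_USAGE_DYNAMIC, D3D10_CPU_ACCESS_WRITE and a 32-bit RGBA format (the function name and the CPU-side pixel buffer are just illustrative):

#include <d3d10.h>
#include <cstring>

// pixels points to width*height*4 bytes of RGBA data built from the incoming
// hardware values (one color per value/position).
void UpdateTexture(ID3D10Texture2D* tex, const unsigned char* pixels,
                   UINT width, UINT height)
{
    D3D10_MAPPED_TEXTURE2D mapped;
    if (SUCCEEDED(tex->Map(0, D3D10_MAP_WRITE_DISCARD, 0, &mapped)))
    {
        // RowPitch may be larger than width*4, so copy row by row.
        unsigned char* dst = static_cast<unsigned char*>(mapped.pData);
        for (UINT y = 0; y < height; ++y)
            std::memcpy(dst + y * mapped.RowPitch, pixels + y * width * 4, width * 4);
        tex->Unmap(0);
    }
}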

Related

Autodesk Maya per pixel surface position and normals through whole scene

How can I get, per pixel in Maya, all surface positions and normals of the entire scene?
I don't want to stop at the first surface the camera sees. I need to get information about all objects; I want to traverse the whole scene.
For example, a cube is in front of a sphere. From the camera's position only the cube is visible; the sphere is hidden behind the cube at that camera position. For every pixel of my camera-rendered image I want the world-space surface position and the normal of the cube at the first hit, then again for the other side of the cube, and then for the two surfaces of the sphere.
How can that be achieved?
Thanks

How to get screen (widget) coordinates from world coordinates

Let's say I have an entity's global (world) coordinate v (QVector3D). Then I make a coordinate transformation:
pos = camera.projectionMatrix() * camera.viewMatrix() * v
where projectionMatrix() and viewMatrix() are QMatrix4x4 instances. What do I actually get and how is this related to widget coordinates?
The following values are for OpenGL. They may differ in other graphics APIs.
You get clip-space coordinates. Imagine a cube with side length 2 (i.e. -1 to 1, or strictly speaking -w to w, on all axes [1]). You transform your world so that everything you see with your camera lies inside this cube, so the graphics card can discard everything outside it (since you don't see it, it doesn't need rendering; this is done for performance reasons).
Going further, you (or rather your graphics API) perform the perspective divide. Then you are in normalized device space; basically here you go from 3D to 2D, such that you know where on your rendering canvas the pixels have to be colored with whatever lighting calculations you use. This canvas is a quad spanning -1 to 1 on both axes.
Afterwards you stretch these normalized device coordinates to whatever width and height your widget has, so that you know where in your widget the colored pixels go (in OpenGL this is defined by the viewport).
What you see as widget coordinates are probably the coordinates of where your widget is on the screen (usually its upper left corner is given). Therefore, if your widget coordinate is (10, 10) and you have a rendered pixel at (10, 10) after the viewport transformation, then on screen your rendered pixel would be at (10+10, 10+10) = (20, 20).
[1] After a discussion with derhass (see the comments): a lot of books on graphics programming speak of [-1, -1, -1] x [1, 1, 1] as the clipping volume. The OpenGL 4.6 core specification, however, states that it is actually [-w, -w, -w] x [w, w, w] (and according to derhass it is the same for other APIs; I have not checked this).
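Putting those steps together, a minimal sketch with Qt types might look like this (it assumes the viewport covers the whole widget; the function name is just illustrative):

#include <QMatrix4x4>
#include <QVector3D>
#include <QVector4D>
#include <QPointF>

QPointF worldToWidget(const QVector3D& worldPos, const QMatrix4x4& projection,
                      const QMatrix4x4& view, int widgetWidth, int widgetHeight)
{
    // Clip space: note the explicit w component.
    QVector4D clip = projection * view * QVector4D(worldPos, 1.0f);

    // Perspective divide -> normalized device coordinates in [-1, 1].
    QVector3D ndc = clip.toVector3D() / clip.w();

    // Viewport transform -> pixel coordinates, with y flipped because widget
    // coordinates grow downwards while NDC y grows upwards.
    float x = (ndc.x() * 0.5f + 0.5f) * widgetWidth;
    float y = (1.0f - (ndc.y() * 0.5f + 0.5f)) * widgetHeight;
    return QPointF(x, y);
}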

Algorithm to calculate and display a ribbon on a 3D triangle mesh

I am looking for an algorithm for the following problem:
Given:
A 3D triangle mesh. The mesh represents a part of the surface of the earth.
A polyline (a connected series of line segments) whose vertices are always on an edge or on a vertex of a triangle of the mesh. The polyline represents the centerline of a road on the surface of the earth.
I need to calculate and display the road i.e. add half of the road's width on each side of the center line, calculate the resulting vertices in the corresponding triangles of the mesh, fill the area of the road and outline the sides of the road.
What is the simplest and/or most effective strategy to do this? How do I store the data of the road most efficiently?
I see 2 options here:
1. Render a thick polyline with a road texture
While rendering the polyline you need a TBN matrix, so use:
the polyline tangent as the tangent
the surface normal as the normal
binormal = tangent x normal
Then shift the actual point position p to
p0 = p + d*binormal
p1 = p - d*binormal
where d is half of the road width, and render the textured line (p0, p1); see the sketch below. This approach is not a precise match to the surface mesh, so you need to disable depth testing or use some sort of blending. Also, on sharp turns it could miss parts of the curve (in that case you can render a rectangle or disc instead of a line).
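A minimal sketch of that offsetting step with plain C++ vector math (the small vector type and function names are just illustrative):

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)  { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 add(Vec3 a, Vec3 b)  { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 mul(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
static Vec3 cross(Vec3 a, Vec3 b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
static Vec3 normalize(Vec3 a)
{
    float len = std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    return mul(a, 1.0f / len);
}

// p: current polyline point, pNext: next point, n: surface normal at p,
// d: half of the road width. Outputs the two ribbon edge points p0, p1.
void ribbonEdgePoints(Vec3 p, Vec3 pNext, Vec3 n, float d, Vec3& p0, Vec3& p1)
{
    Vec3 tangent  = normalize(sub(pNext, p));
    Vec3 binormal = normalize(cross(tangent, n));
    p0 = add(p, mul(binormal, d));
    p1 = sub(p, mul(binormal, d));
}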
2. Create the road mesh by shifting the polyline to each side by half the road width
This produces a road mesh that accurately fits the surface, but because of your constraints the shape of the road can become badly distorted in some cases unless the surface mesh is re-triangulated. I see it like this:
for each segment of the road, cast 2 lines shifted by half the road width (green, brown)
find their intersections (aqua dots) with the mesh edge shared with the current road control point (red dot)
take the average point (magenta dot) of the intersections and use that as a road mesh vertex. If one of the intersections lies outside the shared edge, ignore it. If both intersections lie outside the shared edge, find the closest intersection with a different edge.
As you can see, this can lead to serious road-thickness distortions in some cases (big differences between the intersection points, or one of the intersection points lying outside the surface mesh edge).
If you need accurate road thickness then use the intersection of the cast lines as the road control point instead. To make that possible, either use blending or disable depth testing while rendering, or add this point to the surface mesh by re-triangulating it. Of course such an action will also affect the road mesh, so you need to iterate a few times ...
Another way is to use a blended texture for the road (like sprites) and compute the texture coordinates for the control points. If the road is too thick, thin it by shifting the texture coordinates ... To make this work you need to select the farthest intersection point instead of the average ... Compute the real half-width of the road and from that compute the texture coordinate.
If you drop the limitation (for the road mesh) that road vertices must lie on surface mesh edges or vertices, then you can simply use the intersection of the shifted lines alone; see the sketch below. That will get rid of the thickness artifacts and simplify things a lot.
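A minimal sketch of shifting a segment's border line and intersecting two such shifted lines, done in 2D in the plane of the surface (the types and function names are just illustrative):

#include <cmath>

struct Vec2 { float x, y; };

// Border line of segment p->q shifted sideways by d (half the road width):
// the direction stays the same, the origin moves along the left-hand normal.
void borderLine(Vec2 p, Vec2 q, float d, Vec2& origin, Vec2& dir)
{
    dir = { q.x - p.x, q.y - p.y };
    float len = std::sqrt(dir.x * dir.x + dir.y * dir.y);
    Vec2 normal = { -dir.y / len, dir.x / len };
    origin = { p.x + normal.x * d, p.y + normal.y * d };
}

// Intersect the infinite lines a + t*ad and b + s*bd.
// Returns false if the lines are (nearly) parallel.
bool intersectLines(Vec2 a, Vec2 ad, Vec2 b, Vec2 bd, Vec2& out)
{
    float det = ad.x * bd.y - ad.y * bd.x;
    if (std::fabs(det) < 1e-6f) return false;
    float t = ((b.x - a.x) * bd.y - (b.y - a.y) * bd.x) / det;
    out = { a.x + t * ad.x, a.y + t * ad.y };
    return true;
}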

In 3D graphics, why is antialiasing not more often achieved using textures?

Commonly, techniques such as supersampling or multisampling are used to produce high fidelity images.
I've been messing around on mobile devices with CSS3 3D lately and this trick does a fantastic job of obtaining high quality non-aliased edges on quads.
The way the trick works is that the texture for the quad gains two extra pixels in each dimension forming a transparent one-pixel-wide outline outside the border. Due to texture sampling interpolation, so long as the transformation does not put the camera too close to an edge the effect is not unlike a pre-filtered antialiased rendering approach.
What are the conceptual and technical limitations of taking this sort of approach to render a 3D model, for example?
I think I already have one point that precludes using this kind of trick in the general case. Whenever geometry is not rectangular it does nothing to reduce aliasing: The fact that the result with a transparent 1px outline border is smooth for HTML5 with CSS3 depends on those elements being rectangular so that they rasterize neatly into a pixel grid.
The trick you linked to doesn't seem to have to do with texture interpolation. The CSS added a border that is drawn as a line. The rasterizer in the browser is drawing polygons without antialiasing and is drawing lines with antialiasing.
To answer your question of why you wouldn't want to blend into transparency over a 1-pixel border: transparency is very difficult to draw correctly and can lead to artifacts when polygons are not drawn from back to front. You either need to pre-sort your polygons by distance, or use opaque polygons whose occlusion is resolved with a depth buffer and multisampling.

Terrain tile scale in case of tilted camera

I am working on a 3D terrain visualization tool right now. The surface is logically covered with square tiles, forming a regular square grid over the terrain.
Suppose I want to draw a picture on these tiles. The level of detail for the picture has to be selected according to the current camera scale, which is calculated for each tile individually.
In the case of a vertical camera (no tilt, i.e. the camera looks perpendicularly at the ground), all tiles have the same scale, which is determined by the camera focal length and the camera height above the ground.
The following picture depicts the situation:
where the red triangle is the camera (with no tilt), BG is the camera height above the ground and EG is the focal length; then scale = AC/DF = BG/EG.
But if the camera is tilted (i.e. the pitch angle isn't 0) then the scale changes from tile to tile (even from point to point).
So I wonder whether there is any method to produce a reasonable scale for each tile in that case?
There may be (there almost surely is) a more straightforward solution, but what you could do is a regular world-to-screen coordinate conversion.
You just take the coordinates of the tile's bounding points and calculate which pixels on the screen they will project to (with floating-point precision, of course). From this, I believe you can calculate the "scale" you are mentioning.
This is applicable to any point or set of points in the world space.
Here is a tutorial on how to do this "by hand".
If you are rendering the tiles with OpenGL or DirectX, you can do this much easier.
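For example, a minimal sketch with Qt types, just as an illustration (any math library with 4x4 matrices works the same way; the viewport is assumed to cover the whole screen and the function names are illustrative):

#include <QMatrix4x4>
#include <QVector3D>
#include <QVector4D>
#include <QPointF>
#include <cmath>

// Project a world-space point to pixel coordinates (the same clip space ->
// NDC -> viewport chain described in the earlier answer).
static QPointF projectToScreen(const QVector3D& world, const QMatrix4x4& projection,
                               const QMatrix4x4& view, int screenW, int screenH)
{
    QVector4D clip = projection * view * QVector4D(world, 1.0f);
    QVector3D ndc = clip.toVector3D() / clip.w();
    return QPointF((ndc.x() * 0.5f + 0.5f) * screenW,
                   (1.0f - (ndc.y() * 0.5f + 0.5f)) * screenH);
}

// Ground units per pixel along one tile edge. Evaluate it per tile (or even
// per corner), since the value varies across the image when the camera is tilted.
float tileEdgeScale(const QVector3D& cornerA, const QVector3D& cornerB,
                    const QMatrix4x4& projection, const QMatrix4x4& view,
                    int screenW, int screenH)
{
    QPointF a = projectToScreen(cornerA, projection, view, screenW, screenH);
    QPointF b = projectToScreen(cornerB, projection, view, screenW, screenH);
    float pixelDist  = std::hypot(float(b.x() - a.x()), float(b.y() - a.y()));
    float groundDist = (cornerB - cornerA).length();
    return groundDist / pixelDist;   // analogous to AC/DF above
}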
