Is there something like vkCmdBlitImage for D3D12?

I'd like to create a mipmap chain for a 2D texture by blitting the base image into mip levels. In Vulkan, vkCmdBlitImage can be used to do this while linearly filtering the image (see another question). How can I achieve the same in D3D12?

AFAIK D3D12 has no such functionality; you're expected to generate the mipmap chain yourself, e.g. with a compute shader like this one from the MiniEngine in the DirectX samples provided by Microsoft.
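For reference, the general shape of recording such a compute-based downsample pass might look like the sketch below. This is not the MiniEngine code itself; the pipeline state, root signature layout, descriptor handles, and the 8x8 thread-group size are assumptions, the shader is assumed to read mip N through an SRV and write mip N+1 through a UAV, and subresource state transitions are omitted for brevity.

```cpp
#include <d3d12.h>
#include <algorithm>

// Rough sketch of recording one compute dispatch per mip level.
// All named objects (mipGenPSO, mipGenRootSig, srvUavHeap, the per-mip
// descriptor handle arrays) are hypothetical and created elsewhere.
void RecordGenerateMips(ID3D12GraphicsCommandList* cmd,
                        ID3D12Resource* texture,
                        UINT width, UINT height, UINT mipLevels,
                        ID3D12PipelineState* mipGenPSO,
                        ID3D12RootSignature* mipGenRootSig,
                        ID3D12DescriptorHeap* srvUavHeap,
                        const D3D12_GPU_DESCRIPTOR_HANDLE* srvPerMip,  // SRV of mip i
                        const D3D12_GPU_DESCRIPTOR_HANDLE* uavPerMip)  // UAV of mip i
{
    cmd->SetPipelineState(mipGenPSO);
    cmd->SetComputeRootSignature(mipGenRootSig);
    ID3D12DescriptorHeap* heaps[] = { srvUavHeap };
    cmd->SetDescriptorHeaps(1, heaps);

    for (UINT mip = 1; mip < mipLevels; ++mip)
    {
        UINT dstWidth  = std::max(width  >> mip, 1u);
        UINT dstHeight = std::max(height >> mip, 1u);

        // Assumed root parameter layout: 32-bit constants at 0,
        // source SRV table at 1, destination UAV table at 2.
        float invDstSize[2] = { 1.0f / dstWidth, 1.0f / dstHeight };
        cmd->SetComputeRoot32BitConstants(0, 2, invDstSize, 0);
        cmd->SetComputeRootDescriptorTable(1, srvPerMip[mip - 1]);
        cmd->SetComputeRootDescriptorTable(2, uavPerMip[mip]);

        // Assumes the compute shader uses 8x8 threads per group.
        cmd->Dispatch((dstWidth + 7) / 8, (dstHeight + 7) / 8, 1);

        // Make the mip just written visible before the next iteration reads it.
        D3D12_RESOURCE_BARRIER barrier = {};
        barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_UAV;
        barrier.UAV.pResource = texture;
        cmd->ResourceBarrier(1, &barrier);
    }
}
```

The shader itself would typically sample the source mip with a linear-filtering sampler at the destination texel centers, which gives you roughly the same downsample a linearly filtered blit would.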

Related

Custom (manual) implementation of MSAA in Vulkan

I'm using Vulkan to render simple textured meshes. To achieve a smooth result, I tried to use the built-in multisampling, but the maximum available number of samples for the render target (image) is only 4x. This is not enough for my purposes; I need 8x/16x.
How to efficiently implement antialiasing manually?
Multisample antialiasing is a technique that requires cooperation from the rasterizer, render targets, and other portions of the per-fragment processing hardware. It rasterizes coverage at a higher, per-sample resolution, but executes the fragment shader only once per pixel, broadcasting the result across the covered samples within that pixel.
That's not something you can do manually.
You can always resort to super-sampling (render at a high resolution and then downsample). Or you can use faux antialiasing techniques like FXAA and the like. But you can't emulate MSAA manually.
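If you do go the super-sampling route, the downsample step is conceptually just an averaging filter. As a minimal illustration (not how you would do it in a real renderer, where this would be a GPU resolve/downsample pass), a 2x2 box downsample of a tightly packed RGBA8 image could look like this; note that averaging sRGB bytes directly is not gamma-correct, which a production implementation would account for:

```cpp
#include <cstdint>
#include <vector>

// Minimal 2x2 box-filter downsample of a tightly packed RGBA8 image.
// Assumes srcWidth and srcHeight are even.
std::vector<uint8_t> Downsample2x(const std::vector<uint8_t>& src,
                                  int srcWidth, int srcHeight)
{
    const int dstWidth = srcWidth / 2, dstHeight = srcHeight / 2;
    std::vector<uint8_t> dst(static_cast<size_t>(dstWidth) * dstHeight * 4);
    for (int y = 0; y < dstHeight; ++y)
        for (int x = 0; x < dstWidth; ++x)
            for (int c = 0; c < 4; ++c)
            {
                int sum = 0;
                for (int dy = 0; dy < 2; ++dy)      // average the 2x2 block
                    for (int dx = 0; dx < 2; ++dx)
                        sum += src[((2 * y + dy) * srcWidth + (2 * x + dx)) * 4 + c];
                dst[(static_cast<size_t>(y) * dstWidth + x) * 4 + c] =
                    static_cast<uint8_t>(sum / 4);
            }
    return dst;
}
```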

How to write values to depth buffer in godot fragment shader?

How do you specify the depth value in the fragment shader if you would like to, for example, render a texture of a sphere that also affects the depth buffer in the camera's z-direction?
In OpenGL you can use gl_FragDepth. Is there a similar built-in variable in Godot?
Edit:
I found after posting the question that there is a DEPTH variable that may be what I need. I have not had time to try it yet. If you have any experience using it successfully, I would accept that answer.
Yes, you can write to DEPTH from the fragment function of the shader of a spatial material.
Godot will, of course, also draw depth by default. You can control that with the render modes depth_draw_*, see Depth Draw Mode.
And if you want to read depth, you can use DEPTH_TEXTURE. The article Screen Reading Shaders has an example.
Refer to Spatial Shader for the list of available variables and options in spatial shaders.

How does a graphics engine figure out how to place pixels to make a 3d image?

I was wondering what procedure a simple 3d program uses to draw 2d pixels so that they appear 3d. I'm really interested in this for drawing purposes since if a program can figure out how to use a flat screen to produce images with depth then maybe I could use those techniques in my drawing.
Are there any basic 3D engines out there I can look at, without any 2D-to-3D abstractions?
Two notions may interest you:
The perspective projection, which is the mathematical transformation that takes 3D points (or vertices) and the characteristics of your camera (position, orientation, frustum, ...) and gives you the 2D projection of the point on your chosen medium (screen). A concrete sketch follows the links below.
Wikipedia - 3D Projection
StackOverflow - Transform GPS-Points to Screen-Points with Perspective Projection in Android (I made a detailed answer)
The Painter's algorithm (since you seem to ask for drawing-related techniques), a rendering method which sorts all the elements of your scene by depth after their projection, and draws them on your medium in order of decreasing depth, to ensure a realistic output ("far objects hidden behind closer ones", imitating the painter's method). This algorithm has some limits, however: it is far from efficient in its basic implementation, and it can't easily deal with elements that intersect or circularly overlap each other. So these days a more efficient method is usually used, Z-buffering, which deals with depth conflicts on a per-pixel basis.
Wikipedia - Painter's algorithm
Wikipedia - Z-buffering
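To make the projection part concrete, here is a minimal, self-contained sketch of a pinhole perspective projection (the focal length and screen size are made-up parameters; a real engine would use a full view/projection matrix, but the perspective divide is the essential step):

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };

// Minimal pinhole projection: 'p' is a point already expressed in camera
// space (camera at the origin, looking down +z). 'focal' controls the field
// of view; the result is in pixel coordinates on a screenW x screenH screen.
Vec2 Project(const Vec3& p, float focal, float screenW, float screenH)
{
    // The perspective divide: farther points (larger z) end up closer to
    // the center of the screen, which is what creates the depth cue.
    float sx = (p.x * focal) / p.z;
    float sy = (p.y * focal) / p.z;
    // Shift from "0 at the center" to pixel coordinates (y pointing down).
    return { sx + screenW * 0.5f, screenH * 0.5f - sy };
}

int main()
{
    // Two points with the same (x, y) but different depths project to
    // different screen positions: the farther one lands nearer the center.
    Vec2 nearPt = Project({ 1.0f, 1.0f,  2.0f }, 500.0f, 800.0f, 600.0f);
    Vec2 farPt  = Project({ 1.0f, 1.0f, 10.0f }, 500.0f, 800.0f, 600.0f);
    std::printf("near: (%.1f, %.1f)  far: (%.1f, %.1f)\n",
                nearPt.x, nearPt.y, farPt.x, farPt.y);
    return 0;
}
```

The z value you divide by is also exactly what the Painter's algorithm sorts on, and what a Z-buffer stores per pixel.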
By combining those notions, you can actually implement your own simple 3D engine (in the StackOverflow thread I pointed to above, I gave a link to an article I wrote about creating such an engine easily).
If you want to look at more complex engines and notions, you can take a look at the GPU Gems 3 by Nvidia for instance, or look at articles about OpenGL.
Hope it helped, bye!

What is the difference between texture filtering and texture sampling?

Are these concepts one and the same? I've seen them used in multiple contexts.
Texture sampling is the act of retrieving data from a texture. Texture filtering is the algorithm by which one or more texels are fetched (and possibly combined) in order to produce the result of sampling the texture.
Some links for you:
http://www.extremetech.com/article2/0,2845,1155163,00.asp
http://blogs.msdn.com/shawnhar/archive/2009/09/08/texture-filtering.aspx
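As a concrete illustration of the split between the two, here is a sketch of a single sample with bilinear filtering done on the CPU (a single-channel float texture with clamp-to-edge addressing is assumed; a GPU does the equivalent in hardware when you sample with a linear filter):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// One texture *sample*: the caller asks for a value at normalized (u, v).
// The *filter* decides what that means; here, bilinear: fetch the four
// nearest texels and blend them by distance.
float SampleBilinear(const std::vector<float>& texels,
                     int width, int height, float u, float v)
{
    // Map normalized coordinates to texel space (texel centers at +0.5).
    float x = u * width  - 0.5f;
    float y = v * height - 0.5f;
    int x0 = static_cast<int>(std::floor(x));
    int y0 = static_cast<int>(std::floor(y));
    float fx = x - x0, fy = y - y0;

    auto fetch = [&](int tx, int ty) {
        tx = std::clamp(tx, 0, width - 1);   // clamp-to-edge addressing
        ty = std::clamp(ty, 0, height - 1);
        return texels[static_cast<size_t>(ty) * width + tx];
    };

    // The filtering part: blend the four neighbouring texels.
    float top    = fetch(x0, y0)     * (1 - fx) + fetch(x0 + 1, y0)     * fx;
    float bottom = fetch(x0, y0 + 1) * (1 - fx) + fetch(x0 + 1, y0 + 1) * fx;
    return top * (1 - fy) + bottom * fy;
}
```

Swapping the body of this function for a nearest-neighbour fetch, or for an anisotropic filter, changes the filtering but not the act of sampling.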

Non-Affine image transformations in .NET

Are there any classes, methods in the .NET library, or any algorithms in general, to perform non-affine transformations? (i.e. transformations that involve more than just rotation, scale, translation and shear)
e.g.:
(example image omitted; source: last100.com)
Is there another term for non-affine transformations?
I am not aware of anything built into .NET that lets you do non-affine transforms.
I guess you are trying to do some sort of 3D texture mapping? If that's the case you need a homogeneous (perspective) transform, which is not available in .NET. I'm also not aware of any built-in way to do pixel-displacement transforms in .NET.
However, the currently top-voted solution might be good for what you are trying to do; just be aware that it won't do perspective correction out of the box.
For instance:
The picture on the left was generated using the single quad distort library provided by Neil N. The picture on the right was generated using a single quad (two triangles actually) in DirectX.
This may not have any impact on what you are trying to do, but it is something to keep in mind if you want to do 3D stuff: it will look very weird without perspective-correct mapping.
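The difference comes down to how the texture coordinates are interpolated across the quad. A small sketch of the idea (hypothetical helpers, not part of .NET or DirectX): affine interpolation blends u and v directly, while perspective-correct interpolation blends u/w, v/w, and 1/w and divides at the end.

```cpp
struct Vertex { float u, v, w; };  // w is the projective depth from the projection

// Affine (naive) interpolation between two projected vertices: blends the
// texture coordinates directly, which produces the warped look described above.
void LerpAffine(const Vertex& a, const Vertex& b, float t, float& u, float& v)
{
    u = a.u + (b.u - a.u) * t;
    v = a.v + (b.v - a.v) * t;
}

// Perspective-correct interpolation: interpolate u/w, v/w and 1/w linearly
// in screen space, then divide by the interpolated 1/w at the end.
void LerpPerspective(const Vertex& a, const Vertex& b, float t, float& u, float& v)
{
    float invW   = (1.0f / a.w) + ((1.0f / b.w) - (1.0f / a.w)) * t;
    float uOverW = (a.u / a.w) + ((b.u / b.w) - (a.u / a.w)) * t;
    float vOverW = (a.v / a.w) + ((b.v / b.w) - (a.v / a.w)) * t;
    u = uOverW / invW;
    v = vOverW / invW;
}
```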
All of the example images you posted can be done with a quadrilateral distortion, though I can't say for certain that a quad distort will cover ALL non-affine transforms.
Here's a link to a not-so-great implementation of it in C#... it works, but it is slow. Poke around Wikipedia for the many different optimizations available for these kinds of calculations:
http://www.vcskicks.com/image-distortion.html
-Neil
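For reference, the core of this kind of quadrilateral distortion is just a mapping from the unit square to the four target corners. A minimal sketch of the bilinear variant is below (corner names are made up; as the answer above notes, this is non-affine for a general quad but is not the same as a perspective/homography warp):

```cpp
struct Point { float x, y; };

// Bilinear map from the unit square to an arbitrary quadrilateral:
// (s, t) in [0,1]x[0,1] is blended between the four corners. For a
// non-parallelogram quad this is a non-affine transform.
Point MapUnitSquareToQuad(float s, float t,
                          Point topLeft, Point topRight,
                          Point bottomRight, Point bottomLeft)
{
    Point top    = { topLeft.x + (topRight.x - topLeft.x) * s,
                     topLeft.y + (topRight.y - topLeft.y) * s };
    Point bottom = { bottomLeft.x + (bottomRight.x - bottomLeft.x) * s,
                     bottomLeft.y + (bottomRight.y - bottomLeft.y) * s };
    return { top.x + (bottom.x - top.x) * t,
             top.y + (bottom.y - top.y) * t };
}
```

To warp an image you would typically iterate over the destination pixels and use the inverse of this mapping to find which source pixel to sample.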
You can do this in WPF using the Viewport3D control and a non-affine transform matrix. Rendering this to a bitmap again may be interesting... which I "fixed" by including an invisible <image> control with the same image as on my textured plane. (Also, I had to work around the max texture size issues by splitting up the plane and cropping images...)
http://www.charlespetzold.com/blog/2007/08/060605.html
In my case I wanted the reverse of this (transform so that arbitrary points on the warped image become the corners of my rectangular window), which you get by taking the inverse of the matrix that does the opposite transformation.
