I'm using Vulkan to render simple textured meshes. To get a smooth result I tried the built-in multisampling, but the maximum number of samples available for the render target (image) is only 4x. That's not enough for my purposes; I need 8x or 16x.
How can I efficiently implement antialiasing manually?
Multisample antialiasing is a technique that requires cooperation from the rasterizer, render targets, and other portions of the per-fragment processing hardware. It's a technique that rasterizes at a higher resolution, but only executes the fragment shader at a lower resolution, broadcasting the results across multiple samples within the same pixel area.
That's not something you can do manually.
You can always resort to supersampling (render at a higher resolution and then downsample). Or you can use post-process ("faux") antialiasing techniques such as FXAA. But you can't emulate MSAA manually.
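As an illustration of the supersampling idea, here is a minimal CPU-side resolve sketch, assuming the scene was rendered into an RGBA8 buffer that is "factor" times larger in each dimension. In practice you would do this resolve in a fragment or compute shader, but the averaging is the same; the buffer layout and names here are assumptions.

// Minimal box-filter resolve for supersampling (SSAA): averages each
// factor x factor block of a high-resolution RGBA8 buffer into one output pixel.
public final class SsaaResolve {

    public static int[] resolve(int[] hiRes, int width, int height, int factor) {
        int hiWidth = width * factor;
        int samples = factor * factor;
        int[] loRes = new int[width * height];

        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int r = 0, g = 0, b = 0, a = 0;
                for (int sy = 0; sy < factor; sy++) {
                    for (int sx = 0; sx < factor; sx++) {
                        int p = hiRes[(y * factor + sy) * hiWidth + (x * factor + sx)];
                        a += (p >>> 24) & 0xFF;
                        r += (p >>> 16) & 0xFF;
                        g += (p >>> 8) & 0xFF;
                        b += p & 0xFF;
                    }
                }
                // Write the averaged sample back as one low-resolution pixel.
                loRes[y * width + x] =
                        ((a / samples) << 24) | ((r / samples) << 16)
                      | ((g / samples) << 8) | (b / samples);
            }
        }
        return loRes;
    }
}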
I use an OrthographicCamera set to 720 by 1280 and then set its combined matrix as the projection matrix of my SpriteBatch. I then generate a BitmapFont using the FreeTypeFontGenerator and use it to render text.
OrthographicCamera camera = new OrthographicCamera();
camera.setToOrtho(false, 720, 1280);
SpriteBatch batch = new SpriteBatch();
batch.setProjectionMatrix(camera.combined);
This is the output:
As you can see, the fonts look very distorted, and the only way I found to fix it is to remove the line where I set the projection matrix on my SpriteBatch.
batch.setProjectionMatrix(camera.combined);
I reported this as an issue on the libGDX GitHub page, but I was told it is not caused by libGDX. I need to use this projection matrix so that I can develop my application at one resolution and have it scale to fit any screen. Is there a way to render text without running into these problems?
Use a pixel-perfect projection by using ScreenViewport. Then use Table to lay out your GUI, including its labels. This way you can also support multiple aspect ratios, or even use a different layout depending on the aspect ratio.
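A minimal sketch of that setup, assuming a Skin with a label style loaded from your assets (the uiskin.json file name is a placeholder; the classes come from com.badlogic.gdx.scenes.scene2d and its ui/utils packages):

// Pixel-perfect UI: ScreenViewport plus a Table that lays out the labels.
Skin skin = new Skin(Gdx.files.internal("uiskin.json")); // placeholder skin file
Stage stage = new Stage(new ScreenViewport());
Gdx.input.setInputProcessor(stage);

Table table = new Table();
table.setFillParent(true); // the table covers the whole screen
table.top().left();
table.add(new Label("Score: 0", skin)).pad(10f);
stage.addActor(table);

// in render():
stage.act(Gdx.graphics.getDeltaTime());
stage.draw();

// in resize(int width, int height):
stage.getViewport().update(width, height, true);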
If you want to support a wide range of resolutions then you will need to provide different assets depending on the resolution. Depending on the file size you might want to use different build flavors or use ResolutionFileHandleResolver. Alternatively you can use the freetype extension to generate the correct font for the device, but be aware that this might result in additional render calls, which can affect performance.
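For the freetype route, a minimal sketch that derives the font size from the device density, so the glyphs are rasterized at roughly the size they will actually be drawn at (the font file name is a placeholder):

// Generate the BitmapFont at a device-appropriate size with the freetype extension.
FreeTypeFontGenerator generator =
        new FreeTypeFontGenerator(Gdx.files.internal("myfont.ttf")); // placeholder font
FreeTypeFontGenerator.FreeTypeFontParameter parameter =
        new FreeTypeFontGenerator.FreeTypeFontParameter();
parameter.size = (int) (16 * Gdx.graphics.getDensity()); // roughly 16 "dp" tall
BitmapFont font = generator.generateFont(parameter);
generator.dispose(); // the generator is no longer needed once the font is built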
Note that all of this only applies to your GUI. For your game logic you obviously use the coordinate system that makes most sense for that (e.g. meters), with a separate camera.
I enjoy computer graphics.
I was wondering what the fastest engine was with the following functionality:
Draws triangles with four color channels (RGBA) and allows for drawing point and directional lights.
Texturing would be a cool additional feature, but again, I am looking for the fastest engine, not the most functional one. Camera animation and object animation are imperative.
Finally, there are really two answers to this question, one for general development and one for the web, but if you can only speak to one or the other your contributions will still be appreciated!
There are quite a lot of engines that do the job. One of the best known is Unity, which also gives you tons of other features with good performance.
But I think you are not really looking for an engine but for an API. Examples are OpenGL or DirectX (already mentioned). OpenGL even has a web-specific variant (WebGL).
There is one more problem: the triangles should be semitransparent. What is missing in the other answer is the question of whether the triangles are already ordered. OpenGL, for example, is good at rendering objects where it does not matter which triangle is nearest to the viewer: it resolves this on the fly and shows only the triangle that is visible. But with semitransparent triangles it is possible to see several triangles overlapping each other, so it is not only necessary to know which triangle is in front, but also which triangle comes directly behind it, and so on. OpenGL offers blending for this, but you have to order the semitransparent triangles back to front yourself before rendering; this is called the painter's algorithm. Sorting is a costly operation, especially with a large number of objects, so this can take quite a long time.
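A minimal sketch of that back-to-front sort, assuming the depth of a triangle is approximated by the distance from the camera to its centroid (the Triangle class and the camera position array are hypothetical):

import java.util.Comparator;
import java.util.List;

// Painter's algorithm: sort semitransparent triangles back to front before drawing.
final class PainterSort {

    static final class Triangle {
        final float[] centroid; // x, y, z of the triangle's centre (hypothetical layout)
        Triangle(float[] centroid) { this.centroid = centroid; }
    }

    // Farthest triangles first, so nearer ones are blended over them later.
    static void sortBackToFront(List<Triangle> triangles, float[] cameraPos) {
        triangles.sort(Comparator.comparingDouble(
                (Triangle t) -> squaredDistance(t.centroid, cameraPos)).reversed());
    }

    private static double squaredDistance(float[] a, float[] b) {
        double dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
        return dx * dx + dy * dy + dz * dz;
    }
}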
To avoid the sorting there is another solution called "depth peeling". The idea is to render all triangles multiple times with OpenGL. The first pass gives you the triangles that are nearest to the viewer. Then you render all triangles again, but without the ones found in the first pass, which gives you the second-nearest triangles. After that everything is rendered again without the first two "peels", which yields the third-nearest triangles, and so on. This is expensive because everything has to be rendered several times, but when there is a very large number of triangles it can be faster than sorting (and more precise when triangles intersect). In most cases four peels are enough for good results. For further reading I suggest the following paper by Everitt: http://gamedevs.org/uploads/interactive-order-independent-transparency.pdf
Your best bet is probably OpenGL. In the case of the web, you could use WebGL and in the case of native desktop or mobile development you could directly use OpenGL.
I'm a beginner to computer graphics and am trying to get a better understanding. My professor has discussed fixed function pipeline and shader based programming. How do these two compare to each other? What's the difference?
The fixed-function pipeline is, as the name suggests, fixed in its functionality. Someone wrote a list of the different ways you'd be permitted to transform and rasterise geometry, and that's everything available. In broad terms, you can do linear transformations and then rasterise by texturing, by interpolating a colour across a face, or by combinations and permutations of those things. But more than that, the fixed pipeline enshrines certain deficiencies.
For example, it was obvious at the time of design that there wasn't going to be enough power to compute lighting per pixel. So lighting is computed at vertices and linearly interpolated across the face.
There were some intermediate extensions related to specific effects — dot3 plus cubemaps for per-pixel lighting from a single source, for example — but the programmable pipeline lets you do whatever you want at each stage, giving you complete flexibility.
At first that allowed better lighting, then better general special effects (ripples on reflective water, imperfect glass, etc.), and more recently it has been used for things like deferred rendering that turn the traditional pipeline on its head.
On hardware from the last decade or so, all support for the fixed-function pipeline is implemented by programming the programmable pipeline. The programmable pipeline is an advance on its predecessor, afforded by hardware improvements.
Graphics processing units started off very simply with fixed functions that allowed for fast 3D maths (much faster than CPU maths), texture lookup, and some simple lighting and shading options (flat, Gouraud, etc.).
These were very basic but allowed the CPU to offload the very repetitive tasks of 3D rendering to the GPU. Once graphics work was taken away from the CPU and given to the GPU, games made a massive leap forward.
It wasn't long before the fixed functions needed to be replaced by small assembly programs, and soon there was demand for more than the simple shading, basic reflections, and single texture maps offered by fixed-function GPUs.
So the second breed of GPU was created. It had two distinct pipelines: one that processed vertex programs and moved vertices around in 3D space, and one that ran pixel shader programs, allowing multiple textures to be merged and more lights and shading effects to be created.
In the latest form of GPU all the pipes on the card are generic and can run any type of GPU assembly code. This has increased the number of uses for the pipes: they still do vertex mapping and pixel color calculation, but they also run geometry and tessellation shaders, and even compute shaders (where the parallel processor is used to do a non-graphics job).
So fixed function is limited but easy, and is now a thing of the past for all but the most limited devices. Programmable shaders using OpenGL (GLSL) or DirectX (HLSL) are the de facto standard for modern GPUs.
Essentially, the fixed-function pipeline is a hardwired implementation of a, well, fixed program through which each piece of data the GPU processes travels, without the ability to change the details of any step. The only things you can parameterize are the occasional branch that switches between hardcoded paths in the program (like enabling or disabling lighting, or using a separate specular color) and some of the constants used (light colors and positions, texture environment base color modulation). Each and every step follows a specific formula.
In a programmable pipeline, however, the GPU is a clean slate. It's completely up to the programmer how the various stages of the rendering process (vertex transformation, tessellation, fragment processing) are carried out, and you can use whatever formula you see fit for the task.
Fixed-function pipeline GPUs have exactly one illumination mode: a Lambertian illumination model, implemented with Gouraud or Phong shading. There were a few tricks to slightly alter the illumination model, for example to make it anisotropic, but you had to somehow outsmart (or outdumb, to be honest) the GPU for this. With a programmable pipeline you simply do what you wanted to do in the first place.
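For example, the per-pixel diffuse lighting that the fixed pipeline can only evaluate per vertex becomes a few lines of fragment shader code. Here is a minimal sketch, written as a GLSL fragment shader held in a Java string constant so it can be fed to whatever GL binding you use; the varying and uniform names are purely illustrative.

// Minimal per-fragment Lambert shader as a Java string constant; with the fixed
// pipeline this computation was only available per vertex.
static final String LAMBERT_FRAGMENT_SHADER =
        "#version 330 core\n" +
        "in vec3 normal;\n" +
        "in vec3 toLight;\n" +      // interpolated direction from the fragment to the light
        "uniform vec3 diffuseColor;\n" +
        "out vec4 fragColor;\n" +
        "void main() {\n" +
        "    float lambert = max(dot(normalize(normal), normalize(toLight)), 0.0);\n" +
        "    fragColor = vec4(diffuseColor * lambert, 1.0);\n" +
        "}\n";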
As a brief background, I have been slowly chugging away at the core framework of a game I've been wanting to make for some time now. It has gotten to the point where I want to start really fleshing it out with some graphics assets other than colored boxes. And this brings me to the heart of my question:
What is the best method for creating graphics assets that appear the same quality independent of the device they are drawn on?
My game is styled after Pokemon, so I want to capture the 16-bit feel while still remaining crisp regardless of the device resolution. Does this mean I just create a ton of duplicate sprite sheets, i.e. a 16x16, 32x32, 48x48, and 64x64 version of each asset? Or should I be making vector art and rendering it out specifically for each device? Or is there some other alternative I haven't considered?
Thanks!
If by 16-bit feel you mean a classic old-school "pixelated" style (but with crisp edges), then you can just draw your sprites at the minimal dimensions and upscale them by whatever factor you need using a pixel-art scaling algorithm, the simplest being nearest neighbour. There are of course many algorithms that produce much nicer results than nearest neighbour, like 2xSaI and the hqx family, and RotSprite if you need rotation.
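A minimal nearest-neighbour upscale, assuming the sprites are plain ARGB int arrays (the fancier algorithms mentioned above follow the same small-source, scaled-destination pattern, just with smarter sample selection):

// Nearest-neighbour integer upscaling of an ARGB pixel array; keeps pixel edges crisp.
public static int[] scaleNearest(int[] src, int srcWidth, int srcHeight, int factor) {
    int dstWidth = srcWidth * factor;
    int dstHeight = srcHeight * factor;
    int[] dst = new int[dstWidth * dstHeight];
    for (int y = 0; y < dstHeight; y++) {
        for (int x = 0; x < dstWidth; x++) {
            // Each destination pixel simply copies the source pixel it falls inside.
            dst[y * dstWidth + x] = src[(y / factor) * srcWidth + (x / factor)];
        }
    }
    return dst;
}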
If you want clean antialiased edges you might want to check out this Microsoft Research paper: Depixelizing Pixel Art
You can then use these algorithms as a loading pre-pass for your game.
Alternatively, you could shift them "earlier" into your art pipeline to help speed up generation of multiple (resolution/transform) variants, which you could further touch up. This choice largely depends on your level of labor resources and perfectionism. Note also that this loses the "purity" of the solution since it violates DRY because updates will require changes in all variants of a sprite.
I would suggest first trying out some of these upscaling filters and seeing if you are happy with the results. If you are, you can get away with a loading pre-pass, which is by far the most desirable outcome because it reduces work and maintenance by a large factor.
Are there any classes, methods in the .NET library, or any algorithms in general, to perform non-affine transformations? (i.e. transformations that involve more than just rotation, scale, translation and shear)
e.g.:
[example image] (source: last100.com)
Is there another term for non-affine transformations?
I am not aware of anything built into .NET that lets you do non-affine transforms.
I guess you are trying to do some sort of 3D texture mapping? If that's the case you need a homogeneous (projective) transform, which is not available in .NET. I'm also not aware of any built-in way to do pixel displacement transforms in .NET.
However, the currently top-voted solution might be good enough for what you are trying to do; just be aware that it won't do perspective correction out of the box.
For instance:
The picture on the left was generated using the single quad distort library provided by Neil N. The picture on the right was generated using a single quad (two triangles actually) in DirectX.
This may not have any impact on what you are trying to do, but it is something to keep in mind if you want to do 3D stuff: it will look very weird without perspective-correct mapping.
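To make "perspective correct" concrete: instead of interpolating a texture coordinate u linearly across the screen, you interpolate u/w and 1/w (where w is the projective component of each vertex) and divide per pixel. A minimal sketch with two hypothetical edge endpoints:

// Perspective-correct interpolation of a texture coordinate between two projected
// vertices: u0/u1 are texture coordinates, w0/w1 the projective w of each vertex,
// and t is the screen-space interpolation factor in [0, 1].
static float perspectiveCorrectU(float u0, float w0, float u1, float w1, float t) {
    float uOverW   = (1 - t) * (u0 / w0) + t * (u1 / w1); // interpolate u/w linearly
    float oneOverW = (1 - t) * (1 / w0)  + t * (1 / w1);  // interpolate 1/w linearly
    return uOverW / oneOverW;                             // divide back at the pixel
}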
All of the example images you posted can be done with a quadrilateral distortion, though I can't say for certain that a quad distort will cover ALL non-affine transforms.
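As a rough illustration of what such a quad distortion does, here is a minimal forward-mapping sketch (in Java, to match the other snippets here): every source pixel is sent to the bilinear blend of the four destination corners. The buffer layout and corner order are assumptions, and a real implementation would map destination pixels back to the source instead, to avoid holes.

// Naive forward-mapped quadrilateral distortion. Corner order: top-left, top-right,
// bottom-left, bottom-right; each corner is a float[]{x, y} in destination pixels.
static void distortQuad(int[] src, int srcW, int srcH,
                        int[] dst, int dstW, int dstH,
                        float[][] corners) {
    for (int y = 0; y < srcH; y++) {
        for (int x = 0; x < srcW; x++) {
            float u = x / (float) (srcW - 1);
            float v = y / (float) (srcH - 1);
            // Bilinearly interpolate the destination position of this source pixel.
            float topX = (1 - u) * corners[0][0] + u * corners[1][0];
            float topY = (1 - u) * corners[0][1] + u * corners[1][1];
            float botX = (1 - u) * corners[2][0] + u * corners[3][0];
            float botY = (1 - u) * corners[2][1] + u * corners[3][1];
            int dx = Math.round((1 - v) * topX + v * botX);
            int dy = Math.round((1 - v) * topY + v * botY);
            if (dx >= 0 && dx < dstW && dy >= 0 && dy < dstH) {
                dst[dy * dstW + dx] = src[y * srcW + x];
            }
        }
    }
}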
Here's a link to a not-so-great implementation of it in C#... it works, but it's slow. Poke around Wikipedia for the many different optimizations available for these kinds of calculations.
http://www.vcskicks.com/image-distortion.html
-Neil
You can do this in WPF using the Viewport3D control and a non-affine transform matrix. Rendering this to a bitmap again may be interesting... which I "fixed" by including an invisible <Image> control with the same image as on my textured plane. (Also, I had to work around the max texture size issues by splitting up the plane and cropping images.)
http://www.charlespetzold.com/blog/2007/08/060605.html
In my case I wanted the reverse of this (transform so that arbitrary points on the warped image become the corners of my rectangular window), which is simply the inverse of the matrix for the opposite mapping.