I am looking for some resources for writing 2D shaders (not 3D, so please no references to vertex shader code or anything like that). I'm using HLSL, but I'm having a lot of trouble finding good tutorials.
Any suggestions?
Related
I want to bake light maps for meshes along with their UV unwrapping. Is there any guidance available, such as articles, books, or research papers, on how to get started with light map baking using ray tracing? I have written a ray tracer using the Vulkan API and would like to use it to bake the light maps.
The final output will be one large light map texture containing both direct and indirect illumination.
Each mesh can then sample from the light map using its UV coordinates.
I found one paper on this here, and I wanted to know whether this is the only way to do it.
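To make the goal concrete, here is a rough sketch of the texel-space bake I have in mind. Every function below is a placeholder for something my Vulkan tracer (or the unwrapping step) would provide, not a real API.

```cpp
#include <vector>

// Rough sketch of a texel-space light map bake. All function names are
// placeholders for routines you would supply yourself; they are not a real API.
struct Texel { int x, y; float pos[3]; float normal[3]; };   // one covered light map texel
struct Color { float r, g, b; };

std::vector<Texel> rasterizeChartsToTexels();                    // rasterize UV charts into light map space
void  cosineSampleHemisphere(const float n[3], float dir[3]);    // pick a direction around the normal
Color traceRadiance(const float origin[3], const float dir[3]);  // direct + indirect, via the ray tracer

void bakeLightMap(std::vector<Color>& lightMap, int width, int samplesPerTexel)
{
    for (const Texel& t : rasterizeChartsToTexels())
    {
        Color sum{0.0f, 0.0f, 0.0f};
        for (int s = 0; s < samplesPerTexel; ++s)
        {
            float dir[3];
            cosineSampleHemisphere(t.normal, dir);   // sample the hemisphere above the surface point
            Color c = traceRadiance(t.pos, dir);     // gather incoming light along that direction
            sum.r += c.r; sum.g += c.g; sum.b += c.b;
        }
        lightMap[t.y * width + t.x] = { sum.r / samplesPerTexel,
                                        sum.g / samplesPerTexel,
                                        sum.b / samplesPerTexel };
    }
}
```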
Disclaimer: I'm not 100% sure whether this is a well-formed question, so please feel free to comment and suggest improvements. I'll be actively looking out for ways to improve this question.
I have a triangle mesh, let's say the Stanford Bunny. Now, I want to cast a ray from a source point in 3D along a 3D direction vector, and identify just the first intersection of that ray with the triangle mesh.
I already have a naive implementation cooked up. However, I'm looking for a more advanced implementation. In particular, I'll be casting many millions of rays in many directions, so I'm looking for a multi-threaded or GPU-accelerated implementation.
I have to believe that there must be some pretty complete projects online, as raycasting triangle meshes is a fundamental part of 3D computer graphics. However, I can't find anything beyond personal projects, which leads me to believe that I am using the wrong search terms, or something pretty simple along those lines.
I am looking for suggestions on existing tools that can raytrace polygonal meshes.
If all you need to do is find the distance to the mesh for millions of rays, then it might be a good idea to look up a CUDA ray tracing tutorial online. It will show you how to cast many millions of rays. In most tutorials, ray tracing is used to render to the screen through a camera matrix, but this is not necessary: simply set each ray's starting parameters (origin and 3D direction) to whatever you need, then read the results back to the CPU. Be wary of the bandwidth between the GPU and CPU, though; transferring millions of intersection points between them can make the program run exceptionally slowly.
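For reference, below is a minimal sketch of the per-ray, per-triangle test (Möller-Trumbore) that those tutorials are built around. In a real implementation it runs inside the CUDA kernel, and you keep the smallest positive t over the triangles, usually by walking a BVH rather than brute-forcing every triangle.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true and the hit distance t (in units of dir's length) if the ray hits the triangle.
bool rayTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float& t)
{
    const float eps = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < eps) return false;      // ray is parallel to the triangle
    float invDet = 1.0f / det;
    Vec3 s = sub(orig, v0);
    float u = dot(s, p) * invDet;
    if (u < 0.0f || u > 1.0f) return false;      // outside the triangle
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * invDet;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * invDet;                     // distance along the ray
    return t > eps;                              // only report hits in front of the origin
}
```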
I am trying to write a script that converts the vertex colors of a scanned .ply model to a good UV texture map so that it can be 3D painted as well as re-sculpted in another program like Mudbox.
Right now I am unwrapping the model using smart projection in Blender, and then using Meshlab to convert the vertex colors to a texture. My approach mostly works, and at first the texture seems to be converted with no issues, but when I try to use the smooth brush in Mudbox/Blender to smooth out some areas of the model after the texture conversion, small untextured polygons rise to the surface. Here is an image of the problem: https://www.dropbox.com/s/pmekzxvvi44umce/Image.png?dl=0
All of these small polygons seem to have their own UV shells separate from the rest of the mesh, they all seem to be invisible from the surface of the model before smoothing, and they are difficult or impossible to repaint in Mudbox/Blender.
I tried baking the texture in Blender as well but experienced similar problems. I'm pretty stumped so any solutions or suggestions would be greatly appreciated!
Doing some basic mesh cleanup in Meshlab (merging close vertices in particular) seems to have mostly solved the problem.
So, I'm currently developing a puzzle game of sorts, and I came upon something I'm not sure how to approach.
As you can see from the screenshot below, the text on the sides next to the main square is distorted along the diagonal of the quadrilateral. This is because this is not a screenshot of a 3D environment, but rather a 2D environment where the squares have been stretched in such a way that it looks like it's 3D.
I have tried using 3D perspective and changing depths, and while that solves the issue of the distorted sides, I was wondering if it's possible to fix this issue without using 3D perspective, mainly because the current mesh transformation scheme took a while to get right, and converting it to something that works in 3D space is extra effort that might be avoidable.
I have a feeling this is unavoidable, but I'm curious if anyone knows a solution. I'm currently using OpenGL ES 1.
Probably not the answer you wanted, but I'd go with the 3D transformation, because it will not only fix this distortion but will also simplify many other things down the road and give you opportunities for nice effects.
What you are lacking in this scene is "perspective-correct interpolation", which is slightly non-linear, and is done automatically when you provide coordinates with depth information.
It may be possible to emulate it another way (though your options are limited, since you do not have shaders available), but any workaround will likely be less efficient than using the dedicated functionality of your GPU. I recommend that you switch to using 3D coordinates.
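To make the non-linearity concrete, here is a minimal sketch (plain C++, independent of any graphics API) of what perspective-correct interpolation does for one texture coordinate between two projected vertices:

```cpp
// Assumptions: two already-projected vertices a and b, each carrying a texture
// coordinate u and its depth (clip-space w); t in [0,1] is the screen-space
// interpolation parameter. Affine interpolation of u alone is what produces the
// distortion along the diagonal; interpolating u/w and 1/w and dividing per
// pixel is the perspective-correct version.
struct Projected { float u; float w; };

float perspectiveCorrectU(const Projected& a, const Projected& b, float t)
{
    float uOverW   = (1.0f - t) * (a.u / a.w) + t * (b.u / b.w);
    float oneOverW = (1.0f - t) * (1.0f / a.w) + t * (1.0f / b.w);
    return uOverW / oneOverW;
}
```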
Actually, I just found the answer. Turns out there's a Q coordinate which you can use to play around with trapezoidal texture distortion:
texture mapping a trapezoid with a square texture in OpenGL
http://www.xyzw.us/~cass/qcoord/
http://hacksoflife.blogspot.com.au/2008/08/perspective-correct-texturing-in-opengl.html
It looks like it won't be as correct as doing it in 3D, but I suppose it will be easier for my use right now; a rough sketch of the setup is below.
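This is a minimal sketch, assuming an OpenGL ES 1.1 context with a texture already bound and enabled; the vertex positions and q values are illustrative. The idea is to give each vertex a 4-component texture coordinate (s*q, t*q, 0, q), with q chosen per vertex in proportion to the length of the parallel edge it sits on, so the per-fragment divide by q removes the seam along the diagonal.

```cpp
// Trapezoid whose top edge is half the length of the bottom edge.
GLfloat positions[4 * 2] = {
    -1.0f, -1.0f,    // bottom-left  (long edge)
     1.0f, -1.0f,    // bottom-right (long edge)
     0.5f,  1.0f,    // top-right    (short edge)
    -0.5f,  1.0f,    // top-left     (short edge)
};

// Bottom edge is twice as long as the top, so its vertices get q = 2 and the
// top vertices get q = 1 (only the ratio matters). Each entry is (s*q, t*q, r, q).
GLfloat texcoords[4 * 4] = {
 // s*q,  t*q,  r,    q
    0.0f, 0.0f, 0.0f, 2.0f,   // (s, t) = (0, 0), q = 2
    2.0f, 0.0f, 0.0f, 2.0f,   // (s, t) = (1, 0), q = 2
    1.0f, 1.0f, 0.0f, 1.0f,   // (s, t) = (1, 1), q = 1
    0.0f, 1.0f, 0.0f, 1.0f,   // (s, t) = (0, 1), q = 1
};

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, positions);
glTexCoordPointer(4, GL_FLOAT, 0, texcoords);   // 4 components, so the divide by q happens per fragment
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
```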
I'm starting to code my first game, and I want to make a simple 2D sprite game. However, I also want to simulate 3D space and physics, and I'm searching for tutorials, guides, or algorithms that would teach me the basics... but so far without luck.
Do you have any recommendations? Books? I don't care about the programming language; any language will do, as I can read algorithms in most languages. For a start I just want to understand existing solutions to the 3D -> 2D problem.
Thanks!
Edit: For now I am not so much looking into physics as into projecting 3D space onto 2D.
This is the best article I've found on the subject: http://www.create-games.com/article.asp?id=2138
Another great article: http://pixwiki.bafsoft.com/mags/5/articles/circle/sincos.htm
1980s game systems used parallax techniques to give a feeling of depth in 2D implementations.
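A minimal sketch of the idea (the names and factor are illustrative): each layer scrolls at a fraction of the camera's speed, so slower layers read as farther away.

```cpp
// depthFactor = 1.0 moves with the foreground; 0.0 stays fixed like a distant sky.
float parallaxOffsetX(float cameraX, float depthFactor)
{
    return -cameraX * depthFactor;   // horizontal offset at which to draw this layer
}
```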
If you're talking about the process of rendering a 3D scene as a 2D image (i.e. on a screen), then you'll want to look at perspective projections. It's quite heavy on maths, though, and involves a lot of work with transformation matrices and linear algebra.
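At its core (setting the matrix machinery aside), a perspective projection just divides by depth. A minimal sketch, assuming a point already expressed in camera space with z > 0 in front of the camera:

```cpp
struct Point3 { float x, y, z; };
struct Point2 { float x, y; };

// focalLength controls the field of view; screenCenterX/Y place the result in pixels.
Point2 project(Point3 p, float focalLength, float screenCenterX, float screenCenterY)
{
    return { screenCenterX + focalLength * p.x / p.z,
             screenCenterY - focalLength * p.y / p.z };   // y flipped: screen y grows downward
}
```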
You should make sure you're up to scratch on both linear algebra and calculus if you're planning on creating a 3D physics-based game.
If you're doing 2D, you might like to start with simple 2D physics. I'd especially recommend Box2D for that purpose! It's easy to learn and integrate, and its tutorials will teach you the basics of physics in games.
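A minimal sketch of what getting started looks like, using the classic Box2D 2.x C++ API (the header path and some calls differ between versions): a world with gravity, a static ground box, and a dynamic box stepped in a fixed-timestep loop.

```cpp
#include <box2d/box2d.h>   // older releases use <Box2D/Box2D.h>
#include <cstdio>

int main()
{
    b2Vec2 gravity(0.0f, -9.8f);
    b2World world(gravity);

    // Static ground body.
    b2BodyDef groundDef;
    groundDef.position.Set(0.0f, -10.0f);
    b2Body* ground = world.CreateBody(&groundDef);
    b2PolygonShape groundBox;
    groundBox.SetAsBox(50.0f, 10.0f);          // half-widths
    ground->CreateFixture(&groundBox, 0.0f);   // density 0 => static

    // Dynamic falling box.
    b2BodyDef boxDef;
    boxDef.type = b2_dynamicBody;
    boxDef.position.Set(0.0f, 20.0f);
    b2Body* box = world.CreateBody(&boxDef);
    b2PolygonShape shape;
    shape.SetAsBox(1.0f, 1.0f);
    box->CreateFixture(&shape, 1.0f);          // density 1

    for (int i = 0; i < 60; ++i)
    {
        world.Step(1.0f / 60.0f, 8, 3);        // fixed timestep, velocity/position iterations
        b2Vec2 p = box->GetPosition();
        std::printf("y = %f\n", p.y);
    }
}
```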
You don't say what language you're using, but from what I can tell, OpenGL and its variants exist for several common programming environments.
It provides some very powerful tools for creating 3D objects, setting viewports into the virtual 3D space, placing lights, and defining textures. It might take a couple of weeks of spare time to learn, but it spares you much of the perspective math you would otherwise need to roll your own 3D tools. There are good tutorials on the internet.
Good luck