Generating tileable 2D texture from non-tileable 3D texture

I am working on a library for procedural texture generation (https://github.com/mikera/clisk) which is starting to come together quite nicely.
I'm now trying to work out good ways of producing tileable 2D textures.
One approach that seems plausible is to map the (0,0) - (1,1) 2D texture space onto a surface within a 3D texture in such a way that the surface connects the left/right edges and the top/bottom edges of the texture (e.g. a torus). Doing so should ensure that the 2D texture is automatically tileable.
Since I already have good (non-tileable) 3D textures (perlin noise, fractal noise etc.) this seems like it would be a good way to allow the creation of tileable 2D textures from an arbitrary 3D texture.
So my questions:
Is this a valid technique?
If so, what kind of surface should I map onto in order to minimise distortion / get a good-looking tiling effect?
Any pitfalls to be aware of?

Using 3D noise for this will produce distortion; the answer is to use 4D noise, though that is not the only way - you can also make the 2D function itself tileable. A minimal sketch of the 4D method follows the links below.
Here are a couple of useful links:
http://www.gamedev.net/blog/33/entry-2138456-seamless-noise/
Introduces the 4D method
https://gamedev.stackexchange.com/questions/23625/how-do-you-generate-tileable-perlin-noise
Has multiple answers covering this: making the 2D function tileable, using 3D noise with distortion, and the 4D method
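
For illustration, here is a minimal sketch of the 4D method in C++. The `noise4` stand-in is only a placeholder so the sketch compiles; substitute your library's real 4D Perlin/simplex noise. Mapping u onto one circle and v onto another makes every 4D coordinate periodic in both parameters, so the 2D result tiles exactly:

```cpp
#include <cmath>

// Placeholder so the sketch is self-contained; replace with a real
// 4D Perlin/simplex noise function.
double noise4(double x, double y, double z, double w) {
    return std::sin(1.7 * x + 2.3 * y) * std::cos(1.3 * z - 0.7 * w);
}

// Tileable 2D sample at (u, v) in [0,1): u walks one circle (x, y) and
// v walks another (z, w), so the texture wraps seamlessly in both axes.
double tileableNoise(double u, double v, double r1 = 1.0, double r2 = 1.0) {
    const double tau = 6.283185307179586; // 2*pi
    double x = r1 * std::cos(tau * u), y = r1 * std::sin(tau * u);
    double z = r2 * std::cos(tau * v), w = r2 * std::sin(tau * v);
    return noise4(x, y, z, w);
}
```

The radii r1 and r2 control feature scale: larger radii sweep a bigger loop through the noise domain, giving higher-frequency detail per tile.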

Related

Best way to project an arbitrary 2D polygon onto a 3D triangle mesh?

What is the best way to project an arbitrary 2D polygon onto a 3D triangle mesh?
To make things clearer, here is the setup: the triangle mesh represents terrain and can thus be considered 2.5D. I want to be able to treat the projected polygon as a separate object.
This particular implementation is done in WebGL and three.js but any solution that fits an interactive 3D application is of interest.
If your question is not simply how to texture-map the surface, then you really have to generate new 3D polygons.
You will be using some projection mechanism (such as a parallel projection) that turns your 3D problem into a 2D one.
First back-project the surface onto the polygon plane, so the polygon is overlaid on a corresponding 2D mesh. Then, for every facet, find the intersection (in the Boolean sense) of the facet and the polygon.
You will need polygon-intersection machinery for that purpose, such as the Weiler-Atherton or Sutherland-Hodgman clipping algorithms (the latter is much simpler, but works on convex clip windows only). (Also check http://www.angusj.com/delphi/clipper.php)
After clipping, project the result back onto the original facet plane.
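
As a rough illustration of the clipping step, here is a minimal Sutherland-Hodgman sketch in C++ (all names are invented for this sketch; it assumes the convex clip window is wound counter-clockwise):

```cpp
#include <utility>
#include <vector>

struct Vec2 { double x, y; };

// > 0 when p lies to the left of the directed edge a->b (CCW clip polygon).
static double side(const Vec2& a, const Vec2& b, const Vec2& p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// Intersection of segment p->q with the infinite line through a->b.
static Vec2 cut(const Vec2& a, const Vec2& b, const Vec2& p, const Vec2& q) {
    double t = side(a, b, p) / (side(a, b, p) - side(a, b, q));
    return { p.x + t * (q.x - p.x), p.y + t * (q.y - p.y) };
}

// Clip `subject` (a projected facet) against the convex polygon `clip`,
// one clip edge at a time.
std::vector<Vec2> clipPolygon(std::vector<Vec2> subject,
                              const std::vector<Vec2>& clip) {
    for (size_t i = 0; i < clip.size(); ++i) {
        const Vec2& a = clip[i];
        const Vec2& b = clip[(i + 1) % clip.size()];
        std::vector<Vec2> out;
        for (size_t j = 0; j < subject.size(); ++j) {
            const Vec2& p = subject[j];
            const Vec2& q = subject[(j + 1) % subject.size()];
            bool pin = side(a, b, p) >= 0;
            bool qin = side(a, b, q) >= 0;
            if (pin) out.push_back(p);                       // keep inside vertex
            if (pin != qin) out.push_back(cut(a, b, p, q));  // edge crossing
        }
        subject = std::move(out);
    }
    return subject;
}
```

If the polygon to project is concave, either decompose it into convex pieces first or switch to Weiler-Atherton, as noted above.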

3D graphics from scratch

What is the minimum I need to build 3D graphics from scratch? For example, I have only SFML for working with 2D graphics, and I need to implement a Camera object that can move and rotate in space.
Where do I start, and how do I implement vector3d -> vector2d conversion functions and other necessary things?
All I have for now is:
angles Phi and Xi, vectors epsilon1-3, and some object that I can draw on the screen with the following formula:
x = center.x + scale.x * dot(point[i], epsilon1)
y = center.y + scale.y * dot(point[i], epsilon2)
But this way I'm just transforming the "world" axes, not the object's points.
First you need to implement transform matrices and vector math (a minimal sketch follows the links below):
Mathematically compute a simple graphics pipeline
Understanding 4x4 homogenous transform matrices
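To make the starting point concrete, here is a minimal sketch of the camera math those links explain (the single-yaw camera and all names are simplifications invented for this sketch; a full implementation would use 4x4 homogeneous matrices):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Minimal pinhole camera: translate into camera space, rotate about the
// vertical axis, then perspective-divide. This transforms the object's
// points by the camera, rather than just re-deriving the world axes.
struct Camera {
    Vec3 pos{0, 0, 0};   // camera position in world space
    double yaw = 0.0;    // rotation around the y axis, in radians
    double focal = 1.0;  // focal length; controls the field of view

    // Projects p to normalized screen coordinates; returns false if the
    // point is behind the camera. The caller maps (sx, sy) to pixels via
    // center + scale, as in the formula above.
    bool project(const Vec3& p, double& sx, double& sy) const {
        double x = p.x - pos.x, y = p.y - pos.y, z = p.z - pos.z;
        double c = std::cos(-yaw), s = std::sin(-yaw); // inverse rotation
        double rx =  c * x + s * z;
        double rz = -s * x + c * z;
        if (rz <= 0.0) return false;  // behind the camera plane
        sx = focal * rx / rz;         // perspective divide
        sy = focal * y  / rz;
        return true;
    }
};
```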
The rest depends on the kind of rendering you want to achieve:
boundary polygonal mesh rendering
This kind of rendering is what today's gfx cards natively do. You need to implement buffers for:
depth (for filled polygons without z-sorting)
screen (to avoid flickering; also serves as the canvas)
shadow, stencil, aux (for advanced rendering techniques)
They usually have the same resolution as the target rendering area. On top of this you need to implement rendering of at least the supported primitives point, line, and triangle; see:
Algorithm to fill triangle
On top of all this you can add textures, shaders, and whatever else you want ...
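
To give a feel for the rasterizer core, here is a minimal triangle fill with a depth test in C++ (the names, the edge-function approach, and the assumption of counter-clockwise screen-space winding are choices made for this sketch):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct Pt { double x, y, z; };  // screen-space vertex with depth

// Twice the signed area of triangle (a, b, (px,py)); the sign tells the side.
static double edgeFn(const Pt& a, const Pt& b, double px, double py) {
    return (b.x - a.x) * (py - a.y) - (b.y - a.y) * (px - a.x);
}

// Fill triangle abc into `color`, honoring the `depth` buffer; both buffers
// are w*h, with depth pre-filled with a large value each frame. Assumes CCW
// winding; a robust version would handle both orientations.
void fillTriangle(const Pt& a, const Pt& b, const Pt& c, uint32_t rgba,
                  std::vector<uint32_t>& color, std::vector<double>& depth,
                  int w, int h) {
    double area = edgeFn(a, b, c.x, c.y);
    if (area <= 0) return;  // degenerate or back-facing
    int x0 = std::max(0, (int)std::floor(std::min({a.x, b.x, c.x})));
    int y0 = std::max(0, (int)std::floor(std::min({a.y, b.y, c.y})));
    int x1 = std::min(w - 1, (int)std::ceil(std::max({a.x, b.x, c.x})));
    int y1 = std::min(h - 1, (int)std::ceil(std::max({a.y, b.y, c.y})));
    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x) {
            double px = x + 0.5, py = y + 0.5;       // sample pixel centers
            double w0 = edgeFn(b, c, px, py) / area; // barycentric weights
            double w1 = edgeFn(c, a, px, py) / area;
            double w2 = edgeFn(a, b, px, py) / area;
            if (w0 < 0 || w1 < 0 || w2 < 0) continue;  // outside triangle
            double z = w0 * a.z + w1 * b.z + w2 * c.z; // interpolate depth
            if (z < depth[y * w + x]) {                // depth test
                depth[y * w + x] = z;
                color[y * w + x] = rgba;
            }
        }
}
```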
(back)ray tracing
This kind of rendering is very different, and current gfx HW is not built for it. It involves implementing ray/primitive intersection computations, Snell's law, and an analytical representation of meshes. This way you can also do multi-spectral rendering and more physically accurate effects/processes; see:
How can I render an 'atmosphere' over a rendering of the Earth in Three.js? hybrid approach #1+#2
Algorithm for 2D Raytracer
How to implement 2D raycasting light effect in GLSL
Multi-Band Image raster to RGB
The difference between a 2D and a 3D ray tracer is almost none; the only real difference is how to compute the perpendicular vector ...
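
As a taste of the intersection math, here is a minimal analytic ray/sphere test in C++, the simplest of the ray/primitive intersections mentioned above (names are invented for this sketch; `d` must be normalized):

```cpp
#include <cmath>

struct V3 { double x, y, z; };

static double dot(const V3& a, const V3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Solve |o + t*d - c|^2 = r^2 for the nearest t >= 0. With d normalized
// this reduces to the quadratic t^2 + 2*b*t + (|oc|^2 - r^2) = 0.
bool raySphere(const V3& o, const V3& d, const V3& c, double r, double& t) {
    V3 oc{ o.x - c.x, o.y - c.y, o.z - c.z };
    double b = dot(oc, d);
    double disc = b * b - (dot(oc, oc) - r * r);
    if (disc < 0) return false;   // the ray misses the sphere
    double s = std::sqrt(disc);
    t = -b - s;                   // try the near root first
    if (t < 0) t = -b + s;        // origin is inside the sphere
    return t >= 0;
}
```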
There are also different rendering methods like volume rendering, hybrid methods, and others, but their implementation is usually task-oriented, and a generic description would most likely just mislead ... Here are some 3D ray tracers of mine:
back raytrace through 3D mesh
back raytrace through 3D volume

Fastest way to draw to screen with software 3D rendering

I'm currently taking a course on polygonal 3D rendering from scratch. We write our own line-drawing and clipping algorithms, which are eventually used to draw polygons in 3D space using code for perspective transformations that we write ourselves. The assumption of the course is that we write to 2D arrays that represent the window, viewport, or display device. In the first week of the course we wrote code to write out these 2D arrays as bitmap files so we could view the output.
Now I want to see the output of my software renderer in real-time and interact with it. What is the fastest way to draw a 2D bitmap array to the screen, in Mac OSX 10.9 for example? Linux? Windows?
I'm specifically looking for speed here, as the only thing that I want the GPU to do is draw the 2D array that I just rendered in main memory at runtime.
Leaving the initialization step aside, it should be OpenGL rendering of the bitmap on a screen-aligned quad (see: What's the best way to draw a fullscreen quad in OpenGL 3.2?). The only costly operation will be uploading the bitmap, but that is unavoidable anyway.
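As a rough sketch of the upload step (assuming a texture created once at initialization with glTexImage2D and a quad/shader pair already set up; `uploadFrame` is a hypothetical helper, and the GL header path varies by platform):

```cpp
#include <GL/gl.h>   // <OpenGL/gl.h> on Mac OS X
#include <cstdint>

// Stream one CPU-rendered RGBA frame into an existing w x h texture.
// glTexSubImage2D updates the pixels without reallocating the texture.
void uploadFrame(GLuint tex, const uint32_t* pixels, int w, int h) {
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  // rows are tightly packed
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    // ...then draw the screen-aligned quad that samples this texture.
}
```

One could stream faster still by double-buffering the upload through pixel buffer objects, but the plain glTexSubImage2D path above is usually fast enough for a software renderer's output.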

How can 3D shape data be projected onto your 2D screen in computer graphics?

So correct me if I'm wrong, but I think all elements in 3D graphics are meshes.
So the question is really: how do you take mesh data and create a 2D projection based on the mesh data, the camera location, the rotations of the camera and mesh, etc.?
I realize this is fairly complicated and I would be satisfied by just knowing what the technical term for this is called so I may search and research it.
You can read about 3D projection on Wikipedia.

How to draw 3D images?

I am working on a simple 3D software renderer, but one thing I'm not sure about is how to actually draw it all on the screen. What could I use to draw a wireframe cube?
I am not asking HOW to write a complete 3D pipeline, just the final step: the actual drawing on the screen.
Edit: I think I could do that with SDL.
You need to project the 3D object onto the 2D screen using a perspective transformation matrix.
This will generate a set of 2D lines etc., which are then drawn in the same way as "normal" 2D lines.
However, without more information about the language and/or framework you are using, it's not easy to go into any more detail.
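For a concrete (if simplified) example, here is a sketch that reduces a unit cube to 2D line segments via a plain perspective divide; the segments can then be drawn with whatever 2D line call your framework provides, e.g. SDL2's SDL_RenderDrawLine. All names here are invented for the sketch, and the camera is fixed at the origin looking down +z:

```cpp
#include <vector>

struct Seg { double x0, y0, x1, y1; };

// Project the 12 edges of a unit cube to 2D segments. `zoff` pushes the
// cube in front of the camera and must exceed 1 so every z stays positive.
std::vector<Seg> cubeWireframe(double cx, double cy, double scale, double zoff) {
    static const double v[8][3] = {
        {-1,-1,-1},{ 1,-1,-1},{ 1, 1,-1},{-1, 1,-1},   // back face
        {-1,-1, 1},{ 1,-1, 1},{ 1, 1, 1},{-1, 1, 1} }; // front face
    static const int e[12][2] = {
        {0,1},{1,2},{2,3},{3,0},   // back-face edges
        {4,5},{5,6},{6,7},{7,4},   // front-face edges
        {0,4},{1,5},{2,6},{3,7} }; // connecting edges
    auto project = [&](const double* p, double& sx, double& sy) {
        double z = p[2] + zoff;       // move into view
        sx = cx + scale * p[0] / z;   // perspective divide
        sy = cy + scale * p[1] / z;
    };
    std::vector<Seg> out;
    for (const auto& edge : e) {
        Seg s;
        project(v[edge[0]], s.x0, s.y0);
        project(v[edge[1]], s.x1, s.y1);
        out.push_back(s);
    }
    return out;
}
```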
For the "actual drawing on screen" in Windows XP of your software-rendered wireframe 3D, call StretchDIBits with a pointer to the array of bytes that represents your pixels. This answer addresses maximum convenience; maximum efficiency is another matter.
