I'm working on a Three.js project that creates procedurally generated meshes at run-time.
I need to apply texture mapping to these shapes. For simple shapes with a definite top, bottom and sides, I understand I can use planar mapping, which is fine.
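(For reference, this is roughly what I mean by planar mapping: a minimal sketch assuming a three.js BufferGeometry, where the helper name is my own and each vertex is projected onto the XZ plane to get its UV.)

```ts
import * as THREE from 'three';

// Illustrative helper (not a three.js API): project each vertex straight down
// onto the XZ plane and normalise into [0, 1] to get top-down planar UVs.
function assignPlanarUVs(geometry: THREE.BufferGeometry): void {
  geometry.computeBoundingBox();
  const box = geometry.boundingBox!;
  const size = new THREE.Vector3();
  box.getSize(size);

  const pos = geometry.getAttribute('position');
  const uvs = new Float32Array(pos.count * 2);

  for (let i = 0; i < pos.count; i++) {
    uvs[2 * i]     = (pos.getX(i) - box.min.x) / (size.x || 1); // u from x
    uvs[2 * i + 1] = (pos.getZ(i) - box.min.z) / (size.z || 1); // v from z
  }
  geometry.setAttribute('uv', new THREE.BufferAttribute(uvs, 2));
}
```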
However, what about more complex shapes, e.g. a torus knot? For example, I want it to look like there is grass on the 'top' and rock at the 'bottom'.
I want to apply the texturing so there are no obvious seams, I'm not too worried about stretching as I'm not looking for super-realism.
Also, I'd rather not have to use shaders that blend multiple textures.
Any thoughts appreciated.
I draw graphics for my program in CorelDRAW (X6), then export them as SVG files, and my program uses these SVG files.
Let's say I draw an "arrow" in CorelDRAW. It consists of a tip and a line. I need to show this arrow in my program, but I need the "tip" part to be non-scalable, while the "line" should be scalable.
The simplest solution that works is to split the arrow into two parts and convert the "tip" part to a bitmap when the program starts. But that takes too much time for complex pictures.
So I wonder: is it possible in the SVG format to say that this part should not be scaled, while this other part should? And how can this be exported from CorelDRAW? I found something suitable in CorelDRAW for playing with the scale of different parts of a picture, but during export to SVG all my definitions were lost.
Unfortunately, there is no concept of a non-scaling element. At my last job, I worked with the SVG working group to try to get this feature introduced (non-scaling elements are really useful in engineering drawings), and it is on the roadmap for SVG 2.
The issue is SVG-ISSUE-2400.
The way to do this for now is to handle a zoom event that dynamically rescales the non-scaling elements whenever the zoom level changes.
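For example, the rescaling could look something like the following TypeScript sketch; it assumes the application tracks its own zoom factor and marks non-scaling elements with a class such as "non-scaling" (both are conventions invented here, not part of SVG).

```ts
// Sketch: whenever the zoom level changes, apply the inverse scale to every
// element tagged as non-scaling so that it keeps its on-screen size.
function rescaleNonScalingElements(svg: SVGSVGElement, zoom: number): void {
  const elements = svg.querySelectorAll<SVGGraphicsElement>('.non-scaling');
  elements.forEach((el) => {
    // Note: this overwrites any existing transform and scales about the
    // element's own origin; a real implementation would scale about an
    // anchor point (e.g. the arrow tip) and compose with prior transforms.
    el.setAttribute('transform', `scale(${1 / zoom})`);
  });
}

// Call it from whatever zoom handler the application already has, e.g.
// onZoomChanged(zoom => rescaleNonScalingElements(svgRoot, zoom));
```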
I have been using the Gloss Library for some game programming, and have got to the point where I am having the most difficulty laying out different elements on the screen. I was wondering whether it was possible to limit a Picture type to display only a certain rectangular area of the screen. The library already has the concept of a rectangular area with the Extent type, but there does not appear to be any way to 'subtract' from pictures.
If there were a way of doing this, then it seems like creating a View type or similar that takes responsibility for a certain area of the screen (which could also contain additional views, with suitable coordinate transformations between them, etc.) would be an achievable and sensible goal. But without a way to limit drawing areas, it doesn't seem like this would be possible within the Gloss framework.
It seems that clipping is not supported in Gloss.
Nevertheless, the recursive drawing of views, each with its own relative coordinate system, does still seem to be a viable and useful goal, and I am part way through writing code for this now.
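For what it's worth, the shape of it is roughly as follows. Since the Haskell isn't finished, this is only a language-agnostic sketch written in TypeScript, with an HTML canvas standing in for Gloss's Picture; the View record is just one possible design.

```ts
// Sketch: each view owns an offset into its parent's coordinate system plus a
// list of child views; drawing walks the tree, accumulating the offsets.
interface View {
  offsetX: number;   // position relative to the parent view
  offsetY: number;
  draw: (ctx: CanvasRenderingContext2D) => void; // draws in local coordinates
  children: View[];
}

function drawView(ctx: CanvasRenderingContext2D, view: View): void {
  ctx.save();
  ctx.translate(view.offsetX, view.offsetY); // enter the view's local frame
  view.draw(ctx);
  view.children.forEach((child) => drawView(ctx, child));
  ctx.restore();
}
```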
I'm making a "dungeon master-like" game where the corridors and objects will be models. I have everything completed, but the graphics part of the game is missing. I have also made test levels without textures.
I would like to know which texture mapping would be the best for a realistic look.
I was thinking about parallax mapping for walls and doors, normal mapping for objects like treasure and boxes.
What mapping should I choose for enemies and NPCs?
I have never worked with HLSL before, so I want to be sure that I'm heading in the right direction, because I expect a lot of hard work there.
Which mapping to use depends on your tastes. But first of all, implement diffuse color mapping and per-pixel lighting. When that is working, add normal mapping. If you are still not satisfied, add parallax mapping.
Even better results than the combination of normal and parallax mapping can be achieved using DirectX 11 tessellation and displacement mapping, but this is much more GPU-intensive and may not work on older hardware.
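The shading itself would live in an HLSL pixel shader, but the core of per-pixel diffuse lighting is only a dot product; here it is as a rough TypeScript sketch (the names, and the assumption that the normal from the normal map is already in world space, are mine, not from any particular engine).

```ts
type Vec3 = { x: number; y: number; z: number };

const dot = (a: Vec3, b: Vec3) => a.x * b.x + a.y * b.y + a.z * b.z;

function normalize(v: Vec3): Vec3 {
  const len = Math.sqrt(dot(v, v)) || 1;
  return { x: v.x / len, y: v.y / len, z: v.z / len };
}

// Per-pixel Lambert term: clamp(N . L, 0, 1). In a real shader the normal
// would first be transformed from tangent space using the TBN matrix.
function diffuse(normal: Vec3, lightDir: Vec3): number {
  return Math.max(0, dot(normalize(normal), normalize(lightDir)));
}

// Final pixel color = albedo from the diffuse texture * diffuse term.
function shade(albedo: Vec3, normal: Vec3, lightDir: Vec3): Vec3 {
  const d = diffuse(normal, lightDir);
  return { x: albedo.x * d, y: albedo.y * d, z: albedo.z * d };
}
```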
I've been studying 3D graphics on my own for a while now and I want to get a greater understanding of just how everything works. What I would like to do is create a simple game without using DirectX or OpenGL. I understand most of the math, I believe, but the problem I am running up against is that I do not know how to get control of the pixels being displayed in a window.
How do I specify what color I want each pixel in my window to be?
I understand I will probably run into issues with buffers and image shearing, and probably terrible efficiency problems, but I want to create my own program so that I can see, from the very lowest level of a high-level language, how the rendering process works. I really have no idea where to start, though. I've figured out how to output BMPs, but I would like to have a running program spitting out 20+ frames per second. How do I accomplish this?
You could pick an environment that allows you to fill an array with values for pixels and display it as a bitmap. This way you come closest to poking RGB values into video memory. WPF, Silverlight, and HTML5/JavaScript can do this. If you do not make it full screen, these technologies should suffice for now.
In WPF and Silverlight, use the WriteableBitmap.
In HTML5, use the canvas element.
Then it is up to you to implement the logic to draw lines, circles, Bézier curves, and 3D projections.
This is a lot of fun and you will learn a lot.
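To give an idea of the canvas route, here is a minimal sketch (the canvas id and the test pattern are arbitrary): fill an ImageData buffer with RGBA values and push it to the screen every frame with putImageData.

```ts
// Sketch: poke RGBA bytes into an ImageData buffer and blit it each frame.
const canvas = document.getElementById('screen') as HTMLCanvasElement;
const ctx = canvas.getContext('2d')!;
const image = ctx.createImageData(canvas.width, canvas.height);

function render(time: number): void {
  const data = image.data; // one byte each for R, G, B, A per pixel
  for (let y = 0; y < canvas.height; y++) {
    for (let x = 0; x < canvas.width; x++) {
      const i = (y * canvas.width + x) * 4;
      data[i]     = (x + time / 16) & 255; // red: scrolling gradient
      data[i + 1] = y & 255;               // green
      data[i + 2] = 128;                   // blue
      data[i + 3] = 255;                   // fully opaque
    }
  }
  ctx.putImageData(image, 0, 0);
  requestAnimationFrame(render); // schedule the next frame (~60 fps)
}
requestAnimationFrame(render);
```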
I'm reading between the lines that you're more interested in having full control over the rendering process from a low level, rather than having a specific interest in how to achieve that on one specific platform.
If that's the case then you will probably get good bang for your buck looking at a library like SDL, which provides you with a frame buffer that you can render to directly but abstracts away a lot of the platform-specific issues. It has been around for quite a while and there are some good tutorials to give you an idea of whether it's the kind of thing you're looking for - see this tutorial and the subsequent one in the same series, which should be enough to get you up and running.
You say you want to create some kind of rendering engine, meaning designing your own pipeline and matrix classes, which you will use to transform 3D coordinates into 2D points.
Once you have the 2D points you've been looking for, you can, on Windows for instance, select a brush and draw your triangles while coloring them at the same time.
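To make the projection step concrete, here is a rough sketch of a simple pinhole projection in TypeScript; the field of view and the camera-space convention (camera at the origin, looking down +z) are assumptions on my part.

```ts
type Point3 = { x: number; y: number; z: number };
type Point2 = { x: number; y: number };

// Perspective-project a camera-space point (z > 0) onto a screen of the given
// size. 'fov' is the vertical field of view in radians.
function project(p: Point3, width: number, height: number, fov = Math.PI / 3): Point2 {
  const focal = (height / 2) / Math.tan(fov / 2); // image-plane distance in pixels
  return {
    x: width / 2 + (p.x / p.z) * focal,
    y: height / 2 - (p.y / p.z) * focal, // flip y: screen y grows downwards
  };
}

// Example: project a triangle, then hand the 2D points to whatever drawing
// routine you use (GDI brushes, canvas, your own rasteriser, ...).
const tri = [{ x: -1, y: 0, z: 5 }, { x: 1, y: 0, z: 5 }, { x: 0, y: 1, z: 5 }]
  .map((p) => project(p, 800, 600));
```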
I do not know why you would need bitmaps, but if you want to practice, say, texturing, you can also do that yourself, although of course on a weak computer this might cut your frames per second significantly.
If your aim is to understand how rendering works at the lowest level, this is without doubt good practice.
Jt Schwinschwiga
I'm working on a game in XNA for Xbox 360. The game has 3D terrain with a collection of static objects that are connected by a graph of links. I want to draw the links connecting the objects as lines projected on to the terrain. I also want to be able to change the colors etc. of links as players move their selection around, though I don't need the links to move. However, I'm running into issues making this work correctly and efficiently.
Some ideas I've had are:
1) Render quads to a separate render target, and use the texture as an overlay on top of the terrain. I currently have this working, generating the texture only for the area currently visible to the camera to minimize aliasing. However, I'm still getting aliasing issues -- the lines look jaggy, and the game chugs frequently when moving the camera. EDIT: it chugs all the time; I just don't have a frame rate counter on Xbox, so I only notice it when things move.
2) Bake the lines into a texture ahead of time. This could increase performance, but makes the aliasing issue worse. Also, it doesn't let me dynamically change the properties of the lines without much munging.
3) Make geometry that matches the shape of the terrain by tessellating the line-quads over the terrain. This option seems like it could help, but I'm unsure if I should spend time trying it out if there's an easier way.
Is there some magical way to do this that I haven't thought of? Is one of these paths the best when done correctly?
Your 1) is a fairly good solution. You can reduce the jagginess by filtering: first, make sure to use bilinear sampling when sampling the overlay. Then, try blurring the overlay after drawing it but before using it; if you choose a proper filter, it will remove the aliasing.
If it's taking too much time to render the overlay, try reducing its resolution. Without the antialiasing filter, that would just make it jaggier, but with a good filter, it might even look better.
I don't know why the game would chug only when moving the camera. Remember, you should have a separate camera for the overlay: orthographic, pointing down onto the terrain.
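In XNA the blur would normally be a shader pass over the render target, but to make the filtering idea concrete, here is a rough sketch of one horizontal pass of a separable box blur over an RGBA byte buffer (the buffer layout and the radius are assumptions; a Gaussian kernel would look smoother still).

```ts
// Sketch: one horizontal box-blur pass; run an equivalent vertical pass over
// the result to complete the separable blur.
function boxBlurHorizontal(src: Uint8Array, width: number, height: number, radius: number): Uint8Array {
  const dst = new Uint8Array(src.length);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      for (let c = 0; c < 4; c++) { // R, G, B, A channels
        let sum = 0;
        let count = 0;
        for (let k = -radius; k <= radius; k++) {
          const xx = Math.min(width - 1, Math.max(0, x + k)); // clamp at edges
          sum += src[(y * width + xx) * 4 + c];
          count++;
        }
        dst[(y * width + x) * 4 + c] = sum / count;
      }
    }
  }
  return dst;
}
```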
Does XNA have a shadowing library? If so, you could just pretend the lines are shadows.