Disable culling on an object - graphics

This question is actually about Unity3D, but it could just as well be asked for any engine, so I'm going to keep it as general as possible.
Suppose I have a scene with a camera (near = 0.3, far = 1000, fov = 60) and I want to draw a skydome that is 10000 units in radius.
The object is not culled by the camera's frustum, because I'm inside the dome, but the vertices are still being clipped somewhere in the pipeline (by some shader, it seems) and part of the dome is missing from the end result.
Now my question is:
What setting (in Unity or any other engine) can I change to make sure that the complete object is drawn and not clipped by the far plane of the camera?
What I don't want is:
Change the far plane to 10000, because it makes the frustum less accurate
Change the near plane, because my game is actually on a very low scale
Change the scale of the dome, because this setting looks very realistic

I do not know how to do this in Unity, but in DirectX and in OpenGL you switch off the z-buffer (both depth testing and depth writing) and draw the skybox first.
Then you switch the z-buffer back on and draw the rest of the scene.
My guess is that Unity can do all this for you.
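For reference, a minimal sketch of that draw order in plain fixed-function OpenGL might look like this; draw_skydome() and draw_scene() are placeholders, and the sky geometry is assumed to be scaled down far enough to fit inside the frustum (with depth testing off, its actual size no longer matters visually):

glDisable(GL_DEPTH_TEST);   /* no depth comparisons for the sky */
glDepthMask(GL_FALSE);      /* and no depth writes either */
draw_skydome();             /* placeholder for your sky draw call */

glEnable(GL_DEPTH_TEST);    /* restore normal depth behaviour */
glDepthMask(GL_TRUE);
draw_scene();               /* placeholder for the rest of the scene */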

I have two solutions for my own problem. The first one doesn't solve everything. The second does, but is against my own design principles.
Changing the shader's z-writing, which is a great solution from @Erno, was not possible for me because the shaders used are 3rd party.
Option 1
Just before the object is rendered, set the far plane to 100,000 and set it back to 1000 after drawing the sky.
Problem: The depth buffer is now filled with values ranging from very small up to 100,000. This decreases the accuracy of the depth buffer and causes z-fighting as well as problems with post-effects that depend on the depth buffer.
Option 2
Create two cameras that are linked to each other. Camera 1 renders the skydome first with a setting of far = 100000, near = 100. Camera 2 clears the depth buffer and draws the rest of the scene with a setting of far = 1000, near = 0.3. The depth buffer doesn't contain big values now, so that solves the problems of inaccurate depth buffers.
Problem: The cameras have to be linked by some polling system, because there are no change events on the camera class (e.g. when the FoV changes). I like the fact that there is only one camera, but that doesn't seem to be easily achievable.
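For what it's worth, the same two-pass idea can be expressed in raw OpenGL terms; set_projection(), draw_skydome() and draw_scene() are placeholder helpers, not Unity API:

/* Pass 1: sky camera -- large far plane, only the skydome. */
set_projection(60.0f, aspect, 100.0f, 100000.0f);   /* placeholder: fov, aspect, near, far */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
draw_skydome();

/* Pass 2: scene camera -- normal near/far, depth cleared so the
   sky pass cannot interfere with the scene's depth precision. */
set_projection(60.0f, aspect, 0.3f, 1000.0f);
glClear(GL_DEPTH_BUFFER_BIT);   /* keep the sky's colour, discard its depth */
draw_scene();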

Related

Three.js ParticleSystem flickering with large data

Back story: I'm creating a Three.js-based 3D graphing library, similar to sigma.js but 3D. It's called graphosaurus and the source can be found here. I'm using Three.js with a single particle representing each node in the graph.
This was the first task I had to deal with: given an arbitrary set of points (that each contain X,Y,Z coordinates), determine the optimal camera position (X,Y,Z) that can view all the points in the graph.
My initial solution (which we'll call Solution 1) involved calculating the bounding sphere of all the points and then scaling it to a sphere of radius 5 around the point (0,0,0). Since the points are then guaranteed to always fall in that area, I can set a static position for the camera (assuming the FOV is static) and the data will always be visible. This works well, but it either requires changing the point coordinates the user specified or duplicating all the points, neither of which is great.
My new solution (which we'll call Solution 2) involves not touching the coordinates of the inputted data, but instead just positioning the camera to match the data. I encountered a problem with this solution: for some reason, when dealing with really large data, the particles seem to flicker when positioned in front of or behind other particles.
Here are examples of both solutions. Make sure to move the graph around to see the effects:
Solution 1
Solution 2
You can see the diff for the code here
Let me know if you have any insight on how to get rid of the flickering. Thanks!
It turns out that my near value for the camera was too low and the far value was too high, resulting in "z-fighting". By narrowing these values for my dataset, the problem went away. Since my dataset is user-dependent, I need an algorithm that generates these values dynamically.
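One way to derive those planes dynamically is to clamp them around the bounding sphere of the data. A minimal sketch (the names are illustrative, not taken from the graphosaurus code):

/* Fit near/far tightly around a bounding sphere seen from the camera,
 * so the depth range is no wider than the data actually needs. */
void fit_near_far(float cam_to_center, float radius,
                  float *near_plane, float *far_plane)
{
    float near_p = cam_to_center - radius;
    float far_p  = cam_to_center + radius;

    if (near_p < 0.1f)        /* never let the near plane reach zero */
        near_p = 0.1f;
    if (far_p <= near_p)      /* degenerate case: camera inside the sphere */
        far_p = near_p + 1.0f;

    *near_plane = near_p;
    *far_plane  = far_p;
}

In Three.js the resulting values would go into the camera's near and far properties, followed by a call to updateProjectionMatrix().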
I noticed that in Solution 2 the flickering only occurs when the camera is moving. One possible reason is that, when the camera position is changing rapidly, different transforms get applied to different particles. So if the camera moves from X to X + DELTAX during a time step, one set of particles gets the camera transform for X while the others get the transform for X + DELTAX.
If you separate your rendering from the user interaction, that should fix the issue, assuming this is the issue. That means applying the same transform to all the particles and the edges connecting them, by locking (not updating) the transform matrix until the rendering loop is done.

How to construct ground surface of infinite size in a 3D CAD application?

I am trying to create an application similar in UI to Sketchup. As a first step, I need to display the ground surface stretching out in all directions. What is the best way to do this?
Options:
Create a sufficiently large regular polygon stretching out in all directions from the origin. Here there is a possibility of the user hitting the edges and falling off the surface of the earth.
Model the surface of the earth as a sphere/spheroid. Here I will be limiting my vertex coordinates to very large values prone to round-off errors (the radius of the earth is 6,371,000,000 millimeters).
Same as 1 but dynamically extend the ends of the earth as the user gets close to them.
What is the usual practice?
I guess you would do neither of these, but instead use a virtual ground.
So you just find out what portion of the ground is visible in the viewport and then create a plane large enough to fill it, with some reasonable maximum distance that simulates the end of the line of sight, i.e. the horizon as we know it.
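As a rough sketch of that idea: cast a ray from the camera through each viewport corner and intersect it with the ground plane y = 0. The vector type and the maximum distance are assumptions for illustration:

typedef struct { float x, y, z; } Vec3;

/* Intersect a camera ray with the ground plane y = 0.
 * Returns 1 and writes the hit point if the ray goes towards the ground,
 * 0 if it is parallel to the plane or points above the horizon. */
int ray_hit_ground(Vec3 origin, Vec3 dir, float max_dist, Vec3 *hit)
{
    if (dir.y >= 0.0f)            /* looking at or above the horizon */
        return 0;

    float t = -origin.y / dir.y;  /* parameter where the ray reaches y = 0 */
    if (t > max_dist)             /* clamp to the simulated horizon */
        t = max_dist;

    hit->x = origin.x + t * dir.x;
    hit->y = 0.0f;
    hit->z = origin.z + t * dir.z;
    return 1;
}

Doing this for the four corner rays (falling back to max_dist when a ray misses the ground) gives the rectangle the ground quad has to cover for the current frame.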

Why is collision difficult to effectively compute in graphics engines?

From the oldest games to the most modern ones, it seems like you can still see through walls or, most often, the ground from certain camera positions.
Why is collision difficult to effectively compute in graphics engines?
Is it rounding error or accumulating loss of precision that leads to a mis-rendered view?
This is not actually collision in the explicit sense. The camera position is probably not actually "inside" the wall or the ground in those situations, but it is simply very close to it.
In computer 3D graphics the camera has a concept of a near plane and a far plane. Only geometry located between these two planes will be visible, while the rest will be clipped. If you are too close to something and align the camera correctly, then chances are that some parts of the geometry will be too close to the camera as defined by the near plane and as a result that geometry will not be rendered.
Now, the distance to this near plane can be set by the developers, and it can be set to be very short - short enough to ensure that situations like these cannot occur. However, the depth buffer or z buffer that is used to determine which objects are closest to the camera during rendering, and thus which objects to render and which not to render, is closely related to the near and far plane distances.
In graphics hardware the depth buffer is represented using a fixed number of bits per pixel, for example 32 bits. These 32 bits must be enough to accurately represent the entire span between the near plane and the far plane. It is also not linear, but uses more precision closer to the camera. As a result, choosing a very small near plane distance will greatly reduce the overall precision of the depth buffer. This can cause annoying flickering throughout the entire scene wherever two objects are very close to each other.
You can read more about this issue in articles on depth buffer precision, for example section 12.040 of the OpenGL FAQ.
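To make the nonlinearity concrete: for a standard perspective projection, the window-space depth of an eye-space distance z can be written as a small function (this is the general formula, not tied to any particular engine):

/* Window-space depth in [0,1] for an eye-space distance z
 * (perspective projection). Note that half of the whole depth range
 * is already used up between the near plane and roughly 2*near. */
float depth_value(float z, float near_p, float far_p)
{
    return (1.0f / near_p - 1.0f / z) / (1.0f / near_p - 1.0f / far_p);
}

With near = 0.001 and far = 1000, for instance, everything closer than about 1 unit already maps to roughly 99.9% of the depth range, which is why a tiny near plane invites z-fighting in the rest of the scene.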
It's not only about difficulty (of course, computing collision/clipping for non-convex objects isn't easy); you also have only about ~33 ms to compute the whole frame, so some compromises have to be made (the collision mesh is not the same as the mesh you actually see). If there is no time for a precise solution that fulfils all conditions (camera distance, objects that have to stay visible, collision avoidance), you have to fall back to some "easy" solution such as letting the camera see through the wall.

Best way to move sprites in OpenGL - translate or alter vertices

I am creating an app for Android using OpenGL ES. I am trying to draw, in 2D, lots of moving sprites which bounce around the screen.
Let's say I have a ball at coordinates (100,100). The ball graphic is 10px wide, therefore I can create the vertices boundingBox = {100,110,0, 110,110,0, 100,100,0, 110,100,0} and perform the following on each loop of onDrawFrame() with the ball texture loaded.
//for each ball object
//(byteBuffer is assumed to be a direct, native-order ByteBuffer of sufficient size)
FloatBuffer ballVertexBuffer = byteBuffer.asFloatBuffer();
ballVertexBuffer.put(ball.boundingBox);
ballVertexBuffer.position(0);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, ballVertexBuffer);
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
I would then update the boundingBox array to move the balls around the screen.
Alternatively, I could leave the bounding box untouched and instead glTranslatef() the ball before drawing the vertices:
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, ballVertexBuffer);
gl.glPushMatrix();
gl.glTranslatef(ball.posX, ball.posY, 0);
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0,4);
gl.glPopMatrix();
What would be the best thing to do in this case, in terms of efficiency and best practices?
OpenGL ES (as of 2.0) does not support instancing, unfortunately. If it did, I would recommend drawing a 2-triangle sprite instanced N times, reading the x/y offsets of the center point, and possibly a scale value if you need differently sized sprites, from a vertex texture (which ES supports just fine). This would keep the amount of data you must push per frame to a minimum.
Assuming you can't do the simulation directly on the GPU (thus avoiding uploading the vertex data each frame), this basically leaves you with only one efficient option:
Generate 2 VBOs; map and fill one while the other is used as the source of the draw call. You can also do this with what looks like a single buffer if you call glBufferData(..., NULL, ...) in between, which tells OpenGL to allocate new storage and throw the old one away as soon as it's done reading from it.
Streaming vertices in every frame may not be super fast, but this does not matter as long as the latency can be well-hidden (e.g. by drawing from one buffer while filling another). Few draw calls, few state changes, and ideally no stalls should still make this fast.
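A sketch of the single-buffer (orphaning) variant in ES 2.0-style C; sprite_vbo, vertex_data, vertex_bytes and vertex_count are assumed to be prepared elsewhere each frame:

/* Stream this frame's sprite vertices. Passing NULL to glBufferData
 * "orphans" the old storage: the driver can keep reading last frame's
 * data from it while we fill fresh memory, so the upload does not stall. */
glBindBuffer(GL_ARRAY_BUFFER, sprite_vbo);
glBufferData(GL_ARRAY_BUFFER, vertex_bytes, NULL, GL_STREAM_DRAW);   /* orphan */
glBufferSubData(GL_ARRAY_BUFFER, 0, vertex_bytes, vertex_data);      /* refill */
glDrawArrays(GL_TRIANGLES, 0, vertex_count);                         /* one call */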
Draw calls are much more expensive than altering the data. Also, glTranslate is not nearly as efficient as just adding a few numbers yourself; after all, it has to go through a full 4×4 matrix multiplication, which is 64 scalar multiplies and 48 scalar additions.
Of course the best method is using some form of instancing.

SDL: FPS problems with simple bitmap

I am currently working on a game in SDL which has destructible terrain. At the moment the terrain is one large (5000*500, for testing) bitmap which is randomly generated.
Each frame the main surface is cleared and the terrain bitmap is blitted onto it. The current resolution is 1200 * 700, so during testing roughly 1200 * 500 pixels of the terrain were visible most of the time.
Now the problem is: the FPS is already dropping! I thought one simple bitmap shouldn't have any noticeable effect, but I am already dropping to ~24 FPS with this!
Why is blitting & drawing a bitmap of that size so slow?
Am I taking a false approach at destructible terrain?
How have games like Worms done this? The FPS seems really high, although there are definitely a lot of pixels being drawn there.
Whenever you initialize a surface, do it the following way:
SDL_Surface* mySurface;
SDL_Surface* tempSurface;
tempSurface = IMG_Load("./path/to/image/image.jpg_or_whatever");
/* IMG_Load() comes from the SDL_image library; for plain BMP files you can use SDL_LoadBMP() instead. */
mySurface = SDL_DisplayFormat(tempSurface);
SDL_FreeSurface(tempSurface);
The SDL_DisplayFormat() function converts the pixel format of your surface to the format the video surface uses. If you don't do it the way I described above, SDL does this conversion each time the surface is blitted.
And always remember: only blit the parts that are actually visible to the player.
That's my first guess as to why you are having performance problems. Post your code or ask more specific questions if you want more tips. Good luck with your game.
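For the "only blit what is visible" part, a minimal SDL 1.2 sketch; camera_x/camera_y, SCREEN_W/SCREEN_H and the two surfaces are assumptions standing in for your own variables:

/* Blit only the on-screen window of the 5000x500 terrain, not the whole bitmap. */
SDL_Rect src = { camera_x, camera_y, SCREEN_W, SCREEN_H };  /* visible region of the terrain */
SDL_Rect dst = { 0, 0, 0, 0 };                              /* top-left corner of the screen */
SDL_BlitSurface(terrain, &src, screen, &dst);
SDL_Flip(screen);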
If you redraw the whole screen each frame you will always get a bad FPS. You should redraw only the parts of the screen that have changed. You can also try SDL_HWSURFACE to use hardware surfaces, but it won't work on every graphics card.
2D in SDL is pretty slow and there isn't much you can do to make it faster (on Windows, at least, it uses GDI for drawing by default). Your options are:
Go OpenGL and start using textured quads for sprites.
Try SFML. It provides a hardware-accelerated 2D environment.
Use SDL 1.3. Grab a source snapshot; it is unstable and still under development, but hardware-accelerated 2D is supposed to be one of its main selling points.
