When rendering points with a large point size under desktop OpenGL, clipping is done against the point's center: if the center is off screen, the whole point is culled, causing it to disappear even though part of the rasterized point would still be visible. (FYI: this only happens on conforming desktop OpenGL implementations; some desktop GL implementations ignore this rule.)
OpenGL ES, however, explicitly doesn't do this. Point sprites are only clipped against the near/far planes, not against the sides of the view volume. Direct3D follows the same rules.
Which rules does Apple's Metal follow?
Currently, on OS X on AMD and Nvidia GPUs, points are culled when their center falls outside the clip volume. On iOS, and OS X on Intel GPUs, points are clipped to the edges of the volume and culled only when they fall entirely outside. As this is not documented (to the best of my knowledge), it may be subject to change.
This is a question about understanding the principles of GPU-accelerated rendering of 2D vector graphics.
With Skia or Direct2D, you can draw e.g. rounded rectangles, Bezier curves, polygons, and also have some effects like blur.
Skia / Direct2D offer CPU and GPU based rendering.
For the CPU rendering, I can imagine more or less how e.g. a rounded rectangle is rendered. I have already seen a lot of different line rendering algorithms.
But for GPU, I don't have much of a clue.
Are rounded rectangles composed of triangles?
Are rounded rectangles drawn entirely by wild pixel shaders?
Are there some basic examples that could show me the basic principles of how such things work?
(Probably, the solution could also be found in the source code of Skia, but I fear that it would be so complex / generic that a noob like me would not understand anything.)
In the case of Direct2D, there is no source code, but since it uses D3D10/11 under the hood, it's easy enough to see what it does behind the scenes with RenderDoc.
Basically, D2D's policy is to minimize draw calls by trying to fit any geometry type into a single buffer, whereas Skia has dedicated shader sets depending on the shape type.
So, for example, if you draw a Bezier path, Skia will try to use a tessellation shader if possible, which requires a new draw call if the previous element you were rendering was a rectangle, since the pipeline state changes.
D2D, on the other hand, tends to tessellate on the CPU and push the result into a vertex buffer, issuing a new draw call only if you change brush type (switching from one solid color brush to another keeps the same shaders, so there is no switch), when the buffer is full, or when you switch from shapes to text (since it then needs to send texture atlases).
Please note that when tessellating a Bezier path, D2D does a very good job of making the resulting geometry non-self-intersecting (so alpha blending works properly even on complex self-intersecting paths).
In the case of a rounded rectangle, it does the same, simply tessellating it into triangles.
This minimizes draw calls to a good extent and also allows antialiasing on a non-MSAA surface (this is done at the mesh level, with small alpha-blended triangles along the edges). The downside is that it doesn't use many hardware features, and the amount of geometry emitted can be quite high, even for seemingly simple shapes.
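To make the CPU tessellation step concrete, here is a minimal sketch (my own illustration, not D2D's actual algorithm) of turning a rounded rectangle into a triangle list: sample the four corner arcs to build the outline, then fan the outline around the rectangle's center. The antialiasing fringe triangles mentioned above are omitted.

```cpp
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

// Tessellate a rounded rectangle into a triangle list by fanning the outline
// around its center. 'segments' controls how finely each 90-degree corner arc
// is sampled. Coordinates assume y grows downward (screen space).
std::vector<Vec2> TessellateRoundedRect(float x, float y, float w, float h,
                                        float radius, int segments = 8)
{
    const float pi = 3.14159265358979f;
    // Corner arc centers (top-right, bottom-right, bottom-left, top-left)
    // and the start angle of each 90-degree arc.
    const Vec2 centers[4] = {
        { x + w - radius, y + radius },
        { x + w - radius, y + h - radius },
        { x + radius,     y + h - radius },
        { x + radius,     y + radius },
    };
    const float startAngles[4] = { -pi / 2, 0, pi / 2, pi };

    // Sample the outline: four arcs joined in order give the full boundary,
    // with the straight edges formed implicitly between consecutive arcs.
    std::vector<Vec2> outline;
    for (int c = 0; c < 4; ++c) {
        for (int i = 0; i <= segments; ++i) {
            float a = startAngles[c] + (pi / 2) * i / segments;
            outline.push_back({ centers[c].x + radius * std::cos(a),
                                centers[c].y + radius * std::sin(a) });
        }
    }

    // Fan around the center: one triangle per outline edge.
    Vec2 center = { x + w * 0.5f, y + h * 0.5f };
    std::vector<Vec2> triangles;
    for (size_t i = 0; i < outline.size(); ++i) {
        triangles.push_back(center);
        triangles.push_back(outline[i]);
        triangles.push_back(outline[(i + 1) % outline.size()]);
    }
    return triangles; // 3 vertices per triangle, ready for a vertex buffer
}
```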
Since D2D prefers triangle strips over triangle lists, it can do some really funny things when drawing a simple list of triangles.
For text, D2D uses instancing and draws one instanced quad per character. It is also good at batching those, so if you call draw-text functions several times in a row, it will try to merge them into a single call as well.
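As a rough illustration of that text path (the struct layout and names are my own, not D2D's internals), the per-character data can be as small as a screen-space rectangle plus an atlas rectangle, with a single 6-index quad drawn once per instance:

```cpp
#include <d3d11.h>

// Per-instance data for one glyph: where to place the quad on screen and
// which rectangle of the glyph atlas to sample (all fields are illustrative).
struct GlyphInstance {
    float posX, posY;      // top-left of the quad in screen space
    float width, height;   // quad size in pixels
    float u0, v0, u1, v1;  // atlas rectangle
};

// Issues one draw call for an entire run of text: a 6-index unit quad,
// instanced once per character. 'glyphCount' GlyphInstance entries are
// assumed to have been uploaded to the bound per-instance vertex buffer,
// with shaders, input layout, and the atlas SRV already bound.
void DrawTextRun(ID3D11DeviceContext* ctx, UINT glyphCount)
{
    ctx->DrawIndexedInstanced(/*IndexCountPerInstance*/ 6,
                              /*InstanceCount*/ glyphCount,
                              /*StartIndexLocation*/ 0,
                              /*BaseVertexLocation*/ 0,
                              /*StartInstanceLocation*/ 0);
}
```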
I'm displaying many overlapping icons in a Google Earth tour. I'd like to control (or at least understand) the order in which the icons are drawn (which one shows on "top"). Thanks!
P.S. Non-solutions attempted: using gx:drawOrder (it applies to overlays, but not icons); using AnimatedUpdate to establish the order chronologically; using the order in which the placemarks are introduced to establish their drawing order.
Apparently Google Earth draws the features in groups by type: polygons, then ground overlays, followed by lines and point data where drawOrder is applied only within a group. ScreenOverlays are drawn last so they are always on top.
If you define gx:drawOrder or drawOrder on a collection of features, it only applies to features of the same type (polygons with other polygons), not between different types.
That is the behavior if the features are clamped to ground. If features are at different altitudes then lower altitude layers are drawn first.
Note that the tilt angle affects the size of the icon: as the tilt approaches 90 degrees, the icon gets smaller. The icon is at its largest when viewed straight down with a 0-degree tilt angle.
I'm totally new to graphics and DirectX, I've encountered a problem, and no one around me knows graphics either. Sorry if the question seems too naive.
I use DirectX 11 to render a mesh, and I want to get a buffer for each pixel. This buffer should store a linked list (or some other structure) of all triangles that contribute color to that pixel.
Which shader, or which part of DirectX, should I be working with? Or, more simply, where can I get the triangle information in the pixel shader?
You can write the triangle ID in the pixel shader, but using the hardware z-buffer you can only capture one triangle per pixel.
With multisampled textures you can capture more triangles. This should be enough in practical situations.
If your triangles are extremely small and many of them are visible within one pixel, then you should consider an A-buffer together with your own hidden-surface-removal algorithm.
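As a sketch of the simple z-buffered case above (HLSL embedded as a C string; the render-target format choice is my assumption), the pixel shader just writes its primitive ID into an R32_UINT target:

```cpp
// HLSL pixel shader: render the scene into a DXGI_FORMAT_R32_UINT target so
// that, after the depth test, each pixel holds the ID of the frontmost
// triangle. With an MSAA target you get up to N triangle IDs per pixel.
static const char kPrimitiveIdPS[] = R"(
uint main(float4 pos : SV_Position, uint primID : SV_PrimitiveID) : SV_Target
{
    return primID;   // one triangle ID per sample survives the z-test
}
)";
```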
If you need it only for debug purposes, you can use any of these graphics debuggers:
Visual Studio Graphics Debugger (integrated since Visual Studio 2012)
For AMD GPUs: GPUPerfStudio
For NVidia GPUs: Nsight
Good old PIX from the DirectX SDK.
If you need it at runtime (BTW, why? =) ), there are a couple of options:
Use system-generated values such as SV_VertexID and SV_PrimitiveID to work out the exact primitive, or even vertex, that contributed to a pixel's color. It is tricky, but possible.
Another way is to use some kind of custom triangle ID in the vertex declaration. But be aware of culling.
You can output the final data from the pixel shader into a buffer, then read it back on the CPU (see the sketch below).
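Putting the SV_PrimitiveID and read-back ideas together, here is a hedged sketch of the A-buffer-style collection mentioned earlier: an HLSL pixel shader (kept as a C string for D3DCompile) that appends every fragment's primitive ID to a per-pixel linked list stored in UAVs. Buffer names, register slots, and the screen-width constant are my own assumptions, not a fixed API.

```cpp
#include <d3d11.h>
#include <d3dcompiler.h>

// HLSL pixel shader: for every rasterized fragment, push a node holding the
// primitive ID and depth onto a per-pixel linked list (the classic A-buffer /
// order-independent-transparency layout). Depth testing would typically be
// disabled for this pass so every triangle covering the pixel is recorded.
static const char kCollectTrianglesPS[] = R"(
#define SCREEN_WIDTH 1280

struct Node { uint primID; float depth; uint next; };

RWStructuredBuffer<Node> gNodes : register(u1); // UAV created with a hidden counter
RWByteAddressBuffer      gHead  : register(u2); // one uint head index per pixel

void main(float4 pos : SV_Position, uint primID : SV_PrimitiveID)
{
    uint nodeIndex = gNodes.IncrementCounter();            // allocate a node
    uint addr = 4 * (uint(pos.y) * SCREEN_WIDTH + uint(pos.x));

    uint prevHead;
    gHead.InterlockedExchange(addr, nodeIndex, prevHead);  // link into the list

    Node n;
    n.primID = primID;
    n.depth  = pos.z;
    n.next   = prevHead;
    gNodes[nodeIndex] = n;
}
)";

// Compile the shader above. The node/head buffers would be created as
// D3D11_USAGE_DEFAULT resources with UAVs, and read back on the CPU by
// copying them into D3D11_USAGE_STAGING buffers and calling Map().
HRESULT CompileCollectTrianglesPS(ID3DBlob** outBlob)
{
    ID3DBlob* errors = nullptr;
    HRESULT hr = D3DCompile(kCollectTrianglesPS, sizeof(kCollectTrianglesPS) - 1,
                            "collect_triangles_ps", nullptr, nullptr,
                            "main", "ps_5_0", 0, 0, outBlob, &errors);
    if (errors) errors->Release();
    return hr;
}
```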
All of these are pretty advanced topics in DirectX. I'm not sure a coder who is "totally new to graphics and DX" can solve them.
Is it possible to use color palettes in OpenGL ES 1.1?
I'm currently developing a game which has player sprites, and the player sprites need to be changeable to different teams' colors. For example, changing the shirt colors but not the face colors, which rules out simple hue rotation.
Is this possible, or will this have to be implemented manually (modifying the texture data directly)?
Keep in mind that anything other than non-mipmapped GL_NEAREST will blend between palette indices. I ended up expanding paletted textures in my decompression method before uploading them as BGRA32. (GLES 2.0)
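A minimal sketch of that kind of CPU-side expansion (function and parameter names are illustrative): an 8-bit index image plus a 256-entry BGRA palette expanded into a BGRA32 buffer that can then be uploaded with glTexImage2D. Team colors then become a matter of swapping palette entries before expanding.

```cpp
#include <cstdint>
#include <vector>

// Expand an 8-bit paletted image into BGRA32 on the CPU so it can be
// uploaded as an ordinary texture. 'palette' holds 256 BGRA entries;
// 'indices' holds one palette index per pixel.
std::vector<uint8_t> ExpandPalettedImage(const uint8_t* indices,
                                         int width, int height,
                                         const uint8_t (*palette)[4])
{
    std::vector<uint8_t> bgra(static_cast<size_t>(width) * height * 4);
    for (int i = 0; i < width * height; ++i) {
        const uint8_t* entry = palette[indices[i]];
        bgra[i * 4 + 0] = entry[0]; // B
        bgra[i * 4 + 1] = entry[1]; // G
        bgra[i * 4 + 2] = entry[2]; // R
        bgra[i * 4 + 3] = entry[3]; // A
    }
    return bgra; // pass to glTexImage2D with the appropriate BGRA/RGBA format
}
```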
It's not a hardware feature of the MBX, but a quick check of gl.h for ES 1.x from the iPhone SDK reveals that GL_PALETTE4_RGB8_OES, GL_PALETTE8_RGBA8_OES, and a bunch of others are available as constants to pass to glCompressedTexImage2D, as per the man page. So you can pass textures with palettes to that, but I'll bet anything that the driver will just turn them into RGB textures on the CPU and then upload them to the GPU. I don't believe Apple supports those types of compressed texture for any reason other than that they're part of the ES 1.x spec.
On ES 2.x you're free to do whatever you want. You could easily upload the palette as one texture (with, say, the pixel at (x, 0) being the colour for palette index x) and the paletted texture as another. You'll then utilise two texture units to do the job that one probably could do when plotting fragments, so use your own judgment as to whether you can afford that.
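For that ES 2.x route, the fragment shader is just a two-texture lookup. A hedged sketch (GLSL ES kept as a C string; uniform and varying names are mine), assuming the index texture uses GL_NEAREST filtering and the palette is a 256x1 RGBA texture:

```cpp
// Fragment shader for GLES 2.0: read a palette index from one texture and use
// it to look up the final color in a 256x1 palette texture. Both textures
// must use GL_NEAREST filtering so indices are never blended.
static const char kPaletteFragmentShader[] =
    "precision mediump float;\n"
    "uniform sampler2D uIndexTex;   // 8-bit indices in the red/luminance channel\n"
    "uniform sampler2D uPaletteTex; // 256x1 RGBA palette\n"
    "varying vec2 vTexCoord;\n"
    "void main() {\n"
    "    float index = texture2D(uIndexTex, vTexCoord).r;  // 0..1 == index/255\n"
    "    // remap so the sample lands in the centre of palette texel 'index'\n"
    "    float u = index * (255.0 / 256.0) + (0.5 / 256.0);\n"
    "    gl_FragColor = texture2D(uPaletteTex, vec2(u, 0.5));\n"
    "}\n";
```

Swapping uPaletteTex then recolors the sprite without touching the index texture, which covers the shirt-versus-face case in the question.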
I'm trying to lay the groundwork for a 2D game with destructible terrain and/or particle effects, scrolling, zooming, characters, etc. I'd like to know if there is a graphics library that supports those things with both software and hardware acceleration (I need pixel access). I've tried SDL (even with the DirectX back-end), but it seems hardware acceleration only works in full screen. I'd appreciate any suggestions.
Use OpenGL, perhaps via another library such as SDL. I don't know why you can't get windowed hardware acceleration working; it might be a platform thing (but that's certainly a different question).
Set the projection matrix to orthographic and use one of the axes (typically z) to organise the 'stacking' of elements. With an appropriate transformation in the display subroutine, you can align the x/y coordinates with "traditional" drawing (i.e. origin at the top-left with y increasing downward, rather than bottom-left up).
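A minimal fixed-function sketch of that setup (window size and depth range are arbitrary choices): an orthographic projection with top and bottom swapped so the origin is at the top-left and y increases downward, with z left free for stacking.

```cpp
#include <GL/gl.h>

// Configure a 2D view: orthographic projection with the origin at the
// top-left and y increasing downward, leaving z free for layering.
void Setup2DView(int windowWidth, int windowHeight)
{
    glViewport(0, 0, windowWidth, windowHeight);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    // left, right, bottom, top, near, far: swapping top/bottom flips y.
    glOrtho(0.0, windowWidth, windowHeight, 0.0, -1.0, 1.0);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    // With depth testing enabled, each element's z value controls stacking.
    glEnable(GL_DEPTH_TEST);
}
```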
Build your graphical elements as bitmaps, convert them into textures, and draw them on top of OpenGL rects (quads).