Suppose I have one big shader program and want only specific ranges of vertices/triangles rendered.
What are some performant ways of doing this? Which one looks most promising?
I came up with 3 methods; are there any others?
Method 1: Batching draw calls Draw(position, count) using command lists. (OpenGL has glMultiDrawArrays.)
If the ranges are known ahead of time, we don't need to worry about the time spent constructing them. However, since some ranges change unpredictably, it is probably unrealistic to keep command lists for all possibilities.
This method obviously reduces the time spent constructing the draw calls on the CPU side, and I assume these calls already tell the GPU that no state changes are needed.
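Here is roughly what I have in mind for method 1, as a C++/OpenGL sketch (the Range type, the function name, and GL_TRIANGLES are just placeholders, not my actual code):

#include <GL/glew.h>
#include <vector>

struct Range { GLint first; GLsizei count; };   // placeholder type for an active range

// Issue one glMultiDrawArrays call covering only the currently active ranges.
void drawActiveRanges(const std::vector<Range>& active)
{
    std::vector<GLint>   firsts;
    std::vector<GLsizei> counts;
    firsts.reserve(active.size());
    counts.reserve(active.size());

    for (const Range& r : active) {
        firsts.push_back(r.first);
        counts.push_back(r.count);
    }

    // One API call; the driver walks the ranges without extra state changes in between.
    glMultiDrawArrays(GL_TRIANGLES, firsts.data(), counts.data(),
                      static_cast<GLsizei>(firsts.size()));
}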
Method 2: Call Draw on the whole buffer and keep updating a per-vertex float buffer of 0/1 flags, which would multiply the output positions by 0 in inactive ranges and by 1 in active ranges.
The benefit here is only one draw call. However, we need to update the buffer and it seems like the buffer needs to be locked while we update it...
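For method 2, the host-side update could look like this (a C++/OpenGL sketch; the buffer name and float-per-vertex layout are assumptions). Only the bytes covering a changed range are re-uploaded:

#include <GL/glew.h>
#include <vector>

// Flip the visibility flags (0.0 or 1.0) for one range of vertices.
void setRangeVisible(GLuint visibilityVBO, GLint first, GLsizei count, bool visible)
{
    std::vector<float> flags(static_cast<size_t>(count), visible ? 1.0f : 0.0f);

    glBindBuffer(GL_ARRAY_BUFFER, visibilityVBO);
    // Partial update: only the affected bytes are touched, so the whole
    // buffer does not have to be locked or re-uploaded.
    glBufferSubData(GL_ARRAY_BUFFER,
                    static_cast<GLintptr>(first) * sizeof(float),
                    static_cast<GLsizeiptr>(count) * sizeof(float),
                    flags.data());
}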
Method 3: A powerset defined in a constant buffer. For n ranges, we can use one n-bit constant mask compared against a static n-bit per_vertex_mask. A vertex is visible if mask & per_vertex_mask != 0.
Constant buffers are probably cheaper to update than a whole vertex buffer. The number of ranges may however be too large for this method.
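For clarity, the visibility test for method 3 is just this (written in C++ here for illustration; in practice the same expression would run in the vertex shader against the constant-buffer mask):

#include <cstdint>

// activeRangeMask comes from the constant buffer and changes per frame;
// perVertexMask is baked into the vertex data and never changes.
bool vertexVisible(std::uint32_t activeRangeMask, std::uint32_t perVertexMask)
{
    return (activeRangeMask & perVertexMask) != 0u;   // visible if any of its ranges is active
}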
I am building a Cellular Automata visualization "game" with Rust and the BevyEngine. Currently, when initializing the world, I spawn a sprite for every cell. Within each update the sprite's color is changed according to whether it is alive or dead.
for (grid_pos, mut color_mat) in query.iter_mut() {
    let cell_state = world.0[grid_pos.1][grid_pos.0];
    match cell_state {
        0 => *color_mat = materials.dead_cell.clone(),
        1 => *color_mat = materials.alive_cell.clone(),
        _ => (),
    }
}
The thing is, when dealing with a larger and larger map, the number of sprites gets very high. So I wondered if it might be faster if I only spawn a sprite at the position of a living cell and remove it when the cell dies.
So my question is: Is it faster if I spawn a sprite for every grid position, or is it better to only spawn a sprite when a cell is alive and remove it when the cell dies?
I'm not familiar with Bevy's performance characteristics, but it is generally the case that to get good performance for drawing a cellular automaton, you don't want to store "sprite" objects for every cell; instead, keep an array of just the cell states and draw them in some kind of efficient batch form.
The state of a cellular automaton is basically an image (possibly one with rather chunky pixels). So, treat it that way: copy the cell states into a texture and then draw the texture. If you want the cells to appear fancier than little squares, program the fragment shader to do that. (For example, you can read the CA-state texture and use it to decide which of several other textures to draw within the bounds of one square.)
This may or may not be necessary to get acceptable performance for your use case, but it will certainly be faster than managing individual sprites.
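As a rough illustration of the texture approach (written in C++/OpenGL, since I don't know Bevy's API; all names here are placeholders), the per-update work reduces to one texture upload:

#include <GL/glew.h>
#include <cstdint>
#include <vector>

// cells holds one byte per cell (0 = dead, 1 = alive), width * height entries.
void uploadCellStates(GLuint texture, const std::vector<std::uint8_t>& cells,
                      GLsizei width, GLsizei height)
{
    glBindTexture(GL_TEXTURE_2D, texture);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // rows are tightly packed bytes
    // Re-upload the whole grid as a single-channel texture; the fragment
    // shader maps 0/1 to the dead/alive colours (or to fancier artwork).
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RED, GL_UNSIGNED_BYTE, cells.data());
}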
I've been trying to implement a Compute Shader based particle system.
I have a compute shader which builds a structured buffer of particles, using a UAV with the D3D11_BUFFER_UAV_FLAG_COUNTER flag.
When I add to this buffer, I check if this particle has any complex behaviours, which I want to filter out and perform in a separate compute shader. As an example, if the particle wants to perform collision detection, I add its index to another structured buffer, also with the D3D11_BUFFER_UAV_FLAG_COUNTER flag.
I then run a second compute shader, which processes all the indices, and applies collision detection to those particles.
However, in the second compute shader, I'd estimate that about 5% of the indices are wrong - they belong to other particles, which don't support collision detection.
Here's the compute shader code that performs the list building:
// append to destination buffer
uint dstIndex = g_dstParticles.IncrementCounter();
g_dstParticles[ dstIndex ] = particle;
// add to behaviour lists
if ( params.flags & EMITTER_FLAG_COLLISION )
{
    uint behaviourIndex = g_behaviourCollisionIndices.IncrementCounter();
    g_behaviourCollisionIndices[ behaviourIndex ] = dstIndex;
}
If I split out the "add to behaviour lists" bit into a separate compute shader, and run it after the particle lists are built, everything works perfectly. However I think I shouldn't need to do this - it's a waste of bandwidth going through all the particles again.
I suspect that IncrementCounter is actually not guaranteed to return a unique index into the UAV, and that there is some clever optimisation going on that means the index is only valid inside the compute shader it is used in. And thus my attempt to pass it to the second compute shader is not valid.
Can anyone give any concrete answers to what's going on here? And if there's a way for me to keep the filtering inside the same compute shader as my core update?
Thanks!
IncrementCounter is an atomic operation and so will (driver/hardware bugs notwithstanding) return a unique value to each thread that calls it.
Have you thought about using Append/Consume buffers for this, as it's what they were designed for? The first pass simply appends the complex collision particles to an AppendStructuredBuffer and the second pass consumes from the same buffer but using a ConsumeStructuredBuffer view instead. The second run of compute will need to use DispatchIndirect so you only run as many thread groups as necessary for the number in the list (something the CPU won't know).
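The host side of that would look roughly like this (a C++/D3D11 sketch; buffer and shader names are made up, and in practice a tiny compute pass usually rounds the element count up to a thread-group count before the indirect dispatch):

#include <d3d11.h>

void dispatchCollisionPass(ID3D11DeviceContext* ctx,
                           ID3D11UnorderedAccessView* collisionListUAV, // UAV of the Append buffer
                           ID3D11Buffer* indirectArgs,                  // 3 UINTs, created with D3D11_RESOURCE_MISC_DRAWINDIRECT_ARGS
                           ID3D11ComputeShader* collisionCS)
{
    // Copy the append buffer's hidden counter into the X slot of the
    // indirect-args buffer; the CPU never needs to read it back.
    ctx->CopyStructureCount(indirectArgs, 0, collisionListUAV);

    ctx->CSSetShader(collisionCS, nullptr, 0);
    ctx->DispatchIndirect(indirectArgs, 0);
}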
The usual recommendations apply though, have you tried the D3D11 Debug Layer and running it on the reference device to be sure it isn't a driver issue?
I am not able to figure out the procedure for iterative octree traversal though I have tried approaching it in the way of binary tree traversal. For my problem, I have octree nodes having child and parent pointers and I would like to iterate and only store the leaf nodes in the stack.
Also, is going for iterative traversal faster than recursive traversal?
It is indeed like binary tree traversal, but you need to store a bit of intermediate information. A recursive algorithm will not be slower per se, but it uses a bit more stack space for O(log8 N) recursive calls (about 10 levels for 1 billion elements in the octree).
Iterative algorithms will also need the same amount of space to be efficient, but you can place it on the heap if you are afraid that your stack might overflow.
Recursively you would do (pseudocode):
function traverse_rec (octree):
    collect value  // if there are values in your intermediate nodes
    for child in children:
        traverse_rec (child)
The easiest way to arrive at an iterative algorithm is to use a stack or queue for depth-first or breadth-first traversal:
function traverse_iter_dfs(octree):
    stack = empty
    push_stack(root_node)
    while not empty (stack):
        node = pop(stack)
        collect value(node)
        for child in children(node):
            push_stack(child)
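Since you only want the leaf nodes, a concrete C++ version of that loop could look like this (the Node layout is an assumption about your octree):

#include <vector>

struct Node {
    std::vector<Node*> children;   // empty for a leaf
    Node* parent = nullptr;
};

std::vector<Node*> collectLeaves(Node* root)
{
    std::vector<Node*> leaves;
    std::vector<Node*> stack;      // explicit stack instead of the call stack
    if (root != nullptr)
        stack.push_back(root);

    while (!stack.empty()) {
        Node* node = stack.back();
        stack.pop_back();

        if (node->children.empty()) {
            leaves.push_back(node);        // leaf: keep it
        } else {
            for (Node* child : node->children)
                stack.push_back(child);    // interior node: visit its children later
        }
    }
    return leaves;
}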
Replace the stack with a queue and you get breadth-first search. However, we are storing something in the region of O(7*(log8 N)) nodes which we are yet to traverse. If you think about it, that's the lesser evil though, unless you need to traverse really big trees. The only other way is to use the parent pointers when you are done in a child, and then you need to select the next sibling, somehow.
If you don't store in advance the index of the current node (with respect to its siblings), you can only search all the nodes of the parent in order to find the next sibling, which essentially doubles the amount of work to be done (for each node you don't just loop through the children but also through the siblings). Also, it looks like you at least need to remember which nodes you visited already, for it is in general undecidable whether to descend farther down or return back up the tree otherwise (prove me wrong, somebody).
All in all I would recommend against searching for such a solution.
Depends on what your goal is. Are you trying to find whether a node is visible, if a ray will intersect its bounding box, or if a point is contained in the node?
Let's assume that you are doing the last one, checking if a point is/should be contained in the node. I would add a method to the Octnode that takes a point and checks whether or not it lies within the bounding box of the Octnode. If it does, return true, else false; pretty simple. From here, call a drill-down method that starts at your head node and checks each child (a simple "for" loop) to see which Octnode the point lies in; it can be at most one.
Here is where your iterative vs recursive algorithm comes into play. If you want iterative, just store the pointer to the current node, and swap this pointer from the head node to the one containing your point. Then just keep drilling down till you reach maximal depth or don't find an Octnode containing it. If you want a recursive solution, then you will call this drill down method on the Octnode that you found the point in.
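A sketch of that iterative drill-down in C++ (the Octnode fields and the contains() test are assumptions about your structure):

struct Point { float x, y, z; };

struct Octnode {
    Point minCorner{}, maxCorner{};      // axis-aligned bounds of this node
    Octnode* children[8] = {};           // null where a child is absent

    bool contains(const Point& p) const {
        return p.x >= minCorner.x && p.x <= maxCorner.x &&
               p.y >= minCorner.y && p.y <= maxCorner.y &&
               p.z >= minCorner.z && p.z <= maxCorner.z;
    }
};

// Walk from the head node down to the deepest node containing p,
// swapping a single "current" pointer instead of recursing.
Octnode* findContainingNode(Octnode* head, const Point& p)
{
    if (head == nullptr || !head->contains(p))
        return nullptr;

    Octnode* current = head;
    for (;;) {
        Octnode* next = nullptr;
        for (Octnode* child : current->children) {
            if (child != nullptr && child->contains(p)) {
                next = child;            // at most one child can contain p
                break;
            }
        }
        if (next == nullptr)
            return current;              // leaf or maximal depth reached
        current = next;
    }
}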
I wouldn't say that iterative versus recursive has much performance difference in terms of speed, but it could have a difference in terms of memory performance. Each time you recurse you add another call depth onto the stack. If you have a large Octree this could result in a large number of calls, possibly blowing your stack.
I'm looking for the best data structure to add styles to a text (say in a text editor). The structure should allow the following operations:
1. Quick lookup of all styles at absolute position X
2. Quick insert of text at any position (styles after that position must be moved).
3. Every position of the text must support an arbitrary number of styles (overlapping).
I've considered lists/arrays which contain text ranges but they don't allow quick insert without recalculating the positions of all styles after the insert point.
A tree structure with relative offsets supports #2 but the tree will degenerate fast when I add lots of styles to the text.
Any other options?
I have never developed an editor, but how about this:
I believe it would be possible to expand the scheme that is used to store the text characters themeselves, depending of course on the details of your implementation (language, toolkits etc) and your performance and resource usage requirements.
Rather than use a separate data structure for the styles, I'd prefer having a reference that would accompany each character and point to an array or list with the applicable styles. Characters with the same set of styles could point to the same array or list, so that it could be shared.
Character insertions and deletions would not affect the styles themselves, apart from changing the number of references to them, which could be handled with a bit of reference counting.
Depending on your programming language you could even compress things a bit more by pointing halfway into a list, although the additional bookkeeping for this might in fact make it more inefficient.
The main issue with this suggestion is the memory usage. In an ASCII editor written in C, bundling a pointer with each char would raise its effective memory usage from 1 byte to 16 bytes on a 64-bit system, due to struct alignment padding.
I would look at breaking the text into small variable-size blocks that would allow you to efficiently compress the pointers. E.g. a 32-character block might look like this in C:
struct _BLK_ {
    unsigned char size;
    unsigned int styles;
    char content[];
};
The interesting part is the metadata processing on the variable part of the struct, which contains both the stored text and any style pointers. The size element would indicate the number of characters. The styles integer (hence the 32-character limit) would be seen as a set of 32 1-bit fields, with each one indicating whether a character has its own style pointer, or whether it should use the same style as the previous character. This way a 32-char block with a single style would only have the additional overhead of the size char, the styles mask and a single pointer, along with any padding bytes. Inserting and deleting characters into a small array like this should be quite fast.
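To make the lookup concrete, here is a rough C++ sketch of finding the style of character i inside one block, mirroring the _BLK_ struct above and assuming the characters are followed by the tightly packed style pointers in the variable part (the exact layout is up to you):

#include <cstring>

struct BLK {
    unsigned char size;       // number of characters stored in this block
    unsigned int  styles;     // bit i set => character i starts a new style run
    char          content[1]; // characters first, then the style pointers
};

const void* styleOfChar(const BLK* blk, unsigned i)
{
    // Count how many style runs start at or before character i;
    // the pointer we want belongs to the last of them.
    int runIndex = -1;
    for (unsigned b = 0; b <= i && b < 32u; ++b)
        if (blk->styles & (1u << b))
            ++runIndex;
    if (runIndex < 0)
        return nullptr;                          // no style assigned yet

    // The style pointers sit right after the characters in the variable part.
    const char* ptrArea = blk->content + blk->size;
    const void* style = nullptr;
    std::memcpy(&style, ptrArea + runIndex * sizeof(void*), sizeof(void*));
    return style;
}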
As for the text storage itself, a tree sounds like a good idea. Perhaps a binary tree where each node value would be the sum of the children values, with the leaf nodes eventually pointing to text blocks with their size as their node value? The root node value would be the total size of the text, with each subtree ideally holding half of your text. You'd still have to auto-balance it, though, with sometimes having to merge half-empty text blocks.
And in case you missed it, I am no expert in trees :-)
EDIT:
Apparently what I suggested is a modified version of this data structure:
http://en.wikipedia.org/wiki/Rope_%28computer_science%29
as referenced in this post:
Data structure for text editor
EDIT 2:
Deletion in the proposed data structure should be relatively fast, as it would come down to byte shifting in an array and a few bitwise operations on the styles mask. Insertion is pretty much the same, unless a block fills up. It might make sense to reserve some space (i.e. some bits in the styles mask) within each block to allow for future insertions directly in the blocks, without having to alter the tree itself for relatively small amounts of new text.
Another advantage of bundling characters and styles in blocks like this is that its inherent data locality should allow for more efficient use of the CPU cache than other alternatives, thus improving the processing speed to some extent.
Much like any complex data structure, though, you'd probably need either profiling with representative test cases or an adaptive algorithm to determine the optimal parameters for its operation (block size, any reserved space etc).
Does anyone have a detailed explanation on how integer overflows can be exploited? I have been reading a lot about the concept, and I understand what an integer overflow is, and I understand buffer overflows, but I don't understand how one could modify memory reliably, or in a way to modify application flow, by making an integer larger than its defined memory....
It is definitely exploitable, but depends on the situation of course.
Old versions of ssh had an integer overflow which could be exploited remotely. The exploit caused the ssh daemon to create a hashtable of size zero and overwrite memory when it tried to store some values in there.
More details on the ssh integer overflow: http://www.kb.cert.org/vuls/id/945216
More details on integer overflow: http://projects.webappsec.org/w/page/13246946/Integer%20Overflows
I used APL/370 in the late 60s on an IBM 360/40. APL is a language in which essentially everything is a multidimensional array, and there are amazing operators for manipulating arrays, including reshaping from N dimensions to M dimensions, etc.
Unsurprisingly, an array of N dimensions had index bounds of 1..k, with a different positive k for each axis, and k was legally always less than 2^31 (positive values in a 32-bit signed machine word). Now, an array of N dimensions has a location assigned in memory. Any attempt to access an array slot using an index too large for an axis is checked against the array upper bound by APL. And of course this applied for an array of N dimensions where N == 1.
APL didn't check if you did something incredibly stupid with the RHO (array reshape) operator. APL only allowed a maximum of 64 dimensions. So, you could make an array of 1-64 dimensions, and APL would do it if the array dimensions were all less than 2^31. Or, you could try to make an array of 65 dimensions. In this case, APL goofed, and surprisingly gave back a 64-dimension array, but failed to check the axis sizes.
(This is in effect where the "integer overflow occurred"). This meant you could create an array with axis sizes of 2^31 or more... but being interpreted as signed integers, they were treated as negative numbers.
The right RHO operator incantation applied to such an array could reduce the dimensionality to 1, with an upper bound of, get this, "-1". Call this array a "wormhole" (you'll see why in a moment). Such a wormhole array has a place in memory, just like any other array. But all array accesses are checked against the upper bound... and the array bound check turned out to be done by APL with an unsigned compare. So, you can access WORMHOLE[1], WORMHOLE[2], ... WORMHOLE[2^32-2] without objection. In effect, you can access the entire machine's memory.
APL also had an array assignment operation, in which you could fill an array with a value.
WORMHOLE[]<-0 thus zeroed all of memory.
I only did this once, as it erased the memory containing my APL workspace, the APL interpreter, and obviously the critical part of APL that enabled timesharing (in those days it wasn't protected from users)... the terminal room went from its normal state of mechanically very noisy (we had 2741 Selectric APL terminals) to dead silent in about 2 seconds.
Through the glass into the computer room I could see the operator look up, startled, at the lights on the 370 as they all went out. Lots of running around ensued.
While it was funny at the time, I kept my mouth shut.
With some care, one could obviously have tampered with the OS in arbitrary ways.
It depends on how the variable is used. If you never make any security decisions based on integers you have added with input integers (where an adversary could provoke an overflow), then I can't think of how you would get in trouble (but this kind of stuff can be subtle).
Then again, I have seen plenty of code like this that doesn't validate user input (although this example is contrived):
int pricePerWidgetInCents = 3199;
int numberOfWidgetsToBuy = int.Parse(/* some user input string */);
int totalCostOfWidgetsSoldInCents = pricePerWidgetInCents * numberOfWidgetsToBuy; // KA-BOOM!
// potentially much later
int orderSubtotal = whatever + totalCostOfWidgetsSoldInCents;
Everything is hunky-dory until the day you sell 671,299 widgets for -$21,474,817.95. Boss might be upset.
A common case would be code that protects against buffer overflows by asking for the number of inputs that will be provided, and then trying to enforce that limit. Consider a situation where I claim to be providing 2^30+10 integers. The receiving system allocates a buffer of 4*(2^30+10) bytes, which overflows 32-bit arithmetic and comes out to just 40 bytes (!). Since the memory allocation succeeded, I'm allowed to continue. The input buffer check won't stop me when I send my 11th input, since 11 < 2^30+10. Yet I will overflow the actually allocated buffer.
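In C-like code the failure mode looks something like this (a contrived C++ sketch that forces 32-bit size arithmetic; the names are made up):

#include <cstdint>
#include <cstdlib>

int* allocateInputs(std::uint32_t claimedCount)
{
    // For claimedCount = 2^30 + 10, the multiplication wraps around:
    // 4 * (2^30 + 10) mod 2^32 = 40, so only 40 bytes are allocated.
    std::uint32_t bytes = claimedCount * 4u;          // 32-bit overflow here
    int* buffer = static_cast<int*>(std::malloc(bytes));

    // Later, a bounds check of the form "index < claimedCount" happily lets
    // the caller write the 11th integer, already past the 40-byte allocation.
    return buffer;
}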
I just wanted to sum up everything I have found out about my original question.
The reason things were confusing to me was because I know how buffer overflows work, and can understand how you can easily exploit that. An integer overflow is a different case - you can't exploit the integer overflow itself to inject arbitrary code and force a change in the flow of an application.
However, it is possible to overflow an integer which is used - for example - to index an array, and thereby access arbitrary parts of memory. From here, it could be possible to use that mis-indexed array to overwrite memory and alter the execution of the application to suit your malicious intent.
Hope this helps.