When the model is too big in Forge, it flickers. How do I solve it? - viewer

I use Forge in my project, but my model is so big that Forge can't handle it well.
When I move my model in the Forge viewer, it cannot finish rendering right away and it flickers. This affects normal work.
For example, this video shows the problem: problem video
1) I have tried Model Consolidation, but it had no obvious effect.
https://forge.autodesk.com/blog/forge-viewer-consolidated-geometry
This is the code:
var initializerOptions = {
    env: 'AutodeskProduction',
    useConsolidation: true,
    consolidationMemoryLimit: 150 * 1024 * 1024 // 150MB - Optional, defaults to 100MB
};

Autodesk.Viewing.Initializer(initializerOptions, function() {
    // ...
});
2) I want to merge elements and reduce the number of patches in the model.
I think my model is too big: it has too many elements and patches, so I want to reduce them, but I haven't found a good way to do it.
I have tried 3ds Max, Maya, and Navisworks to merge elements and reduce the number of patches, but none of them worked well. I haven't found a better way to reduce patches and merge elements.
Do you have a good way to solve the flickering problem?

The flickering you mention is not a bug but a side effect of the Progressive Rendering the viewer uses. You cannot avoid it; this is how Progressive Rendering works. Consolidation will help when loading a large scene, but not with the effect you are seeing. I am afraid there is no good answer in this case.
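For what it's worth, the viewer does expose a switch for progressive rendering (Viewer3D#setProgressiveRendering). Turning it off does not make a heavy model render any faster; it only trades the partially rendered (flickering) frames for waiting on a complete frame, so whether that is acceptable depends on your use case. A minimal sketch, assuming a container element with id 'viewerDiv' (the id is illustrative):

Autodesk.Viewing.Initializer(initializerOptions, function () {
    var viewer = new Autodesk.Viewing.GuiViewer3D(document.getElementById('viewerDiv'));
    viewer.start();
    // Wait for full frames instead of showing partially rendered (flickering) ones.
    viewer.setProgressiveRendering(false);
});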

Related

MeshLab: fill cracks in mesh

I'm having trouble finding a way to solve this specific problem using MeshLab.
As you can see in the figure, the mesh I'm working with has cracks in certain areas, and I would like to close them. The "close holes" option does not seem to work: since these are technically cracks rather than holes, it does not appear able to weld them.
I managed to get a good result using the "Screened Poisson Surface Reconstruction" option, but since that operation rebuilds the whole mesh topology, I would lose all the information about the mesh's UVs (and I cannot afford to lose them).
I would like some advice on the best method to weld these cracks without changing the vertices that are not along them, adding only the geometry needed to close the mesh (or, ideally, welding along the existing edges).
Thanks in advance!
As answered by A.Comer in a comment to the main question, I was able to get the desired result simply by playing a bit with the parameters of the "close holes" tool.
Just for the sake of completeness, here is a copy of the comment:
The close holes option should be able to handle this. Did you try changing the max size for that filter to a much larger number? Do filters >> selection >> select border and put the number of selected faces as the max size into that filter – A.Comer

Improve performance of a big canvas

I have a large canvas (2000x2000) and many rect objects (738).
With this canvas I have problems moving objects: the application locks up easily and does not run smoothly.
I have tried reducing the size of the canvas to 400x400 and the speed improved a lot, and it does not use as much memory.
What is the reason for this? Can I improve it with the new objectCaching property of fabricJS?
Here is an example fiddle:
var canvas = new fabric.Canvas('canvas');
canvas.setDimensions({width:2000, height: 2000});
I cannot include the complete code of the example because it is too big; it is a JSON with the structure of my objects.
Edit
We have seen that the culprit is related to the memory available on the user's machine and the browser used.
We believe little can be done about these aspects.
A 2000x2000 canvas is large, and you should not use one that big; that is what panning logic is for.
Keep the canvas as big as you display it and then just use panning to move around. This is a first step; see the sketch below.
(Check this tutorial for panning: http://fabricjs.com/fabric-intro-part-5#pan_zoom)
With a smaller canvas, fabric can detect objects that are not visible and skip rendering them, which saves rendering time.
Another thing you could do is group similar objects together so that fabric can optimize rendering of the grouped rects; with 738 objects, you need to find optimizations specific to your application's use case.
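As a minimal sketch of that first step (the element id, event wiring and drag-to-pan interaction here are illustrative, not taken from the original question):

var canvas = new fabric.Canvas('canvas');
canvas.setDimensions({ width: 400, height: 400 }); // keep the element small; the content can be larger

var isDragging = false, lastX = 0, lastY = 0;
canvas.on('mouse:down', function (opt) {
    isDragging = true;
    lastX = opt.e.clientX;
    lastY = opt.e.clientY;
});
canvas.on('mouse:move', function (opt) {
    if (!isDragging) { return; }
    // Shift the viewport instead of enlarging the canvas element.
    canvas.relativePan(new fabric.Point(opt.e.clientX - lastX, opt.e.clientY - lastY));
    lastX = opt.e.clientX;
    lastY = opt.e.clientY;
});
canvas.on('mouse:up', function () { isDragging = false; });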
Thanks Andrea, I have made the changes you suggested and performance has improved a lot. I see that keeping a very large group hidden and moving the viewport improves performance a lot, compared with having an excessively large canvas.

How to improve spacing in 'cose' layout in cytoscape.js?

I'm using cytoscape.js 2.3.9 and I'm playing with some layouts.
I'm now rendering about 150 nodes, but I want to go up to 1000-1500. There are about 25 nodes with 1-50 possible children each.
My best approach for what I need has been with 'cose' layout, but I'm quite far from my final expected result.
I've tried several configurations, playing with the attribute values as documented, but I'm not very familiar with force-directed simulations and feel like I'm trying values without much sense of what they do.
With this config:
layout: {
    name: 'cose',
    animate: false,
    refresh: 0.1,
    edgeElasticity: 20,
    fit: true,
    gravity: 100
}
I get this result (red line shows the size of the containing div):
I wish the graph fit better, leaving less blank space and with child nodes closer to their parents.
Sometimes, with fewer elements, it fits better (but not always), like this:
But even so, some child nodes overlap their parent and others end up too far away.
Any advice on attribute values, or on another layout that better fits my purpose?
Thank you.
As is the nature of force-directed/physics-sim layouts, you have to tailor the force values to your particular data. My suggestion is to copy-paste the example in the docs for cose; it uses the default values.
Experiment by changing each value independently, and see what effect you get.
Unfortunately, there is no one-size-fits-all set of force values, but we've tried to set defaults that work OK for most data we've seen.
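As a concrete starting point, a minimal sketch is below (the container id and the numbers are only illustrative guesses to experiment from, not values tuned for your data; nodeOverlap, idealEdgeLength and gravity are the options that most directly affect spacing and compactness):

var cy = cytoscape({
    container: document.getElementById('cy'),
    elements: elements, // your nodes and edges, assumed to be defined elsewhere
    layout: {
        name: 'cose',
        animate: false,
        fit: true,
        padding: 10,
        nodeOverlap: 20,     // extra repulsion to keep nodes from overlapping
        idealEdgeLength: 40, // shorter edges pull children closer to their parents
        edgeElasticity: 100,
        gravity: 80,         // higher gravity pulls the graph together, leaving less blank space
        numIter: 1000
    }
});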

Level of Detail in 3D graphics - What are the pros and cons?

I understand the concept of LOD but I am trying to find out the negative side of it and I see no reference to that from Googling around. The only pro I keep coming across is that it improves performance by omitting details when an object is far and displaying better graphics when the object is near.
Seriously, is that the only pro, with zero cons? Please advise. Thanks.
There are several kinds of LOD based on camera distance. Geometric, animation, texture, and shading variations are the most common (there are also LOD changes that can occur based on image size and, for gaming, hardware capabilities and/or frame rate considerations).
At far distances, models can change tessellation or be replaced by simpler models. Animated details (say, fingers) may simplify or disappear. Textures may be replaced by simpler textures, bump maps vanish, spec/diffuse maps combine, etc. And shaders may also swap out to reduce the number of texture inputs or calculations (though this is less common and may be less profitable, since when objects are far away they already fill fewer pixels -- but it's important for screen-filling entities like, say, a mountain).
The upsides are that your game/app will have to render less data, and in some cases the down-rezzed LOD model may actually look better when far away than the more complex model (usually because the detailed model will exhibit aliasing when far away, while the simpler one can be tuned for that distance). This frees up resources for the nearer models that you probably care about, and lets you render larger scenes overall -- you might only be able to render three spaceships at a time at full res, but hundreds if you use LODs.
The downsides are pretty obvious: you need to support asset swapping, which means both selecting and switching between different assets in real time and managing them (at times having both models in your memory pipeline, one to discard and one to load); and those models don't come out of thin air, someone needs to create them. Finally, and this is really tricky for PC apps, less so for more stable platforms like console gaming: HOW DO YOU MEASURE the rendering benefit? What's the best point to flip from version A of a model to B, and B to C, etc.? Often LODs are made based on some pretty hand-wavy specifications from an engineer, or even a producer or an art director, based on hunches. Good measurement is important.
LOD has a variety of frameworks. What you are describing fits a distance-based framework.
One possible con is that you will have inaccuracies when you choose an arbitrary point within the object for every distance calculation. This can cause popping effects at times, since the measured distance can change depending on the object's orientation.
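To make the distance-based idea concrete, here is a minimal sketch in plain JavaScript (the thresholds and hysteresis margin are made-up values) of picking an LOD level from camera distance, with a small hysteresis band so the level does not pop back and forth when the camera hovers near a switch point:

var LOD_DISTANCES = [20, 60, 150]; // switch points between levels 0 -> 1 -> 2 -> 3 (illustrative)
var HYSTERESIS = 5;                // extra margin before switching back to a more detailed level

function chooseLod(distance, currentLevel) {
    // Find the coarsest level whose switch point we have passed.
    var level = 0;
    while (level < LOD_DISTANCES.length && distance > LOD_DISTANCES[level]) {
        level++;
    }
    // Only return to a more detailed level once we are clearly inside its range,
    // so small distance changes near a threshold do not cause visible popping.
    if (level < currentLevel && distance > LOD_DISTANCES[currentLevel - 1] - HYSTERESIS) {
        level = currentLevel;
    }
    return level; // index into e.g. [fullResMesh, mediumMesh, lowMesh, billboard]
}

// Per frame (hypothetical helpers): if the level changed, swap the object's mesh.
// var level = chooseLod(cameraDistanceTo(object), object.lodLevel);
// if (level !== object.lodLevel) { swapMesh(object, level); object.lodLevel = level; }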

Typical rendering strategy for many and varied complex objects in directx?

I am learning directx. It provides a huge amount of freedom in how to do things, but presumably different strategies perform differently, and it gives little guidance as to what well-performing usage patterns might be.
When using directx is it typical to have to swap in a bunch of new data multiple times on each render?
The most obvious, and probably really inefficient, way to use it would be like this.
Strategy 1
On every single render
Load everything for model 0 (textures included) and render it (IASetVertexBuffers, VSSetShader, PSSetShader, PSSetShaderResources, PSSetConstantBuffers, VSSetConstantBuffers, Draw)
Load everything for model 1 (textures included) and render it (IASetVertexBuffers, VSSetShader, PSSetShader, PSSetShaderResources, PSSetConstantBuffers, VSSetConstantBuffers, Draw)
etc...
I am guessing you can make this more efficient in part by giving the biggest things to load dedicated slots, e.g. if the texture for model 0 is really complicated, don't reload it on each step, just load it into slot 1 and leave it there. Of course, since I'm not sure how many registers of each type are guaranteed to exist in DX11, this is complicated (can anyone point to documentation on that?)
Strategy 2
Choose some texture slots for loading and others for perpetual storage of your most complex textures.
Once only
Load most complicated models, shaders and textures into slots dedicated for perpetual storage
On every single render
Load everything not already present for model 0 using slots you set aside for loading and render it (IASetVertexBuffers, VSSetShader, PSSetShader, PSSetShaderResources, PSSetConstantBuffers, VSSetConstantBuffers, Draw)
Load everything not already present for model 1 using slots you set aside for loading and render it (IASetVertexBuffers, VSSetShader, PSSetShader, PSSetShaderResources, PSSetConstantBuffers, VSSetConstantBuffers, Draw)
etc...
Strategy 3
I have no idea, but the above are probably all wrong because I am really new at this.
What are the standard strategies for efficient rendering on directx (specifically DX11) to make it as efficient as possible?
DirectX manages the resources for you and tries to keep them in video memory as long as it can to optimize performance, but can only do so up to the limit of video memory in the card. There is also overhead in every state change even if the resource is still in video memory.
A general strategy for optimizing this is to minimize the number of state changes during the rendering pass. Commonly this means drawing all polygons that use the same texture in a batch, and all objects using the same vertex buffers in a batch. So generally you would try to draw as many primitives as you can before changing state to draw more primitives; a sketch of the idea follows.
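The idea is API-agnostic. As an illustration only (JavaScript-style pseudocode with a hypothetical device wrapper, not real D3D11 calls), you sort the frame's draw items by a state key so that shader, texture and vertex-buffer changes happen as rarely as possible:

function drawFrame(items, device) {
    // Sort so that items sharing shader / texture / vertex buffer end up adjacent.
    items.sort(function (a, b) {
        var ka = a.shaderId + '|' + a.textureId + '|' + a.vertexBufferId;
        var kb = b.shaderId + '|' + b.textureId + '|' + b.vertexBufferId;
        return ka < kb ? -1 : ka > kb ? 1 : 0;
    });

    var lastShader = null, lastTexture = null, lastVB = null;
    items.forEach(function (item) {
        // Only touch state that actually changed since the previous draw call.
        if (item.shaderId !== lastShader)   { device.setShader(item.shaderId); lastShader = item.shaderId; }
        if (item.textureId !== lastTexture) { device.setTexture(item.textureId); lastTexture = item.textureId; }
        if (item.vertexBufferId !== lastVB) { device.setVertexBuffer(item.vertexBufferId); lastVB = item.vertexBufferId; }
        device.draw(item);
    });
}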
This often will make the rendering code a little more complicated and harder to maintain, so you will want to do some profiling to determine how much optimization you are willing to do.
Generally you will get bigger performance wins through more general algorithm changes, which are beyond the scope of this question. Some examples would be reducing polygon counts for distant objects and occlusion queries. A popular and true saying is "the fastest polygons are the ones you don't draw". Here are a couple of quick links:
http://msdn.microsoft.com/en-us/library/bb147263%28v=vs.85%29.aspx
http://www.gamasutra.com/view/feature/3243/optimizing_direct3d_applications_.php
http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter06.html
Other answers are better answers to the question per se, but by far the most relevant thing I found since asking was this discussion on gamedev.net in which some big title games are profiled for state changes and draw calls.
What comes out of it is that big-name games don't appear to worry too much about this; i.e. it can take significant time to write code that addresses this sort of issue, and the time spent fussing with it probably isn't worth the time lost in getting your application finished.
