I have to use JointJS for building a workflow diagram.
How much will performance be affected in the IE browser when there are more than 200 components (arrows, nodes, boxes, etc.) on the same page?
As you may know, there is a demo application named Rappid here.
You can test your performance requirements there.
I have tested the performance with ~150 components and one link between two components, and there was no performance issue at all.
I am using it for more than 200 nodes, but there are problems: for example, if you add a description near every block at the same time, it will hang for a few minutes, whereas if you do the operations in batches it is fine. There are also problems if you want to hide 100 nodes at once; I handle those kinds of tasks with jQuery. I am using it for a very complex use case with more than 500 elements, including nodes, arrows and other helper components. Tell me your exact requirement and I will be able to tell you whether it will work or not.
Sorry, this is not a coding question, so I'm not sure if I should be posting it here.
I struggle with the concept of what is 'large' in Notes NSF application design elements, as opposed to the amount of data or records stored. For example, it is stated that we shouldn't have too many views, but 'too many' does not give any scale whatsoever: is it 10, 50, 100, or 500 before it 'slows down'? I realise it also depends on the view design, but some idea of 'too many' would be beneficial. In this instance data and design elements are in the same NSF.
Is there a recommendation regarding the number of elements such as XPages, Custom Controls, Managed Beans, Java Classes, etc.? What would be deemed excessive? In this instance I have data and logic in separate NSFs.
Any guidance would be greatly appreciated.
Thanks
There is a limit on the number of design elements. But unless you're importing a whole JavaScript framework into an NSF, you're not likely to hit it.
As has been mentioned, view performance depends on many factors. 500 decently designed views are fine; 50 badly performing views can be bad. Lots of re-sortable columns increase the number of indexes that need to be created and managed. Using @Today or @Now in a view selection formula or column formula will be a big problem. Having lots of documents that rarely change, smaller numbers of documents that are updated every 30 seconds, lots of users regularly updating - these will all impact performance.
Performance in code will also have an impact, and the XPages Toolbox or agent profiling will give you an idea. DocumentCollection.count() is slow, but sometimes it is needed; NoteCollections may be quicker. There are various blog posts covering this.
A managed bean that has a Map that grows and grows will impact Java memory.
But there are always performance enhancements being made on the server side. gRPC in Domino 10 will be extremely performant. So always try to be on a recent version, and keep up to date with sessions at conferences etc. so you know what TCO improvements are being made.
The bottom line is without an intimate understanding of your architecture and code, no one will be able to give you a definitive answer.
Why does performance drop when I load, for example, 4900 nodes into the scene? With 125 it's OK, and 200 is still OK, but with more than that the rendering framerate drops dramatically. The root node contains children, each of which contains a model (in 3ds) + texture + some science calculation, all created in a cycle from 0 to 4899. I have tried to use osgUtil::Optimizer on the root after all children were in place, but there was no improvement. I also tried to put all nodes in one Geode, but that didn't help either. How can I achieve a balance between performance and number of nodes?
4900 nodes seems like an awful lot of nodes!
You should start reading about LOD and PagedLOD.
PagedLOD will improve the performance. The idea is this: when you are far away, you don't want a lot of detail, but when you zoom in, you want to see those details. You'll have to specify which models go in each LOD level and how you want to activate them. That's the tricky part.
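To make that concrete, here is a minimal sketch of what such a setup might look like; the range values (0-50 units) and the model file names are made up for illustration, and with osg::PagedLOD the levels are given file names so they can be paged in from disk on demand.

#include <cfloat>
#include <osg/LOD>
#include <osg/PagedLOD>
#include <osgDB/ReadFile>

// Sketch: one LOD node per object, with a detailed and a coarse model.
// The distance ranges (0-50, 50-FLT_MAX) are placeholder values you would tune.
osg::ref_ptr<osg::Node> buildObjectLOD()
{
    osg::ref_ptr<osg::LOD> lod = new osg::LOD;
    osg::ref_ptr<osg::Node> detailed = osgDB::readNodeFile("object_high.3ds"); // hypothetical file
    osg::ref_ptr<osg::Node> coarse   = osgDB::readNodeFile("object_low.3ds");  // hypothetical file
    lod->addChild(detailed.get(), 0.0f, 50.0f);   // detailed model when the camera is within 50 units
    lod->addChild(coarse.get(), 50.0f, FLT_MAX);  // coarse model beyond that
    return lod;
}

// With osg::PagedLOD the levels stay on disk until they are actually needed:
osg::ref_ptr<osg::Node> buildObjectPagedLOD()
{
    osg::ref_ptr<osg::PagedLOD> plod = new osg::PagedLOD;
    plod->setFileName(0, "object_high.3ds");  // hypothetical file, loaded on demand
    plod->setRange(0, 0.0f, 50.0f);
    plod->setFileName(1, "object_low.3ds");
    plod->setRange(1, 50.0f, FLT_MAX);
    return plod;
}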
Also, check whether you can share nodes. For example, instead of having 4 different wheels, you just create one wheel, then add 4 PositionAttitudeTransforms/MatrixTransforms and add the wheel node to each of them. The same goes for StateAttributes: share them whenever possible!
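As a rough sketch of that sharing (the file name and wheel offsets are placeholders), one loaded wheel node referenced by several transforms might look like this:

#include <osg/Group>
#include <osg/MatrixTransform>
#include <osgDB/ReadFile>

// Load the wheel geometry once and reference it from four transforms,
// instead of loading four copies of the same model.
osg::ref_ptr<osg::Group> buildWheels()
{
    osg::ref_ptr<osg::Group> car = new osg::Group;
    osg::ref_ptr<osg::Node> wheel = osgDB::readNodeFile("wheel.3ds");  // hypothetical file, loaded once

    const osg::Vec3 offsets[4] = {
        osg::Vec3(-1.0f,  2.0f, 0.0f), osg::Vec3(1.0f,  2.0f, 0.0f),
        osg::Vec3(-1.0f, -2.0f, 0.0f), osg::Vec3(1.0f, -2.0f, 0.0f)
    };

    for (const osg::Vec3& offset : offsets)
    {
        osg::ref_ptr<osg::MatrixTransform> xform = new osg::MatrixTransform;
        xform->setMatrix(osg::Matrix::translate(offset));
        xform->addChild(wheel.get());   // the same wheel node under every transform
        car->addChild(xform.get());
    }

    // StateSets can be shared the same way: apply one StateSet object to many
    // nodes rather than creating an identical copy per node.
    return car;
}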
Finally, if you have a lot of repeated geoms, take a look at geometry instancing.
Just wondering if anyone can point me to a good web framework for displaying a large-scale network.
I need the ability to display only a small portion of the network, but allow the possibility to drill down into a certain node/portion of the network interactively.
Optionally, the ability to allow interactive editing of the network/graph, e.g. connecting nodes or breaking edges.
The simpler the better!
There's our library, mxGraph. If you want open source you could try JIT or D3.
I had similar requirements and I tested about four libraries including d3 and infoVis/JIT.
I was using force-directed layout in both d3 and infoVis.
Both of them are quite close, but I ended up choosing infoVis/JIT because I had some problems in d3 whose solutions were not easy.
1: When you have a graph with many nodes in d3, the graph keeps moving/animating for quite a long time. I found that this is because the d3 force layout animates per tick. I found some solutions here and in forums, but it was not easy to solve this problem and they did not work for me.
2: Once the graph is rendered, if you try to drag a node, the whole graph moves and animates itself, whereas my requirement was to be able to drag and position individual nodes independently, keeping the rest of the graph as it is, so that the user can rearrange nodes if they want to. I could not find any simple solution for this in d3.
Both of these problems were solved in infoVis/JIT.
#"Need the ability to display only a small portion of the network, but allowing the possibility to drill down on certain node/portion of the network interactively."
I have implemented this functionality using infoVis.
I am learning DirectX. It provides a huge amount of freedom in how to do things, but presumably different strategies perform differently, and it provides little guidance as to what well-performing usage patterns might be.
When using DirectX, is it typical to have to swap in a bunch of new data multiple times on each render?
The most obvious, and probably really inefficient, way to use it would be like this.
Strategy 1
On every single render
Load everything for model 0 (textures included) and render it (IASetVertexBuffers, VSSetShader, PSSetShader, PSSetShaderResources, PSSetConstantBuffers, VSSetConstantBuffers, Draw)
Load everything for model 1 (textures included) and render it (IASetVertexBuffers, VSSetShader, PSSetShader, PSSetShaderResources, PSSetConstantBuffers, VSSetConstantBuffers, Draw)
etc...
I am guessing you can make this more efficient, in part, by giving the biggest things to load dedicated slots, e.g. if the texture for model 0 is really complicated, don't reload it on each step; just load it into slot 1 and leave it there. Of course, since I'm not sure how many registers of each type DX11 guarantees, this is complicated (can anyone point to documentation on that?).
Strategy 2
Choose some texture slots for loading and others for perpetual storage of your most complex textures.
Once only
Load most complicated models, shaders and textures into slots dedicated for perpetual storage
On every single render
Load everything not already present for model 0 using slots you set aside for loading and render it (IASetVertexBuffers, VSSetShader, PSSetShader, PSSetShaderResources, PSSetConstantBuffers, VSSetConstantBuffers, Draw)
Load everything not already present for model 1 using slots you set aside for loading and render it (IASetVertexBuffers, VSSetShader, PSSetShader, PSSetShaderResources, PSSetConstantBuffers, VSSetConstantBuffers, Draw)
etc...
Strategy 3
I have no idea, but the above are probably all wrong because I am really new at this.
What are the standard strategies for rendering in DirectX (specifically DX11) to make it as efficient as possible?
DirectX manages the resources for you and tries to keep them in video memory as long as it can to optimize performance, but it can only do so up to the limit of the card's video memory. There is also overhead in every state change, even if the resource is still in video memory.
A general strategy for optimizing this is to minimize the number of state changes during the rendering pass. Commonly this means drawing all polygons that use the same texture in a batch, and all objects using the same vertex buffers in a batch. So, generally, you should try to draw as many primitives as you can before changing state to draw more primitives.
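As a hedged illustration of that idea in D3D11 terms (the DrawItem struct, the DrawAll function and all variable names are invented for this sketch; only the ID3D11DeviceContext calls are real API, and input layout/topology setup is assumed to happen elsewhere): sort the draw list so items sharing a pixel shader, texture and vertex buffer end up adjacent, then only rebind when something actually changes.

#include <d3d11.h>
#include <algorithm>
#include <functional>
#include <vector>

// Placeholder per-object draw record; a real engine would carry more state here.
struct DrawItem {
    ID3D11VertexShader*       vs;
    ID3D11PixelShader*        ps;
    ID3D11ShaderResourceView* texture;
    ID3D11Buffer*             vertexBuffer;
    UINT                      stride;
    UINT                      vertexCount;
};

void DrawAll(ID3D11DeviceContext* ctx, std::vector<DrawItem>& items)
{
    // Group items by pixel shader, then texture, then vertex buffer.
    std::sort(items.begin(), items.end(), [](const DrawItem& a, const DrawItem& b) {
        if (a.ps != b.ps)           return std::less<ID3D11PixelShader*>()(a.ps, b.ps);
        if (a.texture != b.texture) return std::less<ID3D11ShaderResourceView*>()(a.texture, b.texture);
        return std::less<ID3D11Buffer*>()(a.vertexBuffer, b.vertexBuffer);
    });

    ID3D11PixelShader*        lastPS  = nullptr;
    ID3D11ShaderResourceView* lastSRV = nullptr;
    ID3D11Buffer*             lastVB  = nullptr;

    for (const DrawItem& item : items)
    {
        if (item.ps != lastPS) {                     // change shaders only when needed
            ctx->VSSetShader(item.vs, nullptr, 0);
            ctx->PSSetShader(item.ps, nullptr, 0);
            lastPS = item.ps;
        }
        if (item.texture != lastSRV) {               // rebind the texture only when it differs
            ctx->PSSetShaderResources(0, 1, &item.texture);
            lastSRV = item.texture;
        }
        if (item.vertexBuffer != lastVB) {           // rebind the vertex buffer only when it differs
            UINT offset = 0;
            ctx->IASetVertexBuffers(0, 1, &item.vertexBuffer, &item.stride, &offset);
            lastVB = item.vertexBuffer;
        }
        ctx->Draw(item.vertexCount, 0);
    }
}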
This often will make the rendering code a little more complicated and harder to maintain, so you will want to do some profiling to determine how much optimization you are willing to do.
Generally you will get better performance through more general algorithm changes that are beyond the scope of this question, for example reducing polygon counts for distant objects and using occlusion queries. A popular and true saying is "the fastest polygons are the ones you don't draw". Here are a couple of quick links:
http://msdn.microsoft.com/en-us/library/bb147263%28v=vs.85%29.aspx
http://www.gamasutra.com/view/feature/3243/optimizing_direct3d_applications_.php
http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter06.html
Other answers are better answers to the question per se, but by far the most relevant thing I found since asking is this discussion on gamedev.net, in which some big-title games are profiled for state changes and draw calls.
What comes out of it is that big-name games don't appear to worry too much about this; it can take significant time to write code that addresses this sort of issue, and the time spent fussing over it probably isn't worth the time lost in getting your application finished.
I was wondering how the Nike website makes the change you can see when selecting a color or a sole. At first I thought they were only using images, and when the user picked a color they just replaced that part, but when I selected a different sole I noticed it didn't change like an image; it looked a bit more as if it was being rendered. Does anybody happen to know how this is done? Or where can I get further info about creating this effect? :)
It's hard to know for sure, but my guess would be that they're using a rendering service similar to that provided by Adobe's Scene7.
It's a product that is used to colorize/customize a base product image based on user choices.
If you're interested in using the service, I'd suggest signing up for their weekly webinar. I attended one a while back and was very impressed with their offering. They showed the Converse site (which had functionality almost identical to the Nike site) as a demo.
A lot of these tools are built out in Flash using a variety of techniques:
1) You can use Flash's BitmapData object to directly shift the hues of the pixels in your item. This is probably the simplest technique but often limits you to simple color transformations.
2) You can pre-render transparent PNGs (or photos, I guess) containing the various textures you would want to show on your object (for instance patterns or materials) and have them dynamically added to your stage at runtime. This, I think, offers the highest fidelity but means you need all of your items rendered upfront.
3) You can create 3D Collada files and load them via a library like Papervision3D, then dynamically change the texture at runtime. This is the most memory-intensive technique and tends to result in far worse fidelity, but in return you get a full 3D object that you can view in space.
I'm sure there are other techniques but those are the top 3 I can think of. I hope that helps!