I am a beginner in modelling. Can I increase an object's polygon count in 3ds Max? I want a smooth object that doesn't have a low polygon count.
You can increase the polygon count of your model in several ways:
Use the Subdivide modifier.
Use the TurboSmooth modifier.
Use the Tessellate modifier.
You can use the TurboSmooth modifier for this purpose, but before using it you should make sure your model has enough edges and that the edge flow is correct. For example, make sure a vertex does not connect to an odd number of edges (like 3 or 5); always try to keep it even, with 4 edges being the standard. Check the image below:
Both of these vertices connect to an odd number of edges (3 and 5), which is not correct.
Try to add more edges to your model and chamfer them if necessary before applying TurboSmooth or another smoothing modifier.
You can smooth it by adding a Smooth or TurboSmooth modifier and then converting it to Editable Poly again - or collapsing the stack from the Modify tab - which will redistribute polygons evenly over the object's faces.
Alternatively, you can select a ring or loop of faces and then use the Connect tool to add more resolution (polygons) to only those selected polygons.
As for the fish model in your image, you can select one of the height faces, then, in the polygon editing section of the Modify panel on the right, click Ring; it will select all the height faces around the fish. Then scroll down in the Modify panel, click Connect, and set the number of segments to add more resolution.
UPDATE:
Please take into consideration that there is no such thing as infinite segments; even spheres (balls) or cylinders in 3D just have a higher number of segments (usually 32+). You can double or triple that, but not much more; increasing an object's segment count to very high values can bring your computer to its knees while modelling or rendering.
Newer versions of 3ds Max (2015, 2016) have a new subdivision modifier called OpenSubdiv. It can be used to subdivide your model and give it more polygons. The TurboSmooth, MeshSmooth, Tessellate, and Subdivide modifiers are also available. All of these add more geometry to your model, based on different algorithms.
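If you prefer to script it, here is a rough pymxs sketch (3ds Max's built-in Python API) that applies a TurboSmooth modifier; the node name "Fish01" and the iteration count are placeholders, and the same thing can of course be done from the Modify panel:

from pymxs import runtime as rt

# Grab a scene object by name ("Fish01" is a placeholder) and add TurboSmooth.
obj = rt.getNodeByName("Fish01")
ts = rt.TurboSmooth()
ts.iterations = 2          # each iteration roughly quadruples the polygon count
rt.addModifier(obj, ts)
rt.redrawViews()

Keep the iteration count low (1-3); as noted above, very high subdivision levels can make the scene unmanageable.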
I'm building a 2D game where the player can only see things that are not blocked by other objects. Here is an example of how it looks now:
I've implemented a ray-tracing algorithm for this, and it seems to work just fine (I've reduced the boundaries in the demo to make all edges visible).
As you can see, the lighter area is built from a bunch of triangles, each of which has a common point at the player's position. Each pair of neighbours shares two common points.
However, I want to calculate the bounds of the external part of the polygon so I can fill it with black triangles, "hiding" what the player cannot see.
One way to do it is to "mask" the black rectangle with the current polygon, but I'm afraid that is very inefficient.
Any ideas about an effective algorithm to achieve this?
Thanks!
A non-analytical, rough solution (a code sketch follows the caveats below):
Cast rays with gradually increasing polar angle
Record when a ray first hits an object (and the point where it hits)
Keep going until it no longer hits the same object (and record where it previously hit)
Using the two recorded points, construct a trapezoid that extends to infinity (or wherever)
Caveats:
Doesn't work too well with concavities - need to include all points in-between as well. May need Delaunay triangulation etc... messy!
May need extra states to account for objects tucked in behind each other.
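A minimal Python sketch of the sweep described above, assuming obstacles are given as line segments; it only covers the angle sweep and nearest-hit bookkeeping, and the trapezoid construction from consecutive recorded points would be built on top of it:

import math

def ray_segment_hit(origin, angle, seg):
    # Distance along the ray (from origin, at the given polar angle) to the
    # segment ((x1, y1), (x2, y2)), or None if the ray misses it.
    ox, oy = origin
    dx, dy = math.cos(angle), math.sin(angle)
    (x1, y1), (x2, y2) = seg
    ex, ey = x2 - x1, y2 - y1
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-12:                              # ray parallel to segment
        return None
    t = ((x1 - ox) * ey - (y1 - oy) * ex) / denom       # distance along the ray
    u = ((x1 - ox) * dy - (y1 - oy) * dx) / denom       # position along the segment
    return t if t >= 0 and 0 <= u <= 1 else None

def sweep(origin, segments, step=math.radians(1)):
    # Cast rays at gradually increasing polar angles and record, for each
    # angle, the index of the nearest segment hit and the hit point.
    angle = 0.0
    while angle < 2 * math.pi:
        best = None
        for i, seg in enumerate(segments):
            t = ray_segment_hit(origin, angle, seg)
            if t is not None and (best is None or t < best[0]):
                best = (t, i)
        if best is not None:
            t, i = best
            hit = (origin[0] + t * math.cos(angle), origin[1] + t * math.sin(angle))
            yield angle, i, hit
        angle += step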
I am currently wondering whether there is a common algorithm to check if a set of planar polygons, not necessarily triangles, constructs a watertight polyhedron. Each polygon has an orientation (normal vector). A simple solution would just say yes or no; a more advanced version would point out the edges where the polyhedron is "open". I am not really interested in how to close the polyhedron.
I would like to point out that my "holes" are not necessarily small; e.g., one face of a cube might be missing. Thus, the "undersampling correction" algorithms don't seem to be the correct approach. Furthermore, I am talking about 100 - 1000 polygons, not 1,000,000, so computation time should not really be a problem.
Any hints or tips?
kind regards,
curator
I believe you can use a simple topological test -- count the number of times each edge appears in the full list of polygons.
If the set of polygons defines the surface of a closed volume, each edge should have count >= 2, indicating that each edge is shared by (at least) two adjacent polygons. If the surface is manifold, count == 2 exactly.
Edges with count==1 indicate open regions of the surface.
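A small Python sketch of that test, assuming each polygon is given as an ordered list of vertex indices (a hypothetical representation; adapt the edge extraction to your own data structure):

from collections import Counter

def open_edges(polygons):
    # Count how many polygons share each (undirected) edge.
    counts = Counter()
    for poly in polygons:
        for a, b in zip(poly, poly[1:] + poly[:1]):   # consecutive vertex pairs
            counts[frozenset((a, b))] += 1
    # Edges used by only one polygon bound an open region of the surface.
    return [tuple(e) for e, n in counts.items() if n == 1]

# The surface is watertight iff open_edges(...) returns an empty list;
# a cube with one face missing, for example, reports the four edges of the hole.

For a manifold check you could additionally verify that every count is exactly 2.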
The above answer does not cover many cases. A more correct (but not necessarily complete; I wouldn't know) algorithm is to ensure that every edge of every polygon (or of the mesh/polyhedron) has an even number of faces connected to it. Consider the following mesh:
The segment (line) between the closest vertex and the one below it is attached to 3 faces (one of the outer triangle and two of the inner triangle), which is greater than two. However, this mesh is clearly not closed.
Back story: I'm creating a Three.js-based 3D graphing library, similar to sigma.js but in 3D. It's called graphosaurus and the source can be found here. I'm using Three.js, with a single particle representing each node in the graph.
This was the first task I had to deal with: given an arbitrary set of points (each with X, Y, Z coordinates), determine the optimal camera position (X, Y, Z) that can view all the points in the graph.
My initial solution (which we'll call Solution 1) involved calculating the bounding sphere of all the points and then scaling it to a sphere of radius 5 around the point (0, 0, 0). Since the points are then guaranteed to always fall within that area, I can set a static position for the camera (assuming the FOV is static) and the data will always be visible. This works well, but it requires either changing the point coordinates the user specified or duplicating all the points, neither of which is great.
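For reference, the Solution 1 rescaling step boils down to something like this (a sketch in plain Python rather than Three.js, assuming the points are (x, y, z) tuples):

def scale_to_fixed_sphere(points, radius=5.0):
    # Centre the cloud on the origin and scale it so the farthest point
    # lies on a sphere of the given radius (5, as in Solution 1).
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    max_dist = max(((p[0] - cx) ** 2 + (p[1] - cy) ** 2 + (p[2] - cz) ** 2) ** 0.5
                   for p in points) or 1.0
    s = radius / max_dist
    return [((p[0] - cx) * s, (p[1] - cy) * s, (p[2] - cz) * s) for p in points]

This is exactly the step that forces either mutating the user's coordinates or duplicating them.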
My new solution (which we'll call Solution 2) involves not touching the coordinates of the input data and instead just positioning the camera to match the data. I encountered a problem with this solution: for some reason, when dealing with really large data, the particles seem to flicker when positioned in front of or behind other particles.
Here are examples of both solutions. Make sure to move the graph around to see the effects:
Solution 1
Solution 2
You can see the diff for the code here
Let me know if you have any insight on how to get rid of the flickering. Thanks!
It turns out that my near value for the camera was too low and the far value was too high, resulting in "z-fighting". By narrowing these values to fit my dataset, the problem went away. Since my dataset is user dependent, I need an algorithm to generate these values dynamically.
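One way to derive those values dynamically is from the bounding sphere of the data: place the camera far enough back to see the whole sphere, then clamp near/far tightly around it. A rough sketch in plain Python, with an illustrative FOV and margin rather than values from the question:

import math

def fit_camera(points, fov_deg=45.0, margin=1.1):
    # Simple (non-minimal) bounding sphere: centroid plus farthest distance.
    n = len(points)
    centre = (sum(p[0] for p in points) / n,
              sum(p[1] for p in points) / n,
              sum(p[2] for p in points) / n)
    r = max(math.dist(centre, p) for p in points) * margin

    # Distance at which a sphere of radius r fills the vertical field of view.
    dist = r / math.sin(math.radians(fov_deg) / 2)

    # Keep the depth range as tight as possible to reduce z-fighting.
    near = max(dist - r, r * 1e-3)
    far = dist + r
    return dist, near, far

The camera would sit at distance dist from the sphere's centre along its view direction, with the near and far planes set as above.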
I noticed that in Solution 2 the flickering only occurs when the camera is moving. One possible reason is that, when the camera position is changing rapidly, different transforms get applied to different particles. So if the camera moves from X to X + DELTAX during a time step, one set of particles gets the camera transform for X while the others get the transform for X + DELTAX.
If you separate your rendering from the user interaction, that should fix the issue, assuming this is the issue. That means you should apply the same transform to all the particles and the edges connecting them, by locking (not updating) the transform matrix until the rendering loop is done.
I'm looking for an efficient way to display lots of spheres using DirectX 11. The spheres are defined by (x, y, z, r), where (x, y, z) are coordinates in space and r is the radius. I want to display only the spheres that can be seen, meaning that spheres outside the field of view and spheres too small to be seen would not be drawn. However, if a group of spheres that are each smaller than one pixel together covers at least one pixel, I want to display the predominant color. Spheres have only one color and different levels of transparency. Any help would be appreciated, and incomplete answers are acceptable.
You need several things: first, an indexed unit-sphere geometry; second, a buffer to store the per-sphere instance properties (position, radius and color); and third, a small buffer for the API parameters yet to come. The three combine in a single ID3D11DeviceContext::DrawIndexedInstancedIndirect call.
The remaining question is: how do you feed the instance buffer? The CPU version is easy: apply frustum culling, sort back to front because of the transparency, apply a merge based on the screen projection, update the buffer, and use ID3D11DeviceContext::DrawIndexedInstanced.
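The CPU path is essentially a cull, a size test, and a sort before the buffer update. A language-neutral sketch of that step in Python (the D3D11 buffer update and draw call are outside its scope; the plane and FOV conventions here are assumptions):

import math

def build_instance_list(spheres, cam_pos, frustum_planes, fov_y, viewport_h):
    # spheres: iterable of (x, y, z, r, color).
    # frustum_planes: (nx, ny, nz, d) tuples with normals pointing inward.
    visible = []
    for x, y, z, r, color in spheres:
        # Frustum culling: reject spheres fully outside any plane.
        if any(nx * x + ny * y + nz * z + d < -r
               for nx, ny, nz, d in frustum_planes):
            continue
        dist = math.dist(cam_pos, (x, y, z))
        # Approximate on-screen radius in pixels; sub-pixel spheres would be
        # merged into a predominant colour here instead of being drawn.
        pixels = r / (dist * math.tan(fov_y / 2)) * (viewport_h / 2)
        if pixels < 1.0:
            continue
        visible.append((dist, (x, y, z, r, color)))
    # Back-to-front order so alpha blending composites correctly.
    visible.sort(key=lambda entry: -entry[0])
    return [inst for _, inst in visible]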
The GPU version does the same thing with compute shaders but is harder to implement. The advantage is zero CPU/GPU synchronization, and it should support far more instances.
I'm halfway there, please see the edit.
OK, here's my problem: I'm generating a graph of a Python module, including all the files with their functions/methods/classes.
I want to arrange it so that nodes gather in circles around their parent nodes. Currently everything is on one gargantuan horizontal row, which makes the thing >50k pixels wide and also makes the SVG converter fail (it only renders about half of the graph).
I went through the docs but couldn't find anything that seems to do the trick.
So the question is:
Is there a simple way to do this, or do I have to lay out the whole thing myself? :/
EDIT:
Thanks to Andrew's comment I've got the right layout; the only problem now is that it's a bit too "compact"... so the question now is: how do I fix this?
I've mentioned all of the most significant parameters that influence your current layout and suggested values for each; still, I suspect you can get the layout you want by applying just a couple of these suggestions.
Reduce the edge weight, e.g., [weight=0.5]; this will make the edges longer, causing the tight clusters you currently see in your graph to 'fan out'.
Get rid of the node borders, e.g., node_A [color=none; shape=plaintext]; especially for oval-shaped nodes, a substantial fraction of the total node space is 'unused' (i.e., not used to display the node label).
Explicitly set the font size for the nodes (the node borders are enlarged so that they surround the node text, which means that the font size and the amount of text for a given node have a significant effect on its size); [fontsize=11] should be large enough to be legible yet also reduce the 'cluttered' appearance (the default size is 14).
Increase the minimum separation between nodes via 'nodesep', e.g., nodesep=2.0; this will directly address your objection about your graph being "too compact". ('nodesep' and 'ranksep' probably affect how dot draws a graph more than any other node, edge, or graph parameters. In your case, it looks like you have only two ranks of nodes; 'ranksep' sets the minimum distance between nodes of different ranks, and it looks like all of the nodes that comprise your graph are of the same rank, except for a few top-level nodes in the centers.)
Explicitly set the total graph size, e.g., size="7.75,10.25"; this ensures that your graph fits on an 8.5 x 11 page and occupies the entire space.
And one purely aesthetic suggestion that at most will only help your graph appear less cluttered: the default fontcolor for both edges and nodes is black. The majority of the ink in your graph comes from those two structures (particularly if you remove the node borders), so I would, for instance, set either the node (text) fontcolor or the edge fontcolor to "blue" to help the eye distinguish the two sets of graph structures.
If it is too compact, you will want to adjust the edge length. You have a couple of options depending on the graph layout (a short sketch follows below):
If your layout is sfdp or fdp, tweak the graph attribute K. The default is 0.3.
For neato (or fdp), tweak the edge attribute len. The default is 1.0 for neato and 0.3 for fdp.
For dot you can use the edge attribute minlen, which is the minimum edge length. The default is 1.
You might also want to experiment with the graph attribute model, which determines clustering behavior. Specifically, try subset. I believe this handles len for you:
http://www.graphviz.org/doc/info/attrs.html#d:model
Also, you can remove overlaps altogether with scaling techniques: http://www.graphviz.org/doc/info/attrs.html#d:overlap
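A minimal sketch of those attributes using the Python graphviz package and the neato engine (the node names are placeholders, and rendering requires the Graphviz binaries on the PATH):

from graphviz import Graph

g = Graph('layout_test', engine='neato',
          graph_attr={'overlap': 'scale',   # remove overlaps by scaling positions
                      'model': 'subset'},   # clustering model mentioned above
          edge_attr={'len': '2.0'})         # longer edges spread the clusters out
g.edge('module', 'function_a')
g.edge('module', 'function_b')
g.render('layout_test.gv', format='svg')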
I have around 500 nodes and used Doug's recommendation.
This is my sample code that works (in Python):
from graphviz import Digraph

f = Digraph('companies', filename='companies.gv',
            edge_attr={'weight': '1',
                       'fontsize': '11',
                       'fontcolor': 'blue',
                       'len': '4'},          # 'len' is honoured by the neato layout
            graph_attr={'fixedsize': 'false',
                        'bgcolor': 'transparent'},
            node_attr={'fontsize': '11',
                       'shape': 'plaintext',
                       'color': 'none',
                       'fontcolor': 'black'})

f.attr(layout="neato")    # use the neato engine
f.attr(nodesep='3')       # minimum separation between nodes
f.attr(ranksep='3')       # minimum separation between ranks
f.attr(size='5000,5000')  # total graph size