Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 2 years ago.
So I have one big mesh which models a building. I would like to chop the mesh into parts by floor and hallway to make geographically distinct "scenes" which I can cull/order before rendering to reduce render time. I used 3DS Max to "Slice" the model into various meshes; however, the scene explorer still shows only one object, and when I export the scene to FBX and read it with Assimp, it only reads in one mesh.
TLDR: How do I split a model in 3DS Max (or similar) such that it exports as multiple meshes which I can selectively render?
The solution is to "Slice" the model (in my case I used the Slice Plane to get clean cuts), then apply an "Edit Mesh" modifier and "Detach" each individual component as a separate object.
Here is a 3ds Max forum post asking the exact same thing. Hopefully the answer in there can be useful for you too.
https://forums.autodesk.com/t5/3ds-max-forum/split-a-mesh-into-several-meshes/m-p/5927179#M109322
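Once the parts are detached and the scene is re-exported, you can sanity-check the FBX on the Assimp side. Here is a minimal sketch using the pyassimp bindings (the file name "building.fbx" and the "Floor1" naming convention are assumptions for illustration):

from pyassimp import load, release

scene = load('building.fbx')
try:
    # After detaching in 3ds Max, each floor/hallway should appear here
    # as its own mesh with its own vertices and faces.
    print(len(scene.meshes), 'meshes in file')
    for i, mesh in enumerate(scene.meshes):
        print(i, mesh.name, len(mesh.vertices), 'vertices')

    # Selective rendering then becomes a matter of choosing which meshes
    # to submit, e.g. only the parts of the first floor (hypothetical naming).
    first_floor = [m for m in scene.meshes if m.name.startswith('Floor1')]
finally:
    release(scene)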
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
Here I have an image of two objects/stars:
I have hundreds of images like this one, from the NASA MAST Archive. (The corners are not stars, just errors; one star is at the top, the other at the bottom.)
What algorithm should I use to determine the number of objects (in this case stars) in one picture? For a human, it is pretty obvious that there are two objects, but I want to implement this detection in Python.
For reference, here is a picture with one star only:
(The pictures are produced from FITS files with PyKE.)
You can apply a threshold and use OpenCV to analyze the number of connected components (groups).
For example:
import cv2

# Read as a single-channel image; Otsu thresholding needs 8-bit grayscale input.
src = cv2.imread('/path/to/your/image', cv2.IMREAD_GRAYSCALE)
ret, thresh = cv2.threshold(src, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

connectivity = 8  # also diagonal neighbors; choose 4 if you want just horizontal and vertical neighbors
# Analysis of the binary image
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(thresh, connectivity, cv2.CV_32S)
n_groups = num_labels - 1  # label 0 is the background
To get rid of the noise, you can decide not to count groups with fewer than TH connected pixels (from the images you uploaded as an example, I would choose something like TH = 4).
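A minimal sketch of that filtering step, reusing the stats array from the call above (TH = 4 is just the value suggested for these example images):

TH = 4  # minimum number of connected pixels for a group to count
areas = stats[1:, cv2.CC_STAT_AREA]  # skip label 0, which is the background
n_groups = int((areas >= TH).sum())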
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more.
Closed 7 years ago.
For Windows there are many tools for extracting 3D data from programs by intercepting the OpenGL data (e.g. 3D Ripper DX, glintercept, Ogle, OpenGLXtractor, HijackGL).
Are there any similar tools for Linux? If not, would it be possible to make one? (And if so, would anyone be interested in starting an open-source project with me?)
I will actually automate the process, but that is another story.
First a word of warning: OpenGL is not a scene graph. There is no such thing as a "scene" or "objects" (in the physical kind of thing sense) in OpenGL. All OpenGL does is draw points, lines and triangles to a framebuffer, one at a time and independently of each other. So intercepting OpenGL drawing calls to extract objects is by nature unreliable. That being said, most programs using OpenGL draw in a way that makes it actually quite feasible to extract the rendered geometry and interpret it as objects.
Another member of my hackerspace wrote a tool for intercepting OpenGL calls to extract meshes (the original use was so that we could 3D print game assets and the like on our RepRap). The sources for this tool can be found here: https://github.com/mazzoo/ogldump
However, ogldump is very limited. It doesn't support vertex buffer objects (VBOs), interleaved vertex arrays can mess things up, and things like shaders and generic vertex attributes are completely unheard of. Feel free to patch that in, if you like.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions concerning problems with code you've written must describe the specific problem — and include valid code to reproduce it — in the question itself. See SSCCE.org for guidance.
Closed 9 years ago.
How can I fix the gap caused by the element sliding to the next row like in the image below?
I'm using Isotope with the masonry layout mode.
Thank you.
I have a similar problem, and I'm going to "fix" it by precalculating the order of the elements so that there are no gaps and the boxes always fit the grid layout. AFAIK the Isotope jQuery plugin offers no built-in solution for this.
At a guess I'd say it's because the next item in the order is that big block underneath.
Or the following item is the other smaller block at the bottom left; even if that were moved up to occupy the white space, there would still be a white gap left where it came from.
Maybe masonry favours the left edge over the right or something.
I literally only started using it today, so I'm no expert. I found this question while searching for an answer of my own.
Closed. This question is off-topic. It is not currently accepting answers.
Closed 11 years ago.
I don't need the dinosaur, just the buildings with their details, but I would like the 3D graphics to take on this sort of style, in real time.
Part of reproducing this effect in a real-time 3D environment could be handled by the designers, texturing the various meshes in combination with fairly matte shading.
However, there is also a whole field of Computer Graphics research focusing on Non-Photorealistic Rendering (or NPR). There actually have been a lot of published papers on real-time watercolor rendering (to various degrees of success), often using shaders as suggested by xOr.
A good starting point in my opinion would be the work of Adrien Bousseau. An example that comes to mind is his paper "Interactive watercolor rendering with temporal coherence and abstraction (PDF warning)". Another one would be "Watercolor Illustrations from CAD Data (PDF warning)" by Luft et al.
Now, don't get me wrong, I'm not saying you should just implement these papers and be done with it. Perhaps they are too science-y for you or simply too complex for whatever system you're trying to create. However, read through them and read through some of the papers they reference to get an idea of the various approaches out there. If nothing else, you will at least have some terms to Google and see if you can find something that suits you.
Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I want to drill a hole in the 3ds object I import: if part of the object's mesh lies within the hole, that part of the mesh should be deleted. In other words, wherever the surface of the object falls inside the hole, that part of the surface is removed. Can anyone give me some suggestions?
That is:
First I import a 3ds-format file, then I use a cylinder to penetrate it, so everything belonging to this 3ds file that lies within my cylinder will be deleted.
For example, if the 3ds file is a cube and I perforate a hole in it, the result should be a cube whose front and back faces each gain a circular opening.
Have I made the problem clear?
Thanks.
Good luck.

Should I calculate which vertices lie within the cylinder? I don't know the structure of the 3ds file.
It seems that I should compute the difference of the 3ds object and a cylinder?
That means I should subtract the cylinder, i.e. delete the vertices within the cylinder?
Am I right?
Thanks.
Since this is a programming website, the answer is... a BSP tree: it is the classic structure for performing a CSG boolean subtraction of the cylinder from the mesh.
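A full boolean cut needs a proper CSG implementation (the BSP-tree approach above, or a mesh library that offers booleans). As a much cruder starting point, here is a minimal sketch of exactly what the question describes: classify vertices against the cylinder and drop every triangle whose vertices all fall inside it. The function name, array layout, and cylinder parameters are assumptions for illustration, and the result has a ragged, unfilled rim rather than the clean circular openings a real CSG difference would produce.

import numpy as np

def carve_cylinder(vertices, faces, center, axis, radius, half_height):
    """Drop triangles whose three vertices all lie inside a finite cylinder.

    vertices: (N, 3) float array; faces: (M, 3) int array of vertex indices.
    center and axis describe the cylinder; axis must be a unit vector.
    """
    rel = vertices - center                    # vectors from the cylinder center
    along = rel @ axis                         # signed distance along the axis
    radial = rel - np.outer(along, axis)       # component perpendicular to the axis
    inside = (np.abs(along) <= half_height) & \
             (np.linalg.norm(radial, axis=1) <= radius)

    # A triangle is removed only when all three of its vertices are inside.
    keep = ~inside[faces].all(axis=1)
    return faces[keep]

# Hypothetical usage: vertices/faces loaded from the .3ds file with any importer;
# here the cylinder runs along the z-axis through the origin.
# new_faces = carve_cylinder(vertices, faces,
#                            center=np.array([0.0, 0.0, 0.0]),
#                            axis=np.array([0.0, 0.0, 1.0]),
#                            radius=0.5, half_height=10.0)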