BVH file parsing (calculating translations) - graphics

I have a BVH file. I want to calculate the translation values of all nodes (bones) in the hierarchy with respect to the global axes, or with respect to the axes of the root node. Except for the root node, all values are relative to their parent, so how do I calculate the translation values of all the other bones?

If you are a Mathematica user, you may find the BVH importer that I wrote as an answer to a question on Mathematica.stackexchange.com of some value.
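As for the calculation itself: walk the hierarchy and compose each joint's local transform (its OFFSET translation plus the rotation built from its channel values for the current frame) with the accumulated transform of its parent. A minimal sketch in Python/NumPy, assuming the file has already been parsed into a hypothetical node structure with 'name', 'offset', 'rotation' and 'children' fields:

    import numpy as np

    def global_positions(node, parent=np.eye(4), out=None):
        # 'node' is a hypothetical parsed joint: 'offset' is the 3-vector
        # OFFSET, 'rotation' a 3x3 matrix built from this frame's channels.
        if out is None:
            out = {}
        local = np.eye(4)
        local[:3, :3] = node['rotation']
        local[:3, 3] = node['offset']
        world = parent @ local              # compose with the parent's transform
        out[node['name']] = world[:3, 3]    # global translation of this joint
        for child in node['children']:
            global_positions(child, world, out)
        return out

To get values relative to the root's axes instead of the global ones, pass the inverse of the root's world transform as the initial parent.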

Related

Why is a normal vector necessary for STL files?

STL is the most popular 3D model file format for 3D printing. It records the triangular surfaces that make up a 3D shape.
I read the specification of the STL file format. It is a rather simple format. Each triangle is represented by 12 floating-point numbers. The first 3 define the normal vector, and the next 9 define the three vertices. But here's one question. Three vertices are sufficient to define a triangle. The normal vector can be computed by taking the cross product of two vectors (each pointing from one vertex to another).
I know that a normal vector can be useful in rendering, and by including a normal vector, the program doesn't have to compute the normal vectors every time it loads the same model. But I wonder what would happen if the creation software included wrong normal vectors on purpose? Would it produce wrong results in the rendering software?
On the other hand, the three vertices say everything about a triangle. Including normal vectors allows logical conflicts in the information and increases the file size by 33%. Normal vectors can be computed by the rendering software in a reasonable amount of time if necessary. So why should the format include them? The format was created in 1987 for stereolithographic 3D printing. Was computing normal vectors too costly for computers back then?
I read in a thread that Autodesk Meshmixer disregards the normal vector and builds the triangles from the vertices alone. Providing a wrong normal vector doesn't seem to change the result.
Why do Stereolithography (.STL) files require each triangle to have a normal vector?
At least when using Cura to slice a model, the direction of the surface normal can make a difference. I have regularly run into STL files that look just fine when rendered as solid objects in any viewer, but because some faces have the wrong surface-normal direction, the slicer "thinks" that a region (typically concave) which should be empty is part of the interior, and it creates a "top layer" covering up the details of the concave region. (And this was with an STL exported from a Meshmixer file that was imported from some SketchUp source.)
FWIW, Meshmixer has a FlipSurfaceNormals tool to help deal with this.
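To make the cross-product computation from the question concrete: the sign of a computed normal depends on the vertex winding, which is exactly the orientation ambiguity a stored (or spec-mandated counter-clockwise) normal resolves. A sketch:

    import numpy as np

    def facet_normal(v0, v1, v2):
        # Right-hand rule: counter-clockwise vertex order, seen from outside,
        # yields an outward normal; swapping any two vertices flips its sign.
        n = np.cross(np.asarray(v1) - v0, np.asarray(v2) - v0)
        return n / np.linalg.norm(n)

    print(facet_normal([0, 0, 0], [1, 0, 0], [0, 1, 0]))  # [0. 0. 1.]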

How to calculate a DXF's used area in Python 3 with ezdxf or dxfgrabber?

I would like to calculate the minimum size of a rectangle that contains all elements of a DXF file, but apparently neither ezdxf nor dxfgrabber has a function to do that.
Is iterating through all entities and calculating the points the only way to do it? If the drawing used only lines and boxes that would be easy, but with splines, arcs and circles the process becomes tedious.
I know that the context of my answer does not relate to the libraries you refer to. But if you look here, it mentions:
Extmin and Extmax - Returns the extents of Model space.
Pextmin and Pextmax - Returns the extents of the current Paper space layout.
Are you able to access these variables using your libraries? If not, then you most likely have to do it the manual way.
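ezdxf does expose the header variables, so a sketch along those lines might look as follows (the file name is a placeholder, and the stored extents are only as fresh as the application that last wrote them):

    import ezdxf

    doc = ezdxf.readfile("drawing.dxf")  # placeholder file name

    # $EXTMIN / $EXTMAX hold the model-space extents as written by the
    # authoring application; they may be absent or out of date.
    extmin = doc.header.get("$EXTMIN", None)
    extmax = doc.header.get("$EXTMAX", None)
    if extmin is not None and extmax is not None:
        print("width:", extmax[0] - extmin[0])
        print("height:", extmax[1] - extmin[1])
    else:
        print("Extents not stored; fall back to iterating over entities.")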

Breaking a path into 100 pixel lines

I have a path drawn in Illustrator, and I need to break the path into sections of 100 px. I can't figure out the logic. A straight line consists of two points: x1, y1 and x2, y2. But my line may have angles/curves, so what do I need to do to figure out the distance between two points? Here is a graphic illustration of my line and the sections that I need to select/extract:
From the shape above, I need to break it into sections of lines (note that these are not straight lines).
Try referencing the Bug Algorithm. It's a very simple intuitive approach to path planning. I've uploaded an example written in LabVIEW here, but I know there are plenty of others available.
The Bug Algorithm generates a continuous line with a lot of data points; however, you can keep a running average of the general direction it's heading in and detect sharp changes in angle as important nodes in the path. This allows you to reduce paths of possibly thousands of data points to just a handful.
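A simplified sketch of that idea (per-segment heading rather than a true running average; the 30-degree threshold is an arbitrary choice):

    import math

    def segment_by_turns(points, threshold=math.radians(30)):
        # Keep only the nodes where the heading changes sharply, reducing
        # thousands of samples to a handful of path segments.
        nodes = [points[0]]
        prev_heading = None
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            heading = math.atan2(y1 - y0, x1 - x0)
            if prev_heading is not None:
                # turn angle, normalized to [-pi, pi]
                turn = (heading - prev_heading + math.pi) % (2 * math.pi) - math.pi
                if abs(turn) > threshold:
                    nodes.append((x0, y0))
            prev_heading = heading
        nodes.append(points[-1])
        return nodes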
There are two aspects in your question:
how do I break a path at some point,
how do I find points spaced by a certain distance.
To answer the first, the type of primitives that define the path matters. Assuming a sequence of Bézier cubics, you can use de Casteljau's algorithm: it lets you construct, from the original control points, the control points that correspond to a desired section of a given Bézier arc. A section of a path is then obtained as a partial section of an initial Bézier arc, then (possibly) a sequence of whole Bézier arcs, and finally a partial section of a last Bézier arc.
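For a single cubic arc, the de Casteljau construction that produces the control points of both halves is only a few lines; a sketch with points as (x, y) tuples:

    def split_cubic(p0, p1, p2, p3, t):
        # de Casteljau: repeated linear interpolation at parameter t.
        # Returns the control points of the two sub-curves.
        lerp = lambda a, b: tuple((1 - t) * u + t * v for u, v in zip(a, b))
        p01, p12, p23 = lerp(p0, p1), lerp(p1, p2), lerp(p2, p3)
        p012, p123 = lerp(p01, p12), lerp(p12, p23)
        p0123 = lerp(p012, p123)   # the point on the curve at t
        return (p0, p01, p012, p0123), (p0123, p123, p23, p3)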
To answer the second, assuming that you need an accurate answer, you will need to resort to numerical integration of the arc length along the path. Refer to this post: https://math.stackexchange.com/a/1171564/65203.
For a simple approximation, you can flatten the curve (approximate it as a polyline) and compute the accumulated segment lengths (or even count the pixels if your curve renderer gives you access to this information).
This process is not trivial.
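As an illustration of the flattening approach, a sketch that walks a polyline and emits a point every 100 units of accumulated length:

    import math

    def resample(polyline, step=100.0):
        # polyline: list of (x, y) points approximating the curve.
        points = [polyline[0]]
        remaining = step                    # length left until the next sample
        for (x0, y0), (x1, y1) in zip(polyline, polyline[1:]):
            seg = math.hypot(x1 - x0, y1 - y0)
            walked = 0.0
            while seg - walked >= remaining:
                walked += remaining
                t = walked / seg
                points.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
                remaining = step
            remaining -= seg - walked
        return points

The accuracy depends entirely on how finely the curve was flattened; for exact spacing along the true curve you are back to the arc-length integration above.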

SVG performance difference: changing transform or changing path

I have an SVG path with 100 points (each point with "Lx y"). The path moves on mouse drag (no rotation or structure change, just translation and scale).
Does anybody know: is there any performance difference between changing the entire path string on each step and changing just the transform matrix?
It's just more convenient for the underlying code to change the entire path than the transform matrix, so I wonder if I should be worried.
The code is implemented using Raphaeljs, but I don't think that matters.
Thanks in advance.

Getting data based on location and a specified radius

Scenario: I have a large dataset, with each entry containing a location (x, y coordinates).
I want to be able to request every entry from this dataset that is within 100 m of a given location, and have it returned as an array.
How does one go about implementing something like this? Are there any recommended patterns or frameworks? I've previously only worked with relational or simple key-value data.
The data structure that solves this problem efficiently is a k-d tree. There are many implementations available, including a node.js module.
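For planar (x, y) coordinates, SciPy ships a ready-made one; a sketch with made-up data, where a radius query returns the indices of all points within the given distance:

    import numpy as np
    from scipy.spatial import cKDTree

    points = np.random.rand(100_000, 2) * 10_000   # made-up (x, y) data in metres
    tree = cKDTree(points)

    center = (5_000.0, 5_000.0)                    # query location
    idx = tree.query_ball_point(center, r=100.0)   # everything within 100 m
    nearby = points[idx]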
Put your data set into PostgreSQL and use an R-tree index. You can then do a bounding-box query to get all points within ±100 miles of any location, then calculate the radial distance and accept points within 100 miles. You can roll your own schema and queries or use PostGIS.
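In PostGIS, the index-assisted bounding-box pre-filter and the exact distance test are combined in ST_DWithin; a sketch (table, column, and connection details are made up, and the distance is in the geometry's units):

    import psycopg2

    conn = psycopg2.connect("dbname=geo")  # placeholder connection string
    cur = conn.cursor()

    # ST_DWithin uses the spatial (GiST) index for the bounding-box
    # pre-filter, then applies the exact distance test.
    cur.execute(
        """
        SELECT id, ST_X(geom), ST_Y(geom)
        FROM entries
        WHERE ST_DWithin(geom, ST_MakePoint(%s, %s), %s)
        """,
        (5000.0, 5000.0, 100.0),
    )
    rows = cur.fetchall()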
Unlike R-trees, k-d trees are not inherently balanced, so depending on how a k-d tree is built you can get inconsistent performance due to unbalanced trees and long worst-case search paths.
