Direct3D vector output?

Is there any means to interpret Direct3D output as a series of vectors instead of a raster image? I am hoping I could use such a feature to generate a PDF file containing the rendered Direct3D output. Am I being too optimistic?

Well, there is nothing specifically stopping you from interpreting the input data as vectors. Direct3D is, however, fundamentally a rasteriser: pixel shaders stop making sense entirely the moment you convert to vector data.
Still, you know what your transforms are and you know what the vertex data is, so you could output it as vector data in whatever format you want ...
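For example, here is a minimal sketch of that idea (assuming you already have the triangle list and the combined world-view-projection matrix on the CPU; the types are illustrative, not Direct3D's): project each triangle yourself and write it out as SVG, which can then be converted to PDF.

```cpp
#include <array>
#include <cstdio>
#include <vector>

// Illustrative minimal types; in a real app these come from your own engine.
struct Vec3 { float x, y, z; };
struct Mat4 { float m[4][4]; };            // row-major, row-vector convention

// Transform a point by a 4x4 matrix and divide by w (perspective divide).
static Vec3 transform(const Vec3& v, const Mat4& M) {
    float x = v.x*M.m[0][0] + v.y*M.m[1][0] + v.z*M.m[2][0] + M.m[3][0];
    float y = v.x*M.m[0][1] + v.y*M.m[1][1] + v.z*M.m[2][1] + M.m[3][1];
    float z = v.x*M.m[0][2] + v.y*M.m[1][2] + v.z*M.m[2][2] + M.m[3][2];
    float w = v.x*M.m[0][3] + v.y*M.m[1][3] + v.z*M.m[2][3] + M.m[3][3];
    return { x / w, y / w, z / w };
}

// Write projected triangles as SVG <path> elements. No hidden-surface removal
// and no shading -- that is exactly the part that does not translate to vectors.
void writeSvg(const char* path, const std::vector<std::array<Vec3, 3>>& tris,
              const Mat4& wvp, float width, float height) {
    FILE* f = std::fopen(path, "w");
    if (!f) return;
    std::fprintf(f, "<svg xmlns='http://www.w3.org/2000/svg' "
                    "width='%g' height='%g'>\n", width, height);
    for (const auto& t : tris) {
        Vec3 p[3];
        for (int i = 0; i < 3; ++i) {
            Vec3 c = transform(t[i], wvp);                 // clip space, post-divide
            p[i] = { (c.x * 0.5f + 0.5f) * width,          // NDC -> viewport
                     (1.0f - (c.y * 0.5f + 0.5f)) * height, c.z };
        }
        std::fprintf(f, "<path d='M%g %g L%g %g L%g %g Z' "
                        "fill='none' stroke='black'/>\n",
                     p[0].x, p[0].y, p[1].x, p[1].y, p[2].x, p[2].y);
    }
    std::fprintf(f, "</svg>\n");
    std::fclose(f);
}
```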

Related

Why is a normal vector necessary for STL files?

STL is the most popular 3D model file format for 3D printing. It records the triangular surfaces that make up a 3D shape.
I read the specification of the STL file format. It is a rather simple format: each triangle is represented by 12 floating-point numbers. The first 3 define the normal vector and the next 9 define the three vertices. But here's one question: three vertices are sufficient to define a triangle, and the normal vector can be computed by taking the cross product of two edge vectors (each pointing from one vertex to another).
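For illustration, a minimal sketch of that cross-product computation (assuming the usual STL convention that the three vertices are listed counter-clockwise when viewed from outside the solid):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Facet normal from three vertices, assuming counter-clockwise winding
// as seen from outside the solid (the usual STL convention).
Vec3 facetNormal(const Vec3& a, const Vec3& b, const Vec3& c) {
    Vec3 u = { b.x - a.x, b.y - a.y, b.z - a.z };   // edge a->b
    Vec3 v = { c.x - a.x, c.y - a.y, c.z - a.z };   // edge a->c
    Vec3 n = { u.y * v.z - u.z * v.y,               // cross product u x v
               u.z * v.x - u.x * v.z,
               u.x * v.y - u.y * v.x };
    float len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
    if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    return n;
}
```

Flipping the vertex order flips the sign of the result, which is why the normal also tells a consumer which side of the facet faces outward.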
I know that a normal vector can be useful in rendering, and by including it the program doesn't have to recompute the normals every time it loads the same model. But I wonder what would happen if the creation software included wrong normal vectors on purpose. Would that produce wrong results in the rendering software?
On the other hand, the 3 vertices say everything about a triangle. Including normal vectors allows logical conflicts in the information and increases the size of the file by 33%. Normal vectors can be computed by the rendering software in a reasonable amount of time if necessary, so why should the format include them? The format was created in 1987 for stereolithographic 3D printing. Was computing normal vectors too costly for computers back then?
I read in a thread that Autodesk Meshmixer disregards the normal vector and reconstructs the triangles from the vertices alone; providing a wrong normal vector doesn't seem to change the result.
Why do Stereolithography (.STL) files require each triangle to have a normal vector?
At least when using Cura to slice a model, the direction of the surface normal can make a difference. I have regularly run into STL files that look just fine when rendered as solid objects in any viewer, but because some faces have the wrong surface-normal direction, the slicer "thinks" that a region (typically concave) which should be empty is part of the interior, and it creates a "top layer" covering up the details of the concave region. (And this was with an STL exported from a Meshmixer file that was imported from some SketchUp source.)
FWIW, Meshmixer has a FlipSurfaceNormals tool to help deal with this.

How do I display a spectrogram from a wav file in C++?

I am doing a project in which I want to embed images into a .wav file so that when one sees the spectrogram using certain parameters, they will see the hidden image. My question is, in C++, how can I use the data in a wav file to display a spectrogram without using any signal processing libraries?
An explanation of the math (especially the Hanning window) will also be of great help, as I am fairly new to signal processing. Also, since this is a very broad question, detailed steps are preferable over actual code.
Example (image): output spectrogram (top) and input audio waveform from the .wav file (bottom).
Some of the steps (write C code for each; a sketch of the core per-chunk math follows the list):
Convert the data into a numeric sample array.
Chop the sample array into chunks of some size, (usually) overlapping.
(usually) Window each chunk with some window function (e.g. Hann).
FFT each chunk.
Take the Magnitude.
(usually) Take the Log.
Assemble all the 1D FFT result vectors into a 2D matrix.
Scale.
Color the matrix.
Render the 2D bitmap.
(optional) Optimize by rolling some of the above steps into a loop.
Add plot decorations (scale, grid marks, etc.)
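A minimal, library-free sketch of the per-chunk math (Hann window, naive DFT, log magnitude), assuming the WAV data has already been decoded into a float sample array; it uses an O(N²) DFT instead of a real FFT so that no signal-processing library is needed:

```cpp
#include <cmath>
#include <vector>

// One spectrogram column: Hann window + naive DFT + log magnitude.
// chunk.size() is the transform size N; returns N/2+1 values in dB.
std::vector<float> spectrogramColumn(const std::vector<float>& chunk) {
    const size_t N = chunk.size();
    const double PI = 3.14159265358979323846;

    // 1. Hann (Hanning) window: w[n] = 0.5 * (1 - cos(2*pi*n / (N-1)))
    std::vector<double> windowed(N);
    for (size_t n = 0; n < N; ++n)
        windowed[n] = chunk[n] * 0.5 * (1.0 - std::cos(2.0 * PI * n / (N - 1)));

    // 2. Naive DFT (O(N^2)); replace with an FFT for anything near real time.
    std::vector<float> column(N / 2 + 1);
    for (size_t k = 0; k <= N / 2; ++k) {
        double re = 0.0, im = 0.0;
        for (size_t n = 0; n < N; ++n) {
            double phase = 2.0 * PI * k * n / N;
            re += windowed[n] * std::cos(phase);
            im -= windowed[n] * std::sin(phase);
        }
        // 3. Magnitude, then 4. log scale (dB), with a small floor to avoid log(0).
        double mag = std::sqrt(re * re + im * im);
        column[k] = static_cast<float>(20.0 * std::log10(mag + 1e-12));
    }
    return column;
}

// Build the 2D matrix: overlapping chunks of fftSize samples, hopping by hop.
std::vector<std::vector<float>> spectrogram(const std::vector<float>& samples,
                                            size_t fftSize, size_t hop) {
    std::vector<std::vector<float>> columns;
    for (size_t start = 0; start + fftSize <= samples.size(); start += hop)
        columns.push_back(spectrogramColumn(
            std::vector<float>(samples.begin() + start,
                               samples.begin() + start + fftSize)));
    return columns;   // columns[time][frequency]
}
```

Each returned column is one vertical slice of the spectrogram; map the dB values to colors and render the resulting 2D matrix as a bitmap.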

Converting voxelized model into smooth form

I have a 3D model as a mesh (or in .stl/.obj format) which I converted to voxels using the binvox voxelization tool. Using a Java program, I have done some processing on the voxel grid thus obtained. Now I wish to convert this voxelized model back into a "smooth" mesh structure (or any other format) which can later be exported to .stl or .obj.
Can someone suggest how I can achieve the last part, i.e. converting the voxel grid into some format from which the "smooth" surfaces can be recovered? Any help, including pointers to existing tools or relevant theory in this direction, will be appreciated.
Give the Marching Cubes algorithm a try. See http://paulbourke.net/geometry/polygonise/ for more details.
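Once Marching Cubes has produced a triangle list, writing it out as a Wavefront .obj file takes only a few lines; a minimal sketch (the types are illustrative, not from any specific library):

```cpp
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };
struct Triangle { Vec3 v[3]; };   // e.g. the output of Marching Cubes

// Write a triangle soup as a Wavefront .obj file.
// Vertices are emitted per triangle (no sharing); OBJ indices are 1-based.
bool writeObj(const char* path, const std::vector<Triangle>& tris) {
    FILE* f = std::fopen(path, "w");
    if (!f) return false;
    for (const Triangle& t : tris)
        for (int i = 0; i < 3; ++i)
            std::fprintf(f, "v %f %f %f\n", t.v[i].x, t.v[i].y, t.v[i].z);
    for (size_t i = 0; i < tris.size(); ++i)
        std::fprintf(f, "f %zu %zu %zu\n", 3*i + 1, 3*i + 2, 3*i + 3);
    std::fclose(f);
    return true;
}
```

Most viewers and slicers will accept this directly; if the result still looks faceted, a smoothing pass in a tool such as MeshLab or Meshmixer can help.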

Different blurFilter.texelSpacingMultiplier for different regions in image GPUImageCannyEdgeDetection filter

I want to set a different blurFilter.texelSpacingMultiplier for different regions of the image in the GPUImageCannyEdgeDetection filter. Is there a way to do that?
The texelSpacingMultiplier is defined as a uniform in the fragment shaders used for this operation, so it remains constant across the image.
If you wish to have this vary across parts of the image, you will need to create a custom version of this operation and its sub-filters that takes in a per-pixel value for it.
Probably the easiest way to do this would be to have your per-pixel values for the multiplier be encoded into a texture that would be input as a secondary image. This texture could be read from within the fragment shaders and the decoded value from the RGBA input converted into a floating point value to set this multiplier per-pixel. That would allow you to create a starting image (drawn or otherwise) that would be used as a mask to define how this is applied.
It will take a little effort to do this, since you will need to rewrite several of the sub-filters used to construct the Canny edge detection implementation here, but the process itself is straightforward.
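As a CPU-side illustration of that masking idea (a hedged sketch, not GPUImage API; the single-channel encoding and all names below are assumptions): store the multiplier, scaled into 0-255, in the red channel of an RGBA8 image, upload it as the secondary texture, and decode it in your custom fragment shaders as red * maxMultiplier.

```cpp
#include <cstdint>
#include <vector>

// Build an RGBA8 mask whose red channel encodes a per-pixel multiplier
// in the range [0, maxMultiplier]. multiplier(x, y) is any function you
// supply (e.g. larger values inside a region of interest).
std::vector<uint8_t> buildMultiplierMask(int width, int height,
                                         float maxMultiplier,
                                         float (*multiplier)(int x, int y)) {
    std::vector<uint8_t> rgba(static_cast<size_t>(width) * height * 4, 0);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float m = multiplier(x, y);
            if (m < 0.0f) m = 0.0f;
            if (m > maxMultiplier) m = maxMultiplier;
            size_t i = (static_cast<size_t>(y) * width + x) * 4;
            rgba[i + 0] = static_cast<uint8_t>(m / maxMultiplier * 255.0f); // R
            rgba[i + 3] = 255;                                              // A
        }
    }
    return rgba;   // upload as the secondary texture; shader decodes R * maxMultiplier
}
```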

Recompute a 3d vector field in a set of isosurfaces

I am working on a program (Fortran 90) which computes the magnetic field of a static set of current-carrying wires. Its output is the magnetic field vector at many points, written to a file with the columns x, y, z, v_x, v_y, v_z. I am able to plot this with gnuplot.
But now I want to rewrite the program to output isosurfaces (surfaces on which the modulus of the magnetic field vector is constant), like an example image I found on the internet (it does not correspond to my data).
Can I do this with a second program, or with some utility that converts my 6-column file into ... some format which can be drawn as a set of surfaces? Another way of doing this, I think, is to rewrite the first program to compute the isosurfaces directly. Please recommend which way is better and how I can actually do it.
I think MathGL can do it easily. It is a cross-platform GPL plotting library which has a Fortran interface too. You can use sequential calls to plot the vector field and the isosurfaces.
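Whichever way you go, a useful first step is to reduce the six columns to the scalar field |B| = sqrt(v_x^2 + v_y^2 + v_z^2) on your grid; that scalar field is what any isosurface routine contours. A minimal sketch of that reduction, assuming the whitespace-separated x y z v_x v_y v_z layout described above (file names are placeholders):

```cpp
#include <cmath>
#include <fstream>
#include <iostream>

// Read "x y z v_x v_y v_z" lines and write "x y z |v|" lines.
// The output is a scalar field that isosurface tools can contour directly.
int main(int argc, char** argv) {
    if (argc != 3) {
        std::cerr << "usage: " << argv[0] << " field.dat modulus.dat\n";
        return 1;
    }
    std::ifstream in(argv[1]);
    std::ofstream out(argv[2]);
    double x, y, z, vx, vy, vz;
    while (in >> x >> y >> z >> vx >> vy >> vz)
        out << x << ' ' << y << ' ' << z << ' '
            << std::sqrt(vx*vx + vy*vy + vz*vz) << '\n';
    return 0;
}
```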
