I am doing an experiment in which I need to capture skeleton data from a Kinect and then apply that data to a model. I have captured data from the Kinect and stored it in a file, i.e. the file contains the location of each joint in each frame.
Now I want my model in Blender to take the joint positions from the file and move accordingly, but I have no idea how to start.
I have also written a small Python script to read a position from the file and update the position of one bone:
obj.channels['head'].location = Vector((float(xs),float(ys),float(zs)))
but it does not move anything. Am I doing it the wrong way, or can the armature not be moved by just updating the position?
Please guide me on this topic, as I am completely new to Python and Blender.
I don't think that this is the best solution; you can simply export your data to a BVH file and save yourself a lot of headaches.
You can find a lot of Kinect-SDK-to-BVH tutorials on the net, and BVH is the de facto standard for storing motion capture data, so there is no reason to reinvent the wheel and do extra work.
To use your BVH file in Blender you can simply follow one of the many tutorials on the subject.
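If you go the BVH route, the import itself is a single operator call from Blender's Python API; here is a minimal, untested sketch (the file path is just an example, and parameter names may vary slightly between Blender versions):

# Minimal sketch: import a BVH motion-capture file into the current scene.
import bpy

bpy.ops.import_anim.bvh(
    filepath="/path/to/capture.bvh",  # example path, replace with your file
    rotate_mode="NATIVE",             # keep the rotation order stored in the file
    update_scene_fps=True,            # match Blender's FPS to the BVH frame time
)

The same importer is also reachable from the UI via File > Import > Motion Capture (.bvh), which may be the easier starting point if you are new to Blender.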
I was wondering if anybody knew of an easy, robust way to generate depth images from 3D models (i.e. surface models, vertices with faces), with specifiable camera parameters.
I'd prefer "free" options if possible (e.g. pyOpengl or some open source Java library rather than say matlab).
I believe it is possible with the python blender api (noted here), but I'm hoping there's an easier way.
Note also that this question works only for that special case.
You can use the DECA repository on GitHub. There, in ./demos, I used the demo_reconstruct.py script to save the depth image.
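If you prefer the Blender Python API route mentioned in the question, a rough, untested sketch could look like the following (assuming Blender 2.8+, where the render layer's depth output socket is named "Depth"; in older versions it is "Z"):

import bpy

scene = bpy.context.scene
scene.render.resolution_x = 640
scene.render.resolution_y = 480

# camera parameters live on the camera data block
cam = scene.camera.data
cam.lens = 35.0          # focal length in mm
cam.sensor_width = 32.0  # sensor width in mm

# enable the Z (depth) pass on the active view layer
scene.view_layers[0].use_pass_z = True

# route the depth pass to an OpenEXR file via the compositor
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()
rl = tree.nodes.new("CompositorNodeRLayers")
out = tree.nodes.new("CompositorNodeOutputFile")
out.base_path = "/tmp/depth"          # example output directory
out.format.file_format = "OPEN_EXR"   # EXR keeps float depth values
tree.links.new(rl.outputs["Depth"], out.inputs[0])

bpy.ops.render.render(write_still=True)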
I am writing a program that will output 3D model files based on simple geometric shapes (e.g. rectangular prisms and cylinders) with known coordinates in 3-dimensional space. As an example, imagine creating a 3D model of Stonehenge. This question suggests that OBJ files are the easiest to generate, but I'm struggling to find a good tutorial or easy-to-use library for doing so.
Can anyone either
(1) describe step-by-step how to create a simple file OR
(2) point me to a tutorial that describes how to do so
Notes:
* Using a GUI-based program to draw such files is not an option for me
* I have no prior experience with 3D modeling
* Other formats such as WRL or DAE would work for me as well
EDIT:
I do not need to use textures, just combinations of simple geometric shapes positioned in 3D space.
I strongly recommend using some ASCII exchange format. There are many out there; I usually use these:
*.x DirectX object (it is C++-like source code)
this one is the easiest to implement!!! But there are not many tools that can handle it. If you do not want to spend too much time coding, then this is the right choice. Just copy the templates (at the start) from any *.x file to get started.
here are some specs
*.iges common and importable on most CAD/CAM platforms (Catia included)
this one is a bit complicated, but for export purposes it is not that bad. It supports volume operations like +,-,&,^ which are VERY HARD to implement properly, but you do not have to use them :)
*.dxf AutoCAD exchange format
this one is even more complicated than IGES. I do not recommend using it
*.ac AC3D
I first saw this one in FlightGear.
here are some specs
at first look it is quite easy, but the sub-object implementation is really tricky. Unless you use that, you should be fine.
This approach is easily verifiable in Notepad or by loading into some 3D model viewer. Choose the one that is most suitable for your needs and code a save/load function for your app's internal model class/struct. This way you will be compatible with other software and eliminate the incompatibility problems that come with creating 'almost known' binary formats like 3ds, ...
In your case I would use IGES (Initial Graphics Exchange Specification)
For export you do not need to implement everything, just a few basic shapes, so it would not be too difficult. I code importers, which are much, much more complicated. My IGES loader class is about 30 KB of C++ source code; look here for more info.
You did not provide any info about your 3D mesh model structure and capabilities,
like what primitives you use, whether your objects are simple or in a skeleton hierarchy, whether you are using textures, and more ... so it is impossible to answer precisely.
Anyway, export often looks like this:
create the header and structure of the target file format
if the format has any directory structure, fill it and write it (IGES)
for sub-objects do not forget to add transformation matrices ...
write the chunks you need (points list, faces list, normals, ...)
With ASCII formats you can do this inside a string variable, so you can easily insert into or modify it. Do everything in memory and write the whole thing to the file at the end, which is fast and also adds the ability to work with memory instead of files. This is handy if you want to pack many files into a single package file like *.pak, or send/receive files through IPC or LAN ...
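As a concrete illustration of the build-in-memory-then-write-once idea, here is a minimal, untested Python sketch that emits a single box in the OBJ format the question mentions (chosen only because its text syntax is trivial; face winding/normals may need adjustment for a particular viewer):

# Build a tiny Wavefront OBJ cube in memory, then write it to disk in one go.
# Vertex lines start with "v", quad faces with "f"; face indices are 1-based.
def make_box_obj(cx, cy, cz, sx, sy, sz):
    hx, hy, hz = sx / 2.0, sy / 2.0, sz / 2.0
    verts = [(cx + dx * hx, cy + dy * hy, cz + dz * hz)
             for dz in (-1, 1) for dy in (-1, 1) for dx in (-1, 1)]
    faces = [(1, 2, 4, 3), (5, 7, 8, 6),   # bottom, top
             (1, 5, 6, 2), (2, 6, 8, 4),   # front, right
             (4, 8, 7, 3), (3, 7, 5, 1)]   # back, left
    lines = ["# simple box"]
    lines += ["v %f %f %f" % v for v in verts]
    lines += ["f %d %d %d %d" % f for f in faces]
    return "\n".join(lines) + "\n"

# example: one upright block of a hypothetical "Stonehenge" scene
with open("block.obj", "w") as f:
    f.write(make_box_obj(0.0, 0.0, 2.0, 1.0, 2.0, 4.0))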
[Edit1] more about IGES
file format specs
I learned IGES from this PDF ... I have no clue where I got it from, but this was the first valid link I found on Google today. I am sure there is some non-registration link out there too. It is about 13.7 MB and the original name is IGES5-3_forDownload.pdf.
win32 viewer
this is a free IGES viewer. I do not like the interface and handling, but it works. It is necessary to have a functional viewer for testing yours ...
examples
here are many tutorial files for many entities; there are 3 sub-links (igs, peek, gif) where you can see each example file in more ways for better understanding.
exporting to IGES
you did not provide any info about your 3D mesh internal structure, so I cannot help with the export itself. There are many ways to export the same mesh, so pick the one that is closest to your app's 3D mesh representation. For example you can use:
point cloud
rotation surfaces
rectangle (QUAD) surfaces
border lines representation (non solid)
trim surface and many more ...
I was wondering if there was an existing API for tracking the tops of people's heads with the Kinect, e.g. when the Kinect is facing downwards from a ceiling.
If not, how might I implement such a thing with its depth data?
No. The Kinect expects to be facing a standing (or seated, given the appropriate flag) human. All APIs (official or 3rd party) that have a notion of skeleton tracking expect this.
If you wish to track someone from above, you will need to use a library such as OpenCV (or EmguCV, for C# development). Well, you don't have to, but they offer utilities to help with computer vision and image processing. These libraries don't care if you are using a Kinect or just a regular RGB camera.
Using the Kinect from above, you could use the depth data to help locate and track blobs. With the Kinect at a known distance from the floor, have a few people walk under it and see what z-coordinates you get out of it -- you can then assume that anything within a certain z-coordinate range is a person walking across the screen (vs. a cat, or something else).
You will need to use standard image processing techniques (see OpenCV reference above) to initially find the blobs within the image. Once found, the depth data from the Kinect might be useful but I think you'll find it isn't ultimately necessary if you're just watching people walk across the floor.
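A rough, untested sketch of that threshold-then-find-blobs approach with OpenCV's Python bindings (the floor distance and minimum height are made-up example values, and findContours is shown with its OpenCV 4 return signature):

import cv2
import numpy as np

FLOOR_MM = 3000        # assumed sensor-to-floor distance in mm
MIN_HEIGHT_MM = 1000   # ignore anything shorter than ~1 m (cats, boxes, ...)

def find_people(depth_mm):
    # depth_mm: 2D numpy array of depth values in millimetres, 0 = no reading
    # keep pixels close enough to the sensor to be a head/torso seen from above
    mask = ((depth_mm > 0) & (depth_mm < FLOOR_MM - MIN_HEIGHT_MM)).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    people = []
    for c in contours:
        if cv2.contourArea(c) > 500:                 # drop tiny noise blobs
            x, y, w, h = cv2.boundingRect(c)
            people.append((x + w // 2, y + h // 2))  # blob centre in pixels
    return people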
We built a Kinect-driven experience where the sensors had to point downward to detect users walking along a wall. We used openTSPS to do all the work of taking the camera input and doing blob detection and handing off tracked "persons" to (in our case) a Processing app. It works really well for us.
http://opentsps.com/
I like the idea of Nodebox and Processing, and would like to generate movies to visualize some data/algorithms. However, Nodebox exports extremely bloated QuickTime files with frame-by-frame images, and Processing only exports Java applications. I want to be able to export movies that don't take a gigabyte a minute of disk space. Perhaps something like SVG animations or ActionScript, which stores the vector graphics definition of the animation rather than frame images, would be better. Is there a framework that is as easy to program as Nodebox and Processing and can export "lean" movies?
Have you tried the MovieMaker library that ships with Processing?
Also, it should be fairly simple to save multiple frames using saveFrame().
This option has a couple of advantages:
If your sketch crashes at some point, you still have all the frames up to that point (unlike writing a .mov file)
It's fairly simple to put the frames back into a video file (see the sketch below), and you also have control over playback speed and can easily do a bit of editing if needed.
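For stitching the saved frames back into a movie, one common route (my assumption here, not something from the original answer) is ffmpeg, e.g. driven from a small Python script; the frame pattern below matches saveFrame("frames/frame-####.png"):

# Stitch Processing's saveFrame() output into an MP4 with ffmpeg
# (ffmpeg must be installed separately; paths and rates are examples).
import subprocess

subprocess.run([
    "ffmpeg",
    "-framerate", "30",              # desired playback speed
    "-i", "frames/frame-%04d.png",   # matches saveFrame("frames/frame-####.png")
    "-c:v", "libx264",
    "-pix_fmt", "yuv420p",           # widest player compatibility
    "out.mp4",
], check=True)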
You can try to export a sequence of PDF files using createGraphics() to get vector output, but I'm not sure how stable/feasible this option is.
They are also changing the way this works moving towards 2.0, as they are switching to GSVideo over QuickTime...
Daniel Shiffman posted about it recently on his blog, but it's the only place I've heard about any changes to post-2.0 tactics (though he IS part of the inner circle, I know)
You can find that post at
http://www.shiffman.net/2011/12/28/night-8-rendering-out-as-image-sequence/
Also, if you are on OS X, you can try Syphon. See info here:
https://forum.processing.org/topic/syphon-integration-with-processing
Is it possible to rip game resources from a .smc file? Specifically art, music, sprites, etc. How does an emulator copy the system it emulates?
It's possible, in the sense that the information is all there in some manner. But an smc file is basically a compiled program with embedded resources, and there isn't even a standard compiler or standard format for storing the resources that you can start from.
And as far as image data goes, there is a good chance it will be in the palettized and tiled format used by the PPU, although it's also not unlikely that it will be compressed in some manner or another. But the palette will probably be almost impossible to find by static analysis, and the tile maps are probably generated from the level data rather than being explicitly stored anywhere. You may have better luck running it in an emulator and extracting the data from VRAM.
For music, the situation is even more discouraging. SNES audio is most akin to a MOD file: instruments are sampled, and then the individual samples are pitch-adjusted and mixed to generate the output sound. The SNES provides hardware to decode the instrument samples, manipulate the pitch, and mix them together, but no high-level program (i.e. no equivalent of a mod file "tracker") to play back actual songs. So you may be able to find the BRR-encoded instrument samples in the same manner you may be able to find the image tile data, but the song data can and will be formatted completely differently in different games. Again, your best luck may come from extracting the state of the APU as an SPC file and working with that.
As for your other question, see How do emulators work and how are they written? for a previous answer on that very topic.