I was wondering if there was an existing API for tracking the tops of people's heads with the Kinect, e.g. with the Kinect facing downwards from a ceiling.
If not, how might I implement such a thing with its depth data?
No. The Kinect expects to be facing a standing (or seated, given the appropriate flag) human. All APIs (official or 3rd party) that have a notion of skeleton tracking expect this.
If you wish to track someone from above, you will need to use a library such as OpenCV (or EmguCV, for C# development). Well, you don't have to, but they offer utilities to help with computer vision and image processing. These libraries don't care if you are using a Kinect or just a regular RGB camera.
Using the Kinect from above, you could use the depth data to help locate and track blobs. With the Kinect at a known distance from the floor, have a few people walk under it and see what z-coordinates you get out of it -- you can then assume that anything within a certain z-coordinate range is a person walking across the screen (vs. a cat, or something else).
You will need to use standard image processing techniques (see OpenCV reference above) to initially find the blobs within the image. Once found, the depth data from the Kinect might be useful but I think you'll find it isn't ultimately necessary if you're just watching people walk across the floor.
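As a rough sketch of that blob-finding step in Python with OpenCV (the depth range, blob-size threshold, and ceiling height here are assumptions; you would feed in your own Kinect depth frames):

```python
import cv2
import numpy as np

def find_people(depth_mm):
    """Find person-sized blobs in a top-down depth frame (values in millimetres)."""
    # Assume heads/torsos sit between 1.2 m and 2.2 m above the floor; with the
    # sensor mounted ~3 m up, that is roughly 800-1800 mm from the camera.
    mask = cv2.inRange(depth_mm, 800, 1800)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # OpenCV 4.x return signature; older versions return an extra value.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    people = []
    for c in contours:
        if cv2.contourArea(c) > 500:                  # ignore small noise blobs
            x, y, w, h = cv2.boundingRect(c)
            people.append((x + w // 2, y + h // 2))   # blob centre in pixels
    return people
```

From there you can track blob centres frame to frame with simple nearest-neighbour matching, or hand them to a proper tracker.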
We built a Kinect-driven experience where the sensors had to point downward to detect users walking along a wall. We used openTSPS to do all the work of taking the camera input and doing blob detection and handing off tracked "persons" to (in our case) a Processing app. It works really well for us.
http://opentsps.com/
I am tasked with something seemingly trivial: finding out how "noisy" a given recording is.
The recording was made with a voice recorder, an OLYMPUS VN-733 PC, which was fairly cheap (I am not advertising it; I only mention it because I am in no way aiming for anything "professional" here, I simply need to solve a seemingly simple problem).
To preface this, I have already obtained several datasets from different outdoor locations, in particular parks and near-road recordings, i.e. the noise that exists at these specific locations. The goal is then to compare this noise, on average, between the locations.
In other words:
I must find out how noisy location A is compared to locations B and C.
I made 1-minute recordings at each location so that at least the time span of a recording is comparable across locations (and I used the very same voice recorder at all positions, at the same height etc.).
A sample file can be found at:
http://shevegen.square7.ch/test.mp3
(This may eventually be moved later on; it just serves as an example of how these recordings sound right now. I am unhappy about the noisy clipping sound at the start; ideally I would only capture the background noise of the cars etc., but for now this must suffice.)
Now my specific question is: how can I find out how "noisy" or "loud" this recording is?
The primary goal is to compare it to the other .mp3 files, which would suffice for my purpose just fine. Ideally, though, it would be nice to calculate on average how "loud" every individual .mp3 is and then compare it to the other ones (there are several recordings per geolocation, so I could even merge them together).
There are some similar questions, but I was not able to find one in particular that answers this in an objective manner, or perhaps I did not understand the problem at hand. I already have all the audio datasets, but I have no idea how to find out how "loud" any one of them is individually; there are smartphone apps that claim to do this automatically, but since I do not have a smartphone, this is a dead end for me.
Any general advice will be much appreciated.
Noise is a difficult notion to define, so I will focus on loudness instead.
You could compute the energy of each file. For that, you need to access the samples of the audio signal (generally via a built-in function or library of your programming language). Then you can compute the RMS energy of the signal.
That would be the most basic processing.
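As a minimal sketch of that idea in Python, assuming the recordings have first been converted from .mp3 to WAV (the file names below are hypothetical):

```python
import wave
import numpy as np

def rms_dbfs(path):
    """Return the RMS level of a WAV file in dB relative to full scale."""
    with wave.open(path, "rb") as wf:
        raw = wf.readframes(wf.getnframes())
    # Assumes 16-bit PCM; adjust the dtype for other sample widths.
    samples = np.frombuffer(raw, dtype=np.int16).astype(np.float64)
    samples /= 32768.0                       # normalise to [-1, 1]
    rms = np.sqrt(np.mean(samples ** 2))     # root-mean-square energy
    return 20 * np.log10(rms + 1e-12)        # dBFS, guarded against log(0)

# Compare locations by their average level (hypothetical file names):
for f in ["park_A.wav", "road_B.wav", "road_C.wav"]:
    print(f, rms_dbfs(f))
```

A higher (less negative) dBFS value means a louder recording on average, which is enough to rank your locations against each other.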
I have never tried this, but I am just curious whether there is any possibility of detecting ads in audio streams? I mean aside from machine learning or something like that. Is there something specific about the byte stream during adverts, maybe some kind of different loudness value?
From a purely audio standpoint, this isn't possible. There is nothing distinguishable between an advertisement and other audio content. Sure, you could argue that a station playing music will have different spectral characteristics than when talking comes on for an advertisement, but what about ads that also play music? How do you distinguish between an announcer and someone reading an ad? What if the ad is embedded in normal content?
Now, some stations do provide metadata which occasionally contains ad information. If you look at the length of a particular content item, your ads are usually going to be under a minute, or 30 seconds. How you get this metadata and deal with it depends on the kind of stream you're working with.
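As a very rough sketch of that duration heuristic (the item structure below is an assumption; what metadata you actually get, and how, depends entirely on the stream):

```python
# Each item is assumed to be (title, duration_in_seconds), already parsed
# from whatever metadata the stream exposes.
def likely_ads(items, max_ad_seconds=60):
    """Flag metadata items short enough to plausibly be ad breaks."""
    return [title for title, duration in items if duration <= max_ad_seconds]

print(likely_ads([("Song A", 214), ("Station promo", 28), ("Song B", 187)]))
```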
There are techniques emerging to do this and they tend to leverage databases of known adverts to get around the theoretical problems that Brad correctly highlights in his answer.
One of the references below, however, uses a technique based on detecting slight differences in the audio when an ad starts as the initial detection trigger.
Some techniques also use both audio and visual streams to aid detection; for example, the Google paper below first uses audio matching and then the video to validate/verify.
Some sources that might be worth looking at for anyone interested in this area (I realise it is an old question but it is still topical):
http://www.xavieranguera.com/papers/cimca_2008.pdf
http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/55.pdf
https://www.audiblemagic.com/wp-content/uploads/2014/02/ad_detection_datasheet_150406.pdf
I'm struggling to choose between a vast number of audio programming languages and APIs. I'm very (totally) new to audio programming so please bear with me.
Software
I need to be able to:
Alter volume of different sounds before outputting them to anything (these sounds can have a variety of different origins, for example mp3s and microphone input)
Phase-shift sounds
Superimpose sounds that I have tweaked (as per items 1 and 2)
Control the output to each of 8 channels independently of one another
Make this all happen on Windows 7
These capabilities need to be abstracted behind a graphical frontend that I will probably make myself. What I want to be able to do is create 'sound sources' and move them around a 3D environment, along pre-defined trajectories and/or in relation to the movement of whoever is inside the rig. The reason I want to do pitch bending is so I can mess with red-shift stuff.
I don't want to have to construct full tracks beforehand and just play them. I want the sound that is played to depend on external input from sensors as well as what I am doing on the frontend.
As far as I know this means I can't use any existing full audio-making app.
The Question
I've been looking around for the API or language I should use, and I have not drawn a blank; quite the opposite, actually. I'm struggling to narrow down my search. A lot of my problem stems from the fact that I have no experience in audio programming.
So, does anyone know off-hand of an API or language that meets my criteria?
Hardware stuff and goals
(I left this until last because I'm not sure how relevant it is)
My goal is to make three rings of speakers at different heights and to have enough control over them to be able to simulate any number of 'sound sources' within the array. The idea is to have someone stand in the middle of the rig and be able to make it sound like there are lots of things moving around them. To get this working I'm planning on doing a little trig and using 8 channels of audio from my PC. The maths is pretty straightforward; it's just the rest that I need to worry about.
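For what it's worth, the kind of trig involved might look roughly like this naive distance-based sketch for one ring of 8 speakers (the ring radius, roll-off, and normalisation are all arbitrary assumptions, not a proper amplitude-panning algorithm):

```python
import math

# Hypothetical layout: 8 speakers on a ring of radius 2 m around the listener.
SPEAKERS = [(2 * math.cos(2 * math.pi * i / 8),
             2 * math.sin(2 * math.pi * i / 8)) for i in range(8)]

def channel_gains(src_x, src_y, rolloff=1.0):
    """Per-speaker gain for one virtual sound source at (src_x, src_y)."""
    gains = []
    for sx, sy in SPEAKERS:
        d = math.hypot(src_x - sx, src_y - sy)
        gains.append(1.0 / (1.0 + rolloff * d))   # closer speaker => louder
    total = sum(gains)
    return [g / total for g in gains]             # normalise so gains sum to 1

print(channel_gains(1.0, 0.5))
```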
What I want to do next is attach a bunch of cameras to the thing and do some simple image recognition stuff to be able to 'attach sound sources' to different objects. Eg. If someone is standing in the right place it can be made to seem as though all red balls quack like a duck, and all orange ones moan hauntingly.
This is not to detract from Richard Small's answer, but to comment on some of the other options out there:
If you are looking for something higher-level with which you can prototype and develop this faster, you want Max/MSP or its open-source competitor Pure Data. These are designed for musicians who are technically minded, but not so much for programmers. As a result, you can build this sort of thing quickly and efficiently.
You also have some lower-level options: PortAudio can handle your audio I/O, but you would have to do the sound generation, effects and so on yourself or with other libraries. Cinder and openFrameworks both provide interfaces for audio, cameras, and other stuff for "creative programming". I'm afraid I don't know whether they meet your full requirements, but they are powerful and popular for this sort of thing, so I encourage you to look at them.
The two major ones these days tend to be
WWise
WWise Download Link
FMOD
FMOD Download Link
These two engines may even in fact be overkill for what you need, but I can almost guarantee that they will be capable of anything you require.
I've been studying 3D graphics on my own for a while now and I want to get a greater understanding of just how everything works. What I would like to do is to create a simple game without using DirectX or OpenGL. I understand most of the math I believe, but the problem I am running up against is I do not know how to get control of the pixels being displayed in a window.
How do I specify what color I want each pixel in my window to be?
I understand I will probably run into issues with buffers and image shearing and probably terrible efficiency problems, but I want to create my own program so that I can see, from the very lowest level of a high-level language, how the rendering process works. I really have no idea where to start, though. I've figured out how to output BMPs, but I would like to have a running program spitting out 20+ frames per second. How do I accomplish this?
You could pick an environment that allows you to fill an array with pixel values and display it as a bitmap. This way you come closest to poking RGB values into video memory. WPF, Silverlight, and HTML5/JavaScript can all do this. If you do not make it full screen, these technologies should suffice for now.
In WPF and Silverlight, use the WriteableBitmap.
In HTML5, use the canvas
Then it is up to you to implement the logic to draw lines, circles, bezier curves, 3D projections.
This is a lot of fun and you will learn a lot.
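As one illustration of the "implement the drawing logic yourself" part, here is a sketch of Bresenham's line algorithm writing directly into such a pixel array (the array and colour format are assumptions; in WPF you would copy the array into a WriteableBitmap, in HTML5 into the canvas ImageData):

```python
def draw_line(pixels, x0, y0, x1, y1, color):
    """Plot a line into a 2D pixel array using Bresenham's algorithm."""
    dx = abs(x1 - x0)
    dy = -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        pixels[y0][x0] = color            # set one pixel
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy

# Hypothetical 64x64 backing store of packed RGB values:
canvas = [[0] * 64 for _ in range(64)]
draw_line(canvas, 2, 3, 60, 40, 0xFFFFFF)
```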
I'm reading between the lines that you're more interested in having full control over the rendering process from a low level, rather than having a specific interest in how to achieve that on one specific platform.
If that's the case, then you will probably get good bang for your buck looking at a library like SDL, which provides you with a frame buffer that you can render to directly but abstracts away a lot of the platform-specific issues. It has been around for quite a while, and there are some good tutorials to give you an idea of whether it's the kind of thing you're looking for; see this tutorial and the subsequent one in the same series, which should be enough to get you up and running.
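If Python is an option, pygame (which wraps SDL) lets you try the same frame-buffer idea in a few lines; this is only a minimal sketch, not tuned for performance:

```python
import numpy as np
import pygame

WIDTH, HEIGHT = 320, 240
pygame.init()
screen = pygame.display.set_mode((WIDTH, HEIGHT))
clock = pygame.time.Clock()

t = 0
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    # "Poke" the pixels yourself: build a WIDTH x HEIGHT x 3 RGB array.
    x = np.arange(WIDTH)[:, None]
    y = np.arange(HEIGHT)[None, :]
    pixels = np.zeros((WIDTH, HEIGHT, 3), dtype=np.uint8)
    pixels[..., 0] = (x + t) % 256          # scrolling red gradient
    pixels[..., 1] = y % 256                # static green gradient

    pygame.surfarray.blit_array(screen, pixels)   # copy the array to the window
    pygame.display.flip()
    clock.tick(30)                          # aim for ~30 frames per second
    t += 2

pygame.quit()
```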
You say you want to create some kind of rendering engine, meaning designing your own pipeline and matrix classes, which you then use to transform 3D coordinates to 2D points.
Once you have the 2D points you've been looking for, you can, for instance on Windows, select a brush and draw your triangles while colouring them at the same time.
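As a rough illustration of that 3D-to-2D step, a simple perspective projection (the focal length and screen size here are arbitrary assumptions):

```python
def project(x, y, z, focal=200.0, width=640, height=480):
    """Project a 3D camera-space point onto 2D screen coordinates.

    Assumes the camera looks down +z and the point is in front of it (z > 0).
    """
    sx = (x * focal / z) + width / 2     # perspective divide, then centre
    sy = (-y * focal / z) + height / 2   # flip y so +y points up on screen
    return sx, sy

# A triangle one unit in front of the camera:
triangle = [(-0.5, -0.5, 1.0), (0.5, -0.5, 1.0), (0.0, 0.5, 1.0)]
print([project(*p) for p in triangle])
```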
I do not know why you would need bitmaps, but if you want to practise texturing, you can also do that yourself, although of course on a weak computer this might cut your frames per second significantly.
If your aim is to understand how rendering works at the lowest level, this is without doubt good practice.
Is it possible to rip game resources from a .smc file? Specifically art, music, sprites, etc. How does an emulator copy the system it emulates?
It's possible, in the sense that the information is all there in some manner. But an smc file is basically a compiled program with embedded resources, and there isn't even a standard compiler or standard format for storing the resources that you can start from.
And as far as image data goes, there is a good chance it will be in the palettized and tiled format used by the PPU, although it's also not unlikely that it will be compressed in some manner or another. But the palette will probably be almost impossible to find by static analysis, and the tile maps are probably generated from the level data rather than being explicitly stored anywhere. You may have better luck running it in an emulator and extracting the data from VRAM.
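To give an idea of what that tiled format looks like, here is a rough sketch of decoding one uncompressed 4-bits-per-pixel SNES tile (32 bytes, four bitplanes) into palette indices; this assumes the standard PPU tile layout and that you have already located uncompressed tile data somewhere in the ROM or in a VRAM dump:

```python
def decode_4bpp_tile(tile_bytes):
    """Decode one 32-byte SNES 4bpp tile into an 8x8 grid of palette indices."""
    assert len(tile_bytes) == 32
    pixels = [[0] * 8 for _ in range(8)]
    for row in range(8):
        plane0 = tile_bytes[row * 2]          # planes 0 and 1 are interleaved
        plane1 = tile_bytes[row * 2 + 1]      # in the first 16 bytes,
        plane2 = tile_bytes[16 + row * 2]     # planes 2 and 3 in the last 16
        plane3 = tile_bytes[16 + row * 2 + 1]
        for col in range(8):
            bit = 7 - col                     # bit 7 is the leftmost pixel
            pixels[row][col] = (((plane0 >> bit) & 1)
                                | (((plane1 >> bit) & 1) << 1)
                                | (((plane2 >> bit) & 1) << 2)
                                | (((plane3 >> bit) & 1) << 3))
    return pixels
```

Even with the tiles decoded, you still need the matching palette (CGRAM) to get real colours, which is why dumping VRAM and CGRAM from a running emulator is usually the more practical route.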
For music, the situation is even more discouraging. SNES audio is most akin to a MOD file: instruments are sampled, and then the individual samples are pitch-adjusted and mixed to generate the output sound. The SNES provides hardware to decode the instrument samples, manipulate the pitch, and mix them together, but no high-level program (i.e. no equivalent of a mod file "tracker") to play back actual songs. So you may be able to find the BRR-encoded instrument samples in the same manner you may be able to find the image tile data, but the song data can and will be formatted completely differently in different games. Again, your best luck may come from extracting the state of the APU as an SPC file and working with that.
As for your other question, see How do emulators work and how are they written? for a previous answer on that very topic.