Microsoft Band plethysmograph - sensors

I am looking to get the raw green pleth signal from the Band 2. I have algorithms to extract several physiological phenomena from the pleth, but the documentation does not appear to expose or show this signal. Does anyone have any ideas? Thanks.

Which values are you trying to extract when you say the raw green pleth signal? Lung capacity? Blood volume changes?

I can extract many signals. We cannot divulge exactly which ones at this time.

Related

Which sensor should I use to find the distance to a particular device using IoT sensors?

I want to create a tracker-like device to find objects such as keys or other important things. I want to attach the sensor to a valuable item so I can find where I left it. I can't use a motion sensor, an ultrasonic sensor, or an IR proximity sensor, because they measure distance in only one direction. I need to find the distance to the object from any direction.
Consider using one of these two:
GPS
RDF (Radio Direction Finding)
The first is great if you have open sky and the client (the searching device) can navigate via GPS (think of a smartphone).
The second is good indoors, but it can be hard to program and to find parts for. Look at SoloShot: it follows a beacon attached to a person of interest. I don't have their spec, but I'd bet it is a kind of RDF. Aircraft avionics also build on the RDF idea. Read the Wikipedia article on RDF.
Others may come up with other ideas; those were the first that popped into my mind.

Comparing audio recordings

I have 5 recorded WAV files. I want to compare new incoming recordings with these files and determine which one they resemble most.
In the final product I need to implement this in C++ on Linux, but for now I am experimenting in Matlab. I can view FFT plots very easily, but I don't know how to compare them.
How can I compute the similarity of two FFT plots?
Edit: There is only speech in the recordings. Actually, I am trying to identify the answering-machine responses of a few telecom companies. It is enough to distinguish two messages: "this person cannot be reached at the moment" and "this number is no longer in use".
This depends a lot on your definition of "resembles most"; depending on your use case it can mean a lot of things. If you just want to compare the bare spectra of the whole files, you can simply correlate the values returned by the two FFTs.
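A minimal sketch of that whole-file comparison in Python (numpy and scipy assumed; the file names are placeholders):

import numpy as np
from scipy.io import wavfile

def spectrum(path):
    # Magnitude spectrum of a whole mono WAV file.
    rate, samples = wavfile.read(path)
    if samples.ndim > 1:                       # mix stereo down to mono
        samples = samples.mean(axis=1)
    return np.abs(np.fft.rfft(samples))

def spectral_correlation(path_a, path_b):
    # Pearson correlation of two magnitude spectra, truncated to equal length.
    a, b = spectrum(path_a), spectrum(path_b)
    n = min(len(a), len(b))
    return np.corrcoef(a[:n], b[:n])[0, 1]

# Pick the reference whose spectrum correlates best with the incoming recording.
references = ["msg1.wav", "msg2.wav", "msg3.wav", "msg4.wav", "msg5.wav"]
best = max(references, key=lambda r: spectral_correlation("incoming.wav", r))
print("closest match:", best)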
However, spectra change a lot when the files are warped in time. To deal with this, you need to compute a windowed FFT and compare the spectra window by window; that per-window comparison then defines the distance function you can use in a dynamic time warping (DTW) algorithm, as sketched below.
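As a rough illustration, here is a hand-rolled DTW over windowed magnitude spectra in Python (numpy assumed; the frame and hop sizes are arbitrary choices):

import numpy as np

def frame_spectra(samples, frame=1024, hop=512):
    # Windowed FFT: one magnitude spectrum per frame.
    window = np.hanning(frame)
    return [np.abs(np.fft.rfft(samples[i:i + frame] * window))
            for i in range(0, len(samples) - frame, hop)]

def dtw_distance(spectra_a, spectra_b):
    # Classic dynamic time warping over per-frame spectral distances.
    na, nb = len(spectra_a), len(spectra_b)
    cost = np.full((na + 1, nb + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, na + 1):
        for j in range(1, nb + 1):
            d = np.linalg.norm(spectra_a[i - 1] - spectra_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[na, nb]

The smaller the DTW distance, the more similar the two recordings, even if one of them is stretched or delayed in time.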
If you need perceptual resemblance, an FFT probably won't get you what you need; MFCCs of the recordings are most likely much closer to this problem. Again, you might need to calculate windowed MFCCs instead of MFCCs of the whole recording.
If you have musical recordings, you need completely different approaches again. There is a blog post that describes how Shazam works, so you might be able to find it on Google. Or, if you want real musical similarity, have a look at this book.
EDIT:
The best solution for the problem specified above would be the one described here (the "Shazam algorithm" mentioned above). That is, however, a bit complicated to implement, and an easier solution might do well enough.
If you know that there are only 5 possible incoming files, I would suggest first trying something as simple as the Euclidean distance between the two signals (in the time or Fourier domain). It is likely to give you a good result.
Edit: To cope with different possible start positions, try cross-correlating the incoming recording against each file and see which one gives the highest peak.
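A sketch of both suggestions in Python (numpy assumed): the Euclidean distance when the signals line up, and the cross-correlation peak when the start offsets differ:

import numpy as np

def euclidean_distance(a, b):
    # Distance between two 1-D signals, truncated to equal length.
    n = min(len(a), len(b))
    return np.linalg.norm(a[:n] - b[:n])

def correlation_peak(incoming, reference):
    # Height of the cross-correlation peak; robust to a shifted start.
    # Normalize so that louder recordings don't automatically win.
    a = incoming / (np.linalg.norm(incoming) + 1e-12)
    b = reference / (np.linalg.norm(reference) + 1e-12)
    return np.correlate(a, b, mode="full").max()

The file with the smallest distance (or the highest correlation peak) is the best candidate.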
I suggest you compute a simple sound parameter like the fundamental frequency. There are several methods of getting this value; I tried autocorrelation and the cepstrum, and for voice signals both worked fine. With such a function working you can do a time analysis and compare the two signals (the base, which you compare against, and the input, which you want to match) over a given frequency interval. Comparing several intervals against such criteria can tell you which base sample matches best.
Of course everything depends on what you mean by "resembles most". To compare the signals you can introduce other parameters like volume, noise, clicks, pitch...
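For example, a bare-bones autocorrelation pitch estimator in Python (numpy assumed; the 50-400 Hz search range is an assumption that suits speech):

import numpy as np

def fundamental_frequency(samples, rate, fmin=50, fmax=400):
    # Estimate F0 of a voiced frame from the autocorrelation peak.
    samples = samples - samples.mean()            # remove DC offset
    corr = np.correlate(samples, samples, mode="full")
    corr = corr[len(corr) // 2:]                  # keep non-negative lags
    lo, hi = int(rate / fmax), int(rate / fmin)   # lag range for fmax..fmin
    lag = lo + np.argmax(corr[lo:hi])
    return rate / lag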

Simple audio filter-bank

I'm new to audio filters, so please excuse me if I'm saying something wrong.
I would like to write code that can split audio stored as PCM samples into two or three frequency bands, do some manipulation on them (like modifying their levels) or analysis, and then reconstruct audio samples from the output.
As far as I have read, for this task I could either use FFT/IFFT and manipulate the complex spectrum, or use a time-domain filter bank such as the one used by the MP2 audio encoding format. Maybe a filter bank is the better choice; at least I read somewhere that it can be more CPU-friendly in real-time streaming environments. However, I'm having a hard time understanding the mathematics behind filter banks, and I have so far had no success finding example source code (preferably in Java or C/C++) on this topic.
Can somebody provide tips or links that would get me closer to an example filter bank?
Using an FFT to split an audio signal into a few bands is overkill.
What you need is one or two Linkwitz-Riley filters. These filters split a signal into a high- and a low-frequency part.
A nice property of this filter is that if you add the low- and high-frequency parts back together, you get almost the original signal. There will be a little phase shift, but the ear will not be able to hear it.
If you need more than two bands, you can chain the filters. For example, if you want to split the signal at 100 Hz and 2000 Hz, it would look somewhat like this in pseudo-code:
low = linkwitz-riley-low (100, input-samples)
temp = linkwitz-riley-high (100, input-samples)
mids = linkwitz-riley-low (2000, temp)
highs = linkwitz-riley-high (2000, temp)
and so on..
After splitting the signal you can, for example, amplify the three output bands (low, mids, and highs) and later add them back together to get your processed signal.
The filter sections themselves can be implemented using IIR filters. A Google search for "Linkwitz-Riley digital IIR" should give lots of good hits.
http://en.wikipedia.org/wiki/Linkwitz-Riley_filter
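For illustration, a minimal Python/scipy sketch of that chain, using the fact that a 4th-order Linkwitz-Riley section is two cascaded 2nd-order Butterworth filters (crossover points as in the pseudo-code above; random noise stands in for real PCM samples):

import numpy as np
from scipy.signal import butter, sosfilt

def lr4(samples, cutoff_hz, kind, rate):
    # 4th-order Linkwitz-Riley: a 2nd-order Butterworth applied twice.
    sos = butter(2, cutoff_hz, btype=kind, fs=rate, output="sos")
    return sosfilt(sos, sosfilt(sos, samples))

def split_three_bands(samples, rate):
    # Lows below 100 Hz, mids from 100 to 2000 Hz, highs above 2000 Hz.
    low   = lr4(samples, 100, "lowpass", rate)
    temp  = lr4(samples, 100, "highpass", rate)
    mids  = lr4(temp, 2000, "lowpass", rate)
    highs = lr4(temp, 2000, "highpass", rate)
    return low, mids, highs

# Example: boost the mids by 3 dB and recombine.
rate = 44100
samples = np.random.randn(rate)
low, mids, highs = split_three_bands(samples, rate)
processed = low + mids * 10 ** (3 / 20) + highs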
You should look up wavelets, especially Daubechies wavelets. They will let you do the trick: they are FIR filters, and they are really short.
Update
Downvoting with no explanation isn't cool. Additionally, I'm right: wavelets are filter banks, and their job is to do precisely what is described in the question. IMHO, that is. I've done it many times myself.
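For what it's worth, a short sketch of a wavelet band split with the PyWavelets package (the db4 wavelet and the two decomposition levels are arbitrary choices):

import numpy as np
import pywt

rate = 44100
samples = np.random.randn(rate)        # stands in for real PCM samples

# Two-level DWT: coeffs = [approximation, detail level 2, detail level 1],
# i.e. roughly the 0..fs/8, fs/8..fs/4 and fs/4..fs/2 bands.
coeffs = pywt.wavedec(samples, "db4", level=2)

coeffs[1] = coeffs[1] * 0.5            # attenuate the middle band by 6 dB
reconstructed = pywt.waverec(coeffs, "db4")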
There's a lot of filter source code to be found here

How to distinguish chords from single notes?

I am a bit stuck here, as I can't seem to find any algorithms for distinguishing whether a sound is a chord or a single note. I am working with the guitar.
Currently I am experimenting with taking the top 5 frequencies with the highest amplitudes and then determining whether they are harmonics of the fundamental (the one with the highest amplitude) or not. I am working on the theory that single notes contain more harmonics than chords, but I am unsure whether this is the case.
Another thing I am considering is adding in the amplitude values of the various harmonics, as well as comparing the notes that make up the 'supposed chord' against the FFT result.
Can you help me out here? It would be really appreciated. For now, I am working only on major and minor chords.
Thank you very much!
Chord recognition is still a research topic, and a good solution may require fairly sophisticated AI pattern-matching techniques. The International Society for Music Information Retrieval seems to run an annual contest on automatic-transcription-type problems; you can look up the conference and its research papers to see what has been tried and how well it works.
Also note that the fundamental pitch is not necessarily the frequency with the highest FFT amplitude; with a guitar, it very often is not.
You need to think about it in terms of the way we hear sound. Looking for the top 5 frequencies isn't going to do you any good.
You need to look at all frequencies whose amplitude is within (maximum amplitude)/sqrt(2) to determine the chord/not-chord aspect of the signal.
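A rough sketch combining both answers, in Python with numpy: collect every spectral peak within 1/sqrt(2) of the maximum, then test whether they all sit near integer multiples of the lowest one (a single note with its harmonics) or not (more likely a chord). The 3% harmonic tolerance is an assumption:

import numpy as np

def looks_like_single_note(samples, rate, tolerance=0.03):
    # True if all strong spectral peaks are harmonics of the lowest one.
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)

    threshold = spectrum.max() / np.sqrt(2)
    strong = freqs[(spectrum >= threshold) & (freqs > 20)]  # skip DC/rumble
    if len(strong) == 0:
        return True

    ratios = strong / strong.min()     # lowest strong peak = candidate F0
    # Harmonics sit near integer multiples of the fundamental.
    return bool(np.all(np.abs(ratios - np.round(ratios))
                       < tolerance * np.round(ratios)))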

Audio normalization/fixation?

I am using an audio fingerprinting technique to mark songs in long recordings, for example in recordings of radio shows. The fingerprinting mechanism works fine, but I have a problem with normalization (or downsampling).
Here you can see the same song twice, with different waveforms. I know I should apply some DC offset correction and use some high- and low-pass filters. I already do this with SoX, using highpass 1015 and lowpass 1015, and I use WaveGain to fix the volume and the DC offset. In that case the waveforms turn into the one below:
But even then, I can't get the same fingerprint. (I am not expecting 100% the same, but at least 50% would be good.)
So, what do you think? What can I do to fix the recordings so that they produce the same fingerprints? Maybe some audio filtering would work, but I don't know which filter to use. Can you help me?
By the way, here is the explanation of the fingerprinting technique:
http://wiki.musicbrainz.org/Future_Proof_Fingerprint
http://wiki.musicbrainz.org/Future_Proof_Fingerprint_Function
Your input waveforms appear to be clipping, so no amount of filtering is going to result in a meaningful "fingerprint". Make sure you collect valid input samples that have a reasonable dynamic range but do not clip.
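A quick sketch of such a check in Python (numpy and scipy assumed; the 0.999 full-scale threshold and the 0.1% cutoff are heuristics):

import numpy as np
from scipy.io import wavfile

def clipping_ratio(path, threshold=0.999):
    # Fraction of samples at or near full scale.
    rate, samples = wavfile.read(path)
    peak = np.iinfo(samples.dtype).max if samples.dtype.kind == "i" else 1.0
    return np.mean(np.abs(samples.astype(float)) >= threshold * peak)

if clipping_ratio("radio_show.wav") > 0.001:   # more than 0.1% clipped
    print("recording is clipping; re-capture at a lower input gain")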
