Arduino Laser light detection - Photoresistor or Laser sensor module?

I understand you can use either a basic photoresistor or this non-modulated laser sensor, but which is more reliable for basic light detection?
I want to detect the beam from a basic laser pen, with a simple on/off trigger rather than measuring intensity.
Laser Sensor: Sensor Module Link
The laser sensor board shows a DS18B20 sensor.
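For the simple on/off case, a photoresistor (LDR) in a voltage divider is usually enough, and the trigger logic is a few lines. Below is a minimal, hedged Arduino sketch; the pin choice and the threshold value are assumptions you would calibrate by reading the sensor with the laser on and off:

```cpp
// Minimal laser-beam trigger: photoresistor in a voltage divider on A0.
// SENSOR_PIN and THRESHOLD are assumptions -- calibrate THRESHOLD by
// noting analogRead() values with the laser on and off, then picking a
// value in between.
const int SENSOR_PIN = A0;
const int THRESHOLD  = 600;   // 0..1023, depends on divider and ambient light

void setup() {
  Serial.begin(9600);
}

void loop() {
  int level = analogRead(SENSOR_PIN);        // higher when the beam hits the LDR
  bool beamDetected = (level > THRESHOLD);   // simple on/off decision
  Serial.println(beamDetected ? "beam" : "no beam");
  delay(50);
}
```

Many laser receiver modules do the same comparison on the board and expose a digital output pin, so either approach can work for a simple trigger.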

Related

Azure Kinect DK raw data

I'm considering using the Azure Kinect for a product that I am working on, but something I saw in one of the documents gives me pause:
"The depth camera transmits raw modulated IR images to the host PC. On
the PC, the GPU accelerated depth engine software converts the raw
signal into depth maps."
I'm unsure if this is the result of a non-technical writer trying to interpret what an engineer told them or if this should be interpreted as literal truth.
I'm assuming that the LEDs are being modulated and that the phase shift is what is used to determine the depths. If the sensor inside the Kinect is just a regular camera with a fast, precisely timed global shutter, I suppose you could take several images in a row and work out the phase shift of the received light on a pixel-by-pixel basis.

I'm working with a resource-limited embedded computer that may not have the processing power to compute the depth for every pixel in real time. Is the host computer really determining the depth, or does the Kinect give you distances directly?
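For context, here is a hedged sketch of the textbook four-phase demodulation used by continuous-wave ToF cameras (an illustration, not necessarily what the Azure Kinect depth engine actually runs): each pixel is sampled at four demodulation phases, the phase shift is recovered with an atan2, and the phase is converted to distance.

```cpp
#include <cmath>

// Textbook 4-phase continuous-wave ToF demodulation (an illustration, not
// Microsoft's depth engine). a0..a3 are one pixel's brightness samples taken
// with the shutter demodulated at 0, 90, 180, and 270 degrees relative to
// the IR modulation.
double depthFromPhases(double a0, double a1, double a2, double a3,
                       double modulationHz) {
  const double C  = 299792458.0;               // speed of light, m/s
  const double PI = 3.14159265358979323846;
  double phase = std::atan2(a3 - a1, a0 - a2); // phase shift, -pi..pi
  if (phase < 0) phase += 2.0 * PI;            // wrap to 0..2*pi
  // The light travels to the target and back, hence the factor of 2:
  // d = c * phase / (4 * pi * f_mod)
  return C * phase / (4.0 * PI * modulationHz);
}
```

This per-pixel math is cheap; the heavy part of a real depth engine is calibration, phase unwrapping across multiple modulation frequencies, and filtering, which is plausibly why it is offloaded to the host GPU.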

'Mono' FFT Visualization of a Stereo Analog Audio Source

I have created a really basic FFT visualizer using a Teensy microcontroller, a display panel, and a pair of headphone jacks. I used kosme's FFT library for Arduino: https://github.com/kosme/arduinoFFT
Analog audio flows into the headphone input and to a junction where the microcontroller samples it. That junction is also connected to an audio out jack so that audio can be passed to some speakers.
This is all fine and good, but currently I'm only sampling the left audio channel. Any time music is stereo separated, the visualization cannot account for any sound on the right channel. I want to rectify this but I'm not sure whether I should start with hardware or software.
Is there a circuit I should build to mix the left and right audio channels? I figure I could do something like so:
But I'm pretty sure that my schematic is misguided. I included a bias voltage to try to DC-couple the audio signal so that it properly rides over the diodes. Making sure the output matches the input is important to me, though.
Or maybe should this best be approached in software? Should I instead just be sampling both channels separately and then doing some math to combine them?
Combining the stereo channels on one branch of the split without also combining them on the other two branches is very difficult in hardware. Working in software is much easier.
If you take two sets of samples, you've doubled the amount of math that the microcontroller needs to do.
But if you take readings from both pins, add them together, and divide by two, you have one set of samples that represents the 'mono' signal.
Keep in mind that human ears have an uneven response to sound volumes, so a 'medium' volume reading on both pins, summed and halved, will result in a 'lower-medium' value. It's better to divide by 1.5 or 1.75 if you can spare the cycles for more complicated division.
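A minimal sketch of that software approach for a Teensy/Arduino-style board; the pin assignments and buffer size are assumptions:

```cpp
// Sample left and right on two analog pins and average them into one
// 'mono' buffer to feed the FFT. LEFT_PIN, RIGHT_PIN, and N are assumptions.
// In a real sketch you would pace the loop with a timer so the sample
// rate stays constant.
const int LEFT_PIN  = A0;
const int RIGHT_PIN = A1;
const int N = 256;                 // number of samples per FFT frame
double monoSamples[N];

void sampleMonoFrame() {
  for (int i = 0; i < N; i++) {
    int left  = analogRead(LEFT_PIN);
    int right = analogRead(RIGHT_PIN);
    // Sum and halve as described; divide by 1.5-1.75 instead if you want
    // the loudness compensation suggested above.
    monoSamples[i] = (left + right) / 2.0;
  }
}
```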

How do I recognize a unique sound in a noisy environment?

I am developing an app to detect when elderly people in a daycare center fail to unlock their rooms with their IC cards.
The room doors have an electronic circuit that emits a beep to signal that the user failed to unlock the room. My goal is to detect this beep.
I have searched a lot and found some possibilities:

1. Clip the beep sound, use it as a template signal, and compare it with the test signal (the complete human-door interaction audio clip) using convolution, matched filters, DTW, or the like to measure their similarity. What do you recommend, and how would I implement it?
2. Analyze the FFT of the beep sound to see whether it occupies a frequency band distinct from the background noise. I do not understand how to do this exactly.
3. Check whether the beep sound forms a peak in the frequency spectrum that is absent from the background noise, and if so implement a frequency filter. I clipped the beep sound and got the spectrogram shown in the figure, but I cannot interpret it. Could you give me a detailed explanation of the spectrogram?

What is your recommendation? If you have another, more efficient method for beep detection, please explain.
There is no need to calculate the full spectrum. If you know the frequency of the beep, you can just do a single-point DFT and continuously check the level at that frequency. If you detect a rising and then a falling edge within a given interval, it is almost certainly the beep.
You might want to have a look at the Goertzel algorithm: it is an efficient way to compute a single-point DFT continuously.
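A compact, hedged implementation of Goertzel for one block of samples; targetHz, sampleHz, and the block length are parameters you would tune to your beep:

```cpp
#include <cmath>

// Goertzel algorithm: squared magnitude of a single DFT bin, useful for
// continuously watching one known beep frequency without a full FFT.
double goertzelPower(const float *samples, int n,
                     double targetHz, double sampleHz) {
  const double PI = 3.14159265358979323846;
  int k = (int)(0.5 + (n * targetHz) / sampleHz);   // nearest DFT bin
  double omega = (2.0 * PI * k) / n;
  double coeff = 2.0 * std::cos(omega);
  double s1 = 0.0, s2 = 0.0;
  for (int i = 0; i < n; i++) {
    double s = samples[i] + coeff * s1 - s2;        // the Goertzel recurrence
    s2 = s1;
    s1 = s;
  }
  // Squared magnitude of bin k (phase is not needed for level detection).
  return s1 * s1 + s2 * s2 - coeff * s1 * s2;
}
```

Run this over short, overlapping blocks and flag a beep when the power rises above and then falls back below a threshold within the expected beep duration.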

Using microphone input to create a music visualization in real time on a 3D globe

I am involved in a side project that has a loop of LEDs, around 1.5 m in diameter, with a rotor on the bottom that spins the loop. A Raspberry Pi controls the LEDs so that they create what appears to be a 3D globe of light. I am interested in taking a microphone input and turning it into a column of pixels rendered on the loop in real time. The goal is to see if we can have it react to music in real time. So far I've come up with this idea:
Use an FFT to quickly turn the input sound into a mapping from pixels to colors based on the amplitude at each frequency, so the equator of the globe would respond to the strength of the lower-frequency sound, progressing upward toward the poles, which would respond to high-frequency sound.
I can think of a few potential problems, including:
Performance on a Raspberry Pi. If the response lags too far behind the music, it won't seem to the observer to be responding to the specific song they are also hearing.
Without detecting the beat or some overall characteristic of the music that people perceive, it might be difficult for observers to tell that the output is correlated with the music.
The rotor has different speeds, so the image is only stationary if the rate of spin is matched perfectly to the refresh rate of the LEDs. This is a problem, but also possibly helpful because I might be able to turn down both the refresh rate and the rotor speed to reduce the computational load on the raspberry pi.
With that backstory, I should probably now ask a question: in general, how would you go about doing this? I have some experience with parallel computing and numerical methods, but I am totally ignorant of music and tone and the like.

Part of my problem is that, while I know the Raspberry Pi is the newest model, I am not sure what its parallel capabilities are. I need to find a few Linux-friendly tools or libraries that can do an FFT on an ARM processor and handle the post-processing in real time. I think a delay of about 0.25 s would be acceptable. I feel like I'm in over my head, so I thought I'd ask you for input.
Thanks!
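As a starting point for the FFT-to-pixels idea above, here is a hedged sketch using FFTW (package libfftw3-dev), which runs fine on ARM Linux; NUM_LEDS, the frame size, and the equator-to-poles band mapping are assumptions:

```cpp
#include <fftw3.h>
#include <cmath>

// Map FFT magnitudes onto one column of LEDs: lowest band at the equator
// (middle of the column), highest band toward both poles. N and NUM_LEDS
// are placeholder values.
const int N = 1024;          // audio samples per frame
const int NUM_LEDS = 64;     // LEDs in one rendered column

void renderColumn(const double *audio, double *ledBrightness) {
  static double in[N];
  static fftw_complex out[N / 2 + 1];
  static fftw_plan plan = fftw_plan_dft_r2c_1d(N, in, out, FFTW_ESTIMATE);

  for (int i = 0; i < N; i++) in[i] = audio[i];
  fftw_execute(plan);

  // Split the spectrum into NUM_LEDS/2 bands and mirror them so the lowest
  // band drives the equator and the highest band drives both poles.
  const int half = NUM_LEDS / 2;
  const int binsPerBand = (N / 2) / half;
  for (int band = 0; band < half; band++) {
    double sum = 0.0;
    for (int b = band * binsPerBand; b < (band + 1) * binsPerBand; b++)
      sum += std::hypot(out[b][0], out[b][1]);     // magnitude of bin b
    double level = sum / binsPerBand;
    ledBrightness[half - 1 - band] = level;        // northern hemisphere
    ledBrightness[half + band]     = level;        // southern hemisphere
  }
}
```

At N = 1024 and a 44.1 kHz sample rate, each frame covers about 23 ms, so staying under your ~0.25 s latency budget is mostly a matter of keeping the LED update path fast.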

Analysing RSSI on Wi-Fi Networks

I am using scapy and a Wi-Fi card in monitor mode to extract data from probe requests and beacon frames travelling across a Wi-Fi network. Is it possible to use the RSSI to estimate the distance between the device sending the packets and the device I am using to pick them up? How does the RSSI value work - does it decrease over the life of the packet?
The RSSI is a measurement of the power present in a received radio signal.
It is not a protocol mechanism (like the TTL).
Moreover, the value you get reflects a physical quantity (the received radio signal power) that is not determined by distance alone. For example, a distant station with high transmit power can show a stronger RSSI than a nearby station with low transmit power.
You can use RSSI to help estimate the position of the access point, but you need more information, for example angle of arrival (AoA) or direction of arrival (DoA). In that case RSSI is only supplementary information :-)
To estimate the position you need a good directional antenna (not omnidirectional), plenty of time for many measurements, good knowledge of math and physics, and patience. And the results will still not be very good :-)
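If you do want a rough number anyway, the usual starting point is the log-distance path-loss model; the reference RSSI at 1 m and the path-loss exponent below are assumptions you must calibrate per environment, and the result is still only a coarse estimate:

```cpp
#include <cmath>

// Log-distance path-loss model: distance grows exponentially with the
// drop in RSSI. rssiAt1m is the RSSI measured 1 m from the transmitter;
// pathLossExponent is ~2 in free space and roughly 2.7-4.0 indoors.
// Both are environment-specific assumptions.
double estimateDistanceMeters(double rssiDbm,
                              double rssiAt1m = -40.0,
                              double pathLossExponent = 3.0) {
  return std::pow(10.0, (rssiAt1m - rssiDbm) / (10.0 * pathLossExponent));
}
```

Multipath and transmit-power differences between devices easily swamp this model, which is why the answers above stress additional information like AoA.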
