Calculate approximate distance using RSSI - Bluetooth

I'm working on a project that aims to measure the approximate distance between a single Raspberry Pi and a nearby smartphone.
The final goal of the project is to check whether there is a smartphone in the same room as the Raspberry Pi.
I have thought of two ways to implement this. The first is to measure distance using the RSSI value; the second is to calibrate the setup once, from many places inside and outside the room, and derive a threshold RSSI value.
I read that smartphones send Wi-Fi packets even when Wi-Fi is disabled. I thought I could use this to get the RSSI value from the transmitting smartphone (using Kismet passively) and check whether it is in the room. I could also use the Bluetooth RSSI.
How can I calculate distance using RSSI?

This is an open issue. Measuring distance from RSSI is easy in the ideal case; the main challenge is reducing the noise produced by multipath, reflected RF signals, and interference. In any case, you can convert RSSI to distance with the code below:
#include <math.h>

/*
 * RSSI is the received signal strength in dBm.
 * txPower is a transmitter parameter determined by its physical layer
 * and antenna, in dBm.
 * PL0 is the path loss at the reference distance, which you should
 * measure in a calibration stage:
 *   PL0 = txPower - RSSI;  // measured when distance == distance0 (1 m or more)
 * Returns the estimated distance in meters.
 *
 * RSSI follows the log-distance path loss model:
 *   RSSI = txPower - PL0 - 10 * n * log10(distance / distance0) - G(t)
 *   G(t) ~= 0   // this noise term is the main obstacle to better accuracy
 *   n = 2       // path loss exponent; 2 in free space
 *   distance0 = 1 (m)
 * Solving for distance:
 *   distance = 10 ^ ((txPower - RSSI - PL0) / (10 * n))
 *
 * Read more:
 * https://en.wikipedia.org/wiki/Log-distance_path_loss_model
 */
double rssiToDistance(int RSSI, int txPower, double PL0) {
    const double n = 2.0; /* path loss exponent (free space) */
    return pow(10, ((double)(txPower - RSSI) - PL0) / (10 * n));
}
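A minimal sketch of how the calibration stage and a live reading fit together; the txPower and RSSI readings below are hypothetical and would have to be measured on site:
#include <math.h>
#include <stdio.h>

double rssiToDistance(int RSSI, int txPower, double PL0); /* defined above */

int main(void) {
    int txPower  = 4;    /* hypothetical transmit power, dBm */
    int rssiAt1m = -55;  /* hypothetical reading at distance0 = 1 m */
    double PL0 = txPower - rssiAt1m;  /* calibration: path loss at 1 m */

    int rssi = -67;      /* hypothetical live reading from the phone */
    printf("estimated distance: %.2f m\n",
           rssiToDistance(rssi, txPower, PL0));
    return 0;
}
For the room-threshold approach you described, you would compare the live RSSI (or the estimated distance) against the threshold you recorded during calibration instead of printing it.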

Related

How to get the color of a specific pixel drawn using SDL_RenderDrawPoint() on SDL2?

SDL_SetRenderDrawColor(renderer, 255, 0, 0, 255);
SDL_RenderDrawPoint(renderer, (window_height / 2) + xxi[i], -yyi[i] + (window_width / 2));
SDL_RenderPresent(renderer);
Now I want to get the color of the (xxi[i], yyi[i]) point, but I don't know how to get that.
Use SDL_RenderReadPixels():
/**
 * Read pixels from the current rendering target to an array of pixels.
 *
 * **WARNING**: This is a very slow operation, and should not be used
 * frequently.
 *
 * `pitch` specifies the number of bytes between rows in the destination
 * `pixels` data. This allows you to write to a subrectangle or have padded
 * rows in the destination. Generally, `pitch` should equal the number of
 * pixels per row in the `pixels` data times the number of bytes per pixel,
 * but it might contain additional padding (for example, 24bit RGB Windows
 * Bitmap data pads all rows to multiples of 4 bytes).
 *
 * \param renderer the rendering context
 * \param rect an SDL_Rect structure representing the area to read, or NULL
 *             for the entire render target
 * \param format an SDL_PixelFormatEnum value of the desired format of the
 *               pixel data, or 0 to use the format of the rendering target
 * \param pixels a pointer to the pixel data to copy into
 * \param pitch the pitch of the `pixels` parameter
 * \returns 0 on success or a negative error code on failure; call
 *          SDL_GetError() for more information.
 */
extern DECLSPEC int SDLCALL SDL_RenderReadPixels(SDL_Renderer * renderer,
                                                 const SDL_Rect * rect,
                                                 Uint32 format,
                                                 void *pixels, int pitch);
Set the rect argument's width & height to 1 to read a single pixel.
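For example, a minimal sketch (the coordinates and the RGBA8888 format request here are assumptions; adapt them to your own target):
/* Read back the single pixel at (x, y). Do this before SDL_RenderPresent(),
 * since the backbuffer contents are not guaranteed to survive a present. */
int x = 0, y = 0; /* hypothetical pixel of interest */
SDL_Rect rect = { x, y, 1, 1 };
Uint32 pixel = 0;
if (SDL_RenderReadPixels(renderer, &rect, SDL_PIXELFORMAT_RGBA8888,
                         &pixel, sizeof(pixel)) == 0) {
    Uint8 r = (pixel >> 24) & 0xFF;
    Uint8 g = (pixel >> 16) & 0xFF;
    Uint8 b = (pixel >>  8) & 0xFF;
    Uint8 a =  pixel        & 0xFF;
}
The pitch is sizeof(pixel) because a 1x1 read has exactly one 4-byte row.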

Redhawk FEI sample rate tolerance check failure?

I'm building a Redhawk 2.1.2 FEI device and encountering a tolerance check failure when I try to do an allocation through the IDE (I haven't tried the Python interface or anything else). The request is for 8 MHz and I get a return value of 7999999.93575246725231409 Hz, which is well within the 20% tolerance, but I still get this error:
2017-12-24 11:27:10 DEBUG FrontendTunerDevice:484 - allocateCapacity - SR requested: 8000000.000000 SR got: 7999999.935752
2017-12-24 11:27:10 INFO FrontendTunerDevice:490 - allocateCapacity(0): returned sr 7999999.935752 does not meet tolerance criteria of 20.000000 percent
The offending code from frontendInterfaces/libsrc/cpp/fe_tuner_device.cpp:
// check tolerances
if( (floatingPointCompare(frontend_tuner_allocation.sample_rate,0)!=0) &&
    (floatingPointCompare(frontend_tuner_status[tuner_id].sample_rate,frontend_tuner_allocation.sample_rate)<0 ||
     floatingPointCompare(frontend_tuner_status[tuner_id].sample_rate,frontend_tuner_allocation.sample_rate+frontend_tuner_allocation.sample_rate * frontend_tuner_allocation.sample_rate_tolerance/100.0)>0 ))
{
    std::ostringstream eout;
    eout<<std::fixed<<"allocateCapacity("<<int(tuner_id)<<"): returned sr "<<frontend_tuner_status[tuner_id].sample_rate<<" does not meet tolerance criteria of "<<frontend_tuner_allocation.sample_rate_tolerance<<" percent";
    LOG_INFO(FrontendTunerDevice<TunerStatusStructType>, eout.str());
    throw std::logic_error(eout.str().c_str());
}
And the function from frontendInterfaces/libsrc/cpp/fe_tuner_device.h:
inline double floatingPointCompare(double lhs, double rhs, size_t places = 1){
    return round((lhs-rhs)*pow(10,places));
    /*if(round((lhs-rhs)*(pow(10,places))) == 0)
        return 0; // equal
    if(lhs<rhs)
        return -1; // lhs < rhs
    return 1; // lhs > rhs*/
}
I copied this code into a non-Redhawk C++ program that I use to test the device's interfaces, and there the checks pass. I broke everything out to find the difference and noticed that in Redhawk the sample rate returned from the device (or at least printed to the screen) is slightly different from the one outside Redhawk, by a tiny fraction of a Hz:
// in Redhawk using cout::precision(17)
Sample Rate: 7999999.93575246725231409
// outside Redhawk using cout::precision(17)
Sample Rate: 7999999.96948242187500000
I don't know why there's a difference in the actual sample rates returned but in the Redhawk version it's just enough to make the second part of the check fail:
floatingPointCompare(7999999.93575246725231409,8000000.00000000000000000)<0
1
Basically because:
double a = 7999999.93575246725231409 - 8000000.00000000000000000; // = -0.06424753274768591
double b = pow(10,1); // = 10.00000000000000000
double c = a*b; // = -0.6424753274
double d = round(c); // = -1.00000000000000000
So if a returned sample rate is less than the request by more than 0.049999 Hz then it will fail the allocation regardless of the tolerance %? Maybe I'm just missing something here.
The tolerance checks are specified to be the minimum amount plus a delta and not a variance (plus or minus) from the requested amount.
There should be a document somewhere that describes this in detail but I went to the FMRdsSimulator device's source.
// For FEI tolerance, it is not a +/- it's give me this or better.
float minAcceptableSampleRate = request.sample_rate;
float maxAcceptableSampleRate = (1 + request.sample_rate_tolerance/100.0) * request.sample_rate;
So that should explain why the allocation was failing.
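A small numeric sketch of that check applied to the request above (values taken from the log output; variable names are illustrative):
#include <stdio.h>
#include <stdbool.h>

int main(void) {
    double requested = 8000000.0;       /* requested sample rate, Hz */
    double tolerance = 20.0;            /* sample_rate_tolerance, percent */
    double returned  = 7999999.935752;  /* rate the device actually returned */

    double minAcceptable = requested;   /* "give me this or better" */
    double maxAcceptable = (1 + tolerance / 100.0) * requested; /* 9.6 MHz */

    bool ok = (returned >= minAcceptable) && (returned <= maxAcceptable);
    printf("%s\n", ok ? "pass" : "fail"); /* prints "fail" */
    /* The allocation fails even though the returned rate is within 20%
     * of the request, because the tolerance window only extends upward. */
    return 0;
}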

Does Phaser Arcade.Body velocity include deltaTime or not?

I want to move my character along the x axis at a constant speed. I thought movement depends on frame rate, so technically I should write
sprite.body.velocity.x = speed * deltaTime
where deltaTime = game.time.elapsedMS / 1000;
But if I do that, my character moves very slowly, even with speed = 1000.
But if I write
sprite.body.velocity.x = speed
it works fine. My fps = 60.
The Phaser documentation says:
velocity - The velocity, or rate of change in speed of the Body. Measured in pixels per second.
No deltaTime is mentioned, and the demos don't use deltaTime either:
http://phaser.io/examples/v2/arcade-physics/platformer-basics
http://phaser.io/examples/v2/arcade-physics/asteroids-movement
etc.
So I don't understand: should I calculate deltaTime or just use velocity.x?
Well, the mistake was mine: my calculation of deltaTime was wrong.
The correct formula is
deltaTime = (elapsedMS * fps) / 1000
where elapsedMS is the time in ms since the last update, and fps is frames per second (only calculated if advancedTiming is enabled).
So that was my problem.
As a result: body.velocity does not apply deltaTime itself, so for smooth movement you should scale by the deltaTime calculated with the formula above.
It looks something like this:
function update() { // a Phaser state method, called every frame
    // advancedTiming must be enabled for game.time.fps to be calculated
    var deltaTime = (game.time.elapsedMS * game.time.fps) / 1000;
    sprite.body.velocity.x = velocityX * deltaTime;
    sprite.body.velocity.y = velocityY * deltaTime;
}

gstreamer read decibel from buffer

I am trying to get the dB level of incoming audio samples. On every video frame, I update the dB level and draw a bar representing a 0 - 100% value (0% being something arbitrary such as -20.0dB and 100% being 0dB.)
gdouble sum, rms;
sum = 0.0;
guint16 *data_16 = (guint16 *)amap.data;
for (gint i = 0; i < amap.size; i = i + 2)
{
    gdouble sample = ((guint16)data_16[i]) / 32768.0;
    sum += (sample * sample);
}
rms = sqrt(sum / (amap.size / 2));
dB = 10 * log10(rms);
This was adapted to C from a code sample marked as the answer here. I am wondering what I am missing from this very simple equation.
Answered: jacket was correct about the code losing the sign, so every sample ended up positive. Also, 10 * log10(rms) is incorrect; it should be 20 * log10(rms), since I am converting amplitude to decibels (as a measure of output power).
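With those two fixes applied, the loop might look like the sketch below. Note the original loop also indexed a 16-bit pointer with a byte count while stepping by 2, which skipped half the samples; iterating over sample indices fixes that too:
gdouble sum = 0.0;
gint16 *data_16 = (gint16 *)amap.data;   /* signed 16-bit, keeps the sign */
guint n_samples = amap.size / 2;         /* amap.size is in bytes */
for (guint i = 0; i < n_samples; i++)
{
    gdouble sample = data_16[i] / 32768.0;
    sum += sample * sample;
}
gdouble rms = sqrt(sum / n_samples);
gdouble dB = 20 * log10(rms);            /* amplitude -> dB uses 20, not 10 */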
The level element is best for this task (as @ensonic already mentioned); it is intended for exactly what you need.
Basically, you add an element called "level" to your pipeline, then enable its message emission.
The level element then emits messages that contain RMS, peak, and decay values. RMS is what you need.
You can set up a callback function connected to such message events:
audio_level = gst_element_factory_make ("level", "audiolevel");
g_object_set (audio_level, "message", TRUE, NULL);
...
g_signal_connect (bus, "message::element", G_CALLBACK (callback_function), this);
The bus variable is of type GstBus.
Then, in the callback function, check the element name and get the RMS value as described here.
There is also a normalization step using pow() to convert the dB value into the 0.0 to 1.0 range, which you can use for the percentage display you described in your question.
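A sketch of such a callback, following the pattern from the level element's documentation; it assumes a signal watch has been enabled on the bus (gst_bus_add_signal_watch), and error handling is omitted:
#include <gst/gst.h>
#include <math.h>

/* Connected via g_signal_connect (bus, "message::element", ...) as above,
 * so only element messages arrive here. */
static void callback_function (GstBus *bus, GstMessage *message, gpointer user_data)
{
    const GstStructure *s = gst_message_get_structure (message);
    if (s && gst_structure_has_name (s, "level")) {
        /* the values are packed into GValueArrays, one entry per channel */
        const GValue *array_val = gst_structure_get_value (s, "rms");
        GValueArray *rms_arr = (GValueArray *) g_value_get_boxed (array_val);
        gdouble rms_db = g_value_get_double (g_value_array_get_nth (rms_arr, 0));
        /* map dB to a linear 0.0..1.0 value for the on-screen bar */
        gdouble normalized = pow (10, rms_db / 20);
        g_print ("rms: %.2f dB (%.0f%%)\n", rms_db, normalized * 100);
    }
}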

CUFFT - padding/initializing question

I am looking at the Nvidia SDK convolution FFT example (for large kernels). I know the theory behind Fourier transforms and their FFT implementations (the basics, at least), but I can't figure out what the following code does:
const int fftH = snapTransformSize(dataH + kernelH - 1);
const int fftW = snapTransformSize(dataW + kernelW - 1);
....//gpu initialization code
printf("...creating R2C & C2R FFT plans for %i x %i\n", fftH, fftW);
cufftSafeCall( cufftPlan2d(&fftPlanFwd, fftH, fftW, CUFFT_R2C) );
cufftSafeCall( cufftPlan2d(&fftPlanInv, fftH, fftW, CUFFT_C2R) );
printf("...uploading to GPU and padding convolution kernel and input data\n");
cutilSafeCall( cudaMemcpy(d_Kernel, h_Kernel, kernelH * kernelW * sizeof(float), cudaMemcpyHostToDevice) );
cutilSafeCall( cudaMemcpy(d_Data, h_Data, dataH * dataW * sizeof(float), cudaMemcpyHostToDevice) );
cutilSafeCall( cudaMemset(d_PaddedKernel, 0, fftH * fftW * sizeof(float)) );
cutilSafeCall( cudaMemset(d_PaddedData, 0, fftH * fftW * sizeof(float)) );
padKernel(
    d_PaddedKernel,
    d_Kernel,
    fftH,
    fftW,
    kernelH,
    kernelW,
    kernelY,
    kernelX
);
padDataClampToBorder(
    d_PaddedData,
    d_Data,
    fftH,
    fftW,
    dataH,
    dataW,
    kernelH,
    kernelW,
    kernelY,
    kernelX
);
I've never used the CUFFT library before, so I don't know what snapTransformSize does (here's the code):
int snapTransformSize(int dataSize){
    int hiBit;
    unsigned int lowPOT, hiPOT;
    dataSize = iAlignUp(dataSize, 16);
    for(hiBit = 31; hiBit >= 0; hiBit--)
        if(dataSize & (1U << hiBit)) break;
    lowPOT = 1U << hiBit;
    if(lowPOT == dataSize)
        return dataSize;
    hiPOT = 1U << (hiBit + 1);
    if(hiPOT <= 1024)
        return hiPOT;
    else
        return iAlignUp(dataSize, 512);
}
nor why the padded arrays are initialized the way they are.
Can you provide me explanation links or answers please?
It appears to be rounding up the FFT dimensions to the next power of 2, unless the dimension would exceed 1024, in which case it's rounded up to the next multiple of 512.
Having rounded up the FFT size you then of course need to pad your data with zeroes to make it the correct size for the FFT.
Note that the reason that we typically need to round up and pad for convolution is because each FFT dimension needs to be image_dimension + kernel_dimension - 1, which is not normally a convenient number, such as a power of 2.
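To make the rounding concrete, here is a trace of snapTransformSize for hypothetical sizes dataH = 1000 and kernelH = 7:
int fftH = snapTransformSize(1000 + 7 - 1);  /* dataH + kernelH - 1 = 1006 */
/* inside snapTransformSize:
 *   iAlignUp(1006, 16)      -> 1008 (align up to a multiple of 16)
 *   highest set bit of 1008 -> bit 9, so lowPOT = 512, hiPOT = 1024
 *   lowPOT != 1008 and hiPOT <= 1024 -> return hiPOT
 * so fftH == 1024: the 1006 rows needed for linear convolution are
 * zero-padded up to the next power of two. */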
What @Paul R says is correct. The reason it does that is that the Fast Fourier Transform runs fastest on sizes that are powers of two; see the Cooley-Tukey algorithm.
If you make sure you declare a matrix whose dimensions are powers of two, you should not need that generic safe implementation.
