How to get RSSI on the BLE112 by Bluegiga

In my program for the BLE112 I measure the RSSI and then turn LEDs on. I have written a program for this purpose: I read the RSSI, and if the value is more than -70 dBm I turn on the LEDs on P0_3 and P0_4; if the value is less than -70 dBm the LEDs are off.
But there is a problem: when I flash my module everything is OK - the LEDs are off - but when I connect my phone to the BLE112 the LEDs turn on and stay on. They don't respond to the RSSI value at all.
I can't find any information about this problem, so I decided to ask about it here. I attach my project.
And this is part of code where I get RSSI and set to high PINs:
event hardware_soft_timer(handle)
    if ( connected ) then
        call connection_get_rssi(active_connection)(ret_connection, ret_rssi)
        if ( ret_rssi > -80 ) then
            call hardware_io_port_write(0, $18, $18)
        else
            call hardware_io_port_write(0, $18, 0)
        end if
    end if
end

"The "int8" data type is a signed (two's complement) 8-bit integer, which means that practically speaking, 0-127 represent those actual values, and 128 to 255 represent -128 to -1, respectively. Since RSSI values are always negative on the BLE, that means that the mathematical integer representation of -50, for example, will actually be 205." - Jeff Rowberg.
Do the following:
# Get the RSSI value of the connection
call connection_get_rssi(connection_handle)(connection_handle, rssi)
# Convert the unsigned two's-complement reading into its positive magnitude
rssi = $100 - rssi
# If the device is within range (signal stronger than -80 dBm)...
if rssi <= 80 then ...
By the way, $100 is 256 in hex; simply using 256 will work just as well.
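To see the two's-complement point concretely, here is a quick illustration in Python (not BGScript; the sample values are just examples):

# Interpreting an unsigned 8-bit reading as a signed RSSI value
def to_signed_int8(raw):
    # 128..255 actually encode -128..-1
    return raw - 256 if raw > 127 else raw

print(to_signed_int8(206))  # -> -50, matching the quote above
print(256 - 206)            # -> 50, the positive magnitude used in the BGScript check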

Related

How to parse CAN bus data

I have a CAN bus charger which broadcasts the battery charge level, but I am unable to parse the incoming data.
I use Node.js to capture the data, and I receive (for example) the following HEX values from the charger:
09e6000000
f5e5000000
The device datasheet states that this message is 40 bits long and that it contains:
Start bit 0, length 20, factor 0.001 - battery voltage
Start bit 20, length 18, factor 0.001 - charging current
Start bit 38, length 1, factor 1 - fault
Start bit 39, length 1, factor 1 - charging
I understand that I should convert the HEX values to binary; using an online calculator I can convert them as follows:
09e6000000 -> 100111100110000000000000000000000000
Now if I extract the first 20 values, 10011110011000000000, and again use an online calculator to convert them to decimal, I get 648704.
If I take the second HEX value and follow the same process, I get:
f5e5000000 -> 1111010111100101000000000000000000000000 -> 11110101111001010000 -> 1007184
Something is terribly wrong, since both of these values should have been around 59.000. What am I doing wrong?
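Two things are worth checking here. First, online converters silently drop leading zeros: 09e6000000 is 40 bits, but the binary string above has only 36, so every extracted bit position is shifted. Second, many CAN devices transmit multi-byte fields least-significant byte first (Intel byte order), and that assumption happens to land right at the expected magnitude here (bytes 09 e6 read little-endian give 0xE609 = 58889, i.e. 58.889 after the 0.001 factor). A sketch of a decoder under that little-endian assumption, written in Python for brevity (the same bit operations translate directly to Node.js):

# Hypothetical decoder for the 40-bit message described above.
# Assumption: fields are little-endian (Intel byte order), which is common on CAN
# and matches the expected ~59.000 readings.

def decode(msg_hex):
    data = bytes.fromhex(msg_hex)           # 5 bytes = 40 bits
    raw = int.from_bytes(data, 'little')    # interpret the frame little-endian
    voltage  = (raw & 0xFFFFF) * 0.001          # start bit 0,  length 20, factor 0.001
    current  = ((raw >> 20) & 0x3FFFF) * 0.001  # start bit 20, length 18, factor 0.001
    fault    = (raw >> 38) & 1                  # start bit 38, length 1
    charging = (raw >> 39) & 1                  # start bit 39, length 1
    return voltage, current, fault, charging

print(decode('09e6000000'))  # -> (58.889, 0.0, 0, 0)
print(decode('f5e5000000'))  # -> (58.869, 0.0, 0, 0)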

Speed up bitwise operations/converting byte data from serial to int

I have an ADC device which is connected via USB to my Google Coral Dev Board. The device sends 2080 bytes of 16-bit sensor data over a serial port every ms to the Dev Board, where I read the data with pyserial and convert it to integers with the code below. The line commented with "Conversion" is the line which converts the byte data to integers. I don't exactly understand this line, but it was provided by the company who gave me the device.
My problem is that the Dev Board is too slow at the byte-to-int conversion: it takes around 2.3 seconds to convert 1 second of data, and because the system should work in real time (the data is later processed by an AI model), this obviously does not work.
I measured the processing time of portions of that code and the bottleneck is the bitwise operations, which take around 2 ms for 1 ms of data.
Is there a way to speed up the process? Actually, I am not even sure whether there is a problem with my Dev Board or whether it is normal that it works this "slowly".
def fill_buffer(self):
    total_data_per_ms_int = np.zeros(shape=[1040])
    for i in range(0, self.buffer_time):  # One iteration is 1 ms of data, 2080 bytes per iteration -> ae: 1000 bytes, vib_xyz: 10 bytes each, temp: 10 bytes
        total_data_per_ms_bytes = self.ser.read(2080)
        z = 0
        for j in range(0, 2080, 2):
            value_int = total_data_per_ms_bytes[j+1] << 8 | total_data_per_ms_bytes[j]  # Conversion from bytes to int
            total_data_per_ms_int[z] = value_int
            z = z + 1
        self.sensor1[0+i*1000:1000+i*1000] = total_data_per_ms_int[0:1000]  # This part divides the data, which has a specific layout, into 5 different arrays
        self.sensor2[0+i*10:10+i*10] = total_data_per_ms_int[1000:1040:4]   # not important for understanding the problem
        self.sensor3[0+i*10:10+i*10] = total_data_per_ms_int[1001:1040:4]
        self.sensor4[0+i*10:10+i*10] = total_data_per_ms_int[1002:1040:4]
        self.sensor5[0+i*10:10+i*10] = total_data_per_ms_int[1003:1040:4]
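The inner loop is exactly the kind of per-element work NumPy can do in one vectorized call. A minimal sketch of the idea, assuming the same byte layout as the loop above (low byte first, i.e. little-endian 16-bit); the synthetic buffer stands in for self.ser.read(2080):

import numpy as np

raw = bytes(range(256)) * 8 + bytes(32)   # stand-in for self.ser.read(2080): 2080 bytes

# dtype '<u2' means little-endian unsigned 16-bit, which matches
# total_data_per_ms_bytes[j+1] << 8 | total_data_per_ms_bytes[j].
total_data_per_ms_int = np.frombuffer(raw, dtype='<u2')  # all 1040 values at once

# The slicing into the five sensor arrays stays exactly the same:
sensor1_part = total_data_per_ms_int[0:1000]
sensor2_part = total_data_per_ms_int[1000:1040:4]

np.frombuffer avoids the Python-level loop entirely and is typically orders of magnitude faster on buffers of this size, which should comfortably beat the 1 ms budget.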

How to scale audio signal from PCM value to a dB SPL value?

I've got a set of MEMS microphones that measure audio signals at specific frequencies. The MEMS microphones have a PDM output which is then converted into PCM (this is necessary so that a microcontroller can do certain processing on the sampled audio data).
I'm trying to come up with a method to convert the PCM samples into dB SPL, and the best resource I've found on this is this link: https://curiouser.cheshireeng.com/2015/01/16/pdm-in-a-tiny-cpu/. I understand how they calculate an RMS value from 977 PCM samples (the article calls this an SPL value in internal logarithmic units). They relate this RMS value to a dB FS value using the microphone datasheet (where the maximum possible PCM/RMS value, produced by a square wave, is equivalent to a known maximum of +3 dB FS). I don't understand how the author then creates a linear relationship between dB FS and dB SPL (akin to the classic y = mx + b). The specific paragraph from the article discussing this is quoted below:
To relate the finished SPL value in internal logarithmic units to dB SPL, we have to note that the microphone data sheet claims that a 1 kHz tone at 94 dB SPL will typically register as -26 dB FS, where 0 dB FS is the largest amplitude sine wave that can be represented in a PCM sample without clipping. A full scale square wave is then +3 dB FS, which would measure as 8 * log2(8192) or 104, which can be rescaled to dB by multiplying by 20 * log10(2) / 8, or about 0.75, to get 78. Subtract the 3 dB for a sine wave, and we find that the offset is about -75 dB to dB FS, or +19 to dB SPL.
Putting this all together, if we wanted to output true db SPL we would need the following expression in terms of our computed variable spl:
dB SPL = (3 * spl / 4) + 19
I don't understand how the coefficient of 0.75 or the intercept of +19 is justified. Does anyone have any ideas, or some additional resources I can consult on this?
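The 0.75 coefficient, at least, is pure unit conversion: if the internal value is spl = 8 * log2(rms), then the same quantity in decibels is 20 * log10(rms) = spl * 20 * log10(2) / 8, or about 0.7526 * spl, which the article rounds to 3/4. The intercept folds in the calibration point (94 dB SPL corresponds to -26 dB FS) together with the article's internal scaling, so it is specific to that setup. A small check of the coefficient in Python:

import math

# The article's internal logarithmic unit: spl = 8 * log2(rms)
k = 20 * math.log10(2) / 8      # ~0.7526, rounded to 3/4 in the article

rms = 8192                      # full-scale square wave amplitude from the quote
spl = 8 * math.log2(rms)        # -> 104 internal units
print(k, spl, k * spl)          # -> ~0.7526, 104.0, ~78.27 (the article's "78")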

How many bits does $realtime return in Verilog and Systemverilog?

$realtime does not return bits; it returns a double-precision floating-point number, which requires 1 bit for the sign, 11 bits for the exponent, and 52 bits for the mantissa. You cannot access individual bits of a real number, so the total number of bits is irrelevant.
From the Sutherland HDL quick reference (page 40 in the document, page 44 in your PDF viewer):
http://www.sutherland-hdl.com/pdfs/verilog_2001_ref_guide.pdf
$time
$stime
$realtime
Returns the current simulation time as a 64-bit vector, a 32-bit integer, or a real number, respectively.
The value returned will depend on your timescale, i.e. if the timescale is 1ns/1ps and you have run for 1 us, $realtime will return 1000.000.
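Not Verilog, but as an illustration of the 1 + 11 + 52 bit layout mentioned above, here is a quick Python sketch that splits an IEEE 754 double into those fields:

import struct

# Pack a double into its raw 64-bit pattern and split out the fields
bits = int.from_bytes(struct.pack('>d', 1000.0), 'big')
sign     = bits >> 63                # 1 bit
exponent = (bits >> 52) & 0x7FF      # 11 bits
mantissa = bits & ((1 << 52) - 1)    # 52 bits
print(sign, exponent, mantissa)
# -> 0 1032 4292493394837504 (exponent bias 1023 plus 9, since 1000.0 = 1.953125 * 2**9)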

Sending data in 3-byte packet

I am trying to send a command to my LAC board using Visual C++. On page 6 of the LAC Config manual it says that the buffer is sent as a 3-byte packet:
Buffer[0]=Control
Buffer[1]=Data Low
Buffer[2]=Data High
What does this mean, and how do I figure out what I should set each of these values to?
Thanks
If you read on, you will see that a list of all control codes comes next, followed by a detailed description of each of them. The manual also mentions that sample code is available, probably somewhere on their website.
In general, setting the values is a bit tricky. BYTE is probably a typedef or macro that resolves to an unsigned 8-bit data type, meaning it can only hold values from 0 to 255. Two bytes can represent values up to 65535. If you want to store a value greater than 255 in that buffer, you have to decompose it into its high and low byte. You can do this the following way:
unsigned int value = 512;
BYTE low_byte = 0xff & value;  // keep only the lowest 8 bits  -> Data Low
BYTE high_byte = value >> 8;   // shift the next 8 bits down   -> Data High
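Putting the whole packet together, here is a minimal sketch in Python (the control code 0x30 is a made-up placeholder; take the real value from the manual's control-code list):

import struct

control = 0x30       # hypothetical control code - look it up in the manual
data = 512           # 16-bit value to send

# '<BH' packs one byte followed by a little-endian 16-bit value (low byte first),
# which yields exactly Buffer[0] = Control, Buffer[1] = Data Low, Buffer[2] = Data High
packet = struct.pack('<BH', control, data)
print(list(packet))  # -> [48, 0, 2]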
