If I were to use the ALTBeacon library in Android, what is the fastest scan rate for individual Bluetooth Low Energy (BLE) beacon packets that can be captured and recorded into memory?
This is defined by the underlying Android Bluetooth LE scanning APIs. I am unaware of a limit, but 10 Hz is typical, and I have tested up to 40 Hz from a single device.
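If it helps, here is a minimal sketch (not from the original answer; permission handling is omitted and the class name is mine) of driving the raw Android scanner that the AltBeacon library sits on top of. SCAN_MODE_LOW_LATENCY requests the most aggressive scan duty cycle; the per-beacon packet rate you actually record is then limited by how often the beacon advertises.

    // Log every received advertisement with its timestamp, so you can
    // measure the per-device packet rate yourself.
    import android.bluetooth.BluetoothAdapter;
    import android.bluetooth.le.BluetoothLeScanner;
    import android.bluetooth.le.ScanCallback;
    import android.bluetooth.le.ScanResult;
    import android.bluetooth.le.ScanSettings;
    import android.util.Log;

    public class RawBeaconLogger {
        private final BluetoothLeScanner scanner =
                BluetoothAdapter.getDefaultAdapter().getBluetoothLeScanner();

        private final ScanCallback callback = new ScanCallback() {
            @Override
            public void onScanResult(int callbackType, ScanResult result) {
                // One callback per received advertisement.
                Log.d("BLE", result.getDevice().getAddress()
                        + " rssi=" + result.getRssi()
                        + " t=" + result.getTimestampNanos());
            }
        };

        public void start() {
            ScanSettings settings = new ScanSettings.Builder()
                    .setScanMode(ScanSettings.SCAN_MODE_LOW_LATENCY)
                    .build();
            scanner.startScan(null, settings, callback);   // null = no filters
        }

        public void stop() {
            scanner.stopScan(callback);
        }
    }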
I have tried to send 12-bit audio to be listened to in real time through an HC-05 SPP Bluetooth module hooked up to an Arduino and a DAC, driven over serial from a Python RFCOMM socket. I have since learned that the Serial Port Profile is not well suited to this purpose because of its low bandwidth. I figured I could send the data ahead of time and then play it out through the DAC, but I doubt an Arduino could hold an array the size of a WAV file, and maybe not even an MP3 file, and that would defeat the purpose of controlling the audio (play, pause, rewind, etc.) from my computer. Would it be more realistic and worthwhile to use an A2DP enabled Bluetooth module? Or is it still possible to listen to acceptable quality 12-16 bit audio in real time with SPP? I have tried lower-bit songs, adjusted the baud rates of the Arduino and HC-05 serial ports, and adjusted the magnitude of the values the DAC outputs to the audio port, and I still get crackly audio. It seems the problem comes down to the low bitrate transfer speed of SPP, or am I wrong?
Is it realistic to stream 12-16 bit audio over SPP Bluetooth in real time?
Sure, but only at a fairly low sample rate: at 16 bits per sample, the ~128 kbit/s you can typically count on from SPP works out to 8 kHz or less. You'd be better off sending 8-bit audio at a higher sample rate.
Would it be more realistic and worthwhile to use an A2DP enabled bluetooth module?
Yes, absolutely, without question. That's what it's designed for, as I mentioned in your other question.
Or is it still possible to listen to acceptable quality 12-16 bit audio in real time with SPP?
Acceptable is subjective. If it's just voice, you can get away with it. If you want reasonable audio quality for music, then no, it's almost universally not acceptable.
It seems the problem comes down to the low bitrate transfer speed of SPP, or am I wrong?
Without any code to inspect and debug, it's impossible to say exactly what the specific problem is. Regardless, the low bandwidth alone will prevent good-quality audio.
If you must continue to use SPP and a simple codec like plain PCM, at least use differential PCM to save some bandwidth.
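As an illustration only (the class and method names are mine, not something from the question), a crude differential PCM codec might look like the sketch below: each 16-bit sample is sent as an 8-bit clamped difference from the previous reconstructed sample, halving the bandwidth compared to raw 16-bit PCM. A real-world codec such as IMA ADPCM adapts the step size and does considerably better.

    public class SimpleDpcm {

        // Encode 16-bit PCM samples into 8-bit deltas.
        public static byte[] encode(short[] samples) {
            byte[] out = new byte[samples.length];
            int previous = 0;                       // decoder starts from 0 too
            for (int i = 0; i < samples.length; i++) {
                int delta = samples[i] - previous;
                if (delta > 127)  delta = 127;      // clamp to what 8 bits can hold
                if (delta < -128) delta = -128;
                out[i] = (byte) delta;
                previous += delta;                  // track what the decoder will reconstruct
            }
            return out;
        }

        // Decode 8-bit deltas back into 16-bit PCM samples.
        public static short[] decode(byte[] deltas) {
            short[] out = new short[deltas.length];
            int previous = 0;
            for (int i = 0; i < deltas.length; i++) {
                previous += deltas[i];
                out[i] = (short) previous;
            }
            return out;
        }
    }

The clamping means large jumps between samples are smeared over several samples (slope overload), which is the price paid for the smaller delta.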
I am using Python sockets to connect to a Bluetooth HC-05 module from my PC. I want to send music to the HC-05 by converting a WAV file to a string array, which will later be converted to integers ranging from 0-65535 on an Arduino. The Arduino and HC-05 communicate over serial at 9600 baud. Those ints will then be passed to a DAC over I2C. I am wondering if there could be a memory issue when sending an enormous number of strings from my PC. Is it possible the original quality of the sound will be distorted as a result of the different rates of sending/receiving data across the devices? Or will the sound signal just be delayed at the DAC?
The Arduino and HC-05 communicate over serial at 9600 baud.
This is going to be too low to be usable for audio.
9600 baud gives you 7,680 bits of data per second (assuming 8N1 framing). At 16 bits per sample, that's a sample rate of 480 Hz, which is far too low for intelligible audio; it's barely enough to reproduce sound at all.
You need to:
Increase the baud rate. Ideally you'll want at least 57,600 baud, which gives about 46 kbit/s of data. If higher baud rates are available, use them.
Use fewer bits for each sample. Using 4 bits per sample at 57,600 baud gives a respectable sample rate of about 11.5 kHz. The audio will sound a little tinny at 4 bits, but it'll be intelligible (the short sketch below runs these numbers).
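A quick sketch of the arithmetic, assuming 8N1 framing (10 bits on the wire per data byte, so usable bits/s = baud * 8 / 10):

    public class UartAudioBudget {
        // Achievable PCM sample rate for a given UART baud rate and sample depth.
        static double sampleRate(int baud, int bitsPerSample) {
            double dataBitsPerSecond = baud * 8.0 / 10.0;   // strip start/stop bit overhead
            return dataBitsPerSecond / bitsPerSample;
        }

        public static void main(String[] args) {
            System.out.println(sampleRate(9600, 16));   // ~480 Hz   -- unusable
            System.out.println(sampleRate(57600, 16));  // ~2880 Hz  -- still poor
            System.out.println(sampleRate(57600, 4));   // ~11520 Hz -- intelligible
        }
    }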
I have an Android app that writes several bytes to a Bluetooth device.
Looking at btsnoop_hci.log, I see that when a large number of bytes is sent to the BLE device, the app uses Prepare Write Request several times and then Execute Write Request: Immediately Write All.
Now my problem is how to do the same thing from my application using an RN4870 module.
At the moment I can connect, read services and characteristics, and write using the CHW command as described in the manual when there are only a few bytes.
But I cannot write the way the remote BLE device expects when there are a lot of bytes.
Thank you for your support,
Marco
This is the Microchip answer:
Hello,
The core specifications are handled by the firmware.
The user doesn't have access at this level, so there is nothing you can set.
Regarding the long data question:
"Does RN4870 module support the Data Length Extension feature? "
RN4870 rev 1.28 supports DLE, but only partially. The normal packet size in BLE without DLE is 20 bytes.
With the standard DLE feature, the packet size should be 251 bytes.
However, in RN4870 rev 1.28 the packet size is 151 bytes, so it is not a full implementation of DLE.
The DLE feature (Data Length Extension) is embedded in the lower levels of the Bluetooth stack, and there are no specific commands to enable or disable DLE. Essentially, if the peer device also supports DLE, then DLE will be enabled.
So there are no specific commands that you need to issue to increase throughput through DLE.
Regards,
In other words, there is nothing to do!
In an Android application you can't set the DLE length directly; instead you set the MTU size, and the Android Bluetooth stack calculates the DLE length based on the MTU. The maximum data length supported by the Bluetooth protocol is 251 bytes, but it can be anywhere between 27 and 251 bytes depending on the BT controller hardware. During connection the device negotiates with the peer (if the peer supports DLE) to use the maximum DLE size supported by both devices.
To increase throughput, request the maximum supported MTU size of 512. You can also write without response and do your own error checking on the data (for example a parity byte or CRC), re-transmitting from the application when needed, for better throughput.
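As a sketch of what that looks like with Android's BluetoothGatt API (connection setup, permissions and the actual characteristic lookup are omitted, and the class and method names here are placeholders):

    import android.bluetooth.BluetoothGatt;
    import android.bluetooth.BluetoothGattCallback;
    import android.bluetooth.BluetoothGattCharacteristic;
    import android.bluetooth.BluetoothProfile;

    public class ThroughputGattCallback extends BluetoothGattCallback {

        @Override
        public void onConnectionStateChange(BluetoothGatt gatt, int status, int newState) {
            if (newState == BluetoothProfile.STATE_CONNECTED) {
                gatt.requestMtu(512);      // stack negotiates the MTU; DLE follows from it
            }
        }

        @Override
        public void onMtuChanged(BluetoothGatt gatt, int mtu, int status) {
            // mtu is the negotiated value; usable payload per write is mtu - 3 bytes.
            gatt.discoverServices();
        }

        // Call once services are discovered and you hold the target characteristic.
        void sendChunk(BluetoothGatt gatt, BluetoothGattCharacteristic ch, byte[] chunk) {
            ch.setWriteType(BluetoothGattCharacteristic.WRITE_TYPE_NO_RESPONSE);
            ch.setValue(chunk);            // append your own CRC/parity to the payload
            gatt.writeCharacteristic(ch);  // newer API levels offer an overload taking the value directly
        }
    }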
I am trying to figure out what the maximum throughput of a Bluetooth 2.1 SPP connection is.
I found two publications on the topic (1, 2), and they both show diagrams of throughput as a function of the signal-to-noise ratio (which I can assume to be perfect for my purposes) and the kind of ACL packet used. My problem is that I have no idea which ACL packets are used. How is this decision made? Is it made on the fly, along the lines of "whatever is needed to transfer the current data is used"?
Furthermore, in the Serial Port Profile specification (chapter 2.3) I found this sentence:
This profile requires support for one-slot packets only. This means that this profile
ensures that data rates up to 128 kbps can be used. Support for higher rates is optional.
The last sentence really confuses me. How do I find out whether this "option" applies in my case?
This means that in SPP mode all Bluetooth modules should work at up to 128 kbps, and some modules may work even faster.
Underneath SPP sits RFCOMM, which tries to deliver the packets as quickly as possible. If only one-slot packets are used, you get the 128 kbps. The firmware of the Bluetooth module, or the HCI driver, can however do things differently.
SPP speeds of up to 480 kbps have been reported; however, this requires that both SPP modules be from the same vendor (e.g. BlueGiga iWrap modules can reach this speed).
At the other end of the scale, if you're connecting to an unknown device, for example a BT112 or an RN41 module talking to an Android device, the actual usable SPP bandwidth can be much lower than 128 kbps (I have measurements as low as 10 kbps).
In the case of some older-generation iPhones, the usable SPP bandwidth is around 8 kbps.
It is wise to treat "standards" and "datasheets" very conservatively, and to do your own measurements if the actual net data bandwidth is critical.
Even though BT and BT+EDR have theoretical over-the-air bit rates of up to 3 Mbps, the actual usable net data bandwidth is far smaller.
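If you want to measure it yourself from an Android device, a rough sketch is below (Java; the device address, permissions and error handling are left out). Note that timing only the local writes slightly overestimates the link speed, because the last buffers may still be queued in the stack; for a precise number, have the far end acknowledge receipt.

    import android.bluetooth.BluetoothAdapter;
    import android.bluetooth.BluetoothDevice;
    import android.bluetooth.BluetoothSocket;
    import java.io.OutputStream;
    import java.util.UUID;

    public class SppThroughputTest {
        // Standard Serial Port Profile service UUID.
        private static final UUID SPP_UUID =
                UUID.fromString("00001101-0000-1000-8000-00805F9B34FB");

        public static double measureKbps(String macAddress, int payloadBytes) throws Exception {
            BluetoothDevice device =
                    BluetoothAdapter.getDefaultAdapter().getRemoteDevice(macAddress);
            BluetoothSocket socket = device.createRfcommSocketToServiceRecord(SPP_UUID);
            socket.connect();

            byte[] buffer = new byte[1024];
            OutputStream out = socket.getOutputStream();

            long start = System.nanoTime();
            for (int sent = 0; sent < payloadBytes; sent += buffer.length) {
                out.write(buffer);
            }
            out.flush();
            long elapsedNs = System.nanoTime() - start;
            socket.close();

            return (payloadBytes * 8.0) / (elapsedNs / 1e9) / 1000.0;   // kbit/s
        }
    }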
I want to modulate digital data into audio, send it through any audio channel, and demodulate it back from audio to data at the destination. To do this I hope to use a computer sound card and a software modem, without any hardware implementation. On the internet I found that this can be done with a technique called Audio Frequency-Shift Keying (AFSK). I want to know whether I can obtain a bit rate of more than 1200 bps from AFSK, and if not, what the reason behind that limitation is.
Is there any technique more efficient than AFSK for this purpose?
The most common currently used form of AFSK is the Bell 202 modem at 1200 baud. There are a few other standards that also use 1200 baud, and some that run at less than 1200 bits per second, but none that I know of that run faster than 1200.
However, as far as I know, there's no reason you couldn't write a software modem to transmit and receive at a higher baud rate. Bell202 uses bit stuffing (allowing the data stream to use the same tone no more than 5 bits in a row) to help keep the transmitter and receiver from falling out of sync with each other, so a higher baud rate might require bit stuffing at a lower threshold (every 4 or 3 bits).
Another consideration is that the sound cards you're using should run at a sampling rate equal to, or a multiple of, the baud rate you choose. This is one of the reasons 1200 baud is so common: 48000 Hz, a very common sound-card sample rate, is an exact multiple of 1200.
So 1200 baud isn't a limit. It's just a standard.
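To show how little is involved, here is a minimal software AFSK modulator sketch in plain Java using javax.sound.sampled (the tone and baud choices follow Bell 202, but nothing stops you from raising the baud rate experimentally; a real modem would add framing, bit stuffing and a demodulator):

    import javax.sound.sampled.AudioFormat;
    import javax.sound.sampled.AudioSystem;
    import javax.sound.sampled.SourceDataLine;

    public class AfskModulator {
        static final float  SAMPLE_RATE = 48000f;
        static final int    BAUD        = 1200;      // try higher values experimentally
        static final double MARK_HZ     = 1200.0;    // tone for a '1' bit
        static final double SPACE_HZ    = 2200.0;    // tone for a '0' bit

        public static void main(String[] args) throws Exception {
            boolean[] bits = {true, false, true, true, false, false, true, false};

            int samplesPerBit = (int) (SAMPLE_RATE / BAUD);   // 40 samples at 48 kHz / 1200 Bd
            byte[] audio = new byte[bits.length * samplesPerBit];
            double phase = 0.0;                               // kept continuous so tones don't click
            int i = 0;
            for (boolean bit : bits) {
                double freq = bit ? MARK_HZ : SPACE_HZ;
                for (int s = 0; s < samplesPerBit; s++) {
                    phase += 2 * Math.PI * freq / SAMPLE_RATE;
                    audio[i++] = (byte) (Math.sin(phase) * 120);   // 8-bit signed samples
                }
            }

            // Play the generated tones through the default sound card.
            AudioFormat format = new AudioFormat(SAMPLE_RATE, 8, 1, true, false);
            SourceDataLine line = AudioSystem.getSourceDataLine(format);
            line.open(format);
            line.start();
            line.write(audio, 0, audio.length);
            line.drain();
            line.close();
        }
    }

Raising BAUD here shortens each bit's tone burst; whether a receiver can still distinguish the tones reliably on a noisy, band-limited audio channel is what ultimately caps the usable rate.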