Why doesn't UM mode in the RLC protocol support retransmission?
Unacknowledged Mode (UM) in RLC is meant for delay-sensitive applications such as VoIP. If retransmissions were allowed, they would have a significant impact on call quality: by the time a retransmitted packet arrived it would be very late and cause more harm than good. Instead, packets are transmitted continuously, and while some may never be received, the result is better overall quality.
A good reference on the 3 RLC modes can be found here:
http://www.tweet4tutorial.com/rlc-mode/
RLC operates in different modes depending on the requirements of the application. In other words, to accommodate various types of services (streaming, transfer of huge files, etc.), RLC provides different modes: TM, UM and AM.
The RLC UM mode uses sequence numbering of data blocks (PDUs) for reordering and duplicate detection of RLC PDUs, but there is no retransmission mechanism to address specific application/service requirements. So RLC UM mode is used in scenarios where error-free delivery is not required or retransmission is not expected (applications requiring low delay/latency). A few examples are VoIP, UDP-based flows, and broadcast transmissions (point-to-multipoint traffic such as the Multimedia Broadcast/Multicast Service (MBMS)).
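To make this concrete, here is a minimal sketch (not the actual TS 36.322 state machine; the window handling is simplified and the 5-bit sequence number space is just one of the configurable sizes) of the kind of receive-side logic UM implies: duplicates are detected and dropped, out-of-order PDUs are reordered, but a missing PDU is never requested again.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Toy RLC-UM-style receiver: sequence numbers give reordering and
 * duplicate detection, but a PDU lost on the air is simply skipped,
 * never retransmitted. The SN space is 5 bits (0..31), one of the
 * sizes LTE UM supports; real TS 36.322 window handling is richer. */
#define SN_MOD 32

static uint8_t vr_ur = 0;        /* next SN expected in sequence       */
static bool    held[SN_MOD];     /* PDUs waiting in the reorder buffer */

void um_rx_pdu(uint8_t sn)
{
    if (held[sn]) {              /* duplicate detection: drop silently */
        printf("SN %u is a duplicate, dropped\n", sn);
        return;
    }
    held[sn] = true;

    /* Deliver everything now contiguous from vr_ur upward. In the real
     * protocol a reordering timer eventually skips a gap that never
     * fills; nothing ever asks the sender for the missing PDU. */
    while (held[vr_ur]) {
        printf("SN %u delivered to the upper layer\n", vr_ur);
        held[vr_ur] = false;
        vr_ur = (vr_ur + 1u) % SN_MOD;
    }
}
```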
So services that can tolerate some errors or that do not expect retransmission are mapped to a radio bearer (RB) with RLC in UM mode, while services that need reliability (error-sensitive and latency-insensitive), such as file transfer/FTP, are mapped to an RB with RLC in AM mode.
UM mode is configured by the RRC layer of the protocol stack for applications like VoLTE and VoIP.
As Jeff mentioned, retransmission can have a significant impact on call quality.
For more information about RLC UM mode, refer to 3GPP specification TS 36.322.
I've been poring over the BT 4.x (LE) spec trying to figure out whether this is possible or not (events without pairing/bonding).
Does anyone have any insight (preferably with a link to the spec) into whether it's possible?
As Mike Petrichenko commented, GATT communication is definitely possible without pairing. In fact, most GATT servers/clients out there function without the need for pairing/bonding. The only exception is when a characteristic requires authentication/authorisation before its data can be read (e.g. a medical device with a heart rate characteristic).
If you want a specific reference to where this is mentioned in the Bluetooth spec, then I recommend looking at the Core Specification version 5.2, Vol 3, Part C, section 10.2 (LE Security Modes):-
The security requirements of a device, a service or a service request are expressed in terms of a security mode and security level. Each service or service request may have its own security requirement. The device may also have a security requirement. A physical connection between two devices shall operate in only one security mode.
It is then mentioned that LE Security Mode 1 includes a "No security" level, and many GATT servers/clients work at this level.
You can test this yourself if you have two phones available. You can use the nRF Connect app to run a GATT server on one and a GATT client on the other. You will see that you can browse the GATT table and read data without having to pair.
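If you would rather verify it programmatically on Linux, below is a minimal sketch (assuming the BlueZ development headers, linking with -lbluetooth, and typically root privileges; the peer address and the characteristic value handle 0x0003 are placeholders) that connects to the fixed ATT channel over LE and issues a bare ATT Read Request, with no pairing step anywhere:

```c
#include <stdint.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>
#include <bluetooth/bluetooth.h>
#include <bluetooth/l2cap.h>

/* Read a GATT characteristic without pairing: open an LE L2CAP socket
 * on the fixed ATT channel (CID 4) and send a raw ATT Read Request.
 * The address and the value handle 0x0003 are placeholders. */
int main(void)
{
    struct sockaddr_l2 addr = { 0 };
    int s = socket(AF_BLUETOOTH, SOCK_SEQPACKET, BTPROTO_L2CAP);

    addr.l2_family      = AF_BLUETOOTH;
    addr.l2_cid         = htobs(4);        /* ATT fixed channel */
    addr.l2_bdaddr_type = BDADDR_LE_PUBLIC;
    str2ba("AA:BB:CC:DD:EE:FF", &addr.l2_bdaddr);

    if (connect(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    uint8_t req[3] = { 0x0A, 0x03, 0x00 }; /* Read Request, handle 0x0003 */
    write(s, req, sizeof(req));

    uint8_t rsp[32];
    ssize_t n = read(s, rsp, sizeof(rsp));
    if (n > 0 && rsp[0] == 0x0B)           /* 0x0B = Read Response */
        printf("Read %zd value byte(s) without any pairing\n", n - 1);

    close(s);
    return 0;
}
```

If the characteristic did require a higher security level, the read would instead come back with an ATT error such as Insufficient Authentication, which is exactly the per-characteristic exception mentioned above.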
Below are a few links that contain more information:-
Is pairing/encryption mandatory to allow a peer to write in GATT
Bluetooth Low Energy GATT security levels
How GAP and GATT work
From the Bluetooth client example at http://people.csail.mit.edu/albert/bluez-intro/x502.html, it seems I can simply connect to a remote Bluetooth socket as long as I have the Bluetooth MAC address of the device.
If I can simply connect to a remote Bluetooth device, I am wondering what exactly does Bluetooth pairing do. When is pairing really needed?
Update:
From How does Bluetooth pairing work?, it appears the final result of pairing is that an encryption key gets stored on both sides. I assume that when you open a remote socket connection, the call is intercepted by the local Bluetooth daemon. In turn, the daemon encrypts the data and sends it to the remote device. The daemon on the remote device decrypts the data and passes it to the remote client application:
Device1Client-->Device1Daemon-->Device2Daemon-->Device2Client
Is this assumption correct?
Yes, your assumption is partially correct. Encryption is one of the uses of the passkey.
Bluetooth pairing is necessary whenever two Bluetooth devices connect to each other to share resources. A trusted relationship is established between the devices using a numerical password, commonly referred to as a passkey. Depending on how often one Bluetooth device connects to another, the user might opt to have the passkey saved for future connection attempts or prompt to enter the passkey each time the devices request communication with each other.
This is already explained on Stack Overflow; please check How does Bluetooth pairing work?
In the answer below, I will try to explain what is not mentioned in the above link or its answers.
The pairing process begins when the initiating device sends a "Pairing Request" to the other device. The two devices then exchange I/O capabilities, authentication requirements, maximum link key size, and bonding requirements. Basically, this phase consists of the two devices exchanging their capabilities and determining how they are going to go about setting up a secure connection. It is also important to note that all data exchanged during this phase is unencrypted.
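For BLE in particular, this exchange is carried in a pair of 7-byte SMP PDUs. A sketch of the Pairing Request layout (the Pairing Response from the peer has the same fields; see Core spec Vol 3, Part H):

```c
#include <stdint.h>

/* SMP Pairing Request / Pairing Response PDU, sent unencrypted during
 * pairing phase 1. This is the "capability exchange" described above:
 * each side learns the other's I/O capabilities, authentication
 * requirements, maximum key size and bonding requirements. */
typedef struct __attribute__((packed)) {
    uint8_t code;          /* 0x01 = Pairing Request, 0x02 = Response */
    uint8_t io_capability; /* e.g. 0x03 = NoInputNoOutput             */
    uint8_t oob_flag;      /* out-of-band pairing data available?     */
    uint8_t auth_req;      /* bonding flags, MITM protection bit, ... */
    uint8_t max_key_size;  /* maximum encryption key size, 7..16      */
    uint8_t init_key_dist; /* keys the initiator will distribute      */
    uint8_t resp_key_dist; /* keys the responder will distribute      */
} smp_pairing_req_t;
```

The combination of the two io_capability values is what later decides the pairing method (Just Works, passkey entry, numeric comparison, and so on).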
Now the question is: why is this phase needed?
As mentioned, the two devices exchange their capabilities. Pairing should happen between compatible devices; there is no point in pairing your mouse with a headphone, as a mouse's capabilities are different from a headphone's.
One more use of pairing is "determining how they are going to go about setting up a secure connection." Here the frequency hopping pattern is determined, for two reasons:
To avoid man-in-the-middle attacks.
To avoid collisions.
Bluetooth uses 79 radio frequency channels in the band starting at 2402 MHz and continuing every 1 MHz. It is these frequency channels that Bluetooth technology "hops" over. The signal switches carrier channels rapidly, at a rate of 1600 hops per second, over a determined pattern of channels. The hopping pattern is determined during the pairing process, so no other device knows in which band of the frequency the data is being transferred at a given instant. It is rare for the frequency hopping pattern to be the same for two sets of communicating devices, so collisions are avoided.
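As a toy illustration of the arithmetic in that paragraph (this is deliberately not the real hop selection kernel, which is derived from the master's clock and address; the PRNG below merely stands in for a pattern both paired devices can compute but an eavesdropper cannot predict):

```c
#include <stdint.h>
#include <stdio.h>

/* Toy frequency-hopping arithmetic: 79 channels, 1 MHz apart starting
 * at 2402 MHz, 1600 hops per second (a dwell time of 625 us). A simple
 * xorshift PRNG stands in for the real hop selection kernel. */
int main(void)
{
    uint32_t state = 0xB1E55EED;   /* placeholder for the shared state */

    for (int hop = 0; hop < 5; hop++) {
        state ^= state << 13;      /* xorshift32 step */
        state ^= state >> 17;
        state ^= state << 5;

        unsigned channel = state % 79;          /* channels 0..78 */
        printf("hop %d: channel %2u -> %u MHz (dwell 625 us)\n",
               hop, channel, 2402u + channel);
    }
    return 0;
}
```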
Note: if any third device is able to capture the passkey, it can replicate the whole communication pattern and capture the data being transferred. This is how BT sniffers work.
I am not able to cover all the details from the SIG specs here. I hope the above gives you a clearer picture of the need for the pairing process. Feel free to point out any specific point you would like me to explain in detail.
Below are reference links for more information:
http://large.stanford.edu/courses/2012/ph250/roth1/
https://www.bluetooth.com/blog/bluetooth-pairing-part-1-pairing-feature-exchange/
Let's say I want to start transmitting advertising packets from a Bluetooth 4 module attached to a Raspberry Pi. I am planning to use the BlueZ library for this. I have some basic questions regarding it:
How much memory does a typical Bluetooth device contain (is it standardised, or something that can change from vendor to vendor)? In either case, does the advertisement have to be 27 bytes (iBeacon) or 28 bytes (AltBeacon and URIBeacon), or can it be extended to any size, limited only by the Bluetooth device's memory or other guidelines? I wish to understand a little more about this topic.
Thanks in advance!
Device memory is not what limits the Bluetooth LE advertisement size. The limitation is imposed by the Bluetooth 4.0 Core specification, which allows for a maximum of 28 bytes in a manufacturer advertisement PDU (including the one-byte PDU length field).
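To make the budget concrete, here is the widely documented iBeacon advertising payload laid out as a byte array (the UUID, major, minor and TX power values are placeholders). Everything after the three flag bytes is a single manufacturer-specific AD structure, and it is that structure the size limit constrains:

```c
#include <stdint.h>

/* A complete iBeacon advertising payload, 30 bytes in total. */
static const uint8_t ibeacon_adv[30] = {
    /* Flags AD structure */
    0x02, 0x01, 0x06,   /* len=2, type=Flags, LE General Discoverable */
    /* Manufacturer-specific AD structure */
    0x1A, 0xFF,         /* len=26, type=Manufacturer Specific Data    */
    0x4C, 0x00,         /* company ID 0x004C (Apple)                  */
    0x02, 0x15,         /* iBeacon type and remaining length (21)     */
    /* 16-byte proximity UUID (placeholder) */
    0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88,
    0x99, 0xAA, 0xBB, 0xCC, 0xDD, 0xEE, 0xFF, 0x00,
    0x00, 0x01,         /* major (placeholder)                        */
    0x00, 0x02,         /* minor (placeholder)                        */
    0xC5                /* calibrated TX power at 1 m, in dBm         */
};
```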
While you can't transmit more data in a single advertisement, it is possible to send more data using other techniques including:
Interleaving multiple advertisements from the same transmitter. You can differentiate these advertisements with a "type" byte, and then use this to stitch them back together on the receiving device. Disadvantage: complex implementation.
Using a scan response packet to send additional data. Disadvantage: scan responses may not arrive in a timely manner.
Providing a connectable GATT service that can be used to fetch additional data. Disadvantage: once connected, advertising stops.
Using a web service to look up additional data based on a unique identifier in the advertisement. Disadvantage: it won't work without an internet connection.
Is it possible to bond (aggregate) multiple connections over GPRS (USB stick) and use them as one link in Linux?
It is technically possible. Linux has a module called bonding which can assemble several interfaces into one logical interface. If you set the mode of the bonding interface to balance-rr, it will distribute the packets between the two interfaces. You will also need a server somewhere to reassemble the traffic that will come from your two links.
However, in practice the results with such setups are awful, especially with high-latency, high-jitter links like GPRS. The main reason is that you get a lot of out-of-order delivery, and protocols like TCP cope badly with these conditions. So the resulting throughput will never reach the total throughput of the two links.
I have designed a simple communications protocol, over raw TCP sockets, to enable simple messaging between some embedded devices. To put things in context, my embedded device is a box of electronics containing a relatively small microcontroller running a basic embedded RTOS (which essentially provides just task priority and message queueing) and a TCP/IP stack. The intended use for the TCP protocol is to:
Enable two or more 'boxes' to communicate with each other over a LAN in one building
Allow a box to exchange data with an external server over the Internet.
I now have a messaging protocol working between my metal boxes that I'm happy with. The basic messaging procedure between two boxes is as follows (a code sketch follows the list):
Box 'A' initiates a socket connection to 'B'.
Box 'A' sends a message report or command sequence.
Box 'B' responds with an acknowledgement and/or command response.
Box 'A' closes the socket.
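A minimal sketch of box 'A''s side of that exchange, using plain POSIX sockets (the port number and message contents here are placeholders, not part of my actual protocol; the numbered comments match the steps above):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Box 'A' side of the exchange: connect to box 'B', send one report,
 * wait for the acknowledgement, close. Port and payload are placeholders. */
int talk_to_box_b(const char *box_b_ip)
{
    struct sockaddr_in peer = { 0 };
    char ack[64];
    int s = socket(AF_INET, SOCK_STREAM, 0);

    peer.sin_family = AF_INET;
    peer.sin_port   = htons(5000);
    inet_pton(AF_INET, box_b_ip, &peer.sin_addr);

    if (connect(s, (struct sockaddr *)&peer, sizeof(peer)) < 0) { /* step 1 */
        close(s);
        return -1;
    }

    const char *report = "REPORT;TEMP=21.5\n";
    write(s, report, strlen(report));                             /* step 2 */

    ssize_t n = read(s, ack, sizeof(ack) - 1);                    /* step 3 */
    if (n > 0) {
        ack[n] = '\0';
        printf("box B answered: %s", ack);
    }

    close(s);                                                     /* step 4 */
    return 0;
}
```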
What I would now like to do is incorporate some level of security and authentication. The great restriction here is that I don't have any kind of OS or glorified TCP stack that can provide me with security features; I just have a simple TCP stack, and therefore I must implement security measures at the application level, with microcontroller limitations in mind.
The objectives I would like to meet are:
Authentication between devices. To achieve this, I intend to take the following measures:
Hold a table of known IPs, and only accept connections from those addresses.
Each time a socket connection is established, unique identifiers are always exchanged first. The number might be a unique serial number for a given device, and must be known to the other devices.
Encryption of data, in the event that packets are somehow intercepted. Presumably I need a cipher algorithm that isn't too 'expensive' to perform on a small microcontroller, used in conjunction with a unique key that is programmed into all devices. One such algorithm I've seen, which looks compact enough to implement in my code, is TEA (Tiny Encryption Algorithm).
I would be most grateful for any advice or pointers.
Check MatrixSSL - they boast a tiny size and embeddable capabilities. This is much better than re-inventing SSL/TLS yourself (which you would end up doing).
TEA looks to be quite simple and will probably do what you need.
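For reference, the widely published TEA routines (public domain, from Wheeler and Needham's paper: 64-bit block, 128-bit key, 32 rounds) are just a few lines; something close to this is what the size figures below measure:

```c
#include <stdint.h>

/* TEA reference implementation: v is the 64-bit block as two words,
 * k is the 128-bit key as four words. */
void tea_encrypt(uint32_t v[2], const uint32_t k[4])
{
    uint32_t v0 = v[0], v1 = v[1], sum = 0;
    const uint32_t delta = 0x9E3779B9;

    for (int i = 0; i < 32; i++) {
        sum += delta;
        v0 += ((v1 << 4) + k[0]) ^ (v1 + sum) ^ ((v1 >> 5) + k[1]);
        v1 += ((v0 << 4) + k[2]) ^ (v0 + sum) ^ ((v0 >> 5) + k[3]);
    }
    v[0] = v0; v[1] = v1;
}

void tea_decrypt(uint32_t v[2], const uint32_t k[4])
{
    uint32_t v0 = v[0], v1 = v[1], sum = 0xC6EF3720; /* delta * 32 */
    const uint32_t delta = 0x9E3779B9;

    for (int i = 0; i < 32; i++) {
        v1 -= ((v0 << 4) + k[2]) ^ (v0 + sum) ^ ((v0 >> 5) + k[3]);
        v0 -= ((v1 << 4) + k[0]) ^ (v1 + sum) ^ ((v1 >> 5) + k[1]);
        sum -= delta;
    }
    v[0] = v0; v[1] = v1;
}
```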
Compiling for Thumb with -O2 or similar optimizations:
arm-none-linux-gnueabi-gcc (Sourcery G++ Lite 2011.03-41) 4.5.2: encrypt 136 bytes, decrypt 128 bytes
llvm 29: encrypt 92 bytes, decrypt 96 bytes
Compiling for generic ARM:
gnu: encrypt 188 bytes, decrypt 184 bytes
llvm: encrypt 112 bytes, decrypt 116 bytes
For authentication, is there a one-to-one relationship between the IP address table and the number of devices? Basically, does there need to be more than one unique identifier per device? Do you want the side making the connection to the embedded system to log in in some form or fashion? Or do you control the binaries/code on both ends, with no users and no selection of devices (the programs know what to do)? If each device has a known IP address, that IP address could be the key to the encryption (plus other bytes common to all, or derived in some fashion that everyone knows). If an incoming connection is 1) not from the approved list, or 2) its traffic fails to decrypt under the key based on the embedded system's IP address, then reject the connection.
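A sketch of that idea (the salt values are placeholders; TEA takes a 128-bit key, so the 4-byte IP address has to be stretched across four words):

```c
#include <stdint.h>

/* Derive a 128-bit TEA key for a peer from its known IPv4 address plus
 * bytes common to all devices. The shared salt is a placeholder for
 * whatever constants get burned into every unit. A connection that is
 * not from the approved list, or whose traffic fails to decrypt under
 * the key derived from its address, gets rejected. */
void key_from_ip(uint32_t peer_ip, uint32_t key[4])
{
    static const uint32_t shared_salt[4] = {
        0xDEADBEEF, 0x01234567, 0x89ABCDEF, 0x0F1E2D3C
    };

    for (int i = 0; i < 4; i++)
        key[i] = shared_salt[i] ^ (peer_ip * (uint32_t)(2 * i + 1));
}
```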
Your security can only go so far anyway; if you need something really robust, you probably need more horsepower in the embedded system so you can implement one of the common/standard systems like SSL.