How to deploy a large number of iBeacons - bluetooth

I want to deploy a large number of iBeacons with the same UUID, because we can't dynamically read the UUID of a detected beacon. What is the maximum number of beacons that can share the same UUID? I found some discussions mentioning roughly 65,000; is that correct?
I'm thinking of using a Bluetooth transmitter to wake up the app and check the current location; an API call returns the different UUIDs for the area, and I range each one to find which beacon is closest to me.

Bluetooth beacons (iBeacon and AltBeacon) have a three part identifier:
ProximityUUID (16 bytes)
Major (2 bytes)
Minor (2 bytes)
There are 8 bits per byte, so if you give all your beacons the same ProximityUUID, the Major and Minor fields together give you 4 bytes x 8 bits/byte = 32 bits' worth of combinations. That's 2^32 = 4,294,967,296 combinations.
If you have heard discussions of there only being ~65000 combinations, this was probably referring to the major or minor value by itself. Because the minor field has 16 bits (2 bytes x 8 bits/byte), there can be 2^16 = 65536 combinations in the minor field.

Related

How to determine the addressable memory capacity in bytes knowing the bits for the operand address?

If I have, say, a 64-bit instruction with 2 bytes (16 bits) for the opcode and the rest for the operand address, I can determine that I have 48 bits for the address (64 - 16). The maximum value representable in 48 bits, plus 1 to account for address 0, gives 2^48 addresses. However, I have trouble understanding this in terms of the iB units.
2^48 is 2^40 (one TiB) x 2^8 = 256 TiB. But since a TiB is 2^40 BYTES, when did the 2^48 become a count of BYTES? I had believed that to get the number of bytes I'd have to divide by 8, but that doesn't seem to be the case.
Could someone explain why this works?
A byte is by definition the smallest chunk of memory which has an address. Whatever number of address bits, the resulting address is the address of a byte, by definition. In all (or at least, most) computer architectures existing today, a byte is the same as an octet, that is, eight bits; but historically there were popular computer architectures with 6 bit bytes, or 12 bit bytes, or even other more exotic number of bits per byte.

What is the difference between address size(width) and addressability?

What I understand so far is that address width is the number of bits in an address.
For example, a 4-bit address can name 2^4 = 16 locations. What I'm really uncertain about is addressability. From what I learned, it is "the size of the most basic unit that can be named by an address". So, if we have a 4-bit address width and 2-byte addressability, what happens?
I've been really curious about this for a couple of weeks, but I'm still stuck.
Could you explain these things, with a drawing or something?
I think you do get it. There is the number of address bits (the width, if you will), and there is the size of the unit those addresses name. So 8 bits of address means you have 256 things, and 16 bits of address means 65536 things. The size of each thing is completely independent of the number of address bits. From a programmer's perspective we almost always deal in units of bytes, so 8 bits of address would be 256 bytes, and 32 bits of address would be 4 gigabytes. As you dig into the logic, it is often wasteful to use a byte-based address: if you have a peripheral with 32-bit-wide registers, and you can only access those as whole 32-bit registers, do you need to connect address lines 0 or 1? Often not. So at that peripheral the address bus, however wide it is (often the whole address bus at the peripheral is a subset of the address bus higher up, closer to the processor/software), carries addresses in units of 32-bit words.
To make things even more confusing, memory parts are often specified in bits, even if they have an 8- or 16-bit data bus. So you might have a 4M part, but that is megabits, not megabytes...

High Speed Serial

I have a system which uses a UART clocked at 26 MHz. This is a 16850 UART on an i86 architecture. I have no problems accessing the port. The largest incoming message is about 56 bytes, the largest outgoing about 100. The baud-rate divisor needs to be 1, so setserial /dev/ttyS4 baud_base 115200 is OK, and I open at 115200. There is no flow control. Specifying the part as a 16850 does NOT set the FIFOs deep, and I was losing bytes. All the data is bytes, unsigned char.
I wrote a routine that uses ioperm to set the deep FIFOs to 64, and now reads/writes work, meaning that the deep FIFOs are NOT being enabled by serial_core.c or 8250.c, at least not to full depth.
With the deep FIFO set by brute force, after open("/dev/ttyS4", O_NONBLOCK), etc., I reliably get the correct number of bytes, but I tend to get the same word missing a bit. Not a byte, a bit.
All this same stuff runs fine under DOS so it is not a hardware issue.
I have opened the port raw: no delays, no parity, 8 bits, 2 stop bits.
Has anyone seen issues reading serial ports at relatively high speeds with short bursts of data?
Yes, I have tried a custom baud rate, etc. The FIFO levels made the biggest improvement. This is an ISA-bus card using IRQ7.
It appears the serial driver for Linux sucks and has way too much latency and far too many features for really basic raw operation.
Has anyone else tried very high-speed data without flow control, or had similar issues? As I stated, I get the correct number of bytes, and all the data is correct except 1 bit in byte 4.
I am pretty stumped.

Reading LIN frames using Linux standard serial API

I have a development board that runs some Linux distribution; this board has some UART peripherals that are mapped into the system as tty-like files.
On a specific UART port I have connected a LIN* transceiver which is connected to a LIN bus.
The LIN transceiver outputs frames (of two types: one type has 3 bytes, the other between 6 and 12 bytes) with a minimum of ~20 ms between them.
Now I want to write an application that reads these individual frames as whole data buffers (not byte-by-byte or any other way).
For setting the communication parameters (baud rate, parity bits, start/stop bits, etc.), I'm using the stty** utility. I have played a bit with the min and time [***] special settings, but I didn't obtain the right behavior: big frames are always split into at least three chunks.
Is there any way to achieve this?
[*] LIN: https://en.wikipedia.org/wiki/Local_Interconnect_Network
[**] stty: http://linux.die.net/man/1/stty
[***] I have used the following modes:
MIN == 0, TIME > 0 (read with timeout)
This won't work because I will always receive at least one individual byte first (and then the rest of the frame as a buffer).
MIN > 0, TIME > 0 (read with interbyte timeout)
In this mode, setting MIN to 3 (the smallest frame has 3 bytes) and the TIME parameter to some higher value like 90 also won't do the trick: the short frames are received correctly in this case (at once), but the longer frames are split into 3 parts (the first part with 3 bytes, the second with 3 bytes, and the last with 5 bytes).

AXI Burst calculations

In an AXI channel, the number of data transfers in a single burst are called beats.
size[2:0] - the number of bytes to be transferred in one beat, encoded as a power of two:
bytes per beat = 2^size
e.g. if size = 100 (binary) = 4,
bytes per beat = 2^4 = 16.
I also have byte_count[15:0] - the total number of bytes to be transferred in the entire transfer.
Now my issue is how to calculate the burst length and the number of bursts to issue.
Burst length is the number of beats transferred in one burst.
The formulas are:
no. of beats = byte_count / (2^size)
no. of bursts to be issued = no. of beats / 16
16, because one AXI burst can have at most 16 beats.
I am doing the coding in Verilog.
This is for an AXI master, and unaligned transfers are not supported.
Any hardware design or a formula is acceptable.
A few comments on what you have, assuming this is a master:
You're calculating the number of bursts when you probably don't need to. Usually you would keep track of only the bytes/words remaining to transfer and decrement as transactions complete. Keep in mind you can't burst across certain address boundaries which may require you to split transfers.
You're basing calculations on using 16-beat bursts (maximum length), but you may not be able to assume this. While AXI requires that slaves respond to all transfers, it does not require that they accept all of them. For instance, an AXI slave for a FIFO interface may reject unaligned transfers. If the target system may include slaves or bus fabrics whose capabilities you don't know, you'll need to handle some programmable values.
The number of beats equals the number of read or write transfers, i.e.,
if AWLEN or ARLEN is 3, then the burst length is AWLEN (or ARLEN) + 1.
Therefore, AWLEN + 1 => 3 + 1 => 4 transfers, or 4 beats.
The maximum number of beats in the AXI protocol is 16: the burst length field is 4 bits, so at most 16 beats are possible per burst.
Hope this clears up how the number of beats is calculated.
Thank you.
