Physical Bluetooth signal blockers - sensors

What are some physical materials, such as metal plates, that are efficient at blocking out Bluetooth Low Energy signals? If I have a BLE beacon sensor, how can I limit the sensor to receive signals from only one direction and not from the sides or behind the sensor?

While it is not possible to completely block signals from other directions, you can attenuate them with metal shielding. Just as important as the material is its placement relative to the sensor, and grounding the shielding so that it does not act as an antenna itself. Specific instructions are hard to give, as a lot depends on your physical environment. Expect plenty of trial and error.
A more reliable approach than the above is to get a BLE sensor with an antenna connector and attach a directional antenna pointed in the direction you want to receive signals from.

Related

Autosar Spec for I2C and UART

There are well-defined AUTOSAR specs for CAN, LIN, and Ethernet.
Why are these specs unavailable for UART and I2C?
Are UART and I2C not used in automotive?
Automotive has some unique safety requirements, and it would be hard to implement them on top of protocols such as UART or I2C.
They are not used by the auto industry for inter-ECU communication, and therefore they are not part of the standard.
The LIN driver can also be used for SCI/UART:
[SWS_Lin_00063] ⌈It is intended to support the complete range of LIN hardware from a simple SCI/UART to a complex LIN hardware controller. Using a SW-UART implementation is out of the scope. For a closer description of the LIN hardware unit, see chapter 2.3.⌋ (SRS_Lin_01547)
In the automotive field, I2C is not much used, even on-board. It might be easy and cheap, but it's not as fast, not as robust against EMC/noise, and a single slave pulling SDA low can block the whole bus for all nodes.
A lot of interface chips use SPI, and for those we have the SPI driver.

Limit To BLE Devices?

Is there a limit to the number of BLE (Bluetooth Low Energy) devices that can transmit at the same time?
For example: if I plan to implement an IT solution that has to include several thousand BLE beacons / iBeacons, would it be a problem to monitor all these beacons?
Would their transmissions interfere with each other?
Thanks!
BLE devices advertise on three radio-frequency channels and randomize their specific packet transmission times in order to avoid collisions with other BLE devices on the same channel. I have successfully tested such a scenario with several dozen beacons visible at the same time, but there are limits to this built-in collision avoidance.
If you expect to have many hundreds of devices visible within the same ~50 meter transmission radius, you may run into trouble. See this discussion for details.
Collisions will make it take longer to detect each beacon. CoreLocation on iOS and the Android Beacon Library provide a ranging update once per second for each device, but you may find that each update includes only a small percentage of the theoretically visible beacons, because collisions prevented many of their packets from being received within that one-second interval. Whether less frequent updates are acceptable depends on your application.
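To get a rough feel for the scale of the problem, you can model advertising as unslotted ALOHA spread over the three advertising channels. This is a deliberate simplification, and the ~400 µs packet time and 1 s advertising interval below are assumptions for illustration, not values from any particular deployment:

    /* Rough collision estimate for n advertising beacons, modelled as
       pure (unslotted) ALOHA spread evenly over the 3 BLE advertising
       channels.  Packet duration and advertising interval are assumed. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double channels   = 3.0;     /* advertising channels 37-39   */
        const double pkt_s      = 0.0004;  /* ~400 us per advertisement    */
        const double interval_s = 1.0;     /* assumed advertising interval */

        for (int n = 10; n <= 1000; n *= 10) {
            /* offered load G seen by one packet on its channel */
            double g = (n - 1) * pkt_s / (interval_s * channels);
            /* pure-ALOHA loss probability: 1 - exp(-2G) */
            printf("%4d beacons -> ~%4.1f%% of packets collide\n",
                   n, 100.0 * (1.0 - exp(-2.0 * g)));
        }
        return 0;
    }

The model ignores scanner-side losses entirely, so treat its output only as a lower bound on the collision rate.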
On both iOS and Android there is no problem monitoring this large a number of beacons, as long as only a few dozen are in range at any given time. On iOS, however, you need to make sure you use a maximum of 20 ProximityUUIDs across all the beacons, as this is the maximum number of Beacon Regions you can monitor at the same time on that platform.

Why are all the pins on a chip not GPIOs?

Documentation of GPIOs in Linux states:
A "General Purpose Input/Output" (GPIO) is a flexible software-controlled
digital signal. They are provided from many kinds of chip, and are familiar
to Linux developers working with embedded and custom hardware.
If we are capable of controlling the behavior of a pin, then why are not all the pins on a chip GPIOs?
OR
How can we provide functionality through software for a pin on a chip?
Please explain.
When you design an integrated circuit (chip), you design with some component model in mind. Those internal components may have specific needs that cannot be reassigned among different pins, so those pins are fixed-function.
For example, pins related to the memory controller have a very strict set of performance requirements (in terms of signal integrity, toggle rate, output drive, capacitance). Those pins are fixed-function and not reassignable, so you cannot use them as GPIOs. If you did, you would end up with a slower chip, because the additional multiplexing circuitry would degrade those electrical characteristics and make the requirements unfeasible. Another example is power-domain pins (those typically called VCC, VDD, VEE, GND).
That's why GPIO pins are always shared with slow interfaces like SPI, I2C, and SMBus, but never with fast interfaces like SATA, DDR, etc.
In other cases the only reason is that the chip doesn't make sense without a particular component. For example, given that you must have RAM, the RAM-dedicated pins don't need to be reassignable: you will never implement the system without RAM, so you will never need to reuse those pins as GPIOs.
Not all pins on an SoC are GPIOs. A specific group of pins is mapped as GPIO; other pins are configured for specific interfaces like DDR, SPI, I2C, etc., which include clock, data, and power-supply pins. GPIOs are generic pins that can be used for any purpose based on user requirements: handling IRQs, triggering resets, driving LEDs, and so on.
For example, consider an FPGA connected to an SoC via GPIOs. The user needs to program the FPGA through these GPIO pins: on the SoC side, software drives the GPIOs in the sequence required to load the FPGA configuration file, as sketched below.
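A minimal sketch of what that SoC-side program might look like, assuming Xilinx-style slave-serial configuration pins (PROG_B, CCLK, DIN, DONE) and a generic gpio_set()/gpio_get() driver API; both the pin set and the API are assumptions here, not any specific vendor's interface:

    /* Illustrative sketch: bit-banging an FPGA bitstream over SoC GPIOs.
       Pin names follow Xilinx-style slave-serial configuration; the
       gpio_set()/gpio_get() calls stand in for the SoC's GPIO driver
       (both are assumptions, and INIT_B handshaking is omitted). */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    extern void gpio_set(int pin, bool level);   /* assumed GPIO API */
    extern bool gpio_get(int pin);

    enum { PIN_PROG_B = 10, PIN_CCLK = 11, PIN_DIN = 12, PIN_DONE = 13 };

    bool fpga_configure(const uint8_t *bitstream, size_t len)
    {
        gpio_set(PIN_PROG_B, false);             /* pulse PROG_B to   */
        gpio_set(PIN_PROG_B, true);              /* restart config    */

        for (size_t i = 0; i < len; i++) {
            for (int bit = 7; bit >= 0; bit--) { /* MSB first         */
                gpio_set(PIN_DIN, (bitstream[i] >> bit) & 1u);
                gpio_set(PIN_CCLK, true);        /* DIN is sampled on */
                gpio_set(PIN_CCLK, false);       /* CCLK rising edge  */
            }
        }
        return gpio_get(PIN_DONE);               /* high on success   */
    }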

How to (almost) prevent FT232R (uart) receive data loss?

I need to transfer data from a bare-metal microcontroller system to a Linux PC at 2 MBaud.
The Linux PC is currently running 32-bit Kubuntu 14.04.
To achieve this, I tried to use an FT232R-based USB-UART adapter, but I sometimes observed lost data.
As long as the Linux PC is mostly idle, it seems to work most of the time; however, I still see rare data loss.
But when I force CPU load (e.g. rebuild my project), the data loss increases significantly.
After some research I read here that the FT232R has a receive buffer with a capacity of only 384 bytes. This means that the FT232R has to be read out (USB-polled) at least every 1.9 ms. Well, FTDI recommends using flow control, but because of the microcontroller system in use, I cannot use any flow control.
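For reference, that 1.9 ms figure follows directly from the line rate (assuming 8N1 framing, i.e. 10 bits on the wire per data byte):

    /* Back-of-the-envelope check of the FT232R receive deadline. */
    #include <stdio.h>

    int main(void)
    {
        const double baud          = 2000000.0; /* bits per second           */
        const double bits_per_byte = 10.0;      /* 1 start + 8 data + 1 stop */
        const double buffer_bytes  = 384.0;     /* FT232R RX buffer          */

        double bytes_per_s = baud / bits_per_byte;            /* 200000 B/s */
        double deadline_ms = buffer_bytes / bytes_per_s * 1e3;

        printf("host must drain the buffer every %.2f ms\n", deadline_ms);
        return 0;                                             /* 1.92 ms    */
    }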
I can live with the fact that there is no absolute guarantee of zero data loss. But the observed amount of data loss is far too high for my needs.
So I tried to find a way to increase the priority of the "FT232 driver" on my Linux system, but could not find out how. It's not described in
AN220 FTDI Drivers Installation Guide for Linux
and the document
AN107 FTDI Advanced Driver Options
has a chapter about "Changing the Driver Priority", but only for Windows.
So, does anybody know how to increase the FT232R driver priority on Linux?
Any other ideas to solve this problem?
BTW: As I read the FT232H datasheet, it seems that it comes with a 1 KiB RX buffer. I have ordered one just now and will check out its behaviour. Edit: No significant improvement.
If you want reliable data transfer, there is absolutely no way to use any USB-to-serial bridge correctly without hardware flow control, and without dedicating at least all remaining RAM in your microcontroller as the serial buffer (or at least enough to store ~1 s worth of data).
I've been using FTDI devices since the FT232AM was a hot new thing, and here's how I implement them:
(At least) four lines go between the bridge and the MCU: RXD, TXD, RTS#, CTS#.
Flow control is enabled on the PC side of things.
Flow control is enabled on the MCU side of things.
MCU code only sends communications when it can fit a complete reply packet into the buffer. Otherwise, it lets the PC side time out and retry the request (see the sketch after this list). For requests that stream data back, the entire frame is dropped if it can't fit in the transmit buffer at the time the frame is ready.
If you wish the PC to be reliably notified of new data, say after every complete sample/frame, you must use event characters to flush the FTDI buffers to the host, and encode your data accordingly. HDLC works great for that purpose and is documented in free standards (RFCs and the ITU X and Q series - all free!).
The VCP driver, or the D2XX port bring-up is set up to have transfer sizes and latencies set for the needs of the application.
The communication protocol is framed, with CRCs. I usually use a cut-down version of X.25/Q.921/HDLC, limited to SNRM(E) mode for simple "dumb" command-and-respond devices, and SABM(E) for devices that stream data.
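Here is a minimal sketch of the "complete frame or nothing" transmit rule from the list above. The ring buffer is generic; uart_tx_kick() is an assumed stand-in for whatever starts the UART or DMA transmitter on your MCU:

    /* Frames are queued whole or dropped whole; a partial frame never
       enters the buffer.  TX_BUF_SIZE is a power of two so the unsigned
       head/tail subtraction below wraps correctly. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define TX_BUF_SIZE 4096u           /* much larger than FTDI buffers */

    static uint8_t tx_buf[TX_BUF_SIZE];
    static volatile size_t tx_head;     /* advanced by the producer      */
    static volatile size_t tx_tail;     /* advanced by the TX ISR        */

    extern void uart_tx_kick(void);     /* assumed: start TX IRQ or DMA  */

    static size_t tx_free(void)
    {
        return TX_BUF_SIZE - 1u - ((tx_head - tx_tail) % TX_BUF_SIZE);
    }

    /* Returns false (frame dropped) instead of queueing a partial frame;
       the host is expected to time out and retry. */
    bool tx_send_frame(const uint8_t *frame, size_t len)
    {
        if (len > tx_free())
            return false;

        for (size_t i = 0; i < len; i++) {
            tx_buf[tx_head % TX_BUF_SIZE] = frame[i];
            tx_head++;
        }
        uart_tx_kick();
        return true;
    }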
The size of the FTDI buffers is immaterial; your MCU should have at least an order of magnitude more storage available to buffer things.
If you're running hard real-time code, such as signal processing, make sure that you account for the overhead of lots of transmit interrupts running "back-to-back". Once the FTDI device purges its buffers after a USB transfer, and indicates that it's ready to receive more data from your MCU, your code can potentially transmit a full FTDI buffer's worth of data at once.
If you're close to running out of cycles in your real-time code, you can use a timer as the source of transmit interrupts instead of the UART interrupt, and set the timer rate much lower than the UART speed. This lets you pace the transmission more slowly without lowering the baud rate; when you're running in setup/preoperational mode, or with a lower real-time task load, you can then trivially raise the transmit rate without changing the baud rate. You can use a similar trick to pace the receives by flipping the RTS# output on the MCU under timer control (a sketch follows below). Of course none of this is a problem if you use DMA or a sufficiently fast MCU.
If you're out of timers, note that many other peripherals can also be repurposed as a source of timer interrupts.
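A sketch of the timer-paced transmit idea, with generic HAL names; uart_tx_ready(), uart_tx_write(), and ringbuf_pop() are assumed stand-ins, not any particular vendor's API:

    /* A periodic timer ISR pushes at most one byte per tick, so the
       effective transmit rate is set by the timer period, not by the
       UART baud rate.  At a 100 us tick this tops out at 10 kB/s even
       though the line still runs at 2 MBaud. */
    #include <stdbool.h>
    #include <stdint.h>

    extern bool uart_tx_ready(void);          /* assumed: TX reg empty   */
    extern void uart_tx_write(uint8_t byte);  /* assumed: send one byte  */
    extern bool ringbuf_pop(uint8_t *byte);   /* assumed: TX ring buffer */

    void pace_timer_isr(void)                 /* e.g. fires every 100 us */
    {
        uint8_t byte;
        if (uart_tx_ready() && ringbuf_pop(&byte))
            uart_tx_write(byte);
    }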
This advice applies no matter what is the USB host.
Sidebar: Admittedly, the Linux USB serial driver "architecture" is in a state of suspended animation as far as I can tell, so getting sensible results there may require a lot of work. It's not a matter of a simple kernel thread priority change, I'm afraid. Part of the reason is that funding for a lot of Linux work focuses on server/enterprise applications, where USB performance is of secondary interest at best. It works well enough for USB storage, but USB serial is a mess nobody cares enough to overhaul, and an overhaul it needs. Just look at the amount of copy-pasta in that department...
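That said, one Linux-side knob that does exist and is worth trying: the ftdi_sio driver honours ASYNC_LOW_LATENCY, which (on stock kernels, to the best of my knowledge) drops the device's latency timer to 1 ms; the same setting is exposed via /sys/bus/usb-serial/devices/<port>/latency_timer. A minimal sketch:

    /* Request low-latency behaviour for an FTDI port on Linux.  This
       shortens the latency timer so the host polls the bridge sooner;
       it mitigates, but cannot eliminate, RX overflow at 2 MBaud. */
    #include <fcntl.h>
    #include <linux/serial.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);
        if (fd < 0) { perror("open"); return 1; }

        struct serial_struct ss;
        if (ioctl(fd, TIOCGSERIAL, &ss) == 0) {
            ss.flags |= ASYNC_LOW_LATENCY;
            if (ioctl(fd, TIOCSSERIAL, &ss) != 0)
                perror("TIOCSSERIAL");
        } else {
            perror("TIOCGSERIAL");
        }

        close(fd);
        return 0;
    }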

Thermal-aware scheduler in Linux

Currently I'm working on making a temperature-aware version of Linux for my university project. Right now I have to create a temperature-aware scheduler that takes processor temperature into account when scheduling. Is there any generalized way to get the temperature of the processor cores, or can I integrate the coretemp driver with the Linux kernel in some way? (I didn't find a way to do so on the internet.)
lm-sensors simply uses device files exported by the kernel for CPU temperature; you can just read whatever these device files expose as backing variables in the kernel to get the temperature information (a minimal sketch of reading them from user space follows below). As for the scheduler, I would not write one from scratch: start with the kernel's CFS implementation and, in your case, modify the load-balancer check to include temperature (currently it uses a metric that is the calculated cost of moving a task from one core to another in terms of cache effects, etc.; I'm not sure whether you want to keep that or not).
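A minimal sketch of reading one of those files from user space. The exact path is an assumption; on a real system you would scan /sys/class/hwmon/ and match each device's "name" attribute (e.g. "coretemp"):

    /* Read a CPU core temperature from the hwmon sysfs interface.
       Values are reported in millidegrees Celsius. */
    #include <stdio.h>

    int main(void)
    {
        const char *path = "/sys/class/hwmon/hwmon0/temp1_input"; /* assumed */
        FILE *f = fopen(path, "r");
        if (!f) { perror(path); return 1; }

        long millideg;
        if (fscanf(f, "%ld", &millideg) == 1)
            printf("core temperature: %.1f degC\n", millideg / 1000.0);

        fclose(f);
        return 0;
    }

From inside the kernel (e.g. a modified load balancer) you would not read sysfs, of course; hooking the thermal subsystem directly would be the equivalent route, but the user-space file is the quickest way to verify what the sensor reports.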
Temperature control is very difficult. The difficulty lies in thermal capacity and conductance. It is quite easy to read a temperature; how you control it will depend on the system model. A Kalman filter or some higher-order filter will be helpful. You don't know:
Sources of heat.
Distance from sensors.
Number of sensors.
Control elements, like a fan.
If you only measure at the CPU itself, the hard drive could have overheated 10 minutes ago, but the heat is only arriving at the CPU now. Throttling the CPU at that instant is not going to help. Only by getting a good thermal model of the system can you control the heat. Yet you say you don't really know anything about the system; I don't see how a scheduler by itself can do this.
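To illustrate the lag, here is a toy first-order model in which the CPU sensor only slowly creeps toward the temperature of a hot component elsewhere in the case; the time constant is invented for illustration, not measured:

    /* First-order thermal lag: the sensor trails the true heat source,
       so a scheduler reacting to `sensor` always acts on old heat. */
    #include <stdio.h>

    int main(void)
    {
        double sensor = 40.0;   /* current CPU sensor reading, degC  */
        double source = 70.0;   /* hot drive elsewhere in case, degC */
        double tau    = 300.0;  /* assumed time constant, seconds    */
        double dt     = 10.0;   /* simulation step, seconds          */

        for (int t = 0; t <= 600; t += (int)dt) {
            printf("t=%3ds  sensor=%.2f degC\n", t, sensor);
            sensor += (source - sensor) * (dt / tau);
        }
        return 0;
    }

Even after ten simulated minutes the sensor has still not seen the full temperature excursion, which is exactly why throttling on the instantaneous reading reacts too late.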
I have worked on a mobile freezer application where operators would load pallets of ice cream, etc., from a freezer onto a truck. Even very small distances between sensors and control elements can create havoc in a control system. Also, you want your ambient temperature to be read instantly if possible. There is a lot of lag in temperature control: a small distance could delay a reading by 5-15 minutes (i.e., it takes 5-15 minutes for heat to travel 1 cm).
I don't see the utility of what you are proposing. If you want this for a PC, then video cards, hard drives, power supplies, sound cards, etc. can create as much heat as the CPU. You cannot generically model a PC; maybe you could with an Apple product. I don't think you will have a lot of success, but you will learn a lot from trying!
