Is there any way in LLRP to configure antenna switches? - rfid

RFID readers that use multiple antennas switch between them: the reader drives one antenna while the others sit idle, cycling through them one by one. This is normally fast enough that driving only one antenna at a time doesn't matter. According to my observations, each switch takes 1 second.
(After some time I realised this 1 second applies only to the Motorola FX7500. Most other readers do it the right way, switching in milliseconds.)
That is what I know so far.
Now, in my specific application I need this switching to run faster, say 200 ms instead of 1 s.
Is this value changeable? If so, which message and parameter in LLRP can modify this value?

As it turned out, the 1-second problem is specific to the Motorola FX7500 reader. By examining the LLRP messages that Motorola's own library exchanges with the FX7500, I discovered that there are vendor-specific parameters which can be set through LLRP's custom extension fields. These parameters and their settings are documented in Motorola's reader software guide. The switch time is one of these vendor-specific parameters; it is not part of generic LLRP. Generating an LLRP message that includes the custom extension in the proper format solved my issue.
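For reference, this is roughly what building such a parameter looks like. The sketch below only shows the generic LLRP Custom parameter encoding (TLV type 1023: vendor PEN, parameter subtype, then vendor payload); the vendor ID, subtype and payload layout shown are placeholders, so take the real values from the Motorola/Zebra reader software guide.

import struct

def llrp_custom_parameter(vendor_id, subtype, payload):
    # LLRP Custom parameter: 16-bit type (1023), 16-bit length in bytes
    # (including this header), 32-bit vendor PEN, 32-bit parameter
    # subtype, then the vendor-specific payload.
    length = 2 + 2 + 4 + 4 + len(payload)
    return struct.pack('>HHII', 1023, length, vendor_id, subtype) + payload

# Placeholder values -- NOT the real Motorola identifiers.
VENDOR_PEN = 12345             # vendor's IANA Private Enterprise Number
SWITCH_TIME_SUBTYPE = 1        # vendor-specific parameter subtype
payload = struct.pack('>H', 200)   # e.g. a 200 ms switch/dwell time

custom = llrp_custom_parameter(VENDOR_PEN, SWITCH_TIME_SUBTYPE, payload)
# 'custom' can then be embedded in the Custom extension point of the
# relevant LLRP message (e.g. SET_READER_CONFIG) before sending it.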

Related

What is the Device parameter in Mix_OpenAudioDevice (SDL2)

While I know how to use Mix_OpenAudio(), I wanted to know what the Mix_OpenAudioDevice() function is for.
It takes several arguments, and the device name is one of them.
So I want to know how we can tell what the device name is.
The docs say we can use a function, i.e. SDL_GetAudioDeviceName().
But how would we know which audio device to choose on every system?
Or is this function only for working with specific audio hardware, such as Realtek?
From how I understand it, Mix_OpenAudio() already calls Mix_OpenAudioDevice(), just with a NULL value for the device parameter (which then defaults to whatever the system uses for sound). The only reason you would need to specify an actual device in that function is if you expect your audio data to be in a specific format, in which case you should already know what the device is.
From the docs: (link)
If you aren't particularly concerned with the specifics of the audio device, and your data isn't in a specific format, the values you use here can just be reasonable defaults.
This function allows you to select specific audio hardware on the system with the device parameter. If you specify NULL, SDL_mixer will choose the best default it can on your behalf (which, in many cases, is exactly what you want anyhow). SDL_mixer does not offer a mechanism to determine device names to open, but you can use SDL_GetNumAudioDevices() to get a count of available devices and then SDL_GetAudioDeviceName() in a loop to obtain a list. If you do this, be sure to call SDL_Init(SDL_INIT_AUDIO) first to initialize SDL's audio system!
As the doc says there, SDL_GetNumAudioDevices() gives you a count you can use to loop over SDL_GetAudioDeviceName() and see what exists on the machine.
This gives you more control over your audio and can save CPU time by avoiding conversion of the data for that exact device. You must also have that device already opened.
Also a link to the SDL2 docs.
Hope this helps explain that function.
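To make the enumeration loop concrete, here is a minimal sketch using the PySDL2 bindings (the calls mirror the C API one-to-one; I'm assuming PySDL2's sdlmixer module wraps Mix_OpenAudioDevice, which SDL_mixer added in 2.0.2):

import sdl2
import sdl2.sdlmixer as sdlmixer

sdl2.SDL_Init(sdl2.SDL_INIT_AUDIO)

# Enumerate playback devices (second argument: 0 = output, 1 = capture).
count = sdl2.SDL_GetNumAudioDevices(0)
names = [sdl2.SDL_GetAudioDeviceName(i, 0).decode('utf-8') for i in range(count)]
print(names)

# Open a specific device by name, or pass None and let SDL_mixer pick
# a sensible default.
device = names[0].encode('utf-8') if names else None
sdlmixer.Mix_OpenAudioDevice(44100, sdl2.AUDIO_S16SYS, 2, 1024,
                             device, sdl2.SDL_AUDIO_ALLOW_ANY_CHANGE)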

python-can: keep getting the same message (ID)

I'm more of a HW engineer who's currently trying to use Python at work. What I want to accomplish with Python is to read the CAN-FD output from the DUT and use it for monitoring in the measurement setup. But I don't think I'm getting the correct result, because it keeps showing the same message (ID) even though there are many more on the bus. Based on my understanding from other examples, this should show the stream of all messages, since no filters were set. Has anyone had a similar experience or can help me solve this issue?
import can

def _get_message(msg):
    return msg

bus = can.interface.Bus(bustype='vector', app_name='app', channel=1, bitrate=500000)
buffer = can.BufferedReader()
can.Notifier(bus, [_get_message, buffer])

while True:
    msgs = bus.recv(None)
    print(msgs)
You don't say what you expected to see on your bus, but I assume there are a lot of different messages on it.
Simplify your problem to start with - set up a bus which has only a single message on it at a low rate. You might have to write some code for that, or you might get some tools which can send periodic messages with counters in for example. Make sure you can capture that correctly first. Then add a second message at a faster rate.
You will probably learn a selection of useful things from these exercises, which means that when you go back to your full system you'll have more success, or at least more ideas about what to debug :)
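If you don't have a tool at hand to generate such traffic, a minimal way to practise is python-can's built-in virtual interface, which needs no hardware at all (the 'vector' settings from the question don't apply here; the channel name and IDs below are arbitrary):

import can

# Two ends of the same virtual channel: one side sends a single
# periodic frame, the other side just prints what it receives.
sender = can.interface.Bus(bustype='virtual', channel='test')
receiver = can.interface.Bus(bustype='virtual', channel='test')

msg = can.Message(arbitration_id=0x123, data=[0, 1, 2, 3], is_extended_id=False)
task = sender.send_periodic(msg, period=0.5)   # one frame every 500 ms

for _ in range(5):
    print(receiver.recv(timeout=1.0))

task.stop()
sender.shutdown()
receiver.shutdown()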
First of all, thanks for the answers and comments. The root cause was an incorrect HW configuration of the interface. I had assumed that if the HW configuration were wrong in the first place there would be no output at all, but it turned out that is not the case. After configuring it properly, I could see the output stream I expected.

ESP32: BLE transmission speed is very slow

I am trying to build an Android app that interfaces with the ESP32 using BLE. I am using the RxBluetoothKotlin library from Vincent Masselis for the Android side. For the ESP32 side, I am using the default Kolban libraries that are included in the Arduino IDE. My phone is a OnePlus 5T and my ESP32 is a MH ET Live ESP32DevKIT. My Android app can be found here, and my ESP32 program here.
The whole system works pretty much perfectly for me in terms of pure functionality. That is to say, every button does what it's supposed to do, and I get the exact behaviour I had expected to get. However, the communication itself is very slow. Around 200 bytes/second. My test button in the Android app requests a bunch of text data from the ESP32, and displays this in a dialog. It also lists a number which represents the time between request and reception in milliseconds. Using this, I get around 2 seconds for 440 bytes of data. When I send less data, the time decreases approximately linearly with data size. 40 bytes of data will take around 200ms, and 20 bytes or under typically takes less than 100ms.
This seems rather slow to me. From what I understand, I should be able to at least get a few kilobytes per second. I have tried to check the speed using nRF Connect, but I get the same 2 seconds timespan for my data transfer. This suggests that the problem is not in my app, since I also have it with a completely different app. I also put the code in my main loop inside of callbacks instead (which I probably should have done in the first place), but this didn't change things at all. I have tried taking the microcontroller and my phone to a few different locations, hoping to eliminate interference. I have tried to mess with BLEDevice::setPower and BLEDevice::setMTU, as well as setting RxBluetoothGatt.requestMtu(500) on the Android side. Everything so far seems to have had little to no effect. The only thing that did anything, was adding the line "pServer->updatePeerMTU(0,500);" in my loop during the connection phase. This caused the first 23 bytes of data to be repeated whenever I pressed the test button in my app, and made the data transfer take about 3 seconds. If I'm lucky, I can get maybe a bit under 1.8 seconds for 440 bytes, but this is a very small change when I'm expecting an order of magnitude of difference, and might even be down to pure chance rather than anything I did.
Does anyone have an idea of how to increase my transfer speed?
The data transmission speed is mainly influenced by the Bluetooth LE connection interval (between 7.5 ms and 4 seconds) and is negotiated between the master (central unit) and the peripheral device. The master establishes a connection with a parameter set and the peripheral can propose to change this parameter set. In the end, however, the central unit decides which parameter set is to be used.
However, the Bluetooth connection interval cannot be changed directly by an Android application, which normally acts in the central role. Instead, the application can request a connection priority, which is known to influence the connection interval.
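As a rough illustration of why the interval dominates throughput, here is some back-of-the-envelope arithmetic (the numbers are assumptions for the sake of the example, not measured values from the question's setup):

# Assume the default ATT_MTU of 23 bytes (20 bytes of payload per
# notification) and a single packet sent per connection event.
conn_interval_s = 0.050        # 50 ms, a typical default on Android
payload_per_event = 20         # bytes
print(payload_per_event / conn_interval_s)   # -> 400.0 bytes/s

# Requesting CONNECTION_PRIORITY_HIGH (interval around 7.5-15 ms) and
# negotiating a larger MTU raises both factors, so the same arithmetic
# lands in the tens of kilobytes per second.
conn_interval_s = 0.0075
payload_per_event = 180        # e.g. after a successful MTU exchange
print(payload_per_event / conn_interval_s)   # -> 24000.0 bytes/s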

How Long is RFID Tag Session 2 Persistence?

When doing scanning with passive RFID tags, you can set the SESSION to '2' in order for the tag state of 'B' to persist for "an indefinite amount of time" even when it is not being energized by the scanner, according to the standards. Your tag will then not be visible to the scanner until this indefinite amount of time expires.
My question is, does anyone have any idea what the maximum amount of time is for RFID tags? I'm sure it's different for different tag manufacturers, etc. However, are we talking seconds, minutes, hours, or even days? I don't want to keep seeing the same tags over and over again while doing a scan in the storeroom, but at the same time, I don't want the tags to be hidden if they need to be scanned again at a later time.
The answer is: it depends. Please note that the standard says 'indefinite when powered'. When powered, it really is indefinite. When not powered, the standard only requires it to be longer than 5 seconds. For most modern tags it is typically less than 30 s, depending of course on environmental conditions.
About the definition of 'powered': note that this power can originate not only from the reader you are using to interrogate the tags, but from any RFID reader, or even from any other radio device transmitting at the same frequency.
To circumvent this, you can use a SELECT statement to revert the session flag back from B to A.

low latency sounds on key presses

I am trying to write an application (I'm a GUI first-timer) for my son, who has autism. There is a video player in the top half and a text entry area in the bottom. When letters are typed, sounds are produced to mimic the words in the video.
There have been other posts on this site about playing sounds on key presses using gstreamer as a system call. I have also tried libcanberra, but both seem to have significant delays between sounds. I can write the app in Python or C, but will likely do at least some of it in C.
I also want to mention that the video portion is being played by gstreamer. I tried to create two instances of gstreamer to avoid expensive system calls, but the audio instance seemed to kill the app when called.
If anyone has any tips on creating faster responding sounds I would really appreciate it.
You can upload a raw audio sample directly to PulseAudio, so there is no decoding and (perhaps) no extra context switches, by using the following function from libcanberra:
http://developer.gnome.org/libcanberra/unstable/libcanberra-canberra.html#ca-context-cache
The next ca_context_play() will use it.
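If the rest of the app ends up in Python, the same idea can be sketched with ctypes; this is a rough, untested outline (the library soname, sample path and event id are assumptions, and error codes are ignored):

import ctypes

ca = ctypes.CDLL("libcanberra.so.0")

ctx = ctypes.c_void_p()
ca.ca_context_create(ctypes.byref(ctx))

# Cache the decoded sample inside PulseAudio under an event id, so
# later plays skip decoding entirely. The property list is NULL-terminated.
ca.ca_context_cache(ctx,
                    b"event.id", b"key-click",
                    b"media.filename", b"/usr/share/sounds/freedesktop/stereo/bell.oga",
                    None)

# On each key press: plays the cached sample via the sound server.
ca.ca_context_play(ctx, 0, b"event.id", b"key-click", None)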
However, the biggest problem you'll encounter in this scenario (with simultaneous video playback) is that the audio device might be configured with a large latency in PulseAudio (up to 1/2 s or more for normal playback). It may be reasonable to file a bug against libcanberra to support a LOW_LATENCY flag, as it currently doesn't attempt to minimize delay for sound events AFAIK. That would be great to have.
GStreamer pulsesink could probably get low latency too (it has some properties for that), but I am afraid it won't be as lightweight as libcanberra, and you won't be able to cache a sample for instance. Ideally, GStreamer could also learn to cache samples, or pre-fill PulseAudio...
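For the GStreamer route, a minimal Python sketch might look like the following; buffer-time and latency-time are the usual knobs on audio sinks (values in microseconds), but the exact numbers here are guesses and the file name is a placeholder:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# Small buffer/latency targets so a key-press sound starts quickly.
pipeline = Gst.parse_launch(
    'filesrc location=key-click.wav ! wavparse ! audioconvert ! '
    'audioresample ! pulsesink buffer-time=20000 latency-time=10000'
)

def play_key_sound():
    # Restart the prebuilt pipeline from the beginning on each key press.
    pipeline.set_state(Gst.State.NULL)
    pipeline.set_state(Gst.State.PLAYING)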
