I'm writing code for Pycom's LoPy4 board and have created a BLE service for environmental sensing, which currently has one characteristic, temperature. I'm taking the temperature as a float and attempting to update the characteristic value every two seconds.
When I use a BLE scanner app and try to read the characteristic, I get the value "temperature10862", which is just the characteristic's name and UUID. Yet when I press the indicate button, the value shows the correct temperature string, updating automatically every two seconds.
I'm a bit confused overall. Is this a problem with my code on the Pycom device, or am I simply misunderstanding what a BLE read is supposed to be? The temperature values are clearly being updated on the device, so why does the client (the app) only show these values with an indication rather than a read?
I am sorry for any vagueness in the question, but any help or guidance would be appreciated.
(Screenshots: Read Attempt, Indicate Attempt)
Returning "temperature10862" as a read response is obviously incorrect. Sending the temperature as a string is in this case also incorrect, since you use the Bluetooth SIG-defined characteristic https://www.bluetooth.com/xml-viewer/?src=https://www.bluetooth.com/wp-content/uploads/Sitecore-Media-Library/Gatt/Xml/Characteristics/org.bluetooth.characteristic.temperature.xml. According to that the value should consist of a signed 16-bit integer in units of 0.01 degrees Celcius.
If you look at https://www.bluetooth.com/xml-viewer/?src=https://www.bluetooth.com/wp-content/uploads/Sitecore-Media-Library/Gatt/Xml/Services/org.bluetooth.service.environmental_sensing.xml, you will see that it's mandatory to support Read and optional to support Notifications. Indications, however, are not permitted. So you should change your indicate property to notify instead.
The value sent should be the same regardless of whether it is sent as a notification or as a read response.
Be sure you read the Environmental Sensing specs and follow the rest of the GATT service structure.
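For what it's worth, a minimal MicroPython sketch of that setup might look something like this. I'm assuming the standard Pycom network.Bluetooth API here, and read_temperature() is just a placeholder for however you read your sensor:

```python
import struct
import time
from network import Bluetooth

def read_temperature():
    # Placeholder for your actual sensor read; returns degrees Celsius as a float
    return 21.34

bluetooth = Bluetooth()
bluetooth.set_advertisement(name='LoPy4', service_uuid=0x181A)
bluetooth.advertise(True)

# Environmental Sensing service (0x181A) with the Temperature characteristic (0x2A6E)
srv = bluetooth.service(uuid=0x181A, isprimary=True)
temp_char = srv.characteristic(uuid=0x2A6E,
                               properties=Bluetooth.PROP_READ | Bluetooth.PROP_NOTIFY,
                               value=struct.pack('<h', 0))

while True:
    temp = read_temperature()
    # sint16, little endian, in units of 0.01 degrees Celsius (21.34 -> 2134)
    temp_char.value(struct.pack('<h', round(temp * 100)))
    time.sleep(2)
```

The important part is that the same 2-byte value is what gets returned for a read and what gets pushed out as a notification.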
While I know how to use Mix_OpenAudio(), I wanted to understand the Mix_OpenAudioDevice() function.
It takes several arguments, and the device name is one of them.
So I want to know how we can find out what the device name is.
The docs say we can use a function, SDL_GetAudioDeviceName().
But how would we know which audio device to choose on any given system?
Or is this function only for working with specific audio hardware, such as Realtek?
From how I understand it, Mix_OpenAudio() already uses Mix_OpenAudioDevice() internally, just passing a NULL value for the device parameter (which then defaults to whatever the system uses for sound). The only reason you would need to specify an actual device in that function is if you expect your audio data to be in a specific format, in which case you should already know which device you want.
From the docs: (link)
If you aren't particularly concerned with the specifics of the audio device, and your data isn't in a specific format, the values you use here can just be reasonable defaults.
This function allows you to select specific audio hardware on the system with the device parameter. If you specify NULL, SDL_mixer will choose the best default it can on your behalf (which, in many cases, is exactly what you want anyhow). SDL_mixer does not offer a mechanism to determine device names to open, but you can use SDL_GetNumAudioDevices() to get a count of available devices and then SDL_GetAudioDeviceName() in a loop to obtain a list. If you do this, be sure to call SDL_Init(SDL_INIT_AUDIO) first to initialize SDL's audio system!
As the doc says there, SDL_GetNumAudioDevices() gives you a count of devices, which lets you loop through SDL_GetAudioDeviceName() and see what exists on the machine.
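For example, the enumeration loop is just this (shown with the PySDL2 bindings, which mirror the C function names one-to-one; in C/C++ it's the same calls):

```python
import sdl2

# SDL's audio subsystem must be initialized before querying devices
sdl2.SDL_Init(sdl2.SDL_INIT_AUDIO)

# iscapture = 0 lists playback (output) devices; pass 1 for recording devices
count = sdl2.SDL_GetNumAudioDevices(0)
for i in range(count):
    name = sdl2.SDL_GetAudioDeviceName(i, 0)  # returns bytes in the Python bindings
    print(i, name.decode('utf-8'))

sdl2.SDL_Quit()
```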
This gives you more control over your audio and can save CPU time, since the data doesn't have to be converted for the device. You must also have that device already opened.
Also a link to the SDL2 docs.
Hope this helps explain that function.
I'm volunteering at a uni lab, and I was tasked with removing the dependency on LabVIEW (among other things).
The only problem there for me is the VISA resource. I have no clue (and can't seem to figure out) what exactly the format of the data being sent is.
The VISA buffer seems to get a string, but I've been told that what's being sent is just numbers (0-255), which makes sense, except for the fact that the buffer receives a string.
When I looked at the COM port using MAX, I saw that there's a termination character on write only (which does make sense, given that the device isn't meant to send any data back).
The baud rate on the COM port also says 96,000, while the block diagram inputs a higher number when initializing the VISA resource (though I didn't check it through MAX after running the program, so it may just stay at the default until I run it).
The device also doesn't respond to an *IDN? query (it times out). I hope that's not a problem since, as mentioned, the device isn't meant to send data back, but I'm assuming whatever chip implements the VISA protocol on that side should still respond. pyVISA throws no errors (even with logging enabled), and any attempt to write just gives me success code 0.
All in all, short of debugging LabVIEW to see exactly what's being fed to the buffer (which I haven't done yet - as a volunteer I'm not sure I'm even entitled to a LabVIEW license on my laptop), I'm at a loss as to how to get all the information I need to imitate what LabVIEW is doing with pyVISA. Right-clicking on the VISA resource and looking at its properties is of little help.
Note: I'm using pyVISA-py as a backend for pyVISA, since it seems I would also need a license for NI's VISA drivers.
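For reference, this is roughly what my Python side looks like (the resource name, baud rate and payload below are placeholders rather than the real values from the setup):

```python
import pyvisa

# '@py' selects the pyVISA-py backend instead of NI's VISA drivers
rm = pyvisa.ResourceManager('@py')
inst = rm.open_resource('ASRL3::INSTR')  # placeholder COM/serial resource name

inst.baud_rate = 115200           # placeholder; whatever the LabVIEW VISA init uses
inst.write_termination = '\r\n'   # termination character on write only
inst.read_termination = None

# Sending raw bytes (values 0-255) instead of a text string
inst.write_raw(bytes([0x01, 0x02, 0xFF]))
```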
I built a small game controller for the Z1.
I have a process reading values from a Joystick sensor. It works fine.
Then, I added a second process, reading the value of the battery sensor every 5 minutes. But it makes the Joystick stop working: the value does not update anymore!
I found a workaround: when I have to read the value of the battery, I deactivate the phidget_sensor, activate the battery_sensor, read the value and then deactivate the battery_sensor and reactivate the phidget_sensor.
But I would like to know why I cannot have both sensors activated at the same time.
Thanks
Comes from Here.
The ADC is the "analogue to digital converter"; basically, it is the component that provides the voltage signal level of an analogue sensor, which can then be translated into a meaningful value.
What happens is that the battery sensor driver and the phidget driver each configure the ADC on their own when they start, thus overwriting each other's ADC configuration.
The expected use of both of these components is actually how you are already using them: enable, measure, then disable. This way you ensure the ADC is at all times configured the way your application expects. If you want to have this done in a single operation, then I'm afraid you will probably need to modify the phidget driver to include it.
I hope this is the answer you expected, since you were asking why this happens.
RFID readers switch between antennas when using multiple antennas. The reader runs one antenna while the others sleep, switching through them one by one. The switching is fast, so running one antenna at a time doesn't matter. According to my observations, the time for each switch is 1 second.
(After some time I realised this 1 second applies only to the Motorola FX7500. Most other readers do it the right way, lightning fast, in milliseconds.)
That is what I know so far.
Now, in my specific application I need this procedure to run faster, like 200ms instead of 1s.
Is this value changeable? If so, which message and parameter in LLRP can modify this value?
Actually the 1-second problem is with the Motorola FX7500 reader. By examining the LLRP messages that Motorola's own library exchanges with the FX7500, I discovered that there are vendor-specific parameters that can be set via the custom extension fields of LLRP. These parameters and settings can be found in the Motorola readers' software guide. This switch time is one of those vendor-specific parameters; it's not a parameter of generic LLRP. A piece of code generating the LLRP message with the custom extension in the proper format solved my issue.
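In case it helps anyone else, the custom extension is just a standard LLRP Custom Parameter (TLV type 1023) carrying a vendor ID, a vendor-defined subtype, and the payload. A rough sketch of how such a parameter is encoded is below; the vendor ID, subtype and value here are placeholders, the real numbers come from Motorola's software guide:

```python
import struct

def build_custom_parameter(vendor_id, subtype, payload):
    # LLRP TLV header: 6 reserved bits + 10-bit type (1023 = Custom Parameter),
    # then a 16-bit length covering the whole parameter, followed by the
    # 32-bit vendor identifier (IANA PEN) and the 32-bit parameter subtype.
    body = struct.pack('!II', vendor_id, subtype) + payload
    return struct.pack('!HH', 1023, 4 + len(body)) + body

# Placeholders only: the real vendor ID, subtype and value are in Motorola's guide
param = build_custom_parameter(vendor_id=0, subtype=0,
                               payload=struct.pack('!H', 200))  # e.g. 200 ms
```

The resulting bytes then go into the custom extension field of the relevant LLRP message.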
I've never worked with Bluetooth before. I have to send data via BLE, and I've found there is a limit of 20 bytes per chunk.
The sender is an Arduino and the receiver could be either an Android app or a Node.js app on a PC.
I have to send 9 values stored as floats, so 4 bytes * 9 = 36 bytes, which means I need 2 chunks to send all my data via BLE. The receiving side needs both chunks to process them. If some data is lost, I don't care.
I'm no expert in network protocols, and I think I have to give each message an incremental timestamp so that the receiver can glue together the two chunks with the same timestamp, or discard the last one if a newer timestamp arrives. But I'm not sure how to do a checksum, whether I really need one, whether I really have to care about it, or if - for a simple beta version of my system - I can ignore all of those problems.
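To make it concrete, this is roughly the framing I have in mind (just a sketch of the byte layout, not tested; the one-byte sequence number and the 19-byte split are my own choices, and Python here is only for illustration since the sender is the Arduino):

```python
import struct

CHUNK_PAYLOAD = 19  # 20-byte BLE chunk minus 1 byte for the sequence number

def make_chunks(values, seq):
    # Pack the 9 floats (36 bytes) and split into chunks of at most 20 bytes,
    # each prefixed with the same sequence byte.
    payload = struct.pack('<9f', *values)
    return [bytes([seq & 0xFF]) + payload[i:i + CHUNK_PAYLOAD]
            for i in range(0, len(payload), CHUNK_PAYLOAD)]

def reassemble(chunks):
    # Glue the two chunks only if they carry the same sequence byte;
    # otherwise drop them and wait for the next full pair.
    if len(chunks) == 2 and chunks[0][0] == chunks[1][0]:
        return struct.unpack('<9f', chunks[0][1:] + chunks[1][1:])
    return None
```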
Can anyone give me some advice? Maybe examples of similar situations handled with BLE communication?
You can get around the size limitation using the "Read Blob Request" of ATT. It allows you to read an attribute and also give an offset. So you can use it to read the attribute with an offset of 0; if there's more than ATT_MTU bytes, you can request again with the offset at ATT_MTU*1, and if there's still more, at ATT_MTU*2, etc. (You can read about it in 3.4.4.5 of the Bluetooth v4.1 specification; it's in the 4.0 spec too, but I don't have that in front of me right now.)
If the value changes between requests, I'm not sure how you could go about detecting such a change. You could have the attribute send notifications when there's a change, to interrupt the process in case the value changes in the middle of reading it.