BCM socket cyclic reception in python - python-3.x

I'm trying to monitor a CAN bus through SocketCAN with Python, taking the can4python package as a reference.
Since I want to continuously acquire data from the CAN socket, I'm thinking of using BCM sockets, since the BCM handles this at the kernel level. In the can4python package I can only find periodic CAN transmission, but no periodic CAN frame reception.
Is it possible to do this with can4python? If not, is it possible with BCM sockets in general?
Thank you for your help.

Just create a thread in Python that continuously reads CAN frames from the socket. If there are CAN frames you're not interested in, just set up a CAN filter so that the SocketCAN subsystem delivers only the required frames.
The can4python project seems to be abandoned. Take a look at the python-can project, which is actively maintained.
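A minimal sketch of the thread-plus-filter approach, using only the standard-library socket module (Python supports AF_CAN on Linux). The CAN ID 0x123 and the interface name vcan0 are placeholders, not anything from the original question:

```python
import socket
import struct
import threading

# Layout of struct can_frame from <linux/can.h>: u32 can_id, u8 dlc, 3 pad bytes, 8 data bytes
CAN_FRAME_FMT = "=IB3x8s"
CAN_FRAME_SIZE = struct.calcsize(CAN_FRAME_FMT)

def make_can_filter(can_id, can_mask):
    """Pack one struct can_filter entry (u32 can_id, u32 can_mask)."""
    return struct.pack("=II", can_id, can_mask)

def rx_loop(channel, filters, on_frame):
    """Read CAN frames forever, delivering only filtered IDs to on_frame()."""
    sock = socket.socket(socket.AF_CAN, socket.SOCK_RAW, socket.CAN_RAW)
    # The kernel drops everything that doesn't match the filter list, so the
    # Python thread only wakes up for frames we actually care about.
    sock.setsockopt(socket.SOL_CAN_RAW, socket.CAN_RAW_FILTER, b"".join(filters))
    sock.bind((channel,))
    while True:
        raw = sock.recv(CAN_FRAME_SIZE)
        can_id, dlc, data = struct.unpack(CAN_FRAME_FMT, raw)
        on_frame(can_id, data[:dlc])

if __name__ == "__main__":
    # 0x123 / 0x7FF / "vcan0" are placeholders -- use your own ID, mask and interface.
    filters = [make_can_filter(0x123, 0x7FF)]
    t = threading.Thread(target=rx_loop, args=("vcan0", filters, print), daemon=True)
    t.start()
    t.join()
```

python-can wraps the same mechanism (bus filters plus a Notifier/Listener pair), so if you adopt it you get this loop for free.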

Related

Actual Python 3.10 async implementation of serial port/uart communication?

Hi folks.
I'm writing a kind of scheduler that sends commands to, and receives responses from, a hardware controller connected to a Linux-based system via a serial (UART) port. Currently it is an FT232RL USB <-> UART bridge.
The scheduler itself is asynchronous and written using the built-in asyncio Python module; no code in it may block the main loop. But now I'm facing the need to actually send data to the controller via UART and receive data back from it. The "controller" will actually execute commands and perform actions on physical devices. The problem is that there is no standard Python module for communicating with external devices over serial ports, especially not one that works the way asyncio does.
Is there any actively maintained module, supported by Python 3.10, that will let me use asyncio for non-blocking data transmission via UART?
pyserial-asyncio seems to be a good candidate. Any other modules to take a look at?
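For what it's worth, a small sketch of how pyserial-asyncio slots into an asyncio scheduler. The newline-terminated framing, the "STATUS?" command, and the /dev/ttyUSB0 port name are all assumptions for illustration, not part of the question; pyserial-asyncio's `open_serial_connection()` returns the same StreamReader/StreamWriter pair as asyncio's own networking APIs:

```python
import asyncio

# Hypothetical framing: the controller is assumed to speak newline-terminated
# ASCII commands and responses -- adjust to your real protocol.
def frame_command(cmd: str) -> bytes:
    return cmd.encode("ascii") + b"\n"

async def send_command(reader, writer, cmd, timeout=1.0):
    """Send one command and await its one-line response without blocking the loop."""
    writer.write(frame_command(cmd))
    await writer.drain()
    line = await asyncio.wait_for(reader.readline(), timeout)
    return line.rstrip(b"\n").decode("ascii")

async def main():
    # Requires the third-party pyserial-asyncio package; the port is a placeholder.
    import serial_asyncio
    reader, writer = await serial_asyncio.open_serial_connection(
        url="/dev/ttyUSB0", baudrate=115200)
    print(await send_command(reader, writer, "STATUS?"))

if __name__ == "__main__":
    asyncio.run(main())
```

Because the reader/writer pair is ordinary asyncio streams, `send_command()` can be awaited from any task in your scheduler without blocking the event loop.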

Constant Delay in Bluetooth Low Energy (BLE) data transmission

I am trying to evaluate the suitability of some different wireless interfaces for our project on two Raspberry Pi 4 boards, and currently I'm evaluating Bluetooth Low Energy. For that I have written a Central and a Peripheral device application with the Qt framework (5.15). In my case the latency between messages is important because of some security aspects. The message size of each command is around 80-100 bytes. In one of my tests I sent 80-byte commands every 80ms. Ideally the messages should be received on the other device at 80ms intervals as well. For the LAN (TCP) interface this test works well.
For Bluetooth Low Energy I observed that messages sent from Peripheral to Central work quite well, and I measured no significant delay. I got different results for the Central-to-Peripheral direction: here I received the messages at intervals of 100ms to 150ms, quite consistently. It seems there can't be much magic behind this, so is there a plausible explanation? I tested it with a Python script as well and observed the same results, so the Qt implementation shouldn't be the problem.
During research I found out that the connection interval may influence this, but in Qt the QLowEnergyConnectionParameterRequest (QLowEnergyConnectionParameters Class | Qt Bluetooth 5.15.4) doesn't work for me. Is there any command with which I can set the connection interval for test purposes on the Linux command line?
Kind regards,
BenFR
It is possible that your code is slower from Central to Peripheral because WRITE is used instead of WRITE WITHOUT RESPONSE. The difference is that WRITE waits for an acknowledgement, slowing the communication down, while WRITE WITHOUT RESPONSE behaves much like notifications/indications in that there's no ACK at the ATT layer. You can change this by changing the write mode in your application and ensuring that the peripheral's characteristic supports WriteNoResponse.
Regarding changing the connection interval, the change needs to be accepted by the remote side in order to take effect. In other words, if you are requesting the connection parameter change from the peripheral, then the central needs code to receive this connection parameter change request and accept it.
Have a look at the links below for more information:
How does BLE parameter negotiation work
Understand BLE connection intervals and events
The different types of BLE write
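Since you mentioned reproducing the measurement with a Python script, here is a sketch of the unacknowledged write using the third-party bleak library instead of Qt (a deliberate substitution for illustration; the address, characteristic UUID, and default-MTU chunking are assumptions). Passing `response=False` to `write_gatt_char()` issues WRITE WITHOUT RESPONSE, provided the characteristic advertises WriteNoResponse:

```python
import asyncio

def att_chunks(payload: bytes, mtu: int = 23):
    """Split a payload into ATT write payloads (MTU minus the 3-byte ATT header)."""
    size = mtu - 3
    return [payload[i:i + size] for i in range(0, len(payload), size)]

async def send_fast(address, char_uuid, payload):
    # Requires the third-party bleak package; address and UUID are placeholders.
    from bleak import BleakClient
    async with BleakClient(address) as client:
        for chunk in att_chunks(payload, client.mtu_size):
            # response=False -> WRITE WITHOUT RESPONSE: no ATT-level ACK, so
            # writes are not serialized to one round trip per connection event.
            await client.write_gatt_char(char_uuid, chunk, response=False)

if __name__ == "__main__":
    asyncio.run(send_fast("AA:BB:CC:DD:EE:FF",
                          "0000ffe1-0000-1000-8000-00805f9b34fb",
                          b"\x00" * 80))
```

Note that with the default 23-byte ATT MTU an 80-byte command needs four writes, so negotiating a larger MTU also helps the latency you are measuring.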

Simple SIP call receiver and recorder

Is there any Linux library or Docker container for:
receiving simultaneous VoIP SIP calls, and for each call,
recording the incoming audio stream to a distinct file,
playing silence as the outgoing audio stream?
Essentially, a simple SIP call receiver and recorder, accepting simultaneous calls and playing silence.
Drachtio did not work out of the box. Is there an easier alternative? Maybe a very simple softphone would be fine to start with? A GStreamer pipeline?
If you're familiar with C# and can install .NET on your Linux machine, then my library can do that. Here's a single-call example of recording an incoming call; it will need to be enhanced if you want to coordinate multiple calls.

What is the Windows "Event Queue Bottleneck" and what steps should I take to prevent that from happening?

I have a multithreaded DirectShow app that uses sockets to transfer audio to and from Skype. It is written in Delphi 6 using the DSPACK component suite. The socket components used to trade audio data with Skype are from the ICS socket library. The ICS components use non-blocking sockets driven by a Windows message loop to do their work, instead of using blocking sockets the way the Indy or Synapse components do. I have each socket on its own real-time-critical worker thread. In a Stack Overflow post I made about ICS sockets and background threads, a comment was made about switching to Indy to avoid the Windows "event queue bottleneck":
How can I push a TWSocket's OnDataAvailable() event to a background thread in my Delphi 6 application?
Can someone tell me what that is and what steps I need to take to prevent/avoid that problem?
Note: I have two ICS sockets handling the bidirectional (duplex) audio connection with Skype, one for sending audio to Skype and another for receiving audio from it. My audio buffers are sized to hold 100 milliseconds of audio data, so data is sent to Skype 10 times a second and, conversely, received from Skype 10 times a second. Each socket has its own thread, set to real-time-critical priority, to do its work.
The reason for my question is this: the app now works reasonably well. Before I fixed a number of bugs, I would see incrementally increasing delays in the audio send and receive streams. Now I can run for several minutes without any audible delays in the audio streams, but after that (the time of onset is extremely variable) some small delays start to creep in. I have noticed the following symptoms:
If I have a large number of OutputDebugString() messages, the delays start after only 20-30 seconds and quickly get worse from there. Removing the OutputDebugString() messages alleviates the problem:
How can I keep a large amount of OutputDebugString() calls from degrading my application in the Delphi 6 IDE?
Switching from my app back to the Delphi IDE injects a delay into the audio streams. Apparently something about the switch gums up the works, and code I have that monitors how long each critical section is held pops up a warning that one of the critical sections was held for a long time (longer than 100 milliseconds, which is obviously a problem).
Note: The two threads I use to send command messages strings to Skype and receive status and result message strings from Skype also are GetMessage() based because interfacing with Skype is done synchronously with the Windows SendMessage() API call using WM_COPYDATA messages.
Given the frequency of audio delivery, how likely is it that the problems are due to a Windows "event queue bottleneck", or is it more likely an inefficiency in my thread synchronization scheme?

Netlink user-space and kernel-space communication

I am learning embedded systems programming using Linux as my main platform, and I want to create a Device Event Management Service. This service is a user-space application/daemon that will detect whether a connected hardware module has triggered an event. But my problem is that I don't know where I should start.
I read about the Netlink mechanism for userspace-kernelspace communication, and it seems like a good idea, but I'm not sure it is the best solution. I also read that the udev device manager uses Netlink to wait for "uevent" messages from kernel space, but it is not clear to me how to do that.
I read about polling sysfs, but it seems that polling the filesystem is not a good idea.
Which implementation do you think I should use in my service? Should I use Netlink (hard, and I have no clue how) or just poll sysfs (not sure it works)?
Thanks
Yes, polling is ill-advised. These resources (the Linux Journal article on Netlink and "Understanding and Programming with Netlink Sockets") make Netlink seem not so hard to use, and there's an example of netlink sockets in Python.
udevtrigger is a great utility for reacting to udev changes.
http://www.linuxjournal.com/article/7356
http://smacked.org/docs/netlink.pdf
http://guichaz.free.fr/misc/iotop.py
http://manpages.ubuntu.com/manpages/gutsy/man8/udevtrigger.8.html
If all you do is wait for an event, you can use sysfs, which will be a lot simpler than Netlink. An example is the GPIO subsystem's edge file.
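To make the Netlink option more concrete, here is a small sketch of listening for kernel uevents the way udev does, using only the standard-library socket module. NETLINK_KOBJECT_UEVENT = 15 is the protocol constant from <linux/netlink.h>; the message layout ("ACTION@DEVPATH" header followed by NUL-separated KEY=VALUE pairs) is the kernel's uevent format:

```python
import os
import socket

NETLINK_KOBJECT_UEVENT = 15  # from <linux/netlink.h>

def parse_uevent(data: bytes) -> dict:
    """Decode a kernel uevent: 'ACTION@DEVPATH' header, then KEY=VALUE pairs."""
    fields = data.split(b"\x00")
    event = {}
    for field in fields[1:]:  # skip the ACTION@DEVPATH header
        if b"=" in field:
            key, _, value = field.partition(b"=")
            event[key.decode()] = value.decode()
    return event

def listen():
    sock = socket.socket(socket.AF_NETLINK, socket.SOCK_RAW, NETLINK_KOBJECT_UEVENT)
    # Group 1 is the kernel's uevent multicast group (the one udev subscribes to).
    sock.bind((os.getpid(), 1))
    while True:
        event = parse_uevent(sock.recv(4096))
        print(event.get("ACTION"), event.get("DEVPATH"))

if __name__ == "__main__":
    listen()
```

Run it and plug in a USB device: you should see add/remove events with the sysfs device path, which is exactly the trigger a device event management daemon needs.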