Python3 - Some of threads stop suddenly without any logs/messages - python-3.x

I am working on a Python project on Raspberry Pi. After the last release, some of our devices have a problem. The project has two parts, camera and BLE. At first these two parts were separate projects running under different systemd services. In the latest release I combined them into one project: I create two processes using the multiprocessing module and create threads under those processes. In the previous version I was just creating threads and starting them. But now some of the threads from the BLE part get stuck at a point and stop working, without any error logs/messages. The BLE scan thread from the BLE process does this in particular. The two separate projects had been running under two systemd services without a problem for a month. I cannot figure out what is causing the problem.
I tried increasing the stack sizes of the threads, and I tried dropping the multiprocessing part and using only threads as in the previous version, but neither solved it. There are no major differences between the versions; the functions, modules, and pip package versions are all the same. In the previous setup, one service ran a 5-thread project and the other ran a 10-thread project. Now I have one service running a Python program with 15 threads.
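For reference, a stripped-down sketch of the layout described above (names and thread bodies are placeholders, not the real project code). One thing worth checking in a setup like this is the process start method: on Linux the default is "fork", and forking a parent that already has running threads or held locks can leave a child silently deadlocked, which matches the "stuck without any logs" symptom.

```python
import threading
import multiprocessing as mp

def worker_loop(name):
    # placeholder for the real camera/BLE thread bodies
    pass

def run_part(name, thread_count):
    # each process starts its own batch of threads, as in the question
    threads = [threading.Thread(target=worker_loop, args=(name,))
               for _ in range(thread_count)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

if __name__ == "__main__":
    # On Linux the default start method is "fork"; if the parent already
    # has running threads (or held locks) when it forks, children can
    # deadlock silently. "spawn" starts each child from a clean state.
    ctx = mp.get_context("spawn")
    camera = ctx.Process(target=run_part, args=("camera", 5))
    ble = ctx.Process(target=run_part, args=("ble", 10))
    camera.start(); ble.start()
    camera.join(); ble.join()
```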
Platform: Raspbian Stretch / Raspbian Buster (both)
Python version: 3.5 (Stretch) / 3.7 (Buster)
Important modules used in camera part: opencv-contrib-python(3.4.3.18), picamera(1.13)
Important modules used in BLE part: bluepy(1.3.0)
Feel free to ask further questions.

Related

How to not stop RedHawk processing even if there is no request from RedHawk-IDE

I use REDHAWK v2.1.0 to implement the AM demodulation part with three components.
Platform --> Xilinx Zynq 7035 (ARM Cortex-A9 x2)
Operating System (OS) --> embedded Linux.
When I connect the RedHawk-IDE on an external PC over Ethernet and display the waveform between the components, an abnormal sound occurs.
If I then disconnect the LAN cable, the AM demodulation processing of REDHAWK inside the ARM ceases.
REDHAWK inside the ARM appears to be waiting for requests from the RedHawk-IDE on the external PC.
From this, it seems that the abnormal noise occurs when requests from the RedHawk-IDE on the external PC are delayed.
How can I keep REDHAWK's AM demodulation processing inside the ARM running, without stopping, while the RedHawk-IDE on the external PC is connected and monitoring the waveform?
Environment is below.
CPU: Xilinx Zynq ARM Cortex-A9, 2 cores, 600MHz
OS: Embedded Linux, kernel 3.14 with the real-time patch
Frame length: 5.333ms (48kHz sampling, 256 samples)
I have seen similar, if not identical, issues when running on an ARM board. Tracking down the exact issue may be difficult; in my experience it hasn't been REDHAWK-specific and has really been an issue with omniORB or its configuration. I believe one of the fixes for me was recompiling omniORB rather than using the omniORB package provided by my OS. (Which didn't make any sense to me at the time, as I used the same flags and build process as the package maintainer.)
First, I would confirm that the issue is specific to ARM: if it's easy enough, set up the same components, waveforms, etc. on a second x86_64 host and validate that the problem does not occur there.
Second, I would try a "quick fix" of setting the omniORB timeouts on the ARM host in the /etc/omniORB.cfg file:
clientCallTimeOutPeriod = 2000
clientConnectTimeOutPeriod = 2000
This sets a 2-second timeout on CORBA interactions, for both the connect portion and the call-completion portion. In the past this has served as a quick fix for me, but it does not address the underlying issue. If this "fixes" it for you, then you've at least narrowed part of the issue down, and you can enable omniORB debugging using the traceLevel configuration option to find which call is timing out. See omniORB's sample configuration file for all options.
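Putting the timeouts and tracing together, a minimal /etc/omniORB.cfg along those lines might look like this (the traceLevel value here is just an example):

```
# /etc/omniORB.cfg -- 2-second CORBA timeouts plus debug tracing
clientCallTimeOutPeriod = 2000
clientConnectTimeOutPeriod = 2000
# raise traceLevel to log call-level detail and see which
# invocation is timing out; 0 disables tracing
traceLevel = 25
```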
If you want to dive into the underlying issue, you'd need to see what the IDE and the framework are doing when things lock up. With the IDE this is easy: find the PID of the Java process and run kill -3 <pid>, and a full stack trace will be printed in the terminal that is running the IDE. This can give you hints as to which calls are locked up. For the framework you'll need to attach GDB to the process in question and have it print the stack trace; you'd have to do some investigation ahead of time to determine which process is locking up.
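For the framework side, the GDB session might look like this (the PID is whatever process you suspect is locked up):

```
# attach to the suspected process by PID
gdb -p <pid>
# inside gdb, print the stack of every thread
(gdb) thread apply all bt
(gdb) detach
(gdb) quit
```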
If it ends up being an issue with the Java CORBA implementation on x86_64 talking to the C++ CORBA implementation on ARM, you could also try launching / configuring / interacting with the ARM board via the REDHAWK Python API from your x86_64 host. This may have better compatibility, since both sides then use the same omniORB CORBA implementation.

share anaconda folder between different Linux installation

On my laptop I have two Linux distributions installed: Fedora 25 and Arch.
To save disk space, I would like to use a single Miniconda instance shared between these two installations.
Question: is such a solution safe, or are there any known problems that can occur in this situation?
Update
It seems that there should be no problem with that: Google Group Thread
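A minimal sketch of the shared setup, assuming Miniconda is installed once at a path both distributions can see (the path below is hypothetical):

```shell
# install once, e.g. from Fedora, into a location both distros mount
bash Miniconda3-latest-Linux-x86_64.sh -b -p /shared/miniconda3

# then add the same line to the ~/.bashrc of BOTH installations:
export PATH="/shared/miniconda3/bin:$PATH"
```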

How can I write CUDA in Visual Studio on Windows and deploy it to Linux?

I'm assisting a professor in setting up a lab for a class in parallel programming. The process will be the following:
A student logs into a virtual machine running Windows 7. This machine has no GPUs available. It has version 7.5 of the CUDA toolkit installed along with Visual Studio 2013. Students are supposed to use Visual Studio to write their CUDA programs/projects.
To test/run these projects, students have remote access to a fairly high-end machine. I don't have physical access to it, but from what I can tell using the command line, it has four NVIDIA Tesla M40s. Students have remote access to this machine via SSH. The issue, though, is that this machine is running Linux (Ubuntu 14.04.5). I'm trying to figure out how to deploy what students write in Visual Studio on Windows to the Linux box with the GPUs. I have limited experience in C, C++, and CUDA. I can work my way around a make file, but specific instructions on this topic (if it's part of the solution) would be appreciated.
I have read this article—Creating CUDA Projects for Linux—which details how to get NVIDIA's sample projects running, but I'm not sure if I can adapt this to work in the given situation.
I'm looking for a simple way for students to write assignments in CUDA, but they also need to be able to run what they write. The reasons that this professor and I prefer Visual Studio are:
It's something many students in this class are familiar with
It handles project architecture fairly well
It offers students a GUI which can help reduce the learning curve (students can focus on CUDA instead of terminal, gcc, etc.—these things are undeniably incredibly useful, but they're not the focus of the class)
If the test machine were running Windows, then students would be able to simply transfer the contents of the debug or release folder for their solutions in Visual Studio (on the development VM) to the test machine and then run the executable. Since there are two different operating systems in play, I don't imagine it will work like this. I know that writing code on Windows and deploying on Linux won't be quite as easy, but I'm hoping there's a feasible solution.
Reconfiguring the setup and having students develop directly on the test machine or creating a Linux VM for development is doable, but should be avoided if at all possible. Reconfiguration would require involving the systems admin team and would delay the process of getting students writing code.
I have researched this and I've come across these questions, but they don't quite apply to this specific case:
How to write programs in C# .NET, to run them on Linux/Wine/Mono?
How should I develop CUDA on OSX and the deploy test on Linux
The direct answer to the question is that it is not really practical to take a VS solution/project and reconstruct it in Linux. It may be possible, but I wouldn't put that burden on students trying to learn GPU programming.
An alternative would be to have your students use an X-forwarding SSH client such as MobaXterm, or else a remoting solution like TightVNC, from the Windows box (VM) to the Linux box where the GPUs are. There are some nuanced differences between these two approaches. I believe the X-forwarded SSH client approach will have a somewhat lighter network load and does not actually require an X desktop to be running on the target, whereas TightVNC is an actual remote-desktop solution. The user experience may therefore differ somewhat between the two approaches; however, if all the machines in question (Windows VMs, Linux GPU box(es)) have 100Mb or faster networking between them, and you're only running a few clients at a time, I think it may not matter much.
Either solution will probably work best if you establish individual user accounts on the linux box for each student/client.
And since the students will be sharing the GPU resources, there could be issues if multiple students try to run projects simultaneously, but that's probably not a problem for introductory-level programming work.
Once they have made a connection, the students can then launch Nsight Eclipse Edition, the Linux-based GUI IDE, to build CUDA projects and run/debug/profile them.
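On the Linux box itself, building a transferred set of .cu files doesn't need the VS project at all; a minimal Makefile along these lines (file and binary names here are hypothetical) is usually enough:

```make
# Minimal CUDA Makefile sketch -- adjust -arch to the target GPUs
NVCC   := nvcc
ARCH   := -arch=sm_52        # Tesla M40 is compute capability 5.2
TARGET := assignment
SRCS   := main.cu kernels.cu

$(TARGET): $(SRCS)
	$(NVCC) $(ARCH) -o $@ $^

clean:
	rm -f $(TARGET)
```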

writing node.js raspberrypi programs on windows/ubuntu

I've spent the past two days trying to get the node wiring-pi module running on either Windows or Ubuntu. It installed with no problem on my Raspberry Pi, but developing on the RPi isn't ideal. After a ton of error messages that don't lead me very close to a solution, I'm beginning to realize that taking a node module designed to run on an ARM processor and getting it working on an x86 machine for development may not be the best idea.
Has anybody else dealt with this sort of thing before? How do you write your ARM-based programs in an x86 environment? Developing directly on the Pi has its own set of issues.
What I was thinking of doing was to require the wiring-pi module like this
var wpi = require('wiring-pi') || { /* recreate the required wiring-pi methods for testing on x86 */ };
however, that would mean my npm install would also fail, or need to be different depending on whether I was building directly on the Raspberry Pi or on the Windows/Ubuntu x86 system.
Anybody else have another solution to working around these sorts of issues?
I have the same problem, and came to the same realization that trying to get the ARM modules working on x86 was not feasible. Hopefully your Raspberry Pi-specific calls are isolated in a module that you can easily replace on x86. I've not found a more clever solution than that.
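One way to sketch that isolation: a small wrapper module that falls back to a stub when the real module won't load. Note that `require('wiring-pi') || {...}` as written in the question won't work, because a failed require throws rather than returning a falsy value, so a try/catch is needed. The stub method names below mirror wiring-pi's API, but the wrapper itself is illustrative:

```javascript
// gpio.js -- hypothetical wrapper isolating all wiring-pi calls.
// On the Pi the real module loads; on x86 the require() throws and
// a stub with the same surface is used instead.
let wpi;
try {
  wpi = require('wiring-pi'); // only builds/installs on ARM
} catch (e) {
  wpi = {
    setup: () => {},                 // no-op on x86
    pinMode: (pin, mode) => {},
    digitalWrite: (pin, value) => {
      console.log(`stub digitalWrite(${pin}, ${value})`);
    },
    digitalRead: (pin) => 0,         // pretend the pin reads low
  };
}
module.exports = wpi;
```

For the npm install problem, listing wiring-pi under `optionalDependencies` in package.json lets the install succeed on x86 even though the module fails to build there.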

How to simulate ThreadX application on Windows OS

I have an application using ThreadX 5.1 as the kernel.
The Image is flashed on to a hardware running an ARM 9 processor.
I'm trying to build a Simulator for the application that can be run on Windows (say XP, 32-bit).
Is there any way I can make it run on Windows, without modifying the entire source code to start calling win32 system calls?
You can build a simulator for the application that runs on Windows with "ThreadX for Win32".
The "ThreadX for Win32" specification is here:
http://rtos.com/products/threadx/Win32
Yes you can if you are willing to put in the work.
First, observe that each ThreadX system call has an equivalent POSIX call, except for events.
So your ThreadX program can run as a single process using POSIX threads, mutexes, etc.
Events can be handled by an external library (there are a few out there).
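If you'd rather not pull in an external library, events can also be approximated with a mutex plus a condition variable. A minimal sketch of a ThreadX-style event-flags group, assuming "set + wait-any with clear" semantics only (the names are illustrative, not from any real library):

```c
#include <pthread.h>
#include <stdint.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    uint32_t        flags;
} event_flags_t;

void ef_init(event_flags_t *ef) {
    pthread_mutex_init(&ef->lock, NULL);
    pthread_cond_init(&ef->cond, NULL);
    ef->flags = 0;
}

void ef_set(event_flags_t *ef, uint32_t bits) {
    pthread_mutex_lock(&ef->lock);
    ef->flags |= bits;
    pthread_cond_broadcast(&ef->cond); /* wake all waiters */
    pthread_mutex_unlock(&ef->lock);
}

/* Block until any of `bits` is set; consume and return the matches,
 * roughly like tx_event_flags_get with TX_OR_CLEAR. */
uint32_t ef_wait_any(event_flags_t *ef, uint32_t bits) {
    pthread_mutex_lock(&ef->lock);
    while ((ef->flags & bits) == 0)
        pthread_cond_wait(&ef->cond, &ef->lock);
    uint32_t got = ef->flags & bits;
    ef->flags &= ~got; /* clear the consumed flags */
    pthread_mutex_unlock(&ef->lock);
    return got;
}
```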
If you find this difficult on Windows, the simplest thing to do is set up a Linux VM. I use an Ubuntu VM running on VirtualBox; it is very easy to set up. All you will need is the CDT version of Eclipse.
Next you need to stub out all of your low level system calls.
This is also easier than you might think. For example, if you have a SPI driver to read and write to flash, you can replace the flash with a big array, which is quite easy to work with at this level.
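The array-backed flash idea is just this (function names and the device size are hypothetical, stand-ins for whatever your real driver exposes):

```c
#include <string.h>
#include <stdint.h>

/* Host-side stand-in for a SPI flash driver: the "device" is a byte
 * array, so module code calling flash_read/flash_write runs
 * unmodified on the PC. */
#define FLASH_SIZE 4096
static uint8_t flash_mem[FLASH_SIZE];

int flash_read(uint32_t addr, void *buf, size_t len) {
    if (addr + len > FLASH_SIZE) return -1; /* out of range */
    memcpy(buf, &flash_mem[addr], len);
    return 0;
}

int flash_write(uint32_t addr, const void *buf, size_t len) {
    if (addr + len > FLASH_SIZE) return -1; /* out of range */
    memcpy(&flash_mem[addr], buf, len);
    return 0;
}
```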
Having said all this, you may get more mileage if your ThreadX application is modular. Then you can test each module on its own and you don't need to mess with threads, etc.
As a first approximation this may give you what you need without going the distance to port the whole thing to run under posix.
I have done this successfully in the past and developed a full set of unit tests for a module, which allowed me to develop and test it (on my Mac) before going to the target. Development is much faster and more reliable this way.
Another option you may want to consider is to find a QEMU project that supports your microprocessor. With some work you can develop a complete simulator for your platform and then run the real firmware under the emulator.
Good luck.
