ffi_prep_closure(): bad user_data (it seems that the version of the libffi library seen at runtime is different from the 'ffi.h' seen at compile-time) - python-3.x

I am running a few Python scripts on my Raspberry Pi 4 running Ubuntu 20.04.2 LTS (GNU/Linux 5.4.0-1034-raspi aarch64). Today all of the scripts stopped working, and I'm getting this exception in every one of them:
ffi_prep_closure(): bad user_data (it seems that the version of the libffi library seen at runtime is different from the 'ffi.h' file seen at compile-time)

I'm actually interested in the root cause of this, but you can probably fix your issue by pinning the version of cffi to <=1.15.0.
A new version of cffi was published on 30 June, which is, I presume, why you started seeing errors around that time.
I'm seeing the same error messages on Ubuntu 20.04 but not on Arch Linux.
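A minimal sketch of that pinning approach, assuming the scripts install their dependencies with pip (adjust if you already use a virtualenv or a requirements file):

# check which cffi version the environment currently has
python3 -m pip show cffi

# pin cffi at or below 1.15.0, as suggested above
python3 -m pip install 'cffi<=1.15.0'

# optionally record the pin so future installs keep it
echo 'cffi<=1.15.0' >> requirements.txt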

Related

libboost_python-dev for Python 3.8 on Ubuntu 22.04

I have a production server with Ubuntu 20.04 and Python v3.8.10.
My development environment is Ubuntu 22.04 with Python v3.10.6.
Both environments make use of a virtual environment when running python code.
Thus far the differences between Python 3.8 and 3.10 have not required me to match my environments exactly. However, I am now forced to use a poorly maintained Python library (provided by a payment gateway), and I ran into version-specific bugs immediately after installing it. Needless to say, I am not confident that getting it to work in the development environment means it will also work in production, and I need it to work in production the first time.
I believe I have successfully installed Python 3.8 from source on my development system (without uninstalling Python 3.10), and I managed to set up the virtualenv with the older Python version as well. The installation of requirements, however, fails with /usr/bin/ld: cannot find -lboost_python: No such file or directory while installing the pgmagick package. My understanding of the error is that I am missing the libboost_python-dev package for Python v3.8.
How can I install the older libboost_python-dev package on Ubuntu 22.04?
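No answer is recorded here, but one commonly suggested route is to build Boost.Python from source against the self-compiled Python 3.8 instead of relying on the distribution package (which on 22.04 targets Python 3.10). A rough sketch, assuming an unpacked Boost source tree and a /usr/local install of Python 3.8; the paths and the symlink name are assumptions, not details from the question:

# inside a Boost source tree downloaded from boost.org
./bootstrap.sh --with-python=/usr/local/bin/python3.8 --with-libraries=python

# build and install libboost_python38 into /usr/local/lib
sudo ./b2 install

# pgmagick links with -lboost_python, so give the linker a library with that
# exact name (assumed install location; adjust to where b2 put it)
sudo ln -s /usr/local/lib/libboost_python38.so /usr/local/lib/libboost_python.so
sudo ldconfig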

Flutter extremely slow on linux

My Flutter application is extremely slow on Linux. The performance tab shows 15 fps and it never goes beyond that. I was unable to find any explanation or possible solution for this issue. Any suggestions?
clang version 10.0.0-4ubuntu1
cmake version 3.16.3
ninja version 1.10.0
pkg-config version 0.29.1
OS: Fedora Linux
For anyone potentially running into the same problem: this was either an issue with my OS or with my Flutter SDK. I initially installed the SDK using snap, but after reinstalling my OS I installed it by cloning the GitHub repo and setting the PATH variable (the way explained on the official website), and the issue was resolved.
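For reference, the repository-based install mentioned above boils down to roughly the following; the install directory is an assumption, and the official docs describe the same steps:

# remove the snap-installed SDK if it is still present
sudo snap remove flutter

# clone the stable channel of the SDK and put its bin directory on PATH
git clone https://github.com/flutter/flutter.git -b stable ~/flutter
export PATH="$HOME/flutter/bin:$PATH"   # add this line to ~/.bashrc to persist it

# verify the toolchain
flutter doctor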

Can’t make python work in conda environment (GLIBC not found)

I'm trying to set up a conda environment with Python 3.6 on a remote server running CentOS. The installation goes well, but once I try to execute python I get the following message: python: /lib64/libc.so.6: version 'GLIBC_2.15' not found. I noticed that this doesn't happen for Python versions older than 3.4.
Given this, I tried installing glibc before Python, but after installing Python 3.6 and trying to run it, I now get Segmentation fault (core dumped).
Note that I don't have permission to update conda and that the version the server is using is 4.4.7, so I haven't tried updating it. However, I had previously set up an environment without any problem; after I tried to install a package my Jupyter notebooks wouldn't work, so I removed the environment.
Which would be the new system and which the old one?
The old system: the remote server running CentOS, whose GLIBC is older than 2.15.
The new system: the one on which your Python 3.6 was compiled, which used GLIBC 2.15 (or newer).
You need to either find a Python 3.6 build which is targeted to your version of CentOS, or you need to compile one yourself on a system with GLIBC matching whatever is installed on your remote server.
P.S. Saying "server running CentOS" is like saying "system running Windows" (i.e. not saying much). Which version of CentOS?
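A quick way to confirm which side of the mismatch you are on, assuming you have shell access to the remote server (the interpreter path below is an assumption; point it at your environment's python):

# glibc version actually present on the server
ldd --version | head -n 1

# GLIBC symbol versions required by the conda-installed interpreter
objdump -T ~/miniconda3/envs/py36/bin/python | grep -o 'GLIBC_[0-9.]*' | sort -uV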

TrainDeepLearningModel tool not responding in arcgis pro

Platform: Precision 5820, 32 GB RAM, RTX 4000; Windows 10 Pro, ArcGIS Pro 2.6 concurrent license.
Issue:
I installed the deep learning tools following the guidelines provided here:
deeplearninginstallation
TensorFlow was not found after installation, so I manually installed version 2.1.0. I now have arcgis 1.8.2, Pro 2.6, fastai 1.0.60, Python 3.6.12, pytorch 1.4.0, and tensorflow-gpu 2.1.0; the environment check in the ArcGIS Pro Python environment seemed fine.
However, after I select Toolbox > Image Analyst > Deep Learning > Train Deep Learning Model, the program seems to hang, with most buttons disabled/unresponsive; this continues until I force-terminate the program. I also ran into a "tool not licensed" error twice, which went away after I restarted the program, and a "name 'CallBackHandler' is not defined" error once, which also went away after a restart.
I tried running the command from the ArcGIS Pro Python prompt:
TrainDeepLearningModel(r"**", r"**", 40, "RETINANET", 16, "# #", None, "RESNET50", None, 10, "STOP_TRAINING", "FREEZE_MODEL")
Executing the command also sent the program into a hang similar to the previous one. The monitor showed that RAM and GPU usage hadn't changed much, so I left the program running for an hour before forcibly terminating it.
I'd greatly appreciate it if anyone can tell me what the issues are here. I'll post any other environment parameters if anyone needs them. Cheers.
I got the tool up and running by executing conda install -c pytorch -c fastai fastai=1.0.54 pytorch=1.1.0 torchvision scikit-image and removing all the conflicting specifications in the cloned arcgispro-py3 env that I had. I still don't understand what went wrong; presumably one or more packages in the env were conflicting, but since I'm not a Python expert, I couldn't identify the exact issue.
Before this I tried the versions stated here: deeplearning install guide, but I wasn't able to get past tensorflow-gpu because the installer kept reporting conflicts. I now don't actually have tensorflow-gpu in the env; I have tensorflow 2.1.0, keras-applications 1.0.8, keras-base 2.3.1, keras-preprocessing 1.1.0 (no keras-gpu), scikit-image 0.17.2, pillow 6.2.1, fastai 1.0.54, pytorch 1.1.0, and libtiff 4.0.10. Some of these differ from what the guideline specifies.
The thing is, when I ran the process, CPU usage went up but GPU usage didn't, despite the fact that I specified GPU as the processor. I have more pressing things to do right now, like getting the analysis finished, so I'll probably tweak the env a little after I'm done with this bit and see what happens. Meanwhile, anyone's input is still welcome.
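One small check that may help with the remaining GPU question: ask the PyTorch build in the cloned env whether it can actually see the card (the env name below is an assumption):

# activate the cloned ArcGIS Pro environment (name is an assumption)
conda activate arcgispro-py3-clone

# print the PyTorch version and whether CUDA is usable
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"

# if this prints False, the installed pytorch=1.1.0 is likely a CPU-only
# build, which would explain the CPU-bound training run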

creating appimage using source code and linuxdeployqt

I am trying to create an AppImage for my Linux system. Using Qt Creator I have completed the programming and run the app successfully, but when I try to turn it into an AppImage using linuxdeployqt I am facing some errors:
linuxdeployqt 5 (commit 37631e5), build 631 built on 2019-01-25 22:47:58 UTC ERROR:
The host system is too new.
Please run on a system with a glibc version no newer than what comes with the oldest still-supported mainstream distribution, which currently is glibc 2.20.
This is so that the resulting bundle will work on most still-supported Linux distributions.
For more information, please see
https://github.com/probonopd/linuxdeployqt/issues/340
I don't know what this issue is, and when I visit the linked page it is not clear to me either. If anyone is familiar with this kind of issue, please help.
It means that your glibc is too new.
That's correct: to work around this issue while using linuxdeployqt, you have to choose an older system, such as CentOS 6 or Ubuntu 14.04, as your build environment.
As an alternative, you can use appimage-builder which allows producing AppImages on newer systems.
It means that your glibc is too new. I think the supported glibc version is the one that comes with Ubuntu 14.04, as mentioned here: https://github.com/probonopd/linuxdeployqt/issues/340. I have faced the same problem and am still struggling to solve it.
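One way to get such an older build environment without keeping a dedicated machine around is a container; a rough sketch, assuming Docker is available (the image and mount path are assumptions):

# start an Ubuntu 14.04 container with the project mounted at /src
docker run --rm -it -v "$PWD":/src -w /src ubuntu:14.04 bash

# inside the container: install Qt and the build tools, build the project,
# then run linuxdeployqt there, where glibc is old enough for its check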
