RASA: Entry point not found - tensorflow\contrib\coder\python\ops\_coder_ops.so not found - python-3.x

Rasa Core version: 0.13.0a4
Python version: 3.6.1
Operating system: Windows 10
I could successfully train the model for the created stories using python nlu_model.py. But when I try to launch the bot in the command prompt, I get an "entry point not found" error.
This link said that it might be a TensorFlow GPU issue, but I am not using the GPU.
I tried upgrading to the latest TensorFlow version, but it still didn't work.
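For anyone hitting the same message, a minimal diagnostic, assuming a plain TensorFlow 1.x CPU build as required by Rasa Core 0.13, is to check which TensorFlow is actually importable and whether the contrib package named in the error can be loaded at all outside Rasa (the import path below is taken from the error message, not from Rasa's own code):
import tensorflow as tf
print(tf.__version__)            # Rasa Core 0.13 expects a TensorFlow 1.x build
# Importing the contrib package named in the error message should reproduce the
# failure outside Rasa if the native _coder_ops library is broken or missing.
import tensorflow.contrib.coder  # noqa: F401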

Related

Can’t make python work in conda environment (GLIBC not found)

I'm trying to set up a conda environment with Python 3.6 on a remote server running CentOS. The installation goes well, but once I try to execute python I get the following message: python: /lib64/libc.so.6: version 'GLIBC_2.15' not found. I noticed that this doesn't happen with Python versions older than 3.4.
Given this, I tried installing glibc before Python, but after installing Python 3.6 and trying to run it, I now get Segmentation fault (core dumped).
Note that I don't have permission to update conda and that the version the server is using is 4.4.7, so I haven't tried updating it. However, I had previously set up an environment without any problem; after I tried to install a package my Jupyter notebooks stopped working, so I removed that environment.
The key is to distinguish the new system from the old one.
The old system, the remote server running CentOS, has a GLIBC older than 2.15.
The new system, the one on which your Python 3.6 was compiled, used GLIBC 2.15 (or newer).
You need to either find a Python 3.6 build targeted at your version of CentOS, or compile one yourself on a system whose GLIBC matches whatever is installed on your remote server.
P.S. Saying "server running CentOS" is like saying "system running Windows" (i.e. not saying much). Which version of CentOS?
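As a quick way to answer that, the server's C library version can be read from any Python that still runs there; a minimal check using only the standard library:
import platform
# Reports the C library the running interpreter was linked against,
# e.g. ('glibc', '2.12') on an older CentOS release.
print(platform.libc_ver())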

zsh: illegal hardware instruction npm run start-server

Goal: build a Node.js web server for a training and classification model on the server side using TensorFlow.js.
I am trying to follow this tutorial to learn TensorFlow.js.
Expected results: the server should run locally on an appropriate port, like so:
$ npm run start-server
...
> Running socket on port: 8001
Epoch 1 / 1
eta=0.0 ========================================================================================================>
2432ms 34741us/step - acc=0.429 loss=1.49
Actual results: the server does not run.
#MacBook-Pro baseball % npm run start-server
> tfjs-examples-baseball-node@0.2.0 start-server
> node server.js
zsh: illegal hardware instruction npm run start-server
My hardware and software configuration:
MacBook Pro (13 inches, M1, 2020)
Chip : Apple M1
macOS Big Sur version 11.4
@tensorflow/tfjs-node: ^1.3.2
Node.js version : v14.17.5
Xcode 12.5.1 / Build version 12E507
What have I tried?
At this stage, I can't remember how many GitHub issues and Stack Overflow questions I have read trying to solve this problem, without success.
Tried prior versions of @tensorflow/tfjs-node such as 1.2.0 and 1.0.0.
Tried the most recent version of @tensorflow/tfjs-node, i.e. 3.8.0.
Checked if Python 2.7.16 and 3.9.6 are installed on my Mac. They are.
Deleted node_modules and package-lock.json and ran npm install
Updated Node.js to its most recent version.
Made sure Xcode is installed.
It seems to be a hardware compatibility issue, but I can't figure out the solution. Please note that I am trying to use the JavaScript implementation of TensorFlow, not Python.
First, tfjs-node includes a binary TensorFlow implementation (the same one as Python); the JS part is just a wrapper (the tfjs-node installer actually builds N-API bindings to the binary).
Second, this is an M1 CPU, which does not run the x86 instruction set natively, and advanced instructions (e.g. AVX) are simply not supported.
Since you've already tried the old tfjs-node v1.2 (the last release before AVX), the only proper solution is to build tfjs-node from scratch on M1 hardware - quite a painful process, but not impossible.
You might get lucky if you search GitHub for someone else's port.
UPDATE:
Apple has created a fork of TF, based on TF 2.4RC, that uses Apple's ML libraries (and optionally Metal),
but it doesn't seem well maintained - the last update was in March 2021.
https://github.com/apple/tensorflow_macos
https://developer.apple.com/metal/tensorflow-plugin/
So the first step is to get TF working in Python;
then it's a question of rebuilding the @tensorflow/tfjs-node package to use that library instead of the prebundled one.
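A minimal sanity check for that first step, assuming a native arm64 Python with the tensorflow-macos and tensorflow-metal packages from the Apple plugin page installed, is to confirm that TensorFlow imports and that a Metal GPU device is visible:
import tensorflow as tf

print(tf.__version__)
# The Metal plugin should expose the M1 GPU here; an empty list means only the CPU path is active.
print(tf.config.list_physical_devices('GPU'))
# Run a small op to confirm kernels actually execute.
print(tf.reduce_sum(tf.random.normal([1000, 1000])).numpy())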

Open3D-ML and pytorch

I'm currently trying to work with Open3D-ML and PyTorch. I followed the installation guide given in the Open3D-ML GitHub repo. However, when I try to import open3d.ml.torch I get the following error: Exception: Open3D was not built with PyTorch support!
I’m working with
python 3.8
open3d 0.12.0
pytorch 1.6.0
cuda 10.1
Windows 10
Do you have any idea where that error comes from?
Open3D-ML does not support Windows at the moment. You can install Ubuntu via WSL (Windows Subsystem for Linux) on Windows and install open3d-ml there.
Check whether the output of the following commands in Windows Terminal (or PowerShell) looks like this:
wsl cat /proc/version
Linux version 5.10.16.3-microsoft-standard-WSL2
wsl --list
Ubuntu-20.04 (Standard)
wsl -l -v
Ubuntu-20.04 Running 2
In my experience, Open3D-ML with CUDA only works if you are a Windows Insider, have updated the WSL kernel correctly, and are using Ubuntu in WSL.
Also, check if the folder /usr/lib/wsl/lib exists. If not, then CUDA won't work in WSL.
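Once inside the Ubuntu/WSL environment, a minimal sanity check, assuming open3d and a matching torch wheel are installed there, is the same import that fails on Windows plus a CUDA visibility check:
import torch
import open3d as o3d
# This import raises "Open3D was not built with PyTorch support!" when the
# installed wheel lacks the PyTorch ops.
import open3d.ml.torch as ml3d

print(o3d.__version__, torch.__version__)
print(torch.cuda.is_available())  # should be True once CUDA in WSL is set up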

TrainDeepLearningModel tool not responding in arcgis pro

Platform: Precision 5820, 32 GB RAM, RTX 4000; Windows 10 Pro, ArcGIS Pro 2.6 (concurrent license)
Issue:
I installed the deep learning tools following the guidelines provided here:
deeplearninginstallation
TensorFlow was not found after installation, so I manually installed version 2.1.0. I now have arcgis 1.8.2, Pro 2.6, fastai 1.0.60, python 3.6.12, pytorch 1.4.0, tensorflow-gpu 2.1.0; the environment check in the ArcGIS Pro Python window seemed fine.
However, after I select Toolbox > Image Analyst > Deep Learning > Train Deep Learning Model, the program seems to hang, with most buttons disabled/unresponsive, and this continues until I force-terminate it. I also ran into a "tool not licensed" error twice, which went away after I restarted the program, and a "name 'CallBackHandler' is not defined" error once, which also went away after a restart.
I tried running the command from the ArcGIS Pro Python prompt:
TrainDeepLearningModel(r"**", r"**", 40, "RETINANET", 16, "# #", None, "RESNET50", None, 10, "STOP_TRAINING", "FREEZE_MODEL")
Executing the command sent the program into a similar hang. The system monitor showed that RAM and GPU usage hadn't changed much, so I left the program running for an hour before forcibly terminating it.
I'd greatly appreciate it if anyone can tell me what the issues are here. I'll post any other env parameters if anyone requires. Cheers.
I got the tool up and running by executing conda install -c pytorch -c fastai fastai=1.0.54 pytorch=1.1.0 torchvision scikit-image and removing all the conflicting specifications in the cloned arcgispro-py3 env that I had. I still don't understand what went wrong; presumably one or more packages in the env were conflicting, but since I'm not a Python expert, I couldn't identify the exact issue.
Before this I tried the versions stated in the deep learning install guide, but wasn't able to get past tensorflow-gpu because the installer kept finding conflicts. Now I actually don't have tensorflow-gpu in the env; I have tensorflow 2.1.0, keras-applications 1.0.8, keras-base 2.3.1, keras-preprocessing 1.1.0 (no keras-gpu), scikit-image 0.17.2, pillow 6.2.1, fastai 1.0.54, pytorch 1.1.0, and libtiff 4.0.10. Some versions differ from what the guideline provides.
The thing is, when I ran the process, CPU usage went up while GPU usage didn't, despite the fact that I specified GPU as the processing core. But I have more pressing things to do right now, like getting the analysis finished, so I'll probably tweak the env a little after I'm done with this bit and see what happens. Meanwhile, anyone's input is still welcome.
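Since GPU usage stayed flat during training, one quick check worth running from the ArcGIS Pro Python prompt in the cloned env (a minimal sketch assuming nothing beyond the pytorch package already installed there):
import torch

print(torch.__version__)
# False here means a CPU-only PyTorch build was pulled in during the conda solve,
# which would explain training running on the CPU instead of the GPU.
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))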

keras Installation with already installed Tensorflow GPU version in windows 10

I have the following environment on my Windows 10 machine (screenshots attached):
Python: 3.6.0
Anaconda: 4.3.1
TensorFlow: 1.1.0
OS: Windows 10, 64-bit
Now when I try to install Keras into my system I get a huge list of errors (detailed error log attached).
Now I have two questions here.
Can I install Keras into my system when I already have the TensorFlow GPU version, which was really hard to install?
If Keras can be installed into this system configuration, will my TensorFlow GPU version still work properly afterwards?
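For anyone in the same situation, a minimal way to verify both points, assuming Keras is installed with pip on top of the existing tensorflow-gpu build (TF 1.1-era package layout), is to confirm that Keras picks TensorFlow as its backend and that the GPU device is still listed:
import keras
from keras import backend as K
import tensorflow as tf
from tensorflow.python.client import device_lib

print(keras.__version__, tf.__version__)
print(K.backend())  # should print "tensorflow"
# A GPU entry in this list means the existing tensorflow-gpu install still works.
print([d.name for d in device_lib.list_local_devices()])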
