Can't Import Bert_Text after installing it successfully - python-3.x

BERT is a very powerful model for text classification, but implementing BERT requires much more code than most other models. bert-text is a PyPI package that gives developers a ready-to-use solution. I have installed it properly, but when I try to import it, it throws ModuleNotFoundError: No module named 'bert_text'. I have written the name bert_text correctly.
I have tried it in Kaggle, Colab, and on my local machine, but the error is the same.

Hey, as this is a refactor made by Yan Sun, this issue is already pending. You can go to this link and subscribe to get an update when the developers provide a solution: https://github.com/SunYanCN/bert-text/issues/1
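As a quick sanity check (a standard-library sketch, not part of bert-text itself), you can ask importlib whether the interpreter can see the package at all. This separates "pip reported a successful install" from "the installed package actually exposes a bert_text module":

```python
import importlib.util

def module_available(name):
    """Return True if `name` resolves to an importable top-level module."""
    return importlib.util.find_spec(name) is not None

# In the environments from the question this would print False for
# "bert_text" even though pip reported success, which points at a mismatch
# between the PyPI project name and the module the package ships.
print(module_available("bert_text"))
```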

Related

lxml library in AWS Lambda

I've included this library as a layer in my Lambda function, but when I test it I get the error: cannot import name 'etree' from 'lxml'.
There are multiple posts about people having this issue; some say I need to build it myself to compile some C libraries. Most posts say to look for another file or folder named 'lxml', which I've verified is not the issue.
I'm able to run the same code I've deployed to my layer on my local Linux workstation, and it runs without an issue.
It turns out my Lambda was running Python 3.8, and that version is not compatible with the version of lxml I am using (4.5.1). Changing the runtime to 3.7 fixed the issue. Hope this helps someone.
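A quick way to surface this kind of mismatch is to compare the runtime's Python version against the version the layer's C extensions were built for. The helper below is a sketch (the function name and the (3, 7) pairing encode only the single data point from this answer, not any AWS API):

```python
import sys

def layer_matches_runtime(layer_python, runtime=None):
    """Return True when the Lambda runtime's Python version matches the
    version the layer's C extensions (e.g. lxml wheels) were built for."""
    if runtime is None:
        # e.g. (3, 8) on the python3.8 Lambda runtime
        runtime = sys.version_info[:2]
    return tuple(runtime) == tuple(layer_python)

# Per the answer: an lxml 4.5.1 layer built for 3.7 works on the
# python3.7 runtime but not on python3.8.
print(layer_matches_runtime((3, 7), (3, 7)))  # True
print(layer_matches_runtime((3, 7), (3, 8)))  # False
```

Logging this from the top of the handler makes the runtime/layer mismatch visible in CloudWatch before any import fails.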

Can't we run an onnx model imported to pytorch?

I have been trying to import a model in ONNX format to use with PyTorch, and I am finding it difficult to get an example of this, as most of the resources on the Internet talk about exporting a PyTorch model to ONNX.
I found that torch.onnx can only export a model; the import direction hasn't been implemented yet. Installing the onnx library directly lets me do onnx.load("model_name.onnx"). How do I use this model with PyTorch? I am not able to move the model to the GPU with model.to(device="GPU").
PyTorch doesn't currently support importing onnx models. As of writing this answer it's an open feature request.
While not guaranteed to work, a potential solution is to use a tool developed by Microsoft called MMdnn (no, it's not Windows-only!), which supports conversion to and from various frameworks. Unfortunately, onnx can only be a target of a conversion, not a source. That said, you may be able to import your model into another framework, then use MMdnn to convert from that framework to pytorch. Obviously this isn't ideal, and the potential for success will depend on how other frameworks use onnx, which may not be amenable to the way MMdnn works.
Update August 2022
Unfortunately, it appears the feature request was rejected and MMdnn has been abandoned. There are some more recent 3rd party tools that provide some ability to import onnx into pytorch, like onnx2pytorch and onnx-pytorch. Neither of these tools appears to be actively developed, though pytorch and onnx are relatively stable at this point, so hopefully these tools remain relevant in the future (official support would be better IMO). Note that both of these tools have unaddressed issues, so it may be necessary to try both if one doesn't work for you.
Update September 2022
Based on the comment from @DanNissenbaum, there is a newer 3rd party tool, onnx2torch, that is being actively developed and maintained.

ImportError: No module named 'kivy'

I have three projects:
The weather app from the book "Creating Apps in Kivy", which I have completed.
A StudentDB app from a YouTube tutorial.
The beginning of my own app.
The first two still work, but if I try to build my own, kivy is not found.
I have thrown everything out but the basics. I do not know why kivy only runs for my first two projects; I had no issue with them, but I cannot escape the issue in the new one.
I have tried pip install kivy, but it is already installed; I do not know the version, though.
I use Python 3.5.6 in the Anaconda bundle.
I have tried making a new project and abandoning my old one.
The code I used (reduced to the basics):
from kivy.app import App

class BasicApp(App):
    pass

if __name__ == "__main__":
    BasicApp().run()
Does anyone know this issue? Do I have to install something special? I am not sure how to handle this one.

Python NLTK using local nltk_data

I've recently been working with the NLTK library for language processing. I can normally install packages using nltk.download('package') if I have internet access, etc.
The problem arises if I try to run my code offline on a cluster. There,
from nltk.tag import PerceptronTagger
ImportError: cannot import name 'PerceptronTagger'
and similar errors emerge, as nltk can't seem to find the nltk_data folder. I tried:
nltk.data.path.append("./nltk_data"), where I copied nltk_data along with code.
nltk.download('punkt') #, download_dir="./nltk_data"), but this doesn't work, as there is no internet access.
Question is then, how can I use nltk_data locally?
Thanks.
It appears the machine I was running this on had NLTK 3.0.2; updating NLTK solved the problem altogether.
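For the general offline case, NLTK can also be pointed at a local data directory through the NLTK_DATA environment variable, which NLTK reads when building its data search path. A minimal sketch, assuming nltk_data was copied to ./nltk_data alongside the code as in the question:

```python
import os

# Set this *before* nltk is imported, so the local folder is on the
# data search path from the start.
os.environ["NLTK_DATA"] = os.path.abspath("./nltk_data")

# What follows on the cluster would then work without network access
# (requires nltk to be installed; shown as intended usage):
# import nltk
# from nltk.tag import PerceptronTagger
```

This avoids hard-coding nltk.data.path.append(...) calls in every script that runs on the cluster.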

Uploading a lambda code along with keras libraries on AWS

I have working Lambda code, but I have a new idea for a neural network system that I am trying to implement, for which I would require Keras (Python). I tried uploading the same working code, along with the Keras dependencies, as a ZIP file from my desktop, with an additional "import keras" statement. Funnily, the same code with the additional "import keras" gives me an error saying the remote endpoint could not be reached on the developer portal, but when I remove the "import keras" statement, my code works perfectly fine. Why is this happening even after I include both the Keras dependencies and my Lambda code in the ZIP file? I understand that I am not using Keras anywhere in my existing code, but when I include the Keras dependencies and just try to import them, logically, it should still work, right?