I am trying to import models from Hugging Face and use them in Visual Studio Code.
I installed transformers, tensorflow, and torch.
I have tried looking at multiple tutorials online but have found nothing.
I am trying to run the following code:
from transformers import pipeline
classifier = pipeline('sentiment-analysis')
result = classifier("I hate it when I'm sitting under a tree and an apple hits my head.")
print(result)
However, I get the following error:
No model was supplied, defaulted to distilbert-base-uncased-finetuned-sst-2-english and revision af0f99b (https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english).
Using a pipeline without specifying a model name and revision in production is not recommended.
Traceback (most recent call last):
File "c:\Users\user\Desktop\Artificial Intelligence\transformers\Workshops\workshop_3.py", line 4, in <module>
classifier = pipeline('sentiment-analysis')
File "C:\Users\user\Desktop\Artificial Intelligence\transformers\src\transformers\pipelines\__init__.py", line 702, in pipeline
framework, model = infer_framework_load_model(
File "C:\Users\user\Desktop\Artificial Intelligence\transformers\src\transformers\pipelines\base.py", line 266, in infer_framework_load_model
raise ValueError(f"Could not load model {model} with any of the following classes: {class_tuple}.")
ValueError: Could not load model distilbert-base-uncased-finetuned-sst-2-english with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForSequenceClassification'>, <class 'transformers.models.auto.modeling_tf_auto.TFAutoModelForSequenceClassification'>, <class 'transformers.models.distilbert.modeling_distilbert.DistilBertForSequenceClassification'>, <class 'transformers.models.distilbert.modeling_tf_distilbert.TFDistilBertForSequenceClassification'>).
I have already searched online for ways to set up transformers to use in Visual Studio Code but nothing is helping.
Does anyone know how to fix this error, or how to successfully use models from Hugging Face in my code? Any help would be appreciated.
This question is a little less about Hugging Face itself and more about your installation, the installation steps you took, and potentially your program's access to the cache directory where the models are automatically downloaded.
From what I am seeing, it is either:
1/ your program is unable to access the model
2/ your program is throwing specific value errors in a bit of an edge case
If 1/: take a look here: https://huggingface.co/docs/transformers/installation#cache-setup
Notice that the docs walk through where the pre-trained models are downloaded. On Windows, the default cache location is C:\Users\username\.cache\huggingface\hub (with your own username, of course). Check that the model was actually downloaded there.
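As a quick sanity check (just a sketch, assuming the default cache location), you can list that directory from Python to confirm the model files actually landed there:
from pathlib import Path

cache_dir = Path.home() / ".cache" / "huggingface" / "hub"  # default cache location
for entry in cache_dir.iterdir():
    print(entry.name)  # expect something like models--distilbert-base-uncased-finetuned-sst-2-english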
Second, if for some reason there is an issue with downloading, you can try downloading the model manually and running in offline mode (this is more to get it up and running): https://huggingface.co/docs/transformers/installation#offline-mode
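If you go the offline route, the installation guide describes the TRANSFORMERS_OFFLINE environment variable; a minimal sketch (assuming the model has already been downloaded once) is to set it before importing transformers:
import os

os.environ["TRANSFORMERS_OFFLINE"] = "1"  # from the offline-mode section of the installation docs
from transformers import pipeline  # import only after setting the variable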
Third, if it is downloaded, do you have the right permissions to access the .cache directory? Try running your program (if it is a program that you trust) from Windows Terminal as an administrator. There are various ways to do this - find one that you're comfortable with; here are a couple of hints from Stack Overflow / Super User: Opening up Windows Terminal with elevated privileges, from within Windows Terminal, or this: https://superuser.com/questions/1560049/open-windows-terminal-as-admin-with-winr
If 2/: I have seen people bring up very similar issues about specific classes not being found (not the same as yours, but similar), and the issue was solved by installing PyTorch, because some models only exist as PyTorch models. You can see the full response from @YokoHono here: Transformers model from Hugging-Face throws error that specific classes couldn't be loaded
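Either way, it is also worth pinning the model (and, if needed, the framework) explicitly instead of relying on the pipeline default; a sketch, assuming PyTorch is the framework you have working:
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    framework="pt",  # force PyTorch; use "tf" if you only have TensorFlow working
)
print(classifier("I hate it when I'm sitting under a tree and an apple hits my head."))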
Short version:
I am trying to upload the US LCI database to Brightway2 and I am failing miserably. Has anyone succeeded? If so, could you share it with me? :D
Long version:
I am following the IO - Importing the US LCI database notebook and I am having a lot of problems. I am aware that, as the notebook indicates, it is a work in progress. Anyhow, I wanted to give it a try:
I tried uploading every ecospold version of the database found here, following the method from the notebook. The only one that gave me similar results was version FY20.Q3.02. However, right off the bat I get the following differences/errors:
Same as in the notebook, I get this error: Couldn't apply strategy link_technosphere_by_activity_hash: Object in source database can't be uniquely linked to target database, plus two activities that are linked. When I follow the instructions to ignore these datasets, it throws that error over and over again.
Trying to move on with the tutorial, I get more errors and at the end I end up with all exchanges unlinked:
633 datasets
37513 exchanges
37505 unlinked exchanges
Finally, after running the code in line [15]:
import functools
f = functools.partial(
    link_iterable_by_fields,
    other=Database(config.biosphere),
    kind='biosphere'
)
sp.apply_strategy(f)
sp.statistics(f)
I end up with:
0 datasets
0 exchanges
0 unlinked exchanges
Which is hilarious and sad at the same time. Since I am new to Python and BW, my troubleshooting is clumsy and probably erroneous (I promise I googled a lot and went through the code). I have concluded that I am failing and it is time to ask questions:
Has anybody succeeded uploading the US LCI database to Brightway2?
If so, how? Which file did you use?
Thank you!!!!
This is an excellent question. I have added text to the offending notebook to note that it is obsolete.
In general, I think trying to import the ecospold files is a fool's errand, as, though they are labeled ecospold2, they are actually ecospold1 (which is a totally different format):
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ecoSpold xmlns="http://www.EcoInvent.org/EcoSpold01">
The most recent export also raises an error when I try the ecospold1 importer:
AttributeError: no such child: {http://www.EcoInvent.org/EcoSpold01}modellingAndValidation
This is a required attribute in ecospold1.
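For context, an ecospold1 import attempt looks roughly like the sketch below (the path and database name are placeholders, not from the actual export):
from bw2io import SingleOutputEcospold1Importer

sp = SingleOutputEcospold1Importer(
    "/path/to/uslci_ecospold_export",  # placeholder: directory containing the .xml files
    "US LCI",
)
sp.apply_strategies()
sp.statistics()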
I think the best way forward would be to consume the JSON-LD directly. Note that it is important not to run bw2setup(), as you would also want to use their list of elementary flows and LCIA methods. Currently the experimental JSON-LD importer fails because the provided datasets need allocation, but don't provide a set of consistent allocation methods. When I use the git checkout of bw2io and do the following:
uslci = JSONLDImporter(
"/Users/cmutel/Downloads/National_Renewable_Energy_Laboratory-USLCI_Database/",
"US LCI",
preferred_allocation="CAUSAL_ALLOCATION"
)
uslci.apply_strategies()
I get the following error:
UnallocatableDataset: We currently only support exchange-specific CAUSAL_ALLOCATION
This is fixable, but someone would need to step through this and fix the allocation procedure, and I don't have the time to do that now.
I have used this project from Github: https://github.com/nicknochnack/TFODCourse
The project contains a model that can detect a license plate in a given vehicle image. The GitHub repo also contains code for converting the model into a TensorFlow Lite file.
I used that code to generate TFLite file.
And then, I followed this link: https://developers.google.com/codelabs/tflite-object-detection-android
There I downloaded the sample object detection application and, following the instructions, copied my TFLite files into the Android application.
Now, if I run the application and take a photo, it gives me this error:
/TaskJniUtils: Error getting native address of native library: task_vision_jni
java.lang.RuntimeException: Error occurred when initializing ObjectDetector: Input tensor has type kTfLiteFloat32: it requires specifying NormalizationOptions metadata to preprocess input images.
at org.tensorflow.lite.task.vision.detector.ObjectDetector
I understand that I have to add metadata to my TFLite model, so I searched for it and ended up at this link: https://www.tensorflow.org/lite/models/convert/metadata#model_with_metadata_format
But I didn't understand at all what exactly I should be doing. Can anyone please point me in the right direction: for my problem specifically, what exactly do I need to do?
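For what it's worth, the metadata writer library in tflite-support has an object detector writer; a minimal sketch (the file names and normalization values below are assumptions and depend on how the model was trained) looks like this:
from tflite_support.metadata_writers import object_detector
from tflite_support.metadata_writers import writer_utils

MODEL_PATH = "model.tflite"                  # placeholder: your exported detector
LABEL_FILE = "labelmap.txt"                  # placeholder: one class name per line
SAVE_TO_PATH = "model_with_metadata.tflite"  # output model with metadata attached

writer = object_detector.MetadataWriter.create_for_inference(
    writer_utils.load_file(MODEL_PATH),
    [127.5],       # assumed input normalization mean (maps [0, 255] to [-1, 1])
    [127.5],       # assumed input normalization std
    [LABEL_FILE],
)
writer_utils.save_file(writer.populate(), SAVE_TO_PATH)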
I was able to run the Flask app with yolov5 on a PC with an internet connection. I followed the steps mentioned in the yolov5 docs and used this file: yolov5/utils/flask_rest_api/restapi.py.
But I need to achieve the same offline (on a particular PC). Now the issue is, when I am using the following:
model = torch.hub.load("ultralytics/yolov5", "yolov5", force_reload=True)
It tries to download the model from the internet and throws an error.
Urllib.error.URLError: <urlopen error [Errno - 2] name or service not known>
How can I get the same results offline?
Thanks in advance.
If you want to run detection offline, you need to have the model already downloaded.
So, download the model (for example yolov5s.pt) from https://github.com/ultralytics/yolov5/releases and store it, for example, in yolov5/models.
After that, replace
# model = torch.hub.load("ultralytics/yolov5", "yolov5s", force_reload=True) # force_reload to recache
with
model = torch.hub.load(r'C:\Users\Milan\Projects\yolov5', 'custom', path=r'C:\Users\Milan\Projects\yolov5\models\yolov5s.pt', source='local')
With this line, you can run detection also offline.
Note: When you start the app for the first time with the updated torch.hub.load, it will download the model if not present (so you do not need to download it from https://github.com/ultralytics/yolov5/releases).
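For completeness, running a detection with the locally loaded model could look like this (the image path is a placeholder):
import torch

model = torch.hub.load(r'C:\Users\Milan\Projects\yolov5', 'custom',
                       path=r'C:\Users\Milan\Projects\yolov5\models\yolov5s.pt',
                       source='local')
results = model('path/to/image.jpg')    # placeholder image path
results.print()                         # print a summary of the detections
detections = results.pandas().xyxy[0]   # detections as a pandas DataFrame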
There is one more issue involved here: when this code is run on a machine that has no internet connection at all, you may face the following error.
Downloading https://ultralytics.com/assets/Arial.ttf to /home/<local_user>/.config/Ultralytics/Arial.ttf...
Traceback (most recent call last):
File "/home/<local_user>/Py_Prac_WSL/yolov5-flask-master/yolov5/utils/plots.py", line 56, in check_pil_font
return ImageFont.truetype(str(font) if font.exists() else font.name, size)
File "/home/<local_user>/.local/share/virtualenvs/23_Jun-82xb8nrB/lib/python3.8/site-packages/PIL/ImageFont.py", line 836, in truetype
return freetype(font)
File "/home/<local_user>/.local/share/virtualenvs/23_Jun-82xb8nrB/lib/python3.8/site-packages/PIL/ImageFont.py", line 833, in freetype
return FreeTypeFont(font, size, index, encoding, layout_engine)
File "/home/<local_user>/.local/share/virtualenvs/23_Jun-82xb8nrB/lib/python3.8/site-packages/PIL/ImageFont.py", line 193, in __init__
self.font = core.getfont(
OSError: cannot open resource
To overcome this error, you need to manually download the Arial.ttf file from https://ultralytics.com/assets/Arial.ttf and paste it in the following location. On Linux:
/home/<your_pc_user>/.config/Ultralytics
On windows, paste Arial.ttf here:
C:\Windows\Fonts
The first line of the error message mentions the same thing. After this, the code runs smoothly in offline mode.
Further, as mentioned at https://docs.ultralytics.com/tutorials/pytorch-hub/, any custom-trained model other than the one uploaded to the PyTorch model hub can be accessed with this code.
path_hubconfig = 'absolute/path/to/yolov5'
path_trained_model = 'absolute/path/to/best.pt'
model = torch.hub.load(path_hubconfig, 'custom', path=path_trained_model, source='local') # local repo
With this code, object detection is carried out by the locally saved custom-trained model. Once the custom-trained model is saved locally, this piece of code accesses it directly, avoiding any need for the internet.
I have a Google Colab notebook with PyTorch code running in it.
At the beginning of the train function, I create, save and download word_to_ix and tag_to_ix dictionaries without a problem, using the following code:
from google.colab import files
torch.save(tag_to_ix, pos_dict_path)
files.download(pos_dict_path)
torch.save(word_to_ix, word_dict_path)
files.download(word_dict_path)
I train the model, and then try to download it with the code:
torch.save(model.state_dict(), model_path)
files.download(model_path)
Then I get a MessageError: TypeError: Failed to fetch.
Obviously, the problem is not with third-party cookies (as suggested here), because the first files are downloaded without a problem. (I actually also tried adding the link in my Allow section, but, surprise surprise, it made no difference.)
I was originally trying to save the model as is (which, to my understanding, saves it as a pickle), and I thought maybe Colab files doesn't handle downloading pickles well, but as you can see above, I'm now trying to save a dict object (which is also what word_to_ix and tag_to_ix are), and it's still not working.
Downloading the file manually with right-click isn't a solution, because sometimes I leave the code running while I do other things, and by the time I get back to it, the runtime has disconnected, and the files are gone.
Any suggestions?
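One possible workaround (just a sketch, assuming the goal is only to persist the files past a runtime disconnect) is to copy them to a mounted Google Drive instead of relying on files.download:
import torch
from google.colab import drive

drive.mount('/content/drive')
drive_model_path = '/content/drive/MyDrive/model_state_dict.pt'  # placeholder path in your Drive
torch.save(model.state_dict(), drive_model_path)  # model is the trained model; the file survives a disconnect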
When I try to run CRFSharp, I get the following error in VS2012:
+err{"Could not find file 'C:\codeplex\POIParser\data\training\POIParser_corpus.train.tag'.":"C:\codeplex\POIParser\data\training\POIParser_corpus.train.tag"} System.Exception {System.IO.FileNotFoundException}
Where can I find this file "POIParser_corpus.train.tag"? I have downloaded both the source code and the main program of CRFSharp and am running it in VS2012.
Also, I want to ask: can I use CRFSharp to extract aspects by using training templates?
How do you run it?
To train a CRF model, you need to prepare a training corpus and a template file first, and then run CRFSharpConsole.exe with some parameters. CRFSharpConsole.exe will show its usage if you run it without any parameters.
Actually, I recommend that you download the demo package from the [DOWNLOADS] section of the CRFSharp project web site (http://crfsharp.codeplex.com) first, and then play with the demo. The demo package will show you how to run CRFSharp on the command line. For example, you can download the English named entity recognition demo and run its batch file to train a new model and test it.
As for the POIParser_corpus.train.tag file you mentioned, it's the training corpus for the Chinese POI inner-structure parser. You can also download it, run build_model.bat to train the model, and run test_model.bat to test it.