I got the above error when I tried to reload an object of a class implementing a Deep Q Network with a target network and experience replay, and then train it again.
There are a few similar errors related to TensorFlow, but I am using PyTorch in Google Colab.
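For reference, a common pattern for checkpointing and resuming a DQN in PyTorch is to save and load state_dicts rather than pickling the whole class instance. A minimal sketch; the class name QNetwork and the dimensions are hypothetical, not taken from the question:

import torch
import torch.nn as nn

# Hypothetical Q-network for illustration
class QNetwork(nn.Module):
    def __init__(self, obs_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
    def forward(self, x):
        return self.net(x)

policy_net = QNetwork(4, 2)
target_net = QNetwork(4, 2)
optimizer = torch.optim.Adam(policy_net.parameters())

# Save the state_dicts instead of pickling the class instance
torch.save({"policy": policy_net.state_dict(),
            "target": target_net.state_dict(),
            "optim": optimizer.state_dict()}, "dqn_checkpoint.pt")

# To resume training, rebuild the objects and load the state back in
ckpt = torch.load("dqn_checkpoint.pt")
policy_net.load_state_dict(ckpt["policy"])
target_net.load_state_dict(ckpt["target"])
optimizer.load_state_dict(ckpt["optim"])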
When using a model I fine-tuned for GPT-3 via the OpenAI API from the CLI, it stopped working and I get an error with this message: "That model does not exist".
But this is a model I have used before, so it should exist.
Sometimes with new versions, old fine-tuned models stop working.
To check your current fine-tuned models, besides running openai api fine_tunes.list, you should also check whether the fine-tuned model shows up in the Playground alongside the other models.
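If you prefer to check from Python, a minimal sketch using the legacy (pre-1.0) openai client, which mirrors the CLI command above and assumes OPENAI_API_KEY is set in the environment:

import openai  # legacy (pre-1.0) client

# Each fine-tune job record includes the fine_tuned_model name and its status
for job in openai.FineTune.list()["data"]:
    print(job.get("fine_tuned_model"), job.get("status"))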
I am using the SentenceTransformers library (here: https://pypi.org/project/sentence-transformers/#pretrained-models) to create sentence embeddings with the pretrained model bert-base-nli-mean-tokens. I have an application that will be deployed to a device that does not have internet access. How can I save this model locally so that, when I call it, it loads from disk rather than attempting to download from the internet? As the library maintainers make clear, the SentenceTransformer constructor downloads the model from the internet (see here: https://pypi.org/project/sentence-transformers/#pretrained-models), and I cannot find a method for saving the model locally.
Hugging Face usage
You can download the model locally using the Hugging Face transformers library:
from transformers import AutoTokenizer, AutoModel

# Download the tokenizer and model weights from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/bert-base-nli-mean-tokens")
model = AutoModel.from_pretrained("sentence-transformers/bert-base-nli-mean-tokens")

# Save both to a local directory for offline use
tokenizer.save_pretrained('./local_directory/')
model.save_pretrained('./local_directory/')
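On the offline device you can then point from_pretrained at that directory. A minimal sketch of computing mean-pooled sentence embeddings from the locally saved model; the pooling step is my own illustration of what the -mean-tokens models do, not library code:

import torch
from transformers import AutoTokenizer, AutoModel

# Load from the local directory instead of the Hub (no internet needed)
tokenizer = AutoTokenizer.from_pretrained('./local_directory/')
model = AutoModel.from_pretrained('./local_directory/')

inputs = tokenizer(["This is an example sentence."],
                   padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**inputs).last_hidden_state  # (batch, seq_len, hidden)

# Mean-pool over non-padding tokens to get one vector per sentence
mask = inputs["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
print(embeddings.shape)  # e.g. torch.Size([1, 768])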
After instantiating the SentenceTransformer via download, you can save it to any path of your choosing with the save() method:
from sentence_transformers import SentenceTransformer

# Download the pretrained model, then persist it locally
model = SentenceTransformer('distilbert-base-nli-stsb-mean-tokens')
model.save('/my/local/directory/for/models/')
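To use it offline later, pass the saved directory back to the constructor; a short sketch:

from sentence_transformers import SentenceTransformer

# Load from the saved directory; no download is attempted
model = SentenceTransformer('/my/local/directory/for/models/')
embeddings = model.encode(["An example sentence."])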
The accepted answer doesn't work, as it doesn't produce the encapsulating folder and config.json that SentenceTransformer looks for.
I am running YOLOv3 custom object detection for my project, but it gives an error when I try to run my custom object detection. I am using Windows 10 and the Chrome browser. I need help solving this error.
Thanks in advance.
This is the command that I am trying to run:
!./darknet detector train "/content/gdrive/My Drive/darknet/obj.data" "/content/darknet/cfg/yolov3.cfg" "/content/gdrive/My Drive/darknet/darknet_53.conv.74"
[Error screenshot not included]
The issue was solved when Google Colab assigned me full resources. Now I am training my own YOLO custom object detector.
I am getting this error when training my model on Google Cloud while trying to run TensorFlow Object Detection:
gapic-google-cloud-logging-v2 0.91.3 has requirement google-gax<0.16dev,>=0.15.7, but you'll have google-gax 0.12.5 which is incompatible.
Any help on how to fix it?
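This is a pip dependency conflict: the installed google-gax 0.12.5 falls outside the range that gapic-google-cloud-logging-v2 requires. One common way to resolve this kind of conflict (a sketch; the exact pin may need adjusting for your environment) is to install a version inside the range named in the error message:

pip install --upgrade "google-gax==0.15.7"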
I have working Lambda code, but I have a new idea for a neural network system that I am trying to implement, for which I would require Keras (Python). I tried uploading the same working code along with the Keras dependencies as a ZIP file from my desktop, with an additional "import keras" statement. Funnily enough, the same code with the added "import keras" gives me an error on the developer portal saying the remote endpoint could not be reached, but when I remove the "import keras" statement, my code works perfectly fine. Why is this happening even though I include both the Keras dependencies and my Lambda code in the ZIP file? I understand that I am not using Keras anywhere in my existing code, but when I bundle the Keras dependencies and just try to import the package, logically, it should still work, right?
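For what it's worth, one way to see the underlying failure is to move the import inside the handler so the actual ImportError lands in the function's CloudWatch logs instead of failing while Lambda loads the module. A minimal diagnostic sketch, assuming a standard Python handler named lambda_handler:

import json
import traceback

def lambda_handler(event, context):
    # Importing inside the handler surfaces the real ImportError in the
    # response and logs, rather than failing at module load time
    try:
        import keras  # noqa: F401
        return {"statusCode": 200, "body": json.dumps("keras imported OK")}
    except Exception:
        return {"statusCode": 500, "body": json.dumps(traceback.format_exc())}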