I'm running the AutoML notebook from https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/tables/automl/notebooks/purchase_prediction/purchase_prediction.ipynb
In Colab the following line:
table = pd.read_csv(nested_gcs_uri, low_memory=False)
fails with the error in the subject.
I've tried pip install gcsfs, which reports Requirement already satisfied.
import gcsfs returns:
ModuleNotFoundError: No module named 'gcsfs'
The provided Purchase Prediction with AutoML Tables notebook works perfectly on AI Platform Notebooks in GCP. Before running any command, make sure that billing and the AI Platform, Compute Engine, and AutoML APIs are enabled in your GCP project. For some reason it throws an error when run in the Colab environment. The bug has already been reported on GitHub by @Matt Evans.
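A common Colab-specific workaround (an assumption on my part, not part of the answer above) is that pip installs gcsfs outside the kernel that is already running, so the runtime has to be restarted before the module becomes importable. A minimal sketch, with the GCS path as a placeholder:
%pip install gcsfs  # install into the active kernel, then restart the runtime (Runtime > Restart runtime)
import pandas as pd
nested_gcs_uri = "gs://your-bucket/path/to/table.csv"  # placeholder, not the notebook's real path
table = pd.read_csv(nested_gcs_uri, low_memory=False)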
In the latest version of Azure's Anomaly Detector service, which supports the multivariate Cognitive Service, we need to train a model and then consume it.
The Python quickstart documentation mentions a few imports that cannot be found:
from azure.ai.anomalydetector.models import DetectionRequest, ModelInfo
Both of these imports throw errors.
How can we use the multivariate anomaly detection service with the Python SDK?
This error occurred with version azure-ai-anomalydetector==3.0.0b2. It has been addressed in version azure-ai-anomalydetector==3.0.0b3.
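Assuming the older beta is still installed, upgrading the package should make the quickstart imports resolve; the version pin simply mirrors the one mentioned above:
pip install azure-ai-anomalydetector==3.0.0b3
After the upgrade, from azure.ai.anomalydetector.models import DetectionRequest, ModelInfo should import without errors.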
The problem is due to a recent change in the response format. To solve the issue, change the line with the error to:
model_status = self.ad_client.get_multivariate_model(trained_model_id).model_info.status
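For context, here is a minimal polling sketch around that line, assuming ad_client is the AnomalyDetectorClient set up as in the Python quickstart (where it is an attribute, self.ad_client) and trained_model_id was returned by train_multivariate_model; the status values and sleep interval are illustrative:
import time

model_status = None
while model_status not in ("READY", "FAILED"):
    # poll the training status via the attribute path from the fix above
    model_status = ad_client.get_multivariate_model(trained_model_id).model_info.status
    time.sleep(10)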
I am using the matplotlib library in my Python project, which in turn uses numpy. I have deployed the libraries as AWS Lambda layers and I am importing them in my AWS Lambda function. When I test the Lambda function, it throws the following error:
Importing the numpy C-extensions failed. This error can happen for many reasons, often due to issues with your setup or how NumPy was installed.
We have compiled some common reasons and troubleshooting tips at: numpy.org/devdocs/user/troubleshooting-importerror.html
Please note and check the following:
* The Python version is: Python3.8 from "/var/lang/bin/python3.8"
* The NumPy version is: "1.18.5"
Original error was: No module named 'numpy.core._multiarray_umath'
Any idea what the possible reason could be and how to resolve it?
I am answering my own question so that if anyone faces this issue in the future, the solution below might work for them as well.
The problem was that I compiled the required packages in a Windows 10 environment and then deployed them as layers to be used by the AWS Lambda function. AWS Lambda functions and layers run on Linux behind the scenes, so the packages compiled in the Windows environment were not compatible with the AWS Lambda function. When I compiled the required packages again in a Linux environment, deployed them as layers, and used them with the Lambda function, it worked like a charm!
This Medium article helped me in solving my issue.
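As an illustration of the fix (not taken from the original answer or the Medium article): the layer can be built on a Linux machine or inside a Linux container so that the compiled wheels match the Lambda runtime. Lambda's Python layers expect the packages under a top-level python/ directory; the package names and paths below are placeholders:
pip install numpy matplotlib -t python/
zip -r layer.zip python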
I'm testing Google App Engine and trying to run a simple function to upload files to either the Blobstore or Cloud Storage. I'm typing the Python code directly in the Cloud Shell of my instance. The code is failing when I call:
from google.appengine.ext import blobstore
I get the error code:
Traceback (most recent call last):
File "upload_test.py", line 1, in <module>
from google.appengine.api import users
ImportError: No module named 'google.appengine'
The documentation says: "You can use Google Cloud Shell, which comes with git and Cloud SDK already installed." Even so, I've tried installing a number of libraries:
gcloud components install app-engine-python
pip install google-cloud-datastore
pip install google-cloud-storage
pip install --upgrade google-api-python-client
I'm still getting the same error. How can I get the appengine library to work? Alternatively, is this the wrong method for creating an app that allows the user to upload files?
The google.appengine module is baked into the first-generation Python (2.7) runtime. It's not available to install via pip, in the second-generation (3.7) runtime, or in Cloud Shell.
The only way to use it is by writing and deploying a first-generation App Engine app.
Thanks @Dustin Ingram.
I found the answer on this page.
The current "correct" way of uploading to Cloud Storage is to use google.cloud.storage. The tutorial I linked above explains how to implement it.
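As a rough sketch of what that approach looks like (bucket name, function name, and file object here are placeholders, not taken from the tutorial):
from google.cloud import storage

def upload_to_gcs(file_obj, destination_name, bucket_name="your-bucket"):
    # The back-end receives the uploaded file and streams it to Cloud Storage.
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    blob = bucket.blob(destination_name)
    blob.upload_from_file(file_obj)
    return blob.public_url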
The impression I have, however, is that this uses twice the bandwidth of the solution via google.appengine. Originally, the front-end would receive an upload URL and send the file directly to the Blobstore (or to Cloud Storage). Now the application uploads to the back-end, which in turn uploads to Cloud Storage.
I'm not too worried, as I will not be dealing with excessively large files, but it seems strange that the ability to upload directly has been discontinued.
In any case, my problem has been solved.
BERT is a very powerful model for text classification, but implementing BERT requires much more code than most other models. bert-text is a PyPI package that gives developers a ready-to-use solution. I have installed it properly, but when I try to import it, it throws ModuleNotFoundError: No module named 'bert_text', even though I have written the name bert_text correctly.
I have tried it in Kaggle, Colab, and on my local machine, but the error is the same.
This is due to a refactor made by Yan Sun. The issue is already pending; you can go to this link and subscribe for an update when the developers provide a solution: https://github.com/SunYanCN/bert-text/issues/1
I am getting this error when training my model on Google Cloud while trying to run TensorFlow Object Detection:
gapic-google-cloud-logging-v2 0.91.3 has requirement google-gax<0.16dev,>=0.15.7, but you'll have google-gax 0.12.5 which is incompatible.
Any help on how to fix it?