Still getting ImportError with the Faker package - python-3.x

I'm trying to create dummy data from a dataset. I've looked for hours to see why I'm getting this ImportError. I have Faker 2.0.0 installed.
import unicodecsv as csv
from faker import Faker
from collections import defaultdict
ImportError: cannot import name 'Faker' from 'faker' (unknown location)
I'm still receiving this error message! I tried solutions from other forum questions, to no avail. Does anyone have suggestions?

I found it. If you are dealing with the same issue, look in the Scripts folder within your Python directory. You'll find an application named faker there as well. Rename it and you'll be good to go.
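A quick way to diagnose this kind of shadowing is to ask Python where it would actually load a module from. This is a minimal sketch using the standard-library importlib; it uses json as a stand-in module name, since the path faker resolves to will vary per machine:

```python
import importlib.util

def resolved_path(name):
    """Return the file Python would import for `name`, or None if not found."""
    spec = importlib.util.find_spec(name)
    return getattr(spec, "origin", None)

# If this prints something outside site-packages (e.g. a Scripts folder
# or your working directory), that file is shadowing the real package.
print(resolved_path("json"))
```

Calling `resolved_path("faker")` on the asker's machine would presumably have pointed at the stray Scripts/faker file rather than the installed package.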

Related

inspect.py file in folder makes importing pandas not work anymore

I am sorry if this is a silly question, but I came across some weird behaviour. I have a folder with some files, one of them named inspect.py, and importing pandas from that folder fails.
However, if I rename inspect.py to somethingelse.py, importing pandas starts working.
I would really like to understand why this is. I assume it has something to do with the module called inspect, which (I think?) comes installed by default.
Can anyone help me understand this, please?
Looking at numpy/ma/core.py I see
import builtins
import inspect
import operator
import warnings
import textwrap
import re
These are all standard-library modules. Because the script's directory comes first on sys.path, your local inspect.py gets imported instead of the standard-library one, which breaks the rest of np.ma.core's imports, and numpy in turn. And pandas depends on numpy.
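The shadowing is easy to reproduce in isolation. This sketch (the file names are illustrative) creates a throwaway directory containing a local inspect.py and a script that does `import inspect`; because the script's own directory lands at sys.path[0], the local file wins over the standard library:

```python
import pathlib
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    d = pathlib.Path(tmp)
    # A local file that shadows the standard-library inspect module.
    (d / "inspect.py").write_text(
        "raise RuntimeError('local inspect.py was imported')\n"
    )
    (d / "main.py").write_text("import inspect\n")
    # Run main.py as a script: its directory becomes sys.path[0].
    result = subprocess.run(
        [sys.executable, str(d / "main.py")],
        capture_output=True, text=True,
    )

print("local inspect.py was imported" in result.stderr)  # → True
```

The same mechanism explains the question above: any script run from that folder resolves `import inspect` to the local file before the standard library is ever consulted.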

ModuleNotFoundError: No module named 'google.cloud.automl_v1beta1.proto'

I am trying to follow this tutorial on Google Cloud Platform,
https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/master/notebooks/samples/tables/census_income_prediction/getting_started_notebook.ipynb. However, I am running into issues when I try to import the AutoML module, specifically the two lines below:
# AutoML library.
from google.cloud import automl_v1beta1 as automl
import google.cloud.automl_v1beta1.proto.data_types_pb2 as data_types
The first line works, but for the second one I get the error: ModuleNotFoundError: No module named 'google.cloud.automl_v1beta1.proto'. It seems there is no module called proto, and I cannot figure out how to resolve this. There are a couple of posts about not being able to find the module google.cloud; in my case I am able to import automl_v1beta1 from google.cloud, but not proto.data_types_pb2 from google.cloud.automl_v1beta1.
I think you can:
from google.cloud import automl_v1beta1 as automl
import google.cloud.automl_v1beta1.types as data_types
Or:
import google.cloud.automl_v1beta1 as automl
import google.cloud.automl_v1beta1.types as data_types
But (!) given the import errors, there may be other changes to the SDK in the code that follows.

Spark deep learning Import error

I am trying to replicate a deep learning project from https://medium.com/linagora-engineering/making-image-classification-simple-with-spark-deep-learning-f654a8b876b8. I am working on Spark version 1.6.3 and have installed Keras and TensorFlow, but every time I try to import from sparkdl it throws an error. I am working in PySpark. When I run this:
from sparkdl import readImages
I get this error:
File "C:\Users\HP\AppData\Local\Temp\spark-802a2258-3089-4ad7-b8cb-6815cbbb019a\userFiles-c9514201-07fa-45f9-9fd8-c8a3a0b4bf70\databricks_spark-deep-learning-0.1.0-spark2.1-s_2.11.jar\sparkdl\transformers\keras_image.py", line 20, in <module>
ImportError: cannot import name 'TypeConverters'
Can someone please help?
It's not a full fix, as I have yet to be able to import things from sparkdl in Jupyter notebooks as well, but:
readImages is a function in the pyspark.ml.image package, so to import it you need to:
from pyspark.ml.image import ImageSchema
To use it:
imagesDF = ImageSchema.readImages("/path/to/imageFolder")
This will give you a DataFrame of the images, with a column "image".
You can add a label column like so:
labeledImageDF = imagesDF.withColumn("label", lit(0))
but remember to import the lit function from pyspark.sql to use it:
from pyspark.sql.functions import lit
Hope this at least partially helps.

Error when import Spark GaussianMixture

I get the following error
object GaussianMixture is not a member of package org.apache.spark.ml.clustering
when I try to do following import from spark-shell
import org.apache.spark.ml.clustering.GaussianMixture
As this is part of Spark, I don't think any dependencies need to be added. Please help me with this issue.
I believe GaussianMixture is in the mllib package in your Spark version (it was only added to org.apache.spark.ml.clustering in Spark 2.0). Try to import:
import org.apache.spark.mllib.clustering.GaussianMixture

Pandas DataReader

This may be a really simple question but I am truly stuck.
I am trying to call Pandas' DataReader like:
from pandas.io.date import DataReader
but it does not find DataReader. I do not know what I am doing wrong, especially for such a simple thing. All I am trying to do is acquire data from Yahoo Finance.
Thanks a lot for the help.
The pandas data reader was removed from pandas; it is now a separate repo and a separate install:
https://github.com/pydata/pandas-datareader
From the readme:
Starting in 0.19.0, pandas no longer supports pandas.io.data or pandas.io.wb, so you must replace your imports from pandas.io with those from pandas_datareader:
from pandas.io import data, wb # becomes
from pandas_datareader import data, wb
Many functions from the data module have been included in the top-level API:
import pandas_datareader as pdr
pdr.get_data_yahoo('AAPL')
