Serialization error with Spark Pandas_UDF - apache-spark

I have a Python function that I converted to a pandas_udf function. It worked fine up until last week, but for the last few days it has been throwing the error below. We tried a simple Python function with pandas UDF and it does not throw this error, so I am not sure what exactly in my code is causing it. Has there been any change to the Spark environment? I am using Azure Databricks, if that helps.
Searching only turned up this link, but it is old.
Appreciate any pointers on how to fix this issue.
Thanks,
Yudi
SparkException: Job aborted due to stage failure: Task 0 in stage 23.0 failed 4 times, most recent failure: Lost task 0.3 in stage 23.0 (TID 252, 172.17.69.7, executor 0): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/databricks/spark/python/pyspark/serializers.py", line 180, in _read_with_length
return self.loads(obj)
File "/databricks/spark/python/pyspark/serializers.py", line 669, in loads
return pickle.loads(obj, encoding=encoding)
File "/databricks/spark/python/pyspark/cloudpickle.py", line 875, in subimport
__import__(name)
ImportError: No module named '_pandasujson'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/databricks/spark/python/pyspark/worker.py", line 394, in main
func, profiler, deserializer, serializer = read_udfs(pickleSer, infile, eval_type)
File "/databricks/spark/python/pyspark/worker.py", line 234, in read_udfs
arg_offsets, udf = read_single_udf(pickleSer, infile, eval_type, runner_conf)
File "/databricks/spark/python/pyspark/worker.py", line 160, in read_single_udf
f, return_type = read_command(pickleSer, infile)
File "/databricks/spark/python/pyspark/worker.py", line 69, in read_command
command = serializer._read_with_length(file)
File "/databricks/spark/python/pyspark/serializers.py", line 183, in _read_with_length
raise SerializationError("Caused by " + traceback.format_exc())
pyspark.serializers.SerializationError: Caused by Traceback (most recent call last):
File "/databricks/spark/python/pyspark/serializers.py", line 180, in _read_with_length
return self.loads(obj)
File "/databricks/spark/python/pyspark/serializers.py", line 669, in loads
return pickle.loads(obj, encoding=encoding)
File "/databricks/spark/python/pyspark/cloudpickle.py", line 875, in subimport
__import__(name)
ImportError: No module named '_pandasujson'
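Since a simple Python UDF still works, one thing worth checking is whether the driver and worker environments still match: the missing `_pandasujson` looks like a library-internal module, so a pandas or Python version mismatch after an environment change could plausibly break unpickling (an assumption, not a confirmed diagnosis). A minimal check, assuming a Databricks notebook where `sc` is predefined:
import sys
import pandas as pd

def versions(_):
    # Runs on a worker: report the Python and pandas versions it sees.
    import sys
    import pandas as pd
    yield (sys.version.split()[0], pd.__version__)

print("driver :", sys.version.split()[0], pd.__version__)
print("workers:", sc.parallelize([0], numSlices=1).mapPartitions(versions).collect())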

Related

Pandas UDF for pyspark - Package not found error

I am using the pandas UDF approach to scale my models. However, I am getting an error that the pmdarima package is not found. The code works fine when I run it in my notebook on the pandas dataframe itself, so the package is available for use in the notebook. From a few answers online, the error seems to be the package not being available on the worker nodes where the code tries to parallelize. Can someone help me resolve this? How can I install the package on my worker nodes, if that is the case?
FYI - I am working on Azure Databricks.
def funct1(grp_keys, df):
    # other statements
    model = pm.auto_arima(train_data['sum_hlqty'], X=x,
                          test='adf', trace=False,
                          maxiter=12, max_p=5, max_q=5,
                          njobs=-1)

forecast_df = sales.groupby('Col1', 'Col2').applyInPandas(
    funct1,
    schema="C1 string, C2 string, C3 date, C4 float, C5 float")
Py4JJavaError: An error occurred while calling o256.sql.
: org.apache.spark.SparkException: Job aborted.
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:230)
at com.databricks.sql.transaction.tahoe.files.TransactionalWriteEdge.$anonfun$writeFiles$5(TransactionalWriteEdge.scala:183)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$5(SQLExecution.scala:116)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:249)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$1(SQLExecution.scala:101)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:845)
at org.apache.spark.sql.execution.SQLExecution$.withCustomExecutionEnv(SQLExecution.scala:77)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:199)
at com.databricks.sql.transaction.tahoe.files.TransactionalWriteEdge.$anonfun$writeFiles$1(TransactionalWriteEdge.scala:135)
at com.databricks.logging.UsageLogging.$anonfun$recordOperation$4(UsageLogging.scala:431)
at com.databricks.logging.UsageLogging.$anonfun$withAttributionContext$1(UsageLogging.scala:239)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
at com.databricks.logging.UsageLogging.withAttributionContext(UsageLogging.scala:234)
at com.databricks.logging.UsageLogging.withAttributionContext$(UsageLogging.scala:231)
at com.databricks.spark.util.PublicDBLogging.withAttributionContext(DatabricksSparkUsageLogger.scala:19)
at com.databricks.logging.UsageLogging.withAttributionTags(UsageLogging.scala:276)
at com.databricks.logging.UsageLogging.withAttributionTags$(UsageLogging.scala:269)
at com.databricks.spark.util.PublicDBLogging.withAttributionTags(DatabricksSparkUsageLogger.scala:19)
at com.databricks.logging.UsageLogging.recordOperation(UsageLogging.scala:412)
at com.databricks.logging.UsageLogging.recordOperation$(UsageLogging.scala:338)
at com.databricks.spark.util.PublicDBLogging.recordOperation(DatabricksSparkUsageLogger.scala:19)
at com.databricks.spark.util.PublicDBLogging.recordOperation0(DatabricksSparkUsageLogger.scala:56)
at com.databricks.spark.util.DatabricksSparkUsageLogger.recordOperation(DatabricksSparkUsageLogger.scala:129)
at com.databricks.spark.util.UsageLogger.recordOperation(UsageLogger.scala:71)
at com.databricks.spark.util.UsageLogger.recordOperation$(UsageLogger.scala:58)
at com.databricks.spark.util.DatabricksSparkUsageLogger.recordOperation(DatabricksSparkUsageLogger.scala:85)
at com.databricks.spark.util.UsageLogging.recordOperation(UsageLogger.scala:401)
at com.databricks.spark.util.UsageLogging.recordOperation$(UsageLogger.scala:380)
at com.databricks.sql.transaction.tahoe.OptimisticTransaction.recordOperation(OptimisticTransaction.scala:84)
at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordDeltaOperation(DeltaLogging.scala:108)
at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordDeltaOperation$(DeltaLogging.scala:94)
at com.databricks.sql.transaction.tahoe.OptimisticTransaction.recordDeltaOperation(OptimisticTransaction.scala:84)
at com.databricks.sql.transaction.tahoe.files.TransactionalWriteEdge.writeFiles(TransactionalWriteEdge.scala:92)
at com.databricks.sql.transaction.tahoe.files.TransactionalWriteEdge.writeFiles$(TransactionalWriteEdge.scala:88)
at com.databricks.sql.transaction.tahoe.OptimisticTransaction.writeFiles(OptimisticTransaction.scala:84)
at com.databricks.sql.transaction.tahoe.files.TransactionalWrite.writeFiles(TransactionalWrite.scala:112)
at com.databricks.sql.transaction.tahoe.files.TransactionalWrite.writeFiles$(TransactionalWrite.scala:111)
at com.databricks.sql.transaction.tahoe.OptimisticTransaction.writeFiles(OptimisticTransaction.scala:84)
at com.databricks.sql.transaction.tahoe.commands.WriteIntoDelta.write(WriteIntoDelta.scala:112)
at com.databricks.sql.transaction.tahoe.commands.WriteIntoDelta.$anonfun$run$2(WriteIntoDelta.scala:71)
at com.databricks.sql.transaction.tahoe.commands.WriteIntoDelta.$anonfun$run$2$adapted(WriteIntoDelta.scala:70)
at com.databricks.sql.transaction.tahoe.DeltaLog.withNewTransaction(DeltaLog.scala:203)
at com.databricks.sql.transaction.tahoe.commands.WriteIntoDelta.$anonfun$run$1(WriteIntoDelta.scala:70)
at com.databricks.sql.acl.CheckPermissions$.trusted(CheckPermissions.scala:1128)
at com.databricks.sql.transaction.tahoe.commands.WriteIntoDelta.run(WriteIntoDelta.scala:69)
at com.databricks.sql.transaction.tahoe.catalog.WriteIntoDeltaBuilder$$anon$1.insert(DeltaTableV2.scala:193)
at org.apache.spark.sql.execution.datasources.v2.SupportsV1Write.writeWithV1(V1FallbackWriters.scala:118)
at org.apache.spark.sql.execution.datasources.v2.SupportsV1Write.writeWithV1$(V1FallbackWriters.scala:116)
at org.apache.spark.sql.execution.datasources.v2.AppendDataExecV1.writeWithV1(V1FallbackWriters.scala:38)
at org.apache.spark.sql.execution.datasources.v2.AppendDataExecV1.run(V1FallbackWriters.scala:44)
at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:39)
at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:39)
at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:45)
at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:234)
at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3709)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$5(SQLExecution.scala:116)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:249)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$1(SQLExecution.scala:101)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:845)
at org.apache.spark.sql.execution.SQLExecution$.withCustomExecutionEnv(SQLExecution.scala:77)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:199)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3707)
at org.apache.spark.sql.Dataset.<init>(Dataset.scala:234)
at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:104)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:845)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:101)
at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:680)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:845)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:675)
at sun.reflect.GeneratedMethodAccessor655.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:380)
at py4j.Gateway.invoke(Gateway.java:295)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:251)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 98 in stage 7774.0 failed 4 times, most recent failure: Lost task 98.3 in stage 7774.0 (TID 177293, 10.240.138.10, executor 133): org.apache.spark.api.python.PythonException: 'pyspark.serializers.SerializationError: Caused by Traceback (most recent call last):
File "/databricks/spark/python/pyspark/serializers.py", line 177, in _read_with_length
return self.loads(obj)
File "/databricks/spark/python/pyspark/serializers.py", line 466, in loads
return pickle.loads(obj, encoding=encoding)
File "/databricks/spark/python/pyspark/cloudpickle.py", line 1110, in subimport
__import__(name)
ModuleNotFoundError: No module named 'pmdarima'
Full traceback below:
Traceback (most recent call last):
File "/databricks/spark/python/pyspark/serializers.py", line 177, in _read_with_length
return self.loads(obj)
File "/databricks/spark/python/pyspark/serializers.py", line 466, in loads
return pickle.loads(obj, encoding=encoding)
File "/databricks/spark/python/pyspark/cloudpickle.py", line 1110, in subimport
__import__(name)
ModuleNotFoundError: No module named 'pmdarima'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/databricks/spark/python/pyspark/worker.py", line 638, in main
func, profiler, deserializer, serializer = read_udfs(pickleSer, infile, eval_type)
File "/databricks/spark/python/pyspark/worker.py", line 438, in read_udfs
arg_offsets, f = read_single_udf(pickleSer, infile, eval_type, runner_conf, udf_index=0)
File "/databricks/spark/python/pyspark/worker.py", line 255, in read_single_udf
f, return_type = read_command(pickleSer, infile)
File "/databricks/spark/python/pyspark/worker.py", line 75, in read_command
command = serializer._read_with_length(file)
File "/databricks/spark/python/pyspark/serializers.py", line 180, in _read_with_length
raise SerializationError("Caused by " + traceback.format_exc())
pyspark.serializers.SerializationError: Caused by Traceback (most recent call last):
File "/databricks/spark/python/pyspark/serializers.py", line 177, in _read_with_length
return self.loads(obj)
File "/databricks/spark/python/pyspark/serializers.py", line 466, in loads
return pickle.loads(obj, encoding=encoding)
File "/databricks/spark/python/pyspark/cloudpickle.py", line 1110, in subimport
__import__(name)
ModuleNotFoundError: No module named 'pmdarima'
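For what it's worth, the traceback shows that the worker unpickling the UDF cannot import pmdarima, which matches the package being installed only where the notebook runs. A minimal sketch of the usual Databricks fix, assuming Databricks Runtime 7.1+ where notebook-scoped libraries are supported:
# Run in its own notebook cell: %pip installs the package on the driver
# and on all worker nodes of the cluster.
%pip install pmdarima
Alternatively, attach pmdarima as a cluster-scoped library through the cluster's Libraries UI so every node gets it at startup.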

Write Shapefile to AWS S3 with geopandas in Glue Python Shell

I have successfully read a shapefile in zip format from my S3 bucket through geopandas, but I get an error when trying to output the same geodataframe as a shapefile to the same S3 bucket.
The code below is how I read the zip file, and it works nicely:
import boto3
import geopandas

## session for connecting to S3
session = boto3.session.Session(aws_access_key_id='MY-KEY-ID',
                                aws_secret_access_key='MY-KEY')
s3 = session.resource('s3')
bucket = s3.Bucket('my_bucket')

## read shapefile
TPG = bucket.Object(key='/shapefiles/grid.zip')
TPGrid = geopandas.read_file(TPG.get()['Body'])
But when I tried to output the same geodataframe like this:
TPGrid.to_file(filename='s3://my_bucket/output/TPGrid.zip', driver='ESRI Shapefile')
I will get error code:
ERROR:fiona._env:Only read-only mode is supported for /vsicurl
ERROR:fiona._env:Only read-only mode is supported for /vsicurl
ERROR:fiona._env:Only read-only mode is supported for /vsicurl
ERROR:fiona._env:Unable to open /vsis3/my_bucket/output/TPGrid.zip/TPGrid.shp or /vsis3/my_bucket/output/TPGrid.zip/TPGrid.SHP.
Traceback (most recent call last):
File "fiona/ogrext.pyx", line 1133, in fiona.ogrext.WritingSession.start
File "fiona/_err.pyx", line 291, in fiona._err.exc_wrap_pointer
fiona._err.CPLE_AppDefinedError: Unable to open /vsis3/my_bucket/output/TPGrid.zip/TPGrid.shp or /vsis3/my_bucket/output/TPGrid.zip/TPGrid.SHP.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmp/runscript.py", line 211, in <module>
runpy.run_path(temp_file_path, run_name='__main__')
File "/usr/local/lib/python3.6/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/usr/local/lib/python3.6/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/usr/local/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmp/glue-python-scripts-c8krhm5u/test_to_file_geo.py", line 40, in <module>
File "/glue/lib/installation/geopandas/geodataframe.py", line 1086, in to_file
_to_file(self, filename, driver, schema, index, **kwargs)
File "/glue/lib/installation/geopandas/io/file.py", line 328, in _to_file
filename, mode=mode, driver=driver, crs_wkt=crs_wkt, schema=schema, **kwargs
File "/glue/lib/installation/fiona/env.py", line 408, in wrapper
return f(*args, **kwargs)
File "/glue/lib/installation/fiona/__init__.py", line 274, in open
**kwargs)
File "/glue/lib/installation/fiona/collection.py", line 165, in __init__
self.session.start(self, **kwargs)
File "fiona/ogrext.pyx", line 1141, in fiona.ogrext.WritingSession.start
fiona.errors.DriverIOError: Unable to open /vsis3/my_bucket/output/TPGrid.zip/TPGrid.shp or /vsis3/my_bucket/output/TPGrid.zip/TPGrid.SHP.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmp/runscript.py", line 230, in <module>
raise e_type(e_value).with_traceback(new_stack)
File "/tmp/glue-python-scripts-c8krhm5u/test_to_file_geo.py", line 40, in <module>
File "/glue/lib/installation/geopandas/geodataframe.py", line 1086, in to_file
_to_file(self, filename, driver, schema, index, **kwargs)
File "/glue/lib/installation/geopandas/io/file.py", line 328, in _to_file
filename, mode=mode, driver=driver, crs_wkt=crs_wkt, schema=schema, **kwargs
File "/glue/lib/installation/fiona/env.py", line 408, in wrapper
return f(*args, **kwargs)
File "/glue/lib/installation/fiona/__init__.py", line 274, in open
**kwargs)
File "/glue/lib/installation/fiona/collection.py", line 165, in __init__
self.session.start(self, **kwargs)
File "fiona/ogrext.pyx", line 1141, in fiona.ogrext.WritingSession.start
fiona.errors.DriverIOError: Unable to open /vsis3/my_bucket/output/TPGrid.zip/TPGrid.shp or /vsis3/my_bucket/output/TPGrid.zip/TPGrid.SHP.
I have tried several ways, such as using '.csv' or '.shp', but none of them worked.
I am using Python 3.6 and the packages below; I hope this information helps:
geopandas-0.9.0
shapely-1.7.1
fiona-1.8.20
GDAL-3.2.3
I have been fighting with this problem all day.
Any help will be highly appreciated.
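The "Only read-only mode is supported for /vsicurl" lines suggest that the GDAL virtual filesystem used here cannot write to S3, so a common workaround is to write the shapefile to local disk first and then upload the result with boto3, which is already available in the script. A sketch under that assumption, reusing the bucket and key names from the question:
import os
import tempfile
import zipfile

# Write the shapefile to local scratch space (Glue Python Shell has /tmp).
tmpdir = tempfile.mkdtemp()
TPGrid.to_file(filename=os.path.join(tmpdir, 'TPGrid.shp'), driver='ESRI Shapefile')

# A shapefile is several sidecar files (.shp/.shx/.dbf/.prj); zip them together.
zip_path = os.path.join(tmpdir, 'TPGrid.zip')
with zipfile.ZipFile(zip_path, 'w') as zf:
    for name in os.listdir(tmpdir):
        if not name.endswith('.zip'):
            zf.write(os.path.join(tmpdir, name), arcname=name)

# Upload the zip with the same boto3 bucket object used for reading.
bucket.upload_file(zip_path, 'output/TPGrid.zip')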

Python3 multiprocessing shared dictionary consumed by all processes

I'm a beginner at multiprocessing, and I would like to use it to run code in parallel on streaming data.
To get started, I wrote the code below, but I get an error.
Could you please tell me the correct way to print to the screen?
Code:
import sys
from multiprocessing import Process, Manager
import time

def producer(dic, name):
    for i in range(10000):
        dic["A"] = i
        time.sleep(2)

def consumer(dic, name):
    for i in range(10000):
        aval = dic.get("A")
        #print(f" {name} - Val = {aval}")
        sys.stdout.write(f" {name} - Val = {aval}")
        sys.stdout.flush()
        time.sleep(2.2)

if __name__ == '__main__':
    manager = Manager()
    dic = manager.dict()
    Process(target=producer, args=(dic,"TT")).start()
    time.sleep(1)
    Process(target=consumer, args=(dic,"Con1")).start()
    Process(target=consumer, args=(dic,"Con2")).start()
When I run this in the Windows console, I get the error below. How can I get the consumer's output to print in the console? Thanks.
(base) PS D:\> python .\mulpro.py
Process Process-3:
Process Process-4:
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\multiprocessing\managers.py", line 811, in _callmethod
conn = self._tls.connection
AttributeError: 'ForkAwareLocal' object has no attribute 'connection'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\multiprocessing\process.py", line 297, in _bootstrap
self.run()
File "C:\ProgramData\Anaconda3\lib\multiprocessing\process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "D:\mulpro.py", line 19, in consumer
aval = dic.get("A")
File "<string>", line 2, in get
File "C:\ProgramData\Anaconda3\lib\multiprocessing\managers.py", line 815, in _callmethod
self._connect()
File "C:\ProgramData\Anaconda3\lib\multiprocessing\managers.py", line 802, in _connect
conn = self._Client(self._token.address, authkey=self._authkey)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\connection.py", line 490, in Client
c = PipeClient(address)
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\multiprocessing\connection.py", line 691, in PipeClient
_winapi.WaitNamedPipe(address, 1000)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\managers.py", line 811, in _callmethod
conn = self._tls.connection
FileNotFoundError: [WinError 2] The system cannot find the file specified
AttributeError: 'ForkAwareLocal' object has no attribute 'connection'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\multiprocessing\process.py", line 297, in _bootstrap
self.run()
File "C:\ProgramData\Anaconda3\lib\multiprocessing\process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "D:\mulpro.py", line 19, in consumer
aval = dic.get("A")
File "<string>", line 2, in get
File "C:\ProgramData\Anaconda3\lib\multiprocessing\managers.py", line 815, in _callmethod
self._connect()
File "C:\ProgramData\Anaconda3\lib\multiprocessing\managers.py", line 802, in _connect
conn = self._Client(self._token.address, authkey=self._authkey)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\connection.py", line 490, in Client
c = PipeClient(address)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\connection.py", line 691, in PipeClient
_winapi.WaitNamedPipe(address, 1000)
FileNotFoundError: [WinError 2] The system cannot find the file specified
Process Process-2:
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\multiprocessing\process.py", line 297, in _bootstrap
self.run()
File "C:\ProgramData\Anaconda3\lib\multiprocessing\process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "D:\mulpro.py", line 13, in producer
dic["A"] = i
File "<string>", line 2, in __setitem__
File "C:\ProgramData\Anaconda3\lib\multiprocessing\managers.py", line 818, in _callmethod
conn.send((self._id, methodname, args, kwds))
File "C:\ProgramData\Anaconda3\lib\multiprocessing\connection.py", line 206, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "C:\ProgramData\Anaconda3\lib\multiprocessing\connection.py", line 280, in _send_bytes
ov, err = _winapi.WriteFile(self._handle, buf, overlapped=True)
BrokenPipeError: [WinError 232] The pipe is being closed
The reason might be that the main process doesn't wait for the two child processes, so making your code like this could work:
def run():
    manager = Manager()
    dic = manager.dict()
    Process(target=producer, args=(dic,"TT")).start()
    time.sleep(1)
    Process(target=consumer, args=(dic,"Con1")).start()
    Process(target=consumer, args=(dic,"Con2")).start()
    while True:
        pass

if __name__ == '__main__':
    run()
However, it's very strange that if I append a dead loop directly in main instead of using another function, it still raises that exception. Anyway, the code above should help.
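A variant that avoids the busy loop (a sketch using the same producer/consumer functions as above): keep references to the Process objects and join() them, so the main process, and the Manager's server process it owns, stays alive until the children finish. The original error occurs because the Manager shuts down when the main process exits, leaving the children talking to a closed pipe.
if __name__ == '__main__':
    manager = Manager()
    dic = manager.dict()
    procs = [
        Process(target=producer, args=(dic, "TT")),
        Process(target=consumer, args=(dic, "Con1")),
        Process(target=consumer, args=(dic, "Con2")),
    ]
    for p in procs:
        p.start()
    for p in procs:
        p.join()  # block until each child exits, keeping the Manager alive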

Yahoo_Finance Package for Python - Share() does not work anymore

As of today, I am experiencing errors caused by the yahoo_finance package, version 1.4.
Here is a code example that causes the error:
from yahoo_finance import Share
Apple = Share("AAPL")
This results in the following error:
Traceback (most recent call last):
File "C:\Users\Julian\Anaconda3\lib\site-packages\yahoo_finance\__init__.py", line 120, in _request
_, results = response['query']['results'].popitem()
AttributeError: 'NoneType' object has no attribute 'popitem'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Julian\Anaconda3\lib\site-packages\yahoo_finance\__init__.py", line 123, in _request
raise YQLQueryError(response['error']['description'])
KeyError: 'error'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Julian\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2881, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-3-44fc1f59aa21>", line 1, in <module>
Apple = Share("AAPL")
File "C:\Users\Julian\Anaconda3\lib\site-packages\yahoo_finance\__init__.py", line 178, in __init__
self.refresh()
File "C:\Users\Julian\Anaconda3\lib\site-packages\yahoo_finance\__init__.py", line 142, in refresh
self.data_set = self._fetch()
File "C:\Users\Julian\Anaconda3\lib\site-packages\yahoo_finance\__init__.py", line 181, in _fetch
data = super(Share, self)._fetch()
File "C:\Users\Julian\Anaconda3\lib\site-packages\yahoo_finance\__init__.py", line 134, in _fetch
data = self._request(query)
File "C:\Users\Julian\Anaconda3\lib\site-packages\yahoo_finance\__init__.py", line 125, in _request
raise YQLResponseMalformedError()
yahoo_finance.YQLResponseMalformedError: Response malformed.
Do you experience similar issues, or is this only an issue for me personally?
If yes, what are potential fixes for this issue?
Thank you for your replies.
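The traceback shows the YQL query behind yahoo_finance coming back empty, which suggests the breakage is on Yahoo's side rather than in your code. One commonly suggested workaround (a sketch, assuming the separately installed community package yfinance, which does not rely on YQL):
# pip install yfinance  (a different package from yahoo_finance)
import yfinance as yf

apple = yf.Ticker("AAPL")
history = apple.history(period="1d")  # recent daily OHLCV data as a DataFrame
print(history["Close"].iloc[-1])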

Cannot install ggplot for Python on Windows 7, Anaconda 5

I am trying to install ggplot for Python, see below:
All three of the commands I tried return error messages. The first one is perhaps the most elaborate:
Exception:
Traceback (most recent call last):
File "C:\Users\akonstantinidis\AppData\Local\Continuum\anaconda3\lib\shutil.py", line 387, in _rmtree_unsafe
os.unlink(fullname)
PermissionError: [WinError 5] Access is denied: 'C:\\Users\\AKONST~1\\AppData\\Local\\Temp\\pip-y54n7lcl-uninstall\\users\\akonstantinidis\\appdata\\local\\continuum\\anaconda3\\lib\\site-packages\\numpy\\core\\multiarray.cp36-win_amd64.pyd'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\akonstantinidis\AppData\Local\Continuum\anaconda3\lib\site-packages\pip\basecommand.py", line 215, in main
status = self.run(options, args)
File "C:\Users\akonstantinidis\AppData\Local\Continuum\anaconda3\lib\site-packages\pip\commands\install.py", line 342, in run
prefix=options.prefix_path,
File "C:\Users\akonstantinidis\AppData\Local\Continuum\anaconda3\lib\site-packages\pip\req\req_set.py", line 795, in install
requirement.commit_uninstall()
File "C:\Users\akonstantinidis\AppData\Local\Continuum\anaconda3\lib\site-packages\pip\req\req_install.py", line 767, in commit_uninstall
self.uninstalled.commit()
File "C:\Users\akonstantinidis\AppData\Local\Continuum\anaconda3\lib\site-packages\pip\req\req_uninstall.py", line 142, in commit
rmtree(self.save_dir)
File "C:\Users\akonstantinidis\AppData\Local\Continuum\anaconda3\lib\site-packages\pip\_vendor\retrying.py", line 49, in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
File "C:\Users\akonstantinidis\AppData\Local\Continuum\anaconda3\lib\site-packages\pip\_vendor\retrying.py", line 212, in call
raise attempt.get()
File "C:\Users\akonstantinidis\AppData\Local\Continuum\anaconda3\lib\site-packages\pip\_vendor\retrying.py", line 247, in get
six.reraise(self.value[0], self.value[1], self.value[2])
File "C:\Users\akonstantinidis\AppData\Local\Continuum\anaconda3\lib\site-packages\six.py", line 686, in reraise
File "C:\Users\akonstantinidis\AppData\Local\Continuum\anaconda3\lib\site-packages\pip\_vendor\retrying.py", line 200, in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
File "C:\Users\akonstantinidis\AppData\Local\Continuum\anaconda3\lib\site-packages\pip\utils\__init__.py", line 102, in rmtree
onerror=rmtree_errorhandler)
File "C:\Users\akonstantinidis\AppData\Local\Continuum\anaconda3\lib\shutil.py", line 494, in rmtree
return _rmtree_unsafe(path, onerror)
File "C:\Users\akonstantinidis\AppData\Local\Continuum\anaconda3\lib\shutil.py", line 384, in _rmtree_unsafe
_rmtree_unsafe(fullname, onerror)
File "C:\Users\akonstantinidis\AppData\Local\Continuum\anaconda3\lib\shutil.py", line 384, in _rmtree_unsafe
_rmtree_unsafe(fullname, onerror)
File "C:\Users\akonstantinidis\AppData\Local\Continuum\anaconda3\lib\shutil.py", line 384, in _rmtree_unsafe
_rmtree_unsafe(fullname, onerror)
[Previous line repeated 6 more times]
File "C:\Users\akonstantinidis\AppData\Local\Continuum\anaconda3\lib\shutil.py", line 389, in _rmtree_unsafe
onerror(os.unlink, fullname, sys.exc_info())
File "C:\Users\akonstantinidis\AppData\Local\Continuum\anaconda3\lib\site-packages\pip\utils\__init__.py", line 114, in rmtree_errorhandler
func(path)
PermissionError: [WinError 5] Access is denied: 'C:\\Users\\AKONST~1\\AppData\\Local\\Temp\\pip-y54n7lcl-uninstall\\users\\akonstantinidis\\appdata\\local\\continuum\\anaconda3\\lib\\site-packages\\numpy\\core\\multiarray.cp36-win_amd64.pyd'
I am not sure what these messages imply or how they should be dealt with.
Your advice will be appreciated.
Try running the installation as administrator. Right-click the Anaconda Prompt, select "Run as administrator", and then run the installation. You will need an admin user to do this.
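A minimal sketch of both options, assuming pip is the installer being used:
# From an Anaconda Prompt opened with "Run as administrator":
pip install ggplot
# Or, without admin rights, install into the per-user site-packages:
pip install --user ggplot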
