AttributeError: 'DataFrame' object has no attribute 'droplevel' in pandas - python-3.x

I am getting a strange (to my understanding) message when I try to drop a level from a multi-indexed pandas dataframe.
For a reproducible example:
toy.to_json()
'{"["ISRG","EPS_diluted"]":{"2004-12-31":0.33,"2005-01-28":0.33,"2005-03-31":0.25,"2005-04-01":0.25,"2005-04-29":0.25},"["DHR","EPS_diluted"]":{"2004-12-31":0.67,"2005-01-28":0.67,"2005-03-31":0.67,"2005-04-01":0.58,"2005-04-29":0.58},"["BDX","EPS_diluted"]":{"2004-12-31":0.75,"2005-01-28":0.75,"2005-03-31":0.72,"2005-04-01":0.72,"2005-04-29":0.72},"["SYK","EPS_diluted"]":{"2004-12-31":0.4,"2005-01-28":0.4,"2005-03-31":0.42,"2005-04-01":0.42,"2005-04-29":0.42},"["BSX","EPS_diluted"]":{"2004-12-31":0.35,"2005-01-28":0.35,"2005-03-31":0.42,"2005-04-01":0.42,"2005-04-29":0.42},"["BAX","EPS_diluted"]":{"2004-12-31":0.18,"2005-01-28":0.18,"2005-03-31":0.36,"2005-04-01":0.36,"2005-04-29":0.36},"["EW","EPS_diluted"]":{"2004-12-31":0.4,"2005-01-28":0.4,"2005-03-31":0.5,"2005-04-01":0.5,"2005-04-29":0.5},"["MDT","EPS_diluted"]":{"2004-12-31":0.44,"2005-01-28":0.45,"2005-03-31":0.45,"2005-04-01":0.45,"2005-04-29":0.16},"["ABT","EPS_diluted"]":{"2004-12-31":0.63,"2005-01-28":0.63,"2005-03-31":0.53,"2005-04-01":0.53,"2005-04-29":0.53}}'
toy.droplevel(level = 1, axis = 1)
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-33-982eee5ba162> in <module>()
----> 1 toy.droplevel(level = 1, axis = 1)

C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\pandas\core\generic.py in __getattr__(self, name)
   4370         if self._info_axis._can_hold_identifiers_and_holds_name(name):
   4371             return self[name]
-> 4372         return object.__getattribute__(self, name)
   4373
   4374     def __setattr__(self, name, value):

AttributeError: 'DataFrame' object has no attribute 'droplevel'

The problem is the use of an older pandas version, because if you check DataFrame.droplevel:

New in version 0.24.0.

The solution is to use MultiIndex.droplevel on the columns instead:
toy.columns = toy.columns.droplevel(level = 1)
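For illustration, a minimal sketch that applies the fix; the frame below is a hypothetical, abbreviated reconstruction of toy (two tickers, two dates) from the JSON above:

import pandas as pd

# Hypothetical, abbreviated reconstruction of the question's frame
toy = pd.DataFrame(
    [[0.33, 0.67], [0.33, 0.67]],
    index=pd.to_datetime(["2004-12-31", "2005-01-28"]),
    columns=pd.MultiIndex.from_tuples([("ISRG", "EPS_diluted"),
                                       ("DHR", "EPS_diluted")]),
)

# Works on old pandas: drop the second level of the column MultiIndex
toy.columns = toy.columns.droplevel(level=1)

# On pandas >= 0.24.0 the frame-level method is equivalent:
# toy = toy.droplevel(level=1, axis=1)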

Related

Why does my custom dataset give an attribute error?

My initial data is a pandas DataFrame with columns 'title' and 'label'. I want to make a custom dataset from it, so I made the dataset like below. I'm working on Google Colab.
class newsDataset(torch.utils.data.Dataset):
    def __init__(self, train=True, transform=None):
        if train:
            self.file = ttrain
        else:
            self.file = ttest
        self.text_list = self.file['title'].values.tolist()
        self.class_list = self.file['label'].values.tolist()

    def __len__(self):
        return len(self.text_list)

    def __getitem__(self, idx):
        label = self.class_list[idx]
        text = self.text_list[idx]
        if self.transform is not None:
            text = self.transform(text)
        return label, text
And this is how I call the DataLoader:
trainset=newsDataset()
train_iter = DataLoader(trainset)
iter(train_iter).next()
and it gives
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-153-9872744bc8a9> in <module>()
----> 1 iter(train_iter).next()

5 frames
/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataset.py in __getattr__(self, attribute_name)
     81                 return function
     82             else:
---> 83                 raise AttributeError
     84
     85     @classmethod

AttributeError:
There was no exact error message. Can anybody help me?
Please add the following missing line to your __init__ method:
self.transform = transform
You don't have a self.transform attribute, so you need to initialize it in __init__.
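For clarity, a sketch of the corrected __init__ (the rest of the class stays the same; ttrain and ttest are the train/test DataFrames assumed by the question):

def __init__(self, train=True, transform=None):
    self.file = ttrain if train else ttest
    self.transform = transform  # the missing line: store the transform
    self.text_list = self.file['title'].values.tolist()
    self.class_list = self.file['label'].values.tolist()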

Why can't I pickle my custom exception in Python

I am using Python 3.6. I defined a custom exception following this page: https://docs.python.org/3.6/tutorial/errors.html
class MyException(Exception):
    def __init__(self, a):
        self.a = a
Now, if I try to pickle + unpickle this exception, I get the following error:
>>> e = MyException(a=1); pickle.loads(pickle.dumps(e))
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-44-413e2ac6234d> in <module>
----> 1 e = MyException(a=1); pickle.loads(pickle.dumps(e))

TypeError: __init__() missing 1 required positional argument: 'a'
Does anyone know why?
It seems the Exception base class gives special treatment to its constructor arguments: pickle reconstructs an exception from self.args (BaseException.__reduce__ returns the class together with self.args), and since your __init__ never forwards a to the base class, args stays empty and unpickling calls MyException() with no arguments.
You can fix this by properly calling the base class in your __init__ method:
>>> class MyException(Exception):
...     def __init__(self, a):
...         super().__init__(a)
...         self.a = a
...
>>> pickle.loads(pickle.dumps(MyException(a=1)))
MyException(1,)
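To see the mechanism, compare __reduce__ on the two versions of the class (a quick interactive sketch; Broken is the original class renamed here for contrast, and the exact reprs may vary slightly by Python version):

>>> class Broken(Exception):
...     def __init__(self, a):
...         self.a = a
...
>>> Broken(a=1).__reduce__()          # args is empty: nothing was forwarded
(<class '__main__.Broken'>, (), {'a': 1})
>>> MyException(a=1).__reduce__()     # fixed class: 'a' survives in args
(<class '__main__.MyException'>, (1,), {'a': 1})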

module 'pandas' has no attribute 'series'

I got this error while running this code:
import numpy as np
import pandas as pd
labels = ['a','b','c']
my_list = [10,20,30]
arr = np.array(my_list)
d = {'a':10,'b':20,'c':30}
pd.series(data = my_list)
Full error message:
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-10-494578c29940> in <module>
----> 1 pd.series(data = my_list)

F:\New folder (8)\lib\site-packages\pandas\__init__.py in __getattr__(name)
    260         return _SparseArray
    261
--> 262     raise AttributeError(f"module 'pandas' has no attribute '{name}'")
    263
    264

AttributeError: module 'pandas' has no attribute 'series'
Series is a Pandas class, so it starts with a capital letter. The below should work.
pd.Series(data = my_list)
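For reference, a few working variants built from the objects the question already defines (a quick sketch):

import pandas as pd

labels = ['a', 'b', 'c']
my_list = [10, 20, 30]
d = {'a': 10, 'b': 20, 'c': 30}

pd.Series(data=my_list)                 # default RangeIndex 0..2
pd.Series(data=my_list, index=labels)   # custom index from labels
pd.Series(d)                            # dict keys become the index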

Why am I getting AttributeError: 'Series' object has no attribute 'to_datetime' and AttributeError: 'Series' object has no attribute 'concat'

DATA SET HERE https://drive.google.com/open?id=1r24rrKWcIpA1x34tPY8olJFMtjzl0IRn
I am trying to convert my time series to type DateTime. To do that, I needed to make all the numbers, e.g. (1256, 430, 7), the same width, e.g. (1256, 0430, 0007), for to_datetime() to work.
So first I separated the entries according to their length, prepended the required number of zeros, and concatenated the separated Series back into one.
FIRST ERROR
This error was sorted out by using Series.append() instead. Then I tried to_datetime().
SECOND ERROR
I can't figure out what I am doing wrong. I updated my pandas library to the latest version, but the problem remains. I also tried this on Google Colab, thinking there might be some problem with my local pandas installation.
a='0'+arr_time[arr_time.astype(str).str.len()==3].astype(int).astype(str)
b='0'+dep_time[dep_time.astype(str).str.len()==3].astype(int).astype(str)
c='00'+arr_time[arr_time.astype(str).str.len()==2].astype(int).astype(str)
d='00'+dep_time[dep_time.astype(str).str.len()==2].astype(int).astype(str)
e='000'+arr_time[arr_time.astype(str).str.len()==1].astype(int).astype(str)
f='000'+dep_time[dep_time.astype(str).str.len()==1].astype(int).astype(str)
g=arr_time[arr_time.astype(str).str.len()==4].astype(int).astype(str)
h=dep_time[dep_time.astype(str).str.len()==4].astype(int).astype(str)
arr_time=pd.concat([a,c,e,g])
dep_time=pd.concat([b,d,f,h])
'''concat() is then replaced by append(); ERROR detail is below:

AttributeError                            Traceback (most recent call last)
<ipython-input-20-61e7a2e98b70> in <module>()
----> 1 arr_time=pd.concat([aa,ba,ca,pa])
      2 dep_time=pd.concat([ad,bd,cd,pa])

/usr/local/lib/python3.6/dist-packages/pandas/core/generic.py in __getattr__(self, name)
   5065         if self._info_axis._can_hold_identifiers_and_holds_name(name):
   5066             return self[name]
-> 5067         return object.__getattribute__(self, name)
   5068
   5069     def __setattr__(self, name, value):

AttributeError: 'Series' object has no attribute 'concat'
'''
arr_time=a.append(c).append(e).append(g)
dep_time=b.append(d).append(f).append(h)
datetime=arr_time.to_datetime(format="%H%M")
'''Second error (BOTH OF THEM LOOK ALIKE):

AttributeError                            Traceback (most recent call last)
<ipython-input-13-5a63dad5c284> in <module>
----> 1 datetime=arr_time.to_datetime(format="%H%M")

~\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\generic.py in __getattr__(self, name)
   5065         if self._info_axis._can_hold_identifiers_and_holds_name(name):
   5066             return self[name]
-> 5067         return object.__getattribute__(self, name)
   5068
   5069     def __setattr__(self, name, value):

AttributeError: 'Series' object has no attribute 'to_datetime'
'''
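A hedged sketch of the usual fix (assuming arr_time is the numeric Series from the question): to_datetime is a module-level pandas function rather than a Series method, and Series.str.zfill can replace the length-by-length zero padding:

import pandas as pd

# Pad every value to four digits in one step (e.g. 7 -> "0007", 430 -> "0430")
arr_time_padded = arr_time.astype(int).astype(str).str.zfill(4)

# Call to_datetime on the pandas module, not on the Series itself
datetime_series = pd.to_datetime(arr_time_padded, format="%H%M")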

pyspark 2.2: 'DataFrame' object has no attribute 'map'; backward compatibility is missing, how do I solve it? [duplicate]

This question already has answers here:
AttributeError: 'DataFrame' object has no attribute 'map'
(2 answers)
Closed 5 years ago.
When I was working with Spark 1.6, the code below worked fine:
ddl = sqlContext.sql("""show create table {mytable}""".format(mytable="""mytest.my_dummytable"""))
map(''.join, ddl\
    .map(lambda my_row: [str(data).replace("`", "'") for data in my_row])\
    .collect())
However, when I moved to Spark 2.2, I got the following exception:
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-> in <module>()
      1 ddl = sqlContext.sql("""show create table {mytable}""".format(mytable="""mytest.my_dummytable"""))
----> 2 map(''.join, ddl.map(lambda my_row: [str(data).replace("`", "'") for data in my_row]).collect())

spark2/python/pyspark/sql/dataframe.py in __getattr__(self, name)
            if name not in self.columns:
                raise AttributeError(
->                  "'%s' object has no attribute '%s'" % (self.__class__.__name__, name))
            jc = self._jdf.apply(name)
            return Column(jc)

AttributeError: 'DataFrame' object has no attribute 'map'
You have to call .rdd first: Spark 2.0 stopped aliasing df.map() to df.rdd.map(). See the linked duplicate.
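Applied to the snippet above, a minimal sketch of the Spark 2.x version:

ddl = sqlContext.sql("show create table {mytable}".format(mytable="mytest.my_dummytable"))

# Drop to the underlying RDD explicitly before calling map()
result = map(''.join,
             ddl.rdd
                .map(lambda my_row: [str(data).replace("`", "'") for data in my_row])
                .collect())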
