Filling nulls with a list in Pandas using fillna - python-3.x

Given a pd.Series, I would like to replace null values with a list. That is, given:
import numpy as np
import pandas as pd
ser = pd.Series([0,1,np.nan])
I want a function that would return
0 0
1 1
2 [nan]
If I try using the natural function for this, namely fillna:
result = ser.fillna([np.nan])
I get the error
TypeError: "value" parameter must be a scalar or dict, but you passed a "list"
Any suggestions for a simple way to achieve this?

Use apply, because fillna works with scalars only:
print (ser.apply(lambda x: [np.nan] if pd.isnull(x) else x))
0 0
1 1
2 [nan]
dtype: object

You can change the dtype to object:
ser = ser.astype('object')
Then assign the list [np.nan] to the null positions:
ser.loc[ser.isnull()] = [[np.nan]]
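For reference, a minimal end-to-end sketch of this approach (exact output formatting may vary by pandas version):
import numpy as np
import pandas as pd

ser = pd.Series([0, 1, np.nan])
ser = ser.astype('object')          # object dtype can hold list values
ser.loc[ser.isnull()] = [[np.nan]]  # each null slot receives the list [nan]
print(ser[2])                       # [nan]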

I ended up using
ser.loc[ser.isnull()] = ser.loc[ser.isnull()].apply(lambda x: [np.nan])
because pd.isnull(x) would give me ambiguous truth values error ( i have other lists in my series too ). This is a combination of YOBEN_S' and jezrael's answer.

fillna can take a Series, and a list can be converted to a Series. Wrapping your list in pd.Series() worked for me:
result = ser.fillna(pd.Series([np.nan]))
result
0 0.0
1 1.0
2 NaN
dtype: float64

Related

Syntax for np.where replace column values [duplicate]

Background
I just upgraded my Pandas from 0.11 to 0.13.0rc1. Now, the application is popping out many new warnings. One of them looks like this:
E:\FinReporter\FM_EXT.py:449: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_index,col_indexer] = value instead
quote_df['TVol'] = quote_df['TVol']/TVOL_SCALE
I want to know what exactly it means. Do I need to change something?
How should I suppress the warning if I insist on using quote_df['TVol'] = quote_df['TVol']/TVOL_SCALE?
The function that gives errors
def _decode_stock_quote(list_of_150_stk_str):
    """decode the webpage and return dataframe"""
    from cStringIO import StringIO
    str_of_all = "".join(list_of_150_stk_str)
    quote_df = pd.read_csv(StringIO(str_of_all), sep=',', names=list('ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefg'))  # dtype={'A': object, 'B': object, 'C': np.float64}
    quote_df.rename(columns={'A':'STK', 'B':'TOpen', 'C':'TPCLOSE', 'D':'TPrice', 'E':'THigh', 'F':'TLow', 'I':'TVol', 'J':'TAmt', 'e':'TDate', 'f':'TTime'}, inplace=True)
    quote_df = quote_df.ix[:,[0,3,2,1,4,5,8,9,30,31]]
    quote_df['TClose'] = quote_df['TPrice']
    quote_df['RT'] = 100 * (quote_df['TPrice']/quote_df['TPCLOSE'] - 1)
    quote_df['TVol'] = quote_df['TVol']/TVOL_SCALE
    quote_df['TAmt'] = quote_df['TAmt']/TAMT_SCALE
    quote_df['STK_ID'] = quote_df['STK'].str.slice(13,19)
    quote_df['STK_Name'] = quote_df['STK'].str.slice(21,30)  # .decode('gb2312')
    quote_df['TDate'] = quote_df.TDate.map(lambda x: x[0:4]+x[5:7]+x[8:10])
    return quote_df
More error messages
E:\FinReporter\FM_EXT.py:449: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_index,col_indexer] = value instead
quote_df['TVol'] = quote_df['TVol']/TVOL_SCALE
E:\FinReporter\FM_EXT.py:450: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_index,col_indexer] = value instead
quote_df['TAmt'] = quote_df['TAmt']/TAMT_SCALE
E:\FinReporter\FM_EXT.py:453: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_index,col_indexer] = value instead
quote_df['TDate'] = quote_df.TDate.map(lambda x: x[0:4]+x[5:7]+x[8:10])
The SettingWithCopyWarning was created to flag potentially confusing "chained" assignments, such as the following, which does not always work as expected, particularly when the first selection returns a copy. [see GH5390 and GH5597 for background discussion.]
df[df['A'] > 2]['B'] = new_val # new_val not set in df
The warning offers a suggestion to rewrite as follows:
df.loc[df['A'] > 2, 'B'] = new_val
However, this doesn't fit your usage, which is equivalent to:
df = df[df['A'] > 2]
df['B'] = new_val
While it's clear that you don't care about writes making it back to the original frame (since you are overwriting the reference to it), unfortunately this pattern cannot be differentiated from the first chained assignment example. Hence the (false positive) warning. The potential for false positives is addressed in the docs on indexing, if you'd like to read further. You can safely disable this new warning with the following assignment.
import pandas as pd
pd.options.mode.chained_assignment = None # default='warn'
Other Resources
pandas User Guide: Indexing and selecting data
Python Data Science Handbook: Data Indexing and Selection
Real Python: SettingWithCopyWarning in Pandas: Views vs Copies
Dataquest: SettingwithCopyWarning: How to Fix This Warning in Pandas
Towards Data Science: Explaining the SettingWithCopyWarning in pandas
How to deal with SettingWithCopyWarning in Pandas?
This post is meant for readers who:
Would like to understand what this warning means
Would like to understand different ways of suppressing this warning
Would like to understand how to improve their code and follow good practices to avoid this warning in the future.
Setup
np.random.seed(0)
df = pd.DataFrame(np.random.choice(10, (3, 5)), columns=list('ABCDE'))
df
A B C D E
0 5 0 3 3 7
1 9 3 5 2 4
2 7 6 8 8 1
What is the SettingWithCopyWarning?
To know how to deal with this warning, it is important to understand what it means and why it is raised in the first place.
When filtering DataFrames, it is possible to slice/index a frame to return either a view or a copy, depending on the internal layout and various implementation details. A "view" is, as the term suggests, a view into the original data, so modifying the view may modify the original object. On the other hand, a "copy" is a replication of data from the original, and modifying the copy has no effect on the original.
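As an aside, here is a minimal sketch of the view/copy distinction using plain NumPy arrays (which back DataFrame columns); pandas' own view-versus-copy behaviour depends on internal layout and version, which is exactly why the warning exists:
import numpy as np

arr = np.arange(6).reshape(2, 3)
view = arr[:, :2]           # basic slicing returns a view into arr
copy = arr[:, :2].copy()    # .copy() returns independent data

view[0, 0] = 99             # writes through to the original array
copy[0, 0] = -1             # does not affect the original array
print(arr[0, 0])            # 99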
As mentioned by other answers, the SettingWithCopyWarning was created to flag "chained assignment" operations. Consider df in the setup above. Suppose you would like to select all values in column "B" where the value in column "A" is > 5. Pandas allows you to do this in different ways, some more correct than others. For example,
df[df.A > 5]['B']
1 3
2 6
Name: B, dtype: int64
And,
df.loc[df.A > 5, 'B']
1 3
2 6
Name: B, dtype: int64
These return the same result, so if you are only reading these values, it makes no difference. So, what is the issue? The problem with chained assignment is that it is generally difficult to predict whether a view or a copy is returned, so this largely becomes an issue when you are attempting to assign values back. To build on the earlier example, consider how this code is executed by the interpreter:
df.loc[df.A > 5, 'B'] = 4
# becomes
df.__setitem__((df.A > 5, 'B'), 4)
With a single __setitem__ call to df. OTOH, consider this code:
df[df.A > 5]['B'] = 4
# becomes
df.__getitem__(df.A > 5).__setitem__('B', 4)
Now, depending on whether __getitem__ returned a view or a copy, the __setitem__ operation may not work.
In general, you should use loc for label-based assignment, and iloc for integer/positional based assignment, as the spec guarantees that they always operate on the original. Additionally, for setting a single cell, you should use at and iat.
More can be found in the documentation.
Note
All boolean indexing operations done with loc can also be done with iloc. The only difference is that iloc expects either integer positions or a NumPy array of boolean values for the index, and integer positions for the columns.
For example,
df.loc[df.A > 5, 'B'] = 4
Can be written as
df.iloc[(df.A > 5).values, 1] = 4
And,
df.loc[1, 'A'] = 100
Can be written as
df.iloc[1, 0] = 100
And so on.
Just tell me how to suppress the warning!
Consider a simple operation on the "A" column of df. Selecting "A" and dividing by 2 will raise the warning, but the operation will work.
df2 = df[['A']]
df2['A'] /= 2
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/IPython/__main__.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
df2
A
0 2.5
1 4.5
2 3.5
There are a couple ways of directly silencing this warning:
(recommended) Use loc to slice subsets:
df2 = df.loc[:, ['A']]
df2['A'] /= 2 # Does not raise
Change pd.options.mode.chained_assignment
Can be set to None, "warn", or "raise". "warn" is the default. None will suppress the warning entirely, and "raise" will throw a SettingWithCopyError, preventing the operation from going through.
pd.options.mode.chained_assignment = None
df2['A'] /= 2
Make a deepcopy
df2 = df[['A']].copy(deep=True)
df2['A'] /= 2
@Peter Cotton in the comments came up with a nice way of non-intrusively changing the mode (modified from this gist) using a context manager, to set the mode only as long as it is required, and then reset it back to the original state when finished.
class ChainedAssignment:
    def __init__(self, chained=None):
        acceptable = [None, 'warn', 'raise']
        assert chained in acceptable, "chained must be in " + str(acceptable)
        self.swcw = chained

    def __enter__(self):
        self.saved_swcw = pd.options.mode.chained_assignment
        pd.options.mode.chained_assignment = self.swcw
        return self

    def __exit__(self, *args):
        pd.options.mode.chained_assignment = self.saved_swcw
The usage is as follows:
# Some code here
with ChainedAssignment():
    df2['A'] /= 2
# More code follows
Or, to raise the exception
with ChainedAssignment(chained='raise'):
    df2['A'] /= 2
SettingWithCopyError:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
The "XY Problem": What am I doing wrong?
A lot of the time, users attempt to look for ways of suppressing this exception without fully understanding why it was raised in the first place. This is a good example of an XY problem, where users attempt to solve a problem "Y" that is actually a symptom of a deeper rooted problem "X". Below, questions are posed based on common problems that run into this warning, and solutions are then presented.
Question 1
I have a DataFrame
df
A B C D E
0 5 0 3 3 7
1 9 3 5 2 4
2 7 6 8 8 1
I want to set the values in column "A" that are > 5 to 1000. My expected output is
A B C D E
0 5 0 3 3 7
1 1000 3 5 2 4
2 1000 6 8 8 1
Wrong way to do this:
df.A[df.A > 5] = 1000 # works, because df.A returns a view
df[df.A > 5]['A'] = 1000 # does not work
df.loc[df.A > 5]['A'] = 1000 # does not work
Right way using loc:
df.loc[df.A > 5, 'A'] = 1000
Question 2¹
I am trying to set the value in cell (1, 'D') to 12345. My expected output is
A B C D E
0 5 0 3 3 7
1 9 3 5 12345 4
2 7 6 8 8 1
I have tried different ways of accessing this cell, such as
df['D'][1]. What is the best way to do this?
¹ This question isn't specifically related to the warning, but it is good to understand how to do this particular operation correctly, so as to avoid situations where the warning could potentially arise in the future.
You can use any of the following methods to do this.
df.loc[1, 'D'] = 12345
df.iloc[1, 3] = 12345
df.at[1, 'D'] = 12345
df.iat[1, 3] = 12345
Question 3
I am trying to subset values based on some condition. I have a
DataFrame
A B C D E
1 9 3 5 2 4
2 7 6 8 8 1
I would like to set the values in "D" to 123 wherever "C" == 5. I tried
df2.loc[df2.C == 5, 'D'] = 123
Which seems fine but I am still getting the
SettingWithCopyWarning! How do I fix this?
This is actually probably because of code higher up in your pipeline. Did you create df2 from something larger, like
df2 = df[df.A > 5]
? In this case, boolean indexing will return a view, so df2 will reference the original. What you'd need to do is assign df2 to a copy:
df2 = df[df.A > 5].copy()
# Or,
# df2 = df.loc[df.A > 5, :]
Question 4
I'm trying to drop column "C" in-place from
A B C D E
1 9 3 5 2 4
2 7 6 8 8 1
But using
df2.drop('C', axis=1, inplace=True)
Throws SettingWithCopyWarning. Why is this happening?
This is because df2 must have been created as a view from some other slicing operation, such as
df2 = df[df.A > 5]
The solution here is to either make a copy() of df, or use loc, as before.
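For instance, a minimal sketch of the copy-based fix (assuming df2 was sliced from df as above):
df2 = df[df.A > 5].copy()            # df2 now owns its data
df2.drop('C', axis=1, inplace=True)  # no SettingWithCopyWarning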
In general the point of the SettingWithCopyWarning is to show users (and especially new users) that they may be operating on a copy and not the original as they think. There are false positives (IOW, if you know what you are doing it could be OK). One possibility is simply to turn off the (by default warn) warning, as @Garrett suggests.
Here is another option:
In [1]: df = DataFrame(np.random.randn(5, 2), columns=list('AB'))
In [2]: dfa = df.ix[:, [1, 0]]
In [3]: dfa.is_copy
Out[3]: True
In [4]: dfa['A'] /= 2
/usr/local/bin/ipython:1: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_index,col_indexer] = value instead
#!/usr/local/bin/python
You can set the is_copy flag to False, which will effectively turn off the check, for that object:
In [5]: dfa.is_copy = False
In [6]: dfa['A'] /= 2
If you explicitly copy then no further warning will happen:
In [7]: dfa = df.ix[:, [1, 0]].copy()
In [8]: dfa['A'] /= 2
The code the OP is showing above, while legitimate, and probably something I do as well, is technically a case for this warning, and not a false positive. Another way to not have the warning would be to do the selection operation via reindex, e.g.
quote_df = quote_df.reindex(columns=['STK', ...])
Or,
quote_df = quote_df.reindex(['STK', ...], axis=1) # v.0.21
Here I answer the question directly. How can we deal with it?
Make a .copy(deep=False) after you slice. See pandas.DataFrame.copy.
Wait, doesn't a slice return a copy? After all, isn't that what the warning message is trying to say? Read the long answer:
import pandas as pd
df = pd.DataFrame({'x':[1,2,3]})
This gives a warning:
df0 = df[df.x>2]
df0['foo'] = 'bar'
This does not:
df1 = df[df.x>2].copy(deep=False)
df1['foo'] = 'bar'
Both df0 and df1 are DataFrame objects, but something about them is different that enables pandas to print the warning. Let's find out what it is.
import inspect
slice = df[df.x > 2]
slice_copy = df[df.x>2].copy(deep=False)
inspect.getmembers(slice)
inspect.getmembers(slice_copy)
Using your diff tool of choice, you will see that beyond a couple of addresses, the only material difference is this:
| | slice | slice_copy |
| _is_copy | weakref | None |
The method that decides whether to warn is DataFrame._check_setitem_copy, which checks _is_copy. So here you go: make a copy so that your DataFrame's _is_copy is not set.
The warning is suggesting the use of .loc, but if you use .loc on a frame whose _is_copy is set, you will still get the same warning. Misleading? Yes. Annoying? You bet. Helpful? Potentially, when chained assignment is used. But it cannot correctly detect chained assignment and prints the warning indiscriminately.
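A minimal sketch of inspecting this attribute directly (note that _is_copy is private pandas API and may change between versions):
import pandas as pd

df = pd.DataFrame({'x': [1, 2, 3]})
sliced = df[df.x > 2]
sliced_copy = df[df.x > 2].copy(deep=False)

print(sliced._is_copy)       # a weakref back to the parent frame
print(sliced_copy._is_copy)  # None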
Pandas dataframe copy warning
When you go and do something like this:
quote_df = quote_df.ix[:,[0,3,2,1,4,5,8,9,30,31]]
pandas.ix in this case returns a new, stand-alone dataframe.
Any values you decide to change in this dataframe will not change the original dataframe.
This is what pandas tries to warn you about.
Why .ix is a bad idea
The .ix object tries to do more than one thing, and for anyone who has read anything about clean code, this is a strong smell.
Given this dataframe:
df = pd.DataFrame({"a": [1,2,3,4], "b": [1,1,2,2]})
Two behaviors:
dfcopy = df.ix[:,["a"]]
dfcopy.a.ix[0] = 2
Behavior one: dfcopy is now a stand-alone dataframe. Changing it will not change df.
df.ix[0, "a"] = 3
Behavior two: This changes the original dataframe.
Use .loc instead
The pandas developers recognized that the .ix object was quite smelly [speculatively] and thus created two new objects which help in the accessing and assignment of data. (The other being .iloc.)
.loc is faster, because it does not try to create a copy of the data.
.loc is meant to modify your existing dataframe inplace, which is more memory efficient.
.loc is predictable, it has one behavior.
The solution
What you are doing in your code example is loading a big file with lots of columns, then modifying it to be smaller.
The pd.read_csv function can help you out with a lot of this and also make the loading of the file a lot faster.
So instead of doing this
quote_df = pd.read_csv(StringIO(str_of_all), sep=',', names=list('ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefg')) #dtype={'A': object, 'B': object, 'C': np.float64}
quote_df.rename(columns={'A':'STK', 'B':'TOpen', 'C':'TPCLOSE', 'D':'TPrice', 'E':'THigh', 'F':'TLow', 'I':'TVol', 'J':'TAmt', 'e':'TDate', 'f':'TTime'}, inplace=True)
quote_df = quote_df.ix[:,[0,3,2,1,4,5,8,9,30,31]]
Do this
columns = ['STK', 'TPrice', 'TPCLOSE', 'TOpen', 'THigh', 'TLow', 'TVol', 'TAmt', 'TDate', 'TTime']
df = pd.read_csv(StringIO(str_of_all), sep=',', usecols=[0,3,2,1,4,5,8,9,30,31])
df.columns = columns
This will only read the columns you are interested in, and name them properly. No need for using the evil .ix object to do magical stuff.
This topic is really confusing with Pandas. Luckily, it has a relatively simple solution.
The problem is that it is not always clear whether data filtering operations (e.g. loc) return a copy or a view of the DataFrame. Further use of such a filtered DataFrame could therefore be confusing.
The simple solution is (unless you need to work with very large sets of data):
Whenever you need to update any values, always make sure that you explicitly copy the DataFrame before the assignment.
df # Some DataFrame
df = df.loc[:, 0:2] # Some filtering (unsure whether a view or copy is returned)
df = df.copy() # Ensuring a copy is made
df[df["Name"] == "John"] = "Johny" # Assignment can be done now (no warning)
I had been getting this issue with .apply() when assigning a new dataframe from a pre-existing dataframe on which I've used the .query() method. For instance:
prop_df = df.query('column == "value"')
prop_df['new_column'] = prop_df.apply(function, axis=1)
Would return this error. The fix that seems to resolve the error in this case is by changing this to:
prop_df = df.copy(deep=True)
prop_df = prop_df.query('column == "value"')
prop_df['new_column'] = prop_df.apply(function, axis=1)
However, this is not efficient especially when using large dataframes, due to having to make a new copy.
If you're using the .apply() method to generate a new column and its values, a fix that resolves the error and is more efficient is to add .reset_index(drop=True):
prop_df = df.query('column == "value"').reset_index(drop=True)
prop_df['new_column'] = prop_df.apply(function, axis=1)
Just simply:
import pandas as pd
# ...
pd.set_option('mode.chained_assignment', None)
To remove any doubt, my solution was to make a deep copy of the slice instead of a regular copy.
This may not be applicable depending on your context (Memory constraints / size of the slice, potential for performance degradation - especially if the copy occurs in a loop like it did for me, etc...)
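For illustration, a minimal sketch of that deep-copy approach (df and mask here are hypothetical stand-ins for whatever slice triggered the warning):
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3, 4], 'B': [10, 20, 30, 40]})
mask = df['A'] > 2

sliced = df[mask].copy(deep=True)  # deep copy of the slice
sliced['B'] = 0                    # no SettingWithCopyWarning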
To be clear, here is the warning I received:
/opt/anaconda3/lib/python3.6/site-packages/ipykernel/__main__.py:54:
SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation:
http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
Illustration
I suspected that the warning was thrown because of a column I was dropping on a copy of the slice. While not technically trying to set a value in the copy of the slice, that was still a modification of the copy of the slice.
Below are the (simplified) steps I took to confirm the suspicion; I hope it will help those of us who are trying to understand the warning.
Example 1: dropping a column on the original affects the copy
We knew that already but this is a healthy reminder. This is NOT what the warning is about.
>> data1 = {'A': [111, 112, 113], 'B':[121, 122, 123]}
>> df1 = pd.DataFrame(data1)
>> df1
A B
0 111 121
1 112 122
2 113 123
>> df2 = df1
>> df2
A B
0 111 121
1 112 122
2 113 123
# Dropping a column on df1 affects df2
>> df1.drop('A', axis=1, inplace=True)
>> df2
B
0 121
1 122
2 123
It is possible to prevent changes made on df1 from affecting df2. Note: you can avoid importing copy.deepcopy by doing df.copy() instead.
>> data1 = {'A': [111, 112, 113], 'B':[121, 122, 123]}
>> df1 = pd.DataFrame(data1)
>> df1
A B
0 111 121
1 112 122
2 113 123
>> import copy
>> df2 = copy.deepcopy(df1)
>> df2
A B
0 111 121
1 112 122
2 113 123
# Dropping a column on df1 does not affect df2
>> df1.drop('A', axis=1, inplace=True)
>> df2
A B
0 111 121
1 112 122
2 113 123
Example 2: dropping a column on the copy may affect the original
This actually illustrates the warning.
>> data1 = {'A': [111, 112, 113], 'B':[121, 122, 123]}
>> df1 = pd.DataFrame(data1)
>> df1
A B
0 111 121
1 112 122
2 113 123
>> df2 = df1
>> df2
A B
0 111 121
1 112 122
2 113 123
# Dropping a column on df2 can affect df1
# No slice involved here, but I believe the principle remains the same?
# Let me know if not
>> df2.drop('A', axis=1, inplace=True)
>> df1
B
0 121
1 122
2 123
It is possible to prevent changes made on df2 from affecting df1:
>> data1 = {'A': [111, 112, 113], 'B':[121, 122, 123]}
>> df1 = pd.DataFrame(data1)
>> df1
A B
0 111 121
1 112 122
2 113 123
>> import copy
>> df2 = copy.deepcopy(df1)
>> df2
A B
0 111 121
1 112 122
2 113 123
>> df2.drop('A', axis=1, inplace=True)
>> df1
A B
0 111 121
1 112 122
2 113 123
This should work:
quote_df.loc[:,'TVol'] = quote_df['TVol']/TVOL_SCALE
Some may want to simply suppress the warning:
class SuppressSettingWithCopyWarning:
    def __enter__(self):
        pd.options.mode.chained_assignment = None

    def __exit__(self, *args):
        pd.options.mode.chained_assignment = 'warn'

with SuppressSettingWithCopyWarning():
    ...  # code that produces the warning
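For instance, reusing the df2 slice from the earlier examples (assuming it was created without an explicit copy):
with SuppressSettingWithCopyWarning():
    df2['A'] /= 2  # silenced inside the block; the 'warn' setting is restored on exit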
As this question is already fully explained and discussed in existing answers, I will just provide a neat pandas approach to the context manager using pandas.option_context (links to documentation and example) - there absolutely isn't any need to create a custom class with all the dunder methods and other bells and whistles.
First the context manager code itself:
from contextlib import contextmanager

@contextmanager
def SuppressPandasWarning():
    with pd.option_context("mode.chained_assignment", None):
        yield
Then an example:
import pandas as pd
from string import ascii_letters
a = pd.DataFrame({"A": list(ascii_letters[0:4]), "B": range(0,4)})
mask = a["A"].isin(["c", "d"])
# Even shallow copy below is enough to not raise the warning, but why is a mystery to me.
b = a.loc[mask] # .copy(deep=False)
# Raises the `SettingWithCopyWarning`
b["B"] = b["B"] * 2
# Does not!
with SuppressPandasWarning():
    b["B"] = b["B"] * 2
It is worth noticing that neither approach modifies a, which is a bit surprising to me, and even a shallow df copy with .copy(deep=False) would prevent this warning from being raised (as far as I understand, a shallow copy should at least modify a as well, but it doesn't; pandas magic).
This might be NumPy-specific, which means you may need to import it (NumPy was not essential to the calculations in my examples), but you can simply stop this SettingWithCopy warning message by using the one line of code below:
np.warnings.filterwarnings('ignore')
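If you only want to silence this particular warning rather than all warnings, a narrower sketch uses the standard warnings module; recent pandas versions expose the warning class as pandas.errors.SettingWithCopyWarning (check your version before relying on it):
import warnings
import pandas as pd

warnings.filterwarnings('ignore', category=pd.errors.SettingWithCopyWarning)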
Follow-up beginner question / remark
Maybe a clarification for other beginners like me (I come from R which seems to work a bit differently under the hood). The following harmless-looking and functional code kept producing the SettingWithCopy warning, and I couldn't figure out why. I had both read and understood the issues with "chained indexing", but my code doesn't contain any:
def plot(pdb, df, title, **kw):
    df['target'] = (df['ogg'] + df['ugg']) / 2
    # ...
But then, later, much too late, I looked at where the plot() function is called:
df = data[data['anz_emw'] > 0]
pixbuf = plot(pdb, df, title)
So "df" isn't a data frame, but an object that somehow remembers that it was created by indexing a data frame (so is that a view?) which would make the line in plot(),
df['target'] = ...
equivalent to
data[data['anz_emw'] > 0]['target'] = ...
which is chained indexing.
Anyway,
def plot(pdb, df, title, **kw):
    df.loc[:, 'target'] = (df['ogg'] + df['ugg']) / 2
fixed it.
You could avoid the whole problem like this, I believe:
return (
    pd.read_csv(StringIO(str_of_all), sep=',', names=list('ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefg'))  # dtype={'A': object, 'B': object, 'C': np.float64}
    .rename(columns={'A':'STK', 'B':'TOpen', 'C':'TPCLOSE', 'D':'TPrice', 'E':'THigh', 'F':'TLow', 'I':'TVol', 'J':'TAmt', 'e':'TDate', 'f':'TTime'})
    .ix[:,[0,3,2,1,4,5,8,9,30,31]]
    .assign(
        TClose=lambda df: df['TPrice'],
        RT=lambda df: 100 * (df['TPrice']/df['TPCLOSE'] - 1),
        TVol=lambda df: df['TVol']/TVOL_SCALE,
        TAmt=lambda df: df['TAmt']/TAMT_SCALE,
        STK_ID=lambda df: df['STK'].str.slice(13,19),
        STK_Name=lambda df: df['STK'].str.slice(21,30),  # .decode('gb2312')
        TDate=lambda df: df.TDate.map(lambda x: x[0:4]+x[5:7]+x[8:10]),
    )
)
This uses .assign(). From the documentation: "Assign new columns to a DataFrame, returning a new object (a copy) with all the original columns in addition to the new ones."
See Tom Augspurger's article on method chaining in pandas: Modern Pandas (Part 2): Method Chaining
I was facing the same warning while executing this part of my code:
def scaler(self, numericals):
    scaler = MinMaxScaler()
    self.data.loc[:, numericals[0]] = scaler.fit_transform(self.data.loc[:, numericals[0]])
    self.data.loc[:, numericals[1]] = scaler.fit_transform(self.data.loc[:, numericals[1]])
where scaler is a MinMaxScaler and numericals[0] contains names of three of my numerical columns.
The warning went away when I changed the code to:
def scaler(self, numericals):
    scaler = MinMaxScaler()
    self.data.loc[:][numericals[0]] = scaler.fit_transform(self.data.loc[:][numericals[0]])
    self.data.loc[:][numericals[1]] = scaler.fit_transform(self.data.loc[:][numericals[1]])
So, just change [:, ~] to [:][~].
If you have assigned the slice to a variable and want to set using the variable as in the following:
df2 = df[df['A'] > 2]
df2['B'] = value
And you do not want to use Jeff's solution, because the condition computing df2 is too long or for some other reason, then you can use the following:
df.loc[df2.index.tolist(), 'B'] = value
df2.index.tolist() returns the indices from all entries in df2, which will then be used to set column B in the original dataframe.
In my case, I wanted to create a new column based on the index, but I got the same warning as you:
df_temp["Quarter"] = df_temp.index.quarter
I use insert() instead of direct assignment, and it works for me:
df_temp.insert(loc=0, column='Quarter', value=df_temp.index.quarter)
For me, this issue occurred in the following simplified example, and I was also able to solve it (hopefully with a correct solution):
Old code with warning:
def update_old_dataframe(old_dataframe, new_dataframe):
    for new_index, new_row in new_dataframe.iterrows():
        old_dataframe.loc[new_index] = update_row(old_dataframe.loc[new_index], new_row)

def update_row(old_row, new_row):
    for field in list_of_columns:
        # line with warning because of chained indexing old_dataframe[new_index][field]
        old_row[field] = new_row[field]
    return old_row
This printed the warning for the line old_row[field] = new_row[field]
Since the rows in the update_row method are actually of type Series, I replaced the line with:
old_row.at[field] = new_row.at[field]
I.e., the method for access/lookup on a Series. Even though both work just fine and the result is the same, this way I don't have to disable the warnings (and keep them for other chained-indexing issues elsewhere).
Just create a copy of your dataframe(s) using the .copy() method before the warning appears, to remove all of your warnings.
This works because we do not want to make changes to the original quote_df. In other words, we do not want to keep operating through a reference to the original quote_df object, so we replace it with our own copy:
quote_df = quote_df.copy()

Iteration over a list in a Pandas DataFrame column

I have a dataframe df like this one:
my_list
Index
0 [81310, 81800]
1 [82160]
2 [75001, 75002, 75003, 75004, 75005, 75006, 750...
3 [95190]
4 [38170, 38180]
5 [95240]
6 [71150]
7 [62520]
I have a list named code with at least one element.
code = ['75008', '75015']
I want to create another column in my DataFrame named my_min, containing the minimum absolute difference between each element of the list code and the list from df.my_list.
Here are the commands I tried:
df.loc[:, 'my_list'] = min([abs(int(x)-int(y)) for x in code for y in df.loc[:, 'my_list'].str[:]])
>>> TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'
#or
df.loc[:, 'my_list'] = min([abs(int(x)-int(y)) for x in code for y in df.loc[:, 'my_list']])
>>> TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'
#or
df.loc[:, 'my_list'] = min([abs(int(x)-int(y)) for x in code for y in df.loc[:, 'my_list'].tolist()])
>>> TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'
#or
df.loc[:, 'my_list'] = min([abs(int(x)-int(y)) for x in code for y in z for z in df.loc[:, 'my_list'].str[:]])
>>> UnboundLocalError: local variable 'z' referenced before assignment
#or
df.loc[:, 'my_list'] = min([abs(int(x)-int(y)) for x in code for y in z for z in df.loc[:, 'my_list']])
>>> UnboundLocalError: local variable 'z' referenced before assignment
#or
df.loc[:, 'my_list'] = min([abs(int(x)-int(y)) for x in code for y in z for z in df.loc[:, 'my_list'].tolist()])
>>> UnboundLocalError: local variable 'z' referenced before assignment
You could do this with a list comprehension:
import pandas as pd
import numpy as np
df = pd.DataFrame({'my_list':[[81310, 81800],[82160]]})
code = ['75008', '75015']
pd.DataFrame({'my_min': [min([abs(int(i) - j) for i in code for j in x])
                         for x in df.my_list]})
returns
my_min
0 6295
1 7145
You could also use pd.Series.apply instead of the outer list, for example:
df.my_list.apply(lambda x: min([abs(int(i) - j) for i in code for j in x]) )
Write a helper: def find_min(lst): -- it is clear you know how to do that. The helper will consult a global named code.
Then apply it:
df['my_min'] = df.my_list.apply(find_min)
The advantage of breaking out a helper is that you can write separate unit tests for it. If you prefer to avoid globals, you will find functools.partial quite helpful: https://docs.python.org/3/library/functools.html#functools.partial
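A minimal sketch of that helper approach, using functools.partial to avoid the global (find_min is a hypothetical name; the data matches the earlier example):
from functools import partial
import pandas as pd

df = pd.DataFrame({'my_list': [[81310, 81800], [82160]]})
code = ['75008', '75015']

def find_min(lst, code):
    # smallest absolute difference between any element of `code` and any element of `lst`
    return min(abs(int(c) - int(v)) for c in code for v in lst)

df['my_min'] = df.my_list.apply(partial(find_min, code=code))
print(df)
#           my_list  my_min
# 0  [81310, 81800]    6295
# 1         [82160]    7145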
If you have pandas 0.25+ you can use explode and combine with np.min:
# sample data
df = pd.DataFrame({'my_list': [[81310, 81800], [82160], [75001, 75002]]})
code = ['75008', '75015']
# concatenate the lists into one series
s = df.my_list.explode()
# convert `code` into np.array
code = np.array(code, dtype=int)
# this is the output series
pd.Series(np.min(np.abs(s.values[:, None] - code), axis=1),
          index=s.index).min(level=0)
Output:
0 6295
1 7145
2 6
dtype: int64

Python using apply function to skip Nan

I am trying to preprocess a dataset to use for XGBoost by mapping the classes in each column to numerical values. A working example looks like this:
from collections import defaultdict
from sklearn.preprocessing import LabelEncoder
import pandas as pd
df1 = pd.DataFrame(data = {'col1': ['A', 'B','C','B','A'], 'col2': ['Z', 'X','Z','Z','Y'], 'col3':['I','J','I','J','J']})
d = defaultdict(LabelEncoder)
encodedDF = df1.apply(lambda x: d[x.name].fit_transform(x))
inv = encodedDF.apply(lambda x: d[x.name].inverse_transform(x))
Where encodedDF gives the output:
   col1  col2  col3
0     0     2     0
1     1     0     1
2     2     2     0
3     1     2     1
4     0     1     1
And inv just reverts it back to the original dataframe. My issue is when null values get introduced:
df2 = pd.DataFrame(data = {'col1': ['A', 'B',None,'B','A'], 'col2': ['Z', 'X','Z',None,'Y'], 'col3':['I','J','I','J','J']})
encodedDF = df2.apply(lambda x: d[x.name].fit_transform(x))
Running the above will throw the error:
"TypeError: ('argument must be a string or number', 'occurred at index col1')"
Basically, I want to apply the encoding, but skip over the individual cell values that are null to get an output like this:
   col1  col2  col3
0     0     2     0
1     1     0     1
2   NaN     2     0
3     1   NaN     1
4     0     1     1
I can't use dropna() before applying the encoding because then I lose data that I will be trying to impute down the line with XGBoost. I can't use conditionals to skip x if null, (e.g. using x.notnull() in the lambda function) because fit_transform(x) uses a Pandas.Series object as the argument, and none of the logical operators that I could use in the conditional appear to do what I'm trying to do. I'm not sure what else to try in order to get this to work. I hope what I'm trying to do makes sense. Let me know if I need to clarify.
I think I figured out a workaround. I probably should have been using sklearn's OneHotEncoder class from the beginning instead of the LabelEncoder/defaultdict combo. I'm brand new to all this. I replaced NaNs with dummy values, and then dropped those dummy values once I encoded the dataframe.
import pandas as pd
from sklearn.preprocessing import OneHotEncoder
df = pd.DataFrame(data = {'col1': ['A', 'B','C',None,'A'], 'col2': ['Z', 'X',None,'Z','Y'], 'col3':['I','J',None,'J','J'], 'col4':[45,67,None,32,94]})
replaceVals = {'col1':'missing','col2':'missing','col3':'missing','col4':-1}
df = df.fillna(value = replaceVals,axis=0)
drop = [['missing'],['missing'],['missing'],[-1]]
enc = OneHotEncoder(drop=drop)
encodeDF = enc.fit_transform(df)
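If you would rather keep the LabelEncoder/defaultdict approach from the question and literally skip the null cells, a minimal sketch (encode_non_null is a hypothetical helper name) is to fit each encoder on the non-null values only and leave the NaN positions untouched:
from collections import defaultdict
import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder

df2 = pd.DataFrame(data={'col1': ['A', 'B', None, 'B', 'A'],
                         'col2': ['Z', 'X', 'Z', None, 'Y'],
                         'col3': ['I', 'J', 'I', 'J', 'J']})
d = defaultdict(LabelEncoder)

def encode_non_null(col):
    # fit the column's encoder on non-null values only and write the codes back,
    # leaving the null positions as NaN
    out = pd.Series(np.nan, index=col.index)
    mask = col.notnull()
    out[mask] = d[col.name].fit_transform(col[mask])
    return out

encodedDF = df2.apply(encode_non_null)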

Reducing dimensionality of multiindex pandas dataframe using apply

I have the following dataframe:
df = pd.DataFrame({('psl', 't1'): {'fiat': 36.389809173765507,
'mazda': 18.139242981049016,
'opel': 0.97626485600703961,
'toyota': 74.464422292108878},
('psl', 't2'): {'fiat': 35.423004380643462,
'mazda': 24.269803148695079,
'opel': 1.0170540474994665,
'toyota': 60.389948228586832},
('psv', 't1'): {'fiat': 35.836800462163097,
'mazda': 15.893295606055901,
'opel': 0.78744853046848606,
'toyota': 74.054850828062271},
('psv', 't2'): {'fiat': 34.379812557124815,
'mazda': 23.202587247335682,
'opel': 0.80191294532382451,
'toyota': 58.735083244244322}})
It has a two-level column MultiIndex: psl and psv on the outer level, each with t1 and t2 underneath.
I wish to reduce it from a multiindex to a normal index. I wish to do this by applying a function using t1 and t2 values and returning only a single value which will result in there being two columns: psl and psv.
I have succeeded in grouping it as such and applying a function:
df.groupby(level=0, axis=1).agg(np.mean)
which is very close to what I want except that I don't want to apply np.mean, but rather a custom function. In particular, a percent change function.
My end goal is to be able to do something like this:
df.groupby(level=0, axis=1).apply(lambda t1, t2: (t2-t1)/t1)
Which returns this error:
TypeError: <lambda>() missing 1 required positional argument: 't2'
I have also tried this:
df.apply(lambda x: x[x.name].apply(lambda x: x['t1']/x['t2']))
which in turn returns:
KeyError: (('psl', 't1'), 'occurred at index (psl, t1)')
Could you please include a thorough explanation of each part of your answer to the best of your abilities so I can better understand how pandas works.
Not easy. Use a custom function with squeeze to get a Series and xs to select a level of the MultiIndex in the columns:
def f(x):
    t2 = x.xs('t2', axis=1, level=1)
    t1 = x.xs('t1', axis=1, level=1)
    a = (t2 - t1) / t1
    #print (a)
    return a.squeeze()

df1 = df.groupby(level=0, axis=1).agg(f)
print (df1)
psl psv
fiat -0.026568 -0.040656
mazda 0.337972 0.459898
opel 0.041781 0.018369
toyota -0.189009 -0.206871
Using a lambda function is possible, but it is really awful because of the repeated code:
df1 = (df.groupby(level=0, axis=1)
         .agg(lambda x: ((x.xs('t2', axis=1, level=1) - x.xs('t1', axis=1, level=1)) /
                         x.xs('t1', axis=1, level=1)).squeeze()))
Using iloc can solve the problem:
df.groupby(level=0, axis=1).agg(lambda x: (x.iloc[:,0]-x.iloc[:,1])/x.iloc[:,0])
Outputs:
psl psv
fiat 0.026568 0.040656
mazda -0.337972 -0.459898
opel -0.041781 -0.018369
toyota 0.189009 0.206871
