What am I doing wrong with series.replace()? - python-3.x

I am trying to replace integer values in a pd.Series with other integer values, using a dict-like replace:
ser_list = [pd.Series([65, 1, 0, 0, 1]), pd.Series([0, 62, 1, 1, 0])]
for ser in ser_list:
    ser.replace({65: 10, 62: 20})
I am expecting the result:
[10, 1, 0, 0, 1] # first series in the list
[0, 20, 1, 1, 0] # second series in the list
where 65 should be replaced with 10 in the first series, and 62 should be replaced with 20 in the second.
However, with this code it returns the original series without any replacement. Any clue why?

By default, Series.replace returns a new Series and leaves the original untouched, so the result inside your loop is simply discarded. It is possible to modify the series in place with inplace=True:
for ser in ser_list:
    ser.replace({65: 10, 62: 20}, inplace=True)
print (ser_list)
[0 10
1 1
2 0
3 0
4 1
dtype: int64, 0 0
1 20
2 1
3 1
4 0
dtype: int64]
But this is not recommended, as mentioned by @Dan in the comments - link:
The pandas core team discourages the use of the inplace parameter, and eventually it will be deprecated (which means "scheduled for removal from the library"). Here's why:
inplace won't work within a method chain.
The use of inplace often doesn't prevent copies from being created, contrary to what the name implies.
Removing the inplace option would reduce the complexity of the pandas codebase.
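To illustrate the first point, a minimal sketch: with inplace=True the call returns None, so nothing can be chained onto it:
import pandas as pd

s = pd.Series([65, 1, 0])
print(s.replace({65: 10}, inplace=True))    # prints None
# s.replace({65: 10}, inplace=True).add(1)  # would fail: 'NoneType' has no attribute 'add'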
Or assign back to the same variable using a list comprehension:
ser_list = [ser.replace({65: 10, 62: 20}) for ser in ser_list]
A loop solution is also possible by appending to a new list and assigning back:
out = []
for ser in ser_list:
    ser = ser.replace({65: 10, 62: 20})
    out.append(ser)
print (out)
[0 10
1 1
2 0
3 0
4 1
dtype: int64, 0 0
1 20
2 1
3 1
4 0
dtype: int64]

We can also use Series.map with fillna in a list comprehension:
new = [ser.map({65: 10, 62: 20}).fillna(ser) for ser in ser_list]
print(new)
[0 10.0
1 1.0
2 0.0
3 0.0
4 1.0
dtype: float64, 0 0.0
1 20.0
2 1.0
3 1.0
4 0.0
dtype: float64]
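Note the dtype became float64 because map produces NaN for unmapped values before fillna fills them back in. If integer output is needed, the dtype can be restored, for example (a small sketch):
new = [ser.map({65: 10, 62: 20}).fillna(ser).astype(int) for ser in ser_list]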

Related

Count positive, negative or zero values for multiple columns in Python

Given a dataset as follows:
[{'id': 1, 'ltp': 2, 'change': nan},
{'id': 2, 'ltp': 5, 'change': 1.5},
{'id': 3, 'ltp': 3, 'change': -0.4},
{'id': 4, 'ltp': 0, 'change': 2.0},
{'id': 5, 'ltp': 5, 'change': -0.444444},
{'id': 6, 'ltp': 16, 'change': 2.2}]
Or
id ltp change
0 1 2 NaN
1 2 5 1.500000
2 3 3 -0.400000
3 4 0 2.000000
4 5 5 -0.444444
5 6 16 2.200000
I would like to count the number of positive, negative and zero values for the columns ltp and change; the result may look like this:
columns positive negative zero
0 ltp 5 0 1
1 change 3 2 0
How could I do that with Pandas or Numpy? Thanks.
Update: what if I need to group by type and count following the logic above?
id ltp change type
0 1 2 NaN a
1 2 5 1.500000 a
2 3 3 -0.400000 a
3 4 0 2.000000 b
4 5 5 -0.444444 b
5 6 16 2.200000 b
The expected output:
type columns positive negative zero
0 a ltp 3 0 0
1 a change 1 1 0
2 b ltp 2 0 1
3 b change 2 1 0
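For reference, the updated sample frame can be built directly from the records above (a minimal setup sketch):
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'id': [1, 2, 3, 4, 5, 6],
    'ltp': [2, 5, 3, 0, 5, 16],
    'change': [np.nan, 1.5, -0.4, 2.0, -0.444444, 2.2],
    'type': list('aaabbb'),
})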
Use np.sign on the selected columns first, then count the values with value_counts, transpose, replace the missing values, rename the column names via the dictionary, and finally convert the index to the columns column:
d = {-1:'negative', 1:'positive', 0:'zero'}
df = (np.sign(df[['ltp','change']])
        .apply(pd.value_counts)
        .T
        .fillna(0)
        .astype(int)
        .rename(columns=d)
        .rename_axis('columns')
        .reset_index())
print (df)
columns negative zero positive
0 ltp 0 1 5
1 change 2 0 3
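A side note on why the NaN in change does not land in any bucket: np.sign propagates NaN, and pd.value_counts drops NaN by default, so that row is simply skipped. A quick check (a sketch run against the original sample df from the question, before it is reassigned above):
print(np.sign(df['change']))  # the NaN stays NaN and is never counted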
EDIT: Another solution for the type column uses DataFrame.melt, maps the values with np.sign, and counts them with crosstab:
d = {-1:'negative', 1:'positive', 0:'zero'}
df1 = df.melt(id_vars='type', value_vars=['ltp','change'], var_name='columns')
df1['value'] = np.sign(df1['value']).map(d)
df1 = (pd.crosstab([df1['type'], df1['columns']], df1['value'])
         .rename_axis(columns=None)
         .reset_index())
print (df1)
type columns negative positive zero
0 a change 1 1 0
1 a ltp 0 3 0
2 b change 1 2 0
3 b ltp 0 2 1
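A similar note applies here: np.sign leaves the NaN in change as NaN, and pd.crosstab drops NaN rows by default, so that row is not counted in any bucket. Inspecting the intermediate df1 (just after the map step, before the crosstab reassignment) makes this visible:
print(df1['value'].value_counts(dropna=False))
# roughly: positive 8, negative 2, zero 1, NaN 1 for the sample data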

Dataframe sequence detection: Find groups where three rows in a row have negative values

Let's say I have a column df['test']:
-1, -2, -3, 2, -4, 3, -5, -4, -3, -7
So I would like to filter out the groups which have at least three negative values in a row. So
groups = my_grouping_function_by_sequence()
groups[0] = [-1, -2, -3]
groups[1] = [-5, -4, -3, -7]
Are there any pre-defined checks for testing sequences in numerical data in pandas? It does not need to be pandas; I am just searching for a fast and adaptable solution. Any advice would be helpful. Thanks!
Using GroupBy and cumsum to create groups of consecutive negative numbers.
grps = df['test'].gt(0).cumsum()
dfs = [d.dropna() for _, d in df.mask(df['test'].gt(0)).groupby(grps) if d.shape[0] >= 3]
Output
for df in dfs:
    print(df)
test
0 -1.0
1 -2.0
2 -3.0
test
6 -5.0
7 -4.0
8 -3.0
9 -7.0
Explanation
Let's go through this step by step:
The first line creates groups of consecutive negative numbers:
print(grps)
0 0
1 0
2 0
3 1
4 1
5 2
6 2
7 2
8 2
9 2
Name: test, dtype: int32
But as we can see, it also includes the positive numbers, which we don't want to consider in our output. So we use DataFrame.mask to convert these values to NaN:
df.mask(df['test'].gt(0))
# same as df.mask(df['test'] > 0)
test
0 -1.0
1 -2.0
2 -3.0
3 NaN
4 -4.0
5 NaN
6 -5.0
7 -4.0
8 -3.0
9 -7.0
Then we group this masked dataframe by grps and only keep the groups with >= 3 rows:
for _, d in df.mask(df['test'].gt(0)).groupby(grps):
    if d.shape[0] >= 3:
        print(d.dropna())
test
0 -1.0
1 -2.0
2 -3.0
test
6 -5.0
7 -4.0
8 -3.0
9 -7.0
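If plain Python lists are wanted, as in groups[0] = [-1, -2, -3] from the question, the kept groups can be converted with tolist (a small sketch building on the dfs list above; the values are floats because of the mask step):
groups = [d['test'].tolist() for d in dfs]
# [[-1.0, -2.0, -3.0], [-5.0, -4.0, -3.0, -7.0]]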
To acknowledge @Erfan's answer: it is elegant, but I didn't find it easy to understand. My attempt is below.
df = pd.DataFrame({'test': [-1, -2, -3, 2, -4, 3, -5, -4, -3, -7]})
Conditionally mark rows with negatives, then flag windows of three consecutive negatives:
df['j'] = np.where(df['test'] < 0, 1, -1)
df['k'] = df['j'].rolling(3, min_periods=1).sum()
df2 = df[df['k'] == 3]
Iteratively slice the dataframe, taking each flagged row together with the two rows above it:
for index, row in df2.iterrows():
    print(df.loc[index - 2 : index + 0, 'test'])
@Erfan, your answer is brilliant and I'm still trying to understand the second line. Your first line got me started on writing it in my own, less efficient way.
import pandas as pd
df = pd.DataFrame({'test': [-1, -2, -3, 2, -4, 3, -5, -4, -3, -7]})
df['+ or -'] = df['test'].gt(0)
df['group'] = df['+ or -'].cumsum()
df_gb = df.groupby('group').count().reset_index().drop('+ or -', axis=1)
df_new = pd.merge(df, df_gb, how='left', on='group').drop('+ or -', axis=1)
df_new = df_new[(df_new['test_x'] < 0) & (df_new['test_y'] >= 3)].drop('test_y', axis=1)
for i in df_new['group'].unique():
    j = pd.DataFrame(df_new.loc[df_new['group'] == i, 'test_x'])
    print(j)

How can I merge data-frame rows by different columns

I have a DataFrame with 200k rows and some 50 columns with the same id in different columns, looking like the below:
df = pd.DataFrame({'pic': [1, 0, 0, 0, 2, 0, 3, 0, 0],
                   'story': [0, 1, 0, 2, 0, 0, 0, 0, 3],
                   'des': [0, 0, 1, 0, 0, 2, 0, 3, 0],
                   'some_another_value': [2, np.nan, np.nan, np.nan, 4, np.nan, 1, np.nan, np.nan],
                   'some_value': [np.nan, 2, 3, 4, np.nan, 6, np.nan, 8, 9]})
   pic  story  des  some_another_value  some_value
0    1      0    0                 2.0         NaN
1    0      1    0                 NaN         2.0
2    0      0    1                 NaN         3.0
3    0      2    0                 NaN         4.0
4    2      0    0                 4.0         NaN
5    0      0    2                 NaN         6.0
6    3      0    0                 1.0         NaN
7    0      0    3                 NaN         8.0
8    0      3    0                 NaN         9.0
I would like to merge the rows which have the same value in 'pic', 'story' and 'des':
pic story des some_another_value some_value
0 1 1 1 2 5
3 2 2 2 4 10
6 3 3 3 1 17
How can this be achieved?
* I am looking for a solution which does not contain a for loop
* Preferably not a sum method
I'm not sure why you say "preferably not a sum method" when your expected output clearly indicates a sum. For your sample data, in each row exactly one of pic, story, des is non-zero, so:
df.groupby(df[['pic','story', 'des']].sum(1)).sum()
gives
pic story des some_another_value some_value
1 1 1 1 2.0 5.0
2 2 2 2 4.0 10.0
3 3 3 3 1.0 17.0
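The same idea written a bit more explicitly: sum(1) is the row-wise sum (axis=1), and because exactly one of pic, story, des is non-zero per row, that sum is just the shared id used as the grouping key:
key = df[['pic', 'story', 'des']].sum(axis=1)  # row-wise: the non-zero id in each row
print(df.groupby(key).sum())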

How to replace selected rows of pandas dataframe with a np array, sequentially?

I have a pandas dataframe
A B C
0 NaN 2 6
1 3.0 4 0
2 NaN 0 4
3 NaN 1 2
where I have a column A that has NaN values in some rows (not necessarily consecutive).
I want to replace these values not with a constant value (which pd.fillna does), but rather with the values from a numpy array.
So the desired outcome is:
A B C
0 1.0 2 6
1 3.0 4 0
2 5.0 0 4
3 7.0 1 2
I'm not sure the .replace method will help here either, since that seems to replace value <-> value via a dictionary, whereas here I want to sequentially change each NaN to its corresponding value (by position) in the np array.
I tried:
MWE:
huh = pd.DataFrame([[np.nan, 2, 6],
                    [3, 4, 0],
                    [np.nan, 0, 4],
                    [np.nan, 1, 2]],
                   columns=list('ABC'))
huh.A[huh.A.isnull()] = np.array([1,5,7]) # what I want to do, but this triggers a warning
which gives the warning
SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
I read the docs but I can't understand how to do this with .loc.
How do I do this properly, preferably without a for loop?
Other info:
The number of elements in the np array will always match the number of NaN in the dataframe, so your answer does not need to check for this.
You are really close; you need DataFrame.loc to avoid chained assignment:
huh.loc[huh.A.isnull(), 'A'] = np.array([1,5,7])
print (huh)
A B C
0 1.0 2 6
1 3.0 4 0
2 5.0 0 4
3 7.0 1 2
zip
This should account for uneven lengths
m = huh.A.isna()
a = np.array([1, 5, 7])
s = pd.Series(dict(zip(huh.index[m], a)))
huh.fillna({'A': s})
A B C
0 1.0 2 6
1 3.0 4 0
2 5.0 0 4
3 7.0 1 2
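One detail worth noting: fillna here returns a new frame rather than modifying huh, so assign the result back if you want to keep it (a small usage sketch):
huh = huh.fillna({'A': s})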

vectorize groupby pandas

I have a dataframe like this:
day time category count
1 1 a 13
1 2 a 47
1 3 a 1
1 5 a 2
1 6 a 4
2 7 a 14
2 2 a 10
2 1 a 9
2 4 a 2
2 6 a 1
I want to group by day and category and get a vector of the counts per time, where time can be between 1 and 10. I have defined the max and min of time in two variables called max and min.
This is how I want the resulting dataframe to look:
day category count
1 a [13,47,1,0,2,4,0,0,0,0]
2 a [9,10,0,2,0,1,14,0,0,0]
Does anyone know how to turn this aggregation into a vector?
Use reindex with MultiIndex.from_product to append the missing categories, then groupby with list:
df = df.set_index(['day','time', 'category'])
a = df.index.levels[0]
b = range(1,11)
c = df.index.levels[2]
df = df.reindex(pd.MultiIndex.from_product([a,b,c], names=df.index.names), fill_value=0)
df = df.groupby(['day','category'])['count'].apply(list).reset_index()
print (df)
day category count
0 1 a [13, 47, 1, 0, 2, 4, 0, 0, 0, 0]
1 2 a [9, 10, 0, 2, 0, 1, 14, 0, 0, 0]
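If the time bounds live in variables, as mentioned in the question (hypothetical names t_min and t_max here, to avoid shadowing the min/max builtins), the hard-coded range(1, 11) can be built from them:
t_min, t_max = 1, 10           # hypothetical variables holding the question's bounds
b = range(t_min, t_max + 1)    # replaces the hard-coded range(1, 11) above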
EDIT:
df = (df.set_index(['day','time', 'category'])['count']
        .unstack(1, fill_value=0)
        .reindex(columns=range(1,11), fill_value=0))
print (df)
time 1 2 3 4 5 6 7 8 9 10
day category
1 a 13 47 1 0 2 4 0 0 0 0
2 a 9 10 0 2 0 1 14 0 0 0
df = df.apply(list, axis=1).reset_index(name='count')
print (df)
day ... count
0 1 ... [13, 47, 1, 0, 2, 4, 0, 0, 0, 0]
1 2 ... [9, 10, 0, 2, 0, 1, 14, 0, 0, 0]
[2 rows x 3 columns]
