Pandas qcut ValueError: Input array must be 1 dimensional - python-3.x

I was trying to categorize my values into 10 bins and ran into this error. How can I avoid the error and bin the values smoothly?
Attached are samples of the data and code.
Data
JPM
2008-01-02 NaN
2008-01-03 NaN
2008-01-04 NaN
2008-01-07 NaN
2008-01-08 NaN
... ...
2009-12-24 -0.054014
2009-12-28 0.002679
2009-12-29 -0.030015
2009-12-30 -0.019058
2009-12-31 -0.010090
505 rows × 1 columns
Code
group_names = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
discretized_roc = pd.qcut(df, 10, labels=group_names)

Pass the column JPM (a 1-dimensional Series) instead of the whole DataFrame, and for integer indicators of the bins use labels=False:
discretized_roc = pd.qcut(df['JPM'], 10, labels=False)
If you need to select the first column by position instead of by label, use DataFrame.iloc:
discretized_roc = pd.qcut(df.iloc[:, 0], 10, labels=False)
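For context, a minimal sketch reproducing the error and the fix; the column name JPM is taken from the sample above, and the data values are made up for illustration:
import numpy as np
import pandas as pd

# One-column DataFrame similar to the sample data (values are illustrative).
df = pd.DataFrame({'JPM': [np.nan, np.nan, -0.054, 0.003, -0.030, -0.019,
                           -0.010, 0.012, -0.005, 0.021, 0.007, -0.002]})

# pd.qcut(df, 10, ...) fails because a DataFrame is 2-dimensional;
# qcut expects a 1-dimensional array-like, so pass the Series instead.
discretized_roc = pd.qcut(df['JPM'], 10, labels=False)
print(discretized_roc)  # NaN rows stay NaN; the rest get bin numbers 0-9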

Related

Add Multiindex Dataframe and corresponding Series

I am failing to add a multiindex dataframe and a corresponding series. E.g.,
df = pd.DataFrame({
    'a': [0, 0, 1, 1], 'b': [0, 1, 0, 1],
    'c': [1, 2, 3, 4], 'd': [1, 1, 1, 1]}).set_index(['a', 'b'])
# Dataframe might contain records that are not in the series and vice versa
s = df['d'].iloc[1:]
df + s
produces
ValueError: cannot join with no overlapping index names
Does anyone know how to resolve this? I can work around the issue by adding each column separately, using e.g.
df['d'] + s
But I would like to add the two in a single operation. Any help is much appreciated.
By default, + tries to align along columns, so the following would work with +:
s = df.iloc[:, 1:]
df + s
#       c  d
#a b
#0 0  NaN  2
#  1  NaN  2
#1 0  NaN  2
#  1  NaN  2
In your case, you need to align along the index. You can explicitly specify axis=0 with the add method for that:
df.add(s, axis=0)
#        c    d
#a b
#0 0   NaN  NaN
#  1   3.0  2.0
#1 0   4.0  2.0
#  1   5.0  2.0
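Putting the question's setup and the axis=0 alignment together, a small self-contained sketch (same df and s as in the question):
import pandas as pd

df = pd.DataFrame({
    'a': [0, 0, 1, 1], 'b': [0, 1, 0, 1],
    'c': [1, 2, 3, 4], 'd': [1, 1, 1, 1]}).set_index(['a', 'b'])
s = df['d'].iloc[1:]   # Series on the same MultiIndex, with the first row dropped

# add() with axis=0 broadcasts the Series along the index and adds it
# to every column; index entries missing from s become NaN.
result = df.add(s, axis=0)
print(result)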

Filter dataframe by minimum number of values in groups

I have the following dataframe structure:
#----------------------------------------------------------#
import numpy as np
import pandas as pd

# Generate dataframe mock example.
# Define categorical column.
grps = pd.DataFrame(['a', 'a', 'a', 'b', 'b', 'b'])
# Generate dataframe 1.
df1 = pd.DataFrame([[3, 4, 6, 8, 10, 4],
                    [5, 7, 2, 8, 9, 6],
                    [5, 3, 4, 8, 4, 6]]).transpose()
# Introduce nan into dataframe 1.
for col in df1.columns:
    df1.loc[df1.sample(frac=0.1).index, col] = np.nan
# Generate dataframe 2.
df2 = pd.DataFrame([[3, 4, 6, 8, 10, 4],
                    [5, 7, 2, 8, 9, 6],
                    [5, 3, 4, 8, 4, 6]]).transpose()
# Concatenate categorical column and dataframes.
df = pd.concat([grps, df1, df2], axis=1)
# Assign column headers.
df.columns = ['Groups', 1, 2, 3, 4, 5, 6]
# Set index as group column.
df = df.set_index('Groups')
# Generate stacked dataframe structure.
test_stack_df = df.stack(dropna=False).reset_index()
# Change column names.
test_stack_df = test_stack_df.rename(columns={'level_1': 'IDs',
                                              0: 'Values'})
#----------------------------------------------------------#
Original dataframe - 'df' before stacking:
Groups    1    2    3    4    5    6
a         3    5    5    3    5    5
a       nan  nan    3    4    7    3
a         6    2  nan    6    2    4
b         8    8    8    8    8    8
b        10    9    4   10    9    4
b         4    6    6    4    6    6
I would like to filter the columns such that each group - 'a' & 'b' - contains at least 3 valid (non-NaN) values. The final output should contain only columns 4, 5, 6.
I am currently using the following method:
# Function to define boolean series.
def filter_vals(test_stack_df, orig_df):
    # Reset index.
    df_idx_reset = orig_df.reset_index()
    # Generate list with size of each 'Group'.
    grp_num = pd.value_counts(df_idx_reset['Groups']).to_list()
    # Data series for each 'Group'.
    expt_class_1 = test_stack_df.head(grp_num[0])
    expt_class_2 = test_stack_df.tail(grp_num[1])
    # Check if both 'Groups' contain at least 3 values per 'ID'.
    valid_IDs = (len(expt_class_1['Values'].value_counts()) >= 3) & \
                (len(expt_class_2['Values'].value_counts()) >= 3)
    # Return 'true' or 'false'.
    return valid_IDs

# Apply function to dataframe to generate boolean series.
bool_series = test_stack_df.groupby('IDs').apply(filter_vals, df)
# Transpose original dataframe.
df_T = df.transpose()
# Filter by boolean series & transpose again.
df_filtered = df_T[bool_series].transpose()
I could achieve this with minimal fuss by applying the pandas.DataFrame.dropna() method with a threshold of 6. However, this won't account for different-sized groups or allow me to specify the minimum number of values, which the current code does.
For larger dataframes, i.e. 4000+ columns, the code is a little slow, taking ~20 seconds to complete the filtering process. I have tried alternate methods that access the original dataframe directly using groupby & transform but can't get anything to work.
Is there a simpler and faster method? Thanks for your time!
EDIT: 03/05/2020 (15:58) - just spotted something that wasn't clear in the function above. Still works but have clarified variable names. Sorry for the confusion!
This will do the trick for you:
df.notna().groupby(level='Groups').sum(axis=0).ge(3).all(axis=0)
Outputs:
1 False
2 False
3 False
4 True
5 True
6 True
dtype: bool
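To go from that boolean Series to the filtered frame, one possible follow-up sketch, assuming the mask is stored in a variable (the axis argument to sum is dropped here, since the group sums run over the rows within each group anyway):
# Count non-NaN values per group and per column; require >= 3 in every group.
mask = df.notna().groupby(level='Groups').sum().ge(3).all(axis=0)

# Keep only the columns flagged True (4, 5 and 6 in the example above).
df_filtered = df.loc[:, mask]
print(df_filtered)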

How to NaN elements in a numpy array based on upper and lower boundaries

I have a numpy array with the elements 0-10.
a = np.arange(0,11)
np.random.shuffle(a)
a
array([ 1, 7, 8, 0, 2, 3, 4, 10, 9, 5, 6])
I want to convert elements to NaN if they are between 4 and 8.
As a first step I tried getting the array with np.where like below:
np.where([a > 4] & [a < 8])
but got an error. Any help, please?
You need:
import numpy as np
a = np.arange(0,11)
np.random.shuffle(a)
print(a)
# [ 7 4 2 3 6 10 1 9 5 0 8]
a = np.where((a > 4) & (a < 8), np.nan, a)
print(a)
# [nan 4. 2. 3. nan 10. 1. 9. nan 0. 8.]
If you want to know how np.where() works, refer to: numpy.where() detailed, step-by-step explanation / examples.
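An alternative sketch uses a boolean mask and in-place assignment instead of np.where; note the array has to be cast to float first, since NaN cannot be stored in an integer array:
import numpy as np

a = np.arange(0, 11).astype(float)   # float dtype so NaN can be assigned
np.random.shuffle(a)

# Boolean mask of the elements strictly between 4 and 8, then assign NaN in place.
a[(a > 4) & (a < 8)] = np.nan
print(a)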

pandas how to derive values for a new column based on another column

I have a dataframe with a column in which each value is a list. Now I want to derive a new column that only considers lists whose size is greater than 1, and assigns a unique integer to the corresponding row as an id.
A sample dataframe is like,
document_no_list    cluster_id
[1, 2, 3]           1
[4, 5, 6, 7]        2
[8]                 nan
[9, 10]             3
Column cluster_id only considers the 1st, 2nd and 4th rows, each of whose lists has a size greater than 1, and assigns a unique integer id to the corresponding cell in the column.
I am wondering how to do that in pandas.
We can use np.random.choice for unique random values together with .loc for assignment, i.e.
df = pd.DataFrame({'document_no_list': [[1, 2, 3], [4, 5, 6, 7], [8], [9, 10]]})
x = df['document_no_list'].apply(len) > 1
df.loc[x, 'Cluster'] = np.random.choice(range(len(df)), x.sum(), replace=False)
Output :
document_no_list Cluster
0 [1, 2, 3] 2.0
1 [4, 5, 6, 7] 1.0
2 [8] NaN
3 [9, 10] 3.0
If you want continuous numbers then you can use
df.loc[x,'Cluster'] = np.arange(x.sum())+1
document_no_list Cluster
0 [1, 2, 3] 1.0
1 [4, 5, 6, 7] 2.0
2 [8] NaN
3 [9, 10] 3.0
Hope it helps
Alternatively, create a boolean column based on the condition and apply cumsum() on the rows with 1's:
df['cluster_id'] = df['document_no_list'].apply(lambda x: len(x)> 1).astype(int)
df.loc[df['cluster_id'] == 1, 'cluster_id'] = df.loc[df['cluster_id'] == 1, 'cluster_id'].cumsum()
document_no_list cluster_id
0 [1, 2, 3] 1
1 [4, 5, 6, 7] 2
2 [8] 0
3 [9, 10] 3
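If you want NaN rather than 0 for the rows that do not qualify, as in the sample output, one possible variant of the cumsum approach uses Series.where to blank out the non-qualifying rows:
mask = df['document_no_list'].apply(len) > 1
# cumsum() numbers the qualifying rows 1, 2, 3, ...; where() leaves the rest as NaN.
df['cluster_id'] = mask.cumsum().where(mask)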

Why is order of data items reversed while creating a pandas series?

I am new to python and pandas, so please bear with me. I tried searching for the answer everywhere but couldn't find it. Here's my question:
This is my input code:
list = [1, 2, 3, 1, 2, 3]
s = pd.Series([1, 2, 3, 10, 20, 30], list)
The output is:
1 1
2 2
3 3
1 10
2 20
3 30
dtype: int64
Now, my question is: why does the "list" come before the first list specified while creating the series? I tried running the same code multiple times to check whether the series creation is orderless. Any help would be highly appreciated.
Python Version:
Python 3.6.0
Pandas Version:
'0.19.2'
I think you omitted the index parameter, which specifies the first column, called the index - so the Series construction is now:
# don't use list as a variable name, because it shadows the built-in list in python
L = [1, 2, 3, 1, 2, 3]
s = pd.Series(data=[1, 2, 3, 10, 20, 30], index=L)
print (s)
1 1
2 2
3 3
1 10
2 20
3 30
dtype: int64
You can also check the Series documentation.
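So nothing is reversed: the second positional argument to pd.Series is the index, not more data. A small sketch contrasting the two calls (assuming the intent was either an explicit index or a single list of data):
import pandas as pd

L = [1, 2, 3, 1, 2, 3]

# Second positional argument is the index: labels 1,2,3,1,2,3 paired with data 1,2,3,10,20,30.
s1 = pd.Series([1, 2, 3, 10, 20, 30], L)

# With no index passed, a default RangeIndex 0..5 is used instead.
s2 = pd.Series([1, 2, 3, 10, 20, 30])

print(s1)
print(s2)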
