How to select rows and columns that meet criteria from a list - python-3.x

Let's say I've got a pandas dataframe that looks like:
df1 = pd.DataFrame({"Item ID": ["A", "B", "C", "D", "E"],
                    "Value1": [1, 2, 3, 4, 0],
                    "Value2": [4, 5, 1, 8, 7],
                    "Value3": [3, 8, 1, 2, 0],
                    "Value4": [4, 5, 7, 9, 4]})
print(df1)
  Item ID  Value1  Value2  Value3  Value4
0       A       1       4       3       4
1       B       2       5       8       5
2       C       3       1       1       7
3       D       4       8       2       9
4       E       0       7       0       4
Now I've got a second dataframe that looks like:
df2 = {"Item ID":["A", "C", "D"], "Value5":[4, 5, 7]}
print(df2)
Item_ID Value5
0 A 4
1 C 5
2 D 7
What I want to do is find where the Item IDs match between my two dataframes, and then add the "Value5" column values to the intersection of those rows AND ONLY columns Value1 and Value2 from df1 (these columns could change on every iteration, so they need to be contained in a variable).
My output should show:
4 added to Row A, columns "Value1" and "Value2"
5 added to Row C, columns "Value1" and "Value2"
7 added to Row D, columns "Value1" and "Value2"
  Item ID  Value1  Value2  Value3  Value4
0       A       5       8       3       4
1       B       2       5       8       5
2       C       8       6       1       7
3       D      11      15       2       9
4       E       0       7       0       4
Of course, my data is many thousands of rows long. I can do it using a for loop, but that takes way too long. I want to be able to vectorize this in some way. Any ideas?
This is what I ended up doing, based on sammywemmy's suggestion:
#list of the columns to update (just Value1 and Value2 here;
#it can be any subset of the value columns)
names = ['Value1', 'Value2']
#merge df1 and df2 on 'Item ID'
merged = df1.merge(df2, on='Item ID', how='outer')
#Value5 is NaN where an Item ID has no match in df2, so treat it as 0
merged['Value5'] = merged['Value5'].fillna(0)
for name in names:
    #using assign and **, we can bring in variable column names with assign,
    #then add our Value5 column
    merged = merged.assign(**{name: merged[name] + merged['Value5']})
#only keep the columns up to and including 'Value4'
df1 = merged.loc[:, :'Value4']
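The merge-and-loop can also be collapsed into a single aligned addition; a minimal sketch, assuming df1 and df2 as defined above and the target columns held in a list:
cols = ['Value1', 'Value2']
#align Value5 to df1's rows via Item ID; rows without a match add 0
add = df1['Item ID'].map(df2.set_index('Item ID')['Value5']).fillna(0)
df1[cols] = df1[cols].add(add, axis=0)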

Try this:
#set 'Item ID' as the index
df1 = df1.set_index('Item ID')
df2 = df2.set_index('Item ID')
#create list of columns that you are interested in
list_of_cols = ['Value1','Value2']
#create two separate dataframes
#unselected will not contain the columns you want to add
unselected = df1.drop(list_of_cols,axis=1)
#this will contain the columns you wish to add
selected = df1.filter(list_of_cols)
#reindex df2 so it has the same indices as df1
#then convert to a series
#fill the null values with 0
A = df2.reindex(index=selected.index,fill_value=0).loc[:,'Value5']
#add the series A to selected
selected = selected.add(A,axis='index')
#combine selected and unselected into one dataframe
result = pd.concat([unselected,selected],axis=1)
#this part is extra, to get your dataframe back to the way it was
#the assumption here is that the columns are Value1, Value2, and so on,
#so they sort as 1 > 2 > 3
#if your columns are not actually named Value1, Value2, etc.,
#then a different sort key has to be used
#alternatively, before the calculations you could create a mapping
#of the columns to numbers; that gives you a sorting mechanism
#to restore your dataframe after the calculations are complete
columns = sorted(result.columns, key=lambda x: x[-1])
#reindex back to the original column order
result = result.reindex(columns, axis='columns')
print(result)
         Value1  Value2  Value3  Value4
Item ID
A             5       8       3       4
B             2       5       8       5
C             8       6       1       7
D            11      15       2       9
E             0       7       0       4
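The last-character sort can be avoided entirely, assuming you capture the original column order before splitting the dataframe; then a single reindex restores the layout:
#capture the column order up front, before building unselected/selected
original_order = df1.columns
#...run the add/concat steps shown above, then:
result = result.reindex(original_order, axis='columns')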
Alternative solution, using python's built-in dictionaries:
#create dictionaries
dict1 = (df1
         #create a temporary column
         #and set it as the index
         .assign(temp=df1['Item ID'])
         .set_index('temp')
         .to_dict('index')
         )
dict2 = (df2
         .assign(temp=df2['Item ID'])
         .set_index('temp')
         .to_dict('index')
         )
list_of_cols = ['Value1', 'Value2']
#check for keys that are in both dict1 and dict2
intersected_keys = dict1.keys() & dict2.keys()
key_value_pair = [(key, col) for key in intersected_keys
                  for col in list_of_cols]
#loop through dict1 and add the values from dict2
#this can be optimized with a dict comprehension;
#leaving it as is for better clarity IMHO
for key, val in key_value_pair:
    dict1[key][val] = dict1[key][val] + dict2[key]['Value5']
#print(dict1)
{'A': {'Item ID': 'A', 'Value1': 5, 'Value2': 8, 'Value3': 3, 'Value4': 4},
'B': {'Item ID': 'B', 'Value1': 2, 'Value2': 5, 'Value3': 8, 'Value4': 5},
'C': {'Item ID': 'C', 'Value1': 8, 'Value2': 6, 'Value3': 1, 'Value4': 7},
'D': {'Item ID': 'D', 'Value1': 11, 'Value2': 15, 'Value3': 2, 'Value4': 9},
'E': {'Item ID': 'E', 'Value1': 0, 'Value2': 7, 'Value3': 0, 'Value4': 4}}
#create dataframe
pd.DataFrame.from_dict(dict1, orient='index').reset_index(drop=True)
  Item ID  Value1  Value2  Value3  Value4
0       A       5       8       3       4
1       B       2       5       8       5
2       C       8       6       1       7
3       D      11      15       2       9
4       E       0       7       0       4

Related

Removing rows from a DataFrame based on different conditions applied to a subset of the data

Here is the Dataframe I am working with:
You can create it using the snippet:
my_dict = {'id': [1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 3, 1, 3, 3],
           'category': ['a', 'a', 'b', 'b', 'b', 'b', 'a', 'a', 'b', 'b', 'b', 'a', 'a', 'a'],
           'value': [1, 12, 34, 12, 12, 34, 12, 35, 34, 45, 65, 55, 34, 25]}
x = pd.DataFrame(my_dict)
x
I want to filter IDs based on the condition: for category a, the count of values should be 2 and for category b, the count of values should be 3. Therefore, I would remove id 1 from category a and id 3 from category b from my original dataset x.
I can write the code for individual categories and start removing ids manually with:
x.query('category == "a"').groupby('id').value.count().loc[lambda x: x != 2]
x.query('category == "b"').groupby('id').value.count().loc[lambda x: x != 3]
But I don't want to do it manually since there are multiple categories. Is there a better way of doing it by considering all the categories at once and removing ids based on a condition listed in a list/dictionary?
If you need to filter the MultiIndex Series s by the dictionary, use Index.get_level_values with Series.map and keep the per-group counts that match via boolean indexing:
s = x.groupby(['category', 'id']).value.count()
d = {'a': 2, 'b': 3}
print(s[s.eq(s.index.get_level_values(0).map(d))])
category  id
a         2     2
          3     2
b         1     3
          2     3
Name: value, dtype: int64
If you need to filter the original DataFrame:
s = x.groupby(['category', 'id'])['value'].transform('count')
print(s)
0     3
1     2
2     3
3     3
4     3
5     3
6     3
7     2
8     3
9     3
10    1
11    3
12    2
13    2
Name: value, dtype: int64
d = {'a': 2, 'b': 3}
print(x[s.eq(x['category'].map(d))])
    id category  value
1    2        a     12
2    1        b     34
3    2        b     12
4    1        b     12
5    2        b     34
7    2        a     35
8    1        b     34
9    2        b     45
12   3        a     34
13   3        a     25
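An equivalent, though usually slower, spelling keeps whole groups with GroupBy.filter; a minimal sketch assuming the same x and d as above, and that d covers every category:
#g.name is the (category, id) tuple identifying each group
out = x.groupby(['category', 'id']).filter(lambda g: len(g) == d[g.name[0]])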

Create key value pair dict for each new line

I have a data frame of 4 columns. I need to create a {key: value} dictionary for 2 of those columns, where this {key: value} pair should be created for each separate line in the data frame. Please refer to the example below:
df >>
   a  b  c  d
0  1  2  3  4
1  9  8  7  6
Expected output >>
   a  b  c  d  new-column
0  1  2  3  4  {a:1, b:2}
1  9  8  7  6  {a:9, b:8}
You can use to_dict, craft a Series and join to the original DataFrame:
df2 = df.join(pd.Series(df[['a','b']].to_dict('index'), name='new_column'))
Output:
   a  b  c  d        new_column
0  1  2  3  4  {'a': 1, 'b': 2}
1  9  8  7  6  {'a': 9, 'b': 8}
You can also assign the values of the dictionary returned by to_dict('index') to a new column:
df['new-column'] = df[['a','b']].to_dict('index').values()
print(df)
   a  b  c  d        new-column
0  1  2  3  4  {'a': 1, 'b': 2}
1  9  8  7  6  {'a': 9, 'b': 8}
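One caveat: assigning the raw dict_values relies on the dictionary's insertion order matching the row order. A small, alignment-safe variant wraps it in a Series so pandas matches on the index instead:
#the Series is keyed by the index labels, so assignment aligns by index
df['new-column'] = pd.Series(df[['a', 'b']].to_dict('index'))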

Can I apply vectorization here? Or should I think about this differently?

To put it simply, I have rows of activity that happen in a given month of the year. I want to append additional rows of inactivity in between this activity, while resetting the month values into a sequence. For example, if I have months 2, 5, 7, I need to map these to 1, 4, 7, while my inactive months happen in 2, 3, 5, and 6. So I would have to add four rows for this inactivity.
I've done this with dictionaries and for loops, but I know this is not efficient, especially when I move to thousands of rows of data to process. Any suggestions on how to optimize this? Do I need to think about the data format differently? I've had a suggestion to build lists and then move those to the dataframe at the end, but I don't see a huge gain there. I don't know enough NumPy to figure out how to do this with vectorization, but it's super fast and it would be awesome to learn something new. Below is my code with the steps I took:
df = pd.DataFrame({'col1': ['A','A', 'B','B','B','C','C'], 'col2': ['X','Y','X','Y','Z','Y','Y'], 'col3': [1, 8, 2, 5, 7, 6, 7]})
Output:
  col1 col2  col3
0    A    X     1
1    A    Y     8
2    B    X     2
3    B    Y     5
4    B    Z     7
5    C    Y     6
6    C    Y     7
I'm creating dictionaries to handle this in for loops:
df1 = df.groupby('col1')['col3'].apply(list).to_dict()
df2 = df.groupby('col1')['col2'].apply(list).to_dict()
max_num = max(df.col3)
Output:
{'A': [1, 8], 'B': [2, 5, 7], 'C': [6, 7]}
{'A': ['X', 'Y'], 'B': ['X', 'Y', 'Z'], 'C': ['Y', 'Y']}
8
And now I'm adding those rows using my dictionaries by creating a new data frame:
df_new = pd.DataFrame({'col1': [], 'col2': [], 'col3': []})
for key in df1.keys():
    k = 1
    if list(df1[key])[-1] - list(df1[key])[0] + 1 < max_num:
        for i in list(range(list(df1[key])[0], list(df1[key])[-1] + 1, 1)):
            if i in df1[key]:
                df_new = df_new.append({'col1': key, 'col2': list(df2[key])[list(df1[key]).index(i)], 'col3': str(k)}, ignore_index=True)
            else:
                df_new = df_new.append({'col1': key, 'col2': 'N', 'col3': str(k)}, ignore_index=True)
            k += 1
        df_new = df_new.append({'col1': key, 'col2': 'E', 'col3': str(k)}, ignore_index=True)
    else:
        for i in list(range(list(df1[key])[0], list(df1[key])[-1] + 1, 1)):
            if i in df1[key]:
                df_new = df_new.append({'col1': key, 'col2': list(df2[key])[list(df1[key]).index(i)], 'col3': str(k)}, ignore_index=True)
            else:
                df_new = df_new.append({'col1': key, 'col2': 'N', 'col3': str(k)}, ignore_index=True)
            k += 1
Output:
   col1 col2 col3
0     A    X    1
1     A    N    2
2     A    N    3
3     A    N    4
4     A    N    5
5     A    N    6
6     A    N    7
7     A    Y    8
8     B    X    1
9     B    N    2
10    B    N    3
11    B    Y    4
12    B    N    5
13    B    Z    6
14    B    E    7
15    C    Y    1
16    C    Y    2
17    C    E    3
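For what it's worth, the list-based suggestion mentioned above would look roughly like this: collect plain dicts and build the frame once at the end, since repeated DataFrame.append copies the whole frame on every call (and was removed in pandas 2.0). A sketch that omits the 'E' end-marker logic for brevity:
rows = []
for key in df1.keys():
    months = df1[key]
    for k, month in enumerate(range(months[0], months[-1] + 1), start=1):
        #reuse the activity label if the month is active, else mark inactive
        col2 = df2[key][months.index(month)] if month in months else 'N'
        rows.append({'col1': key, 'col2': col2, 'col3': str(k)})
df_new = pd.DataFrame(rows)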
And then I pivot it to the form I want:
df_pivot = df_new.pivot(index='col1', columns='col3', values='col2')
Output:
col3  1  2  3    4    5    6    7    8
col1
A     X  N  N    N    N    N    N    Y
B     X  N  N    Y    N    Z    E  NaN
C     Y  Y  E  NaN  NaN  NaN  NaN  NaN
Thanks for the help.
We can replace the steps of creating and using dictionaries with the statement below, which uses reindex to place the additional N and E values without explicit loops.
df_new = (df.set_index('col3')
            .groupby('col1')
            .apply(lambda dg: dg.drop('col1', axis=1)
                   .reindex(range(dg.index.min(), dg.index.max() + 1), fill_value='N')
                   .reindex(range(dg.index.min(), min(max_num, dg.index.max() + 1) + 1), fill_value='E')
                   .set_index(pd.RangeIndex(1, min(max_num, dg.index.max() - dg.index.min() + 1 + 1) + 1, name='col3')))
            .reset_index())
After this, you can apply your pivot statement as it is.
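That is, the same statement from the question:
df_pivot = df_new.pivot(index='col1', columns='col3', values='col2')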

Splitting dictionary/list into Separate Columns

I have a movie dataset saved for revenue prediction. However, the genres column of this dataset holds dictionaries, and a single row can contain a list of 2 or more of them. The DataFrame below is not the actual dataframe, but it is similar:
df = pd.DataFrame({'a':[1,2,3], 'b':[{'c':1}, [{'c':4},{'d':3}], [{'c':5, 'd':6},{'c':7, 'd':8}]]})
this is output
   a                                      b
0  1                               {'c': 1}
1  2                   [{'c': 4}, {'d': 3}]
2  3  [{'c': 5, 'd': 6}, {'c': 7, 'd': 8}]
I need to split this column into separate columns.
How can I do that? I used the apply(pd.Series) method; this is what I'm getting as output:
                  0                 1    c
0               NaN               NaN  1.0
1          {'c': 4}          {'d': 3}  NaN
2  {'c': 5, 'd': 6}  {'c': 7, 'd': 8}  NaN
but I want it like this, if possible:
   a    c    d
0  1    1  NaN
1  2    4    3
2  3  5,7  6,8
I do not know if it is possible to achieve what you want using apply(pd.Series), because you have mixed types in your 'b' column: you have dictionaries and lists of dictionaries. Maybe it is, I'm not sure.
However, this is how I would do it.
First, loop over the column to build a set of all the new column names: that is, the keys of the dictionaries.
Then use apply with a custom function to extract the value for each column.
Notice that the values in these new columns are strings, which is needed because you want to concatenate cases like your row #2 with a comma.
import numpy as np

newcols = set()
for el in df['b']:
    if isinstance(el, dict):
        newcols.update(el.keys())
    elif isinstance(el, list):
        for i in el:
            newcols.update(i.keys())

def extractvalues(x, col):
    if isinstance(x['b'], dict):
        return x['b'].get(col, np.nan)
    elif isinstance(x['b'], list):
        return ','.join(str(i.get(col, '')) for i in x['b']).strip(',')

for nc in newcols:
    df[nc] = df.apply(lambda r: extractvalues(r, nc), axis=1)
df.drop('b', axis=1, inplace=True)
Your dataframe is now:
   a    c    d
0  1    1  NaN
1  2    4    3
2  3  5,7  6,8
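Newer pandas versions also offer a mostly-vectorized route with explode and json_normalize; a rough sketch under the same df, assuming every cell of 'b' is a dict or a list of dicts and the values are integers:
import numpy as np
import pandas as pd

#wrap bare dicts in a one-element list so every cell is a list of dicts
b = df['b'].apply(lambda v: v if isinstance(v, list) else [v])
s = b.explode()
#one column per dict key, re-attached to the original row labels
wide = pd.json_normalize(s.tolist()).set_index(s.index)
#collapse the exploded duplicates back to one row per original index,
#joining multiple values with commas
agg = wide.groupby(level=0).agg(
    lambda c: ','.join(str(int(v)) for v in c.dropna()) or np.nan)
out = df.drop(columns='b').join(agg)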

How to trim and reshape dataframe?

I have df that looks like this:
a  b   c  d  e  f
1  na  2  3  4  5
1  na  2  3  4  5
1  na  2  3  4  5
1  6   2  3  4  5
How do I trim and reshape the dataframe so that for every column the n/a rows are dropped and the dataframe looks like this:
a  b  c  d  e  f
1  6  2  3  4  5
This dataframe has millions of rows. I need to be able to drop the n/a rows by column while retaining rows and columns with data in them.
Edit: df.dropna() is dropping all the rows. When I check whether the columns with n/a are empty, df.column_name.empty gives False, so there is data in the columns with n/a.
For me, dropna works nicely for removing missing values and Nones:
df = df.dropna()
print(df)
   a    b  c  d  e  f
3  1  6.0  2  3  4  5
But if there are multiple possible values to remove, create a mask with isin, chain a test for missing values with isnull, and finally filter with any(axis=1), so that every row with at least one True is excluded by the inverted mask ~:
df = pd.DataFrame({'a': ['a', None, 's', 'd'],
                   'b': ['na', 7, 2, 6],
                   'c': [2, 2, 2, 2],
                   'd': [3, 3, 3, 3],
                   'e': [4, 4, np.nan, 4],
                   'f': [5, 5, 5, 5]})
print(df)
      a   b  c  d    e  f
0     a  na  2  3  4.0  5
1  None   7  2  3  4.0  5
2     s   2  2  3  NaN  5
3     d   6  2  3  4.0  5

df1 = df.dropna()
print(df1)
   a   b  c  d    e  f
0  a  na  2  3  4.0  5
3  d   6  2  3  4.0  5

mask = (df.isin(['na', 'n/a']) | df.isnull()).any(axis=1)
df2 = df[~mask]
print(df2)
   a  b  c  d    e  f
3  d  6  2  3  4.0  5
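An equivalent spelling, in case it reads more naturally, converts the string placeholders to real NaNs first and then lets dropna do all the work; a small sketch assuming the same df and placeholder list as above:
#turn 'na'/'n/a' strings into real missing values, then drop as usual
df3 = df.replace(['na', 'n/a'], np.nan).dropna()
print(df3)
   a  b  c  d    e  f
3  d  6  2  3  4.0  5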
