delete duplicated rows based on conditions pandas - python-3.x

I want to delete rows in a dataframe if (x1, x2, x3) are the same across different rows, and save the ids of the deleted rows in a variable.
For example, with this data, I want to delete the second row:
d = {'id': ["i1", "i2", "i3", "i4"], 'x1': [13, 13, 61, 61], 'x2': [10, 10, 13, 13], 'x3': [12, 12, 2, 22], 'x4': [24, 24, 9, 12]}
df = pd.DataFrame(data=d)

#input data
import pandas as pd

d = {'id': ["i1", "i2", "i3", "i4"], 'x1': [13, 13, 61, 61], 'x2': [10, 10, 13, 13], 'x3': [12, 12, 2, 22], 'x4': [24, 24, 9, 12]}
df = pd.DataFrame(data=d)
#create new column where contents from x1, x2 and x3 columns are merged
df['MergedColumn'] = df[df.columns[1:4]].apply(lambda x: ','.join(x.dropna().astype(str)),axis=1)
#remove duplicates based on the created column and drop created column
df1 = pd.DataFrame(df.drop_duplicates("MergedColumn", keep='first').drop(columns="MergedColumn"))
#print output dataframe
print(df1)
#merge two dataframes
df2 = pd.merge(df, df1, how='left', on = 'id')
#find rows with null values in the right table (rows that were removed)
df2 = df2[df2['x1_y'].isnull()]
#prints ids of rows that were removed
print(df2['id'])
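For completeness, a shorter route (a minimal sketch, not part of the answer above): DataFrame.duplicated can flag repeated (x1, x2, x3) combinations directly, without building a merged helper column:
# duplicated() marks every repeat of an (x1, x2, x3) combination after its first occurrence
dup_mask = df.duplicated(subset=['x1', 'x2', 'x3'], keep='first')
removed_ids = df.loc[dup_mask, 'id']   # ids of the rows that get dropped
df1 = df[~dup_mask]                    # dataframe with the duplicates removed
print(removed_ids)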

Related

Plotting aggregated values from specific column of multiple dataframe indexed by timedate

I have three dataframes as below:
import pandas as pd
labels=['1','2','3','Aggregated']
df1 = pd.DataFrame({'date_time': ["2022-10-06 17:23:11","2022-10-06 17:23:12","2022-10-06 17:23:13","2022-10-06 17:23:14","2022-10-06 17:23:15","2022-10-06 17:23:16"],
                    'value': [4, 5, 6, 7, 8, 9]})
df2 = pd.DataFrame({'date_time': ["2022-10-06 17:23:13","2022-10-06 17:23:14","2022-10-06 17:23:15","2022-10-06 17:23:16","2022-10-06 17:23:17","2022-10-06 17:23:18"],
                    'value': [4, 5, 6, 7, 8, 9]})
df3 = pd.DataFrame({'date_time': ["2022-10-06 17:23:16","2022-10-06 17:23:17","2022-10-06 17:23:18","2022-10-06 17:23:19","2022-10-06 17:23:20","2022-10-06 17:23:21"],
                    'value': [4, 5, 6, 7, 8, 9]})
I need to create another dataframe df that contains all the datetime elements from all three of df1, df2, df3, such that values with a common timestamp (ignoring the millisecond part) are summed up, as shown below.
df=
{'date_time': ["2022-10-06 17:23:11","2022-10-06 17:23:12","2022-10-06 17:23:13","2022-10-06 17:23:14","2022-10-06 17:23:15","2022-10-06 17:23:16","2022-10-06 17:23:17","2022-10-06 17:23:18","2022-10-06 17:23:19","2022-10-06 17:23:20","2022-10-06 17:23:21"],
'value': [4, 5, 6+4, 7+5, 8+6, 9+7+4, 8+5, 9+6, 7, 8, 9]}
For adding the columns I used the following:
df = (pd.concat([df1,df2,df3],axis=0)).groupby('date_time')['value'].sum().reset_index()
For plotting I used the following, which results in df2 and df3 being time-shifted towards df1.
for dataFrame, number in zip([df1, df2, df3, df], labels):
    dataFrame["value"].plot(label=number)
How can I plot the three df1,df2,df3 without time shifting and also plot the aggregated df on the same plot for dataframe column 'value'?
IIUC you are looking for something like this:
import matplotlib.pyplot as plt

labels = ["Aggregated", "1", "2", "3"]
color_dict = {"Aggregated": "orange", "1": "darkred", "2": "green", "3": "blue"}
fig, ax = plt.subplots()
for i, (d, label) in enumerate(zip([df, df1, df2, df3], labels), 1):
    ax.plot(d["date_time"], d["value"], lw=3/i, color=color_dict[label], label=label)
plt.setp(ax.get_xticklabels(), ha="right", rotation=45)
plt.legend()
plt.show()
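One extra detail (an assumption, since the question builds the frames from string timestamps): parsing date_time with pd.to_datetime before plotting keeps all four lines on a single real time axis instead of four separate categorical axes:
# assumes df, df1, df2, df3 still hold date_time as strings
for d in (df, df1, df2, df3):
    d['date_time'] = pd.to_datetime(d['date_time'])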

pyspark: for loop calculations over the columns

Does anyone know how I can do these calculations in pyspark?
data = {
    'Name': ['Tom', 'nick', 'krish', 'jack'],
    'Age': [20, 21, 19, 18],
    'CSP': [2, 6, 8, 7],
    'coef': [2, 2, 3, 3]
}
# Create DataFrame
df = pd.DataFrame(data)
colsToRecalculate = ['Age', 'CSP']
for i in range(len(colsToRecalculate)):
    df[colsToRecalculate[i]] = df[colsToRecalculate[i]] / df["coef"]
You can use select() on a Spark dataframe and include multiple columns (with different calculations) as parameters. In your case:
from pyspark.sql import functions as F

df2 = spark.createDataFrame(pd.DataFrame(data))
df2.select(*[(F.col(c) / F.col('coef')).alias(c) for c in colsToRecalculate], 'coef').show()
A slight variation on bzu's answer, which lists the untouched columns manually inside the select. We can iterate over dataframe.columns and check each column against the colsToRecalculate list: if the column is in the list, do the calculation, else keep the column as is.
from pyspark.sql import functions as func

data_sdf. \
    select(*[(func.col(k) / func.col('coef')).alias(k) if k in colsToRecalculate else k for k in data_sdf.columns])
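If something closer to the original pandas loop is preferred, a withColumn loop gives the same result (a minimal sketch, assuming an active SparkSession named spark, as in the answers above):
from pyspark.sql import functions as F

sdf = spark.createDataFrame(pd.DataFrame(data))
for c in colsToRecalculate:
    # overwrite each listed column with its value divided by coef
    sdf = sdf.withColumn(c, F.col(c) / F.col('coef'))
sdf.show()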

Select rows from a DataFrame based on different combination of values in a column in pandas

Example: I have two data frames, df1 and df2.
Q. I want to select all rows of df2 in a for loop based on 'favorite_color'. If
'favorite_color' is "'blue','yellow'", then return all rows of df2 whose 'favorite_color'
is 'blue', or 'yellow', or both 'blue' and 'yellow'.
raw_data = {'name': ['Willard Morris', 'Al Jennings', 'Omar Mullins', 'Spencer McDaniel'],
'age': [20, 19, 22, 21],
'favorite_color': ['blue', "'blue','yellow'", 'yellow', "'green','yellow'"],
'grade': [88, 92, 95, 70]}
df1 = pd.DataFrame(raw_data)
df1.head()
raw_data = {'name': ['Willard Morris', 'Al Jennings', 'Omar Mullins', 'Spencer McDaniel', 'Omar', 'Spencer', 'abds'],
            'age': [20, 19, 22, 21, 24, 30, 35],
            'favorite_color': ['blue', "'blue','yellow'", "'green','yellow'", "green", "'green','blue'", 'yellow', "'blue','green','yellow'"],
            'grade': [88, 92, 95, 70, 80, 75, 85]}
df2 = pd.DataFrame(raw_data)
df2.head(7)
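No answer is quoted here; one possible approach (a hedged sketch, assuming the multi-colour cells should be split on commas and stripped of their quotes) is to turn each cell into a set of colours and keep the rows that share at least one colour with the requested set:
def colors_in(cell):
    # "'blue','yellow'" -> {'blue', 'yellow'}
    return {c.strip().strip("'\"") for c in cell.split(',')}

wanted = {'blue', 'yellow'}
mask = df2['favorite_color'].apply(lambda cell: bool(colors_in(cell) & wanted))
print(df2[mask])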

Loop over columns and calculate relative change within groups

I have a dataset (df) and want to achieve df_goal, that is, create new variables that capture the relative change within groups for VALUE1 and VALUE2. In my real dataset I have a lot of columns, so I want a solution that loops over the columns and adds new ones along the way.
I have tried versions of the snippet below, but it doesn't work. Any suggestions?
for col in df.columns:
    df[col + '_REL_CHG'] = df.groupby(['GROUP']).apply((df.col / dfcol[0]) * 100)
import pandas as pd
df = pd.DataFrame({'GROUP': ['A', 'A', 'A', 'B', 'B', 'B'],
'VALUE1': [5, 6, 7, 3, 5, 8],
'VALUE2': [11, 16, 21, 321, 401, 423]})
df_goal = pd.DataFrame({'GROUP': ['A', 'A', 'A', 'B', 'B', 'B'],
'VALUE1': [5, 6, 7, 3, 5, 8],
'VALUE2': [11, 16, 21, 321, 401, 423],
'VALUE1_REL_CHG': [100, 120, 140, 100, 167, 267],
'VALUE2_REL_CHG': [100, 145, 191, 100, 125, 132]})
You can use GroupBy.transform with 'first' to get the first value per group for all columns defined in the list cols, divide with DataFrame.div, multiply by 100, round and convert to integers, add a suffix with DataFrame.add_suffix, and finally join back to the original:
cols = ['VALUE1','VALUE2']
df = (df.join(df[cols].div(df.groupby(['GROUP'])[cols].transform('first'))
                      .mul(100)
                      .round()
                      .astype(int)
                      .add_suffix('_REL_CHG')))
print (df)
  GROUP  VALUE1  VALUE2  VALUE1_REL_CHG  VALUE2_REL_CHG
0 A 5 11 100 100
1 A 6 16 120 145
2 A 7 21 140 191
3 B 3 321 100 100
4 B 5 401 167 125
5 B 8 423 267 132
Your solution can be changed to use a lambda function, but it is slower for a large DataFrame:
for col in cols:
    df[col + '_REL_CHG'] = df.groupby(['GROUP'])[col].apply(lambda x: (x / x.iloc[0]) * 100)
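If the real dataset has many value columns, the list cols does not have to be typed out by hand; a minimal sketch, assuming every column except GROUP should be rescaled:
# every column except the grouping column
cols = [c for c in df.columns if c != 'GROUP']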

How can I iterate over pandas dataframes and concatenate on another dataframe [duplicate]

I have 3 CSV files. Each has the first column as the (string) names of people, while all the other columns in each dataframe are attributes of that person.
How can I "join" together all three CSV documents to create a single CSV with each row having all the attributes for each unique value of the person's string name?
The join() function in pandas specifies that I need a multiindex, but I'm confused about what a hierarchical indexing scheme has to do with making a join based on a single index.
Zero's answer is basically a reduce operation. If I had more than a handful of dataframes, I'd put them in a list like this (generated via list comprehensions or loops or whatnot):
dfs = [df0, df1, df2, ..., dfN]
Assuming they have a common column, like name in your example, I'd do the following:
import functools as ft
df_final = ft.reduce(lambda left, right: pd.merge(left, right, on='name'), dfs)
That way, your code should work with whatever number of dataframes you want to merge.
You could try this if you have 3 dataframes:
# Merge multiple dataframes
import numpy as np
import pandas as pd

df1 = pd.DataFrame(np.array([
['a', 5, 9],
['b', 4, 61],
['c', 24, 9]]),
columns=['name', 'attr11', 'attr12'])
df2 = pd.DataFrame(np.array([
['a', 5, 19],
['b', 14, 16],
['c', 4, 9]]),
columns=['name', 'attr21', 'attr22'])
df3 = pd.DataFrame(np.array([
['a', 15, 49],
['b', 4, 36],
['c', 14, 9]]),
columns=['name', 'attr31', 'attr32'])
pd.merge(pd.merge(df1,df2,on='name'),df3,on='name')
Alternatively, as mentioned by cwharland:
df1.merge(df2,on='name').merge(df3,on='name')
This is an ideal situation for the join method
The join method is built exactly for these types of situations. You can join any number of DataFrames together with it. The calling DataFrame joins with the index of the collection of passed DataFrames. To work with multiple DataFrames, you must put the joining columns in the index.
The code would look something like this:
filenames = ['fn1', 'fn2', 'fn3', 'fn4',....]
dfs = [pd.read_csv(filename, index_col=index_col) for filename in filenames]
dfs[0].join(dfs[1:])
With #zero's data, you could do this:
df1 = pd.DataFrame(np.array([
['a', 5, 9],
['b', 4, 61],
['c', 24, 9]]),
columns=['name', 'attr11', 'attr12'])
df2 = pd.DataFrame(np.array([
['a', 5, 19],
['b', 14, 16],
['c', 4, 9]]),
columns=['name', 'attr21', 'attr22'])
df3 = pd.DataFrame(np.array([
['a', 15, 49],
['b', 4, 36],
['c', 14, 9]]),
columns=['name', 'attr31', 'attr32'])
dfs = [df1, df2, df3]
dfs = [df.set_index('name') for df in dfs]
dfs[0].join(dfs[1:])
attr11 attr12 attr21 attr22 attr31 attr32
name
a 5 9 5 19 15 49
b 4 61 14 16 4 36
c 24 9 4 9 14 9
In Python 3.6.3 with pandas 0.22.0 you can also use concat, as long as you set the columns you want to use for the joining as the index:
pd.concat(
objs=(iDF.set_index('name') for iDF in (df1, df2, df3)),
axis=1,
join='inner'
).reset_index()
where df1, df2, and df3 are defined as in John Galt's answer:
import numpy as np
import pandas as pd
df1 = pd.DataFrame(np.array([
['a', 5, 9],
['b', 4, 61],
['c', 24, 9]]),
columns=['name', 'attr11', 'attr12']
)
df2 = pd.DataFrame(np.array([
['a', 5, 19],
['b', 14, 16],
['c', 4, 9]]),
columns=['name', 'attr21', 'attr22']
)
df3 = pd.DataFrame(np.array([
['a', 15, 49],
['b', 4, 36],
['c', 14, 9]]),
columns=['name', 'attr31', 'attr32']
)
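A note on the join parameter (an addition, not part of the quoted answer): join='inner' keeps only names present in every frame, while the default join='outer' keeps every name and fills the missing attributes with NaN:
# keep every name from every frame; attributes a frame lacks become NaN
pd.concat(
    objs=(iDF.set_index('name') for iDF in (df1, df2, df3)),
    axis=1,
    join='outer'
).reset_index()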
This can also be done as follows for a list of dataframes df_list:
df = df_list[0]
for df_ in df_list[1:]:
    df = df.merge(df_, on='join_col_name')
or if the dataframes are in a generator object (e.g. to reduce memory consumption):
df = next(df_list)
for df_ in df_list:
    df = df.merge(df_, on='join_col_name')
Simple Solution:
If the column names are the same:
df1.merge(df2,on='col_name').merge(df3,on='col_name')
If the column names are different:
df1.merge(df2,left_on='col_name1', right_on='col_name2').merge(df3,left_on='col_name1', right_on='col_name3').drop(columns=['col_name2', 'col_name3']).rename(columns={'col_name1':'col_name'})
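An equivalent, slightly easier-to-read variant (a sketch using the same hypothetical column names): rename the differing key columns up front so every frame shares 'col_name', then chain plain merges:
# rename the join keys first, then merge on the shared name
merged = (df1.rename(columns={'col_name1': 'col_name'})
             .merge(df2.rename(columns={'col_name2': 'col_name'}), on='col_name')
             .merge(df3.rename(columns={'col_name3': 'col_name'}), on='col_name'))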
Here is a method to merge a dictionary of data frames while keeping the column names in sync with the dictionary keys. It also fills in missing values if needed.
This is the function to merge a dict of data frames:
def MergeDfDict(dfDict, onCols, how='outer', naFill=None):
    keys = list(dfDict.keys())
    for i in range(len(keys)):
        key = keys[i]
        df0 = dfDict[key]
        cols = list(df0.columns)
        valueCols = list(filter(lambda x: x not in (onCols), cols))
        df0 = df0[onCols + valueCols]
        # tag each value column with the dict key so the origin stays visible after merging
        df0.columns = onCols + [(s + '_' + key) for s in valueCols]
        if i == 0:
            outDf = df0
        else:
            outDf = pd.merge(outDf, df0, how=how, on=onCols)
    if naFill is not None:
        outDf = outDf.fillna(naFill)
    return outDf
OK, let's generate data and test this:
def GenDf(size):
    df = pd.DataFrame({'categ1': np.random.choice(a=['a', 'b', 'c', 'd', 'e'], size=size, replace=True),
                       'categ2': np.random.choice(a=['A', 'B'], size=size, replace=True),
                       'col1': np.random.uniform(low=0.0, high=100.0, size=size),
                       'col2': np.random.uniform(low=0.0, high=100.0, size=size)
                       })
    df = df.sort_values(['categ2', 'categ1', 'col1', 'col2'])
    return df
size = 5
dfDict = {'US':GenDf(size), 'IN':GenDf(size), 'GER':GenDf(size)}
MergeDfDict(dfDict=dfDict, onCols=['categ1', 'categ2'], how='outer', naFill=0)
One does not need a MultiIndex to perform join operations.
One just needs to set the index column on which to perform the join operations correctly (with the command df.set_index('Name'), for example).
The join operation is performed on the index by default.
In your case, you just have to specify that the Name column corresponds to your index.
Below is an example
A tutorial may be useful.
# Simple example where dataframes index are the name on which to perform
# the join operations
import pandas as pd
import numpy as np
name = ['Sophia' ,'Emma' ,'Isabella' ,'Olivia' ,'Ava' ,'Emily' ,'Abigail' ,'Mia']
df1 = pd.DataFrame(np.random.randn(8, 3), columns=['A','B','C'], index=name)
df2 = pd.DataFrame(np.random.randn(8, 1), columns=['D'], index=name)
df3 = pd.DataFrame(np.random.randn(8, 2), columns=['E','F'], index=name)
df = df1.join(df2)
df = df.join(df3)
# If you have a 'Name' column that is not the index of your dataframe,
# one can set this column to be the index
# 1) Create a column 'Name' based on the previous index
df1['Name'] = df1.index
# 2) Set the index to the column 'Name'
df1 = df1.set_index('Name')
# If indexes are different, one may have to play with parameter how
gf1 = pd.DataFrame(np.random.randn(8, 3), columns=['A','B','C'], index=range(8))
gf2 = pd.DataFrame(np.random.randn(8, 1), columns=['D'], index=range(2,10))
gf3 = pd.DataFrame(np.random.randn(8, 2), columns=['E','F'], index=range(4,12))
gf = gf1.join(gf2, how='outer')
gf = gf.join(gf3, how='outer')
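As a quick sanity check (an addition, not in the original answer): because the three indexes only partly overlap, the outer joins above grow the index to the union 0..11 and fill the non-overlapping cells with NaN:
print(gf.shape)                 # expected (12, 6): 12 index values, columns A-F
print(gf.isna().sum().sum())    # a positive count, from the non-overlapping index ranges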
There is another solution from the pandas documentation (that I don't see here), using .append:
>>> df = pd.DataFrame([[1, 2], [3, 4]], columns=list('AB'))
>>> df
   A  B
0  1  2
1  3  4
>>> df2 = pd.DataFrame([[5, 6], [7, 8]], columns=list('AB'))
>>> df2
   A  B
0  5  6
1  7  8
>>> df.append(df2, ignore_index=True)
   A  B
0  1  2
1  3  4
2  5  6
3  7  8
The ignore_index=True is used to ignore the index of the appended dataframe and replace it with the next index available in the source one.
If there are different column names, NaN values will be introduced.
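A caveat worth adding here (a fact about pandas itself, not part of the quoted answer): DataFrame.append was deprecated in pandas 1.4 and removed in pandas 2.0, so on current versions the same row-wise concatenation is done with pd.concat:
# equivalent to df.append(df2, ignore_index=True) on pandas >= 2.0
pd.concat([df, df2], ignore_index=True)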
I tweaked the accepted answer to perform the operation for multiple dataframes with different suffixes parameters using reduce, and I guess it can be extended to different on parameters as well.
from functools import reduce

dfs_with_suffixes = [(df2, suffix2), (df3, suffix3),
                     (df4, suffix4)]

merge_one = lambda x, y, sfx: pd.merge(x, y, on=['col1', 'col2', ...], suffixes=sfx)

merged = reduce(lambda left, right: merge_one(left, *right), dfs_with_suffixes, df1)
df1 = pd.DataFrame(np.array([
['a', 5, 9],
['b', 4, 61],
['c', 24, 9]]),
columns=['name', 'attr11', 'attr12']
)
df2 = pd.DataFrame(np.array([
['a', 5, 19],
['d', 14, 16]]
),
columns=['name', 'attr21', 'attr22']
)
df3 = pd.DataFrame(np.array([
['a', 15, 49],
['c', 4, 36],
['d', 14, 9]]),
columns=['name', 'attr31', 'attr32']
)
df4 = pd.DataFrame(np.array([
['a', 15, 49],
['c', 4, 36],
['c', 14, 9]]),
columns=['name', 'attr41', 'attr42']
)
Three ways to join a list of dataframes
pandas.concat
dfs = [df1, df2, df3]
dfs = [df.set_index('name') for df in dfs]
# cannot run if the index is not unique
dfs = pd.concat(dfs, join='outer', axis=1)
functools.reduce
dfs = [df1, df2, df3, df4]
# still runs if the index is not unique
import functools as ft
df_final = ft.reduce(lambda left, right: pd.merge(left, right, on='name', how = 'outer'), dfs)
join
# cannot run if the index is not unique
dfs = [df1, df2, df3]
dfs = [df.set_index('name') for df in dfs]
dfs[0].join(dfs[1:], how = 'outer')
Joining all three together can be done using the .join() function.
Say you have three DataFrames:
df1, df2, df3.
To join these into one DataFrame you can:
df = df1.join(df2).join(df3)
This is the simplest way I found to do this task.
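One caveat on this last shortcut (an addition, not from the answer): chained .join() aligns on the index, so it only gives the intended result if the frames already share a meaningful index; with the name-keyed example frames above, that means setting the index first:
# set 'name' as the index on each frame before chaining the joins
df = df1.set_index('name').join(df2.set_index('name')).join(df3.set_index('name'))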
