Fill null values in DataFrame with column names - python-3.x

I have a DataFrame with 55 columns and 2 million rows, with a mix of categorical and numeric fields. There are null/NA values in the dataset. I want to fill the null values with the column names.
The dataset I have is:
A B C D .....
1 na na 3 .....
na 3 4 na .....
........................
The output I am trying to get is:
A B C D .....
1 B C 3 .....
A 3 4 D .....
........................
I am trying to use:
df.fillna(method='ffill')
Is there another way?
Python: 3.6.5

Use DataFrame.fillna with the column names converted to a Series by Index.to_series:
df = df.fillna(df.columns.to_series())
print (df)
A B C D
0 1 B C 3
1 A 3 4 D
EDIT: If there are categorical columns in the DataFrame, select these columns and add the not-yet-existing values by cat.add_categories:
for c in df.select_dtypes('category'):
    df[c] = df[c].cat.add_categories(c)
df = df.fillna(df.columns.to_series())
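A minimal end-to-end sketch of the categorical case (the sample values here are assumptions for illustration):
import pandas as pd

# Hypothetical sample: column B is categorical, both columns contain missing values
df = pd.DataFrame({'A': [1, None], 'B': pd.Categorical(['x', None])})

# A categorical column rejects fill values outside its categories,
# so register each column name as a category first
for c in df.select_dtypes('category'):
    df[c] = df[c].cat.add_categories(c)

df = df.fillna(df.columns.to_series())
print (df)
     A  B
0  1.0  x
1    A  B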

Related

How can I duplicate a row and append it directly after the duplicated row using pandas?

I've been trying to figure out this problem for a couple of hours now and seem to reach a dead end every time. A small example of what I want to do is shown below.
Normal Series
a
b
c
d
Duplicated Series
a
a
b
b
c
c
d
d
Try with loc and df.index.repeat:
>>> df.loc[df.index.repeat(2)]
Normal Series
0 a
0 a
1 b
1 b
2 c
2 c
3 d
3 d
Or with reset_index:
>>> df.loc[df.index.repeat(2)].reset_index(drop=True)
Normal Series
0 a
1 a
2 b
3 b
4 c
5 c
6 d
7 d
You can also just concat the series with a copy of itself and sort the result, which puts the duplicates next to each other:
sample = pd.Series(['a','b','c','d'])
output = pd.concat([sample,sample]).sort_values().reset_index(drop=True)
output
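Both approaches generalize beyond two copies; with Index.repeat the copy count is just a parameter (the n here is an assumption for illustration):
import pandas as pd

df = pd.DataFrame({'Normal Series': list('abcd')})
n = 3  # hypothetical number of copies per row
out = df.loc[df.index.repeat(n)].reset_index(drop=True)
print (out)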

SUM based on list of categories

Consider the following Excel sheet:
    A    B   C    D
1   foo  7        whaa
2   bar  5   AA
3   baz  9   BB
4   bal  1   AA
5   oof  3        blah
6   aba  9   C
Extra:
Each row has either a value in column C OR in column D
The values in column C are categories (in this example AA, BB, C)
The values in column D can be anything
I need SUMs (with the labels in column A of the result) as follows:
SUM of column B for all lines that have a value (any value) in column D (called Rest)
SUM of column B for each category in column C. I have a list of the categories (see below)
So like this:
   A     B
1  Rest  10  <----- 7 + 3
2  AA     6  <----- 5 + 1
3  BB     9
4  C      9
What formulas do I need in column B above to get this result?
Or, you can use SUMPRODUCT or SUMIF to solve it:
H2=SUMPRODUCT(($D$4:$D$9=IF(G2="Rest","",G2))*$C$4:$C$9)
H2=SUMIF($D$4:$D$9,IF(G2="Rest","",G2),$C$4:$C$9)
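If the same data lived in a pandas DataFrame rather than a worksheet, the equivalent aggregation could look like this sketch (the column names mirror the example above; treating rows with any value in D as the Rest bucket follows the question's rule):
import pandas as pd

# Mirror of the worksheet above: B holds the numbers, C the categories, D free text
df = pd.DataFrame({'A': ['foo', 'bar', 'baz', 'bal', 'oof', 'aba'],
                   'B': [7, 5, 9, 1, 3, 9],
                   'C': [None, 'AA', 'BB', 'AA', None, 'C'],
                   'D': ['whaa', None, None, None, 'blah', None]})

rest = df.loc[df['D'].notna(), 'B'].sum()  # Rest: rows with any value in D -> 10
per_cat = df.groupby('C')['B'].sum()       # one sum per category: AA 6, BB 9, C 9
print (rest)
print (per_cat)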

Drop whole groups by multiple columns if a specific value does not exist in another column in Pandas

How can I drop a whole city-and-district group if the date value 2018/11/1 does not exist for that group in the following dataframe:
city district date value
0 a c 2018/9/1 12
1 a c 2018/10/1 4
2 a c 2018/11/1 5
3 b d 2018/9/1 3
4 b d 2018/10/1 7
The expected result will like this:
city district date value
0 a c 2018/9/1 12
1 a c 2018/10/1 4
2 a c 2018/11/1 5
Thank you!
Create a helper column with DataFrame.assign, compare the dates, and test whether at least one value per group is True with GroupBy.any via GroupBy.transform; then filter by boolean indexing:
mask = (df.assign(new=df['date'].eq('2018/11/1'))
          .groupby(['city','district'])['new'].transform('any'))
df = df[mask]
print (df)
city district date value
0 a c 2018/9/1 12
1 a c 2018/10/1 4
2 a c 2018/11/1 5
If this raises an error because of missing values in the mask, one possible idea is to replace missing values in the columns used for grouping:
mask = (df.assign(new=df['date'].eq('2018/11/1'),
                  city=df['city'].fillna(-1),
                  district=df['district'].fillna(-1))
          .groupby(['city','district'])['new'].transform('any'))
df = df[mask]
print (df)
city district date value
0 a c 2018/9/1 12
1 a c 2018/10/1 4
2 a c 2018/11/1 5
Another idea is to add any missing index values with reindex and also replace missing values with False:
mask = (df.assign(new=df['date'].eq('2018/11/1'))
.groupby(['city','district'])['new'].transform('any'))
df = df[mask.reindex(df.index, fill_value=False).fillna(False)]
print (df)
city district date value
0 a c 2018/9/1 12
1 a c 2018/10/1 4
2 a c 2018/11/1 5
There's a special GroupBy.filter() method for this. Assuming date is already datetime:
filter_date = pd.Timestamp('2018-11-01').date()
df = df.groupby(['city', 'district']).filter(
    lambda x: (x['date'].dt.date == filter_date).any())
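For completeness, a runnable sketch that builds the sample frame from the question, parses the string dates first (the format string is an assumption based on the samples shown), and applies the filter:
import pandas as pd

df = pd.DataFrame({'city': ['a', 'a', 'a', 'b', 'b'],
                   'district': ['c', 'c', 'c', 'd', 'd'],
                   'date': ['2018/9/1', '2018/10/1', '2018/11/1', '2018/9/1', '2018/10/1'],
                   'value': [12, 4, 5, 3, 7]})

# Parse the string dates so the .dt accessor works
df['date'] = pd.to_datetime(df['date'], format='%Y/%m/%d')

filter_date = pd.Timestamp('2018-11-01').date()
df = df.groupby(['city', 'district']).filter(
    lambda x: (x['date'].dt.date == filter_date).any())
print (df)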

Upsert function in DataFrame - Python

I am trying to update one dataframe with another dataframe with respect to the first column. If there is an extra row in the second dataframe, it should be inserted into the first dataframe. If there is a row with the same data in the first column but different data in the other columns, that row should be updated. Also, any row which has no value in the first column should be dropped.
Code used -
df = df_1.combine_first(df_2)\
         .reset_index()\
         .reindex(columns=df_1.columns)
df = df.drop_duplicates(subset='A', keep='last', inplace=False)
df.dropna(subset=['A'])
print ("Final Data")
print (df)
First Dataframe -
A B C
0 45 a b
1 98 c d
2 67 bn k
Second Dataframe -
A B C
0 45 a d
1 98 c d
2 67 bn k
3 90 x z
4
Final should look like -
A B C
0 45 a d
1 98 c d
2 67 bn k
3 90 x z
The final dataframe that I get -
A B C
0 45.0 a b
1 98.0 c d
2 67.0 bn k
3 90.0 x z
4
So, neither the data is getting updated, nor is it removing the row with null values. What am I missing?
Based on my understanding of your question, your second dataframe basically supersedes the first if there is a matching index. If there isn't, the difference is added to the first dataframe. I am also assuming that there are no duplicate keys in the first column, A.
Framing this requirement a little differently: the final output should contain all the rows of the second dataframe with their values intact (since they are meant to overwrite the first dataframe where there's a match). Therefore, we start from the second dataframe as it is, and then add back the rows that exist in the first dataframe but not in the second. See the example below. (I'm also using a slightly different first dataframe to highlight the effects.)
import pandas as pd

df1 = pd.DataFrame({'A':[45,98,67,91],'B':['a','c','bn','y'],'C':['b','d','k','oo']})
df2 = pd.DataFrame({'A':[45,98,67,90,''],'B':['a','c','bn','x',''],'C':['d','d','k','z','']})

# Remove rows with empty values in the first column. Adapt this condition to
# your data, e.g. check for np.nan instead of the empty string ''
df2 = df2.loc[df2['A'] != '']

df1.set_index('A', inplace=True)
df2.set_index('A', inplace=True)

# Find keys in dataframe 1 that are not in dataframe 2
idx_diff = df1.index.difference(df2.index)

# Append these rows to dataframe 2 (DataFrame.append was removed in pandas 2.0,
# so pd.concat is used here)
df_ins = df1.loc[idx_diff]
df3 = pd.concat([df2, df_ins])
df3.reset_index(inplace=True)
>>>df3
A B C
0 45 a d
1 98 c d
2 67 bn k
3 90 x z
4 91 y oo
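As a design note, the same upsert can be written more compactly by flipping the operands of combine_first, so that the second frame's values win on key collisions; a sketch under the same assumptions (unique keys in A; row order may come out sorted by key):
import pandas as pd

df1 = pd.DataFrame({'A':[45,98,67],'B':['a','c','bn'],'C':['b','d','k']})
df2 = pd.DataFrame({'A':[45,98,67,90,''],'B':['a','c','bn','x',''],'C':['d','d','k','z','']})

out = (df2.loc[df2['A'] != '']                # drop rows with no key
          .set_index('A')
          .combine_first(df1.set_index('A'))  # df2's values win where keys collide
          .reset_index())
print (out)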

Get column names from pandas DataFrame in format dtype:object

I have a question similar to the one in the linked post. Instead of returning the column names in a list, I want the column names in the format dtype: object.
For example,
A
B
C
D
Name: x, dtype: object
I am using an Excel file in xlsx format.
Link: Get list from pandas DataFrame column headers
I think you need read_excel first to get the DataFrame, and then the Series constructor or Index.to_series to build a Series from the column names:
df = pd.DataFrame({'A':[1,2,3],
                   'B':[4,5,6],
                   'C':[7,8,9],
                   'D':[1,3,5]})
print (df)
A B C D
0 1 4 7 1
1 2 5 8 3
2 3 6 9 5
s = pd.Series(df.columns.values, name='x')
print (s)
0 A
1 B
2 C
3 D
Name: x, dtype: object
s1 = df.columns.to_series().rename('x')
print (s1)
A A
B B
C C
D D
Name: x, dtype: object
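Since the data comes from an xlsx file, the DataFrame above would be produced by read_excel first; a minimal sketch (the file name is a placeholder):
import pandas as pd

df = pd.read_excel('data.xlsx')  # hypothetical file name
s = pd.Series(df.columns.values, name='x')
print (s)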
