How to fill NaN values based on the top and bottom strings with the highest frequency - python-3.x

I have a dataframe of string values with missing values in it. It needs to be filled according to the conditions below.
From the NaN value's index, check the previous 3 rows and the next 3 rows, and replace the NaN with the most frequent value among those 6 rows.
If two strings occur with equal frequency among those 6 rows, replace the NaN with the one that appears at the lowest index.
My DataFrame:
reading
0 talk
1 kill
2 NaN
3 vertical
4 type
5 kill
6 NaN
7 vertical
8 vertical
9 type
10 durable
11 NaN
12 durable
13 vertical
Expected output:
reading
0 talk
1 kill
2 kill
3 vertical
4 type
5 kill
6 vertical
7 vertical
8 vertical
9 type
10 durable
11 vertical
12 durable
13 vertical
Here is a minimal reproducible example:
import pandas as pd
import numpy as np
df = pd.DataFrame({'reading': ['talk', 'kill', np.nan, 'vertical', 'type', 'kill', np.nan, 'vertical', 'vertical', 'type', 'durable', np.nan, 'durable', 'vertical']})

def filldf(df):
    # Do the logic here
    return df
I am not sure how to approach this problem. Any help will be appreciated!

If you don't have too many NaN values, you can iterate over the indices of the NaN "reading" values, look at the up to 6 surrounding values, and assign the most frequent one back to the corresponding NaN cell. One caveat: Series.mode() returns tied modes in sorted order rather than by position, so to honor the "lowest index wins" tie-break we pick among the tied values by first occurrence:
msk = df['reading'].isna()
def first_mode(s):
    # most frequent value in the window; ties broken by lowest index
    c = s.value_counts()
    return min(c[c == c.max()].index, key=lambda v: s.index[s == v][0])
df.loc[msk, 'reading'] = [first_mode(df.loc[max(0, i-3):i+3, 'reading'].dropna()) for i in df.index[msk]]
Output:
reading
0 talk
1 kill
2 kill
3 vertical
4 type
5 kill
6 vertical
7 vertical
8 vertical
9 type
10 durable
11 vertical
12 durable
13 vertical
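Wrapped into the filldf stub from the question (reusing the first_mode helper defined above), usage would look like:
def filldf(df):
    # fill each NaN with the most frequent value among up to 3 rows before and after it
    msk = df['reading'].isna()
    df.loc[msk, 'reading'] = [first_mode(df.loc[max(0, i-3):i+3, 'reading'].dropna())
                              for i in df.index[msk]]
    return df
df = filldf(df)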

Related

For and if loop combination takes a lot of time in Pandas (data manipulation)

I have two datasets, each with about half a million observations. The code below never seems to stop executing, and I would like to know if there is a better way of doing it. Appreciate inputs.
Below are sample formats of my dataframes. Both dataframes share a set of 'sid' values, meaning all the 'sid' values in 'df2' will have a match in 'df1' 'sid' values. The 'tid' values, and consequently the 'rid' values (which are a combination of 'sid' and 'tid'), may not appear in both sets.
The task is simple. I would like to create the 'tv' column in df2. Wherever the 'rid' in df2 matches the 'rid' in 'df1', the 'tv' column in df2 takes the corresponding 'tv' value from df1. If it does not match, the 'tv' value in 'df2' will be the median 'tv' value for the matching 'sid' subset in 'df1'.
In fact my original task includes creating a few more similar columns in df2 (based on their values in 'df1'; these columns exist in 'df1').
I believe that because my code contains a for loop combined with an if-else statement and multiple value assignments, it is taking forever to execute. Appreciate any inputs.
df1
sid tid rid tv
0 0 0 0-0 9
1 0 1 0-1 8
2 0 3 0-3 4
3 1 5 1-5 2
4 1 7 1-7 3
5 1 9 1-9 14
6 1 10 1-10 24
7 1 11 1-11 13
8 2 14 2-14 2
9 2 16 2-16 5
10 3 17 3-17 6
11 3 18 3-18 8
12 3 20 3-20 5
13 3 21 3-21 11
14 4 23 4-23 6
df2
sid tid rid
0 0 0 0-0
1 0 2 0-2
2 1 3 1-3
3 1 6 1-6
4 1 9 1-9
5 2 10 2-10
6 2 12 2-12
7 3 1 3-1
8 3 15 3-15
9 3 1 3-1
10 4 19 4-19
11 4 22 4-22
rids = [rid.split('-') for rid in df1.rid]
for r in df2.rid:
    s, t = r.split('-')
    if [s, t] in rids:
        df2.loc[df2.rid == r, 'tv'] = df1.loc[df1.rid == r, 'tv']
    else:
        df2.loc[df2.rid == r, 'tv'] = df1.loc[df1.sid == int(s), 'tv'].median()
The expected df2 shall be as follows:
sid tid rid tv
0 0 0 0-0 9.0
1 0 2 0-2 8.0
2 1 3 1-3 13.0
3 1 6 1-6 13.0
4 1 9 1-9 14.0
5 2 10 2-10 3.5
6 2 12 2-12 3.5
7 3 1 3-1 7.0
8 3 15 3-15 7.0
9 3 1 3-1 7.0
10 4 19 4-19 6.0
11 4 22 4-22 6.0
You can left-merge a subset of df1 onto df2 on 'rid' (only the 'rid' and 'tv' columns are needed), then fill the missing values with the per-'sid' median:
out = df2.merge(df1[['rid', 'tv']], on='rid', how='left')
out['tv'] = out['tv'].fillna(out['sid'].map(df1.groupby('sid')['tv'].median()))
out
OR
Since you said that:
all the 'sid' values in 'df2' will have a match in 'df1' 'sid' values
you can also left-merge them on ['sid','rid'] and then fillna() the missing 'tv' values with the per-'sid' median of df1's 'tv' column, mapped in via the map() method:
out = df2.merge(df1, on=['sid', 'rid'], how='left', suffixes=('', '_'))
out['tv'] = out['tv'].fillna(out['sid'].map(df1.groupby('sid')['tv'].median()))
out = out.drop('tid_', axis=1)
out
output of out:
sid tid rid tv
0 0 0 0-0 9.0
1 0 2 0-2 8.0
2 1 3 1-3 13.0
3 1 6 1-6 13.0
4 1 9 1-9 14.0
5 2 10 2-10 3.5
6 2 12 2-12 3.5
7 3 1 3-1 7.0
8 3 15 3-15 7.0
9 3 1 3-1 7.0
10 4 19 4-19 6.0
11 4 22 4-22 6.0
Here is a suggestion without any loops, based on dictionaries:
matched = df2['rid'].isin(df1['rid'])
# 'rid' -> 'tv' for the rids that exist in df1
matching_values = dict(zip(df1['rid'], df1['tv']))
# 'sid' -> median 'tv' in df1, for rows with no exact 'rid' match
median_values = df1.groupby('sid')['tv'].median().to_dict()
df2.loc[matched, 'tv'] = df2.loc[matched, 'rid'].map(matching_values)
df2.loc[~matched, 'tv'] = df2.loc[~matched, 'sid'].map(median_values)
This should do the trick. The logic here is that we first create two dictionaries: in one, the matching 'rid' values are the keys and their 'tv' values from df1 are the values; in the other, each 'sid' is a key and the median 'tv' of that 'sid' subset in df1 is the value. We then fill df2's 'tv' column by mapping the matched rows through the first dictionary and the unmatched rows through the second.
Don't use for loops in pandas; they are known to be slow, and you miss out on all the internal optimizations that have been made.
Try to use the split-apply-combine pattern:
split df1 by 'sid' to calculate the medians: df1.groupby('sid')['tv'].median()
join df2 on df1's 'tv' column: df2.join(df1.set_index('rid')['tv'], on='rid') (selecting just 'tv' avoids overlapping column names)
fill the NaN values with the medians calculated in step 1, as in the sketch below.
(Haven't tested the code).
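Putting those steps together, a minimal sketch of what that might look like (same caveat applies):
# step 1: per-'sid' median of 'tv' in df1
sid_median = df1.groupby('sid')['tv'].median()
# step 2: bring in 'tv' for exact 'rid' matches
out = df2.join(df1.set_index('rid')['tv'], on='rid')
# step 3: fill rows without a match using the 'sid' median
out['tv'] = out['tv'].fillna(out['sid'].map(sid_median))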

Remove "x" number of characters from a string in a pandas dataframe?

I have a pandas dataframe df looking like this:
a b
thisisastring 5
anotherstring 6
thirdstring 7
I want to remove characters from the left of the strings in column a based on the number in column b. So I tried:
df["a"] = d["a"].str[df["b"]:]
But this will result in:
a b
NaN 5
NaN 6
NaN 7
Instead of:
a b
sastring 5
rstring 6
ring 7
Any help? Thanks in advance!
Using zip with string slice
df.a=[x[y:] for x,y in zip(df.a,df.b)]
df
Out[584]:
a b
0 sastring 5
1 rstring 6
2 ring 7
You can do it with apply, to apply this row-wise:
df.apply(lambda x: x.a[x.b:],axis=1)
0 sastring
1 rstring
2 ring
dtype: object
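To write the result back into the frame, assign it to the column:
df['a'] = df.apply(lambda x: x.a[x.b:], axis=1)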

python-3: how to create a new pandas column as subtraction of two consecutive rows of another column?

I have a pandas dataframe
x
1
3
4
7
10
I want to create a new column y as y[i] = x[i] - x[i-1] (and y[0] = x[0]).
So the above data frame will become:
x y
1 1
3 2
4 1
7 3
10 3
How to do that with python-3? Many thanks
Using .shift() and fillna():
df['y'] = (df['x'] - df['x'].shift(1)).fillna(df['x'])
To explain what this is doing, if we print(df['x'].shift(1)) we get the following series:
0 NaN
1 1.0
2 3.0
3 4.0
4 7.0
Which is your values from 'x' shifted down one row. The first row gets NaN because there is no value above it to shift down. So, when we do:
print(df['x'] - df['x'].shift(1))
We get:
0 NaN
1 2.0
2 1.0
3 3.0
4 3.0
Which is your subtracted values, but in our first row we get a NaN again. To clear this, we use .fillna(), telling it that we want to just take the value from df['x'] whenever a null value is encountered.
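For what it's worth, the same row-over-row subtraction can also be spelled with .diff(), which computes x[i] - x[i-1] directly and leaves a NaN in the first row, handled by the same fillna():
df['y'] = df['x'].diff().fillna(df['x'])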

Pandas Python: How do I use common data from one data frame to write to a different data frame?

I am trying to use df4's LineNum column to identify the GeneralDescription in df1 by matching line numbers and writing to the corresponding GeneralDescription cell in df1. I am going for a solution that scales to data frames with thousands of rows and several other inconsequential columns. I would rather not merge if it isn't absolutely necessary; I just want to write to df1's GeneralDescription column and leave the original structure of the two data frames the same. Thanks
df1
LineNum Warehouse GeneralDescription
0 2 Empty Empty
1 3 Empty Empty
2 4 PBS Empty
3 5 Empty Empty
4 6 Empty Empty
5 7 General Liability Empty
6 8 Empty Empty
7 9 Empty Empty
df4
LineNum GeneralDescription
0 4 TRUCKING
1 6 TRUCKING-GREENVILLE,TN
2 7 Human Resources
Desired result
LineNum Warehouse GeneralDescription
0 2 Empty Empty
1 3 Empty Empty
2 4 PBS TRUCKING
3 5 Empty Empty
4 6 Empty TRUCKING-GREENVILLE,TN
5 7 General Liability Human Resources
6 8 Empty Empty
7 9 Empty Empty
This is the code I have so far, with packages that might be helpful. As it is, I'm getting the error KeyError: 'the label [LineNum] is not in the [index]'.
import pandas as pd
import openpyxl
import numpy as np
data= [[2,'Empty','Empty'],[3,'Empty','Empty'],[4,'PBS','Empty'],[5,'Empty','Empty'],[6,'Empty','Empty'],[7,'General Liability','Empty'],[8,'Empty','Empty'],[9,'Empty','Empty']]
df1=pd.DataFrame(data,columns=['LineNum','Warehouse','GeneralDescription'])
data4 = [[4,'TRUCKING'],[6,'TRUCKING-GREENVILLE,TN'],[7,'Human Resources']]
df4=pd.DataFrame(data4,columns=['LineNum','GeneralDescription'])
for i in range(len(df1.index)):
    if df1.loc[i,'LineNum']==df4.loc['LineNum']:
        df1.loc[i,'GeneralDescription']=df4.loc['GeneralDescription']
Use map with a Series built from df4, with fillna falling back to the original column values:
s = df4.set_index('LineNum')['GeneralDescription']
df1['GeneralDescription'] = df1['LineNum'].map(s).fillna(df1['GeneralDescription'])
print (df1)
LineNum Warehouse GeneralDescription
0 2 Empty Empty
1 3 Empty Empty
2 4 PBS TRUCKING
3 5 Empty Empty
4 6 Empty TRUCKING-GREENVILLE,TN
5 7 General Liability Human Resources
6 8 Empty Empty
7 9 Empty Empty
Solution with DataFrame.merge:
df = df1.merge(df4, how='left', on='LineNum', suffixes=('', '_'))
df['GeneralDescription'] = df['GeneralDescription_'].combine_first(df['GeneralDescription'])
df = df.drop('GeneralDescription_', axis=1)
print (df)
LineNum Warehouse GeneralDescription
0 2 Empty Empty
1 3 Empty Empty
2 4 PBS TRUCKING
3 5 Empty Empty
4 6 Empty TRUCKING-GREENVILLE,TN
5 7 General Liability Human Resources
6 8 Empty Empty
7 9 Empty Empty

Replacing values in specific columns in a Pandas Dataframe, when the number of columns is unknown

I am brand new to Python and Stack Exchange. I have been trying to replace invalid values (x < -3 or x > 12) with np.nan in specific columns.
I don't know how many columns I will have to deal with, so the code has to be general enough to take this into account. I do, however, know that the first two columns are ids and names respectively. I have searched Google and Stack Exchange but haven't been able to find a solution that fits my specific objective.
My question is: how would one replace values found in the third column and onwards?
My dataframe looks like this: [dataframe screenshot]
First attempt:
Data[Data > 12.0] = np.nan
This replaced the first two columns with NaN as well.
Second attempt:
Data[(Data.iloc[(range(2,Columns))] >= 12) & (Data.iloc[(range(2,Columns))] <= -3)] = np.nan
where
Columns = len(Data.columns)
This is clearly wrong, replacing all values in rows 2 to 6 (Columns = 7).
Any thoughts would be greatly appreciated.
You're looking for the applymap() method.
import pandas as pd
import numpy as np
# get the columns after the second one
cols = Data.columns[2:]
# apply mask to those columns
new_df = Data[cols].applymap(lambda x: np.nan if x > 12 or x < -3 else x)
Documentation: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.applymap.html
This approach assumes your columns after the second contain float or int values.
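Note that new_df holds only the sliced columns. To replace the values inside the original frame while keeping the id and name columns, you can assign the result back to the same slice:
Data[cols] = Data[cols].applymap(lambda x: np.nan if x > 12 or x < -3 else x)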
You can set values in specific columns of a dataframe by using iloc to slice the columns you need, then set the values using where.
A short example using some random data
df = pd.DataFrame(np.random.randint(0,10,(4,10)))
0 1 2 3 4 5 6 7 8 9
0 7 7 9 4 2 6 6 1 7 9
1 0 1 2 4 5 5 3 9 0 7
2 0 1 4 4 3 8 7 0 6 1
3 1 4 0 2 5 7 2 7 9 9
Now we select the region we want to update using iloc, slicing from column index 2 to the last column:
df.iloc[:,2:] = df.iloc[:,2:].where((df < 7) & (df > 2))
Every value outside the kept range in those columns becomes NaN:
0 1 2 3 4 5 6 7 8 9
0 7 7 NaN 4.0 NaN 6.0 6.0 NaN NaN NaN
1 0 1 NaN 4.0 5.0 5.0 3.0 NaN NaN NaN
2 0 1 4.0 4.0 3.0 NaN NaN NaN 6.0 NaN
3 1 4 NaN NaN 5.0 NaN NaN NaN NaN NaN
For your data the code would be this (the condition is built from the same slice, so the id and name columns are never compared to numbers):
Data.iloc[:,2:] = Data.iloc[:,2:].where((Data.iloc[:,2:] <= 12) & (Data.iloc[:,2:] >= -3))
Operator clarification
The setup shown directly above keeps everything between the two bounds:
-3 <= Data <= 12
If we instead try to describe the invalid values with the & operator, we get
Data < -3 and Data > 12
but a number cannot be both less than -3 and greater than 12 at the same time, so that condition matches nothing. For the invalid values we use the or operator | instead, together with mask() (the inverse of where(), it replaces values where the condition is True):
Data.iloc[:,2:] = Data.iloc[:,2:].mask((Data.iloc[:,2:] > 12) | (Data.iloc[:,2:] < -3))
So the data is checked on a conditional basis:
Data < -3 or Data > 12
