Fill the Null values of the first row of dataframe with 100 [duplicate] - python-3.x

This question already has answers here:
pandas fillna not working (5 answers)
Closed 3 years ago.
I have a dataframe which looks like this:
  51183  53423  51989  52483  51342
    100    NaN    NaN  83.33    NaN
    NaN    NaN     50     25   12.5
Here, '51183', '53423', ... are column names. I want to fill the null values present in the first row with 100.
I tried doing this:
df[:1].fillna(100)
It changes the null values in the first row to 100, but it doesn't update the dataframe itself.
I want the result to look like this:
  51183  53423  51989  52483  51342
    100    100    100  83.33    100
    NaN    NaN     50     25   12.5
If you could help me achieve that, I'd greatly appreciate it.

To update the row, try this:
df[:1] = df[:1].fillna(100)

Your attempt was almost right.
df[:1] selects the first row, but the result is treated as a copy of that row.
Then .fillna(100) replaces the NaN values with 100 in this copy,
not in the original dataframe.
An attempt to add inplace=True:
df[:1].fillna(100, inplace=True)
does the job, but also raises a SettingWithCopyWarning.
One way to do the job without this warning is to use .iloc and then .fillna:
df.iloc[0].fillna(100, inplace=True)
Note that this relies on df.iloc[0] returning a view of the underlying data, which is version-dependent; the explicit assignment shown above is the more robust option.
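As a sanity check, here is a minimal end-to-end sketch of the assignment-based fix, assuming the frame from the question:

import numpy as np
import pandas as pd

# Frame from the question; missing cells marked as NaN.
df = pd.DataFrame([[100, np.nan, np.nan, 83.33, np.nan],
                   [np.nan, np.nan, 50, 25, 12.5]],
                  columns=['51183', '53423', '51989', '52483', '51342'])

# Fill only the first row and assign the result back, so the change sticks.
df[:1] = df[:1].fillna(100)
print(df)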

Related

Pandas apply with eval not giving NaN as result when NaN in column it's calculating on

I have to support the ability for a user to run any formula against a frame to produce a new column.
I may have a frame that looks like
dim01 dim02 msr01
0 A 25 1.0
1 B 26 5.3
2 C 53 NaN
I interpret the user's code to allow them to run a formula using supported functions, standard operators, and other columns.
So a formula might look like SQRT([msr01]*100+7)
I convert the user input to Python syntax, so this would evaluate to something like:
formula_str = '(math.sqrt((row.msr01*100)+7))'
I then apply it to my pandas dataframe like this:
data_frame['msr002'] = data_frame.apply(lambda row: eval(formula_str), axis=1)
This was working well until I hit data with a NaN in a column used in the calculation. I noticed that when this happens, I get a frame like this in return:
dim01 dim02 msr01 msr02
0 A 25 1.0 10.344
1 B 26 5.3 23.173
2 C 53 NaN 7.342
So it appears that the eval is not handling the NaN correctly.
I am using a lexer/parser to ensure that the user-sent formula isn't dangerous, and to convert from everyday user syntax to Python functions so it works against pandas columns.
Any advice on how to fix this?
Perhaps I should include something in the lambda that checks whether any required column is NaN and just hard-codes NaN in that case? But that doesn't seem like the best solution to me.
I did see this question, which is similar, but didn't think it answered my exact need.
You can try the vectorized equivalent:
df.msr01.mul(100).add(7)**0.5
Out[716]:
0 10.34408
1 23.17326
2 NaN
Name: msr01, dtype: float64
Your original code also returns NaN correctly:
df.apply(lambda row: eval(formula_str), axis=1)
Out[714]:
0 10.34408
1 23.17326
2 NaN
dtype: float64
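For completeness, a minimal self-contained reproduction, assuming the frame and formula from the question; math.sqrt propagates NaN, so the per-row eval yields NaN for the missing value:

import math
import numpy as np
import pandas as pd

# Frame from the question.
df = pd.DataFrame({'dim01': ['A', 'B', 'C'],
                   'dim02': [25, 26, 53],
                   'msr01': [1.0, 5.3, np.nan]})

formula_str = '(math.sqrt((row.msr01*100)+7))'

# math.sqrt(nan) returns nan, so the NaN row yields NaN rather than a number.
df['msr02'] = df.apply(lambda row: eval(formula_str), axis=1)
print(df)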

Fixing Pandas NaN when making a new column?

I have two pandas DataFrames:
id volume
1 100
2 200
3 300
and
id 2020-07-01 2020-07-02 ...
1 12 14
2 5 1
3 7 8
I am trying to make a new column in the first table based on the values in the second table.
df['Total_Change'] = df2.iloc[:, 0] - df2.iloc[:, -1]
df['Change_MoM'] = df2.iloc[:, -2] - df2.iloc[:, -1]
This works, but the values are all shifted down in the table by one, so the first value is NaN and the last value is lost. My result is:
id volume Total_Change Change_MoM
1 100 NaN NaN
2 200 -2 -2
3 300 4 4
Why is this happening? I've already double-checked that the df2.iloc statements are grabbing the correct values, but I don't understand why my first table is shifting the values down a row. I've also tried shifting the table up by one, but that left a NaN at the bottom.
The two tables are the same size. To be clear, I want to know how to prevent the NaN from occurring in the first place, not to replace it with some other value.
Both dfs have different indexes; a quick fix is to add reset_index():
df=df.reset_index(drop=True)
df2=df2.reset_index(drop=True)
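To see why the shift happens in the first place, here is a small sketch; the off-by-one index on df2 is a hypothetical stand-in for however the frames were built (e.g. a prior filter), but it reproduces the symptom, because pandas aligns arithmetic on index labels, not positions:

import pandas as pd

# df has the default index 0..2; df2 (hypothetically) starts at 1.
df = pd.DataFrame({'id': [1, 2, 3], 'volume': [100, 200, 300]})
df2 = pd.DataFrame([[12, 14], [5, 1], [7, 8]],
                   columns=['2020-07-01', '2020-07-02'],
                   index=[1, 2, 3])

# Label 0 in df has no partner in df2, so it becomes NaN; label 3 is dropped.
df['Total_Change'] = df2.iloc[:, 0] - df2.iloc[:, -1]
print(df)  # Total_Change: NaN, -2, 4

# Resetting both indexes makes the rows line up positionally.
df = df.reset_index(drop=True)
df2 = df2.reset_index(drop=True)
df['Total_Change'] = df2.iloc[:, 0] - df2.iloc[:, -1]
df['Change_MoM'] = df2.iloc[:, -2] - df2.iloc[:, -1]
print(df)  # Total_Change: -2, 4, -1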

Python dataframe with value 'NA' not fetching

I am trying to read an Excel file with the data below:
But when I tried to debug the dataframe, it is showing only:
Could you explain why the NA values are not showing in the dataframe?
Also, is there any way to fetch them?
Python version : 3.7
In pd.read_excel there's an argument for this called na_values.
Quoted from the documentation:
Additional strings to recognize as NA/NaN.
Furthermore, you have to overwrite the default NaN values, which also include the empty cell '', with the parameter keep_default_na=False.
Again quoting from the documentation:
If na_values are specified and keep_default_na is False the default NaN values are overridden, otherwise they’re appended to.
So the following should help your problem:
df = pd.read_excel('Filename.xlsx', na_values='NA', keep_default_na=False)
Output
Item Status
0 Soap NaN
1 butter
2 Rice NaN
3 pen Available
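If by "fetch NA" you mean selecting those rows afterwards, a short follow-up sketch (using a stand-in for the frame produced by the read_excel call above):

import numpy as np
import pandas as pd

# Stand-in for the frame produced above.
df = pd.DataFrame({'Item': ['Soap', 'butter', 'Rice', 'pen'],
                   'Status': [np.nan, '', np.nan, 'Available']})

# Cells that held the literal string 'NA' are now NaN, so isna() finds them.
missing = df[df['Status'].isna()]
print(missing)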

How to join two dataframes for which column time values are within a certain range and are not datetime or timestamp objects?

I have two dataframes as shown below:
time browncarbon blackcarbon
181.7335 0.105270 NaN
181.3809 0.166545 0.001217
181.6197 0.071581 NaN
422 rows x 3 columns
start end toc
179.9989 180.0002 155.0
180.0002 180.0016 152.0
180.0016 180.0030 151.0
1364 rows x 3 columns
The first dataframe has a time column with instants every four minutes. The second dataframe has two time columns spaced every two minutes. The time columns of the two dataframes do not start and end at the same time; however, they contain data collected over the same day. How could I make another dataframe containing:
time browncarbon blackcarbon toc
422 rows X 4 columns
There is a related answer on Stack Overflow; however, it is applicable only when the time columns are datetime or timestamp objects. The link is: How to join two dataframes for which column values are within a certain range?
Addendum 1: Multiple start and end rows that fall within one of the time rows should still correspond to one toc row, as they do right now; however, that toc value should be the average of the multiple matching toc rows, which is not the case presently.
Addendum 2: Merging two pandas dataframes with complex conditions
We create an artificial key column to do an outer merge and get the cartesian product back (all matches between the rows). Then we filter all the rows where time falls within the range, using .query.
Note: I edited the value of one row so we can get a match (see row 0 in the example dataframes at the bottom).
df1.assign(key=1).merge(df2.assign(key=1), on='key', how='outer')\
.query('(time >= start) & (time <= end)')\
.drop(['key', 'start', 'end'], axis=1)
output
time browncarbon blackcarbon toc
1 180.0008 0.10527 NaN 152.0
Example dataframes used:
df1:
time browncarbon blackcarbon
0 180.0008 0.105270 NaN
1 181.3809 0.166545 0.001217
2 181.6197 0.071581 NaN
df2:
start end toc
0 179.9989 180.0002 155.0
1 180.0002 180.0016 152.0
2 180.0016 180.0030 151.0
Since the start and end intervals are mutually exclusive, we may be able to create new columns in df2 containing all the integer values in the range floor(start) to floor(end). Then add another column floor(time) to df1 and take a left outer join of df1 and df2. I think that should do it, except that you may have to remove NaN values and extra columns if required. If you send me the csv files, I may be able to send you the script. I hope I answered your question.
Perhaps you could just convert your columns to Timestamps and then use the answer in the other question you linked:
from pandas import Timestamp
from dateutil.relativedelta import relativedelta as rd

def to_timestamp(x):
    return Timestamp(2000, 1, 1) + rd(days=x)

df['start_time'] = df.start.apply(to_timestamp)
df['end_time'] = df.end.apply(to_timestamp)
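Presumably df1's float time column needs the same conversion before applying the linked datetime-based answer; for example (time_ts is a hypothetical column name):

# Hypothetical follow-up: give df1's float times the same treatment.
df1['time_ts'] = df1.time.apply(to_timestamp)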
Your second dataframe is too short to reflect a meaningful merge, so I modified it a little:
import numpy as np
import pandas as pd

df2 = pd.DataFrame({'start': [179.9989, 180.0002, 180.0016, 181.3, 181.5, 181.7],
                    'end': [180.0002, 180.0016, 180.003, 181.5, 185.7, 181.8],
                    'toc': [155.0, 152.0, 151.0, 150.0, 149.0, 148.0]})
df1['Rank'] = np.arange(len(df1))
new_df = pd.merge_asof(df1.sort_values('time'), df2,
                       left_on='time',
                       right_on='start')
gives you:
time browncarbon blackcarbon Rank start end toc
0 181.3809 0.166545 0.001217 1 181.3 181.5 150.0
1 181.6197 0.071581 NaN 2 181.5 185.7 149.0
2 181.7335 0.105270 NaN 0 181.7 181.8 148.0
from which you can drop the extra columns and sort_values on Rank. For example:
new_df.sort_values('Rank').drop(['Rank','start','end'], axis=1)
gives:
time browncarbon blackcarbon toc
2 181.7335 0.105270 NaN 148.0
0 181.3809 0.166545 0.001217 150.0
1 181.6197 0.071581 NaN 149.0
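Neither answer addresses Addendum 1 (averaging toc when several start/end intervals fall inside one time row). A sketch under that assumption, reusing the cross-join idea from the accepted answer plus a groupby mean, with the example frames from above:

import numpy as np
import pandas as pd

df1 = pd.DataFrame({'time': [180.0008, 181.3809, 181.6197],
                    'browncarbon': [0.105270, 0.166545, 0.071581],
                    'blackcarbon': [np.nan, 0.001217, np.nan]})
df2 = pd.DataFrame({'start': [179.9989, 180.0002, 180.0016],
                    'end': [180.0002, 180.0016, 180.0030],
                    'toc': [155.0, 152.0, 151.0]})

# Cross join, keep only the intervals containing each time, then average
# toc per time so multiple matching intervals collapse to their mean.
matches = (df1.assign(key=1)
              .merge(df2.assign(key=1), on='key', how='outer')
              .query('(time >= start) & (time <= end)'))
toc_mean = matches.groupby('time', as_index=False)['toc'].mean()
out = df1.merge(toc_mean, on='time', how='left')
print(out)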

How to obtain the difference of values of two specific dates with Pandas [duplicate]

This question is similar to Python: Pandas Divide DataFrame by first row
I have a DataFrame which looks like this:
1125400 5430095 1095751
2013-04-02 98.91 NaN 5626.79
2013-04-03 99.29 NaN 5727.53
2013-04-04 99.79 NaN 5643.75
2013-04-07 100.55 NaN 5630.78
2013-04-08 100.65 NaN 5633.77
I would like to divide the values of the last row by the values of the first row in order to obtain the percentage difference over time.
A clear way is to use iloc (last row divided by first row, per the question):
df.iloc[-1] / df.iloc[0]
Just take the last row and the first row values, then divide them, like this: df.T[df.index[-1]] / df.T[df.index[0]]
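Putting it together, a short sketch assuming the frame from the question (the all-NaN column included):

import numpy as np
import pandas as pd

df = pd.DataFrame(
    {'1125400': [98.91, 99.29, 99.79, 100.55, 100.65],
     '5430095': [np.nan] * 5,
     '1095751': [5626.79, 5727.53, 5643.75, 5630.78, 5633.77]},
    index=pd.to_datetime(['2013-04-02', '2013-04-03', '2013-04-04',
                          '2013-04-07', '2013-04-08']))

# Last row over first row, expressed as a percentage change.
pct = (df.iloc[-1] / df.iloc[0] - 1) * 100
print(pct)  # the NaN column stays NaN, since division propagates NaN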
