Accumulate values from one df into another - python-3.x

I have two dataframes and I want to accumulate the value of one of the dataframes in the other. How can I do it?
Dataframe 1:
Product  Amount  Price  Total
A        1       12.0   15
B        4       20.0   15
C        2       4.0    15
D        5       30.0   15
Dataframe 2:
Product  Amount  Price
B        3       20.0
C        2       4.0
Result:
Product  Amount  Price  Total
A        1       12.0   15
B        7       20.0   15
C        4       4.0    15
D        5       30.0   15
Thanks!

Use concat with an aggregating sum; all columns other than Product must be numeric:
df = pd.concat([df1, df2]).groupby('Product', as_index=False).sum()
print (df)
Product Amount Price Total
0 A 1 12.0 15.0
1 B 7 40.0 15.0
2 C 4 8.0 15.0
3 D 5 30.0 15.0
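Note that sum aggregates every numeric column, which is why Price and Total are also summed here. If, as in the expected result above, only Amount should accumulate while Price and Total stay fixed per product, one option is a per-column aggregation (a sketch; first skips the NaN Total that concat introduces for df2 rows):
df = (pd.concat([df1, df2])
        .groupby('Product', as_index=False)
        .agg({'Amount': 'sum', 'Price': 'first', 'Total': 'first'}))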

Related

How to join two time series data frames so the result has all the unique dates, without duplicating the dates common to both

I have two time series data frame:
df1 = pd.DataFrame({'Date': [pd.to_datetime('1980-01-03'), pd.to_datetime('1980-01-04'),
                             pd.to_datetime('1980-01-05'), pd.to_datetime('1980-01-06'),
                             pd.to_datetime('1980-01-07'), pd.to_datetime('1980-01-08')],
                    'Temp': [13.5, 10, 14, 12, 10, 9]})
df1
Date Temp
0 1980-01-03 13.5
1 1980-01-04 10.0
2 1980-01-05 14.0
3 1980-01-06 12.0
4 1980-01-07 10.0
5 1980-01-08 9.0
and
df2 = pd.DataFrame({'Date': [pd.to_datetime('1980-01-01'), pd.to_datetime('1980-01-02'),
                             pd.to_datetime('1980-01-03'), pd.to_datetime('1980-01-04')],
                    'Temp': [10, 17, 13.5, 10]})
df2
Date Temp
0 1980-01-01 10.0
1 1980-01-02 17.0
2 1980-01-03 13.5
3 1980-01-04 10.0
Now my task is to join these data frames on Date so that the result contains the dates unique to each frame, keeps a single entry for the dates present in both, and is arranged in proper date order.
To that effect I tried the following:
df = pd.concat([df1, df2])
df.reset_index(drop=True)
Date Temp
0 1980-01-03 13.5
1 1980-01-04 10.0
2 1980-01-05 14.0
3 1980-01-06 12.0
4 1980-01-07 10.0
5 1980-01-08 9.0
6 1980-01-01 10.0
7 1980-01-02 17.0
8 1980-01-03 13.5
9 1980-01-04 10.0
But this is an incorrect result. What I am trying to get is:
Date Temp
0 1980-01-01 10.0
1 1980-01-02 17.0
2 1980-01-03 13.5
3 1980-01-04 10.0
4 1980-01-05 14.0
5 1980-01-06 12.0
6 1980-01-07 10.0
7 1980-01-08 9.0
What can I do? Maybe pd.concat() is not the way to go?
A possible solution:
pd.merge(df1, df2, how="outer").sort_values(by="Date").reset_index(drop=True)
Output:
Date Temp
0 1980-01-01 10.0
1 1980-01-02 17.0
2 1980-01-03 13.5
3 1980-01-04 10.0
4 1980-01-05 14.0
5 1980-01-06 12.0
6 1980-01-07 10.0
7 1980-01-08 9.0
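Since the common dates carry identical temperatures in both frames, concat can also work here if the exact duplicate rows are dropped afterwards (a sketch, equivalent on this data):
df = (pd.concat([df1, df2])
        .drop_duplicates()
        .sort_values('Date')
        .reset_index(drop=True))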

Replace only leading NaN values in Pandas dataframe

I have a dataframe of time series data, in which data reporting starts at different times (columns) for different observation units (rows). Prior to the first reported datapoint for each unit, the dataframe contains NaN values, e.g.
   0    1    2   3    4   ...
A  NaN  NaN  4   5    6   ...
B  NaN  7    8   NaN  10  ...
C  NaN  2    11  24   17  ...
I want to replace the leading (left-side) NaN values with 0, but only the leading ones (i.e. leaving the internal missing ones as NaN). So the result on the example above would be:
   0  1  2   3    4   ...
A  0  0  4   5    6   ...
B  0  7  8   NaN  10  ...
C  0  2  11  24   17  ...
(Note the retained NaN for row B col 3)
I could iterate through the dataframe row-by-row, identify the first index of a non-NaN value in each row, and replace everything left of that with 0. But is there a way to do this as a whole-array operation?
Use notna with cumsum along rows; the cells where the cumulative count is still zero are the leading NaNs:
df[df.notna().cumsum(1) == 0] = 0
df
0 1 2 3 4
A 0.0 0.0 4 5.0 6
B 0.0 7.0 8 NaN 10
C 0.0 2.0 11 24.0 17
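For clarity, this is what the boolean mask evaluates to on the example above; True marks exactly the leading NaNs:
print(df.notna().cumsum(axis=1) == 0)
       0      1      2      3      4
A   True   True  False  False  False
B   True  False  False  False  False
C   True  False  False  False  False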
Here is another way, using cumprod() and apply():
# s counts the leading NaNs in each row
s = df.isna().cumprod(axis=1).sum(axis=1)
df.apply(lambda x: x.fillna(0, limit=s.loc[x.name]), axis=1)
Output:
0 1 2 3 4
A 0.0 0.0 4.0 5.0 6.0
B 0.0 7.0 8.0 NaN 10.0
C 0.0 2.0 11.0 24.0 17.0
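A third option, not from the answers above but equivalent on this data: after a row-wise forward fill, the only cells still NaN are those with no valid value anywhere to their left, i.e. the leading NaNs, so they can be masked directly (a sketch):
df[df.ffill(axis=1).isna()] = 0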

Concatenate two dataframes of different sizes (pandas)

I have two dataframes with unique ids. They share some columns but not all. I need to create a combined dataframe that also includes rows for the ids that appear only in the second dataframe. I tried merge and concat, no luck. It's probably too late, my brain stopped working. Will appreciate your help!
df1 = pd.DataFrame({
    'id': ['a','b','c','d','f','g','h','j','k','l','m'],
    'metric1': [123,22,356,412,54,634,72,812,129,110,200],
    'metric2': [1,2,3,4,5,6,7,8,9,10,11]
})
df2 = pd.DataFrame({
    'id': ['a','b','c','d','f','g','h','q','z','w'],
    'metric1': [123,22,356,412,54,634,72,812,129,110]
})
The result should look like this:
id metric1 metric2
0 a 123 1.0
1 b 22 2.0
2 c 356 3.0
3 d 412 4.0
4 f 54 5.0
5 g 634 6.0
6 h 72 7.0
7 j 812 8.0
8 k 129 9.0
9 l 110 10.0
10 m 200 11.0
11 q 812 NaN
12 z 129 NaN
13 w 110 NaN
In this case, use combine_first:
df1.set_index('id').combine_first(df2.set_index('id')).reset_index()
Output:
id metric1 metric2
0 a 123.0 1.0
1 b 22.0 2.0
2 c 356.0 3.0
3 d 412.0 4.0
4 f 54.0 5.0
5 g 634.0 6.0
6 h 72.0 7.0
7 j 812.0 8.0
8 k 129.0 9.0
9 l 110.0 10.0
10 m 200.0 11.0
11 q 812.0 NaN
12 w 110.0 NaN
13 z 129.0 NaN
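An outer merge on both shared columns is another option, and it keeps metric1 as integers; note this only works because the ids common to both frames carry identical metric1 values (a sketch):
pd.merge(df1, df2, on=['id', 'metric1'], how='outer')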

shifting a column down in a pandas dataframe

I have data in the following form:
A B C
1 2 3
2 5 6
7 8 9
I want to change the dataframe into
A  B  C
   2  3
1  5  6
2  8  9
7
One way would be to add a blank row to the dataframe and then use shift:
# input df:
A B C
0 1 2 3
1 2 5 6
2 7 8 9
# append an all-NaN row, then shift column A down by one
df.loc[len(df.index), :] = None
df['A'] = df['A'].shift(1)
print (df)
A B C
0 NaN 2.0 3.0
1 1.0 5.0 6.0
2 2.0 8.0 9.0
3 7.0 NaN NaN
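A variant of the same idea, using reindex to add the extra row instead of a loc assignment (a sketch producing the same frame):
out = df.reindex(range(len(df) + 1))
out['A'] = out['A'].shift(1)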

find running total on every 7th day in pandas

I have data like this. The first column is the number of days from a starting point; the second column is the value generated after that many days.
For example, after 1 day I get $5, after the 2nd day I get $3, and so on. There may be days with no revenue, like the 4th day, so the day numbers are not consecutive.
data = pd.DataFrame({'day': [1,2,3,5,6,7,8,9,10,11,14,15,17,18,19],
                     'value': [5,3,7,8,9,4,6,5,2,8,6,7,9,5,2]})
I want to find the total value after every 7-day window. The output should look like:
day value
7 36
14 27
21 23
I am using a loop to achieve this. Is there a better, more pythonic way of doing it?
df = pd.DataFrame({})
sum_value = 0
for index, row in data.iterrows():
    sum_value += row['value']
    if row['day'] % 7 == 0:
        df = df.append(pd.DataFrame({'day': [row['day']], 'sum_value': [sum_value]}))
        sum_value = 0
print(df)
Also, how can I find the sum of the previous 7 days' values at each day (each row)?
expected output
day value
1 5
2 8
3 15
5 23
6 32
7 36
8 37
9 39
10 34
and so on...
I hope I did the calculation right. It is basically a running total of the previous 7 days of values. It would be easier if no numbers were missing from the day column.
Use groupby with a helper Series (subtract 1 from day, then integer-divide by 7), aggregating with last for day and sum for value:
df = data.groupby((data['day'] - 1) // 7 , as_index=False).agg({'day':'last', 'value':'sum'})
print (df)
day value
0 7 36
1 14 27
2 19 23
Details:
print ((data['day'] - 1) // 7)
0 0
1 0
2 0
3 0
4 0
5 0
6 1
7 1
8 1
9 1
10 1
11 2
12 2
13 2
14 2
Name: day, dtype: int64
A similar solution, if the day column should instead be multiples of 7:
df = data.groupby((data['day'] - 1) // 7)['value'].sum().reset_index()
df['day'] = (df['day'] + 1) * 7
print (df)
day value
0 7 36
1 14 27
2 21 23
EDIT: For the running total you need rolling with sum, but first it is necessary to add the missing days with reindex; this requires unique values in the day column.
import numpy as np

idx = np.arange(data['day'].min(), data['day'].max() + 1)
df = data.set_index('day').reindex(idx).rolling(7, min_periods=1).sum()
df = df[df.index.isin(data['day'])]
print (df)
value
day
1 5.0
2 8.0
3 15.0
5 23.0
6 32.0
7 36.0
8 37.0
9 39.0
10 34.0
11 42.0
14 27.0
15 28.0
17 30.0
18 27.0
19 29.0
If get:
ValueError: cannot reindex from a duplicate axis
it means there are duplicate day values, and the solution is to aggregate with sum first:
# day 1 is duplicated
data = pd.DataFrame({'day': [1,1,3,5,6,7,8,9,10,11,14,15,17,18,19],
                     'value': [5,3,7,8,9,4,6,5,2,8,6,7,9,5,2]})
idx = np.arange(data['day'].min(), data['day'].max() + 1)
df = data.groupby('day')['value'].sum().reindex(idx).rolling(7, min_periods=1).sum()
df = df[df.index.isin(data['day'])]
print (df)
day
1 8.0
3 15.0
5 23.0
6 32.0
7 36.0
8 34.0
9 39.0
10 34.0
11 42.0
14 27.0
15 28.0
17 30.0
18 27.0
19 29.0
Name: value, dtype: float64
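An alternative for the running total that skips the reindex step entirely: index the values by elapsed time and use a time-based rolling window, which handles the gaps by itself (a sketch, assuming unique day values):
s = data.set_index(pd.to_timedelta(data['day'], unit='D'))['value']
out = s.rolling('7D').sum()
Here rolling('7D') sums, for each row, all values whose day falls within the 7 days ending at that row, so the result matches the reindex-based output above.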
