I want to simulate battery charging data.
Imagine a battery with a constant capacity, e.g. 30000. In the real world a person plugs it in at a random time between 18:00 and 18:30, so sometimes charging starts at 18:29 and sometimes at 18:00. The half-hourly values therefore vary with the start time, but the total amount charged stays the same.
index value
0 2021-01-01 00:00:00 0
1 2021-01-01 00:30:00 0
2 2021-01-01 01:00:00 0
3 2021-01-01 01:30:00 0
4 2021-01-01 02:00:00 0
... ... ...
995 2021-01-21 17:30:00 0
996 2021-01-21 18:00:00 0
997 2021-01-21 18:30:00 0
998 2021-01-21 19:00:00 0
999 2021-01-21 19:30:00 0
1000 2021-01-21 20:00:00 0
So, if the charging speed is 5000 per half hour, the pattern sometimes looks like [10, 5000, 5000, 5000, 5000, 5000, 4990] and sometimes like [2500, 5000, 5000, 5000, 5000, 5000, 2500], both summing to the 30000 capacity.
I want to generate such a pattern and insert it at a given time:
index value
0 2021-01-01 00:00:00 0
1 2021-01-01 00:30:00 0
2 2021-01-01 01:00:00 0
3 2021-01-01 01:30:00 0
4 2021-01-01 02:00:00 0
... ... ...
995 2021-01-21 17:30:00 0
996 2021-01-21 18:00:00 2500
997 2021-01-21 18:30:00 5000
998 2021-01-21 19:00:00 5000
999 2021-01-21 19:30:00 5000
1000 2021-01-21 20:00:00 5000
1001 2021-01-21 20:30:00 5000
1002 2021-01-21 21:00:00 2500
Assume charging starts around the time given by the start parameter: if start is '2021-01-01 18:00', charging begins somewhere between 18:00 and 18:30.
The function I want:
def insertPattern(emptyTimeseriesDF, capacity, speed, start):
    return dfWithInsertedPattern
The empty timeseries is generated by:
import datetime

import pandas as pd

index = pd.date_range(datetime.datetime(2021, 1, 1), periods=1000, freq='30min')
columns = ['value']
df = pd.DataFrame(index=index, columns=columns)
df = df.fillna(0)
df = df.reset_index()
df
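For reference, here is a minimal sketch of one way such a function could look. It assumes the frame produced by the snippet above (so the timestamp column is named index after reset_index), draws the start offset uniformly within the half hour, and builds the slot values so they always sum to capacity; it is an illustration, not a definitive implementation:

import numpy as np
import pandas as pd

def insertPattern(emptyTimeseriesDF, capacity, speed, start):
    # Work on a copy so the input frame is left untouched.
    df = emptyTimeseriesDF.copy()
    start = pd.Timestamp(start)
    # Random start moment within the half hour, as a fraction of a slot.
    frac = np.random.uniform(0, 1)
    # Partial first slot, then full slots, then the remainder,
    # so the values always sum to `capacity`.
    slots = [min(speed * (1 - frac), capacity)]
    while sum(slots) + speed <= capacity:
        slots.append(speed)
    slots.append(capacity - sum(slots))
    # Find the row holding `start` and write the pattern from there
    # (assumes the pattern fits before the end of the frame).
    pos = df.index[df['index'] == start][0]
    df.loc[pos:pos + len(slots) - 1, 'value'] = slots
    return df

result = insertPattern(df, 30000, 5000, '2021-01-21 18:00')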
I have a dataframe with a timestamp column, another date column, and a price column.
The timestamp column holds snapshot data roughly every 5 minutes for a specific hour (between 10 am and 11 am) that I am pulling out.
Eg:
Timestamp EndDate Price
2021-01-01 10:00:00 2021-06-30 08:00:00 100
2021-01-01 10:00:00 2021-09-30 08:00:00 105
2021-01-01 10:05:00 2021-03-30 08:00:00 102
2021-01-01 10:05:00 2021-06-30 08:00:00 100
2021-01-01 10:05:00 2021-09-30 08:00:00 105
2021-01-01 10:10:00 2021-03-30 08:00:00 102
2021-01-01 10:10:00 2021-06-30 08:00:00 100
2021-01-02 10:00:00 2021-06-30 08:00:00 100
2021-01-02 10:00:00 2021-09-30 08:00:00 105
2021-01-02 10:00:00 2021-03-30 08:00:00 102
2021-01-02 10:00:00 2021-06-30 08:00:00 100
2021-01-02 10:05:00 2021-09-30 08:00:00 105
2021-01-02 10:05:00 2021-03-30 08:00:00 102
2021-01-02 10:05:00 2021-06-30 08:00:00 100
For the snapshot every 5 minutes, some timestamps end up with 3 records, some with 2, some with 4.
Within that hour (or day) I want to pull out the set of records with the maximum number of records: for the 1st of Jan in the above example it should pull out the 10:05 data, and for the 2nd of Jan the 10:00 data. If multiple sets tie for the maximum, it can pull out the latest time for that day.
I'm not sure how I can do this efficiently; perhaps use a count?
You can split the timestamp into date and time parts for easier use, so I did this:
import numpy as np
import pandas as pd

filename = r'C:xxxxxx\Example2.xlsx'
df0 = pd.read_excel(filename)

# Split the timestamp into separate date and time columns.
df0['new_date'] = [d.date() for d in df0['Timestamp']]
df0['new_time'] = [d.time() for d in df0['Timestamp']]
This yields a dataframe with separate new_date and new_time columns.
Then we can use groupby() and apply() to count values as follows:
df = df0.groupby('new_date')['new_time'].apply(
    lambda x: x.value_counts().index[0]).reset_index()
which yields, for each date, the snapshot time with the most records.
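One caveat: the question asks that ties between times with the same maximum count be broken by the latest time, and value_counts() makes no promise about the order of tied counts. A variant handling ties explicitly could look like this (busiest_time is my own helper name):

def busiest_time(times):
    # Count records per snapshot time, keep the times tied for the
    # maximum count, and return the latest of them.
    counts = times.value_counts()
    tied = counts[counts == counts.max()].index
    return max(tied)

df = df0.groupby('new_date')['new_time'].apply(busiest_time).reset_index()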
I'm totally new to time series analysis and I'm trying to work through examples available online. This is what I have currently:
# Time based features
data = pd.read_csv('Train_SU63ISt.csv')
data['Datetime'] = pd.to_datetime(data['Datetime'],format='%d-%m-%Y %H:%M')
data['Hour'] = data['Datetime'].dt.hour
data['Minute'] = data['Datetime'].dt.minute
data.head()
ID Datetime Count Hour Minute
0 0 2012-08-25 00:00:00 8 0 0
1 1 2012-08-25 01:00:00 2 1 0
2 2 2012-08-25 02:00:00 6 2 0
3 3 2012-08-25 03:00:00 2 3 0
4 4 2012-08-25 04:00:00 2 4 0
What I'm looking for is something like this:
ID Datetime Count Hour Minute 4-Hour-window
0 0 2012-08-25 00:00:00 20 4 0 00:00:00 - 04:00:00
1 1 2012-08-25 04:00:00 22 8 0 04:00:00 - 08:00:00
2 2 2012-08-25 08:00:00 18 12 0 08:00:00 - 12:00:00
3 3 2012-08-25 12:00:00 16 16 0 12:00:00 - 16:00:00
4 4 2012-08-25 16:00:00 18 20 0 16:00:00 - 20:00:00
5 5 2012-08-25 20:00:00 14 24 0 20:00:00 - 00:00:00
6 6 2012-08-26 00:00:00 20 4 0 00:00:00 - 04:00:00
7 7 2012-08-26 04:00:00 24 8 0 04:00:00 - 08:00:00
8 8 2012-08-26 08:00:00 20 12 0 08:00:00 - 12:00:00
9 9 2012-08-26 12:00:00 10 16 0 12:00:00 - 16:00:00
10 10 2012-08-26 16:00:00 18 20 0 16:00:00 - 20:00:00
11 11 2012-08-26 20:00:00 14 24 0 20:00:00 - 00:00:00
I think what you are looking for is the resample function; see here: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.resample.html
Something like this should work (not tested):
sampled_data = data.resample(
'4H',
kind='timestamp',
on='Datetime',
label='left'
).sum()
The function is very similar to groupby: it groups the data into chunks of 4 hours based on the column given in on=, here the Datetime timestamps.
Finally, you need some kind of aggregation, in this case sum(), to collapse all elements of each group into a single value per time chunk.
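If you also want the "4-Hour-window" label column from the desired output, one way it could be derived afterwards is sketched below; this assumes the resampled index is the left edge of each 4-hour bin, which label='left' gives you:

sampled_data = sampled_data.reset_index()
left = sampled_data['Datetime']
right = left + pd.Timedelta(hours=4)
sampled_data['4-Hour-window'] = (
    left.dt.strftime('%H:%M:%S') + ' - ' + right.dt.strftime('%H:%M:%S')
)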
I have a list of data with the total number of orders per day, and I would like to calculate the average number of orders per day of the week, for example the average number of orders on Mondays.
0 2018-01-01 00:00:00 3162
1 2018-01-02 00:00:00 1146
2 2018-01-03 00:00:00 396
3 2018-01-04 00:00:00 848
4 2018-01-05 00:00:00 1624
5 2018-01-06 00:00:00 3052
6 2018-01-07 00:00:00 3674
7 2018-01-08 00:00:00 1768
8 2018-01-09 00:00:00 1190
9 2018-01-10 00:00:00 382
10 2018-01-11 00:00:00 3170
1. Make sure your date column is in datetime format (it looks like it already is).
2. Add a column converting the date to the day of the week.
3. Group by the day of week and take the average.
df['Date'] = pd.to_datetime(df['Date'])           # Step 1
df['DayofWeek'] = df['Date'].dt.day_name()        # Step 2
df.groupby('DayofWeek').mean(numeric_only=True)   # Step 3
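For reference, a minimal end-to-end sketch on the sample above; the column names Date and Orders are my own choice, since the sample has no header:

import pandas as pd

df = pd.DataFrame({
    'Date': pd.date_range('2018-01-01', periods=11, freq='D'),
    'Orders': [3162, 1146, 396, 848, 1624, 3052, 3674,
               1768, 1190, 382, 3170],
})
df['DayofWeek'] = df['Date'].dt.day_name()  # e.g. 'Monday'
print(df.groupby('DayofWeek')['Orders'].mean())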
Here I have a dataset with datetimes, and I want to get the time difference between consecutive rows in my CSV file.
So I wrote code to get the time difference in minutes; then I want to convert that difference into hours.
That means:
if the time difference is 30 minutes, in hours it is 0.5 h
if 120 min, then 2 h
But when I tried it, it didn't match my required format. I had just divided the time difference by 60.
my code:
df1['time_diff'] = pd.to_datetime(df1["time"])
print(df1['time_diff'])
0 2019-08-09 06:15:00
1 2019-08-09 06:45:00
2 2019-08-09 07:45:00
3 2019-08-09 09:00:00
4 2019-08-09 09:25:00
5 2019-08-09 09:30:00
6 2019-08-09 11:00:00
7 2019-08-09 11:30:00
8 2019-08-09 13:30:00
9 2019-08-09 13:50:00
10 2019-08-09 15:00:00
11 2019-08-09 15:25:00
12 2019-08-09 16:25:00
13 2019-08-09 18:00:00
df1['delta'] = (df1['time_diff']-df1['time_diff'].shift()).fillna(0)
df1['t'] = df1['delta'].apply(lambda x: x / np.timedelta64(1,'m')).astype('int64')% (24*60)
The result shows the differences in minutes correctly. After dividing by 60:
df1['t'] = df1['delta'].apply(lambda x: x / np.timedelta64(1,'m')).astype('int64') % (24*60) / 60
the result still doesn't match: the first output shows the 30-minute difference, but after converting to hours it shows 1 instead of 0.5.
But I have to get 30 min as 0.5 h.
Expected output:
time_diff in min expected output of time_diff in hour
0 0
30 0.5
60 1
75 1.25
25 0.4167
5 0.083
90 1.5
30 0.5
120 2
20 0.333
70 1.1667
25 0.4167
60 1
95 1.583
Can anyone help me solve this?
I suggest using Series.dt.total_seconds and dividing by 60 and 3600:
df1['datetimes'] = pd.to_datetime(df1['date'] + ' ' + df1['time'], dayfirst=True)
# Difference between consecutive rows; the first row has no predecessor.
df1['delta'] = df1['datetimes'].diff().fillna(pd.Timedelta(0))
td = df1['delta'].dt.total_seconds()
df1['time_diff in min'] = td.div(60).astype(int)
df1['time_diff in hour'] = td.div(3600)
print(df1)
datetimes delta time_diff in min time_diff in hour
0 2019-08-09 06:15:00 00:00:00 0 0.000000
1 2019-08-09 06:45:00 00:30:00 30 0.500000
2 2019-08-09 07:45:00 01:00:00 60 1.000000
3 2019-08-09 09:00:00 01:15:00 75 1.250000
4 2019-08-09 09:25:00 00:25:00 25 0.416667
5 2019-08-09 09:30:00 00:05:00 5 0.083333
6 2019-08-09 11:00:00 01:30:00 90 1.500000
7 2019-08-09 11:30:00 00:30:00 30 0.500000
8 2019-08-09 13:30:00 02:00:00 120 2.000000
9 2019-08-09 13:50:00 00:20:00 20 0.333333
10 2019-08-09 15:00:00 01:10:00 70 1.166667
11 2019-08-09 15:25:00 00:25:00 25 0.416667
12 2019-08-09 16:25:00 01:00:00 60 1.000000
13 2019-08-09 18:00:00 01:35:00 95 1.583333
I have a dataframe with a RangeIndex, timestamps in the first column, and several thousand hourly temperature observations in the second.
It is easy enough to group the observations by 24 and find the daily Tmax and Tmin. But I also want the timestamp of each day's max and min values.
How can I do that?
I hope I can get help without posting a working example, because the nature of the data makes that impractical.
EDIT: Here's some data, spanning two days.
DT T-C
0 2015-01-01 00:00:00 -2.5
1 2015-01-01 01:00:00 -2.1
2 2015-01-01 02:00:00 -2.3
3 2015-01-01 03:00:00 -2.3
4 2015-01-01 04:00:00 -2.3
5 2015-01-01 05:00:00 -2.0
...
24 2015-01-02 00:00:00 1.1
25 2015-01-02 01:00:00 1.1
26 2015-01-02 02:00:00 0.8
27 2015-01-02 03:00:00 0.5
28 2015-01-02 04:00:00 1.0
29 2015-01-02 05:00:00 0.7
First create a DatetimeIndex, then aggregate with Grouper by day, using idxmax and idxmin to get the datetimes of the maximum and minimum temperatures:
df['DT'] = pd.to_datetime(df['DT'])
df = df.set_index('DT')
df = df.groupby(pd.Grouper(freq='D'))['T-C'].agg(['idxmax','idxmin','max','min'])
print (df)
idxmax idxmin max min
DT
2015-01-01 2015-01-01 05:00:00 2015-01-01 00:00:00 -2.0 -2.5
2015-01-02 2015-01-02 00:00:00 2015-01-02 03:00:00 1.1 0.5
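If clearer column names are wanted, the same aggregation can be written with named aggregation; the output names here are my own choice:

daily = df.groupby(pd.Grouper(freq='D'))['T-C'].agg(
    tmax_time='idxmax',
    tmin_time='idxmin',
    tmax='max',
    tmin='min',
)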