How do I reformat dates in a CSV to just show MM/YYYY - python-3.x

Using Python 3 and pandas, I'm spending an embarrassing amount of time trying to figure out how to take a column of dates from a CSV and make a new column with just MM/YYYY or YYYY/MM/01.
The data looks like Col1 but I am trying to produce Col2:
Col1       Col2
2/12/2017  2/1/2017
2/16/2017  2/1/2017
2/28/2017  2/1/2017
3/2/2017   3/1/2017
3/13/2017  3/1/2017
I am able to parse the year and month out:
df['Month'] = pd.DatetimeIndex(df['File_Processed_Date']).month
df['Year'] = pd.DatetimeIndex(df['File_Processed_Date']).year
df['Period'] = df['Month'] + '/' + df['Year']
That last line is wrong. Is there a clever Python way to just show 2/2017?
I get the error: "TypeError: ufunc 'add' did not contain a loop with signature matching types dtype('
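For context, the error happens because .month and .year return integers, and NumPy cannot add an integer array to the string '/'. Casting to strings first would make the concatenation work; a minimal sketch of that workaround (the strftime approach below is cleaner):
# Cast the integer parts to str so '+' means string concatenation.
df['Period'] = df['Month'].astype(str) + '/' + df['Year'].astype(str)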
Update, answer by piRsquared:
d = pd.to_datetime(df.File_Processed_Date)
df['Period'] = d.dt.strftime('%m/1/%Y')
This creates a column in the dataframe that converts Col1 into Col2 successfully. Thanks!

Let d be just 'Col1' converted to Timestamps:
d = pd.to_datetime(df.Col1)
Then:
d.dt.strftime('%m/1/%Y')
0    02/1/2017
1    02/1/2017
2    02/1/2017
3    03/1/2017
4    03/1/2017
Name: Col1, dtype: object

d.dt.strftime('%m/%Y')
0    02/2017
1    02/2017
2    02/2017
3    03/2017
4    03/2017
Name: Col1, dtype: object

d.dt.strftime('%Y/%m/01')
0    2017/02/01
1    2017/02/01
2    2017/02/01
3    2017/03/01
4    2017/03/01
Name: Col1, dtype: object

d - pd.offsets.MonthBegin()
0   2017-02-01
1   2017-02-01
2   2017-02-01
3   2017-03-01
4   2017-03-01
Name: Col1, dtype: datetime64[ns]
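As an aside (not part of the original answer), pandas also has a monthly Period type that represents "just the month" directly:
d.dt.to_period('M')
0    2017-02
1    2017-02
2    2017-02
3    2017-03
4    2017-03
Name: Col1, dtype: period[M]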

The function you are looking for is strftime.

Related

How to count unique date values of a datetime64[ns] Series object

I have a column of type datetime64[ns] (df.timeframe).
df has columns ['id', 'timeframe', 'type']
df['type'] can be 'A' or 'B'
I want to get the total number of unique dates per df.id, for rows where df.type == 'A'
I tried this:
df = df.groupby(['id', 'type']).timeframe.apply(lambda x: x.dt.date()).unique().rename('test').reset_index()
But got error:
TypeError: 'Series' object is not callable
What should I do?
You could use value_counts. (The TypeError itself comes from calling .dt.date with parentheses: it is a property, not a method.)
(df[df['type']=='A']
   .assign(timeframe=df['timeframe'].dt.date)
   .value_counts(['id','type','timeframe'], sort=False)
   .reset_index()
   .rename(columns={0:'count'}))
   id type   timeframe  count
0   1    A  2022-06-06      2
1   1    A  2022-06-08      1
2   1    A  2022-06-10      2
3   2    A  2022-06-07      1
4   2    A  2022-06-09      1
5   2    A  2022-06-10      1
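If the goal is literally the number of unique dates per id (rather than row counts per date), a groupby with nunique may be closer to what was asked; a sketch, not from the original answer:
# Count distinct calendar dates per id among the type 'A' rows.
out = (df[df['type'] == 'A']
       .assign(day=lambda d: d['timeframe'].dt.date)   # .dt.date is a property, no ()
       .groupby('id')['day']
       .nunique()
       .rename('unique_dates')
       .reset_index())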

groupby column in pandas

I am trying to group by a column's values in pandas, but I'm not getting it to work.
Example:
Col1  Col2  Col3
A     1     2
B     5     6
A     3     4
C     7     8
A     11    12
B     9     10
-----
The result needed, grouping by Col1:
Col1  Col2    Col3
A     1,3,11  2,4,12
B     5,9     6,10
C     7       8
but I am getting this output:
<pandas.core.groupby.generic.DataFrameGroupBy object at 0x0000025BEB4D6E50>
I can get this in Excel Power Query using Group By with Count All Rows, but I can't get the same with Python and pandas. Any help?
Try this:
(
    df
    .groupby('Col1')
    .agg(lambda x: ','.join(x.astype(str)))
    .reset_index()
)
It outputs:
  Col1    Col2    Col3
0    A  1,3,11  2,4,12
1    B     5,9    6,10
2    C       7       8
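Note the astype(str) cast: str.join only accepts strings, so the numeric Col2/Col3 values must be converted before joining. If lists of the original numbers are preferable to comma-joined strings, a variant (not from the original answer):
df.groupby('Col1').agg(list).reset_index()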
Very good! I created a solution for grouping between 0 and 0:
df[df['A'] != 0].groupby((df['A'] == 0).cumsum()).sum()
It will group the column between the zeros and sum each group.
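To make that comment concrete, a small self-contained sketch of the cumsum trick; the column name 'A' and the data are hypothetical:
import pandas as pd

df = pd.DataFrame({'A': [0, 1, 2, 0, 3, 4, 5, 0, 6]})
# (df['A'] == 0).cumsum() gives every run between zeros the same group label,
# and df['A'] != 0 drops the zero separator rows themselves.
out = df[df['A'] != 0].groupby((df['A'] == 0).cumsum())['A'].sum()
# Expected sums per group: 1+2=3, 3+4+5=12, and 6.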

Complex group by using Pandas

I am facing a situation where I need to group a dataframe by a column 'ID' and also calculate the total time frame taken for that particular ID to complete. I only want to calculate the difference between date_open and date_closed for the particular ID, along with the ID count.
We only need to focus on the date open and the date closed fields, so it needs to take the max closing date and the min open date and subtract the two.
The dataframe looks as follows:
ID Date_Open Date_Closed
1 01/01/2019 02/01/2019
1 07/01/2019 09/01/2019
2 10/01/2019 11/01/2019
2 13/01/2019 19/01/2019
3 10/01/2019 11/01/2019
The output should look like this:
ID  Count_of_ID  Total_Time_In_Days
1   2            8
2   2            9
3   1            1
How should I achieve this?
Using GroupBy with named aggregation and the min and max of the dates:
df[['Date_Open', 'Date_Closed']] = (
    df[['Date_Open', 'Date_Closed']].apply(lambda x: pd.to_datetime(x, format='%d/%m/%Y'))
)
dfg = df.groupby('ID').agg(
    Count_of_ID=('ID', 'size'),
    Date_Open=('Date_Open', 'min'),
    Date_Closed=('Date_Closed', 'max')
)
dfg['Total_Time_In_Days'] = dfg['Date_Closed'].sub(dfg['Date_Open']).dt.days
dfg = dfg.drop(columns=['Date_Closed', 'Date_Open']).reset_index()
   ID  Count_of_ID  Total_Time_In_Days
0   1            2                   8
1   2            2                   9
2   3            1                   1
Now we have Total_Time_In_Days as int:
print(dfg.dtypes)
ID                    int64
Count_of_ID           int64
Total_Time_In_Days    int64
dtype: object
This can also be used:
df['Date_Open'] = pd.to_datetime(df['Date_Open'], dayfirst=True)
df['Date_Closed'] = pd.to_datetime(df['Date_Closed'], dayfirst=True)
df_grouped = df.groupby(by='ID').count()
df_grouped['Total_Time_In_Days'] = df.groupby(by='ID')['Date_Closed'].max() - df.groupby(by='ID')['Date_Open'].min()
df_grouped = df_grouped.drop(columns=['Date_Open'])
df_grouped.columns=['Count', 'Total_Time_In_Days']
print(df_grouped)
    Count Total_Time_In_Days
ID
1       2             8 days
2       2             9 days
3       1             1 days
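If plain integers are preferred over those Timedelta values, the difference can be reduced to whole days with .dt.days, mirroring the first answer:
# Convert the Timedelta column to whole days.
df_grouped['Total_Time_In_Days'] = df_grouped['Total_Time_In_Days'].dt.days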
I'd first create a column depicting how much time passed from Date_Open to Date_Closed for each row of the dataframe, like this:
df['Total_Time_In_Days'] = df.Date_Closed - df.Date_Open
Then you can use groupby:
df.groupby('ID').agg({'ID': 'count', 'Total_Time_In_Days': 'sum'})
(Note that summing per-row durations gives a different number from the max-closed minus min-open definition above whenever an ID's rows have gaps between them.)
If you need any help with the .agg function, you can refer to its official documentation.
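For completeness, a compact variant of the accepted min/max approach in a single pass; a sketch assuming the day-first dates shown above:
import pandas as pd

df = pd.DataFrame({
    'ID': [1, 1, 2, 2, 3],
    'Date_Open': ['01/01/2019', '07/01/2019', '10/01/2019', '13/01/2019', '10/01/2019'],
    'Date_Closed': ['02/01/2019', '09/01/2019', '11/01/2019', '19/01/2019', '11/01/2019'],
})
for col in ('Date_Open', 'Date_Closed'):
    df[col] = pd.to_datetime(df[col], format='%d/%m/%Y')

# Aggregate once, then turn max(closed) - min(open) into whole days.
out = df.groupby('ID').agg(Count_of_ID=('ID', 'size'),
                           open_min=('Date_Open', 'min'),
                           closed_max=('Date_Closed', 'max'))
out['Total_Time_In_Days'] = (out['closed_max'] - out['open_min']).dt.days
out = out.drop(columns=['open_min', 'closed_max']).reset_index()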

Drop groups by multiple columns if a specific value does not exist in another column in Pandas

How can I drop a whole city and district group if the date value 2018/11/1 does not exist for that group in the following dataframe:
  city district       date  value
0    a        c   2018/9/1     12
1    a        c  2018/10/1      4
2    a        c  2018/11/1      5
3    b        d   2018/9/1      3
4    b        d  2018/10/1      7
The expected result will look like this:
  city district       date  value
0    a        c   2018/9/1     12
1    a        c  2018/10/1      4
2    a        c  2018/11/1      5
Thank you!
Create a helper column with DataFrame.assign, compare the dates, and test whether at least one value per group is True using GroupBy.transform with 'any'; then filter with boolean indexing:
mask = (df.assign(new=df['date'].eq('2018/11/1'))
          .groupby(['city','district'])['new'].transform('any'))
df = df[mask]
print (df)
  city district       date  value
0    a        c   2018/9/1     12
1    a        c  2018/10/1      4
2    a        c  2018/11/1      5
If you get an error because of missing values in the mask, one possible idea is to replace the missing values in the columns used for grouping:
mask = (df.assign(new=df['date'].eq('2018/11/1'),
                  city=df['city'].fillna(-1),
                  district=df['district'].fillna(-1))
          .groupby(['city','district'])['new'].transform('any'))
df = df[mask]
print (df)
  city district       date  value
0    a        c   2018/9/1     12
1    a        c  2018/10/1      4
2    a        c  2018/11/1      5
Another idea is to add any possibly missing index values with reindex and also replace missing values with False:
mask = (df.assign(new=df['date'].eq('2018/11/1'))
          .groupby(['city','district'])['new'].transform('any'))
df = df[mask.reindex(df.index, fill_value=False).fillna(False)]
print (df)
  city district       date  value
0    a        c   2018/9/1     12
1    a        c  2018/10/1      4
2    a        c  2018/11/1      5
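An equivalent one-liner (an alternative phrasing, not from the original answer) groups the boolean comparison directly, with no helper column:
# Group the boolean Series by the key columns and broadcast 'any' back per row.
mask = df['date'].eq('2018/11/1').groupby([df['city'], df['district']]).transform('any')
df = df[mask]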
There's a special GroupBy.filter() method for this. Assuming date is already datetime:
filter_date = pd.Timestamp('2018-11-01').date()
df = df.groupby(['city', 'district']).filter(lambda x: (x['date'].dt.date == filter_date).any())
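A self-contained sketch of that filter approach, including the date parsing it assumes:
import pandas as pd

df = pd.DataFrame({
    'city':     ['a', 'a', 'a', 'b', 'b'],
    'district': ['c', 'c', 'c', 'd', 'd'],
    'date':     ['2018/9/1', '2018/10/1', '2018/11/1', '2018/9/1', '2018/10/1'],
    'value':    [12, 4, 5, 3, 7],
})
df['date'] = pd.to_datetime(df['date'], format='%Y/%m/%d')

filter_date = pd.Timestamp('2018-11-01').date()
# Keep only the (city, district) groups containing at least one matching date.
df = df.groupby(['city', 'district']).filter(
    lambda x: (x['date'].dt.date == filter_date).any())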

manipulating pandas dataframe - conditional

I have a pandas dataframe that looks like this:
ID  Date        Event_Type
1   01/01/2019  A
1   01/01/2019  B
2   02/01/2019  A
3   02/01/2019  A
I want to be left with:
ID  Date
1   01/01/2019
2   02/01/2019
3   02/01/2019
Where my condition is:
If the ID is the same AND the dates are within 2 days of each other then drop one of the rows.
If however the dates are more than 2 days apart then keep both rows.
How do I do this?
I believe you first need to convert the values to datetimes with to_datetime, then take the per-group diff, and keep rows where the diff is null (the first row of each group, via isnull()) or where the diff exceeds the timedelta threshold:
df['Date'] = pd.to_datetime(df['Date'], format='%d/%m/%Y')
s = df.groupby('ID')['Date'].diff()
df = df[(s.isnull() | (s > pd.Timedelta(2, 'd')))]
print (df)
   ID       Date Event_Type
0   1 2019-01-01          A
2   2 2019-02-01          A
3   3 2019-02-01          A
Checking the solution with different data:
print (df)
   ID       Date Event_Type
0   1 01/01/2019          A
1   1 04/01/2019          B   <- difference of 3 days
2   2 02/01/2019          A
3   3 02/01/2019          A
df['Date'] = pd.to_datetime(df['Date'], format='%d/%m/%Y')
s = df.groupby('ID')['Date'].diff()
df = df[(s.isnull() | (s > pd.Timedelta(2, 'd')))]
print (df)
   ID       Date Event_Type
0   1 2019-01-01          A
1   1 2019-01-04          B
2   2 2019-01-02          A
3   3 2019-01-02          A
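One caveat worth noting: diff only compares consecutive rows, so if the dates within an ID are not already in ascending order, sorting first keeps the 2-day comparison meaningful; a sketch:
# Sort within each ID so diff() measures the gap to the nearest earlier date.
df = df.sort_values(['ID', 'Date'])
s = df.groupby('ID')['Date'].diff()
df = df[s.isnull() | (s > pd.Timedelta(2, 'd'))]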
