I have a text file in which month, day and year are in different columns. I want to combine them into one column and convert it to date format. I am trying to use the parse_dates option in pandas read_table, but it is not working and gives me the error "file structure not yet supported".
dateparse = lambda x: pd.datetime.strptime(x, '%m-%d-%y')
date = pd.read_table("date.txt", sep = ' ', parse_dates = {'date':['month', 'day','year']}, date_parser=dateparse)
My data looks like this:
(screenshot of the file, showing separate month, day and year columns)
Remove the date_parser argument and it'll work just fine; the default parser knows how to combine and parse the listed columns on its own:
date = pd.read_table('date.txt', sep=' ', parse_dates={'date': ['month', 'day','year']})
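For reference, a minimal end-to-end sketch (the inline data is a hypothetical stand-in for date.txt; combining columns via parse_dates works on pandas 1.x but is deprecated in newer releases, where pd.to_datetime on the columns, as in the next answer, is the recommended route):

import io
import pandas as pd

# Hypothetical stand-in for date.txt
data = io.StringIO("month day year\n10 31 2019\n11 2 2019\n")

date = pd.read_table(data, sep=' ', parse_dates={'date': ['month', 'day', 'year']})
print(date)
#         date
# 0 2019-10-31
# 1 2019-11-02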
Read the data as a pandas DataFrame and create a new column with the combined date:
df = pd.read_csv('date.txt', sep=' ')
df['date'] = pd.to_datetime(df[['month','day','year']])
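This works because pd.to_datetime accepts a DataFrame whose columns are named like date components (year, month, day, and optionally hour, minute, and so on). A quick check:

import pandas as pd

df = pd.DataFrame({'month': [10, 11], 'day': [31, 2], 'year': [2019, 2019]})
df['date'] = pd.to_datetime(df[['month', 'day', 'year']])
print(df['date'])
# 0   2019-10-31
# 1   2019-11-02
# Name: date, dtype: datetime64[ns]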
Parsing custom dates from multiple columns during the pandas read_ step is also possible:
from datetime import datetime

date_parser = lambda x, y, z: datetime.strptime(f"{x}.{y}.{z}", "%m.%d.%Y")
date = pd.read_table('date.txt', sep=' ', parse_dates={'date': ['month', 'day','year']}, date_parser=date_parser)
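Here pandas ends up calling the parser row by row with one value per listed column. Note that date_parser is deprecated since pandas 2.0, so this is a pandas 1.x pattern. A self-contained check, again with hypothetical inline data standing in for date.txt:

import io
import pandas as pd
from datetime import datetime

data = io.StringIO("month day year\n10 31 2019\n")
date_parser = lambda x, y, z: datetime.strptime(f"{x}.{y}.{z}", "%m.%d.%Y")
date = pd.read_table(data, sep=' ', parse_dates={'date': ['month', 'day', 'year']},
                     date_parser=date_parser)
print(date)
#         date
# 0 2019-10-31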
Related
I have a column in my pandas DataFrame which is a string, and I want to convert it to a pandas date so that I will be able to sort on it.
import pandas as pd
dat = pd.DataFrame({'col' : ['202101', '202212']})
dat['col'].astype('datetime64[ns]')
However, this generates an error. Could you please help me find the correct way to do this?
I think this code should work.
dat['date'] = pd.to_datetime(dat['col'], format= "%Y%m")
dat['date'] = dat['date'].dt.to_period('M')
dat.sort_values(by = 'date')
If you want to replace the dataframe with the sorted one, pass inplace=True to sort_values.
Your code didn't work because the strings don't match the date format pandas expects. If your dates were in a full format, for example 20210131 (yyyymmdd), this code would be enough:
dat['date'] = pd.to_datetime(dat['col'], format= "%Y%m%d")
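Putting the to_period approach together (a quick check of the sorted result):

import pandas as pd

dat = pd.DataFrame({'col': ['202212', '202101']})
dat['date'] = pd.to_datetime(dat['col'], format='%Y%m').dt.to_period('M')
print(dat.sort_values(by='date'))
#       col     date
# 1  202101  2021-01
# 0  202212  2022-12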
I have one field in a pandas DataFrame that was imported in string format. It should be a datetime variable. How do I convert it to a datetime column and then filter based on date?
Example:
df = pd.DataFrame({'date': ['05SEP2014:00:00:00.000']})
Use the to_datetime function, specifying a format to match your data.
raw_data['Mycol'] = pd.to_datetime(raw_data['Mycol'], format='%d%b%Y:%H:%M:%S.%f')
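Once parsed, date-based filtering works as expected (a small sketch using the question's sample):

import pandas as pd

df = pd.DataFrame({'date': ['05SEP2014:00:00:00.000']})
df['date'] = pd.to_datetime(df['date'], format='%d%b%Y:%H:%M:%S.%f')

# Comparisons against date strings now work on the datetime column
print(df[df['date'] > '2014-01-01'])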
If you have more than one column to be converted you can do the following:
df[["col1", "col2", "col3"]] = df[["col1", "col2", "col3"]].apply(pd.to_datetime)
You can use the DataFrame method .apply() to operate on the values in Mycol:
>>> df = pd.DataFrame(['05SEP2014:00:00:00.000'],columns=['Mycol'])
>>> df
Mycol
0 05SEP2014:00:00:00.000
>>> import datetime as dt
>>> df['Mycol'] = df['Mycol'].apply(lambda x:
...     dt.datetime.strptime(x, '%d%b%Y:%H:%M:%S.%f'))
>>> df
Mycol
0 2014-09-05
Use the pandas to_datetime function to parse the column as datetime. By passing infer_datetime_format=True, it will automatically detect the format and convert the column to datetime.
import pandas as pd
raw_data['Mycol'] = pd.to_datetime(raw_data['Mycol'], infer_datetime_format=True)
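A version note (this assumes a newer environment): infer_datetime_format was deprecated in pandas 2.0, where strict format inference became the default, so on recent versions the plain call is equivalent:

# pandas 2.0+: format inference is on by default
raw_data['Mycol'] = pd.to_datetime(raw_data['Mycol'])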
chrisb's answer works:
raw_data['Mycol'] = pd.to_datetime(raw_data['Mycol'], format='%d%b%Y:%H:%M:%S.%f')
however it results in a Python warning of
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
I would guess this is due to some chained indexing.
Time Saver:
raw_data['Mycol'] = pd.to_datetime(raw_data['Mycol'])
To silence SettingWithCopyWarning
If you got this warning, then that means your dataframe was probably created by filtering another dataframe. Make a copy of your dataframe before any assignment and you're good to go.
df = df.copy()
df['date'] = pd.to_datetime(df['date'], format='%d%b%Y:%H:%M:%S.%f')
errors='coerce' is useful
If some rows are not in the correct format, or are not datetimes at all, the errors= parameter is very useful: you can convert the valid rows and handle the rows that contained invalid values later.
df['date'] = pd.to_datetime(df['date'], format='%d%b%Y:%H:%M:%S.%f', errors='coerce')
# for multiple columns
df[['start', 'end']] = df[['start', 'end']].apply(pd.to_datetime, format='%d%b%Y:%H:%M:%S.%f', errors='coerce')
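For instance, rows that fail to parse become NaT, which you can flag or drop afterwards (a small sketch):

import pandas as pd

df = pd.DataFrame({'date': ['05SEP2014:00:00:00.000', 'not a date']})
df['date'] = pd.to_datetime(df['date'], format='%d%b%Y:%H:%M:%S.%f', errors='coerce')

print(df['date'].isna())          # True for the row that failed to parse
df = df.dropna(subset=['date'])   # or drop the invalid rows outright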
Setting the correct format= is much faster than letting pandas find out¹
Long story short, passing the correct format= from the beginning, as in chrisb's post, is much faster than letting pandas figure out the format, especially if the format contains a time component. The runtime difference for dataframes greater than 10k rows is huge (~25 times faster, so we're talking a couple of minutes vs a few seconds). All valid format options can be found at https://strftime.org/.
¹ Code used to produce the timeit test plot:
import perfplot
import pandas as pd
from random import choices
from datetime import datetime

mdYHMSf = range(1, 13), range(1, 29), range(2000, 2024), range(24), *[range(60)]*2, range(1000)

perfplot.show(
    kernels=[lambda x: pd.to_datetime(x),
             lambda x: pd.to_datetime(x, format='%m/%d/%Y %H:%M:%S.%f'),
             lambda x: pd.to_datetime(x, infer_datetime_format=True),
             lambda s: s.apply(lambda x: datetime.strptime(x, '%m/%d/%Y %H:%M:%S.%f'))],
    labels=["pd.to_datetime(df['date'])",
            "pd.to_datetime(df['date'], format='%m/%d/%Y %H:%M:%S.%f')",
            "pd.to_datetime(df['date'], infer_datetime_format=True)",
            "df['date'].apply(lambda x: datetime.strptime(x, '%m/%d/%Y %H:%M:%S.%f'))"],
    n_range=[2**k for k in range(20)],
    setup=lambda n: pd.Series([f"{m}/{d}/{Y} {H}:{M}:{S}.{f}"
                               for m, d, Y, H, M, S, f in zip(*[choices(e, k=n) for e in mdYHMSf])]),
    equality_check=pd.Series.equals,
    xlabel='len(df)'
)
Just as we convert an object data type to float or int, you can use astype(). Note this only works when the strings are in a format pandas can parse natively; the '05SEP2014:...' strings in this question need an explicit format= via to_datetime.
raw_data['Mycol']=raw_data['Mycol'].astype('datetime64[ns]')
Incoming CSV from an American Express download looks like below (I would prefer each field to have quotes around it, but it doesn't). Pandas is treating the quoted long number in the second CSV column as the first column in the DataFrame, i.e. 320193480240275508 becomes my "Date" column:
12/13/19,'320193480240275508',Alamo Rent A Car,John Doe,-12345,178.62,Travel-Vehicle Rental,DEBIT,
colnames = ['Date', 'TransNum', 'Payee', 'NotUsed4', 'NotUsed5', 'Amount', 'AmexCategory', 'DebitCredit']
df = pd.read_csv(filenameIn, names=colnames, header=0, delimiter=",")
pd.set_option('display.max_rows', 15)
pd.set_option('display.width', 200)
print (df)
print (df.values)
Start of the output:
                          Date  ... DebitCredit
12/13/19  '320193480240275508'  ...         NaN
I have a routine to reformat the date (to handle things like 1/3/19, and to add the century). It is called like this:
df['Date'][j] = reformatAmexDate2(df['Date'][j])
That routine shows the date as follows:
def reformatAmexDate2(oldDate):
    print("oldDate=" + oldDate)

which prints:

oldDate='320193480240275508'
I saw this post which recommended dayfirst=True and added that, but got the same result. I never even told Pandas that column 1 is a date, so I believe it should treat it as text.
IIUC, the problem seems to be names=colnames: it sets new names for the columns being read from the csv file. As you are trying to read specific columns from the csv file, you can use usecols instead:
df = pd.read_csv(filenameIn,usecols=colnames, header=0, delimiter=",")
Looking at the data, I hadn't noticed the trailing comma after the last column value, i.e. the comma after "DEBIT". That trailing comma means each row has nine fields but I only supplied eight names, so pandas treated the first field as the index:
12/13/19,'320193480240275508',Alamo Rent A Car,John Doe,-12345,178.62,Travel-Vehicle Rental,DEBIT,
I just added another column at the end of my columns array:
colnames = ['Date', 'TransNum', 'Payee', 'NotUsed4', 'NotUsed5', 'Amount', 'AmexCategory', 'DebitCredit','NotUsed9']
and life is wonderful.
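For completeness, a sketch of the fixed read with the extra placeholder column (inline data standing in for the real file, and header=None since the snippet has no header row):

import io
import pandas as pd

csv = "12/13/19,'320193480240275508',Alamo Rent A Car,John Doe,-12345,178.62,Travel-Vehicle Rental,DEBIT,\n"

colnames = ['Date', 'TransNum', 'Payee', 'NotUsed4', 'NotUsed5',
            'Amount', 'AmexCategory', 'DebitCredit', 'NotUsed9']

# Nine names for nine fields (the trailing comma creates an empty ninth field),
# so pandas no longer shifts the first field into the index
df = pd.read_csv(io.StringIO(csv), names=colnames, header=None)
print(df['Date'])   # 0    12/13/19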
Guys, I need a bit of help with Pandas and would greatly appreciate your input.
My original file looks like this:
I would like to convert it by merging some pairs of columns (generating their averages) and return a new file looking like this:
Also, if possible, I would like to split the column 'RateDateTime' into two columns, one containing the date and the other containing only the time. How should I do it? I tried the code below but it doesn't work:
import pandas as pd
dateparse = lambda x: pd.datetime.strptime(x, '%Y/%m/%d %H:%M:%S')
df = pd.read_csv('data.csv', parse_dates=['RateDateTime'], index_col='RateDateTime',date_parser=dateparse)
a=pd.to_numeric(df['RateAsk_open'])
b=pd.to_numeric(df['RateAsk_high'])
c=pd.to_numeric(df['RateAsk_low'])
d=pd.to_numeric(df['RateAsk_close'])
e=pd.to_numeric(df['RateBid_open'])
f=pd.to_numeric(df['RateBid_high'])
g=pd.to_numeric(df['RateBid_low'])
h=pd.to_numeric(df['RateBid_close'])
df['Open'] = (a+e) /2
df['High'] = (b+f) /2
df['Low'] = (c+g) /2
df['Close'] = (d+h) /2
grouped = df.groupby('CurrencyPair')
Open=grouped['Open']
High=grouped['High']
Low=grouped['Low']
Close=grouped['Close']
w=pd.concat([Open, High,Low,Close], axis=1, keys=['Open', 'High','Low','Close'])
w.to_csv('w.csv')
Python returns:
TypeError: cannot concatenate object of type "<class 'pandas.core.groupby.groupby.SeriesGroupBy'>"; only pd.Series, pd.DataFrame, and pd.Panel (deprecated) objs are valid
Can someone help me please? Many thanks!!!
IIUYC, you don't need grouping here. You can simply update the existing dataframe with new columns, and specify which columns to save to the csv file in the to_csv method. Here is an example:
df['Open'] = df[['RateAsk_open', 'RateBid_open']].mean(axis=1)
df['RateDate'] = df['RateDateTime'].dt.date
df['RateTime'] = df['RateDateTime'].dt.time
df.to_csv('w.csv', columns=['CurrencyPair', 'Open', 'RateDate', 'RateTime'])
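Extending the same pattern to all four ask/bid pairs (a sketch, assuming the column names from the question, and that the rate columns are already numeric; otherwise apply pd.to_numeric first, as in the question):

import pandas as pd

df = pd.read_csv('data.csv', parse_dates=['RateDateTime'])

# Average each RateAsk_* / RateBid_* pair into a single column
for name in ['open', 'high', 'low', 'close']:
    df[name.capitalize()] = df[[f'RateAsk_{name}', f'RateBid_{name}']].mean(axis=1)

# Split the timestamp into separate date and time columns
df['RateDate'] = df['RateDateTime'].dt.date
df['RateTime'] = df['RateDateTime'].dt.time

df.to_csv('w.csv', columns=['CurrencyPair', 'Open', 'High', 'Low', 'Close',
                            'RateDate', 'RateTime'], index=False)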
I have a dataset in which one of the columns is a date, and I am expected to drop the rows that fall on leap dates. It is a range of years, so I was hoping to drop any row that matches an 02-29 filter.
The one way I used is to add additional columns, extracting the month and date separately, and then filter on that data as shown below. It serves the purpose but is obviously not good from an efficiency perspective:
df['Yr'], df['Mth-Dte'] = zip(*df['Date'].apply(lambda x: (x[:4], x[5:])))
df = df[df['Mth-Dte'] != '02-29']
Is there a better way to implement this by directly applying the filter on the column in the dataframe?
Adding the data:
                ID      Date
22398  IDM00096087  1/1/2005
22586  IDM00096087  1/1/2005
21790  IDM00096087  1/2/2005
21791  IDM00096087  1/2/2005
14727  IDM00096087  1/3/2005
Thanks in advance
Convert to datetime and use a boolean mask.
import pandas as pd
data = {'Date': {14727: '1/3/2005',
21790: '1/2/2005',
21791: '1/2/2005',
22398: '1/1/2005',
22586: '29/2/2008'},
'ID': {14727: 'IDM00096087',
21790: 'IDM00096087',
21791: 'IDM00096087',
22398: 'IDM00096087',
22586: 'IDM00096087'}}
df = pd.DataFrame(data)
Option 1, convert + dt:
df.Date = pd.to_datetime(df.Date)
# Filter away february 29
df[~((df.Date.dt.month == 2) & (df.Date.dt.day == 29))]  # ~ negates the mask, keeping everything that is not Feb 29
Option 2, convert + strftime:
df.Date = pd.to_datetime(df.Date)
# Filter away february 29
df[df.Date.dt.strftime('%m%d') != '0229']
Option 3, without conversion:
mask = pd.to_datetime(df.Date).dt.strftime('%m%d') != '0229'
df[mask]