how to get only date string from a long string - python-3.x
I know there are lots of Q&As about extracting a datetime from a string, for example with dateutil.parser:
import dateutil.parser as dparser
dparser.parse('something sep 28 2017 something', fuzzy=True).date()
output: datetime.date(2017, 9, 28)
but my question is: how do I know which part of the string produced this extraction? E.g. I want a function that also returns 'sep 28 2017':
datetime, datetime_str = get_date_str('something sep 28 2017 something')
outputs: datetime.date(2017, 9, 28), 'sep 28 2017'
Any clue, or any direction I can search in?
Extending the discussion with @Paul and following the solution from @alecxe, I have put together the following solution, which works on a number of test cases. I've made the problem slightly more challenging:
Step 1: get excluded tokens
import dateutil.parser as dparser
ostr = 'something sep 28 2017 something abcd'
_, excl_str = dparser.parse(ostr,fuzzy_with_tokens=True)
gives outputs of:
excl_str: ('something ', ' ', 'something abcd')
Step 2 : rank tokens by length
excl_str = list(excl_str)
excl_str.sort(reverse=True,key = len)
gives a sorted token list:
excl_str: ['something abcd', 'something ', ' ']
Step 3: delete tokens and ignore space element
for i in excl_str:
    if i != ' ':
        ostr = ostr.replace(i, '')
return ostr
gives a final output
ostr: 'sep 28 2017 '
Note: step 2 is required, because the replacement goes wrong if a shorter token is a substring of a longer one. In this case, if deletion followed the order ('something ', ' ', 'something abcd'), the replacement would strip 'something ' out of 'something abcd', and 'abcd' would never get deleted, ending up with 'sep 28 2017 abcd'.
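Putting the three steps together, a minimal sketch of the requested get_date_str helper could look like this (the exact tokens returned by fuzzy_with_tokens may vary between dateutil versions, so treat the test string below as an illustration):

```python
import dateutil.parser as dparser

def get_date_str(ostr):
    # parse fuzzily and collect the tokens that were NOT part of the date
    parsed, tokens = dparser.parse(ostr, fuzzy_with_tokens=True)
    # delete the longest tokens first, so a shorter token that is a
    # substring of a longer one cannot break the replacement (step 2)
    for token in sorted(tokens, key=len, reverse=True):
        if token.strip():  # ignore whitespace-only tokens (step 3)
            ostr = ostr.replace(token, '')
    return parsed.date(), ostr.strip()

date, date_str = get_date_str('something sep 28 2017 something abcd')
print(date, '|', date_str)  # 2017-09-28 | sep 28 2017
```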
Interesting problem! There is no direct way to get the parsed date string out of the bigger string with dateutil. The problem is that the dateutil parser does not even have this string available as an intermediate result, as it builds parts of the future datetime object on the fly, character by character (source).
It does, though, also collect a list of skipped tokens, which is probably your best bet. As this list is ordered, you can loop over the tokens and replace the first occurrence of each:
from dateutil import parser

s = 'something sep 28 2017 something'
parsed_datetime, tokens = parser.parse(s, fuzzy_with_tokens=True)

for token in tokens:
    s = s.replace(token.lstrip(), "", 1)

print(s)  # prints "sep 28 2017"
I am, though, not 100% sure this would work in all possible cases, especially with different whitespace characters (notice how I had to work around things with .lstrip()).
Related
How to extract several timestamp pairs from a list in Python
I have extracted all timestamps from a transcript file. The output looks like this:

('[, 00:00:03,950, 00:00:06,840, 00:00:06,840, 00:00:09,180, 00:00:09,180, '
 '00:00:10,830, 00:00:10,830, 00:00:14,070, 00:00:14,070, 00:00:16,890, '
 '00:00:16,890, 00:00:19,080, 00:00:19,080, 00:00:21,590, 00:00:21,590, '
 '00:00:24,030, 00:00:24,030, 00:00:26,910, 00:00:26,910, 00:00:29,640, '
 '00:00:29,640, 00:00:31,920, 00:00:31,920, 00:00:35,850, 00:00:35,850, '
 '00:00:38,629, 00:00:38,629, 00:00:40,859, 00:00:40,859, 00:00:43,170, '
 '00:00:43,170, 00:00:45,570, 00:00:45,570, 00:00:48,859, 00:00:48,859, '
 '00:00:52,019, 00:00:52,019, 00:00:54,449, 00:00:54,449, 00:00:57,210, '
 '00:00:57,210, 00:00:59,519, 00:00:59,519, 00:01:02,690, 00:01:02,690, '
 '00:01:05,820, 00:01:05,820, 00:01:08,549, 00:01:08,549, 00:01:10,490, '
 '00:01:10,490, 00:01:13,409, 00:01:13,409, 00:01:16,409, 00:01:16,409, '
 '00:01:18,149, 00:01:18,149, 00:01:20,340, 00:01:20,340, 00:01:22,649, '
 '00:01:22,649, 00:01:26,159, 00:01:26,159, 00:01:28,740, 00:01:28,740, '
 '00:01:30,810, 00:01:30,810, 00:01:33,719, 00:01:33,719, 00:01:36,990, '
 '00:01:36,990, 00:01:39,119, 00:01:39,119, 00:01:41,759, 00:01:41,759, '
 '00:01:43,799, 00:01:43,799, 00:01:46,619, 00:01:46,619, 00:01:49,140, '
 '00:01:49,140, 00:01:51,240, 00:01:51,240, 00:01:53,759, 00:01:53,759, '
 '00:01:56,460, 00:01:56,460, 00:01:58,740, 00:01:58,740, 00:02:01,640, '
 '00:02:01,640, 00:02:04,409, 00:02:04,409, 00:02:07,229, 00:02:07,229, '
 '00:02:09,380, 00:02:09,380, 00:02:12,060, 00:02:12,060, 00:02:14,840, ]')

In this output there are always timestamp pairs, i.e. 2 consecutive timestamps belong together, for example: 00:00:03,950 and 00:00:06,840; 00:00:06,840 and 00:00:09,180; etc. Now I want to extract all these timestamp pairs separately, so that the output looks like this:

00:00:03,950 - 00:00:06,840
00:00:06,840 - 00:00:09,180
00:00:09,180 - 00:00:10,830
etc.
For now, I have the following (very inconvenient) solution for my problem:

# get first part of first timestamp
a = res_timestamps[2:15]
print(dedent(a))

# get second part of first timestamp
b = res_timestamps[17:29]
print(b)

# combine timestamp parts
c = a + ' - ' + b
print(dedent(c))

Of course, this is very bad, since I cannot extract the indices manually for all transcripts. Trying to use a loop has not worked yet, because each item is not a timestamp but a single character. Is there an elegant solution for my problem? I appreciate any help or tip. Thank you very much in advance!
Regex to the rescue! A solution that works perfectly on your example data:

import re
from pprint import pprint

pprint(re.findall(r"(\d{2}:\d{2}:\d{2},\d{3}), (\d{2}:\d{2}:\d{2},\d{3})", your_data))

This prints:

[('00:00:03,950', '00:00:06,840'), ('00:00:06,840', '00:00:09,180'),
 ('00:00:09,180', '00:00:10,830'), ('00:00:10,830', '00:00:14,070'),
 ('00:00:14,070', '00:00:16,890'), ('00:00:16,890', '00:00:19,080'),
 ('00:00:19,080', '00:00:21,590'), ('00:00:21,590', '00:00:24,030'),
 ('00:00:24,030', '00:00:26,910'), ('00:00:26,910', '00:00:29,640'),
 ('00:00:29,640', '00:00:31,920'), ('00:00:31,920', '00:00:35,850'),
 ('00:00:35,850', '00:00:38,629'), ('00:00:38,629', '00:00:40,859'),
 ('00:00:40,859', '00:00:43,170'), ('00:00:43,170', '00:00:45,570'),
 ('00:00:45,570', '00:00:48,859'), ('00:00:48,859', '00:00:52,019'),
 ('00:00:52,019', '00:00:54,449'), ('00:00:54,449', '00:00:57,210'),
 ('00:00:57,210', '00:00:59,519'), ('00:00:59,519', '00:01:02,690'),
 ('00:01:02,690', '00:01:05,820'), ('00:01:05,820', '00:01:08,549'),
 ('00:01:08,549', '00:01:10,490'), ('00:01:10,490', '00:01:13,409'),
 ('00:01:13,409', '00:01:16,409'), ('00:01:16,409', '00:01:18,149'),
 ('00:01:18,149', '00:01:20,340'), ('00:01:20,340', '00:01:22,649'),
 ('00:01:22,649', '00:01:26,159'), ('00:01:26,159', '00:01:28,740'),
 ('00:01:28,740', '00:01:30,810'), ('00:01:30,810', '00:01:33,719'),
 ('00:01:33,719', '00:01:36,990'), ('00:01:36,990', '00:01:39,119'),
 ('00:01:39,119', '00:01:41,759'), ('00:01:41,759', '00:01:43,799'),
 ('00:01:43,799', '00:01:46,619'), ('00:01:46,619', '00:01:49,140'),
 ('00:01:49,140', '00:01:51,240'), ('00:01:51,240', '00:01:53,759'),
 ('00:01:53,759', '00:01:56,460'), ('00:01:56,460', '00:01:58,740'),
 ('00:01:58,740', '00:02:01,640'), ('00:02:01,640', '00:02:04,409'),
 ('00:02:04,409', '00:02:07,229'), ('00:02:07,229', '00:02:09,380'),
 ('00:02:09,380', '00:02:12,060'), ('00:02:12,060', '00:02:14,840')]

You could output this in your desired format like so:

for start, end in timestamps:
    print(f"{start} - {end}")
Here's a solution without regular expressions:

Clean the string and split on ', ' to create a list.
Use string slicing to select the odd and even values and zip them together.

# given `data` as your string
# convert data into a list by removing end brackets and spaces, and splitting
data = data.replace('[, ', '').replace(', ]', '').split(', ')

# use list slicing and zip the two components
combinations = list(zip(data[::2], data[1::2]))

# print the first 5
print(combinations[:5])

[out]:

[('00:00:03,950', '00:00:06,840'), ('00:00:06,840', '00:00:09,180'), ('00:00:09,180', '00:00:10,830'), ('00:00:10,830', '00:00:14,070'), ('00:00:14,070', '00:00:16,890')]
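The two answers above can be sanity-checked against each other. A small sketch (the sample string below mimics the '[, ..., ]' format shown in the question, shortened to three pairs):

```python
import re

# a short sample in the same '[, ..., ]' format as the question's data
data = '[, 00:00:03,950, 00:00:06,840, 00:00:06,840, 00:00:09,180, 00:00:09,180, 00:00:10,830, ]'

# regex approach: findall is non-overlapping, so it pairs items 1+2, 3+4, ...
pairs_re = re.findall(r"(\d{2}:\d{2}:\d{2},\d{3}), (\d{2}:\d{2}:\d{2},\d{3})", data)

# split/zip approach: strip the brackets, split on ', ', zip even/odd slices
items = data.replace('[, ', '').replace(', ]', '').split(', ')
pairs_zip = list(zip(items[::2], items[1::2]))

print(pairs_re == pairs_zip)  # True
```

Note that splitting on ', ' (comma plus space) is safe here even though the timestamps themselves contain commas, because the comma inside a timestamp is not followed by a space.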
parsing dates from strings
I have a list of strings in Python like this:

['AM_B0_D0.0_2016-04-01T010000.flac.h5',
 'AM_B0_D3.7_2016-04-13T215000.flac.h5',
 'AM_B0_D10.3_2017-03-17T110000.flac.h5',
 'AM_B0_D0.7_2016-10-21T104000.flac.h5',
 'AM_B0_D4.4_2016-08-05T151000.flac.h5',
 'AM_B0_D0.0_2016-04-01T010000.flac.h5',
 'AM_B0_D3.7_2016-04-13T215000.flac.h5',
 'AM_B0_D10.3_2017-03-17T110000.flac.h5',
 'AM_B0_D0.7_2016-10-21T104000.flac.h5',
 'AM_B0_D4.4_2016-08-05T151000.flac.h5']

I want to parse only the date and time (for example, 2016-08-05 15:10:00) from these strings. So far I have used a for loop like the one below, but it's very time consuming; is there a better way to do this?

for files in glob.glob("AM_B0_*.flac.h5"):
    if files[11] == '_':
        year = files[12:16]
        month = files[17:19]
        day = files[20:22]
        hour = files[23:25]
        minute = files[25:27]
        second = files[27:29]
        tindex = pd.date_range(start='%d-%02d-%02d %02d:%02d:%02d' % (int(year), int(month), int(day), int(hour), int(minute), int(second)), periods=60, freq='10S')
    else:
        year = files[11:15]
        month = files[16:18]
        day = files[19:21]
        hour = files[22:24]
        minute = files[24:26]
        second = files[26:28]
        tindex = pd.date_range(start='%d-%02d-%02d %02d:%02d:%02d' % (int(year), int(month), int(day), int(hour), int(minute), int(second)), periods=60, freq='10S')
Try this (based on the 2nd-last '-'; no need for an if-else case):

filesall = ['AM_B0_D0.0_2016-04-01T010000.flac.h5', 'AM_B0_D3.7_2016-04-13T215000.flac.h5',
            'AM_B0_D10.3_2017-03-17T110000.flac.h5', 'AM_B0_D0.7_2016-10-21T104000.flac.h5',
            'AM_B0_D4.4_2016-08-05T151000.flac.h5', 'AM_B0_D0.0_2016-04-01T010000.flac.h5',
            'AM_B0_D3.7_2016-04-13T215000.flac.h5', 'AM_B0_D10.3_2017-03-17T110000.flac.h5',
            'AM_B0_D0.7_2016-10-21T104000.flac.h5', 'AM_B0_D4.4_2016-08-05T151000.flac.h5']

def find_second_last(text, pattern):
    return text.rfind(pattern, 0, text.rfind(pattern))

for files in filesall:
    start = find_second_last(files, '-') - 4  # from the yyyy- part
    timepart = (files[start:start+17]).replace("T", " ")
    # insert 2 ':'s
    timepart = timepart[:13] + ':' + timepart[13:15] + ':' + timepart[15:]
    # print(timepart)
    tindex = pd.date_range(start=timepart, periods=60, freq='10S')
Instead of hard-coding files[11], look for the last or 2nd-last index of '_'; then you can use your code without writing the same block twice. Or use a regex to parse the string.
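Following the regex suggestion, here is a minimal sketch. It assumes the timestamp in the filenames always has the form YYYY-MM-DDTHHMMSS, as in the examples above; the pandas date_range step from the question is left out so the sketch stays self-contained:

```python
import re
from datetime import datetime

filesall = ['AM_B0_D0.0_2016-04-01T010000.flac.h5',
            'AM_B0_D10.3_2017-03-17T110000.flac.h5',
            'AM_B0_D4.4_2016-08-05T151000.flac.h5']

# one pattern handles every filename, regardless of where the '_' falls
stamp = re.compile(r'(\d{4}-\d{2}-\d{2})T(\d{6})')

for name in filesall:
    match = stamp.search(name)
    if match:
        # join 'YYYY-MM-DD' and 'HHMMSS' and parse them in one go
        start = datetime.strptime(''.join(match.groups()), '%Y-%m-%d%H%M%S')
        print(start)  # e.g. 2016-04-01 01:00:00
```

The resulting datetime objects can then be fed straight into pd.date_range(start=..., periods=60, freq='10S') as in the original loop.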
How to identify and homogenize date format of instances in a string?
I can't find a way to identify the date formats of a string in MATLAB and put all of them in the same format. I have the following cell array:

list = {'01-Sep-1882'; ...
        '01-Aug-1895'; ...
        '04/01/1912'; ...
        'Tue, 05/28/46'; ...
        'Tue, 03/10/53'; ...
        '06/20/58'; ...
        'Thu, 09/20/73'; ...
        'Fri, 08/15/75'; ...
        'Sun, 12/01/1996'};

If I do datenum(list), there's an error message, because the rows don't all have the same date format. Can you think of a way to circumvent this?
You can do this by successively applying datetime to convert each format and isnat to identify the dates that didn't convert properly. In addition, you can specify days of the week in the input format string, and what pivot year to use for years given by only their last two digits. Starting with the sample data and expected date formats in your question, here's the code to do it:

% Input:
list = {'01-Sep-1882'; ...
        '01-Aug-1895'; ...
        '04/01/1912'; ...
        'Tue, 05/28/46'; ...
        'Tue, 03/10/53'; ...
        '06/20/58'; ...
        'Thu, 09/20/73'; ...
        'Fri, 08/15/75'; ...
        'Sun, 12/01/1996'};

% Conversion code:
dt = datetime(list, 'InputFormat', 'dd-MMM-yyyy');
index = isnat(dt);
dt(index) = datetime(list(index), 'InputFormat', 'MM/dd/yy', 'PivotYear', 1900);
index = isnat(dt);
dt(index) = datetime(list(index), 'InputFormat', 'eee, MM/dd/yy', 'PivotYear', 1900)

% Output:
dt =
  9×1 datetime array
   01-Sep-1882
   01-Aug-1895
   01-Apr-1912
   28-May-1946
   10-Mar-1953
   20-Jun-1958
   20-Sep-1973
   15-Aug-1975
   01-Dec-1996

Now you can convert these to numeric values with datenum:

dnum = datenum(dt);
Part of the issue here is that MATLAB's datenum cannot understand this format: Tue, 05/28/46. So let's clean the original list so that it can understand it.

% original list
list = {'01-Sep-1882','01-Aug-1895','04/01/1912','Tue, 05/28/46','Tue, 03/10/53','06/20/58','Thu, 09/20/73','Fri, 08/15/75','Sun, 12/01/1996'}

% split each cell in the list
list_split = cellfun(@(x) strsplit(x, ' '), list, 'UniformOutput', false);

% now detect where there is an unusual format; this gives a logical array
abnormal_idx = cellfun(@(x) length(x) == 2, list_split, 'UniformOutput', true)

% make a copy
clean_list = list;

% at the abnormal indices, retain only the part that MATLAB understands
clean_list(abnormal_idx) = cellfun(@(x) x{2}, list_split(abnormal_idx), 'UniformOutput', false);

% now run datenum on the clean list
date_num = cellfun(@datenum, clean_list, 'UniformOutput', true);
How to search tweets from an id to another id
I'm trying to get tweets using TwitterSearch in Python 3. Basically, I want to get all tweets between these two IDs:

748843914254249984 -> 760065085616250880

These two IDs correspond to Fri Jul 01 11:41:16 +0000 2016 and Mon Aug 01 10:50:12 +0000 2016. So here's the code I made, crawl.py:

#!/usr/bin/python3
# coding: utf-8

from TwitterSearch import *
import datetime

def crawl():
    try:
        tso = TwitterSearchOrder()
        tso.set_keywords(["keyword"])
        tso.set_since_id(748843914254249984)
        tso.set_max_id(760065085616250880)

        ACCESS_TOKEN = xxx
        ACCESS_SECRET = xxx
        CONSUMER_KEY = xxx
        CONSUMER_SECRET = xxx

        ts = TwitterSearch(
            consumer_key=CONSUMER_KEY,
            consumer_secret=CONSUMER_SECRET,
            access_token=ACCESS_TOKEN,
            access_token_secret=ACCESS_SECRET
        )

        for tweet in ts.search_tweets_iterable(tso):
            print(tweet['id_str'], '-', tweet['created_at'])

    except TwitterSearchException as e:
        print(e)

if __name__ == '__main__':
    crawl()

I'm not very familiar with the Twitter API and searching with it, but this code should do the job. Instead, it gives:

760058064816988160 - Mon Aug 01 10:22:18 +0000 2016
[...]
760065085616250880 - Mon Aug 01 10:50:12 +0000 2016

many, many times. I get the same lines over and over again, instead of getting everything between my two IDs. So I'm not getting any of the July tweets. Any idea why?
TL;DR: remove the tso.set_max_id(760065085616250880) line.

Explanation (as far as I understand it): I found your problem in the TwitterSearch docs: "The only parameter with a default value is count with 100. This is because it is the maximum of tweets returned by this very Twitter API endpoint."

If I check this in your code by creating a search URL, I get:

tso.create_search_url()
# ?q=Vuitton&since_id=748843914254249984&count=100&max_id=760065085616250880

which contains count=100, meaning it will get only the first page of 100 tweets. In contrast, removing set_since_id and set_max_id also gives count=100 but retrieves many more tweets; with max_id set, it stops at 100 tweets. set_since_id without set_max_id works; the other way around doesn't. So removing max_id=760065085616250880 from the search URL produced the results you want. If anyone can explain why set_max_id does not work on its own, please edit my answer.
Remove certain dates in list. Python 3.4
I have a list that has several days in it. Each day has several timestamps. What I want to do is make a new list that takes only the start time and the end time for each date in the list. I also want to delete the character between the date and the time on each one; the char is always the same type of letter. The timestamps can vary in how many there are on each date. Since I'm new to Python, simple-to-understand code would be preferred. I've been using a lot of regex, so please tell me if there is a way with that. The list has been sorted with list.sort(), so it's in the correct order. The code used to extract the information was the following:

file1 = open("test.txt", "r")
for f in file1:
    list1 += re.findall('20\d\d-\d\d-\d\dA\d\d\:\d\d', f)
listX = (len(list1))
list2 = list1[0:listX - 2]
list2.sort()

Here is how the list looks:

2015-12-28A09:30 2015-12-28A09:30 2015-12-28A09:35 2015-12-28A09:35 2015-12-28A12:00 2015-12-28A12:00 2015-12-28A12:15 2015-12-28A12:15 2015-12-28A14:30 2015-12-28A14:30 2015-12-28A15:15 2015-12-28A15:15 2015-12-28A16:45 2015-12-28A16:45 2015-12-28A17:00 2015-12-28A17:00 2015-12-28A18:15 2015-12-28A18:15
2015-12-29A08:30 2015-12-29A08:30 2015-12-29A08:35 2015-12-29A08:35 2015-12-29A10:45 2015-12-29A10:45 2015-12-29A11:00 2015-12-29A11:00 2015-12-29A13:15 2015-12-29A13:15 2015-12-29A14:00 2015-12-29A14:00 2015-12-29A15:30 2015-12-29A15:30 2015-12-29A15:45 2015-12-29A15:45 2015-12-29A17:15 2015-12-29A17:15
2015-12-30A08:30 2015-12-30A08:30 2015-12-30A08:35 2015-12-30A08:35 2015-12-30A10:45 2015-12-30A10:45 2015-12-30A11:00 2015-12-30A11:00 2015-12-30A13:00 2015-12-30A13:00 2015-12-30A13:45 2015-12-30A13:45 2015-12-30A15:15 2015-12-30A15:15 2015-12-30A15:30 2015-12-30A15:30 2015-12-30A17:15 2015-12-30A17:15

And this is how I want it to look:

2015-12-28 09:30
2015-12-28 18:15
2015-12-29 08:30
2015-12-29 17:15
2015-12-30 08:30
2015-12-30 17:15
First of all, you should convert all your strings into proper dates that Python can work with. That way, you have a lot more control over them, also for changing the formatting later. So let's parse your dates using datetime.strptime on list2:

from datetime import datetime

dates = [datetime.strptime(item, '%Y-%m-%dA%H:%M') for item in list2]

This creates a new list dates that contains all your dates from list2, but as parsed datetime objects. Now, since you want the first and the last date of each day, we have to group your dates by the date component. There are various ways to do that; I'll use itertools.groupby, with a key function that looks only at the date component of each entry:

from itertools import groupby

for day, times in groupby(dates, lambda x: x.date()):
    first, *mid, last = times
    print(first)
    print(last)

If we run this, we already get your output (without date formatting):

2015-12-28 09:30:00
2015-12-28 18:15:00
2015-12-29 08:30:00
2015-12-29 17:15:00
2015-12-30 08:30:00
2015-12-30 17:15:00

Of course, you can also collect the first and last date of each day in a list first, to process the dates later:

filteredDates = []
for day, times in groupby(dates, lambda x: x.date()):
    first, *mid, last = times
    filteredDates.append(first)
    filteredDates.append(last)

And you can also output your dates with a different format using datetime.strftime:

for date in filteredDates:
    print(date.strftime('%Y-%m-%d %H:%M'))

That would give us the following output:

2015-12-28 09:30
2015-12-28 18:15
2015-12-29 08:30
2015-12-29 17:15
2015-12-30 08:30
2015-12-30 17:15

If you don't want to go the route of parsing those dates, you could also do this simply by working on the strings. Since they are nicely formatted (i.e. they can be compared directly), you can do that as well.
It would look like this then:

for day, times in groupby(list2, lambda x: x[:10]):
    first, *mid, last = times
    print(first)
    print(last)

Producing the following output:

2015-12-28A09:30
2015-12-28A18:15
2015-12-29A08:30
2015-12-29A17:15
2015-12-30A08:30
2015-12-30A17:15
Because your data is ordered, you just need to pull the first and last value from each group. You can replace the single letter with a space, then split each date string and compare just the dates:

def grp(l):
    it = iter(l)
    prev = start = next(it).replace("A", " ")
    for dte in it:
        dte = dte.replace("A", " ")
        # if we have a new date, yield the previous start and end
        if dte.split(None, 1)[0] != prev.split(None, 1)[0]:
            yield start
            yield prev
            start = dte
        prev = dte
    yield start
    yield prev

l = ["2015-12-28A09:30", "2015-12-28A09:30", .....................
l[:] = grp(l)

This could also certainly be done as you process the file, without sorting, by using a dict to group:

from re import findall
from collections import defaultdict

with open("dates.txt") as f:
    od = defaultdict(lambda: {"min": "null", "max": ""})
    for line in f:
        for dte in findall('20\d\d-\d\d-\d\dA\d\d\:\d\d', line):
            dte, tme = dte.split("A")
            _dte = "{} {}".format(dte, tme)
            if od[dte]["min"] > _dte:
                od[dte]["min"] = _dte
            if od[dte]["max"] < _dte:
                od[dte]["max"] = _dte

print(list(od.values()))

Which will give you the start and end time for each date:

[{'min': '2016-01-03 23:59', 'max': '2016-01-03 23:59'}, {'min': '2015-12-28 00:00', 'max': '2015-12-28 18:15'}, {'min': '2015-12-30 08:30', 'max': '2015-12-30 17:15'}, {'min': '2015-12-29 08:30', 'max': '2015-12-29 17:15'}, {'min': '2015-12-15 08:41', 'max': '2015-12-15 08:41'}]

(The start for 2015-12-28 is also 00:00, not 09:30, because my test file contains extra timestamps.) If your dates are actually, as posted, one per line, you don't need a regex either:

from collections import defaultdict

with open("dates.txt") as f:
    od = defaultdict(lambda: {"min": "null", "max": ""})
    for line in f:
        dte, tme = line.rstrip().split("A")
        _dte = "{} {}".format(dte, tme)
        if od[dte]["min"] > _dte:
            od[dte]["min"] = _dte
        if od[dte]["max"] < _dte:
            od[dte]["max"] = _dte

print(list(od.values()))

Which would give you the same output.