spaCy matching priority - nlp

I'm looking to create a physics pattern library with spaCy:
I want to detect time and speed patterns. My aim is to stay flexible with those patterns.
time_pattern = [
    [
        {'LIKE_NUM': True, 'OP': '?'},
        {'LOWER': {'IN': ['time', 's', 'h', 'min']}},
        {'LOWER': {'IN': ['maximum', 'minimum', 'min', 'max']}, 'OP': '?'}
    ]
]
speed_pattern = [
    [
        {'LIKE_NUM': True, 'OP': '?'},
        {'LOWER': {'IN': ['km', 'm']}},
        {'IS_PUNCT': True},
        {'LOWER': {'IN': ['h', 'hour', 's', 'min']}}
    ]
]
matcher = Matcher(nlp.vocab, validate=True)
matcher.add("SPEED", speed_pattern)
matcher.add("TIME", time_pattern)

doc = nlp("a certain time, more about 23 min, can't get above 25 km/h")
for id_match, start, end in matcher(doc):
    match_label = nlp.vocab[id_match].text
    print(match_label, '<--', doc[start:end])
So far my code returns this collection of matches:
TIME <-- time
TIME <-- 23 min
TIME <-- min
SPEED <-- 25 km/h
SPEED <-- km/h
TIME <-- h
I want the matcher to match only once, and to match "23 min" rather than "min". I would also like the matcher not to match an element that has already been matched (for example, "h" should not be matched on its own because it is already part of the "km/h" match).

You can try adding greedy="LONGEST" to matcher.add() to return only the longest (or, with greedy="FIRST", the first) matches:
matcher.add("SPEED", speed_pattern, greedy="LONGEST")
matcher.add("TIME", time_pattern, greedy="LONGEST")
But note that this doesn't handle overlaps across different match IDs:
TIME <-- 23 min
TIME <-- time
TIME <-- h
SPEED <-- 25 km/h
If you want to filter all of the matches, you can use matcher(doc, as_spans=True) to get the matches directly as spans and then use spacy.util.filter_spans to filter the whole list of spans to a list of non-overlapping spans with the longest spans preferred: https://spacy.io/api/top-level#util.filter_spans
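A minimal sketch of that combination, reusing the matcher and doc defined above (assuming spaCy v3), which should print the list shown below:
spans = matcher(doc, as_spans=True)        # Span objects labelled with the match ID
filtered = spacy.util.filter_spans(spans)  # keep only the longest non-overlapping spans
print(filtered)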
[time, 23 min, 25 km/h]

You can use as_spans=True option with spacy.matcher.Matcher (introduced in spaCy v3.0):
matches = matcher(doc, as_spans=True)
for span in spacy.util.filter_spans(matches):
    print(span.label_, "->", span.text)
From the documentation:
Instead of tuples, return a list of Span objects of the matches, with the match_id assigned as the span label. Defaults to False.
See the Python demo:
import spacy
from spacy.tokens.doc import Doc
from spacy.matcher import Matcher

nlp = spacy.load('en_core_web_sm')

time_pattern = [
    [
        {'LIKE_NUM': True, 'OP': '?'},
        {'LOWER': {'IN': ['time', 's', 'h', 'min']}},
        {'LOWER': {'IN': ['maximum', 'minimum', 'min', 'max']}, 'OP': '?'}
    ]
]
speed_pattern = [
    [
        {'LIKE_NUM': True, 'OP': '?'},
        {'LOWER': {'IN': ['km', 'm']}},
        {'IS_PUNCT': True},
        {'LOWER': {'IN': ['h', 'hour', 's', 'min']}}
    ]
]

matcher = Matcher(nlp.vocab, validate=True)
matcher.add("SPEED", speed_pattern)
matcher.add("TIME", time_pattern)

doc = nlp("a certain time, more about 23 min, can't get above 25 km/h")
matches = matcher(doc, as_spans=True)
for span in spacy.util.filter_spans(matches):
    print(span.label_, "->", span.text)
Output:
TIME -> time
TIME -> 23 min
SPEED -> 25 km/h

Related

How to extract same parts of file name with different patterns?

I have the following file name formats:
2020-01-05-ABC1111_001.jpg
2020_02_06_B444444_MN_004.jpg
2020_03_20_KUKU44223222-STAFF_005.jpg
2020-04-03-LULU4444211-MN_018.jpg
Most (99%) of the files are of the following format:
2020_04_03_LULU4444211_018.jpg
And I just use rsplit("_", 2) and get what I need.
The first part is the date, then an ID, an optional MN or STAFF marker, and finally the page number.
How to build a good regex or split function to somehow split it to date, id, and page?
From all the above examples I would like to get:
{
"2020-01-05-ABC1111_001.jpg": {"date": 2020-01-05, "id": ABC1111, "page_num": 1},
"2020_02_06_B444444_MN_004.jpg": {"date": 2020_02_06, "id": B444444, "page_num": 4},
"2020_03_20_KUKU44223222-STAFF_005.jpg": {"date": 2020_03_20, "id": KUKU44223222, "page_num": 5},
"2020-04-03-LULU4444211-MN_018.jpg": {"date": 2020-04-03, "id": LULU4444211, "page_num": 18}
}
I have tried rsplit, and I know there is the option of annotating data plus a spaCy NER model, but maybe there is a simpler way to do it?
You might use code like
import re

strings = ['2020-01-05-ABC1111_001.jpg', '2020_02_06_B444444_MN_004.jpg',
           '2020_03_20_KUKU44223222-STAFF_005.jpg', '2020-04-03-LULU4444211-MN_018.jpg']
rx = re.compile(r'(?P<date>\d{4}[-_]\d{2}[-_]\d{2})[-_](?P<id>[^_-]+)(?:[_-](?:MN|STAFF))?[_-](?P<page_num>\d+)')
d = {}
for s in strings:
    m = rx.search(s)
    if m:
        d[s] = m.groupdict()
print(d)
See the Python demo, yielding
{'2020-01-05-ABC1111_001.jpg': {'date': '2020-01-05', 'id': 'ABC1111', 'page_num': '001'}, '2020_02_06_B444444_MN_004.jpg': {'date': '2020_02_06', 'id': 'B444444', 'page_num': '004'}, '2020_03_20_KUKU44223222-STAFF_005.jpg': {'date': '2020_03_20', 'id': 'KUKU44223222', 'page_num': '005'}, '2020-04-03-LULU4444211-MN_018.jpg': {'date': '2020-04-03', 'id': 'LULU4444211', 'page_num': '018'}}
Note that the regex contains named capturing groups, so you can access .groupdict() once a match is found. The pattern looks like
(?P<date>\d{4}[-_]\d{2}[-_]\d{2})[-_](?P<id>[^_-]+)(?:[_-](?:MN|STAFF))?[_-](?P<page_num>\d+)
See the regex demo.
Regex details
(?P<date>\d{4}[-_]\d{2}[-_]\d{2}) - Group "date": 4 digits, _ or -, 2 digits, _ or - and then again 2 digits
[-_] - a hyphen or underscore
(?P<id>[^_-]+) - Group "id": 1 or more chars other than - and _
(?:[_-](?:MN|STAFF))? - an optional non-capturing group matching - or _ and then MN or STAFF
[_-] - a - or _
(?P<page_num>\d+) - Group "page_num": 1 or more digits.
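The desired output in the question has page_num as a plain integer rather than a zero-padded string; if that matters, a small post-processing step over the groupdict results from the snippet above (a sketch) could be:
for name, fields in d.items():
    fields['page_num'] = int(fields['page_num'])  # '001' -> 1, '018' -> 18
print(d)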
Another option is this regexp:
(\d{4}[-_]\d{2}[-_]\d{2})[-_](.*)[-_](\d+)\.[a-zA-Z]+
It contains three regexp groups:
date
id
page number
Explanation:
(\d{4}[-_]\d{2}[-_]\d{2}) # date (yyyy-mm-dd or yyyy_mm_dd) - group 1
[-_] # separator (dash or underscore)
(.+) # id (any character) - group 2
[-_] # separator
(\d+) # page number - group 3
\.[a-zA-Z]+ # file extension
Demo: https://regex101.com/r/IPF7QE/1
You can read the groups in Python this way (assuming regexp holds the pattern above and text_line is one file name):
import re

if match := re.search(regexp, text_line, re.IGNORECASE):
    date = match.group(1)
    id = match.group(2)  # note: this shadows the built-in id()
    page_number = match.group(3)
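Putting that regexp to work on the example file names, a self-contained sketch might look like the following (note that because of the greedy (.*), group 2 may still include the MN/STAFF marker):
import re

regexp = r'(\d{4}[-_]\d{2}[-_]\d{2})[-_](.*)[-_](\d+)\.[a-zA-Z]+'
files = ['2020-01-05-ABC1111_001.jpg', '2020_02_06_B444444_MN_004.jpg',
         '2020_03_20_KUKU44223222-STAFF_005.jpg', '2020-04-03-LULU4444211-MN_018.jpg']

result = {}
for name in files:
    if match := re.search(regexp, name, re.IGNORECASE):
        result[name] = {
            "date": match.group(1),
            "id": match.group(2),
            "page_num": int(match.group(3)),  # '001' -> 1
        }
print(result)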

Records of each version in list

I have a list of versions
1.0.0.1 - 10
1.1.0.1 - 10
1.2.0.1 - 10
That is 30 numbers in my list, but I only want to show the 5 highest numbers of each sort:
1.0.0.5 - 10
1.1.0.5 - 10
1.2.0.5 - 10
How can I do that? The last number can be anything, but the first three numbers are always one of:
1.0.0
1.1.0
1.2.0
CODE:
import groovy.json.JsonSlurperClassic
def data = new URL("http://xxxx.se:8081/service/rest/beta/components?repository=Releases").getText()
/**
* 'jsonString' is the input json you have shown
* parse it and store it in collection
*/
Map convertedJSONMap = new JsonSlurperClassic().parseText(data)
def list = convertedJSONMap.items.version
list
Version numbers alone usually don't make an easy sort. So I'd split them into numbers and work from there. E.g.
def versions = [
    "1.0.0.12", "1.1.0.42", "1.2.0.666",
    "1.0.0.6", "1.1.0.77", "1.2.0.8",
    "1.0.0.23", "1.1.0.5", "1.2.0.5",
]

println(
    versions.collect{
        it.split(/\./)*.toInteger() // turn each version into an array of integers
    }.groupBy{
        it.take(2)                  // group by the first two numbers
    }.collect{ _, vs ->
        vs.sort().last()            // sort each group and take the last (highest)
    }*.join(".")                    // piece the numbers back together
)
// => [1.0.0.23, 1.1.0.77, 1.2.0.666]
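The question actually asks for the five highest per series rather than only the highest; purely as an illustration of the same split/group/sort idea, a Python sketch (assuming four-part dotted version strings, with made-up sample data) might look like:
from itertools import groupby

# sample data: 1.0.0.1-10, 1.1.0.1-10, 1.2.0.1-10
versions = [f"1.{i}.0.{n}" for i in range(3) for n in range(1, 11)]

# parse into tuples of ints so they sort numerically rather than lexically
parsed = sorted(tuple(int(p) for p in v.split(".")) for v in versions)

top5 = []
for prefix, group in groupby(parsed, key=lambda t: t[:3]):  # group by the first three numbers
    top5.extend(".".join(map(str, t)) for t in list(group)[-5:])  # keep the five highest per group

print(top5)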

logstash parse complex message from Telegram

I'm processing Telegram history (a txt file) and I need to extract and process a fairly complex (nested) multiline pattern.
Here's the whole pattern
Free_Trade_Calls__AltSignals:IOC/ BTC (bittrex)
BUY : 0.00164
SELL :
TARGET 1 : 0.00180
TARGET 2 : 0.00205
TARGET 3 : 0.00240
STOP LOS : 0.000120
2018-04-19 15:46:57 Free_Trade_Calls__AltSignals:TARGET
basically I am looking for a pattern starting with
Free_Trade_Calls__AltSignals: ^%(
and ending with a timestamp.
Inside that pattern (the Telegram message) I need to extract:
- exchange - in brackets in the 1st line
- the value after BUY
- the SELL values in an array of 3, SELL[3]: TARGET 1-3
- the STOP loss value (it can be written as STOP, STOP LOSS, or STOP LOS)
I've found this: Logstash grok multiline message, but I am very new to Logstash (a friend advised it to me). I was trying to parse this text in NodeJS, but it really was a pain.
Thanks Rob :)
Since you need to grab values from each line, you don't need a multi-line approach. You can skip empty lines with %{SPACE}.
For your given log, this pattern can be used,
Free_Trade_Calls__AltSignals:.*\(%{WORD:exchange}\)\s*BUY\s*:\s*%{NUMBER:BUY}\s*SELL :\s*TARGET 1\s*:\s*%{NUMBER:TARGET_1}\s*TARGET 2\s*:\s*%{NUMBER:TARGET_2}\s*TARGET 3\s*:\s*%{NUMBER:TARGET_3}\s*.*:\s*%{NUMBER:StopLoss}
Please note that \s* is equivalent to %{SPACE}.
It will output,
{
  "exchange": [
    [
      "bittrex"
    ]
  ],
  "BUY": [
    [
      "0.00164"
    ]
  ],
  "BASE10NUM": [
    [
      "0.00164",
      "0.00180",
      "0.00205",
      "0.00240",
      "0.000120"
    ]
  ],
  "TARGET_1": [
    [
      "0.00180"
    ]
  ],
  "TARGET_2": [
    [
      "0.00205"
    ]
  ],
  "TARGET_3": [
    [
      "0.00240"
    ]
  ],
  "StopLoss": [
    [
      "0.000120"
    ]
  ]
}
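For reference, the same fields can also be pulled out with an ordinary regex outside of Logstash; a rough Python sketch (field names chosen to mirror the grok captures above):
import re

message = """Free_Trade_Calls__AltSignals:IOC/ BTC (bittrex)
BUY : 0.00164
SELL :
TARGET 1 : 0.00180
TARGET 2 : 0.00205
TARGET 3 : 0.00240
STOP LOS : 0.000120"""

pattern = re.compile(
    r"Free_Trade_Calls__AltSignals:.*\((?P<exchange>\w+)\)\s*"
    r"BUY\s*:\s*(?P<BUY>[\d.]+)\s*"
    r"SELL :\s*"
    r"TARGET 1\s*:\s*(?P<TARGET_1>[\d.]+)\s*"
    r"TARGET 2\s*:\s*(?P<TARGET_2>[\d.]+)\s*"
    r"TARGET 3\s*:\s*(?P<TARGET_3>[\d.]+)\s*"
    r".*:\s*(?P<StopLoss>[\d.]+)"
)

m = pattern.search(message)
if m:
    print(m.groupdict())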

Not sure how to create ArangoDB graph using columns in existing collection

Background
I have a rocksdb collection that contains three fields: _id, author, subreddit.
Problem
I would like to create an ArangoDB graph connecting these two existing columns, but the examples and the drivers seem to only accept collections as edge definitions.
Issue
The ArangoDb documentation is lacking information on how I can create a graph using edges and nodes pulled from the same collection.
EDIT:
Solution
This was fixed with a code change at this Arangodb issues ticket.
Here's one way to do it using jq, a JSON-oriented command-line tool.
First, an outline of the steps:
1) Use arangoexport to export your author/subredit collection to a file, say, exported.json;
2) Run the jq script, nodes_and_edges.jq, shown below;
3) Use arangoimp to import the JSON produced in (2) into ArangoDB.
There are several ways the graph can be stored in ArangoDB, so ultimately you might wish to tweak nodes_and_edges.jq accordingly (e.g. to generate the nodes first, and then the edges).
INDEX
If your jq does not have INDEX defined, then use this:
def INDEX(stream; idx_expr):
  reduce stream as $row ({};
    .[$row|idx_expr|
       if type != "string" then tojson
       else .
       end] |= $row);

def INDEX(idx_expr): INDEX(.[]; idx_expr);
nodes_and_edges.jq
# This module is for generating JSON suitable for importing into ArangoDB.

### Generic Functions

# nodes/2
# $name must be the name of the ArangoDB collection of nodes corresponding to $key.
# The scheme for generating key names can be altered by changing the first
# argument of assign_keys, e.g. to "" if no prefix is wanted.
def nodes($key; $name):
  map( {($key): .[$key]} ) | assign_keys($name[0:1] + "_"; 1);

def assign_keys(prefix; start):
  . as $in
  | reduce range(0;length) as $i ([];
      . + [$in[$i] + {"_key": "\(prefix)\(start+$i)"}]);

# nodes_and_edges facilitates the normalization of an implicit graph
# in an ArangoDB "document" collection of objects having $from and $to keys.
# The input should be an array of JSON objects, as produced
# by arangoexport for a single collection.
# If $nodesq is truthy, then the JSON for both the nodes and edges is emitted,
# otherwise only the JSON for the edges is emitted.
#
# The first four arguments should be strings.
#
# $from and $to should be the key names in . to be used for the from-to edges;
# $name1 and $name2 should be the names of the corresponding collections of nodes.
def nodes_and_edges($from; $to; $name1; $name2; $nodesq ):
  def dict($s): INDEX(.[$s]) | map_values(._key);
  def objects: to_entries[] | {($from): .key, "_key": .value};
  (nodes($from; $name1) | dict($from)) as $fdict
  | (nodes($to; $name2) | dict($to) ) as $tdict
  | (if $nodesq then $fdict, $tdict | objects
     else empty end),
    (.[] | {_from: "\($name1)/\($fdict[.[$from]])",
            _to: "\($name2)/\($tdict[.[$to]])"} ) ;

### Problem-Specific Functions

# If you wish to generate the collections separately,
# then these will come in handy:
def authors: nodes("author"; "authors");
def subredits: nodes("subredit"; "subredits");

def nodes_and_edges:
  nodes_and_edges("author"; "subredit"; "authors"; "subredits"; true);

nodes_and_edges
Invocation
jq -cf nodes_and_edges.jq exported.json
This invocation produces a stream of JSONL (JSON Lines): lines for "authors", lines for "subredits", and lines for the edge collection.
Example
exported.json
[
{"_id":"test/115159","_key":"115159","_rev":"_V8JSdTS---","author": "A", "subredit": "S1"},
{"_id":"test/145120","_key":"145120","_rev":"_V8ONdZa---","author": "B", "subredit": "S2"},
{"_id":"test/114474","_key":"114474","_rev":"_V8JZJJS---","author": "C", "subredit": "S3"}
]
Output
{"author":"A","_key":"name_1"}
{"author":"B","_key":"name_2"}
{"author":"C","_key":"name_3"}
{"subredit":"S1","_key":"sid_1"}
{"subredit":"S2","_key":"sid_2"}
{"subredit":"S3","_key":"sid_3"}
{"_from":"authors/name_1","_to":"subredits/sid_1"}
{"_from":"authors/name_2","_to":"subredits/sid_2"}
{"_from":"authors/name_3","_to":"subredits/sid_3"}
Please note that the following queries take a while to complete on this huge dataset; however, they should complete successfully after some hours.
We start with arangoimp to import our base dataset:
arangoimp --create-collection true --collection RawSubReddits --type jsonl ./RC_2017-01
We use arangosh to create the collections where our final data is going to live in:
db._create("authors")
db._createEdgeCollection("authorsToSubreddits")
We fill the authors collection by simply ignoring any subsequently occurring duplicate authors.
We calculate the _key of each author using the MD5 function, so that it obeys the restrictions for allowed characters in _key, and so that we can find it again later on by calling MD5() on the author field:
db._query(`
  FOR item IN RawSubReddits
    INSERT {
      _key: MD5(item.author),
      author: item.author
    } INTO authors
    OPTIONS { ignoreErrors: true }`);
After we have filled the second vertex collection (we will keep the imported collection as the first vertex collection), we have to calculate the edges.
Since each author may have created several subreddit posts, there will most probably be several edges originating from each author. As previously mentioned,
we can use the MD5() function again to reference the author previously created:
db._query(`
  FOR onesubred IN RawSubReddits
    INSERT {
      _from: CONCAT('authors/', MD5(onesubred.author)),
      _to: CONCAT('RawSubReddits/', onesubred._key)
    } INTO authorsToSubreddits`);
After the edge collection is filled (which may again take a while; we're talking about roughly 40 million edges here), we create the graph description:
db._graphs.save({
  "_key": "reddits",
  "orphanCollections" : [ ],
  "edgeDefinitions" : [
    {
      "collection": "authorsToSubreddits",
      "from": ["authors"],
      "to": ["RawSubReddits"]
    }
  ]
})
We can now use the UI to browse the graph, or use AQL queries to traverse it. Let's pick a more or less random author from that list:
db._query(`for author IN authors LIMIT 1 RETURN author`).toArray()
[
  {
    "_key" : "1cec812d4e44b95e5a11f3cbb15f7980",
    "_id" : "authors/1cec812d4e44b95e5a11f3cbb15f7980",
    "_rev" : "_W_Eu-----_",
    "author" : "punchyourbuns"
  }
]
We identified an author, and now run a graph query for him:
db._query(`FOR vertex, edge, path IN 0..1
OUTBOUND 'authors/1cec812d4e44b95e5a11f3cbb15f7980'
GRAPH 'reddits'
RETURN path`).toArray()
One of the resulting paths looks like this:
{
  "edges" : [
    {
      "_key" : "128327199",
      "_id" : "authorsToSubreddits/128327199",
      "_from" : "authors/1cec812d4e44b95e5a11f3cbb15f7980",
      "_to" : "RawSubReddits/38026350",
      "_rev" : "_W_LOxgm--F"
    }
  ],
  "vertices" : [
    {
      "_key" : "1cec812d4e44b95e5a11f3cbb15f7980",
      "_id" : "authors/1cec812d4e44b95e5a11f3cbb15f7980",
      "_rev" : "_W_HAL-y--_",
      "author" : "punchyourbuns"
    },
    {
      "_key" : "38026350",
      "_id" : "RawSubReddits/38026350",
      "_rev" : "_W-JS0na--b",
      "distinguished" : null,
      "created_utc" : 1484537478,
      "id" : "dchfe6e",
      "edited" : false,
      "parent_id" : "t1_dch51v3",
      "body" : "I don't understand tension at all. Mine is set to auto. I'll replace the needle and rethread. Thanks!",
      "stickied" : false,
      "gilded" : 0,
      "subreddit" : "sewing",
      "author" : "punchyourbuns",
      "score" : 3,
      "link_id" : "t3_5o66d0",
      "author_flair_text" : null,
      "author_flair_css_class" : null,
      "controversiality" : 0,
      "retrieved_on" : 1486085797,
      "subreddit_id" : "t5_2sczp"
    }
  ]
}
For a graph you need an edge collection for the edges and vertex collections for the nodes. You can't create a graph using only one collection.
Maybe this topic in the documentation is helpful for you.
Here's an AQL solution, which however presupposes that all the referenced collections already exist, and that UPSERT is not necessary.
FOR v IN testcollection
  LET a = v.author
  LET s = v.subredit
  FILTER a
  FILTER s
  LET fid = (INSERT {author: a} INTO authors RETURN NEW._id)[0]
  LET tid = (INSERT {subredit: s} INTO subredits RETURN NEW._id)[0]
  INSERT {_from: fid, _to: tid} INTO author_of
  RETURN [fid, tid]

Remove certain dates in list. Python 3.4

I have a list that has several days in it, and each day has several timestamps. What I want to do is make a new list that only takes the start time and the end time for each date.
I also want to delete the character between the date and the time in each entry; the character is always the same letter.
The timestamps can vary in how many there are for each date.
Since I'm new to Python, simple and easy to understand code would be preferred. I've been using a lot of regex, so please suggest a regex approach if there is one.
The list has been sorted with list.sort(), so it's in the correct order.
The code used to extract the information was the following:
import re

list1 = []
file1 = open("test.txt", "r")
for f in file1:
    list1 += re.findall(r'20\d\d-\d\d-\d\dA\d\d:\d\d', f)

listX = len(list1)
list2 = list1[0:listX - 2]
list2.sort()
here is a list of how it looks:
2015-12-28A09:30
2015-12-28A09:30
2015-12-28A09:35
2015-12-28A09:35
2015-12-28A12:00
2015-12-28A12:00
2015-12-28A12:15
2015-12-28A12:15
2015-12-28A14:30
2015-12-28A14:30
2015-12-28A15:15
2015-12-28A15:15
2015-12-28A16:45
2015-12-28A16:45
2015-12-28A17:00
2015-12-28A17:00
2015-12-28A18:15
2015-12-28A18:15
2015-12-29A08:30
2015-12-29A08:30
2015-12-29A08:35
2015-12-29A08:35
2015-12-29A10:45
2015-12-29A10:45
2015-12-29A11:00
2015-12-29A11:00
2015-12-29A13:15
2015-12-29A13:15
2015-12-29A14:00
2015-12-29A14:00
2015-12-29A15:30
2015-12-29A15:30
2015-12-29A15:45
2015-12-29A15:45
2015-12-29A17:15
2015-12-29A17:15
2015-12-30A08:30
2015-12-30A08:30
2015-12-30A08:35
2015-12-30A08:35
2015-12-30A10:45
2015-12-30A10:45
2015-12-30A11:00
2015-12-30A11:00
2015-12-30A13:00
2015-12-30A13:00
2015-12-30A13:45
2015-12-30A13:45
2015-12-30A15:15
2015-12-30A15:15
2015-12-30A15:30
2015-12-30A15:30
2015-12-30A17:15
2015-12-30A17:15
And this is how I want it to look:
2015-12-28 09:30
2015-12-28 18:15
2015-12-29 08:30
2015-12-29 17:15
2015-12-30 08:30
2015-12-30 17:15
First of all, you should convert all your strings into proper dates that Python can work with. That way, you have a lot more control over them, including changing the formatting later. So let's parse the dates from list2 using datetime.strptime:
from datetime import datetime
dates = [datetime.strptime(item, '%Y-%m-%dA%H:%M') for item in list2]
This creates a new list dates that contains all your dates from list2, but as parsed datetime objects.
Now, since you want to get the first and the last date of each day, we somehow have to group your dates by the date component. There are various ways to do that. I’ll be using itertools.groupby for it, with a key function that just looks at the date component of each entry:
from itertools import groupby

for day, times in groupby(dates, lambda x: x.date()):
    first, *mid, last = times
    print(first)
    print(last)
If we run this, we already get your output (without date formatting):
2015-12-28 09:30:00
2015-12-28 18:15:00
2015-12-29 08:30:00
2015-12-29 17:15:00
2015-12-30 08:30:00
2015-12-30 17:15:00
Of course, you can also collect that first and last date in a list first to process the dates later:
filteredDates = []
for day, times in groupby(dates, lambda x: x.date()):
    first, *mid, last = times
    filteredDates.append(first)
    filteredDates.append(last)
And you can also output your dates with a different format using datetime.strftime:
for date in filteredDates:
    print(date.strftime('%Y-%m-%d %H:%M'))
That would give us the following output:
2015-12-28 09:30
2015-12-28 18:15
2015-12-29 08:30
2015-12-29 17:15
2015-12-30 08:30
2015-12-30 17:15
If you don’t want to go the route through parsing those dates, of course you could also do this simply by working on the strings. Since they are nicely formatted (i.e. they can be easily compared), you can do that as well. It would look like this then:
for day, times in groupby(list2, lambda x: x[:10]):
    first, *mid, last = times
    print(first)
    print(last)
Producing the following output:
2015-12-28A09:30
2015-12-28A18:15
2015-12-29A08:30
2015-12-29A17:15
2015-12-30A08:30
2015-12-30A17:15
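Since the desired output uses a space instead of the letter, you can strip it while printing; a tiny variation of the string-based loop above (a sketch):
for day, times in groupby(list2, lambda x: x[:10]):
    first, *mid, last = times
    print(first.replace("A", " "))
    print(last.replace("A", " "))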
Because your data is ordered, you just need to pull the first and last value from each group. You can replace the single letter with a space, then split each date string and compare only the date parts:
def grp(l):
    it = iter(l)
    prev = start = next(it).replace("A", " ")
    for dte in it:
        dte = dte.replace("A", " ")
        # if we reach a new date, yield the start and end of the previous one
        if dte.split(None, 1)[0] != prev.split(None, 1)[0]:
            yield start
            yield prev
            start = dte
        prev = dte
    yield start
    yield prev
l=["2015-12-28A09:30", "2015-12-28A09:30", .....................
l[:] = grp(l)
This could also be done as you process the file, without sorting, by using a dict to group:
from re import findall
from collections import defaultdict

with open("dates.txt") as f:
    od = defaultdict(lambda: {"min": "null", "max": ""})
    for line in f:
        for dte in findall(r'20\d\d-\d\d-\d\dA\d\d:\d\d', line):
            dte, tme = dte.split("A")
            _dte = "{} {}".format(dte, tme)
            if od[dte]["min"] > _dte:
                od[dte]["min"] = _dte
            if od[dte]["max"] < _dte:
                od[dte]["max"] = _dte

print(list(od.values()))
Which will give you the start and end time for each date.
[{'min': '2016-01-03 23:59', 'max': '2016-01-03 23:59'},
{'min': '2015-12-28 00:00', 'max': '2015-12-28 18:15'},
{'min': '2015-12-30 08:30', 'max': '2015-12-30 17:15'},
{'min': '2015-12-29 08:30', 'max': '2015-12-29 17:15'},
{'min': '2015-12-15 08:41', 'max': '2015-12-15 08:41'}]
The start for 2015-12-28 is also 00:00 not 9:30.
If your dates are actually one per line as posted, you don't need a regex either:
from collections import defaultdict

with open("dates.txt") as f:
    od = defaultdict(lambda: {"min": "null", "max": ""})
    for line in f:
        dte, tme = line.rstrip().split("A")
        _dte = "{} {}".format(dte, tme)
        if od[dte]["min"] > _dte:
            od[dte]["min"] = _dte
        if od[dte]["max"] < _dte:
            od[dte]["max"] = _dte

print(list(od.values()))
Which would give you the same output.
