Search in IMAPv4 - search

Is there a way to tell the IMAP server that it only has to find the first 20 responses that fit a given search criterion? For example, if I send the server the command (SEARCH SINCE 1-Feb-2010), it will return all messages since 1-Feb-2010. Is there a way to tell the server that it only has to return the first 20?
PS: the spec for IMAP https://www.rfc-editor.org/rfc/rfc2060.html#section-6.4.4

I don't think you can do it.
SEARCH returns only message numbers or UIDs, so it's pretty fast.
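Since SEARCH only hands back message numbers or UIDs, a practical workaround is to do the limiting on the client side: run the search, then fetch only the first 20 hits. A minimal sketch with Python's imaplib (the host and credentials are placeholders, and "first 20" here is taken to mean the 20 lowest-numbered, i.e. oldest, matches):

import imaplib

imap = imaplib.IMAP4_SSL('imap.example.com')      # placeholder host/credentials
imap.login('user', 'password')
imap.select('INBOX', readonly=True)

typ, data = imap.search(None, 'SINCE', '1-Feb-2010')
ids = data[0].decode().split()[:20]               # keep only the first 20 message numbers
for msg_id in ids:
    typ, msg = imap.fetch(msg_id, '(RFC822.HEADER)')
    print(msg[0][1][:60])                         # headers of each message, truncated

imap.logout()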

correct REST API for autosuggest on google?

I feel silly asking this, but it's doing my head in.
If I use 'https://maps.googleapis.com/maps/api/place/autocomplete/json' and set the input parameter to, say, 'Palazzo Cast', I get about 5 suggestions, none of which is the one I'm looking for. If I set input to 'Palazzo Castellania', I get zero results, even though there is a place called this (see below). I've set the region parameter to 'mt'.
If I use 'https://maps.googleapis.com/maps/api/place/findplacefromtext' and set the input parameter to 'Palazzo Castellania', I get 'the Ministry of Health', which is correct. However, if I put in a partial string, I get only a single candidate, which is something different; there doesn't seem to be a way to get multiple place candidates.
I'm guessing that on the API side I have to do a multi-step process, but it would be good to get some input.
My thoughts:
I start with 'https://maps.googleapis.com/maps/api/place/autocomplete/json'; if I get an empty result, I try 'https://maps.googleapis.com/maps/api/place/findplacefromtext' (see the sketch below).
If I get a single result from either, I can pass the placeID to the Places API to get more detailed data.
Make sense? It feels ugly.
Edit
So, watching how https://www.google.com.mt/ does it: while typing it uses suggest (and never gives the right answer, just like the API), and then when I hit enter it uses search and gives the correct answer, leading me to the conclusion that there are actually two databases involved!
Basically "it's by design"; there is no fix as of Feb 2023. My plan is to cache results and do a first search against that; otherwise I'll probably use Bing or HERE.

First Result Only Using Google-Search-API

I am using abenassi/Google-Search-API https://github.com/abenassi/Google-Search-API to make multiple Google queries in a small Python script. I typically only need the first result (link), but the program is built to collect whole pages of results. So far I have been limiting the results like this:
results = google.search(query)
for result in iter(results[0:1]):
    loc = result.link
The problem is that the script is slow, as a result (I think) of having to wade through the whole page before I get my one link. Does anyone see something obvious I'm missing, or alternatively, a simple way to modify the standard_search module https://github.com/abenassi/Google-Search-API/blob/master/google/modules/standard_search.py to limit results to the first link only? Thanks!
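One low-effort option is to wrap the existing call rather than patch the module. A hedged sketch, assuming (per the library's README) that the second argument to google.search is the number of result pages to fetch; fetching a single page and taking only its first parsed link is about as far as you can trim things without editing standard_search.py itself:

from google import google   # abenassi/Google-Search-API

def first_link(query):
    results = google.search(query, 1)   # assumption: 1 = fetch only the first page
    return results[0].link if results else None

loc = first_link('my query')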

Regular Expressions and SQL Server Error Logs - All false results

Ok, I have done my searching and I have tried many things. I think it is time to put my question here:
I have been working on taking in other users' SQL Server error logs, parsing out the rows into columns, then bulk inserting the data 1000 rows at a time. I troubleshoot SQL Server for other people, so sp_readerrorlog will only show me my local instance. Finding root cause involves four sets of logs (SQL Server, Application Event, System Event, and get-clusterlog outputs) and matching up timestamps. A fast load into SQL Server, along with the ability to pull the exact timeframe needed, will shorten my time spent staring at log files.
I am currently bottlenecked in testing the rows with a regular expression, which does work if I feed it data myself:
import re

def sqlrowmatch(row):
    # matches a timestamp such as 2018-10-13 22:40:09.41 anywhere in the row
    # (the dot before the fractional seconds is escaped so it matches a literal '.')
    pattern = re.compile(r'\d\d\d\d-\d\d-\d\d\s\d\d:\d\d:\d\d\.\d\d')
    if pattern.search(row):
        return True
    else:
        return False
Any string that matches the pattern above (e.g. 1111-11-11 11:11:11.11) returns True. The idea is that if this pattern is matched in a SQL Server error log, the line starts a separate entry; this allows memory graphs, deadlock graphs, and dumps to each be grouped into one entry instead of being split over several lines.
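A rough sketch of that grouping idea: start a new entry whenever a line matches the timestamp pattern, otherwise append the line to the current entry (group_entries is a hypothetical helper, not part of the original script):

def group_entries(lines):
    entries = []
    for line in lines:
        if sqlrowmatch(line) or not entries:
            entries.append([line])      # a timestamped line (or the very first line) starts a new entry
        else:
            entries[-1].append(line)    # continuation lines stay with the previous entry
    return entries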
However, if I point it at one of the SQL error logs, there seem to be extra characters, which give re.match and re.search a hard time finding a match. If I feed any line into this function, sqlrowmatch(), it reports back False for all rows.
ÿþ <-- these appear to be the first two characters of the first line. re.search just doesn't find a match anywhere in the different elements.
False is what is returned if I put the function inside the 'with open' statement:
with open(file, 'r') as sqllog:
    for line in sqllog:
        print(sqlrowmatch(line))
The first line should always return True if sqlrowmatch() is used:
2018-10-13 22:40:09.41 Server Microsoft SQL Server 2016 (SP2-CU2-GDR) (KB4458621) - 13.0.5201.2 (X64)
So I am lost and my current project is at a halt. Perhaps some seasoned insight from this group can get me going again.
TIA
Interestingly enough, I found my answer here: Opening huge text file, unicode issue
open should be done with encoding='utf-16'
It now matches appropriately
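For reference, the corrected read loop; the ÿþ prefix is the UTF-16 little-endian byte order mark, which is why the default encoding was mangling every line:

with open(file, 'r', encoding='utf-16') as sqllog:
    for line in sqllog:
        print(sqlrowmatch(line))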

tailLines and sinceTime in logging API, both not working simultaneously

I am using Container Engine, and my pods are hosted there.
I am trying to fetch logs using the log API:
http://localhost:8000/api/v1/namespaces/app-test/pods/designer-0/log?tailLines=100&sinceTime=2017-09-17T10:47:58Z
If I use either query param separately, it works and shows the proper result, but if I use them simultaneously only the top 100 log lines are returned; the sinceTime param gets ignored.
My scenario is that I need the logs from a specific time onwards, in chunks of 100 lines at a time.
I am not sure whether it is a bug or simply not implemented.
I found this in the API reference manual:
https://kubernetes.io/docs/api-reference/v1.6/
tailLines - If set, the number of lines from the end of the logs to
show. If not specified, logs are shown from the creation of the
container or sinceSeconds or sinceTime
So that means if you specify tailLines, it starts from the end. I don't see any option explicitly mentioned other than limitBytes, but you will have to play around with it, as it does not guarantee a number of lines.
tailLines=X tells the server to start that many lines from the end.
sinceTime tells the server to start from the specified time.
The options are mutually exclusive.
Thanks all,
I have since realized that it is not ignoring sinceTime; tailLines's intended functionality is to return lines from the end.
So if I set sinceTime to 10 PM yesterday, it returns the records from that time, and if tailLines is also given, it returns the most recent lines from that window.
So it was working as expected. I need to play with limitBytes to get the logs in chunks from that time, instead of the full logs.
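A rough sketch of that chunked approach: page through the log with limitBytes and request timestamps=true so the next request's sinceTime can be taken from the last line returned. The URL and pod name come from the question, the chunk size is arbitrary, and lines at a chunk boundary may be returned twice:

import requests

BASE = "http://localhost:8000/api/v1/namespaces/app-test/pods/designer-0/log"
CHUNK = 16384   # bytes per request; limitBytes does not guarantee a line count

since = "2017-09-17T10:47:58Z"
while True:
    text = requests.get(BASE, params={
        "sinceTime": since,
        "limitBytes": CHUNK,
        "timestamps": "true",   # prefix each line with an RFC3339 timestamp
    }).text
    lines = text.splitlines()
    if not lines:
        break
    for line in lines:
        print(line)
    if len(text.encode("utf-8")) < CHUNK:
        break   # short read: reached the current end of the log
    # next window starts at the timestamp of the last (possibly partial) line
    since = lines[-1].split(" ", 1)[0]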

Reading emails with groovy (Java Mail)

I am using Groovy to access Gmail and read the inbox. It is regular JavaMail, and I will not describe it here.
So for simplicity, after I connect to the store, I use this:
folder.open(Folder.READ_ONLY)
folder.messages.each { msg ->
    ...
    doSomething with msg
    ...
}
This is working fine.
However, I have a performance issue: sometimes messages[] can be big. Some folders contain more than 1000 messages, and checking them all takes time.
I am looking for a quicker way to get only the most recent emails (for example, messages from the last 5 days or something like that).
Of course I have the date information in each msg and could do the comparison myself, but this is not efficient since it loops through the entire collection.
Is there a better way to get those messages?
If you have JavaMail issue a SEARCH command with the criterion SINCE 04-JAN-2011, you'll get back the set of messages in the currently selected folder delivered since January 4th. (SENTSINCE 04-JAN-2011 will do the same thing, only based on the "Date" message header.)
Something along the lines of this:
folder.search(new ReceivedDateTerm(ComparisonTerm.GE, sinceDate));
