Sphinx search max_matches error

I want to configure Sphinx search so that I can keep as many records as I want in memory.
I am getting the following error:
searchd error (status: 1): per-query max_matches=25000 out of bounds (per-server max_matches=1000)

In your case I suggest setting max_matches to 100000 on the server side.
Even if you need more, you can always use LIMIT N,M to fetch a slice of the result set without going out of bounds.
In my experience humans don't go past 10-20 pages of search results, so 100K should be more than enough.
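A minimal sketch of the server-side change, assuming a Sphinx version whose searchd section still enforces a per-server ceiling (the error message above indicates yours does); the index name myindex and the keyword are placeholders:

    searchd
    {
        # raise the per-server ceiling; the default of 1000
        # is what triggers the "out of bounds" error above
        max_matches = 100000
    }

After restarting searchd, the per-query value (SetLimits() in the native API, or OPTION max_matches if your version speaks SphinxQL) can go up to that ceiling:

    SELECT * FROM myindex WHERE MATCH('keyword')
    LIMIT 0, 20 OPTION max_matches = 25000;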

Related

How to retrieve all results from NearBySearch on Azure?

I am using NearBySearch from Microsoft Azure. The official documentation says that when you make a query, the totalResults the API can return is X. However, you can also read that there is a limit on the number of items returned, which is at most 100.
When totalResults >= limit == 100, the API displays only the first 100 results, not the remaining ones.
Question: Can you suggest a way to retrieve the additional results using the NearBySearch function?
Note: The Google NearBySearch API has a parameter called next_page_token, which allows you to page through all the possible results. Is there something similar in Azure?
You have a limit of 100 results per query. If you have 150 totalResults, you can execute the query with ofs=0 and limit=100 to get the first 100 entries. Then execute a second query with ofs=100 (it works like an index) and limit=100; that returns the next batch, and because only 50 results are left, numResults will be 50.
I hope that is understandable.
Would you be able to suggest a way to retrieve the additional results using the NearBySearch function?
Looking at the documentation, I noticed that there is an offset parameter (ofs) which defaults to zero. You should be able to use it to get the next set of results whenever totalResults exceeds the limit you specified.
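A minimal paging sketch in Node.js. The endpoint shape, the subscription-key parameter, and the summary.totalResults field are assumptions based on the documentation discussed above, not something confirmed in this thread:

    const BASE = 'https://atlas.microsoft.com/search/nearby/json';

    async function fetchAllNearby(lat, lon, key) {
      const limit = 100;   // per-request cap from the docs
      let ofs = 0;
      const all = [];
      while (true) {
        const url = `${BASE}?api-version=1.0&lat=${lat}&lon=${lon}` +
                    `&limit=${limit}&ofs=${ofs}&subscription-key=${key}`;
        const body = await (await fetch(url)).json();
        all.push(...body.results);
        ofs += limit;
        // Stop once we've paged past everything the service reports.
        if (body.results.length === 0 || ofs >= body.summary.totalResults) break;
      }
      return all;
    }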

TableStorage queryEntities sometimes returning 0 entries but no error

TableStorage & Node.js
Using the function "queryEntities", result.entries.length is sometimes 0, even when I am pretty sure there are a lot of entries in the database. The "where" parameters are fine, but sometimes (maybe one time in 100) it returns 0 entries, with no error returned. Just 0 entries.
That's causing trouble in my function.
My theory is that the database is sometimes saturated, because this function executes every 10 seconds, and maybe one run starts before the previous one finishes and both operate on the same table; instead of an error it returns a length of 0, which is awful.
Is there any way to resolve this? Shouldn't it return an error?
This is expected behavior. In this particular scenario, check for continuation tokens in the response. Their presence indicates that there may be more entities matching the query, and you should execute the same query again with the continuation token you received.
Please read this document for an explanation: https://learn.microsoft.com/en-us/rest/api/storageservices/query-timeout-and-pagination.
From this link:
A query against the Table service may return a maximum of 1,000 items at one time and may execute for a maximum of five seconds. If the result set contains more than 1,000 items, if the query did not complete within five seconds, or if the query crosses the partition boundary, the response includes headers which provide the developer with continuation tokens to use in order to resume the query at the next item in the result set.
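A minimal sketch using the classic azure-storage Node package: keep re-issuing queryEntities with the returned continuationToken until the service stops handing one back (the table name and query are whatever you already use):

    var azure = require('azure-storage');
    var tableService = azure.createTableService();

    function queryAll(tableName, query, callback) {
      var entries = [];
      function page(token) {
        tableService.queryEntities(tableName, query, token, function (err, result) {
          if (err) { return callback(err); }
          entries = entries.concat(result.entries);
          if (result.continuationToken) {
            // Even a zero-length page with a token means "keep going".
            page(result.continuationToken);
          } else {
            callback(null, entries);
          }
        });
      }
      page(null); // the first page has no token
    }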

BigQuery API intermittently returns HTTP error 400 "Bad Request"

I am intermittently getting HTTP error 400 for a particular query, yet when I examine the text of the query it appears to be correct, and if I copy the query into the BigQuery GUI and run it, it executes without any problems. The query is constructed in node.js and submitted through the gcloud node.js API. The response I receive, which contains the text of the query, is too large to post here, but I do have the path name:
"pathname":"/bigquery/v2/projects/rising-ocean-426/queries/job_aSR9OCO4U_P51gYZ2xdRb145YEA"
The error seems to occur only if the live_seconds_viewed calculations are included in the query. If any part of the live_seconds_viewed calculation is included, the query fails intermittently.
The initial calculation of this field is:
CASE
  WHEN event = 'video_engagement'
       AND range IS NULL
       AND INTEGER(video_seconds_viewed) > 0
  THEN 10
  ELSE 0
END AS live_seconds_viewed,
Sometimes I can get the query to execute simply by changing the order of the expressions. But again, it is intermittent.
Any help with this would be greatly appreciated.
After long and arduous trial and error, I've determined that the query is failing simply because the query string is too long. When the query is executed from the GUI, the whitespace is apparently stripped, so the query is short enough to pass the size limit.
When I manipulated the query to figure out which parts were causing the problem, I would inadvertently shrink it below the critical limit and the query would pass.
It would be great if the error response from BigQuery included some hint about what the problem is, rather than firing off a 400 Bad Request and calling it quits.
It would be even better if the BigQuery parser ignored whitespace when determining the size of the query; the behavior in the GUI would then match the behavior when submitting through the API.
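A small workaround sketch that mirrors what the GUI apparently does: collapse runs of whitespace before submitting. This uses the current @google-cloud/bigquery client rather than the older gcloud package from the post, and useLegacySql: true matches the legacy-SQL INTEGER() call in the snippet above:

    const {BigQuery} = require('@google-cloud/bigquery');
    const bigquery = new BigQuery();

    async function runCompact(sql) {
      // Replace every run of spaces, tabs, and newlines with a single space.
      const compact = sql.replace(/\s+/g, ' ').trim();
      const [rows] = await bigquery.query({query: compact, useLegacySql: true});
      return rows;
    }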

Bugzilla search result is too long

Suppose I want to search for bugs reported in the last 2 years. The initial result page says "This result was limited to 500 bugs".
Apparently there are more than 500 bugs, so I click "See all search results for this query". This time it shows 10000 bugs, but with a message saying "This list is too long for Bugzilla's little mind; the Next/Prev/First/Last buttons won't appear on individual bugs".
So my question is:
How do I know the exact number of bugs returned by my query? (It's unlikely to be exactly 10000.)
How do I view the entire search results? Currently it seems that if the results exceed 10000 they are truncated, and I can't find any prev/next page buttons to navigate the results pages.
You may not see all the bugs because of the configuration your administrator set on your Bugzilla instance.
However, using the search function of the Bugzilla webservice you can retrieve the list of bugs. If the number of bugs returned by one query is capped, iterate on the search query with an increasing offset. Here is some pseudocode:
offset = 0
limit = 5000
currentcount = ws.search(criterias, offset, limit).count
while currentcount == limit
{
    offset += limit
    currentcount = ws.search(criterias, offset, limit).count
}
totalbugs = currentcount + offset
The same algorithm works if you want the whole list of bugs instead of just the count.
If the idea of sending multiple queries to the webservice doesn't feel right, you may have to talk to the admin to find out what hard limits are set on your install of Bugzilla and see how you can tweak them to get the results you need.
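A runnable version of the pseudocode, assuming a Bugzilla 5.x instance whose REST API accepts limit and offset on /rest/bug; the host and the criteria query string are placeholders:

    const BASE = 'https://bugzilla.example.com/rest/bug';

    async function pageCount(criteria, offset, limit) {
      const res = await fetch(`${BASE}?${criteria}&limit=${limit}&offset=${offset}`);
      return (await res.json()).bugs.length;
    }

    async function countBugs(criteria) {
      const limit = 5000;
      let offset = 0;
      let current = await pageCount(criteria, offset, limit);
      while (current === limit) {   // a full page means there may be more
        offset += limit;
        current = await pageCount(criteria, offset, limit);
      }
      return offset + current;
    }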

RESTlet scripts for fetching items from a saved search in NetSuite?

There is no limit on code length; however long the code is doesn't matter.
I want to do this because I got the task from my company and I have to write a script for it; any idea regarding this is welcome.
nlapiSearchRecord(type, id, filters, columns)
Note: this API returns 1000 results at a time, so if the saved search has more than 1000 results you need to run the sorted search in a loop and concatenate with the results of the previous search.
Or nlapiLoadSearch(type, id)
Note: this returns 4000 results at a time.
These APIs let you fetch the results of a saved search.
Regarding the API limit, a RESTlet allows 5000 usage units, which is enough for most purposes. If you exceed this limit you can use the API
nlapiYieldScript();
which creates a resume point; the script resumes from that point.
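A minimal SuiteScript 1.0 sketch of the loop-and-concatenate pattern, using nlapiLoadSearch / runSearch / getResults to pull a saved search in 1000-row slices (the record type and saved-search id are placeholders):

    function getAllResults(recordType, savedSearchId) {
      var search = nlapiLoadSearch(recordType, savedSearchId);
      var resultSet = search.runSearch();
      var allResults = [];
      var start = 0;
      var slice;
      do {
        // getResults returns at most 1000 rows per call.
        slice = resultSet.getResults(start, start + 1000) || [];
        allResults = allResults.concat(slice);
        start += 1000;
      } while (slice.length === 1000); // a short slice means we're done
      return allResults;
    }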
If any further clarification is needed, please ask.
Cheers!
