In Kibana I am trying to pull up my application log messages that have masked fields.
Example log message:
***statusMessage=, displayMessage=, securityInfoOutput=securityPin=pin=****, pinHint=*************
I want to search for and pull up the messages that have masked data - more than two consecutive *'s in the message.
I tried the search term message:"pin=\*\*\*\*", but it didn't work.
You seem to be thinking of search the same way you'd use Ctrl+F to search within a file. Search engines don't work that way: search works on exact matches of tokens, and tokens typically correspond to words extracted from text.
You can control how text is transformed into tokens using a process known as analysis. Analysis runs text through tokenization and various filters that decide how the text is broken up into tokens and what other metadata is associated with each token.
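To make this concrete, here is a rough sketch in Python using requests. It assumes a node on localhost:9200, an index called my-logs, and that the message field got the default message.keyword sub-field - all of those names are assumptions you'd swap for your own setup:

import requests

ES = "http://localhost:9200"

# 1) Run the log line through the standard analyzer to see why the query finds
#    nothing: '=' and '*' are stripped during tokenization, so no token ever
#    contains "pin=****".
resp = requests.post(ES + "/_analyze", json={
    "analyzer": "standard",
    "text": "securityInfoOutput=securityPin=pin=****, pinHint=*************",
})
print([t["token"] for t in resp.json()["tokens"]])
# e.g. ['securityinfooutput', 'securitypin', 'pin', 'pinhint'] - the asterisks never become tokens

# 2) One workaround: query the unanalyzed keyword sub-field with a regexp that
#    looks for three or more consecutive literal asterisks.
query = {"query": {"regexp": {"message.keyword": ".*\\*\\*\\*.*"}}}
print(requests.post(ES + "/my-logs/_search", json=query).json())

A regexp with a leading .* like this is expensive on a large index, so treat it as an investigation query; a longer-term option is a custom analyzer or a flag field set at ingest time for masked messages.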
This blog post I wrote might help put some of this into context.
Related
Can I set the Google Places API to do a fuzzy search? It seems Google Maps search (which uses JavaScript) does that automatically, but it appears the REST API does not. I am frustrated by having to type in the exact hotel name; any spelling error brings up no results.
Try Text Search requests:
The Google Places API Text Search Service is a web service that returns information about a set of places based on a string — for example "pizza in New York" or "shoe stores near Ottawa" or "123 Main Street". The service responds with a list of places matching the text string and any location bias that has been set.
The service is especially useful for making ambiguous address queries in an automated system, and non-address components of the string may match businesses as well as addresses. Examples of ambiguous address queries are incomplete addresses, poorly formatted addresses, or a request that includes non-address components such as business names.
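A minimal sketch of such a request in Python (YOUR_API_KEY is a placeholder for your own Places API key; whether the matching is forgiving enough of the spelling errors you mention is worth testing against your own queries):

import requests

resp = requests.get(
    "https://maps.googleapis.com/maps/api/place/textsearch/json",
    params={
        "query": "hilton hotel new york",  # free-form text rather than an exact name lookup
        "key": "YOUR_API_KEY",
    },
)
for place in resp.json().get("results", []):
    print(place["name"], "-", place.get("formatted_address"))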
I have to build a search textbox in a web page, similar to the Facebook search box. On the client side there will be AJAX calls. Users need to search across around 300,000 elements, each of which has a description of a few words or an alphanumeric code. When the user enters the beginning of a word, a call is made to the server which returns the best matches based on the start of any word or code, but also suggests first the elements most recently used by the user, then those from the group the user belongs to, and finally those from the entire set. Results can be limited to 10-20 items.
How can I build a fast search by key where the value is just the description of the element? We use SQL Server, but any other DB would be OK.
The implementation at the time was too complex to summarise here, but I recently came across UI-select, which solves the front-end problem nicely and is a very good component if you are using Angular:
https://github.com/angular-ui/ui-select
For the backend you can use whatever you have (I did it with Redis).
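For what it's worth, a minimal sketch of the kind of prefix lookup Redis makes cheap, in Python with redis-py. The key name and sample data are made up, and the "recently used by this user / user's group / everyone" ranking from the question would be layered on top:

import redis

r = redis.Redis()

# Load all descriptions/codes once, lowercased, with score 0 so ZRANGEBYLEX
# orders purely lexicographically.
r.zadd("autocomplete", {"ergonomic chair": 0, "ergo keyboard": 0, "erg-1029 rowing machine": 0})

def suggest(prefix, limit=10):
    p = prefix.lower().encode()
    # '[' means an inclusive bound; appending 0xff makes an upper sentinel for "starts with p"
    return r.zrangebylex("autocomplete", b"[" + p, b"[" + p + b"\xff", start=0, num=limit)

print(suggest("ergo"))  # -> [b'ergo keyboard', b'ergonomic chair']

ZRANGEBYLEX only matches the start of the whole stored string, so to match "the beginning of any word or code" you would add one entry per word of each description, pointing back to the element's id.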
We have a website where users put up ads for stuff they want to sell, with parameters such as price, location, title and description. These can then be searched using Sphinx, allowing users to specify min and max price, a location with a search radius (using Google Maps), etc. Users can choose to save these searches and get emails when new ads appear that fit their search.
Herein lies the problem: we want to perform a reverse search every time an ad is posted. With the price, location, title and description as parameters, we want to search through all the saved "searches" and get the ones that would have found the ad. The min and max price should just be handled in a query, I suppose, along with some quorum syntax to get all ads with at least two (or maybe just one) occurrences in the title/description. Our problem lies mostly in the geo-search: how do we find all searches whose "search circles" would include our newly posted location, without performing a search for every saved search?
That is the main question; any comments on our suggested solutions to the other problems are also very welcome. Thank you in advance / Jenny
The standard geo-search support in Sphinx should work just as well on a prospective index as in a normal retrospective search.
Having built a Sphinx 'index' of all the saved searches...
...you run a query using the 'ad' as the search query. Rather than filtering with a fixed radius, you use the radius from an attribute (i.e. the radius stored on that particular saved search). If you're using the API you can't use setFilterRange directly; you need setSelect to make a new virtual attribute:
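// myfilter ends up 1 only when the ad's location falls inside this saved search's own stored radius attribute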
$cl->setSelect("*,IF(#geodist<radius,1,0) as myfilter");
$cl->setFilter('myfilter',array(1));
(And yes, the min/max price can just be done with normal filters too - just invert the logic relative to what you would use in a retrospective search.)
... the complication is in the full-text query, if the saved search is anything more than a single keyword, but you appear to have already figured out that part.
I have set up a search scope for the current members of a website (a "phone book" type of search). It is set up to automatically suggest limiting search results by people's job titles, adding something like "jobtitle:Manager" to the search query.
For single words ("Manager", "Supervisor", etc.) this works fine, but as soon as the title contains more than one word ("Managing Supervisor"), it returns zero search results. My gut feeling is that it's because when the URL is entered as jobtitle:Managing Supervisor, it limits results by jobtitle = Managing, and then Supervisor simply becomes a generic search term.
I tried testing with manually added quotation marks, jobtitle:"Supervising Manager", but they are removed when I land on the search page and the effect is the same.
Is there any way to allow limiting of search results by fields with multiple words?
This is running SharePoint 2007.
Did you try adding a + between the words?
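For example (untested here, so treat it as something to try rather than a confirmed fix): jobtitle:Managing+Supervisor typed directly into the box, or URL-encoding the quoted form in the results page query string, e.g. results.aspx?k=jobtitle%3A%22Managing+Supervisor%22, so the quotes survive the redirect to the search page.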
I'm storing information about local "events". They are described by three things: address, date, and keywords (tags). I want to have only one search box for at least the address and keywords. The date might go in a separate field. I'm assuming that most people will search for events taking place "today", so this filter won't get that much traffic.
I need those addresses to be correct (because I'm geocoding them afterwards), so I need to validate them before submitting the form and display a "did you mean" list if a user made a typo there. I can't do live search here. I can do a live search on keywords. Keep in mind that a user can make a typo there too, and I want to catch that.
Is there a clever way to design the input's parser in this case to guess which part is supposed to be the address and which the keywords?
OR
Is there a way to actually parse it as the user is entering the query? Maybe I should show autocomplete hints for keywords after the first 3 characters are entered, and if the user declines to use them, assume that it's part of an address they're typing.
What do you think?
Take a look at DocumentCloud's VisualSearch:
http://documentcloud.github.com/visualsearch/#demo
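If you end up rolling your own parsing instead of (or alongside) VisualSearch, a rough sketch of the "known keywords first, everything else is address" heuristic you describe could look like this in Python; KNOWN_TAGS and the tokenisation rules are made up for illustration:

# Treat any token that matches a known tag as a keyword; whatever is left over
# is assumed to be the address and goes to geocoding/validation.
KNOWN_TAGS = {"concert", "theatre", "jazz", "festival", "workshop"}

def split_query(raw):
    keywords, address_parts = [], []
    for token in raw.split():
        if token.lower().strip(",") in KNOWN_TAGS:
            keywords.append(token.lower().strip(","))
        else:
            address_parts.append(token)
    return keywords, " ".join(address_parts)

print(split_query("jazz concert 221B Baker Street, London"))
# -> (['jazz', 'concert'], '221B Baker Street, London')

This only covers the guessing half: catching typos in the keywords would still need fuzzy matching (e.g. edit distance) against the tag list, and the address part still goes through the geocoder's "did you mean" flow.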