Neo4j - index lookup issue

I was trying to change the index type from exact to fulltext in the Neo4j shell, so I could do a case-insensitive search with a Lucene query. So I used this command:
index --set-config Destination type fulltext
but it didn't work. I still couldn't do a case-insensitive search, so I played around and changed some other values, like _blueprints:type and to_lower_case.
That didn't do any good.
Now it somehow ignores the first character of the name value (weird!). So if I am searching for "London", for example, and I type "Lon", it returns nothing. But if I type "ond", it returns the node. The same goes for every node.
I tried setting everything back to normal. That didn't help.
What did I mess up? What am I missing?
I am using the Everyman PHP library to communicate with the database.

I created a new index with the "to_lower_case" property.
I think that will solve my problem; I just have to convert the string to lower case before inserting it into the query. It seems to work.

Setting configuration afterwards doesn't update already-indexed values (as the shell notes, I think). If you've created your index with "to_lower_case=true", then additions as well as queries will have their values converted to lower case. Calling Index#get will still require you to lower-case the value yourself.
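For reference, a minimal sketch of creating a fresh legacy index with lower-casing enabled through the old Neo4j 1.x REST endpoint (the index name "Destination" comes from the question; the URL and config keys assume a default local 1.x server):

import requests

# Index config can't be changed retroactively for already-indexed values,
# so create a new index with the desired settings and re-add the nodes.
payload = {
    "name": "Destination",
    "config": {
        "type": "fulltext",
        "provider": "lucene",
        "to_lower_case": "true",
    },
}
resp = requests.post("http://localhost:7474/db/data/index/node", json=payload)
print(resp.status_code)  # expect 201 Created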

Related

How do I make a WHERE clause with SQLAlchemy to compare to a string?

Objective
All I am trying to do is retrieve a single record from a specific table where the primary key matches. I have a feeling I'm greatly overcomplicating this, as it seems like a simple enough task. I have a theory that it may not know the variable's value because it isn't actually pulling it from the Python code, but is instead trying to find a variable by the same name in the database.
EDIT: Is it possible that I need to wrap my WHERE clause in an expression statement?
Attempted
My Python code is
def get_single_record(name_to_search):
    my_engine = super_secret_inhouse_engine_constructor("sample_data.csv")
    print("Searching for " + name_to_search)
    statement = my_engine.tables["Users"].select().where(my_engine.tables["Users"].c.Name == name_to_search)
    # Print out the raw SQL so we can see what exactly it's checking for
    print("You are about to run: " + str(statement))
    # Print out each result (should only be one)
    print("Results:")
    for item in my_engine.execute(statement):
        print(item)
I tried hard-coding a string in its place.
I tried using like instead of where.
All to the same end result.
Expected
I expect it to generate something along the lines of SELECT * FROM MyTable WHERE Name='Todd'.
Actual Result
Searching for Todd
STATEMENT: SELECT "Users"."Name", ...
FROM "Users"
WHERE "Users"."Name" = ?
That is an actual question mark appearing in my statement, not simply my own confusion. This is then followed by it printing out a collection of all the records from the table, as though it successfully matched everything.
EDIT 2: Running either my own hard-coded SQL string or the query generated by SQLAlchemy returns every record from the table. I'm beginning to think the issue may be with the engine I've set up not accepting the query.
Why I'm Confused
According to the official documentation and third-party sources, I should be able to compare to hard-coded strings and then, by proxy, be able to compare to a variable.
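As a side note on the question mark: that is almost certainly the DB-API bound-parameter placeholder, not a lost value; the string travels to the driver separately from the SQL text. A minimal sketch demonstrating this with plain SQLAlchemy Core and a throwaway in-memory SQLite table (a hypothetical stand-in, not the questioner's engine):

from sqlalchemy import Column, MetaData, String, Table, create_engine

engine = create_engine("sqlite://")  # throwaway in-memory database
metadata = MetaData()
users = Table("Users", metadata, Column("Name", String, primary_key=True))
metadata.create_all(engine)

stmt = users.select().where(users.c.Name == "Todd")

# Compiled against SQLite this prints ... WHERE "Users"."Name" = ?
# The '?' is the placeholder; 'Todd' is sent separately as a parameter.
print(stmt.compile(engine))

# Render the bound value inline (debugging only) to confirm what's compared:
print(stmt.compile(engine, compile_kwargs={"literal_binds": True}))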

Elastic search : Searching for integers with wildcards

I am currently using the Tire client for Elasticsearch. Let's say I have a field which is indexed as a field of type long in my Elasticsearch mapping.
I am trying to achieve something like this:
search.query {|query| query.string "30*", :fields => ['id']}
Here 'id' is the long field I was talking about. But since I specify the fields in the query, the wildcard doesn't work and I end up getting the exact match as the only result.
But doing the same thing works with the _all search, since the field type doesn't matter there. I want this wildcard search to work while also restricting the search key to that particular field. Is there any way to do this without changing my mapping?
I see the following solutions:
use a multi-field and index this also as a string type (but that requires a mapping change)
use ranges and translate this into something like the following (see the sketch after this list for generating the ranges):
(30 itself) or (from 300 to 309) or (from 3000 to 3099)
or (from 30000 to 30999) or ... (up to the max value)
use http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-script-filter.html and check this using scripting
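For the range option, a rough sketch (my own illustration, not from the answer above) of expanding a digit prefix such as "30" into the inclusive integer ranges you would OR together as range clauses:

def prefix_ranges(prefix, max_digits=19):
    # Every integer whose decimal form starts with `prefix`, grouped
    # into one inclusive (lo, hi) range per digit count.
    lo = hi = int(prefix)
    ranges = [(lo, hi)]                 # the prefix itself: 30..30
    for _ in range(max_digits - len(prefix)):
        lo, hi = lo * 10, hi * 10 + 9   # 300..309, then 3000..3099, ...
        ranges.append((lo, hi))
    return ranges

print(prefix_ranges("30", max_digits=5))
# [(30, 30), (300, 309), (3000, 3099), (30000, 30999)]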
Thanks to @alex for that scripting tip. Finally I found something which worked. Phew!
So I ended up doing this(briefly):
search.query do |query|
  query.filtered do |f|
    f.filter :script, {
      :script => "doc['id'].value.toString() ~= '^30[0-9]*$'"
    }
  end
end
Hope it helps.

Lucene syntax: what is the difference between AND and +

I am new to Lucene.
I'm using Lucene.NET version 2.9.4.
What is the difference between these queries?
the first is:
title:hello AND tags:word
the second is:
+title:hello +tags:word
I am testing some software, and I noticed that the first returns 3 records while the second returns many records.
I observe that the first returns records where the title and tags fields are both filled, but the second returns records where title and tags can be empty.
Is that the difference?
There is no difference between the two. clause1 AND clause2 is effectively shorthand for +clause1 +clause2
Similarly: clause1 clause2 = clause1 OR clause2
Note, there is really no equivalent for +clause1 clause2 using the boolean operators.
Are you sending the query over the Internet? If you are, and the request is not URL-encoded correctly, the '+' could be misinterpreted as an encoded space; Lucene would then just run the second query as if the +'s were not there, which would simply OR the two parts and give the results you are seeing:
title:hello tags:word
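To rule that out, percent-encode the query before putting it in a URL; a small sketch of the difference using only the Python standard library:

from urllib.parse import quote, unquote_plus

query = "+title:hello +tags:word"
print(quote(query, safe=""))   # %2Btitle%3Ahello%20%2Btags%3Aword

# If the raw string is sent unencoded, form decoding turns '+' into a
# space and the server sees the OR form of the query:
print(unquote_plus(query))     # " title:hello  tags:word"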

Solr sort by min of two fields?

I want to sort a result set by the minimum of several fields.
So after reading the FunctionQuery documentation, this is what I came up with:
sort={!func}min(dvd_available_from_tdt,dto_available_from_tdt)%20desc
I also tried:
sort=_val_:min(dvd_available_from_tdt,dto_available_from_tdt)%20desc
sort=_val_:"min(dvd_available_from_tdt,dto_available_from_tdt)"%20desc
sort=_val_:"min(dvd_available_from_tdt,dto_available_from_tdt)%20desc"
sort="{!func}min(dvd_available_from_tdt,dto_available_from_tdt)"%20desc
sort={!func}min(dvd_available_from_tdt,dto_available_from_tdt)%20desc
sort="min(dvd_available_from_tdt,dto_available_from_tdt)"%20desc
and also some other placements of the quotes. But no matter what, I always get this error:
HTTP ERROR: 400
Missing sort order.
Can anybody point me in the right direction?
Try using a query that matches all documents, with a constant score, plus a function.
http://localhost:8983/solr/select/?q=*%3A*+_val_:price&version=2.2&start=0&rows=10&indent=on&debugQuery=true
Also, upgrading to Solr 3.3 is not that painful, and there's all sorts of cool new toys like sorting by function.
It seems to be available only in Solr 3.1. I am running 1.4.1.
http://wiki.apache.org/solr/FunctionQuery#Sort_By_Function
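For anyone on Solr 3.1 or later, the sort can then be passed directly as a parameter; a sketch assuming a default local Solr instance, the field names from the question, and that your version's min() accepts two fields:

import requests

params = {
    "q": "*:*",
    # On 3.1+ no {!func} or _val_ wrapper is needed for sort-by-function:
    "sort": "min(dvd_available_from_tdt,dto_available_from_tdt) desc",
}
resp = requests.get("http://localhost:8983/solr/select", params=params)
print(resp.status_code)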

How to read Cassandra data without case sensitivity

I need to get the data from Cassandra without case sensitivity. Please help me.
There is no case-sensitivity concept in Cassandra. All the data is stored as byte[], so it's not even a String.
You can make a custom comparator (see the API) which transforms byte[] to String and disregards case.
The other thing to do is just get the data and transform it on the client side.
Actually, your question is quite unclear as to what your goal is, so I can't give more details.
Update: Run a one-time job that fetches all records from the db and updates them, setting the values to lower case. Then continue inserting everything in lower case.
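A rough sketch of that one-time job using the DataStax Python driver (the keyspace, table, and column names here are hypothetical stand-ins):

from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("my_keyspace")  # hypothetical keyspace

update = session.prepare("UPDATE users SET name = ? WHERE id = ?")
for row in session.execute("SELECT id, name FROM users"):
    # Rewrite only the rows that aren't already lower case
    if row.name and row.name != row.name.lower():
        session.execute(update, (row.name.lower(), row.id))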
This has been resolved if you have Solr enabled, using:
CREATE SEARCH INDEX ON tableName WITH COLUMNS *, camelCaseColumn { lowerCase : true };
An index is created that allows the SELECT statement to use lower case in the WHERE clause. For more details, search for LowerCaseStrField.
