Which Neo4j indexing method is better for search?

According to the Neo4j documentation, indexing can be done in two ways:
"Indexing in Neo4j can be done in two different ways:
1. The database itself is a natural index consisting of its relationships of different types between nodes. For example, a tree structure can be layered on top of the data and used for index lookups performed by a traverser.
2. Separate index engines can be used, with Apache Lucene being the default backend included with Neo4j."
But there is no comparison of which approach is better in which cases.
Which one is better, and why?

Is this a data warehouse/mart or reporting database? If you have both transactions and search going against the same database, that might raise interesting pros and cons.
Lucene exists for one reason, search, and it does it really well. If you have a large system with multiple services, for ultimate scalability it is always best to split the services up and keep each one to its single responsibility. That would give you the flexibility to use the Lucene index from other services if necessary. Also, if you ever got rid of Neo4j, you would still have your index/search artifacts around, not coupled to Neo4j.
I would look at it from the overall system architecture, not just the specific functionality.
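For concreteness, here is a minimal sketch of both approaches using the Neo4j Java driver (4.x). The category/product data model, the connection details, and the index name are invented for illustration; the schema index in the second query stands in for the separate index engine the documentation mentions.

```java
// Both indexing approaches side by side, via the Neo4j Java driver (4.x).
// Labels, properties, and credentials are invented for illustration.
import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;
import org.neo4j.driver.Result;
import org.neo4j.driver.Session;
import org.neo4j.driver.Values;

public class Neo4jIndexSketch {
    public static void main(String[] args) {
        Driver driver = GraphDatabase.driver("bolt://localhost:7687",
                AuthTokens.basic("neo4j", "password"));
        try (Session session = driver.session()) {
            // Approach 1: the graph itself as a "natural index". A category
            // tree is layered over the products, and a traversal from the
            // tree root plays the role of the index lookup.
            Result byTraversal = session.run(
                    "MATCH (root:Category {name: $cat})-[:SUBCATEGORY*0..]->()" +
                    "<-[:IN_CATEGORY]-(p:Product) RETURN p.name AS name",
                    Values.parameters("cat", "Electronics"));
            byTraversal.forEachRemaining(r -> System.out.println(r.get("name").asString()));

            // Approach 2: a separate index engine. A schema index stands in
            // here for the Lucene-backed index the documentation mentions.
            session.run("CREATE INDEX product_name IF NOT EXISTS FOR (p:Product) ON (p.name)");
            Result byIndex = session.run(
                    "MATCH (p:Product {name: $name}) RETURN p.name AS name",
                    Values.parameters("name", "Phone"));
            byIndex.forEachRemaining(r -> System.out.println(r.get("name").asString()));
        }
        driver.close();
    }
}
```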

Related

Is Cassandra just a storage engine?

I've been evaluating Cassandra to replace MySQL in our microservices environment, due to MySQL being the only portion of the infrastructure that is not distributed. Our needs are both write- and read-intensive, as it's a platform for exchanging raw data, a type of "bus" for lack of a better description. Our selects are fairly simple and should remain that way, but I'm already struggling to get past some basic filtering due to the extreme limitations of select queries.
For example, if I need to filter data, the filter fields have to be part of the key, and at that point I can't change data in those fields because they're part of the key. I can use a SASI index, but then I hit a wall if I need to filter by more than one field. The hope was that materialized views would help with this, but in another post I was told to avoid them due to some instability and problematic behavior.
It would seem that Cassandra is good at storage but, realistically, not good as a standalone database platform for non-trivial applications beyond very basic filtering (i.e. on a single field). I'm guessing I'll have to accept the use of another front end like Elastic, Solr, etc. The other option might be to accept the idea of filtering data within application logic, which is doable as long as the data sets coming back remain small enough.
Apache Cassandra is far more than just a storage engine. It is designed as a distributed database oriented toward high availability and partition tolerance, which can limit query capability if you want good, reliable performance.
It has a query engine, CQL, which is quite powerful, but it is deliberately limited in ways that guide the user toward effective queries. To use it effectively, you need to model your tables around your queries.
More often than not, you need to query your data in multiple ways, so users will often denormalize their data into multiple tables, as sketched below. Materialized views aim to make that experience better, but they have had their share of bugs and limitations, as you indicated. If you consider using them, you should at this point be aware of their limitations, although that is generally a good idea when evaluating anything.
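To make the query-first point concrete, here is a minimal sketch using the DataStax Java driver (4.x). The users_by_id/users_by_email schema and the keyspace name are invented for illustration.

```java
// Query-first modeling: the same user record is denormalized into two
// tables, one per query path. Uses the DataStax Java driver (4.x);
// keyspace and schema are invented for illustration.
import java.util.UUID;
import com.datastax.oss.driver.api.core.CqlSession;

public class QueryFirstModeling {
    public static void main(String[] args) {
        try (CqlSession session = CqlSession.builder().withKeyspace("app").build()) {
            // One table per query: lookup by id, and lookup by email.
            session.execute("CREATE TABLE IF NOT EXISTS users_by_id ("
                    + "id uuid PRIMARY KEY, email text, name text)");
            session.execute("CREATE TABLE IF NOT EXISTS users_by_email ("
                    + "email text PRIMARY KEY, id uuid, name text)");

            // Writes go to both tables; duplicated storage is the price
            // paid for reads that each hit a single partition.
            UUID id = UUID.randomUUID();
            session.execute("INSERT INTO users_by_id (id, email, name) VALUES (?, ?, ?)",
                    id, "ada@example.com", "Ada");
            session.execute("INSERT INTO users_by_email (email, id, name) VALUES (?, ?, ?)",
                    "ada@example.com", id, "Ada");

            // Each read uses the table whose partition key matches the query,
            // so no secondary index and no ALLOW FILTERING are needed.
            String name = session.execute(
                    "SELECT name FROM users_by_email WHERE email = ?",
                    "ada@example.com").one().getString("name");
            System.out.println(name);
        }
    }
}
```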
If you need advanced querying capabilities, or do not know ahead of time what the queries will be, Cassandra may not be a good fit. You can build these capabilities using products like Spark and Solr on top of Cassandra (as DataStax Enterprise does), but it may be difficult to achieve using Cassandra alone.
On the other hand there are many use cases where Cassandra is a great fit, such as messaging, personalization, sensor data, and so on.

Yii2: How should site-wide search work?

What is the best-practice methodology for implementing site-wide search in Yii2?
This question is not about how to implement search specifically, but rather about what kind of approach to use. Should we use Sphinx? Elasticsearch? Or do we use UNION selects to get the data into a DataProvider?
Assume the application is using a relational database to store data. We want to search and display multiple different models. For example, our database contains tables of Books, Authors and Stores. When we search for a keyword, we want to display results from all three tables (matching Books by title or content, Authors by full name, Stores by name, etc.).
There are tutorials which show how to use Elasticsearch, but they assume that our data is stored in the Elasticsearch database, which does not make sense. Our data is already stored in MySQL or PostgreSQL. Does this mean we need to maintain a duplicate of our data in the Elasticsearch database?
What is the best-practice methodology for implementing site-wide search in Yii2?
That depends on many factors, so I can't give you a specific recommendation for your case. Some of the factors to think about are:
What would you like to achieve with this search? Is every little bit in your database a significant search term?
Do you need only full-text-search or a wide range of analytics?
Do you have any limits on time or cost?
Can your (tech-)infrastructure handle your ideas?
Is it worth bringing another substantial technology into the project?
Can you handle additional maintenance tasks to run such a search engine?
And many more ...
In my internal Yii2 project with a PostgreSQL RDBMS, I decided to use PostgreSQL's text search type, tsvector. That's good enough for my needs; see the sketch after this list. Why?
Supports stemming.
Supports fuzzy search.
Supports basic ranking.
Supports multiple languages.
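A minimal sketch of that setup over plain JDBC follows. The books table, its columns, and the connection details are invented, and the generated-column syntax assumes PostgreSQL 12 or later.

```java
// Full-text search with tsvector over plain JDBC. Table, columns, and
// connection details are invented; generated columns need PostgreSQL 12+.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class TsvectorSearch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/app", "app", "secret")) {
            // A stored tsvector column plus a GIN index turns the full-text
            // lookup into an index scan instead of a sequential scan.
            try (Statement st = conn.createStatement()) {
                st.execute("ALTER TABLE books ADD COLUMN IF NOT EXISTS search_vec tsvector "
                        + "GENERATED ALWAYS AS (to_tsvector('english', title || ' ' || body)) STORED");
                st.execute("CREATE INDEX IF NOT EXISTS books_search_idx "
                        + "ON books USING GIN (search_vec)");
            }
            // websearch_to_tsquery parses (and stems) free-form user input;
            // ts_rank orders the matches by relevance.
            String sql = "SELECT title, ts_rank(search_vec, q) AS rank "
                    + "FROM books, websearch_to_tsquery('english', ?) AS q "
                    + "WHERE search_vec @@ q ORDER BY rank DESC LIMIT 10";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, "distributed search");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("title") + "  " + rs.getFloat("rank"));
                    }
                }
            }
        }
    }
}
```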
I highly recommend the blog post "Postgres full-text search is Good Enough".

How do I run geospatial queries at scale with NoSQL?

I am preparing to build an Android/iOS app that will require me to make complex polygon and containment geospatial queries. I like Apache Cassandra's lack of a single point of failure, its fault tolerance, and its data-center awareness. Cassandra does not have direct support for geospatial queries (that I am aware of), but MongoDB and Couchbase Server do. MongoDB has scaling issues, and I'm not sure whether Couchbase would be a better alternative than Cassandra with Solr or Elasticsearch.
Would I be making a mistake by going with Datastax Enterprise (DSE), Cassandra and Elasticsearch over Couchbase Server? Will there be a noticeable difference in load times for web pages with the Cassandra/ES back end vs. Couchbase?
Aerospike just released Server Community Edition 3.7.0, which includes Geospatial Indexes as a feature.
Aerospike can now store GeoJSON objects and execute various queries, allowing an application to track rapidly changing Geospatial objects or simply ask the question of “what’s near me”. Internally, we use Google’s S2 library and Geo Hashing to encode and index these points and regions. The following types of queries are supported:
Points within a Region
Points within a Radius
Regions a Point is in
This can be combined with a User-Defined Function (UDF) to filter the results – i.e., to further refine the results to only include Bars, Restaurants or Places of Worship near you – even ones that are currently open or have availability. Additionally, finding the Region a point is in allows, for example, an advertiser to figure out campaign regions that the mobile user is in – and therefore place a geospatially targeted advertisement. Internally, the same storage mechanisms are used, which enables highly concurrent reads and writes to the Geospatial data or other data held on the record. Geospatial data is a lot of fun to play around with, so we have included a set of examples based on Open Street Map and Yelp Dataset Challenge data.
Geospatial is an Experimental feature in the 3.7.0 release. It’s meant for developers to try out and provide feedback. We think the APIs are good, but in an experimental feature, based on the feedback from the community, Aerospike may choose to modify these APIs by the time this feature is GA. It’s not intended for Production usage right now (though we know some developers will go directly to Production ...)
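For illustration, here is a hedged sketch of a points-within-a-radius query using the Aerospike Java client. The namespace, set, and bin names are invented, a geo secondary index on the bin is assumed to exist, and since the feature was experimental at the time, the Filter API may differ between client versions.

```java
// Points-within-a-radius with the Aerospike Java client. Namespace, set,
// and bin names are invented; a geo secondary index on the "loc" bin is
// assumed; the Filter API may differ between client versions.
import com.aerospike.client.AerospikeClient;
import com.aerospike.client.query.Filter;
import com.aerospike.client.query.RecordSet;
import com.aerospike.client.query.Statement;

public class AerospikeGeoSketch {
    public static void main(String[] args) {
        AerospikeClient client = new AerospikeClient("127.0.0.1", 3000);
        try {
            Statement stmt = new Statement();
            stmt.setNamespace("test");
            stmt.setSetName("places");
            // "loc" holds GeoJSON points; arguments are longitude first,
            // then latitude, then the radius in meters.
            stmt.setFilter(Filter.geoWithinRadius("loc", -122.0, 37.5, 1000.0));
            try (RecordSet rs = client.query(null, stmt)) {
                while (rs.next()) {
                    System.out.println(rs.getRecord().bins);
                }
            }
        } finally {
            client.close();
        }
    }
}
```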
Aerospike provides a proven, highly scalable NoSQL solution. Geospatial query support has recently been added, and an Early Adopter release has just been announced. You might want to check that out.
Redis is probably one of the best alternatives. At the current time you would need to use the unstable Redis 3.2 release. The performance is outstanding; I have been using this with the Lettuce Java client and have seen incredible results. Note, though, that a larger radius will decrease performance.
http://redis.io/commands/geohash
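For example, here is a minimal sketch of the Redis GEO commands through the Jedis client (3.x assumed; the GeoUnit import moved packages in later versions). The key and member names are invented.

```java
// Redis GEO commands via Jedis (3.x; GeoUnit moved packages in 4.x).
// Key and member names are invented. Requires Redis >= 3.2.
import redis.clients.jedis.GeoUnit;
import redis.clients.jedis.Jedis;

public class RedisGeoSketch {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // GEOADD stores members in a sorted set whose scores encode
            // a geohash of the coordinates (longitude first).
            jedis.geoadd("bars", 13.361389, 38.115556, "bar-one");
            jedis.geoadd("bars", 15.087269, 37.502669, "bar-two");
            // GEORADIUS: every member within 200 km of the query point.
            jedis.georadius("bars", 15.0, 37.0, 200.0, GeoUnit.KM)
                 .forEach(r -> System.out.println(r.getMemberByString()));
        }
    }
}
```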
You are asking quite a few questions, as has been pointed out. The provided link offers one potential answer to how generic geospatial operations could be implemented using Cassandra. I'll offer one possible answer using straightforward out-of-the-box Cassandra constructs.
Using geohashes (or quad trees), or something similar, create an index of geohashes and their associated polygons. The specific relationship and level(s) of precision are dependent on your data set and use case.
To determine which polygons intersect with a given point or polygon, first compute its geohash(es), then look those geohashes up in the index. For general proximity, this may be sufficient. Either way, this narrows the potential intersection points down to a manageable set.
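As an illustration of that table layout, here is a sketch using the DataStax Java driver together with the ch.hsr.geohash library (an assumption; any geohash implementation would do). The schema, keyspace, and precision are invented, and the exact point-in-polygon test on the returned candidates is left to client-side code.

```java
// The geohash index described above: each geohash cell lists the polygons
// overlapping it. Uses the DataStax Java driver (4.x) and ch.hsr.geohash
// (an assumption); keyspace, schema, and precision are invented.
import ch.hsr.geohash.GeoHash;
import com.datastax.oss.driver.api.core.CqlSession;

public class GeohashIndexSketch {
    public static void main(String[] args) {
        try (CqlSession session = CqlSession.builder().withKeyspace("geo").build()) {
            // Partition key = geohash cell. Precision 5 gives roughly
            // 5 km cells; tune to your data set and use case.
            session.execute("CREATE TABLE IF NOT EXISTS polygons_by_cell ("
                    + "cell text, polygon_id uuid, polygon_wkt text, "
                    + "PRIMARY KEY (cell, polygon_id))");

            // Lookup: hash the query point, fetch that cell's polygons.
            // This yields a small candidate set; the exact point-in-polygon
            // test then runs client-side against polygon_wkt.
            String cell = GeoHash.withCharacterPrecision(37.5, -122.0, 5).toBase32();
            session.execute("SELECT polygon_id, polygon_wkt FROM polygons_by_cell WHERE cell = ?", cell)
                   .forEach(row -> System.out.println(row.getUuid("polygon_id")));
        }
    }
}
```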

Multiple indexers on same storage location in Lucene

I want to build a highly scalable application where I intend to use Lucene as my search engine library. While browsing through the docs and FAQs, I realized that Lucene allows only one IndexWriter to be open on a storage location, which it enforces by creating a write.lock file in the index directory. We can, however, open multiple IndexReaders on that index.
I am interested in building an architecture where a number of indexers run on different machines/servers and multiple searchers answer various types of queries on the indexes created by these indexers. Both searchers and indexers will be running on different computers.
In such a scenario it would be preferable to have multiple indexers use the same index storage location to index the documents. How can I achieve this? Should I go with something like NFS (Network File System)? Has this issue been taken care of by Solr or some other framework on top of Lucene? One obvious solution which comes to mind is to create one index per indexer and then ask the searchers to query across multiple index directories. But this leads to a large number of index directories being created, as many as there are indexer servers, which I guess isn't desirable. I want (# of index dirs) << (# of indexers) < (# of searchers).
What alternatives do I have in this case?
First of all: never use NFS with Lucene; it's simply slow and risky.
When it comes to scalability and high availability, I'd suggest just letting Elasticsearch do all the hard work for you, so that you can concentrate on your data. You can of course have multiple threads indexing data.
If you want to know more about the distributed nature of Elasticsearch, I'd suggest having a look at this video.
Take a look at Elasticsearch and SolrCloud.
Comparison of Elasticsearch and Solr.
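For completeness, the one-index-per-indexer layout from the question is directly supported by stock Lucene: each indexer writes its own directory, and a searcher spans all of them with a MultiReader. A minimal sketch, with paths and the field name invented:

```java
// One index directory per indexer, one logical view for the searcher:
// MultiReader spans the separate indexes. Paths and the field name are
// invented for illustration.
import java.nio.file.Paths;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.MultiReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.FSDirectory;

public class MultiIndexSearch {
    public static void main(String[] args) throws Exception {
        // One sub-reader per indexer's directory.
        IndexReader r1 = DirectoryReader.open(FSDirectory.open(Paths.get("/idx/indexer-1")));
        IndexReader r2 = DirectoryReader.open(FSDirectory.open(Paths.get("/idx/indexer-2")));
        // MultiReader presents the separate indexes as one logical index;
        // closing it closes the sub-readers as well.
        try (MultiReader all = new MultiReader(r1, r2)) {
            IndexSearcher searcher = new IndexSearcher(all);
            TopDocs hits = searcher.search(new TermQuery(new Term("body", "lucene")), 10);
            System.out.println("hits across both indexes: " + hits.totalHits);
        }
    }
}
```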

Using Lucene to index private data, should I have a separate index for each user or a single index?

I am developing an Azure-based website and I want to provide search capabilities using Lucene. (Structured JSON objects would be indexed and stored in Lucene; other content, such as Word documents, would be indexed in Lucene but stored in blob storage.) I want the search to be secure, such that one user can never see a document belonging to another user. I want to allow ad-hoc searches as typed by the user. Lastly, I want to query programmatically to return predefined sets of data, such as "all notes for user X". I think I understand how to add properties to each document to achieve these three objectives. (I am listing them here so that anyone kind enough to answer will have a better idea of what I am trying to do.)
My questions revolve around performance and security.
Can I improve document security by having a separate index for each user, or is including the user's ID as a parameter in each search sufficient?
Can I improve indexing speed and total throughput of the system by having a separate index for each user? My thinking is that having separate indexes would allow me to scale the system by having multiple index writers (perhaps even on different server instances) working at the same time, each on their own index.
Any insight would be greatly appreciated.
Of course, one index.
You can do even better than what you suggested by using ManifoldCF (an Apache product that knows how to handle Solr) to manage security.
And one off-topic, uninformed suggestion: I'd rather use CloudBees or Heroku (or Amazon) instead of Azure.
Until you use several machines for indexing, I guess it's more convenient to use a single index. The Lucene community has done a lot of work to make the indexing process as efficient as it can be. So unless you intentionally want to implement distributed indexing, I don't recommend splitting indexes.
However, there are several reasons why you might want to split indexes:
If your machine has several IO devices that could be utilized in parallel. In this case, if you are IO-bound, splitting indexes is a good idea.
Splitting document fields between indexes (this is what ParallelReader is intended for). This is a more exotic form of splitting, but it may be a good idea if searches are performed using different groups of fields. Suppose we have two search query types: the first uses the fields name and type, and the second uses the fields price and discount. If those fields are updated at different rates (name updates are presumably far rarer than price updates), updating only part of the index would require fewer IO resources, giving the system more overall throughput.
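To illustrate the single-index approach discussed above, here is a minimal Lucene sketch in which every document carries an owner field and every search is combined with a required filter on it. Field names and the in-memory directory are invented.

```java
// Minimal sketch of the single-index design: every document carries an
// "owner" field, and every search is ANDed with a filter on it. Field
// names and the in-memory directory are invented for illustration.
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

public class PerUserFilterSearch {
    public static void main(String[] args) throws Exception {
        Directory dir = new ByteBuffersDirectory();
        try (IndexWriter w = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
            Document doc = new Document();
            doc.add(new StringField("owner", "userX", Field.Store.NO)); // security field
            doc.add(new TextField("body", "meeting notes for project", Field.Store.YES));
            w.addDocument(doc);
        }
        try (DirectoryReader reader = DirectoryReader.open(dir)) {
            // FILTER clause: required for a match, excluded from scoring,
            // so a query can never escape its owner's documents.
            BooleanQuery q = new BooleanQuery.Builder()
                .add(new TermQuery(new Term("body", "notes")), BooleanClause.Occur.MUST)
                .add(new TermQuery(new Term("owner", "userX")), BooleanClause.Occur.FILTER)
                .build();
            System.out.println(new IndexSearcher(reader).search(q, 10).totalHits);
        }
    }
}
```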
