Restrict facet set in Apache Solr search

I use Apache Solr for my search engine. I have a schema with a field called "typology". I'd like to search across all typologies, but I need to calculate facets on several fields for only one typology. Is that possible?
Thank you

Your base query for facets does not have to be the same as your search query: the facet.query parameter lets you compute facet counts against a different query.
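As a rough sketch (the typology value "books" and the field "brand" below are invented for illustration, and URL encoding is omitted), each facet.query returns the count of documents in the result set that match that query:
q=*:*
&facet=true
&facet.query=typology:books
&facet.query=typology:books AND brand:sony
&facet.query=typology:books AND brand:philips
You add one facet.query per combination you need, so the main q can stay unrestricted across typologies.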

Related

How can I find empty Alfresco folders using a Lucene query

I want to retrieve, using a Lucene query, the list of folders under a specific node whose list of children is empty.
I created this query:
+PATH:"/app:company_home/cm:contexts/cm:ctx_exploitation/cm:runs/cm:Run_322645//."+Children is empty.
but it does not give the expected results.
What is the right Lucene syntax to do this?
There is no way to find empty folders using a Lucene query.
However, Alfresco provides Java services and JavaScript APIs,
such as FileFolderService in Java and childByNamePath in JavaScript,
that you can use to write your own logic for finding empty folders.
You can find zero-byte files using the Lucene query below:
TYPE:"cm:content" AND #cm:content.size:0

Force document(s) into Solr search results

Is there a way to force specific documents into Solr search results?
I'm trying something like this, but it is not working:
/q=id:21321 OR myNormalSearchConditionsHereIncludingDismaxQuery
My goal is to have certain documents always show up in some search results, no matter what the query is.
You can use the Solr MoreLikeThis component.
This is just a quick thought based on your description.
With this approach you would no longer need a second query or to cache its results.
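If you still want the single combined query the question attempts, Solr's nested-query syntax can mix an explicit id clause with a dismax sub-query. This is only a sketch; the id value and the qf fields are placeholders:
q=id:21321 OR _query_:"{!dismax qf='title description'}normal user search terms"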

Implement Search Everything using Solr

How does a "search everything" kind of application index and keep track of data in its search indexes?
Recently I have been working with Apache Solr, and it has been producing amazing search results, but only for one particular product catalog section. Since Solr stores its data as documents, we indexed the searchable fields as documents in Solr. I'm not sure how Solr can be used to build a "search everything" kind of search, or how I should index the data for it.
By "search everything" I mean searching across different modules for information such as Customers, Services, Accounts, Orders, Catalog, Support Tickets, etc., so that a single search form returns combined results and the user doesn't need to go to a separate form to search each module.
Do I need to build a different index for each such data model, or store them all as documents in a single Solr index? What is the best strategy to implement this?
You can store all that data in a single index with each document having an extra field that stores its type (Customer, Order, etc.). For the within-module search, just restrict the search query to documents of that type. For the Search All functionality, use copyField to copy all the relevant fields in each document type into one big field, and search with the document type field unconstrained.
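As an illustrative sketch only (all field and type names below are invented, and the exact fieldTypes depend on your schema), the single-index approach could look like this; within-module searches then add a filter such as fq=doc_type:Order, while the Search All form simply queries the catch-all field without constraining doc_type:
<field name="doc_type" type="string" indexed="true" stored="true"/>
<field name="text" type="text_general" indexed="true" stored="false" multiValued="true"/>
<copyField source="customer_name" dest="text"/>
<copyField source="order_number" dest="text"/>
<copyField source="ticket_subject" dest="text"/>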

How does Solr implement filter search

Can anyone point me to the piece of Solr source code that performs the filter query (i.e. executes the fq= parameter)?
Solr's main façade to Lucene is SolrIndexSearcher. In particular, the getDocListC method seems to do the actual hand-off to Lucene.
Filter queries and other parameters are passed via the QueryCommand class; you can see that the filter queries are simply a list of org.apache.lucene.search.Query objects, just like the "main query".
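As a conceptual sketch in plain Lucene terms (this is not the actual Solr code path, which also involves the filter cache and DocSets), a filter query is just another Query that restricts matches without contributing to scoring:
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

// The main query is scored as usual.
Query mainQuery = new TermQuery(new Term("title", "solr"));
// The filter clause (Occur.FILTER) must match but does not affect the score,
// which is how an fq behaves from the user's point of view.
Query filterQuery = new TermQuery(new Term("category", "search"));
Query combined = new BooleanQuery.Builder()
    .add(mainQuery, BooleanClause.Occur.MUST)
    .add(filterQuery, BooleanClause.Occur.FILTER)
    .build();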

Why did they create the concept of "schema.xml" in Solr?

Lucene does searching and indexing entirely through code. Why doesn't Solr do the same? Why do we need a schema.xml? What is its importance? Is there a way to avoid placing all the fields we want into schema.xml? (I guess dynamic fields are the way to go, right?)
That's just the way it was built. Lucene is a library, so you link your code against it. Solr, on the other hand, is a server, and in some cases you can just use it with very little coding (e.g. using DataImportHandler to index and Velocity plugin to browse and search).
The schema allows you to declaratively define how each field is analyzed and queried.
If you want a schema-less server based on Lucene, take a look at ElasticSearch.
If you want to avoid constantly tweaking your schema.xml, then dynamic fields are indeed the way to go. As an example, I like the Sunspot schema.xml: it uses dynamic fields to set up type-based naming conventions in field names.
https://github.com/outoftime/sunspot/blob/master/sunspot/solr/solr/conf/schema.xml
Based on this schema, a field named content_text would be parsed as a text field:
<dynamicField name="*_text" stored="false" type="text" multiValued="true" indexed="true"/>
This corresponds to the text fieldType defined earlier in the schema.
Most schema.xml files that I work with start off based on the Sunspot schema. I have found that you can save a lot of time by establishing and reusing a good convention in your schema.xml.
Solr acts as a stand-alone search server and can be configured with no coding. You can think of it as a front-end for Lucene. The purpose of the schema.xml file is to define your index.
If possible, I would suggest defining all your fields in the schema file. This gives you greater control over how those fields are indexed and it will allow you to take advantage of copy fields (if you need them).
