I have used the Solr search engine, which has a dynamic fields feature. For example, if we define a product_* field in schema.xml, it will accept all fields starting with product_ during indexing.
Is there a feature like this in Azure Search where we can define a wildcard for a field so that it accepts the related fields during indexing? The fixed-field approach reduces flexibility, and one has to update the schema every time new fields are added.
Azure Cognitive Search does not support dynamic fields. Adding fields to the schema as you detect them during indexing is the suggested workaround.
Please consider creating an item on our User Voice page for this. While we haven't considered adding support for dynamic fields specifically, we have been looking at making schemas more flexible and extensible, and your input could help us prioritize this.
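As a rough sketch of that workaround (not an official sample), assuming a hypothetical service name, index name, and admin key: fetch the current index definition over the REST API, append any newly detected fields, and PUT the definition back, since adding fields is a non-breaking index update.

```python
import requests

# All names below are hypothetical -- substitute your own service, index, and admin key.
SERVICE = "my-search-service"
INDEX = "products"
API_KEY = "<admin-api-key>"
API_VERSION = "2020-06-30"

URL = f"https://{SERVICE}.search.windows.net/indexes/{INDEX}?api-version={API_VERSION}"
HEADERS = {"api-key": API_KEY, "Content-Type": "application/json"}

def add_fields(new_fields):
    """Append newly detected fields to the index schema before pushing documents."""
    index_def = requests.get(URL, headers=HEADERS).json()
    # Drop OData metadata returned by GET before sending the definition back.
    index_def = {k: v for k, v in index_def.items() if not k.startswith("@odata")}
    existing = {f["name"] for f in index_def["fields"]}
    for field in new_fields:
        if field["name"] not in existing:
            index_def["fields"].append(field)
    requests.put(URL, headers=HEADERS, json=index_def).raise_for_status()

# Example: a field discovered during indexing, in the spirit of Solr's product_* dynamic fields.
add_fields([{"name": "product_color", "type": "Edm.String",
             "searchable": True, "filterable": True, "retrievable": True}])
```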
I'm trying to implement Azure Search on Kentico 12, following the article below.
https://docs.kentico.com/k12/configuring-kentico/setting-up-search-on-your-website/using-azure-search/integrating-azure-search-into-pages
However, I have multiple indexes defined in Smart Search, not just a single index code name that I can hard-code, and I also cannot afford to hard-code the index fields. Is there a tutorial out there that I can follow?
It sounds as if you're referring to building an Azure Search web part, is this correct? If so, create a property in your web part which allows you to select the index code name from a list in the database. Secondly, regarding field names, you should use generic field names like DocumentName, NodeAliasPath, etc. If you have very specific search results that need to be displayed, simply put in a switch statement that resolves the field names based on the class name.
I created a basic OOTB document library to store Word and PDF files. I have also been tasked with creating a few columns to store some basic metadata about the uploaded documents, for example: AuthorFirstName, AuthorLastName, and a column that lists "topics" discussed in the document.
While I am generally familiar with most Document Library settings, and creating columns, I am seeking information on what column datatype might work best for "topics". In most situations, one uploaded document would have 1-4 topics.
I would rather not use a single-line-of-text datatype, as I don't want to ask users to separate the different values (topics) using a delimiter such as a comma or semicolon. I would also like to offer users the option to sort or filter in the SharePoint views.
There also seem to be some limitations with the Choice datatype.
While Choice fields seem to support fill-in values when a choice is not pre-populated, they only seem to allow one fill-in. I would like the user to be able to use a repeating-table-like interface to add a topic, click an "add" button, and repeat, and so on.
I think in your scenario the best approach would be to use the managed metadata feature (http://office.microsoft.com/en-us/sharepoint-help/introduction-to-managed-metadata-HA102832521.aspx). It allows you to sort/filter library items, lets users add new terms to the metadata store, etc.
Using a Lookup field to a custom List is something worth considering. The main advantage is that your data choices are stored separately from the main list and are easier to track and manage. The disadvantage is that you cannot easily have the user add a fill-in option as you desire. You would need a link from the library or the upload form to the options list, where they would enter a new option separately from tagging it on the document.
Managed metadata is certainly an option as well, but it requires more overhead and sorting/filtering on that is a little trickier. Using a Lookup column is simple, although it does not meet all of your needs.
I have several documents in a Solr collection that I want to be able to search through. Most of the data comes from web sites I can easily crawl; however, I need to add some attributes manually.
So, as an example, I get the following info from a site (all attributes returned from the crawled site):
Name: Porsche Boxter
Year: 1996
...
I want to add additional fields through a web interface (info not present on crawled sites):
Cool: yes
foo: bar
My questions:
Does it make sense at all to store additional information alongside the indexed data within Solr (inside the documents), or would best practice be to keep only the crawled data in Solr and merge it with an externally managed database at query time? To me it makes more sense to have all the data that is eventually queried in Solr, as some of the manually added attributes are required search criteria (e.g. look only for cool cars from the 90s).
Is it possible to use Solr to store additional information about indexed documents? I know the entire schema in advance; perhaps this is useful?
If I store my data exclusively in Solr, how can I ensure that during the next crawl the manually added data is not overwritten? Would partial update be required?
Since I am new to Solr, it would also be very helpful if someone could point out what to look for in the documentation for my use case.
That depends on how often the external data changes: the more often it changes, the less sense it makes. Generally, it is a good idea to store such data alongside the indexed data, because you get it back without an additional database query.
Yes. Use indexed:false and stored:true. If you do not know all of these fields in advance, you could use a dynamicField like <dynamicField name="*_stored" type="string" indexed="false" stored="true" />.
Yes. You have to use partial (atomic) updates. This is no problem in your case, because the fields that are not updated have stored:true.
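As a small sketch of such an atomic update from Python over plain HTTP (the core name, document id, and field names are hypothetical and follow the *_stored dynamicField above): only the fields listed in the update are changed, so the crawler's values and the manually added values can coexist as long as each side only sets its own fields.

```python
import requests

# Hypothetical core name and document id, for illustration only.
SOLR_UPDATE_URL = "http://localhost:8983/solr/cars/update?commit=true"

# Atomic ("partial") update: Solr rewrites the document from its stored fields,
# changing only the fields given here and keeping everything else intact.
manual_attributes = [{
    "id": "porsche-boxter-1996",
    "cool_stored": {"set": "yes"},
    "foo_stored": {"set": "bar"},
}]

requests.post(SOLR_UPDATE_URL, json=manual_attributes).raise_for_status()
```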
How does a "search everything" kind of application index and keep track of data in its search indexes?
Recently I have been working with Apache Solr, which is producing amazing results for search, but only for one particular product catalog section. As Solr stores its data as documents, we indexed the searchable fields as documents in Solr. I'm not sure how it can be used to build a "search everything" kind of search, or how I should index the data into Solr.
By "search everything" I mean searching across different modules for information like Customers, Services, Accounts, Orders, Catalog, Support Tickets, etc., so that a single search form returns combined results and the user doesn't need to go into a different form to search each module.
Do I need to build a separate index for each such data model, or store them all in Solr as documents in a single index? What is the best strategy to implement this?
You can store all that data in a single index, with each document having an extra field that stores its type (Customer, Order, etc.). For the within-module search, just restrict the search query to documents of that type. For the Search All functionality, use copyField to copy all the relevant fields of each document type into one big field, and search with the document-type field unconstrained.
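As a rough sketch of what the two query modes can look like from Python (the core name everything, the type field doc_type, and the copyField target text_all are all hypothetical names):

```python
import requests

# Hypothetical single core holding Customers, Orders, Support Tickets, etc.
SOLR_SELECT_URL = "http://localhost:8983/solr/everything/select"

def search_module(term, doc_type):
    """Within-module search: same full-text query, restricted to one document type."""
    params = {"q": f"text_all:{term}", "fq": f"doc_type:{doc_type}", "wt": "json"}
    return requests.get(SOLR_SELECT_URL, params=params).json()["response"]["docs"]

def search_all(term):
    """Search-all: query the big copyField target with no type constraint."""
    params = {"q": f"text_all:{term}", "wt": "json"}
    return requests.get(SOLR_SELECT_URL, params=params).json()["response"]["docs"]

print(search_module("porsche", "Order"))
print(search_all("porsche"))
```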
We need to provide search options for users to find content based on specific field values.
We're developing a Training Course module for a client, but the standard search looks for the text in any indexed field. We want to allow users to find courses based on searches against specific fields (e.g. Course Type, Location, Price, Date).
We've extended the search to check against specific fields but can't work out how to get the URL parameters passed by the Search form as a GET.
Where does Orchard put URL parameters?
Also, are we missing something? Is there a way that Orchard already supports this that we haven't realized?
I would suggest you copy part of the Search module, more specifically the Controller and the View, and then modify them to suit your specific needs. I see you are actually modifying the original module, but this might be a problem in the long term; for instance, if we start updating the module you might either lose your changes or have to reapply them to the code base. In the end you will end up with a MySiteName.Search module, and you can also add custom routes and custom settings.
On a side note, the Search API is really powerful and you can even use it to do faceted search, or search on inherited taxonomy terms, tags, full text, ranges, ... Having your own controller code will let you use all of these features easily.