BDC model/search connector and multi-value field with refinement - search

BDC model:
My BDC model's entity has a property named Color.
The TypeName is specified as System.String[].
<TypeDescriptor Name="Color" TypeName="System.String[]">
<Properties>
<Property Name="RequiredInForms" Type="System.Boolean">false</Property>
</Properties>
</TypeDescriptor>
Database:
In my database (my BDC content source) I added column values like this one:
;#Blue;#Green;#Yellow;#
Search Schema
I created a new managed property and enabled multiple values (and also made it refinable: active, queryable, retrievable, safe).
Search Results
Filtering on a specific color via search works.
Example: RsExpAdvWorksProductColor:"blue"
Search Refinement
However I cannot refine on colors.
Adding a refiner on my managed property shows up like this:
Color
;#Blue;#Green;#Yellow;#
;#Green;#Yellow;#
;#Red;#Green;#Yellow;#Blue;#Black;#Cyan;#
Obviously the individual values are not treated as such; the whole string of special-delimiter-separated values is shown as a single refinement criterion.
Any hints?
Update 2015-03-20: I took a closer look at the built-in multi-choice columns. In search results they are returned as "Value1;#Value2;#" and so on. Basically there is only a trailing separator (Red;#Blue;#), not a leading one (;#Red;#Blue;#). Much to my regret, that didn't solve my problem.

Update 2015-03-20: Surprise, surprise. It is in fact "working as designed" (like so many things in SharePoint :P). What I am looking for has to be dealt with separately. It behaves exactly the same with built-in multi-choice fields, so there is nothing wrong with my BDC/Search integration.
Regarding the refiner, have a look at the following links...
http://www.eliostruyf.com/part-6-create-multi-value-search-refiner-control/
https://hyankov.wordpress.com/2014/12/15/sharepoint-2013-refiner-multi-value-contains-instead-of-an-equals/

Related

Can data in Solr be extended with manually defined meta data?

I have several documents in a Solr collection that I want to be able to search through. Most of the data comes from web sites I can easily crawl; however, I need to add some attributes manually because they are not present on the crawled sites.
So as an example I get the following info from a site (all attributes returned from crawled site):
Name: Porsche Boxter
Year: 1996
...
I want to add additional fields through a web interface (info not present on crawled sites):
Cool: yes
foo: bar
My questions:
Does it make sense at all to store additional information alongside the indexed data within Solr (inside the documents), or would it be best practice to keep only the crawled data in Solr and merge it with an externally managed database at query time? To me it makes more sense to have all the data that is eventually queried in Solr, since some of the manually added attributes are required search criteria (e.g. look only for cool cars from the 90s).
Is it possible to use Solr to store additional information about indexed documents? I know the entire schema in advance; perhaps that is useful?
If I store my data exclusively in Solr, how can I ensure that during the next crawl the manually added data is not overwritten? Would partial update be required?
Since I am new to Solr, it would also be very helpful if someone could simply point out what to look for in the documentation that describes my use case.
That depends on how often the external data changes. The more often it changes, the less meaningful this becomes. Generally it is a good idea to store such data alongside the indexed data, because you get it back without an additional database query.
Yes. Use indexed:false and stored:true. If you did not know all of these fields in advance, you could use a dynamicField like <dynamicField name="*_stored" type="string" indexed="false" stored="true" />.
Yes, you have to use partial (atomic) update. This is no problem in your case, because the fields that are not updated have stored:true, so Solr can reconstruct the full document.
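As a minimal sketch of what such a partial update could look like in Solr's XML update format (assuming Solr 4+ with atomic updates, a uniqueKey field named id, and a manually maintained field named Cool; all names are illustrative):
<add>
  <doc>
    <field name="id">porsche-boxter-1996</field>
    <!-- update="set" replaces only this field; all other stored fields are preserved -->
    <field name="Cool" update="set">yes</field>
  </doc>
</add>
Posting this to the update handler changes only the Cool field, so a later crawl that also uses atomic updates for the crawled fields would not wipe out the manually added ones.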

Solr Schema Advice for Dynamic Facetable Fields

I haven't used Solr for about three years, and I see that it has now jumped up to 3.5.
The "Documents" that I am trying to add to my Index are properties.
The majority of properties will have about ten of the same types of fields, such as:
Longitude
Latitude
Name
Location Name, etc.
However, I also want to add in attributes about the property which should be facetable.
Properties receive features, which are grouped into ten or so categories, such as
(Entertainment, Attractions, General, Kitchen, Spa, etc.). When adding their property, the user would then select items from a predefined list.
So that for example, if they are adding General features, they might check:
✓ Heating, ✓ Balcony, ✓ Garage, ✓ Washing Machine etc.
Then on my presentation layer, these options might be displayed under the heading General with all of the available facets that fall within the General category.
So, this is my problem... if I make a facet field called "general", I would actually want to add a lot of values to this field. But then, can you facet over a multi-valued field?
And then I would want to do exactly the same for "spa" where the user might check that the property has a Swimming Pool, Sauna and a load of other features etc.
Any advice as to how I should construct my schema would be appreciated.
Yes, you can facet over a multi-valued field. Watch this presentation about facets by Solr's creator:
The Many Facets of Apache Solr by Yonik Seeley
I hope this will have all the answers you need. The only thing you need to do in the schema is to set the field as multi-valued (and maybe also add some processing if the values are free text rather than IDs, but this is shown nicely in the presentation and slides). A minimal sketch follows below.
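As a minimal sketch (the field name is taken from the question, while the string fieldType is assumed to be defined elsewhere in the schema), the field definition and a matching facet request could look roughly like this:
<field name="general" type="string" indexed="true" stored="true" multiValued="true"/>
<!-- facet request: q=*:*&facet=true&facet.field=general -->
Each value stored in the multi-valued field (Heating, Balcony, Garage, ...) then gets its own facet count.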

[Me] Filter Issue

I have added the following field to a custom list definition based on custom content type.
<Field Type="User" DisplayName="Line Manager" List="UserInfo" Required="FALSE" EnforceUniqueValues="FALSE" ShowField="ImnName" UserSelectionMode="PeopleOnly" UserSelectionScope="0" ID="{098E0A5A-8187-481E-B155-B674A406EEAF}" SourceID="{53ca79b7-9ffa-457d-aff8-c71508b09cb1}" StaticName="Line_x0020_Manager" Name="Line_x0020_Manager" RowOrdinal="32" Filterable="TRUE" FromBaseType="TRUE"/>
I am putting a [Me] filter on this column in a view. The filter is not able to filter the records for the logged-in user.
Am I missing something?
EDIT
Interestingly if I add similar column through SharePoint UI the filter works fine. Any clues, ideas welcome.
Does the [Me] filter work on a native list definition?
Or, are you sure that your SharePoint installation is using the English version?
Maybe you should change the field type from "User" to "People or Group".
Phew!! This got resolved and was one of the most frustrating things. I am not sure if this was an issue with the way I defined the schema or whether it's a bug in SharePoint.
I ran a profiler to see what's going on under the hood and found a query (pretty huge for me to digest) in which RowOrdinal was being used extensively with a predefined value of 0 or 1. As I was using "32" as the RowOrdinal, it looked shady to me. I changed it to "0" and bingo!! The filter started working.
BTW here is what MSDN says about it – "Optional Integer. Specifies the database location for the field."
It doesn't appear that it should take part in filtering records.
So, to close, the field should be defined as:
<Field Type="User" DisplayName="Line Manager" List="UserInfo" Required="FALSE" EnforceUniqueValues="FALSE" ShowField="ImnName" UserSelectionMode="PeopleOnly" UserSelectionScope="0" ID="{098E0A5A-8187-481E-B155-B674A406EEAF}" SourceID="{53ca79b7-9ffa-457d-aff8-c71508b09cb1}" StaticName="Line_x0020_Manager" Name="Line_x0020_Manager" RowOrdinal="0" Filterable="TRUE" FromBaseType="TRUE"/>

Why did they create the concept of "schema.xml" in Solr?

Lucene does searching and indexing, all through "coding"... Why doesn't Solr do the same? Why do we need a schema.xml? What's its importance? Is there a way to avoid placing all the fields we want into a schema.xml? (I guess dynamic fields are the way to go, right?)
That's just the way it was built. Lucene is a library, so you link your code against it. Solr, on the other hand, is a server, and in some cases you can just use it with very little coding (e.g. using DataImportHandler to index and Velocity plugin to browse and search).
The schema allows you to declaratively define how each field is analyzed and queried.
If you want a schema-less server based on Lucene, take a look at ElasticSearch.
If you want to avoid constantly tweaking your schema.xml, then dynamic fields are indeed the way to go. For an example, I like the Sunspot schema.xml — it uses dynamic fields to set up type-based naming conventions in field names.
https://github.com/outoftime/sunspot/blob/master/sunspot/solr/solr/conf/schema.xml
Based on this schema, a field named content_text would be parsed as a text field:
<dynamicField name="*_text" stored="false" type="text" multiValued="true" indexed="true"/>
This corresponds to the earlier definition of the text fieldType.
Most schema.xml files that I work with start off based on the Sunspot schema. I have found that you can save a lot of time by establishing and reusing a good convention in your schema.xml.
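For instance, a suffix-based convention could be sketched roughly like this (the suffixes are illustrative, not copied verbatim from the Sunspot file, and assume string and int fieldTypes are defined earlier in the schema):
<!-- anything ending in _s is treated as a plain string, anything ending in _i as an integer -->
<dynamicField name="*_s" type="string" indexed="true" stored="true"/>
<dynamicField name="*_i" type="int" indexed="true" stored="true"/>
With that in place, new fields like make_s or year_i need no further schema changes.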
Solr acts as a stand-alone search server and can be configured with no coding. You can think of it as a front end for Lucene. The purpose of the schema.xml file is to define your index.
If possible, I would suggest defining all your fields in the schema file. This gives you greater control over how those fields are indexed and it will allow you to take advantage of copy fields (if you need them).
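A copy field is a one-line declaration in schema.xml; as a small sketch with illustrative field names (both source and destination fields must already be defined):
<!-- index the contents of title into the catch-all text field as well -->
<copyField source="title" dest="text"/>
This lets you query a single combined field while the original field stays available on its own.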

XML Schema: How to validate an attribute with multiple keys concatenated?

Let's say I can get XML like this:
<Property Name="Title"/>
<Property Name="Content"/>
<Property Name="Address"/>
<Source properties="Title,Content,Address"/>
How could I validate the "properties" attribute of "Source", so that any composition of the above listed "Property" items can be checked? (For example, "Title" and "Title,Content" are correct concatenations, while "Title, URL" is not.)
You can't do that within XML Schema. You can do it with your own higher level of validation based on XSLT, XQuery or Schematron, for example.
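As a rough sketch of that higher level of validation, a Schematron pattern (assuming the XSLT 2 query binding so that tokenize() and every ... satisfies are available, and assuming the tokens carry no surrounding whitespace) could check each name in @properties against the declared Property elements:
<schema xmlns="http://purl.oclc.org/dsdl/schematron" queryBinding="xslt2">
  <pattern>
    <rule context="Source">
      <!-- every comma-separated token must match a declared Property/@Name -->
      <assert test="every $p in tokenize(@properties, ',') satisfies $p = //Property/@Name">
        Unknown property name in the properties attribute.
      </assert>
    </rule>
  </pattern>
</schema>
Running this through a Schematron processor would then flag values such as "Title,URL", because URL is not declared as a Property.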
xan is right; validating always means matching an XML file against a given schema. But there is no schema involved here. Your problem is instead to read a data file and validate later entries against earlier ones (if the box above is supposed to represent one file), or to validate one data file against another data file (if the gap is supposed to be a file separator). Beyond that, a schema defines the structure of elements and attributes and, optionally, data types (values only if there is a strict enumeration of valid values). That is also no match here; instead you want to verify data against data. Sorry, a schema is the wrong tool for the problem you want to solve.
