No result returned from the NEST C# Elasticsearch query - c#-4.0

I am indexing an attachment field. The POST query in Sense returns the expected result set.
My query is:
POST /mydocs/_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "file.content": "abc" } },
        { "match": { "otherDetails": "asd" } },
        { "match": { "filePermissionInfo.accountValue": "xyz" } }
      ]
    }
  }
}
I need to convert it to C# NEST code. I tried converting it, but it is not returning any results, even though the index contains data. If I remove
m.Match(mt1 => mt1.Field(f1 => f1.File.Content).Query(queryTerm))
from the expression below, it returns a result set. Is there any problem with the attachment field?
client.Search<IndexDocument>(s => s
    .Index("mydocs")
    .Query(q => q
        .Bool(b => b
            .Must(m =>
                m.Match(mt1 => mt1.Field(f1 => f1.File.Content).Query(queryTerm)) &&
                m.Match(mt2 => mt2.Field(f2 => f2.FilePermissionInfo.First().SecurityIdValue).Query(accountName)) &&
                m.Match(mt3 => mt3.Field(f3 => f3.OtherDetails).Query(other))
            )))
);
My mapping is:
{
  "mydocs": {
    "mappings": {
      "indexdocument": {
        "properties": {
          "docLocation": {
            "type": "string",
            "index": "not_analyzed",
            "store": true
          },
          "documentType": {
            "type": "string",
            "store": true
          },
          "file": {
            "type": "attachment",
            "fields": {
              "content": {
                "type": "string",
                "term_vector": "with_positions_offsets",
                "analyzer": "full"
              },
              "author": {
                "type": "string"
              },
              "title": {
                "type": "string",
                "term_vector": "with_positions_offsets",
                "analyzer": "full"
              },
              "name": {
                "type": "string"
              },
              "date": {
                "type": "date",
                "format": "strict_date_optional_time||epoch_millis"
              },
              "keywords": {
                "type": "string"
              },
              "content_type": {
                "type": "string"
              },
              "content_length": {
                "type": "integer"
              },
              "language": {
                "type": "string"
              }
            }
          },
          "filePermissionInfo": {
            "properties": {
              "fileSystemRights": {
                "type": "string",
                "store": true
              },
              "securityIdValue": {
                "type": "string",
                "store": true
              }
            }
          },
          "id": {
            "type": "double",
            "store": true
          },
          "lastModifiedDate": {
            "type": "date",
            "store": true,
            "format": "strict_date_optional_time||epoch_millis"
          },
          "otherDetails": {
            "type": "string"
          },
          "title": {
            "type": "string",
            "store": true,
            "term_vector": "with_positions_offsets"
          }
        }
      }
    }
  }
}

It looks like the query hasn't been translated to NEST correctly. In the JSON query you have
"filePermissionInfo.accountValue"
but in the NEST query you have
f2 => f2.FilePermissionInfo.First().SecurityIdValue
which resolves to filePermissionInfo.securityIdValue, a different field. You need to change this to
f2 => f2.FilePermissionInfo.First().AccountValue
so that both queries target the same field.
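Putting it together, the corrected search could look like the sketch below. It assumes the IndexDocument POCO exposes an AccountValue property on the filePermissionInfo items, matching the accountValue field in the JSON query, and keeps the rest of the original expression:
var response = client.Search<IndexDocument>(s => s
    .Index("mydocs")
    .Query(q => q
        .Bool(b => b
            .Must(m =>
                m.Match(mt1 => mt1.Field(f1 => f1.File.Content).Query(queryTerm)) &&
                // now targets filePermissionInfo.accountValue, like the JSON query
                m.Match(mt2 => mt2.Field(f2 => f2.FilePermissionInfo.First().AccountValue).Query(accountName)) &&
                m.Match(mt3 => mt3.Field(f3 => f3.OtherDetails).Query(other))
            )))
);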

Related

Value is not a String - StrongLoop

I am using StrongLoop for API creation, but it is giving me an error. The properties in my model JSON file are:
"properties": {
"id": {
"type": "Number"
},
"name": {
"type": "string",
"required": true
},
"language": {
"type": "string",
"required": false
},
"timezone": {
"type": "string",
"required": false
},
"labelId": {
"type": "number",
"required": false,
"default": 0
},
"street": {
"type": "string",
"required": false
},
"contact": {
"type": "number",
"required": false
},
"maincontact": {
"type": "number",
"required": false
},
"visitorTypes": {
"type": "array",
"required": false
},
"activeVisitorAvatar": {
"type": "boolean"
},
"activeLegalDocument": {
"type": "boolean"
},
"legalDocuments": {
"type": "array"
},
"logo": {
"type": "string",
"required": false
},
"logoType": {
"type": "string",
"required": false
},
"logoSmall": {
"type": "string",
"required": false
},
"logoSmallType": {
"type": "string",
"required": false
},
"activeSignOut": {
"type": "boolean"
},
"activePrint": {
"type": "boolean"
},
"activeScanTemperature": {
"type": "boolean"
},
"printerIp": {
"type": "string"
},
"activeVoicePrompt": {
"type": "boolean"
},
"mandatoryCompanyName": {
"type": "boolean"
},
"mandatoryPhoneNumber": {
"type": "boolean"
},
"sliders": {
"type": "array"
},
"slidersCount": {
"type": "number"
},
"accountId": {
"type": "number"
},
"visitorsignouttime": {
"type": "number"
},
"signoutLink": {
"type": "boolean"
},
"signoutnotification": {
"type": "boolean"
},
"deviceofflinenotification": {
"type": "boolean"
},
"deviceonlinenotification": {
"type": "boolean"
},
"emergencyAlert": {
"type": "object"
},
"autoSignOut": {
"type": "boolean"
},
"signOutTime": {
"type": "string"
},
"signOutPin": {
"type": "boolean"
},
"questionsEnabled": {
"type": "boolean"
},
"questions": {
"type": "array"
},
"logicalQuestionnaire": {
"type": "array"
},
"isEnableTemperatureCheck": {
"type": "boolean"
},
"disableTemperatureCheckScreen": {
"type": "boolean"
},
"isEnableQrCodeWithPinInside": {
"type": "boolean"
},
"isEnableComplianceAlerts": {
"type": "boolean"
},
"alertsWatchlistPriority": {
"type": "number"
},
"alertsCompliancePriority": {
"type": "number"
},
"eventNames": {
"type": "array"
},
"rooms": {
"type": "array"
},
"selfie": {
"type": "boolean"
},
"displayCompany": {
"type": "boolean"
},
"pagination": {
"type": "boolean"
}
},
My body is as follows:
let body = JSON.stringify({
  id,
  name,
  street,
  timezone: this.timezone,
  activeVisitorAvatar,
  activeLegalDocument,
  legalDocuments,
  visitorTypes,
  activeSignOut,
  activeScanTemperature,
  activePrint,
  printerIp,
  labelId,
  activeVoicePrompt,
  mandatoryCompanyName,
  mandatoryPhoneNumber,
  mandatoryAnswersToQuestions,
  visitorsignouttime,
  signoutLink,
  signoutnotification,
  deviceofflinenotification,
  deviceonlinenotification,
  emergencyMessages: this.getEmergencyMessages(),
  address: this.address,
  autoSignOut,
  signOutTime,
  signOutPin,
  questionsEnabled,
  questions: this.getQuestions(),
  logicalQuestionnaire: this.getLogicalQuestionnaire(),
  accountId,
  isEnableRememberMeForFullUIFlow,
  isEnableTemperatureCheck,
  isEnableQrCodeWithPinInside,
  isEnableComplianceAlerts,
  alertsWatchlistPriority,
  alertsCompliancePriority,
  selfie,
  displayCompany,
  temperatureThreshold: {
    maximum: Number(temperatureMax),
    minimum: Number(temperatureMin),
    displayTextFormat: this.curr.temperatureThreshold.displayTextFormat,
  },
  pagination,
  gdpr: {
    isActive: this.curr.gdpr.isActive,
    days: gdprDays,
  },
  purposes /*filter(purposes, (item) => item.id)*/,
  isEnableIfThenQuestionnaire: isEnableIfThenQuestionnaire,
  visitorQueueDisplay,
  autoRefreshEntries: { enabled: autoRefreshEntries, interval: Number(autoRefreshEntriesInterval) },
  visitor_email,
});
But I am receiving the error {"error":{"statusCode":400,"name":"Error","message":"Value is not a string.","stack":"Error: Value is not a string.\n at Object.validate
I am stuck here! I have tried changing the data types in the JSON file, but it didn't help, and I can't tell which parameter is causing the error. All the values come from the front-end HTML and Angular. My dependencies are as follows:
"loopback": "^3.0.0",
"loopback-boot": "^2.6.5",
"loopback-component-explorer": "^2.4.0",
"loopback-component-storage": "^3.0.0",
"loopback-connector-mongodb": "3.0.1",
The network request call I am making is:
try {
  this.http
    .post('/api/sites/edit?access_token=' + this.token, body, {
      headers: contentHeaders,
    })
    .subscribe(
      (response) => {
        this.hideFlag = true;
        this.toastr.success('Saved!');
        this.http
          .post(
            'api/users/setFeaturesByAccountId',
            {},
            {
              params: {
                access_token: this.token,
                listFeatures: this.listFeature,
                accountId: this.curr.accountId,
              },
            },
          )
          .subscribe();
      },
      (error) => {
        this.hideFlag = true;
        this.toastr.error('Error');
        this.showError = error.json().error;
        console.log(error.text());
      },
    );
} catch (error) {
  console.log(error);
}
So the error was that LoopBack 3 doesn't type-cast correctly. I had to check which variable wasn't being cast correctly and then parse it accordingly in my middleware:
if (req.body.accountId) {
  var numberId = parseInt(req.body.accountId);
  req.body.accountId = numberId;
}
Not a perfect solution, but a workaround!
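Extending that idea, the coercion can be centralized instead of patching one field at a time. A sketch, assuming it is mounted as Express-style middleware before LoopBack's validators run; the field list is taken from the "number" properties of the model above:
// Properties declared as "number" in the model JSON, which arrive as strings from the form
var NUMERIC_FIELDS = ['id', 'labelId', 'contact', 'maincontact', 'slidersCount',
  'accountId', 'visitorsignouttime', 'alertsWatchlistPriority', 'alertsCompliancePriority'];
module.exports = function coerceNumericFields(req, res, next) {
  if (req.body) {
    NUMERIC_FIELDS.forEach(function (field) {
      var value = req.body[field];
      // only coerce values that are present and actually parse as numbers
      if (value !== undefined && value !== null && value !== '' && !isNaN(Number(value))) {
        req.body[field] = Number(value);
      }
    });
  }
  next();
};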

How to insert array values into a table using a Logic App

When an HTTP request is received, I need to insert the array values into a table; in my case the array is ResponseRequired.
I used these things: a "When a HTTP request is received" trigger, then Parse JSON, and then a for-each loop; inside the for-each I used Insert Entity, but it's throwing an error. If anybody knows how to implement this, let me know.
I used this expression for the ResponseRequiredType (RRT): body('Parse_JSON')['ResponseRequired'][0]['ResponseRequiredType']
The JSON schema:
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "properties": {
    "AssetErrorCode": {
      "type": "string"
    },
    "AssetErrorDesc": {
      "type": "string"
    },
    "AssetId": {
      "type": "integer"
    },
    "CustomerId": {
      "type": "integer"
    },
    "ResponseRequired": {
      "items": [
        {
          "properties": {
            "ResponseRequiredAdditionalData": {
              "type": "string"
            },
            "ResponseRequiredAddress": {
              "type": "string"
            },
            "ResponseRequiredFrequency": {
              "type": "string"
            },
            "ResponseRequiredType": {
              "type": "integer"
            }
          },
          "required": [
            "ResponseRequiredType",
            "ResponseRequiredFrequency",
            "ResponseRequiredAddress",
            "ResponseRequiredAdditionalData"
          ],
          "type": "object"
        },
        {
          "properties": {
            "ResponseRequiredAdditionalData": {
              "type": "string"
            },
            "ResponseRequiredAddress": {
              "type": "string"
            },
            "ResponseRequiredFrequency": {
              "type": "string"
            },
            "ResponseRequiredType": {
              "type": "integer"
            }
          },
          "required": [
            "ResponseRequiredType",
            "ResponseRequiredFrequency",
            "ResponseRequiredAddress",
            "ResponseRequiredAdditionalData"
          ],
          "type": "object"
        },
        {
          "properties": {
            "ResponseRequiredAdditionalData": {
              "type": "string"
            },
            "ResponseRequiredAddress": {
              "type": "string"
            },
            "ResponseRequiredFrequency": {
              "type": "string"
            },
            "ResponseRequiredType": {
              "type": "integer"
            }
          },
          "required": [
            "ResponseRequiredType",
            "ResponseRequiredFrequency",
            "ResponseRequiredAddress",
            "ResponseRequiredAdditionalData"
          ],
          "type": "object"
        }
      ],
      "type": "array"
    },
    "ServiceKey": {
      "type": "string"
    }
  },
  "required": [
    "CustomerId",
    "ServiceKey",
    "AssetId",
    "AssetErrorCode",
    "AssetErrorDesc",
    "ResponseRequired"
  ],
  "type": "object"
}
Using this expression inside the for-each solved the issue:
items('For_each')?['ResponseRequiredAddress']
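The key change is items('For_each'), which refers to the current element of the loop rather than a fixed index like [0]. For context, a sketch of what the Insert Entity body could look like inside the loop; the PartitionKey and RowKey choices are illustrative, not from the original post:
{
  "PartitionKey": "@{triggerBody()?['CustomerId']}",
  "RowKey": "@{guid()}",
  "ResponseRequiredType": "@{items('For_each')?['ResponseRequiredType']}",
  "ResponseRequiredFrequency": "@{items('For_each')?['ResponseRequiredFrequency']}",
  "ResponseRequiredAddress": "@{items('For_each')?['ResponseRequiredAddress']}",
  "ResponseRequiredAdditionalData": "@{items('For_each')?['ResponseRequiredAdditionalData']}"
}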

Kibana: Search within text for string

I have a log message in Kibana that contains this:
org.hibernate.exception.GenericJDBCException: Cannot open connection
at org.springframework.orm.hibernate3.HibernateTransactionManager.doBegin(HibernateTransactionManager.java:597)
Actual search that isn't returning results: log_message: "hibernate3"
If I search for "hibernate3", this message does not appear. I am using an Elasticsearch template and have indexed the field, but I also want to be able to do case-insensitive full-text searching. Is this possible?
Template that is in use:
{
  "template": "filebeat-*",
  "mappings": {
    "mainProgram": {
      "properties": {
        "#timestamp": {
          "type": "date",
          "format": "strict_date_optional_time||epoch_millis"
        },
        "#version": {
          "type": "text"
        },
        "beat": {
          "properties": {
            "hostname": {
              "type": "text"
            },
            "name": {
              "type": "text"
            }
          }
        },
        "class_method": {
          "type": "text",
          "fielddata": "true",
          "index": "true"
        },
        "class_name": {
          "type": "text",
          "fielddata": "true"
        },
        "clientip": {
          "type": "ip",
          "index": "not_analyzed"
        },
        "count": {
          "type": "long"
        },
        "host": {
          "type": "text",
          "index": "not_analyzed"
        },
        "input_type": {
          "type": "text",
          "index": "not_analyzed"
        },
        "log_level": {
          "type": "text",
          "fielddata": "true",
          "index": "true"
        },
        "log_message": {
          "type": "text",
          "index": "true"
        },
        "log_timestamp": {
          "type": "text"
        },
        "log_ts": {
          "type": "long",
          "index": "not_analyzed"
        },
        "message": {
          "type": "text"
        },
        "offset": {
          "type": "long",
          "index": "not_analyzed"
        },
        "query_params": {
          "type": "text",
          "index": "true"
        },
        "sessionid": {
          "type": "text",
          "index": "true"
        },
        "source": {
          "type": "text",
          "index": "not_analyzed"
        },
        "tags": {
          "type": "text"
        },
        "thread": {
          "type": "text",
          "index": "true"
        },
        "type": {
          "type": "text"
        },
        "user_account_combo": {
          "type": "text",
          "index": "true"
        },
        "version": {
          "type": "text"
        }
      }
    },
    "access": {
      "properties": {
        "#timestamp": {
          "type": "date",
          "format": "strict_date_optional_time||epoch_millis"
        },
        "#version": {
          "type": "text"
        },
        "beat": {
          "properties": {
            "hostname": {
              "type": "text"
            },
            "name": {
              "type": "text"
            }
          }
        },
        "clientip": {
          "type": "ip",
          "index": "not_analyzed"
        },
        "count": {
          "type": "long",
          "index": "not_analyzed"
        },
        "host": {
          "type": "text",
          "index": "true"
        },
        "input_type": {
          "type": "text",
          "index": "not_analyzed"
        },
        "log_timestamp": {
          "type": "text"
        },
        "log_ts": {
          "type": "long",
          "index": "not_analyzed"
        },
        "message": {
          "type": "text"
        },
        "offset": {
          "type": "long",
          "index": "not_analyzed"
        },
        "query_params": {
          "type": "text",
          "index": "true"
        },
        "response_time": {
          "type": "long"
        },
        "sessionid": {
          "type": "text",
          "index": "true"
        },
        "source": {
          "type": "text",
          "index": "not_analyzed"
        },
        "statuscode": {
          "type": "long"
        },
        "tags": {
          "type": "text"
        },
        "thread": {
          "type": "text",
          "index": "true"
        },
        "type": {
          "type": "text",
          "index": "true"
        },
        "uripath": {
          "type": "text",
          "index": "true"
        },
        "user_account_combo": {
          "type": "text",
          "index": "true"
        },
        "verb": {
          "type": "text",
          "index": "true"
        }
      }
    }
  }
}
message: *.hibernate3.*
also works (note that no quotes are needed for that).
According to your scenario, what you're looking for is an analyzed string type, which first analyzes the string and then indexes it. A quote from the docs:
In other words, index this field as full text.
Thus, make sure the mapping of the necessary fields is set up properly, so that you'll be able to do a full-text search on the docs.
Assuming that the log line is under the field message in Kibana, you could simply search for the word with:
message:"hibernate3"
You might also want to refer to this to understand the difference between term-based and full-text search.
EDIT
Have the mapping of the field log_message as such:
"log_message": {
  "type": "string", <- to make it analyzed
  "index": "true"
}
Also try doing a wildcard search as such:
{ "wildcard": { "log_message": "*.hibernate3.*" } }
With Kibana 6.4.1 I used "%" as the wildcard:
message: %hibernate3%
For me it was because I was using ".keyword". My key was called "message", and I had both "message" and "message.keyword" available. Full-text search doesn't work on ".keyword".
Not working:
message.keyword : hello
Working:
message : hello
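This split comes from Elasticsearch's default dynamic mapping, which indexes an incoming string both as analyzed full text and as an exact-value keyword subfield. Roughly, the generated mapping looks like the sketch below, which is why full-text queries belong on message and exact matches on message.keyword:
"message": {
  "type": "text",
  "fields": {
    "keyword": {
      "type": "keyword",
      "ignore_above": 256
    }
  }
}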

How to apply a filter on geo coordinates in Elasticsearch?

I am using Elasticsearch with the mongoosastic npm module. I am trying to apply a filter on geo coordinates with the following model structure:
geoLocation: {
  type: {
    type: String,
    default: 'Point'
  },
  coordinates: [Number] // order should be lat,lng
}
with the following mapping:
{
  "events": {
    "settings": {
      "analysis": {
        "filter": {
          "edgeNGram_filter": {
            "type": "edgeNGram",
            "min_gram": 1,
            "max_gram": 50,
            "side": "front"
          }
        },
        "analyzer": {
          "edge_nGram_analyzer": {
            "type": "custom",
            "tokenizer": "edge_ngram_tokenizer",
            "filter": [
              "lowercase",
              "asciifolding",
              "edgeNGram_filter"
            ]
          },
          "whitespace_analyzer": {
            "type": "custom",
            "tokenizer": "whitespace",
            "filter": [
              "lowercase",
              "asciifolding"
            ]
          }
        },
        "tokenizer": {
          "edge_ngram_tokenizer": {
            "type": "edgeNGram",
            "min_gram": "1",
            "max_gram": "50",
            "token_chars": [
              "letter",
              "digit"
            ]
          }
        }
      }
    },
    "mappings": {
      "event": {
        "_all": {
          "index_analyzer": "nGram_analyzer",
          "search_analyzer": "whitespace_analyzer"
        },
        "properties": {
          "title": {
            "type": "string",
            "index": "not_analyzed"
          },
          "geoLocation": {
            "index": "not_analyzed",
            "type": "geo_point"
          }
        }
      }
    }
  }
}
Query
{
  "query": {
    "multi_match": {
      "query": "the",
      "fields": ["title"]
    }
  },
  "filter": {
    "geo_distance": {
      "distance": "200km",
      "geoLocation.coordinates": {
        "lat": 19.007452,
        "lon": 72.831556
      }
    }
  }
}
I am unable to index the geo coordinates with the above model structure. I don't know whether indexing geo coordinates is impossible with this structure; in my case the coordinates are ordered lat,lng, and I have found that Elasticsearch expects the coordinate order to be lng,lat.
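The order concern is real: when a geo_point value is supplied as a bare array, Elasticsearch follows the GeoJSON convention of [lon, lat], while the object form names each part explicitly. A small sketch of the two equivalent document shapes for the geo_point mapping above:
{ "geoLocation": [72.831556, 19.007452] }
or, equivalently:
{ "geoLocation": { "lat": 19.007452, "lon": 72.831556 } }
So if the source data is ordered lat,lng, it has to be swapped before being indexed as an array.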
Error
Error: SearchPhaseExecutionException[Failed to execute phase [query], all shards failed; shardFailures {[CDHdgtJnTbeu8tl2mDfllg][events][0]: SearchParseException[[events][0]: from[-1],size[-1]: Parse Failure [Failed to parse source
The output of curl -XGET localhost:9200/events is:
{
  "events": {
    "aliases": {},
    "mappings": {
      "1": {
        "properties": {
          "location": {
            "type": "double"
          },
          "text": {
            "type": "string"
          }
        }
      },
      "event": {
        "properties": {
          "city": {
            "type": "string"
          },
          "endTime": {
            "type": "date",
            "format": "dateOptionalTime"
          },
          "geo_with_lat_lon": {
            "type": "geo_point",
            "lat_lon": true
          },
          "isActive": {
            "type": "boolean"
          },
          "isRecommended": {
            "type": "boolean"
          },
          "location": {
            "type": "string"
          },
          "title": {
            "type": "string"
          }
        }
      }
    },
    "settings": {
      "index": {
        "creation_date": "1461675012489",
        "uuid": "FT-xVUdPQtyuKFm4J4Rd7g",
        "number_of_replicas": "1",
        "number_of_shards": "5",
        "events": {
          "mappings": {
            "event": {
              "_all": {
                "enabled": "false",
                "search_analyzer": "whitespace_analyzer",
                "index_analyzer": "nGram_analyzer"
              },
              "properties": {
                "geoLocation": {
                  "coordinates": {
                    "type": "geo_shape",
                    "index": "not_analyzed"
                  }
                },
                "location": {
                  "type": "string",
                  "index": "not_analyzed"
                },
                "title": {
                  "type": "string",
                  "index": "not_analyzed"
                },
                "geo_with_lat_lon": {
                  "type": "geo_point",
                  "lat_lon": "true",
                  "index": "not_analyzed"
                }
              }
            }
          },
          "settings": {
            "analysis": {
              "analyzer": {
                "edge_nGram_analyzer": {
                  "type": "custom",
                  "filter": [
                    "lowercase",
                    "asciifolding",
                    "edgeNGram_filter"
                  ],
                  "tokenizer": "edge_ngram_tokenizer"
                },
                "whitespace_analyzer": {
                  "type": "custom",
                  "filter": [
                    "lowercase",
                    "asciifolding"
                  ],
                  "tokenizer": "whitespace"
                }
              },
              "filter": {
                "edgeNGram_filter": {
                  "max_gram": "50",
                  "type": "edgeNGram",
                  "min_gram": "1",
                  "side": "front"
                }
              },
              "tokenizer": {
                "edge_ngram_tokenizer": {
                  "max_gram": "50",
                  "type": "edgeNGram",
                  "min_gram": "1",
                  "token_chars": [
                    "letter",
                    "digit"
                  ]
                }
              }
            }
          }
        },
        "version": {
          "created": "1070099"
        }
      }
    },
    "warmers": {}
  }
}
I found a solution to my question.
Mapping
PUT /geo_test
{
  "mappings": {
    "type_test": {
      "properties": {
        "name": {
          "type": "string"
        },
        "geoLocation": {
          "type": "nested",
          "properties": {
            "coordinates": {
              "type": "geo_point",
              "lat_lon": true
            }
          }
        }
      }
    }
  }
}
Query
POST /geo_test/type_test/_search
{
  "query": {
    "filtered": {
      "filter": {
        "nested": {
          "path": "geoLocation",
          "query": {
            "filtered": {
              "filter": {
                "geo_distance": {
                  "distance": 5,
                  "distance_unit": "km",
                  "geoLocation.coordinates": {
                    "lat": 41.12,
                    "lon": -71.34
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}

mapper_parsing_exception in new Elasticsearch 2.1.1 version

Problem: I have created a mapping that works fine in Elasticsearch 1.7.1, but after updating to 2.1.1 it gives me an exception.
EXCEPTION
response: '{"error":{"root_cause":[{"type":"mapper_parsing_exception","reason":"analyzer on field [_all] must be set when search_analyzer is set"}],"type":"mapper_parsing_exception","reason":"Failed to parse mapping [movie]: analyzer on field [_all] must be set when search_analyzer is set","caused_by":{"type":"mapper_parsing_exception","reason":"analyzer on field [_all] must be set when search_analyzer is set"}},"status":400}',
toString: [Function],
toJSON: [Function] }
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0,
    "analysis": {
      "filter": {
        "nGram_filter": {
          "type": "nGram",
          "min_gram": 2,
          "max_gram": 20,
          "token_chars": [
            "letter",
            "digit",
            "punctuation",
            "symbol"
          ]
        }
      },
      "analyzer": {
        "nGram_analyzer": {
          "type": "custom",
          "tokenizer": "whitespace",
          "filter": [
            "lowercase",
            "asciifolding",
            "nGram_filter"
          ]
        },
        "whitespace_analyzer": {
          "type": "custom",
          "tokenizer": "whitespace",
          "filter": [
            "lowercase",
            "asciifolding"
          ]
        }
      }
    }
  },
  "mappings": {
    "movie": {
      "_all": {
        "index_analyzer": "nGram_analyzer",
        "search_analyzer": "whitespace_analyzer"
      },
      "properties": {
        "movieName": {
          "type": "string",
          "index": "not_analyzed"
        },
        "movieYear": {
          "type": "double"
        },
        "imageUrl": {
          "type": "string"
        },
        "genre": {
          "type": "string"
        },
        "director": {
          "type": "string"
        },
        "producer": {
          "type": "string"
        },
        "cast": {
          "type": "string"
        },
        "writer": {
          "type": "string"
        },
        "synopsis": {
          "type": "string"
        },
        "rating": {
          "type": "double"
        },
        "price": {
          "type": "double"
        },
        "format": {
          "type": "string"
        },
        "offer": {
          "type": "double"
        },
        "offerString": {
          "type": "string"
        },
        "language": {
          "type": "string"
        }
      }
    }
  }
}
The error is quite clear if you ask me: you need to specify an analyzer for _all in your movie mapping. The index_analyzer setting was removed in Elasticsearch 2.0, so use analyzer instead:
"_all": {
"analyzer": "nGram_analyzer",
"search_analyzer": "whitespace_analyzer"
},
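After fixing the _all block, recreating the index should succeed. A quick way to verify, assuming the corrected request body is saved as mapping.json and the index is named movies (both names are illustrative):
curl -XPUT 'localhost:9200/movies' -d @mapping.json
curl -XGET 'localhost:9200/movies/_mapping/movie'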
