I want to query a custom record based on a custom field in the custom record in Anypoint Studio. I tried using the Search operation of the NetSuite connector, but it seems I am unable to write the DataWeave code for it.
I used the code below:
%dw 2.0
output application/java
---
{
    customFieldList: {
        customField: [{
            internalId: "8",
            scriptId: "abc"
        } as Object {
            class: "org.mule.module.netsuite.extension.api.ListOrRecordRef"
        }]
    } as Object {
        class: "org.mule.module.netsuite.extension.api.CustomFieldList"
    },
    recType: {
        internalId: "10078"
    }
} as Object {
    class: "org.mule.module.netsuite.extension.api.CustomRecordSearchBasic"
}
I used the basic custom record search. I am getting the error below:
Message : null
Element : testFlow2/processors/0 # test:test.xml:18 (Transform Message)
Element DSL : <ee:transform doc:name="Transform Message" doc:id="c6dbb0ee-4979-4667-bab1-e0b5d72a6b2b">
<ee:message>
<ee:set-payload>%dw 2.0
output application/java
---
{
customFieldList: {
customField: [{
internalId: "8",
scriptId: "abc"
} as Object {
class : "org.mule.module.netsuite.extension.api.ListOrRecordRef"
} ]
} as Object {
class : "org.mule.module.netsuite.extension.api.CustomFieldList"
},
recType: {
internalId: "10078"
}
} as Object {
class : "org.mule.module.netsuite.extension.api.CustomRecordSearchBasic"
}</ee:set-payload>
</ee:message>
</ee:transform>
Error type : MULE:UNKNOWN
FlowStack : at testFlow2(testFlow2/processors/0 # test:test.xml:18 (Transform Message))
--------------------------------------------------------------------------------
Root Exception stack trace:
java.lang.NullPointerException
at org.mule.weave.v2.module.pojo.exception.CannotInstantiateException.message(CannotInstantiateException.scala:7)
at org.mule.weave.v2.parser.exception.LocatableException.getMessage(LocatableException.scala:18)
at org.mule.weave.v2.parser.exception.LocatableException.getMessage$(LocatableException.scala:15)
at org.mule.weave.v2.module.pojo.exception.CannotInstantiateException.getMessage(CannotInstantiateException.scala:6)
at org.mule.runtime.core.internal.el.dataweave.DataWeaveExpressionLanguageAdaptor$1.handledException(DataWeaveExpressionLanguageAdaptor.java:298)
at org.mule.runtime.core.internal.el.dataweave.DataWeaveExpressionLanguageAdaptor$1.evaluate(DataWeaveExpressionLanguageAdaptor.java:309)
at org.mule.runtime.core.internal.el.DefaultExpressionManagerSession.evaluate(DefaultExpressionManagerSession.java:105)
at com.mulesoft.mule.runtime.core.internal.processor.SetPayloadTransformationTarget.process(SetPayloadTransformationTarget.java:32)
at com.mulesoft.mule.runtime.core.internal.processor.TransformMessageProcessor.lambda$0(TransformMessageProcessor.java:92)
at java.util.Optional.ifPresent(Optional.java:159)
at com.mulesoft.mule.runtime.core.internal.processor.TransformMessageProcessor.process(TransformMessageProcessor.java:92)
at org.mule.runtime.core.api.util.func.CheckedFunction.apply(CheckedFunction.java:25)
at org.mule.runtime.core.api.rx.Exceptions.lambda$checkedFunction$2(Exceptions.java:84)
at org.mule.runtime.core.internal.util.rx.Operators.lambda$nullSafeMap$0(Operators.java:47)
at reactor.core.* (1 elements filtered from stack; set debug level logging or '-Dmule.verbose.exceptions=true' for everything)(Unknown Source)
at org.mule.runtime.core.privileged.processor.chain.* (2 elements filtered from stack; set debug level logging or '-Dmule.verbose.exceptions=true' for everything)(Unknown Source)
at reactor.core.* (6 elements filtered from stack; set debug level logging or '-Dmule.verbose.exceptions=true' for everything)(Unknown Source)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.mule.service.scheduler.internal.AbstractRunnableFutureDecorator.doRun(AbstractRunnableFutureDecorator.java:111)
at org.mule.service.scheduler.internal.RunnableFutureDecorator.run(RunnableFutureDecorator.java:54)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I don't know where I am going wrong.
Any help is appreciated.
Thank you so much, but I figured it out somehow and found the solution: I just added a class and it worked. I can still accept other solutions, though.
Thank you again.
After working smoothly for more than 10 months, I suddenly started getting this error in production while running simple search queries.
{
"error" : {
"root_cause" : [
{
"type" : "circuit_breaking_exception",
"reason" : "[parent] Data too large, data for [<http_request>] would be [745522124/710.9mb], which is larger than the limit of [745517875/710.9mb]",
"bytes_wanted" : 745522124,
"bytes_limit" : 745517875
}
],
"type" : "circuit_breaking_exception",
"reason" : "[parent] Data too large, data for [<http_request>] would be [745522124/710.9mb], which is larger than the limit of [745517875/710.9mb]",
"bytes_wanted" : 745522124,
"bytes_limit" : 745517875
},
"status" : 503
}
Initially, I was getting this circuit_breaking_exception error while running simple term queries. To debug it, I tried the _cat/health query on the Elasticsearch cluster, but I still got the same error; even the simplest request to localhost:9200 gives the same error. I am not sure what happened to the cluster so suddenly.
Here is my circuit breaker status:
"breakers" : {
"request" : {
"limit_size_in_bytes" : 639015321,
"limit_size" : "609.4mb",
"estimated_size_in_bytes" : 0,
"estimated_size" : "0b",
"overhead" : 1.0,
"tripped" : 0
},
"fielddata" : {
"limit_size_in_bytes" : 639015321,
"limit_size" : "609.4mb",
"estimated_size_in_bytes" : 406826332,
"estimated_size" : "387.9mb",
"overhead" : 1.03,
"tripped" : 0
},
"in_flight_requests" : {
"limit_size_in_bytes" : 1065025536,
"limit_size" : "1015.6mb",
"estimated_size_in_bytes" : 560,
"estimated_size" : "560b",
"overhead" : 1.0,
"tripped" : 0
},
"accounting" : {
"limit_size_in_bytes" : 1065025536,
"limit_size" : "1015.6mb",
"estimated_size_in_bytes" : 146387859,
"estimated_size" : "139.6mb",
"overhead" : 1.0,
"tripped" : 0
},
"parent" : {
"limit_size_in_bytes" : 745517875,
"limit_size" : "710.9mb",
"estimated_size_in_bytes" : 553214751,
"estimated_size" : "527.5mb",
"overhead" : 1.0,
"tripped" : 0
}
}
I found a similar GitHub issue that suggests increasing the circuit breaker memory or disabling it altogether, but I am not sure which to choose. Please help!
Elasticsearch Version 6.3
After some more research, I finally found a solution:
We should not disable the circuit breaker, as it might result in an OOM error and eventually crash Elasticsearch.
Dynamically increasing the circuit breaker memory percentage is good, but it is also only a temporary solution, because in the end the increased limit might fill up as well (see the sketch below).
Finally, we have a third option: increase the overall JVM heap size, which is 1 GB by default. As recommended for production, it should be less than 50% of the total available memory and should stay below roughly 30-32 GB.
For more info on good JVM memory configuration for Elasticsearch in production, check Heap: Sizing and Swapping.
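If you go with the second option, a minimal sketch of raising the parent breaker limit dynamically looks like the following (indices.breaker.total.limit is the documented parent breaker setting; the 80% value here is only an illustration):
PUT _cluster/settings
{
  "transient" : {
    "indices.breaker.total.limit" : "80%"
  }
}
Keep in mind this only buys headroom; the heap itself stays the same size.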
In my case, I have an index with large documents: each document is ~30 KB and has more than 130 fields (nested objects, arrays, dates and ids), and I was searching all fields using this DSL query:
query_string: {
query: term,
analyze_wildcard: true,
fields: ['*'], // search all fields
fuzziness: 'AUTO'
}
Full-text searches are expensive, and searching through multiple fields at once is even more expensive; expensive in terms of computing power, not storage.
Therefore:
The more fields a query_string or multi_match query targets, the slower it is. A common technique to improve search speed over multiple fields is to copy their values into a single field at index time, and then use this field at search time.
Please refer to the Elasticsearch docs, which recommend searching as few fields as possible with the help of the copy_to directive; a rough sketch follows.
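As an illustration of that copy_to approach (the index and field names below are invented; on 6.x the properties block must additionally be nested under the mapping type name):
PUT my-index
{
  "mappings" : {
    "properties" : {
      "field_one" : { "type" : "text", "copy_to" : "search_field" },
      "field_two" : { "type" : "text", "copy_to" : "search_field" },
      "search_field" : { "type" : "text" }
    }
  }
}
The query can then target search_field alone instead of fields: ['*'].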
After I changed my query to search one field:
query_string: {
query: term,
analyze_wildcard: true,
fields: ['search_field'] // search in one field
}
everything worked like a charm.
I got this error with my Docker container, so I increased ES_JAVA_OPTS to 1 GB, and now it works without any error.
Here is the docker-compose.yml:
version: '3'
services:
elasticsearch-cont:
image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
container_name: elasticsearch
environment:
- "ES_JAVA_OPTS=-Xms1024m -Xmx1024m"
- discovery.type=single-node
ulimits:
memlock:
soft: -1
hard: -1
ports:
- 9200:9200
- 9300:9300
networks:
- elastic
networks:
elastic:
driver: bridge
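To verify that the container actually picked up the new heap size, one option (assuming the node is reachable on localhost:9200) is the cat nodes API:
GET _cat/nodes?v&h=name,heap.current,heap.percent,heap.max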
In my case, I also have an index with large documents, which store system runtime logs, and I was searching the index with all fields returned. I use the Java client API, like this:
TermQueryBuilder termQueryBuilder = QueryBuilders.termQuery("uid", uid);
searchSourceBuilder.query(termQueryBuilder);
When I changed my code like this:
TermQueryBuilder termQueryBuilder = QueryBuilders.termQuery("uid", uid);
searchSourceBuilder.fetchField("uid");       // return only the uid field
searchSourceBuilder.fetchSource(false);      // do not load the full _source of each hit
searchSourceBuilder.query(termQueryBuilder);
the error disappeared.
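For reference, the REST equivalent of the fixed request would be roughly the following (assuming a version where the fields option exists, 7.10+; the index name and uid value are placeholders):
GET my-index/_search
{
  "query" : { "term" : { "uid" : "some-uid" } },
  "fields" : [ "uid" ],
  "_source" : false
}
Skipping _source keeps the large documents out of each response, which is what was pushing the parent breaker over its limit here.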
I have a log string like:
2018-08-02 12:02:25.904 [http-nio-8080-exec-1] WARN o.s.w.s.m.s.DefaultHandlerExceptionResolver.handleTypeMismatch - Failed to bind request element
In the above string, [http-nio-8080-exec-1] is an optional field; it is only present in some log statements.
I created a grok pattern with some references from the net:
%{TIMESTAMP_ISO8601:timestamp} (\[%{DATA:thread}\])? %{LOGLEVEL:level}%{SPACE}%{JAVACLASS:class}\.%{DATA:method} - %{GREEDYDATA:loggedString}
It seems it's not working if I remove the thread name string.
You need to make the space character following the thread name optional by moving it inside the optional group: (\[%{DATA:thread}\] )?
input:
2018-08-02 12:02:25.904 WARN o.s.w.s.m.s.DefaultHandlerExceptionResolver.handleTypeMismatch - Failed to bind request element
pattern:
%{TIMESTAMP_ISO8601:timestamp} (\[%{DATA:thread}\] )?%{LOGLEVEL:level}%{SPACE}%{JAVACLASS:class}\.%{DATA:method} - %{GREEDYDATA:loggedString}
output:
{
"loggedString": "Failed to bind request element",
"method": "handleTypeMismatch",
"level": "WARN",
"class": "o.s.w.s.m.s.DefaultHandlerExceptionResolver",
"timestamp": "2018-08-02 12:02:25.904"
}
I am using Elasticsearch 1.7.1, and when I try to use script_score or script_fields I get the error ScriptException[scripts of type [inline], operation [search] and lang [groovy] are disabled]. Can anyone please tell me how I can remove this error? My code is given below:
function_score: {
query: {
query_string: {
query: shop_search,
fields: [ 'shop_name']
}
},
functions: [
{
script_score: {
script: "_score * doc['location'].value"
}
}
]
}
Add script.engine.groovy.inline.search: on to the elasticsearch.yml configuration file and restart the node.
Adding script.groovy.sandbox.enabled: true to the .yml works for me.
For ES version 2.x+:
script.inline: on
script.indexed: on
Also add
script.engine.groovy.inline.aggs: on
script.engine.groovy.inline.update: on
to elasticsearch.yml and restart.
For those with ES 2.x+
script.inline: true
script.indexed: true
Make sure you prefix the lines with a space!
The following Groovy script fails on the command line:
@Grab("org.apache.poi:poi:3.9")
println "test"
Error:
unexpected token: println @ line 2, column 1.
println "test"
^
1 error
Removing the @Grab, it works!
Anything I missed?
$>groovy -v
Groovy Version: 2.1.7 JVM: 1.7.0_25 Vendor: Oracle Corporation OS: Linux
Annotations can only be applied to certain targets. See SO: Why can't I do a method call after a @Grab declaration in a Groovy script?
@Grab("org.apache.poi:poi:3.9")
dummy = null
println "test"
Alternatively, you can use grab as a method call:
import static groovy.grape.Grape.grab
grab(group: "org.apache.poi", module: "poi", version: "3.9")
println "test"
For more information refer to Groovy Language Documentation > Dependency management with Grape.
File 'Grabber.groovy':
package org.taste

import groovy.grape.Grape

// artifacts => [[<group>, <module>, <version>, [<classifier>]], ...]
static def grab(List<List> artifacts) {
    ClassLoader classLoader = new groovy.lang.GroovyClassLoader()
    def eal = Grape.getEnableAutoDownload()  // remember the current auto-download setting
    Grape.setEnableAutoDownload(true)        // ensure missing artifacts get downloaded
    artifacts.each { artifact ->
        Map param = [
            classLoader: classLoader,
            group      : artifact.get(0),
            module     : artifact.get(1),
            version    : artifact.get(2),
            classifier : (artifact.size() < 4) ? null : artifact.get(3)
        ]
        println param
        Grape.grab(param)
    }
    Grape.setEnableAutoDownload(eal)         // restore the previous setting
}
Usage:
package org.taste
import org.taste.Grabber
Grabber.grab([
[ "org.codehaus.groovy.modules.http-builder", "http-builder", '0.7.1'],
[ "org.postgresql", "postgresql", '42.3.1', null ],
[ "com.oracle.database.jdbc", "ojdbc8", '12.2.0.1', null]
])