Elasticsearch error: operation [search] and lang [groovy] is disabled

I am using Elasticsearch 1.7.1, and when I try to use script_score or script_fields it shows the error ScriptException[scripts of type [inline], operation [search] and lang [groovy] are disabled]. Can anyone please tell me how I can remove this error? My code is given below:
function_score: {
  query: {
    query_string: {
      query: shop_search,
      fields: [ 'shop_name' ]
    }
  },
  functions: [
    {
      script_score: {
        script: "_score * doc['location'].value"
      }
    }
  ]
}

Add script.engine.groovy.inline.search: on to the elasticsearch.yml configuration file and restart the node.

Adding script.groovy.sandbox.enabled: true to the .yml works for me.
For ES Version 2.x+
script.inline: on
script.indexed: on

Add
script.engine.groovy.inline.aggs: on
script.engine.groovy.inline.update: on
to elasticsearch.yml
and restart
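Once the node is back up, you can verify that inline Groovy scripting works with a quick request (the index name myindex is just a placeholder):
curl -XPOST 'localhost:9200/myindex/_search' -d '{
  "query": {
    "function_score": {
      "query": { "match_all": {} },
      "functions": [
        { "script_score": { "script": "_score * 2" } }
      ]
    }
  }
}'
If scripting is still disabled, you will get the same ScriptException back.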

For those with ES 2.x+
script.inline: true
script.indexed: true
Make sure you prefix the lines with a space!


how to write the correct pipeline: Jenkins, Docker, Groovy, node

I am rewriting my pipeline in node (scripted) syntax, and I need to understand how to perform the post step in node. Right now an error is coming from stage('Deploy'):
node {
  checkout scm
  def customImage = docker.build("python-web-tests:${env.BUILD_ID}")
  customImage.inside {
    sh "python ${env.CMD_PARAMS}"
  }
  stage('Deploy') {
    post {
      always {
        allure([
          includeProperties: false,
          jdk: '',
          properties: [],
          reportBuildPolicy: 'ALWAYS',
          results: [[path: 'report']]
        ])
        cleanWs()
      }
    }
  }
}
And this is the old pipeline:
pipeline {
  agent { label "slave_first" }
  stages {
    stage("Creating the container image") {
      steps {
        catchError {
          script {
            docker.build("python-web-tests:${env.BUILD_ID}", "-f Dockerfile .")
          }
        }
      }
    }
    stage("Running and debugging the test") {
      steps {
        sh 'ls'
        sh 'docker run --rm -e REGION=${REGION} -e DATA=${DATA} -e BUILD_DESCRIPTION=${BUILD_URL} -v ${WORKSPACE}:/tmp python-web-tests:${BUILD_ID} /bin/bash -c "python ${CMD_PARAMS} || exit_code=$?; chmod -R 777 /tmp; exit $exit_code"'
      }
    }
  }
  post {
    always {
      allure([
        includeProperties: false,
        jdk: '',
        properties: [],
        reportBuildPolicy: 'ALWAYS',
        results: [[path: 'report']]
      ])
      cleanWs()
    }
  }
}
I tried to transfer the method of creating the Allure report, but nothing worked. With the version above almost everything works out. Can I also add environment variables to the build, for example the ones specified with -e DATA=${DATA}? How do I add them?
I don't recommend switching from a declarative to a scripted pipeline.
You lose the ability to use the tooling connected with the declarative approach, such as syntax checkers.
If you still want to use the scripted approach, try this:
node('slave_first') {
  stage('Build') {
    checkout scm
    def customImage = docker.build("python-web-tests:${env.BUILD_ID}")
    customImage.inside {
      sh "python ${env.CMD_PARAMS}"
    }
  }
  stage('Deploy') {
    allure([
      includeProperties: false,
      jdk: '',
      properties: [],
      reportBuildPolicy: 'ALWAYS',
      results: [[path: 'report']]
    ])
    cleanWs()
  }
}
There are no post and always directives in scripted pipelines; it is on you to catch all exceptions and set the status of the job, as sketched below. I guess you were using this page: https://www.jenkins.io/doc/book/pipeline/syntax/, but that is a mistake: the page only covers the declarative approach, and in a few cases the examples contain hidden scripted code.
Also, I don't know if you have a default agent label set in your Jenkins config, but looking at your declarative pipeline I think you missed the 'slave_first' argument in the node step.
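For instance, a minimal sketch of the node block above using try/finally, so that Allure and cleanWs() run whether or not the build steps fail:
node('slave_first') {
  try {
    stage('Build') {
      checkout scm
      def customImage = docker.build("python-web-tests:${env.BUILD_ID}")
      customImage.inside {
        sh "python ${env.CMD_PARAMS}"
      }
    }
  } finally {
    // the scripted equivalent of post { always { ... } }
    allure([
      includeProperties: false,
      jdk: '',
      properties: [],
      reportBuildPolicy: 'ALWAYS',
      results: [[path: 'report']]
    ])
    cleanWs()
  }
}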
those that are specified -e DATA=${DATA} how do I add it
That's a Docker question, not a Jenkins one. If you want to launch a Docker image and then also have access to reports located in that container, you should mount the workspace/folder where those output files land. You should also pass the location of those files to Allure.
I suggest you try this (sketched below):
mount some subfolder of the workspace into the Docker container
cat the test report file to check that it's visible
add the Allure report step, passing that file location to the allure step
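For the -e DATA=${DATA} part, the Docker Pipeline plugin lets you pass extra docker run arguments to inside(). A minimal sketch, assuming DATA is defined on the job and the tests write into report/ (both placeholders):
customImage.inside("-e DATA=${env.DATA} -v ${env.WORKSPACE}/report:/tmp/report") {
  // the container sees DATA and writes its reports into the mounted workspace folder
  sh "python ${env.CMD_PARAMS}"
  sh "ls /tmp/report"  // check that the report files are visible
}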

Puppet concatenate list conditionally

I am trying to deploy the fail2ban Apache jails only if Apache is actually installed. I have a fact for that, and it works.
# fail2ban
$jails = [
  'ssh', 'ssh-ddos',
  'pam-generic'
] + if $f2b_enable_apache { ['apache-auth', 'apache-badbots', 'apache-multiport', 'apache-noscript', 'apache-overflows'] }

notify { "Enable apache jails: ${f2b_enable_apache}": }
notify { "Jails: ${jails}": }

class { 'fail2ban':
  package_ensure => 'latest',
  jails          => $jails,
}
When I run it, though, I get the following output.
Without apache:
Puppet : Enable apache jails: false
Puppet : Jails: [ssh, ssh-ddos, pam-generic, apache-auth, apache-badbots, apache-multiport, apache-noscript, apache-overflows]
With apache:
Puppet : Enable apache jails: true
Puppet : Jails: [ssh, ssh-ddos, pam-generic, apache-auth, apache-badbots, apache-multiport, apache-noscript, apache-overflows]
What am I doing wrong? Why are the Apache jails appended in both cases? Is there a better way to achieve this that is extensible?
I would likely use a selector expression for this:
$jails = $f2b_enable_apache ? {
  true  => ['ssh', 'ssh-ddos', 'pam-generic', 'apache-auth', 'apache-badbots', 'apache-multiport', 'apache-noscript', 'apache-overflows'],
  false => ['ssh', 'ssh-ddos', 'pam-generic'],
}
There are indeed ways to use Array[String] concatenation here, but they become messy because the Puppet DSL enforces the immutability of variables. This approach uses one variable, one conditional expression, and no lambda iterator functions.
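If you want something more extensible, the same selector can feed an explicit concatenation, keeping the base list in one place. A sketch, assuming $f2b_enable_apache is a real Boolean:
$base_jails   = ['ssh', 'ssh-ddos', 'pam-generic']
$apache_jails = $f2b_enable_apache ? {
  true    => ['apache-auth', 'apache-badbots', 'apache-multiport', 'apache-noscript', 'apache-overflows'],
  default => [],
}
$jails = $base_jails + $apache_jails
As an aside: if the fact arrives as the string 'false' rather than a Boolean, it is truthy in Puppet, which would explain why the Apache jails were appended in both of your runs.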

ESLint: override rule by nested directory

I want to disable a rule for all files inside a nested directory. I have found examples only for an exact path or by file extension, but that is not what I want.
We use this for a shared config and don't know where the directory will be; we have many of them.
I'm trying a config like this:
{
  overrides: [
    {
      files: [
        '**/test/**/*',
      ],
      rules: {
        "import/no-extraneous-dependencies": "off"
      }
    },
  ],
}
But globs like **/test/**/* and many others did not work.
Can someone help me reach this goal?
The above code should work.
How were you testing this? If it's through an extension like VSCode, you may need to reload to see the latest definitions.
If you are using an eslint service like esprint, you will also need to restart it to grab the latest definitions.
Caching
Make sure that eslint is not configured to cache results, to avoid having to cache-bust when debugging things (see the eslint docs); an example follows below.
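For example, if your lint script happens to pass --cache (hypothetical), drop the flag and remove the default cache file before rerunning:
rm -f .eslintcache   # ESLint's default cache location
npx eslint .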
Here's an example for a react-native app with multiple overrides
module.exports = {
  ...baseConfig,
  overrides: [
    typescriptOverrides,
    e2eOverrides,
    themeOverrides,
    {
      files: ['**/*.style.js'],
      rules: {
        'sort-keys': [
          'error',
          'asc',
          {
            caseSensitive: true,
            natural: true,
          },
        ],
      },
    },
    {
      files: ['**/*.test.js'],
      rules: {
        'arrow-body-style': 'off',
      },
    },
  ],
};
Debugging the glob matcher
Run eslint in debug mode to see all the files being checked, for example: DEBUG=eslint:cli-engine npx eslint src/**/*.test.js
You can also test your glob patterns by running an ls command. For example, ls ./src/**/*.test.js will either return all the matching files or 'no matches found'.

How to use wait-for-sync properly

For experiments with a single-node configuration I run ArangoDB with this command:
arangod --server.endpoint=tcp://0.0.0.0:8529 --server.disable-authentication=true --database.wait-for-sync=true
Then I do a few commands:
db._createDatabase("foo")
db._useDatabase("foo")
db._create("a")
db.a.properties()
I get this result:
{
  "doCompact" : true,
  "journalSize" : 33554432,
  "isSystem" : false,
  "isVolatile" : false,
  "waitForSync" : false,
  "keyOptions" : {
    "type" : "traditional",
    "allowUserKeys" : true
  },
  "indexBuckets" : 8
}
So where is my "waitForSync": true default? Where is my mistake?
I can confirm your problem using ArangoDB 2.8.7 and arangosh. This is a bug: if the same is done on the server console (with --console), it works.
From arangosh the request goes via the HTTP API, where the default of false for waitForSync is added and the command-line option is ignored; that is the bug. I will make sure that this is fixed in the next release of ArangoDB.
In the meantime, please add "waitForSync": true to all db._create calls in arangosh and to all POST /_api/collection API calls via HTTP.
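For example, the arangosh workaround looks like this:
db._create("a", { "waitForSync": true })
db.a.properties()   // should now report "waitForSync" : true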

how to implement unit or integration tests for a Logstash configuration?

With Logstash 1.2.1 one can now use conditionals to do various things, and even an earlier version's conf file can get complicated if one is managing many log files and implementing metric extraction.
After looking at this comprehensive example, I really wondered: how can I detect any breakages in such a configuration?
Any ideas?
For a syntax check, there is --configtest:
java -jar logstash.jar agent --configtest --config <yourconfigfile>
To test the logic of the configuration you can write rspec tests. This is an example rspec file to test a haproxy log filter:
require "test_utils"
describe "haproxy logs" do
extend LogStash::RSpec
config <<-CONFIG
filter {
grok {
type => "haproxy"
add_tag => [ "HTTP_REQUEST" ]
pattern => "%{HAPROXYHTTP}"
}
date {
type => 'haproxy'
match => [ 'accept_date', 'dd/MMM/yyyy:HH:mm:ss.SSS' ]
}
}
CONFIG
sample({'#message' => '<150>Oct 8 08:46:47 syslog.host.net haproxy[13262]: 10.0.1.2:44799 [08/Oct/2013:08:46:44.256] frontend-name backend-name/server.host.net 0/0/0/1/2 404 1147 - - ---- 0/0/0/0/0 0/0 {client.host.net||||Apache-HttpClient/4.1.2 (java 1. 5)} {text/html;charset=utf-8|||} "GET /app/status HTTP/1.1"',
'#source_host' => '127.0.0.1',
'#type' => 'haproxy',
'#source' => 'tcp://127.0.0.1:60207/',
}) do
insist { subject["#fields"]["backend_name"] } == [ "backend-name" ]
insist { subject["#fields"]["http_request"] } == [ "/app/status" ]
insist { subject["tags"].include?("HTTP_REQUEST") }
insist { subject["#timestamp"] } == "2013-10-08T06:46:44.256Z"
reject { subject["#timestamp"] } == "2013-10-08T06:46:47Z"
end
end
This will, based on a given filter configuration, run input samples and test if the expected output is produced.
To run the test, save it as haproxy_spec.rb and run logstash rspec:
java -jar logstash.jar rspec haproxy_spec.rb
There are lots of spec examples in the Logstash source repository.
Since Logstash has been upgraded, the command is now something like this (pointing at the config folder):
/opt/logstash/bin/logstash agent --configtest -f /etc/logstash/logstash-indexer/conf.d
If you see a warning but the error messages are mixed together and you can't tell which file has the issue, you have to check the files one by one, as sketched below.
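A minimal sketch of that one-by-one check (adjust the config directory to your layout):
for f in /etc/logstash/logstash-indexer/conf.d/*.conf; do
  echo "== $f"
  /opt/logstash/bin/logstash agent --configtest -f "$f"
done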
