How to use wait-for-sync properly - ArangoDB

For experiments with single node configuration I run ArangoDB with the command:
arangod --server.endpoint=tcp://0.0.0.0:8529 --server.disable-authentication=true --database.wait-for-sync=true
Then I do a few commands:
db._createDatabase("foo")
db._useDatabase("foo")
db._create("a")
db.a.properties()
Get the result:
{
  "doCompact" : true,
  "journalSize" : 33554432,
  "isSystem" : false,
  "isVolatile" : false,
  "waitForSync" : false,
  "keyOptions" : {
    "type" : "traditional",
    "allowUserKeys" : true
  },
  "indexBuckets" : 8
}
So where is my "waitForSync": true by default? Where did I make a mistake?

I can confirm your problem using ArangoDB 2.8.7 and arangosh. This is a bug. If the same is done on the console (with --console), it works.
From arangosh the request goes via the HTTP API, where a default of "false" for "waitForSync" is added and the command line option is ignored; that is the bug. I will make sure that this is fixed in the next release of ArangoDB.
In the meantime, please add "waitForSync": true to all db._create calls in arangosh and to all POST /_api/collection API calls via HTTP, as sketched below.
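For example, the workaround looks like this (a minimal sketch; the collection "a" and database "foo" are the ones from the question):
// in arangosh: pass the property explicitly so the HTTP-API default cannot override it
db._create("a", { waitForSync: true });
db.a.properties().waitForSync; // now returns true

// the equivalent HTTP API call
// curl -X POST http://localhost:8529/_db/foo/_api/collection \
//   --data '{ "name": "a", "waitForSync": true }'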

Cypress build error in Azure pipeline: Cannot find module '#cypress/code-coverage/task'

Here is my config:
// cypress/plugins/index.js
module.exports = (on, config) => {
  require('#cypress/code-coverage/task')(on, config);
  // require('#bahmutov/cypress-extends')(on, config);
  return config;
};
I am getting an ERROR when trying to run Cypress in an Azure pipeline script (within a cypress/included container). This error doesn't occur when I run locally.
The function exported by the plugins file threw an error.
We invoked the function exported by `/root/e2e/cypress/plugins/index.js`, but it threw an error.
Error: Cannot find module '#cypress/code-coverage/task'
Require stack:
- /root/e2e/cypress/plugins/index.js
- /root/.cache/Cypress/9.1.1/Cypress/resources/app/packages/server/lib/plugins/child/run_plugins.js
The only unusual thing I am doing is this:
// cypress/config/cypress.local.json
{
  "extends": "../../cypress.json",
  "baseUrl": "https://localhost:4200"
}
And a normal cypress.json config:
// /cypress.json
{
  "baseUrl": "http://localhost:4200",
  "proxyUrl": "",
  "defaultCommandTimeout": 10000,
  "video": false,
  "screenshotOnRunFailure": true,
  "experimentalStudio": true,
  "projectId": "seixri",
  "trashAssetsBeforeRuns": true,
  "videoUploadOnPasses": false,
  "retries": {
    "runMode": 0,
    "openMode": 0
  },
  "viewportWidth": 1000,
  "viewportHeight": 1200
}
The problem here might be that Cypress does not support extending the configuration file in the way you did, as also stated here: https://www.cypress.io/blog/2020/06/18/extending-the-cypress-config-file/
In my opinion there are two suitable solution approaches:
Approach 1: Use separate configuration files (my recommendation)
As extending an existing configuration file does not work, I would recommend having separate configuration files, e.g. one for local usage and one for execution in Azure pipelines. You could then simply add two separate commands to your package.json, like:
"scripts": {
"cy:ci": "cypress run --config-file cypress/cypress.json",
"cy:local": "cypress run --config-file cypress/cypress.local.json"
},
Docs: https://docs.cypress.io/guides/references/configuration
Approach 2: Set configuration options in your tests
Cypress gives you the option to overwrite configurations directly in your tests. For example, if you have configured the following in cypress.json:
{
  "viewportWidth": 1280,
  "viewportHeight": 720
}
You can change the viewportWidth in your test like:
Cypress.config('viewportWidth', 800)
Docs: https://docs.cypress.io/api/cypress-api/config#Syntax
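For instance, inside a spec file (a minimal sketch; the test name and URL are made up):
// hypothetical spec: override the viewport at run time
describe('narrow layout', () => {
  it('renders correctly at 800px', () => {
    Cypress.config('viewportWidth', 800); // overrides the value from cypress.json
    cy.visit('/');
    // assertions go here
  });
});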

"[circuit_breaking_exception] [parent]" Data too large, data for "[<http_request>]" would be error

After working smoothly for more than 10 months, I suddenly started getting this error in production while doing simple search queries.
{
  "error" : {
    "root_cause" : [
      {
        "type" : "circuit_breaking_exception",
        "reason" : "[parent] Data too large, data for [<http_request>] would be [745522124/710.9mb], which is larger than the limit of [745517875/710.9mb]",
        "bytes_wanted" : 745522124,
        "bytes_limit" : 745517875
      }
    ],
    "type" : "circuit_breaking_exception",
    "reason" : "[parent] Data too large, data for [<http_request>] would be [745522124/710.9mb], which is larger than the limit of [745517875/710.9mb]",
    "bytes_wanted" : 745522124,
    "bytes_limit" : 745517875
  },
  "status" : 503
}
Initially, I was getting this circuit_breaking_exception error while doing simple term queries. To debug it, I tried the _cat/health query on the Elasticsearch cluster, but got the same error; even the simplest request to localhost:9200 gives the same error. Not sure what happened to the cluster so suddenly.
Here is my circuit breaker status:
"breakers" : {
"request" : {
"limit_size_in_bytes" : 639015321,
"limit_size" : "609.4mb",
"estimated_size_in_bytes" : 0,
"estimated_size" : "0b",
"overhead" : 1.0,
"tripped" : 0
},
"fielddata" : {
"limit_size_in_bytes" : 639015321,
"limit_size" : "609.4mb",
"estimated_size_in_bytes" : 406826332,
"estimated_size" : "387.9mb",
"overhead" : 1.03,
"tripped" : 0
},
"in_flight_requests" : {
"limit_size_in_bytes" : 1065025536,
"limit_size" : "1015.6mb",
"estimated_size_in_bytes" : 560,
"estimated_size" : "560b",
"overhead" : 1.0,
"tripped" : 0
},
"accounting" : {
"limit_size_in_bytes" : 1065025536,
"limit_size" : "1015.6mb",
"estimated_size_in_bytes" : 146387859,
"estimated_size" : "139.6mb",
"overhead" : 1.0,
"tripped" : 0
},
"parent" : {
"limit_size_in_bytes" : 745517875,
"limit_size" : "710.9mb",
"estimated_size_in_bytes" : 553214751,
"estimated_size" : "527.5mb",
"overhead" : 1.0,
"tripped" : 0
}
}
I found a similar GitHub issue that suggests increasing the circuit breaker memory or disabling it, but I am not sure which to choose. Please help!
Elasticsearch Version 6.3
After some more research, I finally found a solution:
1. We should not disable the circuit breaker, as that might result in an OOM error and eventually crash Elasticsearch.
2. Dynamically increasing the circuit breaker memory percentage works, but it is only a temporary fix, because the increased limit may eventually fill up as well (see the sketch below).
3. Finally, there is a third option: increase the overall JVM heap size, which is 1 GB by default. As recommended, it can go up to roughly 30-32 GB on production hardware, and it should stay below 50% of the total available memory.
For more info on good JVM memory configuration for Elasticsearch in production, see Heap: Sizing and Swapping.
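For reference, option 2 is done through the dynamic cluster settings API; a minimal sketch (the 80% value is an arbitrary example, not a recommendation):
PUT _cluster/settings
{
  "transient": {
    "indices.breaker.total.limit": "80%"
  }
}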
In my case I have an index with large documents; each document is ~30 KB and has more than 130 fields (nested objects, arrays, dates and ids). I was searching all fields using this DSL query:
query_string: {
  query: term,
  analyze_wildcard: true,
  fields: ['*'], // search all fields
  fuzziness: 'AUTO'
}
Full-text searches are expensive, and searching through multiple fields at once is even more expensive (in terms of computing power, not storage). Therefore:
"The more fields a query_string or multi_match query targets, the slower it is. A common technique to improve search speed over multiple fields is to copy their values into a single field at index time, and then use this field at search time."
Please refer to the Elasticsearch docs, which recommend searching as few fields as possible, with the help of the copy_to directive.
After I changed my query to search one field:
query_string: {
  query: term,
  analyze_wildcard: true,
  fields: ['search_field'] // search in one field
}
everything worked like a charm.
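For completeness, a minimal mapping sketch of the copy_to technique the docs describe (7.x mapping syntax; all field names except search_field are hypothetical):
{
  "mappings": {
    "properties": {
      "title": { "type": "text", "copy_to": "search_field" },
      "description": { "type": "text", "copy_to": "search_field" },
      "search_field": { "type": "text" }
    }
  }
}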
I got this error with my Docker container, so I increased ES_JAVA_OPTS to 1 GB, and now it works without any error.
Here is my docker-compose.yml:
version: '3'
services:
  elasticsearch-cont:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
    container_name: elasticsearch
    environment:
      - "ES_JAVA_OPTS=-Xms1024m -Xmx1024m"
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - elastic
networks:
  elastic:
    driver: bridge
In my case, I also have an index with large documents, which stores system run logs, and I was searching the index across all fields. I use the Java client API, like this:
// term query on the "uid" field; by default the full _source of every
// hit is fetched, which is expensive for large documents
TermQueryBuilder termQueryBuilder = QueryBuilders.termQuery("uid", uid);
searchSourceBuilder.query(termQueryBuilder);
When I changed my code like this:
TermQueryBuilder termQueryBuilder = QueryBuilders.termQuery("uid", uid);
// return only the "uid" field and skip loading the large _source entirely
searchSourceBuilder.fetchField("uid");
searchSourceBuilder.fetchSource(false);
searchSourceBuilder.query(termQueryBuilder);
the error disappeared.

Suppressing nightwatchjs warnings in terminal output

I'm using Nightwatch.js to run my test suite, and I would like to remove the warning messages being written to my terminal.
At the moment I'm getting loads of these (admittedly genuine) warning messages while my scripts are running, and they're making the results harder and harder to read.
Yes, they are valid messages, but it's not often possible for me to uniquely pick out each individual element, and I'm not interested in them in my output.
So, I'd like to know how I can stop them from being reported in my terminal.
Below is what I've tried so far in my nightwatch.conf.js config file:
desiredCapabilities: {
  browserName: 'chrome',
  javascriptEnabled: true,
  acceptSslCerts: true,
  acceptInsecureCerts: true,
  chromeOptions: {
    args: [
      '--ignore-certificate-errors',
      '--allow-running-insecure-content',
      '--disable-web-security',
      '--disable-infobars',
      '--disable-popup-blocking',
      '--disable-notifications',
      '--log-level=3'
    ],
    prefs: {
      'profile.managed_default_content_settings.popups': 1,
      'profile.managed_default_content_settings.notifications': 1
    }
  }
},
but it's still displaying the warnings.
Any help on this would be really appreciated.
Many thanks.
You can try setting the detailed_output property to false in the configuration file. This should stop these details from printing to the console.
You can find a sample config file here.
You can find the relevant details under the Output Settings section of the official docs here.
Update 1: It looks like a combination of properties controls this; the combo below works for me:
live_output: false,
silent: true,
output: true,
detailed_output: false,
disable_error_log: false,
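For context, these are top-level settings in the Nightwatch config; a minimal sketch of where they might live (everything else elided):
// nightwatch.conf.js (sketch; only the output-related flags shown)
module.exports = {
  live_output: false,
  silent: true,             // also suppresses verbose HTTP command logging
  output: true,
  detailed_output: false,
  disable_error_log: false,
  // src_folders, test_settings, etc. as usual
};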

Elasticsearch error: operation [search] and lang [groovy] is disabled?

I am using Elasticsearch 1.7.1, and when I try to use script_score or script_fields I get the error ScriptException[scripts of type [inline], operation [search] and lang [groovy] are disabled]. Can anyone please tell me how I can remove this error? My code is given below:
function_score: {
  query: {
    query_string: {
      query: shop_search,
      fields: ['shop_name']
    }
  },
  functions: [
    {
      script_score: {
        script: "_score * doc['location'].value"
      }
    }
  ]
}
Add script.engine.groovy.inline.search: on to the elasticsearch.yml configuration file and restart the node.
Adding script.groovy.sandbox.enabled: true to the .yml works for me.
For ES Version 2.x+
script.inline: on
script.indexed: on
Add
script.engine.groovy.inline.aggs: on
script.engine.groovy.inline.update: on
to elasticsearch.yml and restart.
For those with ES 2.x+
script.inline: true
script.indexed: true
Make sure you prefix the lines with a space!
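Putting the 1.x answers together, a minimal elasticsearch.yml sketch for the asker's version (1.7.1); the node must be restarted for these to take effect:
# elasticsearch.yml (ES 1.x): enable inline Groovy scripts per operation
script.engine.groovy.inline.search: on
script.engine.groovy.inline.aggs: on
script.engine.groovy.inline.update: on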

Unable to execute nightwatch tests on chrome using Linux

Here's the bit in question from my nightwatch.json file:
"selenium" : {
"start_process" : true,
"server_path" : "lib/selenium-server-standalone.jar",
"log_path" : "test_logs"
},
"test_settings" : {
"jenkins" : {
"launch_url" : "url not disclosed",
"selenium_port" : 4444,
"selenium_host" : "jenkins.undisclosed-cloud.com",
"cli_args" : {
"webdriver.chrome.driver" : "/usr/local/bin/chromedriver"
},
"desiredCapabilities": {
"browserName": "chrome",
"javascriptEnabled": true,
"acceptSslCerts": true,
"platform" : "LINUX"
}
}
}
If I change the browserName to firefox, the test runs fine on the specified Linux server, which is running in a Docker container.
But when I choose chrome, I get the error:
Connection refused! Is selenium server started?
I've seen this error before on my local machine and managed to fix it by adding chromedriver to the path. I thought it would be the same issue on this Linux server, but that did not resolve it. I went onto the Linux box and verified that I can start chromedriver directly at
/usr/local/bin/chromedriver
By the way, I have verified that I'm on a 64-bit Linux machine and the symlinks are all set.
Linux version: Linux 3.11.0-26-generic | v2.43.1 | r5163bce
ERROR LOG AFTER RUNNING TEST WITH --verbose
INFO Request: POST /wd/hub/session
- data: {"desiredCapabilities":{"browserName":"chrome","javascriptEnabled":true,"acceptSslCerts":true,"platform":"LINUX","name":"Free Resource Download Test"}}
- headers: {"Content-Type":"application/json; charset=utf-8","Content-Length":151}
ERROR Response 500 POST /wd/hub/session { status: 13,
  sessionId: null,
  value:
    { message: 'chrome not reachable\n
So, this is a Docker container, and the problem is with Chrome and Docker.
You have two options: either run the container with the "--privileged" parameter, or run Chrome with the "--no-sandbox" argument.
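A minimal sketch of the second option, reusing the desiredCapabilities block from the question (only that block shown):
"desiredCapabilities": {
  "browserName": "chrome",
  "javascriptEnabled": true,
  "acceptSslCerts": true,
  "platform": "LINUX",
  "chromeOptions": {
    "args": ["--no-sandbox"]
  }
}
For the first option, the container would instead be started with docker run --privileged (plus whatever flags you already use).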
