ERROR: Could not successfully connect to ElasticSearch. Check that your cluster state is not RED and that ElasticSearch is running properly - graylog2

I've referred to "Setup a graylog2 server with elasticsearch in a vagrant machine" and I have the correct version of ElasticSearch.
I've also added the right options for Graylog2 and ElasticSearch as per the tutorial.
ERROR: Could not successfully connect to ElasticSearch. Check that your cluster state is not RED and that ElasticSearch is running properly.
Need help?
* Official documentation: http://support.torch.sh/help/kb
* Mailing list: http://support.torch.sh/help/kb/general/forums-mailing-list
* Issue tracker: http://support.torch.sh/help/kb/general/issue-trackers
* Commercial support: http://www.torch.sh/
But we also got some specific help pages that might help you in this case:
* http://support.torch.sh/help/kb/graylog2-server/configuring-and-tuning-elasticsearch-for-graylog2-v0200
Terminating. :(
I'm still getting that error when I run:
sudo java -jar /opt/graylog2-server/graylog2-server.jar --debug
I've also checked that ElasticSearch is running properly -
central@central:~$ curl -XGET 'http://127.0.0.1:9200/_cluster/health?pretty=true'
{
  "cluster_name" : "graylog2",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0
}
Any suggestions on what I should do? I'm not understanding what the problem is.

I got the setup to run with the following additions -
# /etc/elasticsearch/elasticsearch.yml
cluster.name: graylog2
node.master: true
node.data: true
bootstrap.mlockall: true
ES_HEAP_SIZE=8192m   # environment variable (e.g. in /etc/default/elasticsearch), not a yml setting; half of the machine's 16GB RAM
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["127.0.0.1", "SERVER IP"]
# /etc/graylog2.conf
elasticsearch_discovery_zen_ping_multicast_enabled = false
elasticsearch_discovery_zen_ping_unicast_hosts = IP_ARR:9300
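For completeness, the cluster name has to line up as well: if your graylog2.conf version carries an elasticsearch_cluster_name setting, it must match cluster.name in elasticsearch.yml or the node will never join. A minimal sketch of the relevant graylog2.conf lines for the setup above (the unicast host here is only a placeholder):
# /etc/graylog2.conf (sketch - values must mirror elasticsearch.yml above)
elasticsearch_cluster_name = graylog2
elasticsearch_discovery_zen_ping_multicast_enabled = false
elasticsearch_discovery_zen_ping_unicast_hosts = 127.0.0.1:9300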

I'm not sure why A23's question is being downvoted; I too was unable to get graylog2-server running. The key seems to be setting elasticsearch_discovery_zen_ping_unicast_hosts to a valid set of IPs.
This is of course not mentioned in the Graylog2 docs as a setting that should be changed in the graylog2.conf file:
http://www.graylog2.org/resources/documentation/setup/server
which is a bit annoying.
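One extra check that can save time here (a minimal sketch, assuming graylog2-server and ElasticSearch run on the same host): graylog2-server joins the cluster over the transport port 9300, not the HTTP port 9200 that the curl health check above uses, so confirm that 9300 is actually listening:
# run on the ElasticSearch host
sudo netstat -tlnp | grep 9300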

Related

Nightwatch pipeline in Azure DevOps CI error

I am trying to create a pipeline in Azure DevOps for my Nightwatch-Cucumber project. I have everything set up, and when I run the tests locally everything works fine, but when I run the tests in Azure DevOps I get an error. This is the error from the log that I get.
These are the tasks that I added.
Can anyone help me with this error and how to make it work?
Error connecting to localhost on port 4445
The possible cause of this issue is that port 4445 is not open on the machine where the agent is located.
Based on the error log, it seems that you are using a Microsoft-hosted agent (Ubuntu agent).
You could try the following two methods:
1. You can try to change the connection port to 80. Based on my test, port 80 is open by default.
Here is an example:
nightwatch.json:
"test_settings" : {
"default" : {
"launch_url" : "http://localhost",
"selenium_port" : 80,
"selenium_host" : "hub.testingbot.com",
"silent": true,
"screenshots" : {
"enabled" : false,
"path" : ""
},
"skip_testcases_on_fail": false,
"desiredCapabilities": {
"javascriptEnabled": true,
"acceptSslCerts": true
}
},
2. Since this project works fine on your local machine, its configuration should be correct, so you could try to create a self-hosted agent.
Then you could run the pipeline on your own machine.
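For reference, pointing the pipeline at a self-hosted pool is a one-line change in azure-pipelines.yml; the pool name below is only a placeholder for whatever you call the pool when you register the agent:
# azure-pipelines.yml (sketch)
pool:
  name: 'MySelfHostedPool'   # placeholder - use your own agent pool name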
I made it work. I switched to the Ubuntu agent and installed the latest version of Chrome and the latest JDK. I also had the wrong chromedriver version installed and changed that in the package.json file. Now it's working fine. Thanks all for your answers.
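For anyone hitting the same chromedriver mismatch: the major version of the chromedriver npm package has to match the Chrome installed on the agent. A minimal sketch (87 is only an example version, not taken from the original post):
# install a chromedriver matching the agent's Chrome major version (87 is illustrative)
npm install --save-dev chromedriver@87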

HTTP error while using conda for installation of any packages

CondaHTTPError: HTTP 000 CONNECTION FAILED for url <https://conda.anaconda.org/conda-forge/linux-64/repodata.json>
Elapsed: -
An HTTP error occurred when trying to retrieve this URL. HTTP errors are often intermittent, and a simple retry will get you on your way.
SSLError(MaxRetryError('HTTPSConnectionPool(host=\'conda.anaconda.org\', port=443):
Max retries exceeded with url: /conda-forge/linux-64/repodata.json
(Caused by SSLError(SSLError("bad handshake: Error([(\'SSL routines\', \'SSL23_GET_SERVER_HELLO\', \'unknown protocol\')],)",),))',),)
(tensorflow) harshvardhan@ravan:~/project$ conda info
Current conda install:
platform : linux-64
conda version : 4.3.30
conda is private : False
conda-env version : 4.3.30
conda-build version : 3.0.27
python version : 2.7.14.final.0
requests version : 2.18.4
root environment : /ug/dd/harshvardhan/anaconda2 (writable)
default environment : /ug/dd/harshvardhan/anaconda2/envs/tensorflow
envs directories : /ug/dd/harshvardhan/anaconda2/envs
/ug/dd/harshvardhan/.conda/envs
package cache : /ug/dd/harshvardhan/anaconda2/pkgs
/ug/dd/harshvardhan/.conda/pkgs
channel URLs : https://repo.continuum.io/pkgs/main/linux-64
https://repo.continuum.io/pkgs/main/noarch
https://repo.continuum.io/pkgs/free/linux-64
https://repo.continuum.io/pkgs/free/noarch
https://repo.continuum.io/pkgs/r/linux-64
https://repo.continuum.io/pkgs/r/noarch
https://repo.continuum.io/pkgs/pro/linux-64
https://repo.continuum.io/pkgs/pro/noarch
config file : /ug/dd/harshvardhan/.condarc
netrc file : None
offline mode : False
user-agent : conda/4.3.30 requests/2.18.4 CPython/2.7.14 Linux/3.2.0-4-amd64 debian/7.11 glibc/2.13
UID:GID : 85090:2114
Can you tell me what I should do next?
I looked through other links, and setting ssl_verify: False didn't work for me.
I am experiencing the same issue, and am quite confident it is due to SSL/firewall restrictions inside the workplace.
Do you have proxy servers that you need to specify in your .condarc file? E.g.:
proxy_servers:
  http: http://101.101.101.255:8080
  https: https://101.101.101.255:8080
I know it says WinXP, but this still applies - https://conda.io/docs/user-guide/configuration/use-winxp-with-proxy.html
If you are specifying your organisation's proxy servers correctly and are still having these issues, then there is likely a firewall exception or SSL certificate configuration issue that needs to be resolved. That is not a conda issue per se, but a networking problem.
You might want to try copying these files from Anaconda3/Library/bin to Anaconda3/DLLs:
libcrypto-1_1-x64.dll
libssl-1_1-x64.dll
Run these commands one by one:
$ export {http,https,ftp}_proxy="http://proxy-server:port"
$ unset {http,https,ftp}_proxy
$ export {HTTP,HTTPS,FTP}_PROXY="http://proxy-server:port"
$ unset {HTTP,HTTPS,FTP}_PROXY
$ source ~/.bashrc
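For reference, a minimal .condarc combining the proxy settings suggested above with SSL verification turned off might look like this (the proxy address is a placeholder for your organisation's proxy; note the original poster reported that ssl_verify alone did not help):
# ~/.condarc (sketch - proxy-server:port is a placeholder)
proxy_servers:
  http: http://proxy-server:port
  https: https://proxy-server:port
ssl_verify: False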

Filebeat service hangs on restart

I have a weird problem with Filebeat.
I am using CloudFormation to provision my stack, and as part of that I am installing and running Filebeat for log aggregation.
I inject /etc/filebeat/filebeat.yml into the machine and then I need to restart Filebeat.
The problem is that Filebeat hangs and the entire provisioning is stuck (note that if I SSH into the machine and issue "sudo service filebeat restart" myself, the provisioning becomes unstuck and continues). I tried restarting it both via the services section and the commands section of AWS::CloudFormation::Init, and they both hang.
I haven't tried it via the user data, but that's the worst possible solution for it.
Any ideas why?
Snippets from the template; both of these hang, as mentioned.
"commands" : {
"01" : {
"command" : "sudo service filebeat restart",
"cwd" : "~",
"ignoreErrors" : "false"
}
}
"services" : {
"sysvinit" : {
"filebeat" : {
"enabled" : "true",
"ensureRunning" : "true",
"files" : ["/etc/filebeat/filebeat.yml"]
}
}
}
Well, this does sound like some sort of lock. According to the docs, you should declare a dependency on the file in the filebeat service, under the services section, and that will cause the Filebeat service restart you need.
Apparently, the services section supports a files attribute:
A list of files. If cfn-init changes one directly via the files block, this service will be restarted.
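For completeness, here is a sketch of how the files and services sections fit together under the instance's Metadata, so that cfn-init restarts Filebeat whenever it rewrites the configuration file (the content value is just a placeholder here, not the poster's actual configuration):
"Metadata" : {
  "AWS::CloudFormation::Init" : {
    "config" : {
      "files" : {
        "/etc/filebeat/filebeat.yml" : {
          "content" : "<your filebeat configuration>",
          "mode" : "000644",
          "owner" : "root",
          "group" : "root"
        }
      },
      "services" : {
        "sysvinit" : {
          "filebeat" : {
            "enabled" : "true",
            "ensureRunning" : "true",
            "files" : ["/etc/filebeat/filebeat.yml"]
          }
        }
      }
    }
  }
}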

MongoDB replication error in Linux

I tried to configure a 3-member replica set on a single server for a test deployment. When I execute the command rs.initiate(), the following error appears:
{
"info2" : "no configuration explicitly specified -- making one",
"me" : "localhostName:27017",
"ok" : 0,
"errmsg" : "No host described in new configuration 1 for replica set rs maps to this node",
"code" : 93
}
Please let me know how to solve this.
That issue was resolved. The hostname was not mapped in the /etc/hosts file. – Ganu
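For anyone else who hits this: the fix amounts to making the hostname that mongod reports resolvable, then re-running the initiate. A sketch, reusing the placeholder names from the error above (localhostName and replica set rs):
# /etc/hosts (localhostName stands for the machine's actual hostname)
127.0.0.1   localhost
127.0.1.1   localhostName
// then, in the mongo shell, initiate with an explicit configuration:
rs.initiate({ _id: "rs", members: [ { _id: 0, host: "localhostName:27017" } ] })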

Puppet configuration is getting applied everywhere

I am new to Puppet and am facing a very unusual problem: my Puppet master is running and all agents are configured against that master.
Here is my site.pp file:
class fileForNodeA {
  file { "/tmp/hello.txt":
    content => "This is hello.txt",
  }
}

class fileForNodeB {
  file { "/tmp/hello.txt":
    content => "This is hello1.txt",
  }
}

node 'NodeA' {
  include fileForNodeA
}

node 'NodeB' {
  include fileForNodeB
}
Now, the hostnames of the clients are NodeA and NodeB respectively.
On NodeA or NodeB, when I run:
puppet agent --no-daemonize --verbose --waitforcert 60 --test
It shows this:
Could not retrieve catalog from remote server: Error 400 on server: Could not find default node or by name with 'NodeA.com' ...
Notice: Using cached catalog
Info: Applying configuration version '1234567890'
Notice: Finished catalog run in 0.06 seconds
After this run, when I browse /tmp, I see two files, namely hello.txt and hello1.txt. I am a bit confused: when it says "Could not find ...", why does it apply the parts for both NodeA and NodeB?
Please shed some light on this. If I am doing something wrong, how should I configure things according to the node setup?
Thanks, AV
Please add this as the first node definition in your site.pp:
node default {
}
Enjoy.
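For context on why that works, and on the 'NodeA.com' in the error: the agent presents its certname (usually the fully qualified hostname), and node definitions must match that name; an empty default node catches everything that does not. A sketch of what the top of site.pp could look like under that assumption:
# site.pp (sketch; assumes the agents' certnames are NodeA.com and NodeB.com, as the error suggests)
node default {
}
node 'NodeA.com' {
  include fileForNodeA
}
node 'NodeB.com' {
  include fileForNodeB
}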
