I'm trying to connect logstash (Version 1.5.0) to get logs of services (that run on apache-tomcat). These logs are log4j.
I use this config for logstash:
input {
  log4j {
    mode => "server"
    host => "localhost"
    port => 4560
    type => "log4j"
  }
}
...
and in my service' log4j.xml I've set my SocketAppender:
<appender name="OHADS" class="org.apache.log4j.net.SocketAppender">
  <param name="port" value="4560" />
  <param name="remoteHost" value="localhost" />
</appender>
It works fine.
The questions:
I want logstash to collect logs not only from my localhost but also from tomcats on other machines. How can I do that? When I tried to put something other than localhost (or the IP of the local machine) in the "host" field of the logstash config, I got this error on startup:
"Cannot assign requested address - bind - Cannot assign requested
address".
How can I connect it to several IPs simultaneously?
Any ideas?
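One common approach (an assumption on my part, not something stated in the question) is to bind the input to all local interfaces with 0.0.0.0 so that remote appenders can connect in; the address of each remote tomcat then goes into that machine's log4j.xml, not into the logstash config. The "Cannot assign requested address" error appears because host in a server-mode input must be an address of the local machine. A minimal sketch:

```
input {
  log4j {
    mode => "server"
    host => "0.0.0.0"   # listen on all local interfaces
    port => 4560
    type => "log4j"
  }
}
```

Each remote service's SocketAppender would then point its remoteHost at the logstash machine's IP.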
I want to secure the communication to logstash 7.4.2. I'm using the Syslog input plugin listening on a specified port. How can I enable SSL/TLS on syslog?
Regards
Ram
Actually, the Logstash syslog input plugin doesn't support TLS.
You can use Fluentd instead, which supports TLS encryption. Here's a working example with an Elasticsearch output:
<source>
  @type syslog
  port 5140
  bind 0.0.0.0
  <transport tls>
    ca_path /etc/pki/ca.pem
    cert_path /etc/pki/cert.pem
    private_key_path /etc/pki/key.pem
    private_key_passphrase PASSPHRASE
  </transport>
</source>
<match **>
  @type elasticsearch
  host localhost
  port 9200
</match>
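If staying on Logstash is preferred, another option (not mentioned above, and sketched here under the assumption that the certificate and key already exist at the paths shown) is to receive the syslog traffic over the tcp input, which does support TLS, and parse the syslog format with a grok filter:

```
input {
  tcp {
    port => 5140
    ssl_enable => true
    ssl_cert => "/etc/pki/cert.pem"
    ssl_key => "/etc/pki/key.pem"
    ssl_verify => false
  }
}
filter {
  grok {
    # SYSLOGLINE is a stock grok pattern for RFC3164-style messages
    match => { "message" => "%{SYSLOGLINE}" }
  }
}
```

The trade-off is that senders must be configured to ship syslog over TCP/TLS rather than plain UDP.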
Best regards
I was using SoapUI to hit an Azure Service Fabric service, but I was pointing it at the Client connection endpoint for my cluster. I suspected I was hitting the wrong port due to this error:
Error getting response; org.apache.http.NoHttpResponseException: The target server failed to respond
Where do I find the port for my cluster?
Where do I find the port for my cluster?
1) The port may be found in the ServiceManifest.xml file:
<Endpoint Protocol="http" Name="ServiceEndpoint" Type="Input" Port="1234" />
2) The port may also be found in the Azure portal: load balancer > Frontend IP configuration > LBAppRule > Port
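As a quick sanity check (the hostname, port, and path below are placeholders, not values from the original post), you can confirm the endpoint actually answers HTTP before retrying from SoapUI:

```
# Replace the address with your cluster's DNS name and the Port value
# found in ServiceManifest.xml or the load balancer rule.
curl -v http://mycluster.region.cloudapp.azure.com:1234/api/values
```

If curl also gets no response, the problem is the port or a firewall/LB rule rather than the SoapUI request itself.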
When I run the connect command, it shows the following error:
The controller is not available at localhost:9999: java.net.ConnectException: JBAS012174:
Could not connect to remote://localhost:9999. The connection failed: JBAS012174: Could not
connect to remote://localhost:9999. The connection failed: Connection refused
[disconnected /]
I had a problem connecting to the native management interface on port 9999, and all I needed to do was enable the interface by adding the following to the standalone.xml file:
<management-interfaces>
  <native-interface security-realm="ManagementRealm">
    <socket-binding native="management-native"/>
  </native-interface>
  ...
</management-interfaces>
The actual native management binding (HOST:PORT) is defined in the JBoss configuration file by the interface named "management" and the socket-binding named "management-native". By default they use localhost and 9999.
When installing the system service, it is necessary to specify the correct /controller host:port values if the management binding has been changed.
The error below in the log indicates the CLI command line cannot connect to the management interface when shutting down:
Could not connect to remote://localhost:9999. The connection failed
Confirm the /controller parameter used when configuring the system service matches the management interface and socket-binding definition in the JBoss configuration file (standalone.xml, or domain.xml and host.xml).
When configuring the system service, the default for /controller, if not specified, is:
/controller HOST:PORT — the host and port of the management interface. If omitted, the default is localhost:9999.
Change this parameter to match the management interface and socket binding settings in the JBoss configuration file:
<interfaces>
<interface name="management">
<inet-address value="192.168.0.1"/>
</interface>
......
</interfaces>
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
  <socket-binding name="management-native" interface="management" port="${jboss.management.native.port:9999}"/>
  ......
</socket-binding-group>
With the configuration above, you need to specify /controller as below to install the system service:
service.bat install /startup /controller=192.168.0.1:9999 /config standalone-customized-1.xml
If jboss.socket.binding.port-offset is set, confirm the actual port number (after the offset) is passed in the /controller parameter. For example, if jboss.socket.binding.port-offset is set to 300 in standalone.xml, you need to use 10299 (default 9999 + 300) as the PORT when installing the system service:
service.bat install /startup /controller=192.168.0.1:10299 /config standalone-customized-2.xml
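To verify the management interface is actually reachable before (re)installing the service, you can point the JBoss CLI at the same host:port (the address below mirrors the offset example above; adjust it to your own binding):

```
# Connect the CLI to the remapped management port (9999 + offset 300)
jboss-cli.bat --connect --controller=192.168.0.1:10299
```

If this fails with the same "Connection refused", the interface or socket-binding in standalone.xml still doesn't match the address you are passing.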
I was having the exact same problem as this guy. I followed the answer, but that gives me
INVALID Profile
I have only tried to configure the external.xml file with my external IP. But every time I run reloadxml it tells me my sip profile conf is invalid.
What I tried:
Made sure that the ports are not blocked
Confirmed I can reach the port with telnet MY_IP MY_PORT
Please throw some light, anyone. Thanks.
Solved it.
I was changing the wrong XML tag. I needed to change:
<param name="ext-rtp-ip" value="MY_EX_IP"/>
<param name="ext-sip-ip" value="MY_EX_IP"/>
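For context, here is a sketch of where those params sit (assuming a stock FreeSWITCH layout where they live under <settings> in conf/sip_profiles/external.xml; the $${local_ip_v4} values are the defaults, and MY_EX_IP is the placeholder from above):

```
<settings>
  <!-- local addresses the profile binds to -->
  <param name="rtp-ip" value="$${local_ip_v4}"/>
  <param name="sip-ip" value="$${local_ip_v4}"/>
  <!-- addresses advertised in SIP/SDP, e.g. when behind NAT -->
  <param name="ext-rtp-ip" value="MY_EX_IP"/>
  <param name="ext-sip-ip" value="MY_EX_IP"/>
</settings>
```

The easy mistake is editing rtp-ip/sip-ip (which must stay local bind addresses) instead of the ext-* params.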
I'm trying to connect elasticsearch to logstash on a centralized logstash aggregator
I'm running the logstash web interface over port 80 with kibana.
This is the command I'm using to start logstash :
/usr/bin/java -jar /etc/alternatives/logstashhome/logstash.jar agent -f /etc/logstash/logstash.conf web --port 80
This is the conf I am using:
[root@logstash:~]# cat /etc/logstash/logstash.conf
input {
  redis {
    host => "my-ip-here"
    type => "redis-input"
    data_type => "list"
    key => "logstash"
  }
}
output {
  stdout { }
  elasticsearch {
    type => "all"
    embedded => false
    host => "my-ip-here"
    port => "9300"
    cluster => "jf"
    node_name => "logstash"
  }
}
And it looks as if I am receiving data from the logstash agent (installed on another host). I see log entries streaming by after I start logstash via init script.
2013-10-31T02:51:53.916+0000 beta Oct 30 22:51:53 49eb8f3e-a2c1-4c12-a41f-42dbe635a9f0 sshd[23324]: Connection closed by xx.xx.xx.xx
2013-10-31T02:52:13.002+0000 beta Oct 30 22:52:12 49eb8f3e-a2c1-4c12-a41f-42dbe635a9f0 proftpd[23403]: xx.xx.xx.xx (xx.xx.xx.xx[xx.xx.xx.xx]) - FTP session opened.
2013-10-31T02:52:13.002+0000 beta Oct 30 22:52:12 49eb8f3e-a2c1-4c12-a41f-42dbe635a9f0 proftpd[23403]: xx.xx.xx.xx (xx.xx.xx.xx[xx.xx.xx.xx]) - FTP session closed.
2013-10-31T02:52:30.080+0000 beta Oct 30 22:52:29 49eb8f3e-a2c1-4c12-a41f-42dbe635a9f0 xinetd[1757]: START: nrpe pid=23405 from=xx.xx.xx.xx
2013-10-31T02:52:30.081+0000 beta Oct 30 22:52:29 49eb8f3e-a2c1-4c12-a41f-42dbe635a9f0 xinetd[1757]: EXIT: nrpe status=0 pid=23405 duration=0(sec)
I can see my nagios server connecting to the beta host (beta is the external host with the logstash agent installed and running) and some FTP sessions (not that I'm in love with FTP, but hey, what can ya do?).
Yet when I point my browser to the logstash server I see this message:
Error No index found at http://logstash.mydomain.com:9200/_all/_mapping. Please create at least one index. If you're using a proxy ensure it is configured correctly. 1 alert(s)
This is my cluster setting in elasticsearch.yml:
grep -i cluster /etc/elasticsearch/elasticsearch.yml | grep jf
cluster.name: jf
My host setting in elasticsearch.yml:
grep -i host /etc/elasticsearch/elasticsearch.yml
network.bind_host: xxx.xx.xx.xxx # <- my logstash ip
I did try to add an index using the following curl:
[root@logstash:~]# curl -PUT http://logstash.mydomain.com:9200/_template/logstash_per_index
But when I reload the page I get the same error message. A bit stuck at this point. I'd appreciate any advice anyone may have!
Thanks!
Try executing this:
curl -XPUT http://127.0.0.1:9200/test/item/1 -d '{"name":"addhe warman", "description": "White hat hacker."}'
It may be that your elasticsearch is simply empty; try filling it with some sample data first, and then find out what the real problem is. Good luck.
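To confirm the document (and therefore an index) was created, you can query the same mapping endpoint that Kibana complains about (this assumes elasticsearch is on localhost:9200, as in the curl example above):

```
# List all indices and their mappings; a "test" index should now appear
curl -XGET http://127.0.0.1:9200/_all/_mapping
```

Note that the question's original command used curl -PUT, which is not the same as -XPUT and does not send a PUT request, so no index was actually created by it.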
You can check out this Chrome plugin:
https://chrome.google.com/webstore/detail/sense/doinijnbnggojdlcjifpdckfokbbfpbo?hl=en
It's a JSON-aware developer tool for ElasticSearch. Also, after creating the index, clear the browser cache, close the browser, and retest.
What is the output of logstash (i.e. its log file)?
The version of the elasticsearch embedded in logstash must match your standalone version,
e.g. logstash 1.3 uses elasticsearch 0.90,
and logstash 1.4 uses elasticsearch 1.0.
So either take care to use the matching elasticsearch version, or use elasticsearch_http as the output (with port 9200) to go through the REST API.
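A sketch of that alternative output (the host value is carried over from the question's config; elasticsearch_http is the plugin name in these 1.x releases of logstash, not in current versions):

```
output {
  elasticsearch_http {
    host => "my-ip-here"
    port => 9200
  }
}
```

Going through port 9200 (REST) sidesteps the 9300 transport protocol, which is what imposes the strict version match between logstash's embedded elasticsearch and the standalone cluster.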
You should keep the default configuration by removing this line:
port => "9300"
I was also having a similar issue: elasticsearch was running on a different port while kibana was accessing it on port 9200, which is set in the ./vendor/kibana/config.js file inside the logstash home folder.