I have Elasticsearch, Logstash, and Filebeat on the same machine.
Filebeat is configured to send information to localhost:5043.
Logstash has a pipeline configuration listening on port 5043.
If I run netstat -tuplen, I see:
[root@elk bin]# netstat -tuplen | grep 5043
tcp6 0 0 :::5043 :::* LISTEN 994 147016 31435/java
This means Logstash loaded the pipeline and is listening on the expected port.
If I telnet to localhost on port 5043:
[root@elk bin]# telnet localhost 5043
Trying ::1...
Connected to localhost.
Escape character is '^]'.
^CConnection closed by foreign host.
[root@elk bin]#
This means the port is open.
However, when I read Filebeat's log, I see:
2017-02-15T17:35:32+01:00 INFO Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2017-02-15T17:35:32+01:00 INFO Setup Beat: filebeat; Version: 5.2.1
2017-02-15T17:35:32+01:00 INFO Loading template enabled. Reading template file: /etc/filebeat/filebeat.template.json
2017-02-15T17:35:32+01:00 INFO Loading template enabled for Elasticsearch 2.x. Reading template file: /etc/filebeat/filebeat.template-es2x.json
2017-02-15T17:35:32+01:00 INFO Elasticsearch url: http://localhost:5043
2017-02-15T17:35:32+01:00 INFO Activated elasticsearch as output plugin.
2017-02-15T17:35:32+01:00 INFO Publisher name: elk.corp.ncr
2017-02-15T17:35:32+01:00 INFO Flush Interval set to: 1s
2017-02-15T17:35:32+01:00 INFO Max Bulk Size set to: 50
2017-02-15T17:35:32+01:00 INFO filebeat start running.
2017-02-15T17:35:32+01:00 INFO No registry file found under: /var/lib/filebeat/registry. Creating a new registry file.
2017-02-15T17:35:32+01:00 INFO Loading registrar data from /var/lib/filebeat/registry
2017-02-15T17:35:32+01:00 INFO States Loaded from registrar: 0
2017-02-15T17:35:32+01:00 INFO Loading Prospectors: 1
2017-02-15T17:35:32+01:00 INFO Starting Registrar
2017-02-15T17:35:32+01:00 INFO Start sending events to output
2017-02-15T17:35:32+01:00 INFO Prospector with previous states loaded: 0
2017-02-15T17:35:32+01:00 INFO Loading Prospectors completed. Number of prospectors: 1
2017-02-15T17:35:32+01:00 INFO All prospectors are initialised and running with 0 states to persist
2017-02-15T17:35:32+01:00 INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2017-02-15T17:35:32+01:00 INFO Starting prospector of type: log
2017-02-15T17:35:32+01:00 INFO Harvester started for file: /tmp/logstash-tutorial.log
2017-02-15T17:35:32+01:00 INFO Harvester started for file: /tmp/yum.log
2017-02-15T17:35:38+01:00 ERR Connecting error publishing events (retrying): Get http://localhost:5043: read tcp [::1]:40240->[::1]:5043: read: connection reset by peer
And the message "2017-02-15T17:35:41+01:00 ERR Connecting error publishing events (retrying): Get http://localhost:5043: read tcp 127.0.0.1:39214->127.0.0.1:5043: read: connection reset by peer" is repeated ad nauseam.
Am I missing an elephant in the room? Why is the connection "reset by peer"?
pipeline.conf
input {
    beats {
        port => "5043"
    }
}
# The filter part of this file is commented out to indicate that it is
# optional.
# filter {
#
# }
output {
    stdout { codec => rubydebug }
}
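As an aside, a pipeline file like this can be syntax-checked before starting Logstash, which rules out config-parse problems early (a minimal sketch; the binary path is an assumption based on a typical package install):

# compile-check the pipeline without actually starting Logstash
/usr/share/logstash/bin/logstash -f pipeline.conf --config.test_and_exit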
filebeat.yml
###################### Filebeat Configuration Example #########################
# This file is an example configuration file highlighting only the most common
# options. The filebeat.full.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
#=========================== Filebeat prospectors =============================
filebeat.prospectors:
# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.
- input_type: log

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /tmp/*.log
    #- c:\programdata\elasticsearch\logs\*
# Exclude lines. A list of regular expressions to match. It drops the lines that are
# matching any regular expression from the list.
#exclude_lines: ["^DBG"]
# Include lines. A list of regular expressions to match. It exports the lines that are
# matching any regular expression from the list.
#include_lines: ["^ERR", "^WARN"]
# Exclude files. A list of regular expressions to match. Filebeat drops the files that
# are matching any regular expression from the list. By default, no files are dropped.
#exclude_files: [".gz$"]
# Optional additional fields. These field can be freely picked
# to add additional information to the crawled log files for filtering
#fields:
# level: debug
# review: 1
### Multiline options
# Mutiline can be used for log messages spanning multiple lines. This is common
# for Java Stack Traces or C-Line Continuation
# The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
#multiline.pattern: ^\[
# Defines if the pattern set under pattern should be negated or not. Default is false.
#multiline.negate: false
# Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
# that was (not) matched before or after or as long as a pattern is not matched based on negate.
# Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
#multiline.match: after
#================================ General =====================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
# env: staging
#================================ Outputs =====================================
# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5043"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
#================================ Logging =====================================
# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]
I found out that I had:
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
...
...
#----------------------------- Logstash output --------------------------------
#output.logstash:
...
...
Where I should have had:
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
...
...
#----------------------------- Logstash output --------------------------------
output.logstash:
...
...
Get http://localhost:5043
This suggests that your Filebeat configuration and what Logstash is configured to listen for are not in sync. Logstash has a beats {} input specifically designed to be a server for Beats connections; the default port is 5044. On the Beats side, the Logstash output needs to be used to connect to that server. Doing it this way ensures both sides speak the same language, which that error suggests is not the case.
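For illustration, a matched pair might look like this (a minimal sketch using the Beats default port 5044 rather than the 5043 above):

Logstash pipeline:
input {
    beats {
        port => 5044
    }
}

filebeat.yml:
output.logstash:
  hosts: ["localhost:5044"]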
In your Filebeat configuration, try changing tls to ssl. See the list of breaking changes.
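For example, a 1.x-era TLS block under the Logstash output would be renamed roughly like this (a sketch; the certificate path is a placeholder):

# Filebeat 1.x style (no longer valid in 5.x):
#tls:
#  certificate_authorities: ["/etc/pki/root/ca.pem"]
# Filebeat 5.x style:
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]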
In my case, I was missing the Logstash template options:
output.logstash:
hosts: ["localhost:5044"]
template.enabled: true
template.path: "/etc/filebeat/filebeat.template.json"
index: "filebeat"
Related
I am learning about services in Linux and installed Elasticsearch, but it does not seem to work: after running sudo service elasticsearch restart, the browser just shows "the connection was reset" when I go to http://localhost:9200/. It seems the connection is blocked. I wonder whether the problem is with SSL. When I try https instead, it gives:
{"error":{"root_cause":[{"type":"security_exception","reason":"unable to authenticate user [] for REST request [/]","header":{"WWW-Authenticate":["Basic realm=\"security\" charset=\"UTF-8\"","Bearer realm=\"security\"","ApiKey"]}}],"type":"security_exception","reason":"unable to authenticate user [] for REST request [/]","header":{"WWW-Authenticate":["Basic realm=\"security\" charset=\"UTF-8\"","Bearer realm=\"security\"","ApiKey"]}},"status":401}
".
In addition, when I run curl "http://google.com:443", it shows:
"curl: (52) Empty reply from server"
elastic.log:
[2022-12-01T02:51:22,944][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [alex-VirtualBox] received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/127.0.0.1:9200, remoteAddress=/127.0.0.1:54614}
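For reference, that warning says the node is serving HTTPS on 9200, so any plain-http client gets its connection closed. A request along these lines should at least get past the TLS layer (a sketch; -k skips certificate verification, and the elastic user is an assumption based on the default security setup):

# speak HTTPS and supply credentials; -k accepts the self-signed certificate
curl -k -u elastic https://localhost:9200/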
elasticsearch.yml:
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: localhost
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
When I run systemctl status elasticsearch.service, it seems to work fine:
systemctl status elasticsearch.service
● elasticsearch.service - Elasticsearch
Loaded: loaded (/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabl>
Active: active (running) since Thu 2022-12-01 02:49:56 HKT; 10s ago
Docs: https://www.elastic.co
Main PID: 11238 (java)
Tasks: 79 (limit: 1783)
Memory: 780.9M
CPU: 20.908s
CGroup: /system.slice/elasticsearch.service
├─11238 /usr/share/elasticsearch/jdk/bin/java -Xms4m -Xmx64m -XX:+UseSerialGC -D>
├─11297 /usr/share/elasticsearch/jdk/bin/java -Des.networkaddress.cache.ttl=60 ->
└─11318 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/con>
System log (seems not very useful):
Dec 1 02:54:54 alex-VirtualBox kernel: [12786.591861] [UFW BLOCK] IN=enp0s3 OUT= MAC=08:00:27:55:db:e2:52:54:00:12:35:02:08:00 SRC=10.0.2.2 DST=10.0.2.15 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=5926 PROTO=TCP SPT=443 DPT=40826 WINDOW=65535 RES=0x00 ACK RST URGP=0
I am trying to set up Filebeat with Logstash and get the errors below on the Filebeat and Logstash ends:
filebeat; Version: 7.7.0
logstash "number" : "7.8.0"
Modified /etc/filebeat/filebeat.yml: set enabled: true and filled in paths; commented out output.elasticsearch; uncommented output.logstash and added hosts: ["hostname:5044"]. The combined edits are sketched just below.
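Putting those edits together, the relevant parts of filebeat.yml would look roughly like this (a sketch; the input path is an assumption):

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log   # assumed path

#output.elasticsearch:
#  hosts: ["hostname:9200"]

output.logstash:
  hosts: ["hostname:5044"]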
Modified /etc/logstash/conf.d/beats_elasticsearch.conf:
input {
    beats {
        port => 5044
    }
}

#filter {
#}

output {
    elasticsearch {
        hosts => ["hostname:9200"]
    }
}
I started Filebeat and got the error below:
2020-07-06T08:51:23.912-0700 ERROR [publisher_pipeline_output] pipeline/output.go:106 Failed to connect to backoff(elasticsearch(http://hostname:5044)): Get http://hostname:5044: dial tcp ip_address:5044: connect: connection refused
I started Logstash and its log is below:
[INFO ] 2020-07-06 09:00:20.562 [[main]<beats] Server - Starting server on port: 5044
[INFO ] 2020-07-06 09:00:20.835 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[INFO ] 2020-07-06 09:00:45.266 [defaultEventExecutorGroup-4-1] BeatsHandler - [local: x.x.x.x:5044, remote: y.y.y.y:53628] Handling exception: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 71
[WARN ] 2020-07-06 09:00:45.267 [nioEventLoopGroup-2-2] DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 71
Please explain what else I should do.
I started Filebeat and Logstash as:
sudo /usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml
sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/beats_elasticsearch.conf
Thanks
The Filebeat and Logstash versions were different. Upgrading Filebeat fixed the issue. Thanks.
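For reference, the two versions can be compared directly from the binaries used above (a sketch):

/usr/share/filebeat/bin/filebeat version
/usr/share/logstash/bin/logstash --version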
I have installed InfluxDB on a Linux distro running on a Raspberry Pi...
pi@raspberrypi:~ $ influx -version
InfluxDB shell version: 1.1.1
Then I create a DB, followed by an admin user with:
CREATE USER admin WITH PASSWORD 'password' WITH ALL PRIVILEGES
After this I edit the InfluxDB config file located at:
/etc/influxdb/influxdb.conf
As I want InfluxDB to ask for user authentication when it is accessed (over HTTP, external or internal, and the console too, if that is possible), I look for the [http] block in the file. This is what I have:
###
### [http]
###
### Controls how the HTTP endpoints are configured. These are the primary
### mechanism for getting data into and out of InfluxDB.
###
# [http]
# Determines whether HTTP endpoint is enabled.
enabled = true
# The bind address used by the HTTP service.
# bind-address = ":8086"
# Determines whether HTTP authentication is enabled.
auth-enabled = true
# The default realm sent back when issuing a basic auth challenge.
# realm = "InfluxDB"
# Determines whether HTTP request logging is enabled.
# log-enabled = true
# Determines whether detailed write logging is enabled.
# write-tracing = false
# Determines whether the pprof endpoint is enabled. This endpoint is used for
# troubleshooting and monitoring.
pprof-enabled = true
# Determines whether HTTPS is enabled.
https-enabled = false
# The SSL certificate to use when HTTPS is enabled.
https-certificate = "/etc/ssl/influxdb.pem"
# Use a separate private key location.
https-private-key = ""
# The JWT auth shared secret to validate requests using JSON web tokens.
shared-secret = ""
# The default chunk size for result sets that should be chunked.
# max-row-limit = 10000
# The maximum number of HTTP connections that may be open at once. New connections that
# would exceed this limit are dropped. Setting this value to 0 disables the limit.
# max-connection-limit = 0
# Enable http service over unix domain socket
# unix-socket-enabled = false
# The path of the unix domain socket.
# bind-socket = "/var/run/influxdb.sock"
Above, I changed the 1st and 3rd sub-group entries.
Finally, I restart the influxdb service with:
sudo service influxdb restart
Problems
1 - Creating a database from another computer on the network (without login tokens) succeeds (and it shouldn't):
http://192.168.7.125:8086/query?q=CREATE DATABASE test
returns:
{
"results": [
{}
]
}
2 - Calling influx on the Raspberry Pi command line does not ask for auth:
pi@raspberrypi:~ $ influx
Visit https://enterprise.influxdata.com to register for updates, InfluxDB server management, and monitoring.
Connected to http://localhost:8086 version 1.1.1
InfluxDB shell version: 1.1.1
>
Does anyone know what I am doing wrong?
EDIT
Furthermore, checking /var/log/syslog I can see that:
1 - It is loading the file from the correct directory:
[run] 2017/01/17 11:27:36 InfluxDB starting, version 1.1.1, branch master, commit e47cf1f2e83a02443d7115c54f838be8ee959644
Jan 17 11:27:36 raspberrypi influxd[901]: [run] 2017/01/17 11:27:36 Go version go1.7.4, GOMAXPROCS set to 4
Jan 17 11:27:36 raspberrypi influxd[901]: [run] 2017/01/17 11:27:36 Using configuration at: /etc/influxdb/influxdb.conf
Jan 17 11:27:36 raspberrypi influxd[901]: [store] 2017/01/17 11:27:36 Using data dir: /var/lib/influxdb/data
2 - It fails to start with authentication (auth stays deactivated):
Jan 17 11:27:37 raspberrypi influxd[901]: [httpd] 2017/01/17 11:27:37 Starting HTTP service
Jan 17 11:27:37 raspberrypi influxd[901]: [httpd] 2017/01/17 11:27:37 Authentication enabled: false
Jan 17 11:27:37 raspberrypi influxd[901]: [httpd] 2017/01/17 11:27:37 Listening on HTTP: [::]:8086
The culprit is the [http] section header: in the file above it is commented out (# [http]), so everything under it, including auth-enabled = true, is never applied. The block needs to start with the header uncommented:
###
### [http]
###
### Controls how the HTTP endpoints are configured. These are the primary
### mechanism for getting data into and out of InfluxDB.
###
[http]
# Determines whether HTTP endpoint is enabled.
enabled = true
# The bind address used by the HTTP service.
# bind-address = ":8086"
# Determines whether HTTP authentication is enabled.
auth-enabled = true
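After restoring the header and restarting, the syslog line should read "Authentication enabled: true", and queries should start requiring credentials. A quick check (a sketch using the host, user, and password from the question):

sudo service influxdb restart
# should now fail with 401 Unauthorized:
curl -i "http://192.168.7.125:8086/query?q=SHOW+DATABASES"
# should succeed:
curl -i -u admin:password "http://192.168.7.125:8086/query?q=SHOW+DATABASES"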
I have a scenario where I need to run two different versions of Cassandra on the same machine, on different ports.
I started one cluster, at port 9161, with the following Cassandra config:
# TCP port, for commands and data
storage_port: 7000
# SSL port, for encrypted communication. Unused unless enabled in
# encryption_options
ssl_storage_port: 7004
# port for the CQL native transport to listen for clients on
native_transport_port: 9043
# port for Thrift to listen for clients on
rpc_port: 9161
seed_provider:
    # Addresses of hosts that are deemed contact points.
    # Cassandra nodes use this list of hosts to find each other and learn
    # the topology of the ring. You must change this if you are running
    # multiple nodes!
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          # seeds is actually a comma-delimited list of addresses.
          # Ex: "<ip1>,<ip2>,<ip3>"
          - seeds: "127.0.0.1"
It runs well,
$ /usr/local/apache-cassandra-2.1.1/bin/cassandra -f
...
INFO 05:08:42 Loading settings from file:/usr/local/apache-cassandra-2.1.1/conf/cassandra.yaml
...
INFO 05:09:29 Starting listening for CQL clients on localhost/127.0.0.1:9043...
INFO 05:09:29 Binding thrift service to localhost/127.0.0.1:9161
INFO 05:09:29 Listening for thrift clients...
INFO 05:19:25 No files to compact for user defined compaction
$ jps
5866 CassandraDaemon
8848 Jps
However, while starting another Cassandra cluster, configured to run at port 9160 with this config,
# TCP port, for commands and data
storage_port: 7000
# SSL port, for encrypted communication. Unused unless enabled in
# encryption_options
ssl_storage_port: 7004
# port for the CQL native transport to listen for clients on
native_transport_port: 9042
# port for Thrift to listen for clients on
rpc_port: 9160
seed_provider:
    # Addresses of hosts that are deemed contact points.
    # Cassandra nodes use this list of hosts to find each other and learn
    # the topology of the ring. You must change this if you are running
    # multiple nodes!
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          # seeds is actually a comma-delimited list of addresses.
          # Ex: "<ip1>,<ip2>,<ip3>"
          - seeds: "127.0.0.1"
it fails with the message:
$ /usr/local/apache-cassandra-2.0.11/bin/cassandra -f
Unable to bind JMX, is Cassandra already running?
How can I run two different versions of Cassandra on the same machine?
The problem is that I have no permission to stop the previous version, nor can I use https://github.com/pcmanus/ccm.
The problem is that your new Cassandra is also trying to use port 7199 for JMX monitoring. Change the JMX port to some other unused port and it will start. On Windows, the JMX port can be changed in the file cassandraFolder/bin/cassandra.bat, which contains the line:
-Dcom.sun.management.jmxremote.port=7199^
Change that port to some other unused port.
If you are using Cassandra in a Linux environment, the JMX configuration is in the file cassandraFolder/conf/cassandra-env.sh, which contains the line:
JMX_PORT="7199"
Change this to some other unused port.
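To confirm the clash before editing anything, it is worth checking what already holds the JMX port (a sketch for Linux):

# show the process currently bound to 7199
netstat -tlpn | grep 7199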
But I was unclear about your question.
Are you trying to run the new Cassandra so that it joins the existing cluster?
If yes, changing the JMX port will be sufficient.
Are you trying to run the new Cassandra in stand-alone mode?
If yes, change the following configuration in the yaml file:
seeds: "127.0.0.2"
listen_address: 127.0.0.2
rpc_address: 127.0.0.2
and add the following entry:
127.0.0.1 127.0.0.2
to the /etc/hosts file if you are running on Linux. If you are running on Windows, add the above entry to the C:\Windows\System32\drivers\etc\hosts file. If your intention is to run in stand-alone mode, be careful with your configuration: if you get anything wrong, your new Cassandra will join the existing cluster.
This link helps you run a Cassandra cluster on a single Windows machine.
Well, I fixed it by changing a few more settings: storage_port/ssl_storage_port in conf/cassandra.yaml and JMX_PORT in conf/cassandra-env.sh.
conf/cassandra.yaml
# TCP port, for commands and data
storage_port: 7004
# SSL port, for encrypted communication. Unused unless enabled in
# encryption_options
ssl_storage_port: 7005
# port for the CQL native transport to listen for clients on
native_transport_port: 9043
# port for Thrift to listen for clients on
rpc_port: 9161
seed_provider:
    # Addresses of hosts that are deemed contact points.
    # Cassandra nodes use this list of hosts to find each other and learn
    # the topology of the ring. You must change this if you are running
    # multiple nodes!
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          # seeds is actually a comma-delimited list of addresses.
          # Ex: "<ip1>,<ip2>,<ip3>"
          - seeds: "127.0.0.1"
conf/cassandra-env.sh
# Specifies the default port over which Cassandra will be available for
# JMX connections.
JMX_PORT="7200"
I am trying to configure fail2ban to send me an email when an IP is banned.
I changed the jail.local config file: I set the action parameter to action_mwl, and I installed sendmail.
I receive emails when fail2ban is stopped or started, but not when an IP is banned, so emails can definitely be sent from the server.
What am I missing?
Thanks
mike@test:~$ sudo cat /etc/fail2ban/jail.local
# Fail2Ban configuration file.
#
# This file was composed for Debian systems from the original one
# provided now under /usr/share/doc/fail2ban/examples/jail.conf
# for additional examples.
#
# Comments: use '#' for comment lines and ';' for inline comments
#
# To avoid merges during upgrades DO NOT MODIFY THIS FILE
# and rather provide your changes in /etc/fail2ban/jail.local
#
# The DEFAULT allows a global definition of the options. They can be overridden
# in each jail afterwards.
[DEFAULT]
# "ignoreip" can be an IP address, a CIDR mask or a DNS host. Fail2ban will not
# ban a host which matches an address in this list. Several addresses can be
# defined using space separator.
ignoreip = 127.0.0.1/8
# "bantime" is the number of seconds that a host is banned.
bantime = 120
# A host is banned if it has generated "maxretry" during the last "findtime"
# seconds.
findtime = 600
maxretry = 3
# "backend" specifies the backend used to get files modification.
# Available options are "pyinotify", "gamin", "polling" and "auto".
# This option can be overridden in each jail as well.
#
# pyinotify: requires pyinotify (a file alteration monitor) to be installed.
# If pyinotify is not installed, Fail2ban will use auto.
# gamin: requires Gamin (a file alteration monitor) to be installed.
# If Gamin is not installed, Fail2ban will use auto.
# polling: uses a polling algorithm which does not require external libraries.
# auto: will try to use the following backends, in order:
# pyinotify, gamin, polling.
backend = auto
# "usedns" specifies if jails should trust hostnames in logs,
# warn when reverse DNS lookups are performed, or ignore all hostnames in logs
#
# yes: if a hostname is encountered, a reverse DNS lookup will be performed.
# warn: if a hostname is encountered, a reverse DNS lookup will be performed,
# but it will be logged as a warning.
# no: if a hostname is encountered, will not be used for banning,
# but it will be logged as info.
usedns = warn
#
# Destination email address used solely for the interpolations in
# jail.{conf,local} configuration files.
destemail = mike@test.com
#
# Name of the sender for mta actions
sendername = Fail2Ban
#
# ACTIONS
#
# Default banning action (e.g. iptables, iptables-new,
# iptables-multiport, shorewall, etc) It is used to define
# action_* variables. Can be overridden globally or per
# section within jail.local file
banaction = iptables-multiport
# email action. Since 0.8.1 upstream fail2ban uses sendmail
# MTA for the mailing. Change mta configuration parameter to mail
# if you want to revert to conventional 'mail'.
mta = sendmail
# Default protocol
protocol = tcp
# Specify chain where jumps would need to be added in iptables-* actions
chain = INPUT
#
# Action shortcuts. To be used to define action parameter
# The simplest action to take: ban only
action_ = %(banaction)s[name=%(__name__)s, port="%(port)s", protocol="%(protocol)s", chain="%(chain)s"]

# ban & send an e-mail with whois report to the destemail.
action_mw = %(banaction)s[name=%(__name__)s, port="%(port)s", protocol="%(protocol)s", chain="%(chain)s"]
            %(mta)s-whois[name=%(__name__)s, dest="%(destemail)s", protocol="%(protocol)s", chain="%(chain)s", sendername="%(sendername)s"]

# ban & send an e-mail with whois report and relevant log lines
# to the destemail.
action_mwl = %(banaction)s[name=%(__name__)s, port="%(port)s", protocol="%(protocol)s", chain="%(chain)s"]
             %(mta)s-whois-lines[name=%(__name__)s, dest="%(destemail)s", logpath=%(logpath)s, chain="%(chain)s", sendername="%(sendername)s"]
# Choose default action. To change, just override value of 'action' with the
# interpolation to the chosen action shortcut (e.g. action_mw, action_mwl, etc) in jail.local
# globally (section [DEFAULT]) or per specific section
action = %(action_mwl)s
#
# JAILS
#
# Next jails corresponds to the standard configuration in Fail2ban 0.6 which
# was shipped in Debian. Enable any defined here jail by including
#
# [SECTION_NAME]
# enabled = true
#
# in /etc/fail2ban/jail.local.
#
# Optionally you may override any other parameter (e.g. banaction,
# action, port, logpath, etc) in that section within jail.local
[ssh]
enabled = true
port = 23222
filter = sshd
logpath = /var/log/auth.log
maxretry = 4
Here is an extract of my fail2ban log; we can see that an IP is banned, but I did not get an email.
2014-07-03 05:14:01,418 fail2ban.server : INFO Stopping all jails
2014-07-03 05:15:02,140 fail2ban.jail : INFO Jail 'ssh' stopped
2014-07-03 05:15:02,144 fail2ban.server : INFO Exiting Fail2ban
2014-07-03 05:15:13,245 fail2ban.server : INFO Changed logging target to /var/log/fail2ban.log for Fail2ban v0.8.11
2014-07-03 05:15:13,246 fail2ban.jail : INFO Creating new jail 'ssh'
2014-07-03 05:15:13,270 fail2ban.jail : INFO Jail 'ssh' uses pyinotify
2014-07-03 05:15:13,294 fail2ban.jail : INFO Initiated 'pyinotify' backend
2014-07-03 05:15:13,295 fail2ban.filter : INFO Added logfile = /var/log/auth.log
2014-07-03 05:15:13,297 fail2ban.filter : INFO Set maxRetry = 4
2014-07-03 05:15:13,297 fail2ban.filter : INFO Set findtime = 600
2014-07-03 05:15:13,298 fail2ban.actions: INFO Set banTime = 120
2014-07-03 05:15:13,344 fail2ban.jail : INFO Jail 'ssh' started
2014-07-03 05:16:13,533 fail2ban.actions: WARNING [ssh] Ban x.x.x.x
2014-07-03 05:17:13,669 fail2ban.actions: INFO [ssh] x.x.x.x already banned
2014-07-03 05:18:13,734 fail2ban.actions: WARNING [ssh] Unban x.x.x.x
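As an aside, the mail action can be exercised without waiting for a real attack by banning an address manually (a sketch; the jail name comes from the config above, and 192.0.2.1 is a documentation-only address):

sudo fail2ban-client set ssh banip 192.0.2.1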