InfluxDB not asking for authentication - Linux

I have installed InfluxDB on a Linux distro running on a Raspberry Pi...
pi@raspberrypi:~ $ influx -version
InfluxDB shell version: 1.1.1
Then I create a DB, followed by an admin user with:
CREATE USER admin WITH PASSWORD 'password' WITH ALL PRIVILEGES
After this I edit the influxdb.conf file located at:
/etc/influxdb/influxdb.conf
Since I want InfluxDB to ask for user authentication when it is accessed (over HTTP, externally or internally, and ideally also from the console, if that is even possible), I browse to the [http] block in the file. This is what I have:
###
### [http]
###
### Controls how the HTTP endpoints are configured. These are the primary
### mechanism for getting data into and out of InfluxDB.
###
# [http]
# Determines whether HTTP endpoint is enabled.
enabled = true
# The bind address used by the HTTP service.
# bind-address = ":8086"
# Determines whether HTTP authentication is enabled.
auth-enabled = true
# The default realm sent back when issuing a basic auth challenge.
# realm = "InfluxDB"
# Determines whether HTTP request logging is enabled.
# log-enabled = true
# Determines whether detailed write logging is enabled.
# write-tracing = false
# Determines whether the pprof endpoint is enabled. This endpoint is used for
# troubleshooting and monitoring.
pprof-enabled = true
# Determines whether HTTPS is enabled.
https-enabled = false
# The SSL certificate to use when HTTPS is enabled.
https-certificate = "/etc/ssl/influxdb.pem"
# Use a separate private key location.
https-private-key = ""
# The JWT auth shared secret to validate requests using JSON web tokens.
shared-secret = ""
# The default chunk size for result sets that should be chunked.
# max-row-limit = 10000
# The maximum number of HTTP connections that may be open at once. New connections that
# would exceed this limit are dropped. Setting this value to 0 disables the limit.
# max-connection-limit = 0
# Enable http service over unix domain socket
# unix-socket-enabled = false
# The path of the unix domain socket.
# bind-socket = "/var/run/influxdb.sock"
That is, I uncommented the first and third entries of the sub-group (enabled and auth-enabled) and set them to true.
Finally I restart the influxdb service with:
sudo service influxdb restart
Problems
1 - Creating a database from another computer on the network (without any credentials) is successful (and it shouldn't be):
http://192.168.7.125:8086/query?q=CREATE DATABASE test
returns:
{
"results": [
{}
]
}
2 - Calling influx on the Raspberry Pi command line does not ask for auth:
pi@raspberrypi:~ $ influx
Visit https://enterprise.influxdata.com to register for updates, InfluxDB server management, and monitoring.
Connected to http://localhost:8086 version 1.1.1
InfluxDB shell version: 1.1.1
>
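Side note: even with HTTP auth correctly enabled, the shell connecting like this is expected; in InfluxDB 1.x credentials are checked per request rather than at connect time. With auth on, a session would look roughly like:
> SHOW DATABASES
ERR: unable to parse authentication credentials
> auth
username: admin
password:
Alternatively, the shell can be started with influx -username admin -password password.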
Does anyone know what I am doing wrong?
EDIT
Furthermore, checking /var/log/syslog I can see that:
1 - It is loading the config file from the correct directory:
[run] 2017/01/17 11:27:36 InfluxDB starting, version 1.1.1, branch master, commit e47cf1f2e83a02443d7115c54f838be8ee959644
Jan 17 11:27:36 raspberrypi influxd[901]: [run] 2017/01/17 11:27:36 Go version go1.7.4, GOMAXPROCS set to 4
Jan 17 11:27:36 raspberrypi influxd[901]: [run] 2017/01/17 11:27:36 Using configuration at: /etc/influxdb/influxdb.conf
Jan 17 11:27:36 raspberrypi influxd[901]: [store] 2017/01/17 11:27:36 Using data dir: /var/lib/influxdb/data
2 - It fails to start with authentication (auth is reported as disabled):
Jan 17 11:27:37 raspberrypi influxd[901]: [httpd] 2017/01/17 11:27:37 Starting HTTP service
Jan 17 11:27:37 raspberrypi influxd[901]: [httpd] 2017/01/17 11:27:37 Authentication enabled: false
Jan 17 11:27:37 raspberrypi influxd[901]: [httpd] 2017/01/17 11:27:37 Listening on HTTP: [::]:8086

The culprit is in the [http] block: the section header itself was still commented out (# [http]), so none of the settings under it were being applied. Uncommenting it fixes the problem:
###
### [http]
###
### Controls how the HTTP endpoints are configured. These are the primary
### mechanism for getting data into and out of InfluxDB.
###
[http]
# Determines whether HTTP endpoint is enabled.
enabled = true
# The bind address used by the HTTP service.
# bind-address = ":8086"
# Determines whether HTTP authentication is enabled.
auth-enabled = true
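With the section header uncommented and the service restarted, authentication can be verified from another machine. A quick sketch (host and credentials as in the question):
curl -G 'http://192.168.7.125:8086/query' --data-urlencode 'q=SHOW DATABASES'
# should now return {"error":"unable to parse authentication credentials"} with HTTP 401
curl -G -u admin:password 'http://192.168.7.125:8086/query' --data-urlencode 'q=SHOW DATABASES'
# should succeed and list the databases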

Related

Elasticsearch showing received plaintext http traffic on an https channel

I am learning about services in Linux and I installed Elasticsearch, but it does not seem to work after running sudo service elasticsearch restart: the browser just shows "the connection was reset" when browsing to http://localhost:9200/. It seems that the connection is blocked. I wonder if the problem is with SSL? When I try to use https instead,
it gives:
{"error":{"root_cause":[{"type":"security_exception","reason":"unable to authenticate user [] for REST request [/]","header":{"WWW-Authenticate":["Basic realm=\"security\" charset=\"UTF-8\"","Bearer realm=\"security\"","ApiKey"]}}],"type":"security_exception","reason":"unable to authenticate user [] for REST request [/]","header":{"WWW-Authenticate":["Basic realm=\"security\" charset=\"UTF-8\"","Bearer realm=\"security\"","ApiKey"]}},"status":401}
In addition, when I run curl "http://google.com:443", it shows:
curl: (52) Empty reply from server
elastic.log:
[2022-12-01T02:51:22,944][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [alex-VirtualBox] received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/127.0.0.1:9200, remoteAddress=/127.0.0.1:54614}
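The log line above is the key: the HTTP layer has TLS enabled (the default in Elasticsearch 8.x when security auto-configuration runs), so plain http:// requests are rejected and https:// requests require credentials. A hedged sketch of how one would talk to it instead (the elastic password is whatever was set or auto-generated at install time):
curl -k -u elastic https://localhost:9200/
# -k skips CA verification for a quick test; more properly, point curl at the
# CA that Elasticsearch generated (typical deb/rpm path shown):
curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://localhost:9200/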
elasticsearch.yml:
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: localhost
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
When I type systemctl status elasticsearch.service, it seems to work fine:
systemctl status elasticsearch.service
● elasticsearch.service - Elasticsearch
Loaded: loaded (/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabl>
Active: active (running) since Thu 2022-12-01 02:49:56 HKT; 10s ago
Docs: https://www.elastic.co
Main PID: 11238 (java)
Tasks: 79 (limit: 1783)
Memory: 780.9M
CPU: 20.908s
CGroup: /system.slice/elasticsearch.service
├─11238 /usr/share/elasticsearch/jdk/bin/java -Xms4m -Xmx64m -XX:+UseSerialGC -D>
├─11297 /usr/share/elasticsearch/jdk/bin/java -Des.networkaddress.cache.ttl=60 ->
└─11318 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/con>
System log (seems not very useful):
Dec 1 02:54:54 alex-VirtualBox kernel: [12786.591861] [UFW BLOCK] IN=enp0s3 OUT= MAC=08:00:27:55:db:e2:52:54:00:12:35:02:08:00 SRC=10.0.2.2 DST=10.0.2.15 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=5926 PROTO=TCP SPT=443 DPT=40826 WINDOW=65535 RES=0x00 ACK RST URGP=0

DenyHosts on CentOS 7: option DENY_THRESHOLD_INVALID does not work

Using CentOS 7 and DenyHosts 2.9, I noticed some strange behavior.
My config is set to:
DENY_THRESHOLD_INVALID = 3
DENY_THRESHOLD_VALID = 10
Which, in my understanding, means: after 3 failed login attempts for NON-EXISTING users from host X, deny that host;
after 10 failed login attempts for EXISTING users from host X, deny that host.
While the latter works just fine, the DENY_THRESHOLD_INVALID = 3 setting does not work.
What I noticed is that in /var/log/secure, which DenyHosts parses, logins from non-existing accounts and logins from accounts that exist but use the wrong password are logged differently:
Aug 10 12:32:42 ftp sshd[27176]: Invalid user adminx from xxx.128.30.135 port 42800
Aug 10 12:32:42 ftp sshd[27176]: input_userauth_request: invalid user adminx [preauth]
Aug 10 12:32:42 ftp sshd[27176]: Connection closed by xxx.128.30.135 port 42800 [preauth]
vs.
Aug 10 12:33:46 ftp sshd[27238]: Failed password for exchange from xxx.128.30.135 port 42802 ssh2
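Note that for the non-existing user the connection was closed at the preauth stage, so sshd never logged a "Failed password for invalid user adminx ..." line; if DenyHosts counts only those lines (an assumption worth checking against its regexes), such attempts would be invisible to it. A quick way to compare how often each message type actually appears:
grep -c 'Invalid user' /var/log/secure
grep -c 'Failed password for invalid user' /var/log/secure
grep -c 'Failed password for' /var/log/secure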
Does anyone know if DenyHosts has problems parsing the /var/log/secure file on CentOS for non-existing accounts vs. existing accounts that use wrong passwords?
The DenyHosts debug log also does not say anything; it seems to ignore the login attempts from non-existent users.
Any help would be appreciated. Thanks.

AWS - EC2 - MongoDB replica set time sync issue - NTP - replication lag

We are encountering clock drift issues with our MongoDB replica set running on AWS. This only started happening recently, after we added additional data to the set; before then we did not really notice this issue unless the system was under heavy load. The following error is logged in the mongod.log file sporadically, even when the system is not under load.
To test this we have isolated a set of machines with the same dataset, not in use by our web application, though the error still occurs:
2014-12-12T13:33:51.333+0000 [rsBackgroundSync] changing sync target
because current sync target's most recent OpTime is Dec 12 13:32:42:c
which is more than 30 seconds behind member mongo1:27017 whose most
recent OpTime is 1418391230
From the above, the timestamps show that one of the MongoDB replica set members is over a minute behind. The worst we have seen is 12 minutes out of sync.
This error in turn causes replication lag, and we receive notifications about it from the MongoDB Monitoring Service, although it does correct itself.
The setup is 3 x r3.xlarge AWS Linux instances, one in each availability zone of the eu-west-1 region. The machines have been set up using the Mongo recommended settings with a RAID array and the CloudFormation scripts provided by Mongo. The data is around 4GB in size.
We think the issue is related to NTP sync. By default, on the Amazon Linux AMI, the ntpd service is configured to use a pool of AWS NTP servers hosted on pool.ntp.org.
To try and rule this out, we set up our own NTP server on AWS for the MongoDB servers to sync to. The issue still occurred, so we changed the maxpoll and minpoll settings for the ntpd service on the mongo machines to sync the time every 16 seconds from the NTP server, but the error still occurs.
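For reference, a 16-second poll corresponds to minpoll 4 / maxpoll 4, since ntpd poll intervals are expressed as powers of two (2^4 = 16 seconds). The server line in ntp.conf would look roughly like this (a sketch, reusing the hostname from the config dump below):
server time-server.domain.com iburst minpoll 4 maxpoll 4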
We increased the MongoDB OpLog size as well to see if that would make any difference but it didn’t.
Does anyone else encounter this type of issue? Is there something we are missing?
Cheers,
Colin.
ps -ef |grep ntp;
mongodb1
ntp 5163 1 0 Dec11 ? 00:00:00 ntpd -u ntp:ntp -p /var/run/ntpd.pid -g
ec2-user 15865 15839 0 09:31 pts/2 00:00:00 grep ntp
mongodb2
ntp 4834 1 0 Dec11 ? 00:00:00 ntpd -u ntp:ntp -p /var/run/ntpd.pid -g
ec2-user 19056 19029 0 09:31 pts/0 00:00:00 grep ntp
mongodb3
ntp 5795 1 0 Dec11 ? 00:00:00 ntpd -u ntp:ntp -p /var/run/ntpd.pid -g
ec2-user 26199 26173 0 09:31 pts/0 00:00:00 grep ntp
cat /etc/ntp.conf;
# For more information about this file, see the man pages
# ntp.conf(5), ntp_acc(5), ntp_auth(5), ntp_clock(5), ntp_misc(5), ntp_mon(5).
driftfile /var/lib/ntp/drift
# Permit time synchronization with our time source, but do not
# permit the source to query or modify the service on this system.
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
# Permit all access over the loopback interface. This could
# be tightened as well, but to do so would effect some of
# the administrative functions.
restrict 127.0.0.1
restrict -6 ::1
# Hosts on local network are less restricted.
#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.amazon.pool.ntp.org iburst dynamic
#server 1.amazon.pool.ntp.org iburst dynamic
#server 2.amazon.pool.ntp.org iburst dynamic
#server 3.amazon.pool.ntp.org iburst dynamic
server time-server.domain.com iburst
#broadcast 192.168.1.255 autokey # broadcast server
#broadcastclient # broadcast client
#broadcast 224.0.1.1 autokey # multicast server
#multicastclient 224.0.1.1 # multicast client
#manycastserver 239.255.254.254 # manycast server
#manycastclient 239.255.254.254 autokey # manycast client
# Enable public key cryptography.
#crypto
includefile /etc/ntp/crypto/pw
# Key file containing the keys and key identifiers used when operating
# with symmetric key cryptography.
keys /etc/ntp/keys
# Specify the key identifiers which are trusted.
#trustedkey 4 8 42
# Specify the key identifier to use with the ntpdc utility.
#requestkey 8
# Specify the key identifier to use with the ntpq utility.
#controlkey 8
# Enable writing of statistics records.
#statistics clockstats cryptostats loopstats peerstats
# Enable additional logging.
logconfig =clockall =peerall =sysall =syncall
# Listen only on the primary network interface.
interface listen eth0
interface ignore ipv6
ntpq -npcrv;
remote refid st t when poll reach delay offset jitter
==============================================================================
*172.31.14.137 91.*.*.* 3 u 557 1024 377 1.121 -0.264 0.161
associd=0 status=0615 leap_none, sync_ntp, 1 event, clock_sync,
version="ntpd 4.2.6p5#1.2349-o Sat Mar 23 00:37:31 UTC 2013 (1)",
processor="x86_64", system="Linux/3.14.23-22.44.amzn1.x86_64", leap=00,
stratum=4, precision=-23, rootdelay=23.597, rootdisp=109.962,
refid=172.31.14.137,
reftime=d83a757a.175b5fa1 Tue, Dec 16 2014 9:10:18.091,
clock=d83a77a7.82431efa Tue, Dec 16 2014 9:19:35.508, peer=27361,
tc=10, mintc=3, offset=-0.264, frequency=-13.994, sys_jitter=0.000,
clk_jitter=0.358, clk_wander=0.053
Update: after upgrading to MongoDB 3 using the WiredTiger storage engine, we no longer see this issue.

CouchDB Logging

Due to application requirements, I have an externally accessible CouchDB instance. I would like to see what IP addresses are attempting to authenticate with my database. By checking the couchdb.log file, I can see failed authentication attempts. They look similar to this.
[Mon, 29 Sep 2014 13:43:32 GMT] [info] [<0.28472.7>] 127.0.0.1 - - GET /offline_master/ 401
However, no matter where I connect from, it seems that the IP address that is logged is always 127.0.0.1. Am I misunderstanding how this works? I would really like to see the IP address that is attempting to connect.
The 127.0.0.1 is the address CouchDB is bound to. It's there because you can set up CouchDB to respond differently depending on what host name is being used.
The only way to get the client IP address is by turning the logging level up to "debug". You can do this on the configuration page in Futon.
You get records like this (the client IP is on the first line):
[Tue, 30 Sep 2014 00:14:27 GMT] [debug] [<0.451.4>] 'GET' / {1,1} from "192.168.1.52"
Headers: [{'Accept',"*/*"},
{'Host',"localhost:5984"},
{'User-Agent',"curl/7.30.0"}]
[Tue, 30 Sep 2014 00:14:27 GMT] [debug] [<0.451.4>] OAuth Params: []
[Tue, 30 Sep 2014 00:14:27 GMT] [info] [<0.451.4>] 127.0.0.1 - - GET / 200
Be careful with this. The debug logs are extremely verbose. It doesn't take long to fill up a hard drive.
It is possible to set log levels by module. The module you need to set is couch_httpd. Set the default for the rest to "error" or "fatal".
See: 3.6.2 Per module logging
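In local.ini that per-module setup looks roughly like this (a sketch based on the CouchDB 1.x logging documentation referenced above; adjust to your version):
[log]
level = error

[log_level_by_module]
couch_httpd = debug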

fail2ban does not send email when an IP is banned

I am trying to configure fail2ban to send an email when an IP is banned.
I changed the jail.local config file: I set the action parameter to action_mwl, and I installed sendmail.
I receive emails when fail2ban is stopped or started, but not when an IP is banned, so emails can clearly be sent from the server.
What am I missing?
Thanks
mike@test:~$ sudo cat /etc/fail2ban/jail.local
# Fail2Ban configuration file.
#
# This file was composed for Debian systems from the original one
# provided now under /usr/share/doc/fail2ban/examples/jail.conf
# for additional examples.
#
# Comments: use '#' for comment lines and ';' for inline comments
#
# To avoid merges during upgrades DO NOT MODIFY THIS FILE
# and rather provide your changes in /etc/fail2ban/jail.local
#
# The DEFAULT allows a global definition of the options. They can be overridden
# in each jail afterwards.
[DEFAULT]
# "ignoreip" can be an IP address, a CIDR mask or a DNS host. Fail2ban will not
# ban a host which matches an address in this list. Several addresses can be
# defined using space separator.
ignoreip = 127.0.0.1/8
# "bantime" is the number of seconds that a host is banned.
bantime = 120
# A host is banned if it has generated "maxretry" during the last "findtime"
# seconds.
findtime = 600
maxretry = 3
# "backend" specifies the backend used to get files modification.
# Available options are "pyinotify", "gamin", "polling" and "auto".
# This option can be overridden in each jail as well.
#
# pyinotify: requires pyinotify (a file alteration monitor) to be installed.
# If pyinotify is not installed, Fail2ban will use auto.
# gamin: requires Gamin (a file alteration monitor) to be installed.
# If Gamin is not installed, Fail2ban will use auto.
# polling: uses a polling algorithm which does not require external libraries.
# auto: will try to use the following backends, in order:
# pyinotify, gamin, polling.
backend = auto
# "usedns" specifies if jails should trust hostnames in logs,
# warn when reverse DNS lookups are performed, or ignore all hostnames in logs
#
# yes: if a hostname is encountered, a reverse DNS lookup will be performed.
# warn: if a hostname is encountered, a reverse DNS lookup will be performed,
# but it will be logged as a warning.
# no: if a hostname is encountered, will not be used for banning,
# but it will be logged as info.
usedns = warn
#
# Destination email address used solely for the interpolations in
# jail.{conf,local} configuration files.
destemail = mike@test.com
#
# Name of the sender for mta actions
sendername = Fail2Ban
#
# ACTIONS
#
# Default banning action (e.g. iptables, iptables-new,
# iptables-multiport, shorewall, etc) It is used to define
# action_* variables. Can be overridden globally or per
# section within jail.local file
banaction = iptables-multiport
# email action. Since 0.8.1 upstream fail2ban uses sendmail
# MTA for the mailing. Change mta configuration parameter to mail
# if you want to revert to conventional 'mail'.
mta = sendmail
# Default protocol
protocol = tcp
# Specify chain where jumps would need to be added in iptables-* actions
chain = INPUT
#
# Action shortcuts. To be used to define action parameter
# The simplest action to take: ban only
action_ = %(banaction)s[name=%(__name__)s, port="%(port)s", protocol="%(protocol)s", chain="%(chain)s"]
# ban & send an e-mail with whois report to the destemail.
action_mw = %(banaction)s[name=%(__name__)s, port="%(port)s", protocol="%(protocol)s", chain="%(chain)s"]
%(mta)s-whois[name=%(__name__)s, dest="%(destemail)s", protocol="%(protocol)s", chain="%(chain)s", sendername="%(sendername)s"]
# ban & send an e-mail with whois report and relevant log lines
# to the destemail.
action_mwl = %(banaction)s[name=%(__name__)s, port="%(port)s", protocol="%(protocol)s", chain="%(chain)s"]
%(mta)s-whois-lines[name=%(__name__)s, dest="%(destemail)s", logpath=%(logpath)s, chain="%(chain)s", sendername="%(sendername)s"]
# Choose default action. To change, just override value of 'action' with the
# interpolation to the chosen action shortcut (e.g. action_mw, action_mwl, etc) in jail.local
# globally (section [DEFAULT]) or per specific section
action = %(action_mwl)s
#
# JAILS
#
# Next jails corresponds to the standard configuration in Fail2ban 0.6 which
# was shipped in Debian. Enable any defined here jail by including
#
# [SECTION_NAME]
# enabled = true
#
# in /etc/fail2ban/jail.local.
#
# Optionally you may override any other parameter (e.g. banaction,
# action, port, logpath, etc) in that section within jail.local
[ssh]
enabled = true
port = 23222
filter = sshd
logpath = /var/log/auth.log
maxretry = 4
Here is an extract of my fail2ban log; we can see that an IP is banned, but I did not get an email.
2014-07-03 05:14:01,418 fail2ban.server : INFO Stopping all jails
2014-07-03 05:15:02,140 fail2ban.jail : INFO Jail 'ssh' stopped
2014-07-03 05:15:02,144 fail2ban.server : INFO Exiting Fail2ban
2014-07-03 05:15:13,245 fail2ban.server : INFO Changed logging target to /var/log/fail2ban.log for Fail2ban v0.8.11
2014-07-03 05:15:13,246 fail2ban.jail : INFO Creating new jail 'ssh'
2014-07-03 05:15:13,270 fail2ban.jail : INFO Jail 'ssh' uses pyinotify
2014-07-03 05:15:13,294 fail2ban.jail : INFO Initiated 'pyinotify' backend
2014-07-03 05:15:13,295 fail2ban.filter : INFO Added logfile = /var/log/auth.log
2014-07-03 05:15:13,297 fail2ban.filter : INFO Set maxRetry = 4
2014-07-03 05:15:13,297 fail2ban.filter : INFO Set findtime = 600
2014-07-03 05:15:13,298 fail2ban.actions: INFO Set banTime = 120
2014-07-03 05:15:13,344 fail2ban.jail : INFO Jail 'ssh' started
2014-07-03 05:16:13,533 fail2ban.actions: WARNING [ssh] Ban x.x.x.x
2014-07-03 05:17:13,669 fail2ban.actions: INFO [ssh] x.x.x.x already banned
2014-07-03 05:18:13,734 fail2ban.actions: WARNING [ssh] Unban x.x.x.x
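One way to narrow this down is to trigger a ban by hand and watch whether the mail action fires, independently of real attacks. A sketch (192.0.2.1 is a documentation-range address; the jail name matches the config above):
sudo fail2ban-client set ssh banip 192.0.2.1
# watch /var/log/fail2ban.log and the mail log while this runs
sudo fail2ban-client set ssh unbanip 192.0.2.1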
