Filebeat not load balancing to multiple Logstash (indexer) servers

I tried load balancing between 2 different Logstash indexer servers, but when I add, say, 1000 lines to my log, Filebeat sends the logs exclusively to one server (I enabled stdout on the indexers and can visually check the output to see which Logstash server is receiving the log events).
My Filebeat config:
filebeat:
  prospectors:
    -
      paths:
        - "D:/ApacheLogs/Test/access.*.log"
      input_type: log
      document_type: my_test_log
      scan_frequency: 1s
  registry_file: "C:/ProgramData/filebeat/registry"
output:
  logstash:
    hosts: ["10.231.2.223:5044", "10.231.4.143:5044"]
    loadbalance: true
shipper:
logging:
  files:
Will support be added to disable Filebeat's persistent TCP connection? I currently cannot use an AWS ELB because its sticky connections send everything to one Logstash server until the connection is reset. Is this not the right architecture for this? Should I be sending to a Redis queue instead? I could not find any documentation on how to make Filebeat send to a Redis queue.
Something like the following did not work either, and I can't find a way to debug it because Filebeat leaves no logs:
filebeat:
  prospectors:
    -
      paths:
        - "D:/ApacheLogs/Test/access.*.log"
      input_type: log
      document_type: my_test_log
      scan_frequency: 1s
  registry_file: "C:/ProgramData/filebeat/registry"
output:
  redis:
    # Set the host and port where to find Redis.
    host: "logstash-redis.abcde.0001.usw2.cache.amazonaws.com"
    port: 6379
shipper:
logging:
  level: warning
  # enable file rotation with default configuration
  to_files: true
  files:
    path: C:\temp\filebeat.log
Version:
Windows server: Filebeat 1.2.2 (Windows, 386)
Logstash indexer server: Logstash 2.3.2
Operating system:
Windows server: Microsoft Windows NT 6.0.6002 Service Pack 2
Logstash indexer server: RHEL Linux 4.1.13-19.30.amzn1.x86_64

Filebeat should really solve this, but since they advertise it as being as lightweight as possible, don't hold your breath.
I don't know how easy it is to get HAProxy running on Windows, but it should solve your problem if you can get it installed:
https://serverfault.com/questions/95427/windows-replacement-for-haproxy
Use layer-4 round-robin load balancing. You'll probably want to install HAProxy on every machine running Filebeat: one HAProxy frontend listens on localhost:5044 and maps to multiple Logstash backends, along the lines of the sketch below.
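A minimal haproxy.cfg sketch for that setup, reusing the two Logstash addresses from the question (the listener name is arbitrary); Filebeat's hosts would then point at 127.0.0.1:5044:

listen filebeat-to-logstash
    bind 127.0.0.1:5044
    mode tcp                                  # layer 4, no HTTP parsing
    balance roundrobin                        # alternate between the indexers
    server logstash1 10.231.2.223:5044 check
    server logstash2 10.231.4.143:5044 check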

You can send your Filebeat output to Redis with the config below:
output:
  redis:
    host: "host"
    port: <port>
    save_topology: true
    index: "index-name"
    db: 0
    db_topology: 1
    timeout: 5
    reconnect_interval: 1
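On the Logstash side you then need a matching redis input. A sketch, assuming Filebeat pushes events onto a Redis list whose name comes from the index setting above:

input {
  redis {
    host      => "host"         # same Redis endpoint as in the Filebeat config
    port      => 6379
    db        => 0
    data_type => "list"         # Filebeat's redis output writes to a list
    key       => "index-name"   # must match the Filebeat "index" setting
  }
}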

Related

How to set hazelcast tcp strategy?

We are running services on a cluster as Docker containers.
Since we cannot use Eureka or multicast, we are trying to use Hazelcast TCP discovery. Currently, the configuration looks like this (example):
cluster:
  enabled: true
  hazelcast:
    useSiteLocalInterfaces: true
    discovery:
      tcp:
        enabled: true
        members:
          - 10.10.10.1
          - 10.10.10.2
          - 10.10.10.3
      azure:
        enabled: false
      multicast:
        enabled: false
      kubernetesDns:
        enabled: false
During service start, we get the following log message:
Members configured for TCP Hazelcast Discovery after removing local addresses: [10.10.10.1, 10.10.10.2, 10.10.10.3]
That means the service didn't determine its own local IP correctly.
Later in the log, the following message appears: [LOCAL] [hazelcast-test-service:hz-profile] [3.12.2] Picked [172.10.0.1]:5701, using socket ServerSocket[addr=/0.0.0.0,localport=5701], bind any local is true
Obviously, the service determines its local IP to be 172.10.0.1. We have no idea where this IP comes from; it doesn't exist in the cluster.
Is there a way to give Hazelcast a hint about how to discover its local IP?
The address 172.10.0.1 must be one of the network interfaces inside your container. You can ssh into your Docker container and check the network interfaces (e.g. with ifconfig).
If you want to use another network interface, you can configure it with the environment variable HZ_NETWORK_PUBLICADDRESS. For example, in your case, one of the members can be started with the following docker command.
docker run -e HZ_NETWORK_PUBLICADDRESS=10.10.10.1:5701 -p 5701:5701 hazelcast/hazelcast
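If you configure Hazelcast through hazelcast.yaml instead of environment variables, the equivalent would look roughly like this (a sketch against the 3.12 YAML schema; 10.10.10.1 is the example member from the question):

hazelcast:
  network:
    # Address this member advertises to the other members
    public-address: 10.10.10.1:5701
    # Restrict which local interface Hazelcast binds to
    interfaces:
      enabled: true
      interfaces:
        - 10.10.10.*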
Please read more at Hazelcast Docker Image: Hazelcast Hello World

Replication failed to connect to server [IP:27017] on first connect [MongoError: getaddrinfo ENOTFOUND

I have set up replication and am trying to connect with this URI:
mongodb://[userName:password]@IP1:27017,IP2:27017/dbName?authSource=admin&w=1&replicaSet=replicaqa
but I am getting the error below:
{ MongoError: failed to connect to server [host_name_ip1:27017] on first connect
[MongoError: getaddrinfo ENOTFOUND [host_name_ArbiterIP] [host_name_ArbiterIP]:27017]
When I try to connect to [IP1:27017] individually, without replication, it works.
Here is my mongod.conf:
systemLog:
  destination: file
  logAppend: true
  path: /var/mongodb/mongod.log

# Where and how to store data.
storage:
  dbPath: /data/var/lib/mongo
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0

security:
  authorization: "disabled"

#operationProfiling:

replication:
  replSetName: "replicaqa"
Am I missing something while configuring replication?
I can see that the mongod instances have started with replication (the shell prompts show PRIMARY>, SECONDARY>, and ARBITER>), and each instance is at a remote location.
Found the mistake.
When we configure replication with rs.conf(), we give domain names and add the corresponding DNS entries to the /etc/hosts file.
In my case, I had updated it on the mongo instances (/etc/hosts) but not on the machine where my server is hosted (the server is hosted on another instance).
I updated the hosts file on the machine where the Node service runs, and now it's working for me.
Note that connecting without replication worked; the issue only appeared when connecting with replication.
(It doesn't matter if I put IP addresses in my DB connection string, like
mongodb://[userName:password]@IP1:27017,IP2:27017/dbName?authSource=admin&w=1&replicaSet=replicaqa
The driver will still try to resolve the domain names, because that is what was configured in rs.conf(); that's what happened to me.)
Path of the hosts file on Windows:
C:\Windows\System32\drivers\etc\hosts
Path of the hosts file on Linux/CentOS:
/etc/hosts
DNS entries:
10.XX.XX.XX viavi.local
10.XX.XX.XXX domain_name_configured_while_replication
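For context, the reason the names matter: the replica set configuration stores each member as a host:port string, and every client resolves those strings itself. A sketch of the initial setup in the mongo shell (hostnames here are illustrative):

// Run once on the member that should become primary.
rs.initiate({
  _id: "replicaqa",
  members: [
    { _id: 0, host: "mongo-node-1:27017" },
    { _id: 1, host: "mongo-node-2:27017" },
    { _id: 2, host: "mongo-arbiter:27017", arbiterOnly: true }
  ]
})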

How to connect jhipster on aws to elk cloud

I need to connect my JHipster app, which runs on AWS Elastic Beanstalk, to ELK Cloud.
It does not work with this:
jhipster:
  logging:
    logstash:
      enabled: true
      host: localhost # If using a Virtual Machine on Mac OS X or Windows with docker-machine, use the Docker host's IP here
      port: 5000
      queueSize: 512
Reference: https://www.jhipster.tech/monitoring/
The answer is that you have to connect to Logstash: JHipster emits logs in a format intended for a Logstash service, which in turn forwards them to Elasticsearch -> Kibana. Something along the lines of the sketch below should work.
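A sketch of the application config, assuming your ELK Cloud deployment exposes a Logstash TCP input; the host and port are placeholders for your own endpoint:

jhipster:
  logging:
    logstash:
      enabled: true
      host: my-logstash.elk-cloud.example.com  # placeholder: your hosted Logstash input
      port: 5000
      queueSize: 512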

How to use port forwarding to connect to docker container using DNS name

I have 2 Redis containers running on the same machine, m1.
container1 has port mapping 6379 to 6400
docker run -d -p 6379:6400 myredisimage1
container2 has port mapping 6379 to 7500
docker run -d -p 6379:7500 myredisimage2
I am looking for a solution where another machine, m2, can communicate with machine m1 using different DNS names but the same port number:
redis.container1.com:6379
redis.container2.com:6379
and I would like those requests redirected to the proper containers inside machine m1.
Is it possible to achieve this?
This is possible, but hacky. First, ask yourself if you really need to do this, or if you can get away with just using different ports for the containers. Anyway, if you do absolutely need to do this, here's how:
Each Docker container gets its own IP address that is accessible from the host machine. AFAIK, these are assigned pseudo-randomly at run time, but you can look them up with docker inspect $CONTAINER_ID, for example:
docker inspect e804af2472ca
[
    {
        "Id": "e804af2472ca605dec0035f45d3bd05c1fbccee31e6c09381b0c16657378932f",
        "Created": "2016-02-02T21:34:12.49059198Z",
        ...
        "Gateway": "172.17.0.1",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "IPAddress": "172.17.0.6",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        ...
    }
]
In this case, we know this container's IP address as seen from the host is 172.17.0.6 (the Gateway entry, 172.17.0.1, is the host side of the bridge). That IP address is fully usable from the host, so you can have something proxy redis.container1.com to it and redis.container2.com to your other container's IP. You'd need to refresh the proxy's addresses every time the containers come up, so this would definitely not be ideal, but it should work.
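If you need just the address for scripting, docker inspect accepts a Go-template --format flag (container ID from the example above; this field is for the classic bridge network):

docker inspect --format '{{ .NetworkSettings.IPAddress }}' e804af2472ca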
Again, my recommendation overall is don't do this.
I'm not sure if I'm getting you right, but how could you start two containers that both publish the same host port?
It seems to me that this should be handled by a load balancer. Try HAProxy and set up two ACLs, one for each domain name.
I would go with something like this, using Docker Compose.
The Compose setup to deploy the Docker images:
redis-1:
  container_name: redis-1
  image: myredis
  restart: always
  expose:
    - "6400"
redis-2:
  container_name: redis-2
  image: myredis
  restart: always
  expose:
    - "6400"
haproxy:
  container_name: haproxy
  image: million12/haproxy
  restart: always
  command: -n 500
  ports:
    - "6379:6379"
  links:
    - redis-1:redis.server.one
    - redis-2:redis.server.two
  volumes:
    - /path/to/my/haproxy.cfg:/etc/haproxy/haproxy.cfg
And then a custom HAProxy config:
global
    chroot /var/lib/haproxy
    user haproxy
    group haproxy
    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private
    # Default ciphers to use on SSL-enabled listening sockets.
    ssl-default-bind-ciphers AES256+EECDH:AES256+EDH:AES128+EDH:EECDH:!aNULL:!eNULL:!LOW:!DES:!3DES:!RC4
    spread-checks 4
    tune.maxrewrite 1024
    tune.ssl.default-dh-param 2048

defaults
    mode http
    balance roundrobin
    option dontlognull
    option dontlog-normal
    option redispatch
    maxconn 5000
    timeout connect 10s
    timeout client 25s
    timeout server 25s
    timeout queue 30s
    timeout http-request 10s
    timeout http-keep-alive 30s
    # Stats
    stats enable
    stats refresh 30s
    stats hide-version

frontend http-in
    bind *:6379
    mode tcp
    # Note: hdr_end(host) is an HTTP-layer fetch; in tcp mode (and with the
    # Redis protocol, which has no Host header) these ACLs will not match,
    # so traffic falls through to default_backend.
    acl is_redis1 hdr_end(host) -i redis.server.one
    acl is_redis2 hdr_end(host) -i redis.server.two
    use_backend redis1 if is_redis1
    use_backend redis2 if is_redis2
    default_backend redis1

backend redis1
    server r1 redis.server.one:6379

backend redis2
    server r2 redis.server.two:6379

Connection attempt failed when connecting to MongoDB deployment from mongo shell

First question and complete beginner, so apologies in advance for any silly mistakes.
I have created a server on Amazon Web Services and then linked it through MongoDB Cloud Manager, where I made a replica set.
I have been following the tutorial in the MongoDB Cloud documentation but have become stuck on the final part, "Connect to a MongoDB Process".
It says "Cloud Manager provides a mongo shell command that you can use to connect to the MongoDB process if you are connecting from the system where the deployment runs". Can I not do this because the deployment is running on the Amazon server?
When I enter the mongo shell command this is what it reads:
MongoDB shell version: 3.0.4
connecting to: AM-0.amigodb.0813.mongodbdns.com:27001/AmigoMain_1
2015-08-07T18:41:56.806+0100 W NETWORK
Failed to connect to 52.18.23.14:27001 after 5000 milliseconds, giving up.
2015-08-07T18:41:56.809+0100 E QUERY
Error: couldn't connect to server AM-0.amigodb.0813.mongodbdns.com:27001 (52.18.23.14), connection attempt failed
at connect (src/mongo/shell/mongo.js:181:14)
at (connect):1:6 at src/mongo/shell/mongo.js:181
exception: connect failed
I followed the instructions for the security settings on Amazon Web Services, but I think I may well have made a mistake.
Would greatly appreciate any help or where to go for answers.
Thanks,
Louis
MongoDB by default only listens for connections on localhost. You'll need to edit mongod.conf and add your server's IP to the bindIp setting (net.bindIp).
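A minimal sketch of the relevant fragment, assuming the non-standard port 27001 from the error output above; note that 0.0.0.0 listens on all interfaces, so restrict access with your AWS security group. Restart mongod afterwards for the change to take effect.

net:
  port: 27001
  bindIp: 0.0.0.0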
This is my mongod.conf file:
processManagement:
  fork: true
net:
  bindIp: 127.0.0.1
  port: 27017
storage:
  dbPath: "/data/db"
  journal:
    enabled: true
systemLog:
  destination: file
  path: "/var/log/mongod.log"
  logAppend: true
It also gave me the same error message when I changed my bindIp to 0.0.0.0.
