ERROR: for logstash "host" network_mode is incompatible with port_bindings - logstash

I was trying to create a listener for SNMP traps in Logstash using the snmptrap input plugin in Docker, but when I run the container it fails with this error.
This is the docker-compose file
version: '3.8'
services:
  logstash_audiocode_snmptrap_listener:
    container_name: logstash_audiocode_snmptrap_listener
    image: docker.elastic.co/logstash/logstash:7.16.3
    # command: bash -c "bin/logstash-plugin install logstash-input-websocket && logstash"
    user: "root:root"
    ports:
      - 1063:1063
    network_mode: host
    restart: unless-stopped
    volumes:
      - ./config/:/usr/share/logstash/config/:ro
      - ./pipeline/:/usr/share/logstash/pipeline/:ro
      - ./audiocodes_yaml/:/usr/share/logstash/audiocodes_yaml/:ro
This is the listener:
input {
  snmptrap {
    port => 1063
    community => ["IPTtrapcommunity"]
    yamlmibdir => "/usr/share/logstash/audiocodes_yaml/"
    # type => "snmp_trap"
    codec => json
  }
}
filter {
  csv {
    separator => ","
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["0.0.0.0:9200"]
    index => "aoudiocodes_trap_listener"
  }
}
I have tried it without this:
ports:
  - 1063:1063
Then the container runs, but it does not receive any data.
When I used tcpdump to check whether the traps were arriving, they were, and I could see them with this command:
tcpdump -i eth0 port 1063
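For what it's worth, the two settings are mutually exclusive, and a minimal sketch of the two working variants (assuming the traps arrive over UDP, the standard transport for SNMP) looks like this; note that Docker port mappings default to TCP, so a bridge-network mapping has to name the protocol explicitly:

# Option A: host networking; drop ports: entirely,
# since the container shares the host's network stack
services:
  logstash_audiocode_snmptrap_listener:
    image: docker.elastic.co/logstash/logstash:7.16.3
    network_mode: host

# Option B: default bridge network; publish the port as UDP
services:
  logstash_audiocode_snmptrap_listener:
    image: docker.elastic.co/logstash/logstash:7.16.3
    ports:
      - "1063:1063/udp"   # /udp is required; plain 1063:1063 only forwards TCP

With option A the snmptrap input should see the same traffic that tcpdump shows on eth0.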

Related

Docker - Files Contain Bad Line Terminators

I set up a development environment using Docker on Windows 10. My Dockerfile and docker-compose.yml use php:8.2.2-apache, mysql:8.0.32, composer:2.5.3, and phpMyAdmin:5.2.1.
I will admit that getting Docker up and running to basically mimic my old xampp development environment has been incredibly frustrating.
Recently, I added robmorgan/phinx 0.13 to my composer.json. Initially, I ran vendor/bin/phinx init from the Docker container's terminal and it successfully created a phinx.php file. I stopped the container and modified my phinx.php file to use values from my .env file. When I reran Docker and went back into the container's terminal to run vendor/bin/phinx create <name>, I got this error in the terminal:
usr/bin/env: 'php\r': No such file or directory
I have read in several places that this is because files have the Windows line terminators instead of the Unix line terminators.
The issue is that I do not understand which file is affected. How can I audit my files to find out what is the culprit?
In case you are curious, these are my docker-compose.yml and phinx.php:
version: '3.9'
services:
  webserver:
    build: ./docker
    image: -redacted-
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./www:/var/www/html
    links:
      - db
  db:
    image: mysql:8.0.32
    ports:
      - "3306:3306"
    volumes:
      - ./database:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - MYSQL_DATABASE=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
  composer:
    image: composer:2.5.3
    command: ["composer", "install"]
    volumes:
      - ./www:/app
  phpmyadmin:
    depends_on:
      - db
    image: phpmyadmin:5.2.1
    restart: always
    ports:
      - 8080:80
    environment:
      PMA_HOST: db
<?php
$dotenv = Dotenv\Dotenv::createImmutable(__DIR__);
$dotenv->load();

$databaseName = $_ENV['MYSQL_DATABASE'];
$username = $_ENV['MYSQL_USER'];
$password = $_ENV['MYSQL_PASSWORD'];

return
[
    'paths' => [
        'migrations' => '%%PHINX_CONFIG_DIR%%/db/migrations',
        'seeds' => '%%PHINX_CONFIG_DIR%%/db/seeds'
    ],
    'environments' => [
        'default_migration_table' => 'phinxlog',
        'default_environment' => 'development',
        'production' => [
            'adapter' => 'mysql',
            'host' => 'localhost',
            'name' => $databaseName,
            'user' => $username,
            'pass' => $password,
            'port' => '3306',
            'charset' => 'utf8',
        ]
    ],
    'version_order' => 'creation'
];
And I am running this to load Docker: docker-compose --env-file=./www/.env up
This should find files with CRLF line endings:
find -type f -print0 | xargs -0 file | grep CRLF
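Once the culprits are found, they can be converted in place; a quick sketch, assuming dos2unix is installed (sed works as a fallback):

# convert one file's line endings in place
dos2unix vendor/bin/phinx

# fallback without dos2unix: strip the trailing \r from every line
sed -i 's/\r$//' vendor/bin/phinx

# keep Git from reintroducing CRLFs on Windows checkouts
git config core.autocrlf input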

amqplib Error: "Frame size exceeds frame max" inside docker container

I am trying to build a simple application with a backend on Node.js + TS and RabbitMQ, based on Docker. There are two containers: a rabbitmq container and a backend container with two servers running, a producer and a consumer. When I try to access the RabbitMQ server, I get the error "Frame size exceeds frame max".
The full producer server code is:
import express from 'express';
import amqplib, { Connection, Channel, Options } from 'amqplib';

const producer = express();

const sendRabbitMq = () => {
  amqplib.connect('amqp://localhost', function (error0: any, connection: any) {
    if (error0) {
      console.log('Some error...');
      throw error0;
    }
  });
};

producer.post('/send', (_req, res) => {
  sendRabbitMq();
  console.log('Done...');
  res.send('Ok');
});

export { producer };
It is imported into the main file, index.ts, and runs from there.
Maybe I also have some bad configuration in Docker. My Dockerfile is:
FROM node:16
WORKDIR /app/backend/src
COPY *.json ./
RUN npm install
COPY . .
And my docker-compose.yml includes this:
version: '3'
services:
  backend:
    build: ./backend
    container_name: 'backend'
    command: npm run start:dev
    restart: always
    volumes:
      - ./backend:/app/backend/src
      - ./conf/myrabbit.conf:/etc/rabbitmq/rabbitmq.config
    ports:
      - 3000:3000
    environment:
      - PRODUCER_PORT=3000
      - CONSUMER_PORT=5672
    depends_on:
      - rabbitmq
  rabbitmq:
    image: rabbitmq:3.9.13
    container_name: 'rabbitmq'
    ports:
      - 5672:5672
      - 15672:15672
    environment:
      - RABBITMQ_DEFAULT_USER=user
      - RABBITMQ_DEFAULT_PASS=user
I would really appreciate your help.
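Two details stand out here, for what it's worth. Inside the Compose network the broker is reachable by its service name rabbitmq, not localhost, and "Frame size exceeds frame max" typically appears when an AMQP client reaches an endpoint that is not speaking AMQP (the 15672 management port, for example). Also, the main amqplib export is promise-based; the callback signature used above belongs to amqplib/callback_api. A minimal promise-based sketch under those assumptions:

import amqplib from 'amqplib';

// The service name from docker-compose.yml resolves inside the Compose network;
// the credentials match RABBITMQ_DEFAULT_USER / RABBITMQ_DEFAULT_PASS.
const RABBIT_URL = 'amqp://user:user@rabbitmq:5672';

const sendRabbitMq = async (): Promise<void> => {
  const connection = await amqplib.connect(RABBIT_URL);
  const channel = await connection.createChannel();
  await channel.assertQueue('demo');                 // hypothetical queue name
  channel.sendToQueue('demo', Buffer.from('hello'));
  await channel.close();
  await connection.close();
};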

(Docker-Compose) UnhandledPromiseRejectionWarning when connecting node and postgres

I am trying to connect the containers for postgres and node. Here is my setup:
The docker-compose.yml file:
version: "3"
services:
postgresDB:
image: postgres:alpine
container_name: postgresDB
ports:
- "5432:5432"
environment:
- POSTGRES_DB=myDB
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=Thisisngo1995!
express-server:
build: ./
environment:
- DB_SERVER=postgresDB
links:
- postgresDB
ports:
- "3000:3000"
Dockerfile:
FROM node:12
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install
COPY . .
COPY ormconfig.docker.json ./ormconfig.json
EXPOSE 3000
CMD ["npm", "start"]
Connecting to postgres:
let { Pool, Client } = require("pg");

let postgres = new Pool({
  host: "postgresDB",
  port: 5432,
  user: "postgres",
  password: "Thisisngo1995!",
  database: "myDB",
});

module.exports = postgres;
and here is how I handled my endpoint:
exports.postgres_get_controller = (req, resp) => {
  console.log("Reached Here");
  postgres
    .query('SELECT * FROM public."People"')
    .then((results) => {
      console.log(results);
      resp.send({ allData: results.rows });
    })
    .catch((e) => console.log(e));
};
Whenever I try to hit the endpoint above, I get the UnhandledPromiseRejectionWarning error in the container. What could the reason be?
Note: I am able to have everything functioning on my local machine (without docker) simply by changing "host: localhost"
Your postgres database name and username should be the same
You can use docker-compose-wait to make sure interdependent services are launched in proper order.
See below on how to use it for your case.
Update the final part of your Dockerfile as below:
# ...
# this will be used to check if DB is up
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.7.3/wait ./wait
RUN chmod +x ./wait
CMD ./wait && npm start
Update some parts of your docker-compose.yml as below:
express-server:
  build: ./
  environment:
    - DB_SERVER=postgresDB
    - WAIT_HOSTS=postgresDB:5432
    - WAIT_BEFORE_HOSTS=4
  links:
    - postgresDB
  depends_on:
    - postgresDB
  ports:
    - "3000:3000"

Not able to connect to Elasticsearch from docker container (node.js client)

I have set up an Elasticsearch/Kibana Docker configuration and I want to connect to Elasticsearch from inside a Docker container using the @elastic/elasticsearch client for Node. However, the connection is timing out.
The project takes inspiration from Patrick Triest: https://blog.patricktriest.com/text-search-docker-elasticsearch/
However, I have made some modifications in order to connect Kibana, use a newer ES image, and use the new Elasticsearch Node client.
I am using the following docker-compose file:
version: "3"
services:
api:
container_name: mp-backend
build: .
ports:
- "3000:3000"
- "9229:9229"
environment:
- NODE_ENV=local
- ES_HOST=elasticsearch
- PORT=3000
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
container_name: elasticsearch
environment:
- node.name=elasticsearch
- cluster.name=es-docker-cluster
- discovery.type=single-node
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- "http.cors.allow-origin=*"
- "http.cors.enabled=true"
- "http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization"
- "http.cors.allow-credentials=true"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data01:/usr/share/elasticsearch/data
ports:
- 9200:9200
networks:
- elastic
kibana:
image: docker.elastic.co/kibana/kibana:7.5.1
ports:
- "5601:5601"
links:
- elasticsearch
networks:
- elastic
depends_on:
- elasticsearch
volumes:
data01:
driver: local
networks:
elastic:
driver: bridge
When building/bringing the containers up, I am able to get a response from ES with curl -XGET "localhost:9200" ("You Know, for Search"...), and Kibana is running and able to connect to the index.
I have the following file located in the backend container (connection.js):
const { Client } = require("@elastic/elasticsearch");
const client = new Client({ node: "http://localhost:9200" });

/* Check the elasticsearch connection */
async function health() {
  let connected = false;
  while (!connected) {
    console.log("Connecting to Elasticsearch");
    try {
      const health = await client.cluster.health({});
      connected = true;
      console.log(health.body);
      return health;
    } catch (err) {
      console.log("ES Connection Failed", err);
    }
  }
}

health();
If I run it outside of the container then I get the expected response:
node server/connection.js
Connecting to Elasticsearch
{
  cluster_name: 'es-docker-cluster',
  status: 'yellow',
  timed_out: false,
  number_of_nodes: 1,
  number_of_data_nodes: 1,
  active_primary_shards: 7,
  active_shards: 7,
  relocating_shards: 0,
  initializing_shards: 0,
  unassigned_shards: 3,
  delayed_unassigned_shards: 0,
  number_of_pending_tasks: 0,
  number_of_in_flight_fetch: 0,
  task_max_waiting_in_queue_millis: 0,
  active_shards_percent_as_number: 70
}
However, if I run it inside of the container:
docker exec mp-backend "node" "server/connection.js"
Then I get the following response:
Connecting to Elasticsearch
ES Connection Failed ConnectionError: connect ECONNREFUSED 127.0.0.1:9200
    at onResponse (/usr/src/app/node_modules/@elastic/elasticsearch/lib/Transport.js:214:13)
    at ClientRequest.<anonymous> (/usr/src/app/node_modules/@elastic/elasticsearch/lib/Connection.js:98:9)
    at ClientRequest.emit (events.js:223:5)
    at Socket.socketErrorListener (_http_client.js:415:9)
    at Socket.emit (events.js:223:5)
    at emitErrorNT (internal/streams/destroy.js:92:8)
    at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
    at processTicksAndRejections (internal/process/task_queues.js:81:21) {
  name: 'ConnectionError',
  meta: {
    body: null,
    statusCode: null,
    headers: null,
    warnings: null,
    meta: {
      context: null,
      request: [Object],
      name: 'elasticsearch-js',
      connection: [Object],
      attempts: 3,
      aborted: false
    }
  }
}
So, I tried changing the client connection to (I read somewhere that this might help):
const client = new Client({ node: "http://172.24.0.1:9200" });
Then I am just "stuck" waiting for a response, with only one console.log of "Connecting to Elasticsearch".
I am using the following version:
"#elastic/elasticsearch": "7.5.1"
As you probably see, I do not have a full grasp of what is happening here... I have also tried to add:
links:
  - elasticsearch
networks:
  - elastic
To the api service, without any luck.
Does anyone know what I am doing wrong here? Thank you in advance :)
EDIT:
I did a "docker network inspect" on the network with *_elastic. There I see the following:
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.22.0.0/16",
"Gateway": "172.22.0.1"
}
]
},
Changing the client to connect to the "Gateway" IP:
const client = new Client({ node: "http://172.22.0.1:9200" });
Then it works! I am still wondering why, as this was just trial and error. Is there any way to obtain this IP without having to inspect the network?
In Docker, localhost (or the corresponding IPv4 address 127.0.0.1, or the corresponding IPv6 address ::1) generally means "this container"; you can't use that host name to access services running in another container.
In a Compose-based setup, the names of the services: blocks (api, elasticsearch, kibana) are usable as host names. The caveat is that all of the services have to be on the same Docker-internal network. Compose creates one for you and attaches containers to it by default. (In your example api is on the default network but the other two containers are on a separate elastic network.) Networking in Compose in the Docker documentation has some more details.
So to make this work, you need to tell your client code to honor the environment variable you're setting that points at Elasticsearch:
const esHost = process.env.ES_HOST || 'localhost';
const esUrl = 'http://' + esHost + ':9200';
const client = new Client({ node: esUrl });
In your docker-compose.yml file delete all of the networks: blocks to use the provided default network. (While you're there, links: is unnecessary and Compose provides reasonable container_name: for you; api can reasonably depends_on: [elasticsearch].)
Since we've provided a fallback for $ES_HOST, if you're working in a host development environment, it will default to using localhost; outside of Docker where it means "the current host" it will reach the published port of the Elasticsearch container.
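Applied to the Compose file above, a minimal sketch of that advice (trimmed to the relevant keys, with all networks: blocks removed so everything lands on the default network) might look like this:

services:
  api:
    build: .
    ports:
      - "3000:3000"
    environment:
      - ES_HOST=elasticsearch   # resolves via the default Compose network
    depends_on:
      - elasticsearch
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana:7.5.1
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch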

Filebeat fails to connect to logstash

I'm using two servers in the cloud; on one server (A) I installed Filebeat, and on the second server (B) I installed Logstash, Elasticsearch, and Kibana. I'm facing a problem sending logs from server A to Logstash on server B.
My Filebeat configuration is:
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /home/vinit/demo/*.log
    fields:
      log_type: apache
    fields_under_root: true

#output.elasticsearch:
  #hosts: ["localhost:9200"]
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

output.logstash:
  hosts: ["XXX.XX.X.XXX:5044"]
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  #ssl.certificate: "/etc/pki/client/cert.pem"
  #ssl.key: "/etc/pki/client/cert.key"
In Logstash, I have enabled the modules system, filebeat, and logstash.
The Logstash configuration is:
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "^%{IP:CLIENT_IP} (?:-|%{USER:IDEN}) (?:-|%{USER:AUTH}) \[%{HTTPDATE:CREATED_ON}\] \"(?:%{WORD:REQUEST_METHOD} (?:/|%{NOTSPACE:REQUEST})(?: HTT$
    add_field => {
      "LOG_TYPES" => "apache-log"
    }
    overwrite => [ "message" ]
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "apache-info-log"
  }
  stdout { codec => rubydebug }
}
In Elasticsearch I set:
network.host: localhost
The errors I am getting are below:
2019-01-18T15:05:47.738Z INFO crawler/crawler.go:72 Loading Inputs: 1
2019-01-18T15:05:47.739Z INFO log/input.go:138 Configured paths: [/home/vinit/demo/*.log]
2019-01-18T15:05:47.739Z INFO input/input.go:114 Starting input of type: log; ID: 10340820847180584185
2019-01-18T15:05:47.740Z INFO log/input.go:138 Configured paths: [/var/log/logstash/logstash-plain*.log]
2019-01-18T15:05:47.740Z INFO log/input.go:138 Configured paths: [/var/log/logstash/logstash-slowlog-plain*.log]
2019-01-18T15:05:47.742Z INFO log/harvester.go:254 Harvester started for file: /home/vinit/demo/info-log.log
2019-01-18T15:05:47.749Z INFO log/input.go:138 Configured paths: [/var/log/auth.log* /var/log/secure*]
2019-01-18T15:05:47.763Z INFO log/input.go:138 Configured paths: [/var/log/messages* /var/log/syslog*]
2019-01-18T15:05:47.763Z INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 1
2019-01-18T15:05:47.763Z INFO cfgfile/reload.go:150 Config reloader started
2019-01-18T15:05:47.777Z INFO log/input.go:138 Configured paths: [/var/log/auth.log* /var/log/secure*]
2019-01-18T15:05:47.790Z INFO log/input.go:138 Configured paths: [/var/log/messages* /var/log/syslog*]
2019-01-18T15:05:47.790Z INFO input/input.go:114 Starting input of type: log; ID: 15514736912311113705
2019-01-18T15:05:47.790Z INFO input/input.go:114 Starting input of type: log; ID: 4004097261679848995
2019-01-18T15:05:47.791Z INFO log/input.go:138 Configured paths: [/var/log/logstash/logstash-plain*.log]
2019-01-18T15:05:47.791Z INFO log/input.go:138 Configured paths: [/var/log/logstash/logstash-slowlog-plain*.log]
2019-01-18T15:05:47.791Z INFO input/input.go:114 Starting input of type: log; ID: 2251543969305657601
2019-01-18T15:05:47.791Z INFO input/input.go:114 Starting input of type: log; ID: 9013300092125558684
2019-01-18T15:05:47.791Z INFO cfgfile/reload.go:205 Loading of config files completed.
2019-01-18T15:05:47.792Z INFO log/harvester.go:254 Harvester started for file: /var/log/secure-20181223
2019-01-18T15:05:47.794Z INFO log/harvester.go:254 Harvester started for file: /var/log/messages-20181223
2019-01-18T15:05:47.797Z INFO log/harvester.go:254 Harvester started for file: /var/log/secure-20181230
2019-01-18T15:05:47.800Z INFO log/harvester.go:254 Harvester started for file: /var/log/messages-20181230
2019-01-18T15:05:47.804Z INFO log/harvester.go:254 Harvester started for file: /var/log/secure-20190106
2019-01-18T15:05:47.804Z INFO log/harvester.go:254 Harvester started for file: /var/log/secure
2019-01-18T15:05:47.804Z INFO log/harvester.go:254 Harvester started for file: /var/log/secure-20190113
2019-01-18T15:05:47.816Z INFO log/harvester.go:254 Harvester started for file: /var/log/messages-20190106
2019-01-18T15:05:47.817Z INFO log/harvester.go:254 Harvester started for file: /var/log/messages
2019-01-18T15:05:47.818Z INFO log/harvester.go:254 Harvester started for file: /var/log/messages-20190113
2019-01-18T15:05:47.855Z INFO pipeline/output.go:95 Connecting to backoff(async(tcp://XXX.XX.X.XXX:5044))
2019-01-18T15:06:18.855Z ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://XXX.XX.X.XXX:5044)): dial tcp XXX.XX.X.XXX:5044: i/o timeout
2019-01-18T15:06:18.855Z INFO pipeline/output.go:93 Attempting to reconnect to backoff(async(tcp://XXX.XX.X.XXX:5044)) with 1 reconnect attempt(s)
Does anyone have any idea how to resolve this and make it work properly?
A related question is Failed to connect to backoff(async(tcp://ip:5044)): dial tcp ip:5044: i/o timeout.
The answer there proposed allowing outgoing TCP connections on port 5044 directly in your cloud provider's settings page, since they may be blocked by default.
In addition to the comments by @Vinit Jordan, who whitelisted port 5044 on EC2 with these steps, I propose a possible solution for the general case.
Please check the default firewall on your Logstash server. You probably have the simple firewall ufw, preconfigured during an initial Nginx setup. I ran into this problem right after installing ELK on machine B and Filebeat on machine A.
I just added a new firewall rule for the Filebeat server and the error disappeared:
sudo ufw allow from <IP_address_of_machine_A> to any port 5044
Then the Filebeat log on machine A showed me:
"message":"Connection to backoff(async(tcp://<IP_address_of_machine_B>:5044)) established"
It is probably also reasonable to add a more general rule for your trusted servers:
sudo ufw allow from <IP_ADDRESS>
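To confirm the path is open before digging further, it can help to check from both sides; a quick sketch, assuming ss and nc are available:

# on machine B: confirm Logstash is actually listening on port 5044
sudo ss -tlnp | grep 5044

# on machine A: confirm the port is reachable over the network
nc -vz <IP_address_of_machine_B> 5044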