I'm getting this error when opening the web interface URL:
Server currently unavailable
We are experiencing problems connecting to the Graylog server running on http://127.0.0.1:9000/api. Please verify that the server is healthy and working correctly.
You will be automatically redirected to the previous page once we can connect to the server.
docker-compose.yml:
graylog:
  image: graylog2/server:2.3.0-1
  environment:
    GRAYLOG_PASSWORD_SECRET: xxx
    GRAYLOG_ROOT_PASSWORD_SHA2: xxx
    GRAYLOG_WEB_ENDPOINT_URI: http://example.com/api/
    GRAYLOG_REST_LISTEN_URI: http://0.0.0.0:9000/api/
    GRAYLOG_WEB_LISTEN_URI: http://0.0.0.0:9000/
    GRAYLOG_ELASTICSEARCH_CLUSTER_NAME: graylog
    GRAYLOG_ELASTICSEARCH_HOSTS: http://graylog-elasticsearch:9200
  depends_on:
    - graylog-elasticsearch
    - mongo
  networks:
    - traefik
    - default
    - graylog
  deploy:
    labels:
      - "traefik.port=9000"
      - "traefik.tags=logging"
      - "traefik.docker.network=infra_traefik"
      - "traefik.backend=graylog"
    restart_policy:
      condition: on-failure
    replicas: 1
    placement:
      constraints:
        - node.labels.name == manager-1
As you can see, everything should work without any problem.
Here's what netstat shows:
root@6399d2a13c5d:/usr/share/graylog# netstat -tulnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address       Foreign Address     State    PID/Program name
tcp        0      0 0.0.0.0:9000        0.0.0.0:*           LISTEN   -
tcp        0      0 127.0.0.11:42255    0.0.0.0:*           LISTEN   -
udp        0      0 127.0.0.11:51199    0.0.0.0:*                    -
Here's the container's printenv:
root@6399d2a13c5d:/usr/share/graylog# printenv
GRAYLOG_ELASTICSEARCH_CLUSTER_NAME=graylog
HOSTNAME=6399d2a13c5d
TERM=xterm
GRAYLOG_WEB_ENDPOINT_URI=http://example.com/api/
GRAYLOG_REST_LISTEN_URI=http://0.0.0.0:9000/api/
GRAYLOG_ROOT_PASSWORD_SHA2=xxx
CA_CERTIFICATES_JAVA_VERSION=20140324
GRAYLOG_PASSWORD_SECRET=xxx
GRAYLOG_REST_TRANSPORT_URI=http://example.com/api/
PWD=/usr/share/graylog
JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/jre
LANG=C.UTF-8
JAVA_VERSION=8u72
SHLVL=1
HOME=/root
JAVA_DEBIAN_VERSION=8u72-b15-1~bpo8+1
GRAYLOG_ELASTICSEARCH_HOSTS=http://graylog-elasticsearch:9200
GRAYLOG_WEB_LISTEN_URI=http://0.0.0.0:9000/
GOSU_VERSION=1.7
GRAYLOG_SERVER_JAVA_OPTS=-Xms1g -Xmx2g -XX:NewRatio=1 -XX:MaxMetaspaceSize=256m -server -XX:+ResizeTLAB -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:+CMSClassUnloadingEnabled -XX:+UseParNewGC -XX:-OmitStackTraceInFastThrow
_=/usr/bin/printenv
I assume the problem may be with this custom header, which Graylog probably needs:
RequestHeader set X-Graylog-Server-URL "http://graylog.example.org/api/"
For the implementation example you have given, I am assuming that example.org is a remote host rather than 127.0.0.1. In that situation, you are correct in assuming that the additional header is needed.
From my experience, you should strip api/ from the URL for end-user web access.
You will need to set the request header X-Graylog-Server-URL, e.g. curl -H "X-Graylog-Server-URL: http://graylog.example.org/" http://127.0.0.1:9000. The Graylog web proxy configuration documentation gives some good info if you want to set up a web server in front of your Graylog server. It's just a pity that (at the time of writing this) the Docker examples do not include a basic nginx config.
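For reference, here is a minimal nginx sketch of such a proxy. It is an illustration, not an official example: the server name graylog.example.org and the upstream 127.0.0.1:9000 are assumptions to replace with your own values.

server {
    listen 80;
    server_name graylog.example.org;

    location / {
        # tell the Graylog web interface which public URL the API is reachable on
        proxy_set_header X-Graylog-Server-URL http://graylog.example.org/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:9000;
    }
}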
I am using Kubernetes and had to add the following to my ingress annotations to get traffic in correctly:
ingress:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-Graylog-Server-URL https://$server_name/;
I think you should just use
GRAYLOG_WEB_ENDPOINT_URI=http://<your-host-ip>/api/
without the REST URI settings.
I have an unsecured Postfix instance in a container that listens on port 25. This port is not exposed using a Service. The idea is that only a PHP container that runs inside the same pod should be able to connect to Postfix, and there is no need for additional Postfix configuration.
Is there any way for other processes that run in the same network or Kubernetes cluster to connect to this hidden port?
From what I know, only other containers in the same Pod can connect to an unexposed port, via localhost.
I'm interested from a security point of view.
P.S. I know that one should make sure there are multiple levels of security in place, but I'm interested only theoretically in whether there is some way to connect to this port from outside the pod.
From what I know, only other containers in the same Pod can connect to an unexposed port, via localhost.
Not exactly.
How this is implemented is a detail of the particular container runtime in use.
...I'm interested only theoretically if there is some way to connect to this port from outside the pod.
So here we go :)
For example, on GKE you can easily access a Pod from another Pod if you know the target Pod's IP.
I have used the following setup on GKE:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    run: fake-web
  name: fake-default-knp
spec:
  containers:
  - image: mendhak/http-https-echo
    imagePullPolicy: IfNotPresent
    name: fake-web
The Dockerfile for that image can be found here. It specifies EXPOSE 80 443, so the container listens on these two ports.
$ kubectl exec fake-default-knp -- netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address    State    PID/Program name
tcp        0      0 :::443           :::*               LISTEN   1/node
tcp        0      0 :::80            :::*               LISTEN   1/node
I have no services:
$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   40d
and only two Pods:
$ kubectl get pods -o wide
NAME                 READY   STATUS    RESTARTS   AGE   IP
busybox-sleep-less   1/1     Running   3476       40d   10.52.1.6
fake-default-knp     1/1     Running   0          13s   10.52.0.50
And I can connect to it:
$ kubectl exec busybox-sleep-less -- telnet 10.52.0.50 80
Connected to 10.52.0.50
$ kubectl exec busybox-sleep-less -- telnet 10.52.0.50 443
Connected to 10.52.0.50
As you can see, the container is accessible on POD_IP:container_port from another pod (located on another node).
P.S. It's worth checking "inter-process communication (IPC)" if you really would like to continue using unsecured Postfix and prefer avoiding unauthorized access from outside the Pod. It is described here.
Hope that helps!
Edit 30-Jan-2020
I decided to play with it a little bit. Technically, you can achieve what you want with the help of iptables. You need to specifically ACCEPT all traffic from localhost on port 25 and DROP it from everywhere else.
Something like the following (these rules use port 80 to match the echo container from the test above; substitute 25 for Postfix):
cat iptab.txt
# Generated by xtables-save v1.8.2 on Thu Jan 30 16:37:27 2020
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -s 127.0.0.1/32 -p 6 -m tcp --dport 80 -j ACCEPT
-A INPUT -p 6 -m tcp --dport 80 -j DROP
COMMIT
I've tested it and can't telnet on port 80 from anywhere except that very Pod. Please note that I had to run my container in privileged mode in order to be able to edit iptables rules directly from the Pod. But that is going beyond the initial question. :)
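For completeness, a minimal sketch of the Pod securityContext that allows this. It is an assumption that the NET_ADMIN capability is enough for iptables in your runtime; I used full privileged mode in the test above.

spec:
  containers:
  - name: fake-web
    image: mendhak/http-https-echo
    securityContext:
      # needed to edit iptables rules from inside the container;
      # privileged: true also works but grants far more
      capabilities:
        add: ["NET_ADMIN"]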
Yes, you can use kubectl port-forward to set up a tunnel directly to it for testing purposes.
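For example (the pod name postfix and the local port 2525 are hypothetical):

$ kubectl port-forward pod/postfix 2525:25
# in another shell on the same machine:
$ telnet 127.0.0.1 2525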
What I want to achieve:
docker swarm on localhost
a dockerized reverse proxy that forwards subdomain.domain to the app container
a container with the app
What I have done:
changed /etc/hosts so that it now looks like:
127.0.0.1 localhost
127.0.0.1 subdomain.localhost
set up Traefik to forward word.beluga to a specific container
What is the problem:
I can't get to the container via the subdomain. It works if I use the port, though.
curl gives different results for the subdomain and the port.
The question:
What is the problem, and why?
How do I debug it and find out whether the problem is Docker- or network-related? (How do I check whether the request even reached my container?)
I'll add that I have also tried to do it with docker-machine (VirtualBox), but it wasn't working, so I moved to localhost; as you can see, that didn't help much.
I am losing hope, so any hint would be appreciated. Thank you in advance.
There’s no such thing as subdomains of localhost. By near-universal convention, localhost resolves to the IPv4 address 127.0.0.1 and the IPv6 address ::1.
You can still test virtual hosts with Docker, but you will have to use the port:
curl -H "Host: sub.localhost" http://localhost:8000
Late to respond, but I was able to achieve this using Traefik 2.x's routers feature, like so:
labels:
  - "traefik.http.routers.<unique name>.rule=Host(`subdomain.localhost`)"
In the docker-compose file:
version: '3.9'

services:
  app:
    image: myapp:latest
    labels:
      - "traefik.http.routers.myapp.rule=Host(`myapp.localhost`)"

  reverse-proxy:
    image: traefik:v2.4
    command: --api.insecure=true --providers.docker
    ports:
      - "80:80"
      # The Web UI (enabled by --api.insecure=true)
      - "9000:8080"
    volumes:
      # So that Traefik can listen to the Docker events
      - /var/run/docker.sock:/var/run/docker.sock
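With this stack running, a quick way to verify the routing (service and host names as in the sketch above) is to send the Host header explicitly, which works even if your resolver does not map *.localhost:

curl -H "Host: myapp.localhost" http://localhost/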
I think the reason it works is that Traefik intercepts everything on localhost and only then applies its routing rules, so this is a Traefik-specific answer.
I recently installed Apache 2.4.20 with SSL enabled using openssl 1.0.2j.
After updating the httpd.conf and httpd-ssl.conf files and trying to start Apache while listening on port 443, I get the following error:
(13)Permission denied: -----: make_sock: could not bind to address [::]:443
(13)Permission denied: -----: make_sock: could not bind to address 0.0.0.0:443
no listening sockets available, shutting down
Here is what I have for config:
httpd.conf:
Listen 51000
#Listen 443
#Secure (SSL/TLS) connections
Include conf/extra/httpd-ssl.conf
httpd-ssl.conf:
Listen 443
If I comment out this line in the httpd-ssl.conf file, my apache starts up fine:
attempting to start apache
done
However with it I get the socket error every time.
I ran the following as root:
netstat -tlpn | grep :443
Returned nothing.
lsof -i tcp:443
Returned nothing.
I've read somewhere that only root can bind to ports below 1024, but I don't know the validity of that statement. Apache is not being run here as root - would that be the issue?
The problem is that 443 is a privileged port, and you are trying to listen as a non-root user.
See: privileged ports and why are privileged ports restricted to root.
There are also ways to get non-root users to bind to privileged ports.
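For example, on Linux you can grant the binary the CAP_NET_BIND_SERVICE capability so it may bind to ports below 1024 without running as root. The httpd path below is an assumption; adjust it to your installation:

# allow this specific binary to bind to privileged ports
sudo setcap 'cap_net_bind_service=+ep' /usr/local/apache2/bin/httpd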
If you are using Docker with docker-compose: this happens when you use a non-root container, like the official Bitnami images. We used user: root and network_mode: host when the container needs to bind to the host network.
apache:
  image: bitnami/apache:2.4
  container_name: "apache"
  ports:
    - 80:80
  network_mode: host
  privileged: true
  user: root
  environment:
    DOCKER_HOST: "unix:///var/run/docker.sock"
  env_file:
    - .env
  volumes:
    - ./setup/apache/httpd.conf:/opt/bitnami/apache/conf/httpd.conf
Hope it helps!
I am trying to get the client's IP address from the request object in my Node.js server.
My setup is:
I run two Docker containers, one for HAProxy and the other for Node.js with the Express framework. All incoming traffic is first received by HAProxy, which I use for proxying and load balancing. HAProxy forwards requests to the appropriate backends based on the ACLs in the configuration file.
I tried accessing the x-forwarded-for request header inside my Node.js app, but it only returned the IP of the Docker network gateway interface, 172.17.0.1.
Heading over to the HAProxy configuration, using option forwardfor header X-Client-IP in the defaults block also set the x-client-ip header to the Docker network gateway IP, and the debug logs log the same IP.
So this is the trouble: since HAProxy is running inside a container, it believes the Docker network gateway interface is the client.
How can I get the actual client's IP to haproxy inside the container so that it can forward it to nodejs?
This is my haproxy configuration file:
global
    debug
    maxconn 4096

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    timeout http-keep-alive 50000ms
    option http-keep-alive
    option http-server-close
    option forwardfor header X-Client-IP

frontend http-in
    bind *:80
    acl is_api hdr_end(host) -i api.3dphy-dev.com
    use_backend api if is_api
    default_backend default

backend default
    server s0 "${DOCKER_INTERFACE_IP}:3000"

backend api
    balance leastconn
    option httpclose
    option forwardfor
    server s1 "${DOCKER_INTERFACE_IP}:17884"
I run my haproxy container using:
docker run -d --name haproxy_1 -p 80:80 -e DOCKER_INTERFACE_IP=`ifconfig docker0 | grep -oP 'inet addr:\K\S+'` -v $(pwd)/config/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro haproxy:1.6
Note: I am not using any firewall. Also, feel free to suggest any improvements in my configuration. Keep-alive is also proving to be an issue.
I finally managed to find a solution after scouring the Docker forum.
The solution is a two-step process.
First, I needed to update my HAProxy configuration to this:
global
    debug
    maxconn 4096

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    timeout http-keep-alive 50000ms
    option http-keep-alive
    option http-server-close

frontend http-in
    bind *:80
    option forwardfor
    acl is_site hdr_end(host) -i surenderthakran-dev.com
    use_backend site if is_site
    default_backend default

backend default
    server s0 "${DOCKER_INTERFACE_IP}:3000"

backend site
    balance leastconn
    option httpclose
    option forwardfor
    server s1 "${DOCKER_INTERFACE_IP}:17884"
Notice the addition of option forwardfor in the frontend http-in block. This tells the frontend part of HAProxy to add the client IP to the request header.
Second, the docker run command should be updated to:
docker run -d --name haproxy_1 -p 80:80 -e DOCKER_INTERFACE_IP=`ifconfig docker0 | grep -oP 'inet addr:\K\S+'` -v $(pwd)/config/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro --net=host haproxy:1.6
Notice the addition of the --net=host option in the docker run command. It tells Docker to launch the new container using the host's network stack, so HAProxy sees the real client address instead of the Docker gateway.
Now the original client IP is added to the request header and can be accessed in the x-forwarded-for request header in any application to which the request is forwarded.
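On the Node.js side, a minimal Express sketch for reading it (the port is an assumption; trust proxy is a standard Express setting):

const express = require('express');
const app = express();

// trust the X-Forwarded-For header set by HAProxy so req.ip reflects the real client
app.set('trust proxy', true);

app.get('/', (req, res) => {
  res.send(`client ip: ${req.ip}`);
});

app.listen(3000);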
This is not possible the way HAProxy works: it keeps throwing errors at startup when it cannot reach a backend host, because it needs to fully resolve the address. I have tried a lot of workarounds (maybe it's possible), but I gave up and made this run with docker-compose.
I posted a running example that might help in an earlier post.
The gist is to link the containers to a hostname that actually already exists. This is done with Docker linking.
docker-compose.yml
api1:
  build: .
  dockerfile: ./Dockerfile
  ports:
    - 3955
  links:
    - mongo
    - redis
  environment:
    - REDIS_HOST=redis
    - MONGO_HOST=mongo
    - IS_TEST=true
  command: "node app.js"

api2:
  build: .
  dockerfile: ./Dockerfile
  ports:
    - 3955
  links:
    - mongo
    - redis
  environment:
    - REDIS_HOST=redis
    - MONGO_HOST=mongo
    - IS_TEST=true
  command: "node app.js"

mongo:
  image: mongo
  ports:
    - "27017:27017"
  command: "--smallfiles --logpath=/dev/null"

redis:
  image: redis
  ports:
    - "6379:6379"

haproxy:
  image: haproxy:1.5
  volumes:
    - ./cluster:/usr/local/etc/haproxy/
  links:
    - "api1"
    - "api2"
  ports:
    - 80:80
    - 70:70
  expose:
    - "80"
    - "70"
haproxy.cfg
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 10000
    timeout server 10000

listen stats :70
    stats enable
    stats uri /

frontend balancer
    bind 0.0.0.0:80
    mode http
    default_backend aj_backends

backend aj_backends
    mode http
    balance roundrobin
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    option httpchk HEAD / HTTP/1.1\r\nHost:localhost
    default-server inter 3s fall 5
    server api1 api1:3955
    server api2 api2:3955
I have 2 Redis containers running on the same machine, m1.
container1 has port mapping 6379 to 6400
docker run -d -p 6379:6400 myredisimage1
container2 has port mapping 6379 to 7500
docker run -d -p 6379:7500 myredisimage2
I am looking for a solution where another machine, m2, can communicate with machine m1 using different DNS names but the same port number:
redis.container1.com:6379
redis.container2.com:6379
and I would like to redirect each request to the proper container inside machine m1.
Is it possible to achieve this?
This is possible, but hacky. First, ask yourself if you really need to do this, or if you can get away with just using different ports for the containers. Anyway, if you do absolutely need to do this, here's how:
Each Docker container gets its own IP address, accessible from the host machine. AFAIK these are assigned at run time, but you can look them up with docker inspect $CONTAINER_ID, for example:
docker inspect e804af2472ca
[
    {
        "Id": "e804af2472ca605dec0035f45d3bd05c1fbccee31e6c09381b0c16657378932f",
        "Created": "2016-02-02T21:34:12.49059198Z",
        ...
        "NetworkSettings": {
            ...
            "Gateway": "172.17.0.1",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "172.17.0.6",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            ...
        }
    }
]
In this case, we know this container's IP address as seen from the host is 172.17.0.6 (the IPAddress field; 172.17.0.1 is the gateway). That IP address is fully usable from the host, so you can have something proxy redis.container1.com to it and redis.container2.com to your other container's IP. You'd need to reload the proxy configuration every time the containers come up, so this would definitely not be ideal, but it should work.
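As a convenience, you can pull just that field with a Go template (container ID from the example above; this path is valid for the default bridge network):

docker inspect -f '{{ .NetworkSettings.IPAddress }}' e804af2472ca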
Again, my recommendation overall is don't do this.
I'm not sure if I'm getting you right, but how could you start two containers both working on the same port?
It seems to me that this should be dealt with by using a load balancer. Try HAProxy and set up two ACLs, one for each domain name.
I would go with something like this (using docker-compose).
Docker Compose setup to deploy the Docker images:
redis-1:
  container_name: redis-1
  image: myredis
  restart: always
  expose:
    - "6400"

redis-2:
  container_name: redis-2
  image: myredis
  restart: always
  expose:
    - "6400"

haproxy:
  container_name: haproxy
  image: million12/haproxy
  restart: always
  command: -n 500
  ports:
    - "6379:6379"
  links:
    - redis-1:redis.server.one
    - redis-2:redis.server.two
  volumes:
    - /path/to/my/haproxy.cfg:/etc/haproxy/haproxy.cfg
And then the custom HAProxy config:
global
    chroot /var/lib/haproxy
    user haproxy
    group haproxy
    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private
    # Default ciphers to use on SSL-enabled listening sockets.
    ssl-default-bind-ciphers AES256+EECDH:AES256+EDH:AES128+EDH:EECDH:!aNULL:!eNULL:!LOW:!DES:!3DES:!RC4
    spread-checks 4
    tune.maxrewrite 1024
    tune.ssl.default-dh-param 2048

defaults
    mode http
    balance roundrobin
    option dontlognull
    option dontlog-normal
    option redispatch
    maxconn 5000
    timeout connect 10s
    timeout client 25s
    timeout server 25s
    timeout queue 30s
    timeout http-request 10s
    timeout http-keep-alive 30s
    # Stats
    stats enable
    stats refresh 30s
    stats hide-version

frontend http-in
    bind *:6379
    mode tcp
    acl is_redis1 hdr_end(host) -i redis.server.one
    acl is_redis2 hdr_end(host) -i redis.server.two
    use_backend redis1 if is_redis1
    use_backend redis2 if is_redis2
    default_backend redis1

backend redis1
    server r1 redis.server.one:6379

backend redis2
    server r2 redis.server.two:6379