Apache cannot bind :443 address for SSL even though port 443 is unused - linux

I recently installed Apache 2.4.20 with SSL enabled using openssl 1.0.2j.
After updating the httpd.conf and httpd-ssl.conf files and trying to start Apache listening on port 443, I get the following error:
(13)Permission denied: -----: make_sock: could not bind to address [::]:443
(13)Permission denied: -----: make_sock: could not bind to address 0.0.0.0:443
no listening sockets available, shutting down
Here is what I have for config:
httpd.conf:
Listen 51000
#Listen 443
#Secure (SSL/TLS) connections
Include conf/extra/httpd-ssl.conf
httpd-ssl.conf
Listen 443
If I comment out this line in the httpd-ssl.conf file, my apache starts up fine:
attempting to start apache
done
However with it I get the socket error every time.
I ran the following as root:
netstat -tlpn | grep :443
Returned nothing.
lsof -i tcp:443
Returned nothing.
I've read somewhere that only root can bind to ports below 1024, but I don't know the validity of that statement. Apache is not being run as root here - would that be the issue?

The problem is that 443 is a privileged port, and you are trying to listen as a non-root user.
See: privileged ports and why are privileged ports restricted to root.
There are also ways to get non-root users to bind to privileged ports.
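For example, on Linux one common approach is to grant just the bind capability to the httpd binary with setcap, so it can bind to 443 without running as root. A sketch; the binary path is an assumption, adjust it to your install:

# Allow the binary to bind ports below 1024 without root (path is an assumption):
sudo setcap 'cap_net_bind_service=+ep' /usr/local/apache2/bin/httpd
# Verify:
getcap /usr/local/apache2/bin/httpd

authbind is another option, as is redirecting port 443 to an unprivileged port with a firewall rule.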

If you are using Docker with docker-compose: this happens when you use a non-root container, like the official Bitnami images. We used user: root and network_mode: host when the container needed to bind to the host network.
apache:
  image: bitnami/apache:2.4
  container_name: "apache"
  ports:
    - 80:80
  network_mode: host
  privileged: true
  user: root
  environment:
    DOCKER_HOST: "unix:///var/run/docker.sock"
  env_file:
    - .env
  volumes:
    - ./setup/apache/httpd.conf:/opt/bitnami/apache/conf/httpd.conf
Hope it helps!

Related

How to make traefik work with graylog2?

I'm getting this error when opening the web interface URL:
Server currently unavailable
We are experiencing problems connecting to the Graylog server running on http://127.0.0.1:9000/api. Please verify that the server is healthy and working correctly.
You will be automatically redirected to the previous page once we can connect to the server.
docker-compose.yml:
graylog:
  image: graylog2/server:2.3.0-1
  environment:
    GRAYLOG_PASSWORD_SECRET: xxx
    GRAYLOG_ROOT_PASSWORD_SHA2: xxx
    GRAYLOG_WEB_ENDPOINT_URI: http://example.com/api/
    GRAYLOG_REST_LISTEN_URI: http://0.0.0.0:9000/api/
    GRAYLOG_WEB_LISTEN_URI: http://0.0.0.0:9000/
    GRAYLOG_ELASTICSEARCH_CLUSTER_NAME: graylog
    GRAYLOG_ELASTICSEARCH_HOSTS: http://graylog-elasticsearch:9200
  depends_on:
    - graylog-elasticsearch
    - mongo
  networks:
    - traefik
    - default
    - graylog
  deploy:
    labels:
      - "traefik.port=9000"
      - "traefik.tags=logging"
      - "traefik.docker.network=infra_traefik"
      - "traefik.backend=graylog"
    restart_policy:
      condition: on-failure
    replicas: 1
    placement:
      constraints:
        - node.labels.name == manager-1
As you see everything should work without any problem.
Here's what netstat shows:
root@6399d2a13c5d:/usr/share/graylog# netstat -tulnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:9000 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.11:42255 0.0.0.0:* LISTEN -
udp 0 0 127.0.0.11:51199 0.0.0.0:* -
Here's container printenv:
root@6399d2a13c5d:/usr/share/graylog# printenv
GRAYLOG_ELASTICSEARCH_CLUSTER_NAME=graylog
HOSTNAME=6399d2a13c5d
TERM=xterm
GRAYLOG_WEB_ENDPOINT_URI=http://example.com/api/
GRAYLOG_REST_LISTEN_URI=http://0.0.0.0:9000/api/
GRAYLOG_ROOT_PASSWORD_SHA2=xxx
CA_CERTIFICATES_JAVA_VERSION=20140324
GRAYLOG_PASSWORD_SECRET=xxx
GRAYLOG_REST_TRANSPORT_URI=http://example.com/api/
PWD=/usr/share/graylog
JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/jre
LANG=C.UTF-8
JAVA_VERSION=8u72
SHLVL=1
HOME=/root
JAVA_DEBIAN_VERSION=8u72-b15-1~bpo8+1
GRAYLOG_ELASTICSEARCH_HOSTS=http://graylog-elasticsearch:9200
GRAYLOG_WEB_LISTEN_URI=http://0.0.0.0:9000/
GOSU_VERSION=1.7
GRAYLOG_SERVER_JAVA_OPTS=-Xms1g -Xmx2g -XX:NewRatio=1 -XX:MaxMetaspaceSize=256m -server -XX:+ResizeTLAB -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:+CMSClassUnloadingEnabled -XX:+UseParNewGC -XX:-OmitStackTraceInFastThrow
_=/usr/bin/printenv
I assume the problem may be with this custom header, which Graylog probably needs:
RequestHeader set X-Graylog-Server-URL "http://graylog.example.org/api/"
For the implementation example you have given, I am assuming that example.org is a remote host rather than 127.0.0.1. In that situation, you are correct in assuming that the additional header is needed.
From my experience, you should strip api/ for end-user web access.
You will need to set the request header X-Graylog-Server-URL, e.g. curl -H "X-Graylog-Server-URL: http://graylog.example.org/" http://127.0.0.1:9000. The Graylog web proxy config docs give some good info if you want to set up a webserver in front of your Graylog server. It's just a pity that (at the time of writing this) the Docker Examples do not include a basic nginx config.
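For reference, a minimal nginx sketch along the lines of the Graylog web proxy docs, assuming Graylog listens on 127.0.0.1:9000 and following the advice above to strip api/ from the header (the server_name is an example):

server {
    listen 80;
    server_name graylog.example.org;

    location / {
        # Pass the usual proxy headers plus the Graylog-specific one:
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Graylog-Server-URL http://$server_name/;
        proxy_pass http://127.0.0.1:9000;
    }
}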
I am using Kubernetes and had to add the following to my ingress annotations to get traffic in correctly:
ingress:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-Graylog-Server-URL https://$server_name/;
I think you should just use
GRAYLOG_WEB_ENDPOINT_URI=http://<your-host-ip>/api/
without the REST URI settings.

Apache '-k start' failed on Debian

When I try to start an Apache server, this comes out:
/usr/sbin/apachectl -k start
/usr/sbin/apachectl: 87: ulimit: error setting limit (Operation not permitted)
(13)Permission denied: make_sock: could not bind to address [::]:80
(13)Permission denied: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down
Unable to open logs
Action '-k start' failed.
The Apache error log may have more information.
What's wrong? I can't use sudo, as this is a practice server provided by my school and I don't have su privileges.
I'm a total newbie btw., trying to learn this.
Thank you in advance.
Apache can't listen on a privileged port (80 is below 1024) without root privileges. You should let Apache listen on a port above 1024 and point the logfile paths somewhere you have write permission.
Ask your admin to change the port to 8080:
edit /etc/apache2/ports.conf with nano or vi
Listen 8080   # instead of Listen 80
Don't forget, if you use virtual hosts, to use 8080 there too: <VirtualHost *:8080>
and to add ":8080" to the URL when you access your site: http://example.com:8080 or http://192.168.1.X:8080 (if you are on the same LAN), where X is a number between 1 and 254 corresponding to the local IP hosting your Apache server.

How to use port forwarding to connect to docker container using DNS name

I have 2 redis containers running on same machine m1.
container1 has port mapping 6379 to 6400
docker run -d -p 6379:6400 myredisimage1
container2 has port mapping 6379 to 7500
docker run -d -p 6379:7500 myredisimage2
I am looking for a solution where another machine m2 can communicate with machine m1, using different DNS names but the same port number.
redis.container1.com:6379
redis.container2.com:6379
and I would like to redirect that request to proper containers inside machine m1.
Is it possible to achieve this?
This is possible, but hacky. First, ask yourself if you really need to do this, or if you can get away with just using different ports for the containers. Anyway, if you do absolutely need to do this, here's how:
Each Docker container gets its own IP address accessible from the host machine. AFAIK, these are assigned at run-time, but they are visible by doing a docker inspect $CONTAINER_ID, for example:
docker inspect e804af2472ca
[
    {
        "Id": "e804af2472ca605dec0035f45d3bd05c1fbccee31e6c09381b0c16657378932f",
        "Created": "2016-02-02T21:34:12.49059198Z",
        ...
            "Gateway": "172.17.0.1",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "172.17.0.6",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
        ...
        }
    }
]
In this case, we know this container's IP address as seen from the host is 172.17.0.6 (the "IPAddress" field above; 172.17.0.1 is the gateway). That IP address is fully usable from the host, so you can have something proxy redis.container1.com to it and redis.container2.com to your other container's IP. You'd need to reload the proxy addresses every time the box goes up, so this would definitely not be ideal, but it should work.
Again, my recommendation overall is don't do this.
I'm not sure if I'm getting you right, but how could you start two containers both listening on the same port?
It seems to me that this should be dealt with by a load balancer. Try HAProxy and set up an ACL for each domain name.
I would go with something like this (using docker-compose).
Docker Compose setup to deploy the Docker images:
redis-1:
  container_name: redis-1
  image: myredis
  restart: always
  expose:
    - "6400"

redis-2:
  container_name: redis-2
  image: myredis
  restart: always
  expose:
    - "6400"

haproxy:
  container_name: haproxy
  image: million12/haproxy
  restart: always
  command: -n 500
  ports:
    - "6379:6379"
  links:
    - redis-1:redis.server.one
    - redis-2:redis.server.two
  volumes:
    - /path/to/my/haproxy.cfg:/etc/haproxy/haproxy.cfg
And then a custom HAProxy config:
global
    chroot /var/lib/haproxy
    user haproxy
    group haproxy
    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private
    # Default ciphers to use on SSL-enabled listening sockets.
    ssl-default-bind-ciphers AES256+EECDH:AES256+EDH:AES128+EDH:EECDH:!aNULL:!eNULL:!LOW:!DES:!3DES:!RC4
    spread-checks 4
    tune.maxrewrite 1024
    tune.ssl.default-dh-param 2048

defaults
    mode http
    balance roundrobin
    option dontlognull
    option dontlog-normal
    option redispatch
    maxconn 5000
    timeout connect 10s
    timeout client 25s
    timeout server 25s
    timeout queue 30s
    timeout http-request 10s
    timeout http-keep-alive 30s
    # Stats
    stats enable
    stats refresh 30s
    stats hide-version

frontend http-in
    bind *:6379
    mode tcp
    acl is_redis1 hdr_end(host) -i redis.server.one
    acl is_redis2 hdr_end(host) -i redis.server.two
    use_backend redis1 if is_redis1
    use_backend redis2 if is_redis2
    default_backend redis1

backend redis1
    server r1 redis.server.one:6379

backend redis2
    server r2 redis.server.two:6379
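Assuming both DNS names resolve to m1, a client on m2 would then connect with the names from the question:

redis-cli -h redis.container1.com -p 6379 ping
redis-cli -h redis.container2.com -p 6379 ping

One caveat: Redis speaks its own protocol, not HTTP, so the hdr_end(host) ACLs above will not see a Host header on plain Redis traffic; if the name-based routing does not work, two different ports (or TLS with SNI) may be the practical fallback.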

Release IP address in Apache

I want to run NGINX and Apache side by side, in order to use Node.js with NGINX. The problem is that NGINX is not starting. I have two IPs, and in Apache's httpd.conf I edited the "Listen" directives to only this:
Listen 1.1.1.1:80
And the default.conf of NGINX to:
server {
    listen 2.2.2.2:80 default_server;
    ...
}
But I'm getting this error when I start NGINX:
Starting nginx: nginx: [warn] server name "/var/log/nginx/access.log" has suspicious symbols in /etc/nginx/nginx.conf:41
nginx: [emerg] bind() to 2.2.2.2:80 failed (99: Cannot assign requested address)
What am I doing wrong? I searched the internet and found how to configure both web servers side by side here: http://kbeezie.com/apache-with-nginx/ and I found this:
To do this you have to make sure Apache and Nginx are bound to their
own IP address. In the event of a WHM/cPanel-based webserver, you can
release an IP to be used for Nginx in WHM.
But I don't have cPanel. How can I do this manually?
Note: 1.1.1.1 and 2.2.2.2 are just examples.
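The nginx error (99: Cannot assign requested address) usually means that 2.2.2.2 is not configured on any local interface yet. A sketch of checking and adding it manually (the interface name eth0 and the /24 netmask are assumptions):

# See which addresses are actually assigned:
ip addr show
# Temporarily add the second address (not persistent across reboots):
sudo ip addr add 2.2.2.2/24 dev eth0
# Alternatively, allow binding to addresses that are not local yet:
sudo sysctl -w net.ipv4.ip_nonlocal_bind=1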

Forward host port to docker container

Is it possible to have a Docker container access ports opened by the host? Concretely I have MongoDB and RabbitMQ running on the host and I'd like to run a process in a Docker container to listen to the queue and (optionally) write to the database.
I know I can forward a port from the container to the host (via the -p option) and have a connection to the outside world (i.e. internet) from within the Docker container but I'd like to not expose the RabbitMQ and MongoDB ports from the host to the outside world.
EDIT: some clarification:
Starting Nmap 5.21 ( http://nmap.org ) at 2013-07-22 22:39 CEST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00027s latency).
PORT STATE SERVICE
6311/tcp open unknown
joelkuiper@vps20528 ~ % docker run -i -t base /bin/bash
root@f043b4b235a7:/# apt-get install nmap
root@f043b4b235a7:/# nmap 172.16.42.1 -p 6311 # IP found via docker inspect -> gateway
Starting Nmap 6.00 ( http://nmap.org ) at 2013-07-22 20:43 UTC
Nmap scan report for 172.16.42.1
Host is up (0.000060s latency).
PORT STATE SERVICE
6311/tcp filtered unknown
MAC Address: E2:69:9C:11:42:65 (Unknown)
Nmap done: 1 IP address (1 host up) scanned in 13.31 seconds
I had to do this trick to get any internet connection within the container: My firewall is blocking network connections from the docker container to outside
EDIT: Eventually I went with creating a custom bridge using pipework and having the services listen on the bridge IPs. I went with this approach instead of having MongoDB and RabbitMQ listen on the Docker bridge because it gives more flexibility.
A simple but relatively insecure way would be to use the --net=host option to docker run.
This option makes it so that the container uses the networking stack of the host. Then you can connect to services running on the host simply by using "localhost" as the hostname.
This is easier to configure because you won't have to configure the service to accept connections from the IP address of your docker container, and you won't have to tell the docker container a specific IP address or host name to connect to, just a port.
For example, you can test it out by running the following command, which assumes your image is called my_image, your image includes the telnet utility, and the service you want to connect to is on port 25:
docker run --rm -i -t --net=host my_image telnet localhost 25
If you consider doing it this way, please see the caution about security on this page:
https://docs.docker.com/articles/networking/
It says:
--net=host -- Tells Docker to skip placing the container inside of a separate network stack. In essence, this choice tells Docker to not containerize the container's networking! While container processes will still be confined to their own filesystem and process list and resource limits, a quick ip addr command will show you that, network-wise, they live “outside” in the main Docker host and have full access to its network interfaces. Note that this does not let the container reconfigure the host network stack — that would require --privileged=true — but it does let container processes open low-numbered ports like any other root process. It also allows the container to access local network services like D-bus. This can lead to processes in the container being able to do unexpected things like restart your computer. You should use this option with caution.
Your Docker host exposes an adapter to all the containers. Assuming you are on a recent Ubuntu, you can run
ip addr
This will give you a list of network adapters, one of which will look something like
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 22:23:6b:28:6b:e0 brd ff:ff:ff:ff:ff:ff
inet 172.17.42.1/16 scope global docker0
inet6 fe80::a402:65ff:fe86:bba6/64 scope link
valid_lft forever preferred_lft forever
You will need to tell rabbit/mongo to bind to that IP (172.17.42.1). After that, you should be able to open connections to 172.17.42.1 from within your containers.
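For MongoDB, for example, that means adding the bridge address to its bind list. A sketch, assuming a recent MongoDB with the YAML config format (file location varies by distribution):

# /etc/mongod.conf
net:
  port: 27017
  # Listen on loopback and on the docker0 bridge address:
  bindIp: 127.0.0.1,172.17.42.1

After restarting mongod, containers should be able to reach it at 172.17.42.1:27017.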
As stated in one of the comments, this works for Mac (probably for Windows/Linux too):
I WANT TO CONNECT FROM A CONTAINER TO A SERVICE ON THE HOST
The host has a changing IP address (or none if you have no network access). We recommend that you connect to the special DNS name host.docker.internal which resolves to the internal IP address used by the host. This is for development purpose and will not work in a production environment outside of Docker Desktop for Mac.
You can also reach the gateway using gateway.docker.internal.
Quoted from https://docs.docker.com/docker-for-mac/networking/
This worked for me without using --net=host.
You could also create an ssh tunnel.
docker-compose.yml:
---
version: '2'
services:
  kibana:
    image: "kibana:4.5.1"
    links:
      - elasticsearch
    volumes:
      - ./config/kibana:/opt/kibana/config:ro

  elasticsearch:
    build:
      context: .
      dockerfile: ./docker/Dockerfile.tunnel
    entrypoint: ssh
    command: "-N elasticsearch -L 0.0.0.0:9200:localhost:9200"
docker/Dockerfile.tunnel:
FROM buildpack-deps:jessie

RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive \
    apt-get -y install ssh && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

COPY ./config/ssh/id_rsa /root/.ssh/id_rsa
COPY ./config/ssh/config /root/.ssh/config
COPY ./config/ssh/known_hosts /root/.ssh/known_hosts

RUN chmod 600 /root/.ssh/id_rsa && \
    chmod 600 /root/.ssh/config && \
    chown $USER:$USER -R /root/.ssh
config/ssh/config:
# Elasticsearch Server
Host elasticsearch
    HostName jump.host.czerasz.com
    User czerasz
    ForwardAgent yes
    IdentityFile ~/.ssh/id_rsa
This way the elasticsearch container holds an SSH tunnel to the server running the actual service (Elasticsearch, MongoDB, PostgreSQL, ...) and exposes that service on port 9200.
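A container linked to it, like the kibana service above, then reaches the remote service through the tunnel under the elasticsearch hostname:

# Run from inside a linked container; 9200 is forwarded over the SSH tunnel:
curl http://elasticsearch:9200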
TLDR;
For local development only, do the following:
Start the service or SSH tunnel on your laptop/computer/PC/Mac.
Build/run your Docker image/container to connect to hostname host.docker.internal:<hostPort>
Note: There is also gateway.docker.internal, which I have not tried.
END_TLDR;
For example, if you were using this in your container:
PGPASSWORD=password psql -h localhost -p 5432 -d mydb -U myuser
change it to this:
PGPASSWORD=password psql -h host.docker.internal -p 5432 -d mydb -U myuser
This magically connects to the service running on my host machine. You do not need to use --net=host or -p "hostPort:ContainerPort" or -P
Background
For details see: https://docs.docker.com/docker-for-mac/networking/#use-cases-and-workarounds
I used this with an SSH tunnel to an AWS RDS Postgres Instance on Windows 10. I only had to change from using localhost:containerPort in the container to host.docker.internal:hostPort.
I had a similar problem accessing an LDAP server from a Docker container.
I set a fixed IP for the container and added a firewall rule.
docker-compose.yml:
version: '2'
services:
  containerName:
    image: dockerImageName:latest
    extra_hosts:
      - "dockerhost:192.168.50.1"
    networks:
      my_net:
        ipv4_address: 192.168.50.2

networks:
  my_net:
    ipam:
      config:
        - subnet: 192.168.50.0/24
iptables rule:
iptables -A INPUT -j ACCEPT -p tcp -s 192.168.50.2 -d 192.168.50.1 --dport portnumberOnHost
Inside the container, access dockerhost:portnumberOnHost.
If MongoDB and RabbitMQ are running on the host, then their ports should already be reachable, as they are not within Docker.
You do not need the -p option in order to expose ports from the container to the host. By default, all ports are exposed. The -p option allows you to publish a port from the container to the outside of the host.
So, my guess is that you do not need -p at all and it should be working fine :)
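One caveat: this only works if the host service listens on the Docker bridge address (or on 0.0.0.0), not only on 127.0.0.1. A quick check from a throwaway container, assuming the default bridge gateway 172.17.0.1 and RabbitMQ on its default port 5672:

# Install a netcat with -vz support and probe the host service via the bridge:
docker run --rm alpine sh -c 'apk add --no-cache netcat-openbsd && nc -vz 172.17.0.1 5672'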
For newer versions of Docker, this worked for me. Create the tunnel like this (notice the 0.0.0.0 at the start):
-L 0.0.0.0:8080:localhost:8081
This will allow anyone with access to your computer to connect to port 8080 and thus access port 8081 on the connected server.
Then, inside the container just use "host.docker.internal", for example:
curl host.docker.internal:8081
Why not use a slightly different solution, like this?
services:
  kubefwd:
    image: txn2/kubefwd
    command: ...
  app:
    image: bash
    command:
      - sleep
      - inf
    init: true
    network_mode: service:kubefwd
REF: txn2/kubefwd: Bulk port forwarding Kubernetes services for local development.
The easier way on all platforms nowadays is to use host.docker.internal. Let's first start with the docker run command:
docker run --add-host=host.docker.internal:host-gateway [....]
Or add the following to your service, when using Docker Compose:
extra_hosts:
  - "host.docker.internal:host-gateway"
A full example of such a Docker Compose file would then look like this:
version: "3"
services:
your_service:
image: username/docker_image_name
restart: always
networks:
- your_bridge_network
volumes:
- /home/user/test.json:/app/test.json
ports:
- "8080:80"
extra_hosts:
- "host.docker.internal:host-gateway"
networks:
your_bridge_network:
Again, it's just an example. But if this Docker image starts a service on port 80, it will be available on the host on port 8080.
And more importantly for your use case: if the Docker container wants to use a service from your host system, that is now possible using the special host.docker.internal name. That name is automatically resolved to the internal Docker IP address (of the docker0 interface).
Anyway, let's say you are also running a web service on your host machine on port 80. You should now be able to reach that service from within your Docker container. Try it out: nc -vz host.docker.internal 80.
All WITHOUT using network_mode: "host".
