Restrict access to my docker dev environment within the local network

I’m using Docker for Mac for my development environment.
The problem is that anybody within our local network can access the server and the MySQL database running on my machine. Of course, they need to know the credentials, which they could possibly brute-force.
For example, if my local IP is 10.10.100.22, somebody can access my local site by typing https://10.10.100.22:8300, or the database with mysql -h 10.10.100.22 -P 8301 -u root -p (host port 8300 maps to container port 443, and 8301 maps to 3306).
Currently, I use the macOS firewall to block incoming connections for vpnkit, which Docker uses. It works, but I'm not sure this is the best approach.
UPDATE
The problem with the firewall approach is that you have to coordinate it with every developer on your team. I was hoping to achieve my goal with Docker configuration alone, similar to private networks in Vagrant: https://www.vagrantup.com/docs/networking/private_network.html.
What is the best way to restrict access to my docker dev environment within the local network?

I found a very simple solution to my problem. In docker-compose.yml, instead of:
services:
  mysql:
    image: mysql:5.6
    environment:
      - MYSQL_ROOT_PASSWORD=
      - MYSQL_DATABASE=test
    ports:
      - "8301:3306"
which leaves port 8301 wide open to the local network. Instead, I did the following:
services:
  mysql:
    image: mysql:5.6
    environment:
      - MYSQL_ROOT_PASSWORD=
      - MYSQL_DATABASE=test
    ports:
      - "127.0.0.1:8301:3306"
which binds port 8301 to the Docker host's loopback interface (127.0.0.1) only, so the port is not accessible from outside the host.
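To double-check the binding, docker ps should show the loopback address in the port column; a minimal sketch (the container name will differ in your project):

$ docker ps --format 'table {{.Names}}\t{{.Ports}}'
NAMES               PORTS
myproject_mysql_1   127.0.0.1:8301->3306/tcp

A connection attempt from another machine on the LAN should now be refused.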

Related

How to provide hostName into Docker [duplicate]

I run a service inside a container that binds to 127.0.0.1:8888.
I want to expose this port to the host.
Does docker-compose support this?
I tried the following in docker-compose.yml, but it did not work.
expose:
  - "8888"
ports:
  - "8888:8888"
P.S. Binding the service to 0.0.0.0 inside the container is not possible in my case.
UPDATE: Providing a simple example:
docker-compose.yml
version: '3'
services:
  myservice:
    expose:
      - "8888"
    ports:
      - "8888:8888"
    build: .
Dockerfile
FROM centos:7
RUN yum install -y nmap-ncat
CMD ["nc", "-l", "-k", "localhost", "8888"]
Commands:
$> docker-compose up --build
$> # Starting test1_myservice_1 ... done
$> # Attaching to test1_myservice_1
$> nc -v -v localhost 8888
$> # Connection to localhost 8888 port [tcp/*] succeeded!
TEST
$>
After entering TEST in the console, the connection is closed, which means the port is not really exposed, despite the initial success message. The same issue occurs with my real service.
But if I bind to 0.0.0.0 (instead of localhost) inside the container, everything works fine.
Typically the answer is no, and in almost every situation, you should reconfigure your application to listen on 0.0.0.0. Any attempt to avoid changing the app to listen on all interfaces inside the container should be viewed as a hack that is adding technical debt to your project.
To expand on my comment, each container by default runs in its own network namespace. The loopback interface inside a container is separate from the loopback interface on the host and in other containers. So if you listen on 127.0.0.1 inside a container, anything outside of that network namespace cannot access the port. It's similar to listening on loopback on one VM and trying to connect to that port from another VM: Linux doesn't let you connect.
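A quick way to see this for yourself, assuming the example service from the question is up (the container name and IP come from that example and will differ on your machine):

$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' test1_myservice_1
172.18.0.2
$ nc -v 172.18.0.2 8888
# nc: connect to 172.18.0.2 port 8888 (tcp) failed: Connection refused

Even from the host, using the container's own bridge IP, the loopback-bound port is unreachable.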
There are a few workarounds:
You can hack up the iptables rules to forward connections, but I'd personally avoid this. Docker relies heavily on automated changes to the iptables rules, so you risk conflicting with that automation or having it break the next time the container is recreated.
You can set up a proxy inside your container that listens on all interfaces and forwards to the loopback interface. Something like nginx would work.
You can get things in the same network namespace.
That last one can be implemented in two ways. Between containers, you can run a container in the network namespace of another container. This is often done for debugging the network, and it is also how pods work in Kubernetes. Here's an example of running a second container:
$ docker run -it --rm --net container:$(docker ps -lq) nicolaka/netshoot /bin/sh
/ # ss -lnt
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 10 127.0.0.1:8888 *:*
LISTEN 0 128 127.0.0.11:41469 *:*
/ # nc -v -v localhost 8888
Connection to localhost 8888 port [tcp/8888] succeeded!
TEST
/ #
Note the --net container:... (I used docker ps -lq to get the last started container id in my lab). This makes the two separate containers run in the same namespace.
If you needed to access this from outside of docker, you can remove the network namespacing, and attach the container directly to the host network. For a one-off container, this can be done with
docker run --net host ...
In compose, this would look like:
version: '3'
services:
  myservice:
    network_mode: "host"
    build: .
You can see the docker compose documentation on this option here. This is not supported in swarm mode, and you do not publish ports in this mode since you would be trying to publish the port between the same network namespaces.
Side note, expose is not needed for any of this. It is only there for documentation, and some automated tooling, but otherwise does not impact container-to-container networking, nor does it impact the ability to publish a specific port.
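For illustration, expose normally lives in the image's Dockerfile (a minimal sketch):

# EXPOSE is documentation for humans and tooling; it does not publish the port
FROM centos:7
EXPOSE 8888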
According to @BMitch's answer above, "it is not possible to externally access this port directly if the container runs with its own network namespace".
Based on this, I think it is worth providing my workaround for the issue:
One way would be to set up an iptables rule inside the container, for port redirection, before running the service; a sketch follows this paragraph. However, this seems to require iptables modules to be loaded explicitly on the host (according to this). This in some way breaks portability.
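Such a rule might look like the following (a hedged sketch, not from the original post; it requires running the container with --cap-add NET_ADMIN, and route_localnet must be enabled so external traffic may be DNATed to loopback):

# inside the container, before starting the service
sysctl -w net.ipv4.conf.eth0.route_localnet=1
iptables -t nat -A PREROUTING -p tcp --dport 8889 -j DNAT --to-destination 127.0.0.1:8888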
My way uses socat, forwarding *:8889 to 127.0.0.1:8888:
Dockerfile
...
RUN yum install -y socat
RUN echo -e '#!/bin/bash\n./localservice &\nsocat TCP4-LISTEN:8889,fork TCP4:127.0.0.1:8888\n' >> service.sh
RUN chmod u+x service.sh
ENTRYPOINT ["./service.sh"]
docker-compose.yml
version: '3'
services:
  laax-pdfjs:
    ports:
      # Switch back to 8888 on host
      - "8888:8889"
    build: .
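A quick check of this workaround could look like the following (hedged; the success message mirrors the one from the question):

$> docker-compose up --build -d
$> nc -v -v localhost 8888
$> # Connection to localhost 8888 port [tcp/*] succeeded!

Traffic now flows host:8888 -> container:8889 (socat) -> 127.0.0.1:8888.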
Check your docker compose version and configure it based on the version.
Compose files that do not declare a version are considered “version 1”. In those files, all the services are declared at the root of the document.
Reference
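For comparison, a "version 1" file would declare the same service at the root of the document, with no version key and no services block (a hedged sketch):

myservice:
  image: myimage:latest
  ports:
    - "80:80"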
Here is how I set up my ports:
version: "3"
services:
myservice:
image: myimage:latest
ports:
- "80:80"
We can help you further if you share the rest of your docker-compose.yaml.

communicating between docker instances of neo4j and express on local machine

ISSUE: I have a Docker image running for Neo4j and one for Express.js. I can't get the Docker containers to communicate with each other.
If I run Neo4j Desktop and start a nodemon server, they communicate fine.
SETUP:
NEO4J official docker image
NEO4J_AUTH none
PORTS localhost:7474 localhost:7687
Version neo4j-community-4.3.3-unix.tar.gz
NODEJS Image
PORTS 0.0.0.0:3000 :::3000
Version 14.17.5
Express conf
DEV_DB_USER_NAME="neo4j"
DEV_DB_PASSWORD="test"
DEV_DB_URI="neo4j://localhost" // for the container setup; for local development it is bolt://localhost:7687
DEV_DB_SECRET_KEY=""
let driver = neo4j.driver(
  envConf.dbUri,
  neo4j.auth.basic(envConf.dbUserName, envConf.dbPassword)
);
package.json
"#babel/node": "^7.13.10",
"neo4j-driver": "^4.2.3",
I can get into the Neo4j container through http://localhost:7474/browser/, so it's running.
But I cannot use the server container to call the local Neo4j instance.
When I call the APIs in the server container, I get these errors.
If I use the neo4j protocol:
Neo4jError: Could not perform discovery. No routing servers available. Known routing table: RoutingTable[database=default database, expirationTime=0, currentTime=1629484043610, routers=[], readers=[], writers=[]]
If I use the bolt protocol:
Neo4jError: Failed to connect to server. Please ensure that your database is listening on the correct host and port and that you have compatible encryption settings both on Neo4j server and driver. Note that the default encryption setting has changed in Neo4j 4.0. Caused by: connect ECONNREFUSED 127.0.0.1:7687
I've been scouring the documentation for a while; any ideas would be most welcome!
I was able to achieve the communication by using docker-compose. The problem was that the two containers sat on separate networks, and I could not find a way to let the server communicate with the database. Running docker-compose and building both containers within a single compose network allows communication using the service names.
Take note: this file is indentation sensitive (YAML does not allow tabs)!
docker-compose.yml
version: '3.7'

networks:
  lan:

# The different services that make up our "network" of containers
services:
  # Express is our first service
  express:
    container_name: exp_server
    networks:
      - lan
    # The location of the Dockerfile to build this service
    build: <location of dockerfile>
    # Command to run once the Dockerfile completes building
    command: npm run startdev
    # Volumes, mounting our files to parts of the container
    volumes:
      - .:/src
    # Ports to map, mapping our port 3000 to port 3000 on the container
    ports:
      - 3000:3000
    # designating a file with environment variables
    env_file:
      - ./.env.express
  ## Defining the Neo4j Database Service
  neo:
    container_name: neo4j_server
    networks:
      - lan
    # The image to use
    image: neo4j:latest
    # map the ports so we can check the db server is up
    ports:
      - "7687:7687"
      - "7474:7474"
    # mounting a named volume to the container to track db data
    volumes:
      - $HOME/neo4j/conf:/conf
      - $HOME/neo4j/data:/data
      - $HOME/neo4j/logs:/logs
      - $HOME/neo4j/plugins:/plugins
    env_file:
      - .env.neo4j
With this, you can use Docker to run both the server and the database (and anything else) while still using change-detection rebuilding during development, and you can even build images for multiple environments at the same time. NEAT!
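With both services on the lan network, the Express container should reach the database by service name instead of localhost; a hedged sketch of the updated setting (the variable name comes from the question):

# .env.express (sketch)
DEV_DB_URI="bolt://neo:7687"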

Connect Linux Containers in Windows Docker Host to external network

I have successfully set up Docker Desktop for Windows and installed my first Linux containers from Docker Hub. Network-wise, containers can communicate with each other on the Docker internal network. I am even able to communicate with the host network via host.docker.internal.
Now I have reached the point where I want to access the outside network (just another server on the Docker host's network) from within a container.
I have read on multiple websites that network_mode: host does not seem to work with Docker Desktop for Windows.
I have not configured any switches within Hyper-V Manager and have not added any routes in Docker, as I am confused by the overall networking concept of Docker Desktop for Windows in combination with Hyper-V and Linux containers.
Below you can see my current docker-compose.yaml with NiFi and Zookeeper installed. NiFi is able to see Zookeeper, and NiFi is able to query data from a database installed on the Docker host. However, I need to query data from a server other than the host.
version: "3.4"
services:
zookeeper:
restart: always
container_name: zookeeper
ports:
- 2181:2181
hostname: zookeeper
image: 'bitnami/zookeeper:latest'
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
nifi:
restart: always
container_name: nifi
image: 'apache/nifi:latest'
volumes:
- D:\Docker\nifi:/data # Data directory
ports:
- 8080:8080 # Unsecured HTTP Web Port
environment:
- NIFI_WEB_HTTP_PORT=8080
- NIFI_CLUSTER_IS_NODE=false
- NIFI_CLUSTER_NODE_PROTOCOL_PORT=8082
- NIFI_ZK_CONNECT_STRING=zookeeper:2181
- NIFI_ELECTION_MAX_WAIT=1 min
depends_on:
- zookeeper
Check that the connection type in DockerNAT is set to the appropriate external network, and set the IPv4 configuration to automatic.
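Once that is in place, a quick outbound connectivity check from inside the NiFi container might look like this (a sketch; the target IP and port are placeholders, and it assumes bash is present in the image):

$ docker exec -it nifi bash -c '</dev/tcp/10.0.0.50/1433 && echo open || echo closed'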

How do I deploy a website on an nginx docker container on a remote machine using ansible playbook?

I have Ansible running on one of my Ubuntu virtual machines on Azure. I am trying to host a website in an Nginx Docker container on a remote machine (the host machine).
I've done everything described at this link:
http://www.inanzzz.com/index.php/post/6138/setting-up-a-nginx-docker-container-on-remote-server-with-ansible
When I run the curl command, it prints the whole content of index.html to the terminal, but when I try to access the website (the "Welcome to nginx" page) in the browser, it doesn't show anything.
I'm not sure what IP address to assign to the NGINX_IP variable in the docker/.env file shown in this tutorial.
Is there any other tutorial that can help me achieve what I want?
Thanks in advance.
For your issue, the problem is that you do not map the container port to a host port, so you can only access the container from inside the host.
The solution is that you need to map the port in the docker-compose file like this:
version: '3'
services:
  nginx_img:
    container_name: ${COMPOSE_PROJECT_NAME}_nginx_con
    build:
      context: ./nginx
    ports:
      - "80:80"
    networks:
      public_net:
        ipv4_address: ${NGINX_IP}
networks:
  public_net:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: ${NETWORK_SUBNET}
As the last step, you need to allow port 80 in the NSG associated with the VM on which you run nginx. Then you can access nginx from outside the VM in the browser.
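For reference, opening the port with the Azure CLI could look like this (a sketch; the resource group, NSG name, and priority are placeholders):

az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myNSG \
  --name allow-http \
  --priority 1001 \
  --direction Inbound --access Allow --protocol Tcp \
  --destination-port-ranges 80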

How to expose in a network?

The below example is from the docker-compose docs.
From my understanding, they want to have Redis port 6379 available in the web container.
Why don't they have
expose:
  - "6379"
in the redis container?
version: '2'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
    networks:
      - front-tier
      - back-tier
  redis:
    image: redis
    volumes:
      - redis-data:/var/lib/redis
    networks:
      - back-tier
From the official Redis image:
This image includes EXPOSE 6379 (the redis port), so standard container linking will make it automatically available to the linked containers (as the following examples illustrate).
which is pretty much the typical way of doing things.
Redis Dockerfile.
You don't need links anymore now that we assign containers to Docker networks. And without linking, unless you publish all ports with a docker run -P, there's no value in exposing a port on the container. Containers can talk to any port opened on any other container if they are on the same network (assuming default settings for ICC), so exposing a port becomes a no-op.
Typically, you only expose a port via the Dockerfile as an indicator to those running your image, or to use the -P flag. There are also some projects that look at exposed ports of other containers to know how to talk to them, specifically I'm thinking of nginx-proxy, but that's a unique case.
However, publishing a port makes that port available from the docker host, which always needs to be done from the docker-compose.yml or run command (you don't want image authors able to affect the docker host without some form of local admin acknowledgement). When you publish a specific port, it doesn't need to be exposed first.
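To illustrate the difference with plain docker commands (a minimal sketch; the names are made up):

# user-defined network; no expose or ports entries anywhere
docker network create back-tier
docker run -d --network back-tier --name redis redis
# a second container on the same network reaches redis on 6379 directly
docker run --rm --network back-tier redis redis-cli -h redis ping   # prints PONG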
