I am new to AWS and new to Traefik too. I am having a lot of trouble trying to remove the need to specify the port in an application I am developing.
When I hit http://api-landingpage.cayama.com.br/ it gives me a 404 error page, but when I try http://api-landingpage.cayama.com.br:8001/ it reaches my API correctly.
I hosted my domain in AWS Route53 and I am using docker as a provider.
Here my configurations:
docker-compose.yml:
version: "3"
services:
  app:
    build: .
    ports:
      - "8001:8001"
    command: "npm start"
docker-production.yml:
version: "3"
services:
  traefik:
    image: traefik
    command:
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  app:
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app.rule=Host(`http://api-landingpage.cayama.com.br/`)"
      - "traefik.http.routers.app.entrypoints=web"
I am sure there is a basic thing that I am missing here; can anyone help me please?
I just want to not have to specify, in the URL, the port on which my application is running.
Thanks guys!
Theoretically, as you said, you shouldn't have to specify the port manually.
I'm not totally sure it's the cause but you are using a full URL instead of a host.
Basically you should replace this:
- "traefik.http.routers.app.rule=Host(`http://api-landingpage.cayama.com.br/`)"
With this:
- "traefik.http.routers.app.rule=Host(`api-landingpage.cayama.com.br`)"
If that does not solve your problem, you could try the loadbalancer directive, even though it is theoretically meant for Docker Swarm rather than plain Docker (put this in your app service):
- "traefik.http.services.app.loadbalancer.server.port=8001"
Then, if it's still not working, enable debugging and look for errors in the logs.
In order to enable debugging, add this to your Traefik service in the command section:
- --log.level=DEBUG
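Putting the suggestions together, the labels on your app service would look something like this (router/service name app kept from your file; the port assumes the app still listens on 8001 inside the container):

```yaml
app:
  labels:
    - "traefik.enable=true"
    # Host() takes a bare hostname: no scheme, no trailing slash
    - "traefik.http.routers.app.rule=Host(`api-landingpage.cayama.com.br`)"
    - "traefik.http.routers.app.entrypoints=web"
    # tell Traefik which container port to forward traffic to
    - "traefik.http.services.app.loadbalancer.server.port=8001"
```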
Have you tried:
- "traefik.http.services.app.loadbalancer.server.port=8001"
I'm trying to make a frontend app accessible to the outside. It depends on several other modules serving as services/backend. These other services also rely on things like Kafka and OpenLink Virtuoso (a database).
How can I make them all accessible to each other, and how should I expose my frontend to the outside internet? Should I also remove any "localhost:port" in my code and replace it with the service name? Should I also replace every port in the code with the equivalent Docker port?
Here is an extraction of my docker-compose.yml file.
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - 22181:2181
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  frontend:
    build:
      context: ./Frontend
      dockerfile: ./Dockerfile
    image: "jcpbr/node-frontend-app"
    ports:
      - "3000:3000"
    # Should I use links to connect to every module the frontend accesses, and for the other modules as well?
    links:
      - "auth:auth"
  auth:
    build:
      context: ./Auth
      dockerfile: ./Dockerfile
    image: "jcpbr/node-auth-app"
    ports:
      - "3003:3003"
(...)
How can I make all of [my services] accessible to each other?
Do absolutely nothing. Delete the obsolete links: block you have. Compose automatically creates a network named default that you can use to communicate between the containers, and they can use the other Compose service names as host names; for example, your auth container could connect to kafka:9092. Also see Networking in Compose in the Docker documentation.
(Some other setups will advocate manually creating Compose networks: and overriding the container_name:, but this isn't necessary. I'd delete these lines in the name of simplicity.)
How should I expose my frontend to outside internet?
That's what the ports: ['3000:3000'] line does. Anyone who can reach your host system on port 3000 (the first port number) will be able to access the frontend container. As far as an outside caller is concerned, they have no idea whether things are running in Docker or not, just that your host is running an HTTP server on port 3000.
Setting up a reverse proxy, maybe based on Nginx, is a little more complicated, but addresses some problems around communication from the browser application to the back-end container(s).
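As a rough sketch of that kind of setup (every name and path here is hypothetical, not taken from your compose file):

```nginx
# Hypothetical nginx reverse-proxy config: serve the built frontend assets
# and forward /api/ requests to a backend container by its service name,
# so the browser only ever talks to one origin.
server {
    listen 80;

    location / {
        root /usr/share/nginx/html;   # transpiled frontend files
        try_files $uri $uri/ /index.html;
    }

    location /api/ {
        # "auth" resolves through Compose's internal DNS
        proxy_pass http://auth:3003/;
        proxy_set_header Host $host;
    }
}
```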
Should I also remove any "localhost/port" in my code?
Yes, absolutely.
...and replace it with the service name? every port?
No, because those settings will be incorrect in your non-container development environment, and will probably be incorrect again if you have a production deployment to a cloud environment.
The easiest right answer here is to use environment variables. In Node code, you might try
const kafkaHost = process.env.KAFKA_HOST || 'localhost';
const kafkaPort = process.env.KAFKA_PORT || '9092';
If you're running this locally without those environment variables set, you'll get the usually-correct developer defaults. But in your Docker-based setup, you can set those environment variables:
services:
  kafka:
    image: confluentinc/cp-kafka:latest
    environment:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092 # must match the Docker service name
  app:
    build: .
    environment:
      KAFKA_HOST: kafka
      # default KAFKA_PORT is still correct
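The fallback pattern above can be wrapped in a small helper so it is easy to test; this is just a sketch, and the KAFKA_HOST / KAFKA_PORT names are the assumed variables from the snippet above, not any library convention:

```javascript
// Resolve the Kafka address from environment variables, falling back to
// local-development defaults when they are unset. Passing the env object
// in explicitly (rather than reading process.env directly) keeps it testable.
function kafkaAddress(env) {
  const host = env.KAFKA_HOST || 'localhost';
  const port = env.KAFKA_PORT || '9092';
  return `${host}:${port}`;
}

// Outside Docker: nothing set, developer defaults apply.
console.log(kafkaAddress({}));                      // localhost:9092
// Inside Compose: only the host changes; the port default still holds.
console.log(kafkaAddress({ KAFKA_HOST: 'kafka' })); // kafka:9092
```

In application code you would call it as `kafkaAddress(process.env)`.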
I have a WebApp that runs on a Linux Service Plan as a docker-compose deployment. My config is:
version: '3'
networks:
  my-network:
    driver: bridge
services:
  web-site:
    image: server.azurecr.io/site/site-production:latest
    container_name: web-site
    networks:
      - my-network
  nginx:
    image: server.azurecr.io/nginx/nginx-production:latest
    container_name: nginx
    ports:
      - "8080:8080"
    networks:
      - my-network
And I've realized that my app sometimes freezes for a while (usually less than 1 minute), and when I check Diagnose (Linux - Number of Running Containers per Host) I can see this:
How could it be possible to have 20+ containers running?
Thanks.
I've created a new service plan (P2v2) for my app (and nothing else), and my app (which has just two containers: .NET 3.1 and nginx) shows 4 containers... but this is not a problem for me... at all...
The problem I found in Application Insights was a method that retrieves a blob to serve an image... blobs are really fast for uploads and downloads, but they are terrible for search... my method was checking whether the blob exists before sending it to the API, and this (async) process was blocking my API responses... I just removed the check and my app is running as desired (all under 1 sec, almost all under 250 ms response).
Thanks for your help.
I am currently working on an Angular app using a REST API (Express, Node.js) and PostgreSQL. Everything worked well when hosted on my local machine. After testing, I moved the images to an Ubuntu server so the app could be hosted on an external port. I am able to access the Angular frontend using https://server-external-ip:80, but when trying to log in, Nginx is not connecting to the Node API. Here is my docker-compose file:
version: '3.0'
services:
  db:
    image: postgres:9.6-alpine
    environment:
      POSTGRES_DB: myDb
      POSTGRES_PASSWORD: myPwd
    ports:
      - 5432:5432
    restart: always
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    networks:
      - my-network
  backend: # name of the second service
    image: myId/mynodeapi
    ports:
      - 3000:3000
    environment:
      POSTGRES_DB: myDb
      POSTGRES_PASSWORD: myPwd
      POSTGRES_PORT: 5432
      POSTGRES_HOST: db
    depends_on:
      - db
    networks:
      - my-network
    command: bash -c "sleep 20 && node server.js"
  myapp:
    image: myId/myangularapp
    ports:
      - "80:80"
    depends_on:
      - backend
    networks:
      - my-network
networks:
  my-network:
I am not sure what the apiUrl should be. I have tried the following and nothing worked:
apiUrl: "http://backend:3000/api"
apiUrl: "http://server-external-ip:3000/api"
apiUrl: "http://server-internal-ip:3000/api"
apiUrl: "http://localhost:3000/api"
I think you should use the docker-compose service names as DNS hosts. Your docker-compose structure makes the following hosts/ports available:
db:5432
http://backend:3000
http://myapp
Make sure to use db as POSTGRES_HOST in the environment part of the backend service (POSTGRES_DB is the database name, not the host).
Take a look at my repo; I think it is the best way to learn how a similar project works and how to build several apps with nginx. You can also check my docker-compose.yml: it defines several services, proxied through nginx, that work together.
In that link you'll find an nginx/default.conf file containing several nginx upstream configurations; please take a look at how I used docker-compose service references there as hosts.
Inside the client/ directory, I also have another nginx acting as a web server for a react.js project.
The server/ directory holds a Node.js API; it connects to Redis and a PostgreSQL database, also built from the docker-compose.yml.
If you need to route or redirect traffic to /api, you can use an nginx config like this:
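A minimal sketch of such a block, using the backend service name and port from the compose file above (untested; adjust paths to your setup):

```nginx
# Forward /api requests from the Angular container's nginx to the Node API.
# "backend" resolves through Compose's internal DNS on my-network.
location /api/ {
    proxy_pass http://backend:3000/api/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}
```

With a proxy like this in the myapp container, the frontend could use a relative apiUrl such as /api, avoiding hard-coded hosts and ports entirely.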
I think this use case can be useful for you and other users!
I am trying to set up Traefik as a reverse proxy for a simple Node.js microservice, to be called by an Angular front-end which will be transpiled down to JavaScript, CSS, and HTML and run on nginx. Right now, I am trying to get the Node.js service to listen internally on port 3000 and be reachable on exposed endpoints from outside Traefik.
I can access the dashboard and the whoami sample service, but not the endpoints defined in my Node.js microservice. I get "Bad Gateway" for endpoints that match the PathPrefix. If I try to reach an endpoint that does not match the pattern, I get a 404 "Page not found", so I think I have the PathPrefix set up correctly; I just don't have the services set up and/or the backend mated up with a front end.
My microservice works just fine when called outside of Traefik. I have stripped out the handling for certificates and SSL/TLS from the Node.js script so that Traefik can handle that part. I am pretty confident that part is working too.
My Node.js app prints "Hello, world" if you access "/", and does real work if you go to "/v1/api/authorize". Again, only if run as a standalone Node.js app, not as a service in the Docker service stack.
What am I missing? What is causing the "Bad Gateway" errors? I have a feeling this will be an easy fix; this seems like a straightforward use case for Traefik.
I am using Node.js 10 and Traefik 2.1.0.
version: "3.3"
services:
  reverse-proxy:
    image: "traefik:latest"
    command:
      - --entrypoints.web.address=:80
      - --entrypoints.web-secure.address=:443
      - --providers.docker=true
      - --api.insecure
      - --providers.file.directory=/configuration/
      - --providers.file.watch=true
      - --log.level=DEBUG
    ports:
      - "80:80"
      - "8080:8080"
      - "443:443"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "/home/cliffm/dev/traefik/configuration/:/configuration/"
  auth-micro-svc:
    image: "auth-micro-svc:latest"
    labels:
      - traefik.port=3000
      - "traefik.http.routers.my-app.rule=Path(`/`)"
      - "traefik.http.routers.my-app.rule=PathPrefix(`/v1/api/`)"
      - "traefik.http.routers.my-app.tls=true"
  whoami:
    image: containous/whoami:latest
  redis:
    image: redis:latest
    ports:
      - "6379:6379"
Late to the party, but at the very least, you are overriding configuration here:
- "traefik.http.routers.my-app.rule=Path(`/`)"
- "traefik.http.routers.my-app.rule=PathPrefix(`/v1/api/`)"
The second line overrides the first, so the first line is effectively ignored. Even if both rules could apply, they are mutually exclusive: Path(`/`) matches exactly / with no suffix, so /bla would not match, and neither would /v1/api, even though it matches the second rule.
For multiple rules you can use this approach:
- "traefik.http.routers.my-app.rule=Host(`subdomain.yourdomain.com`) && PathPrefix(`/v1/api`)"
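Also worth noting: traefik.port is Traefik v1 syntax. In Traefik 2.x, the container port is declared on the service via a loadbalancer label, so the corrected labels might look like this (a sketch using the router/service names from the compose file above, not tested against your stack):

```yaml
auth-micro-svc:
  image: "auth-micro-svc:latest"
  labels:
    # one router rule combining both matches
    - "traefik.http.routers.my-app.rule=PathPrefix(`/v1/api`)"
    - "traefik.http.routers.my-app.tls=true"
    # Traefik 2.x replacement for the v1-style traefik.port label
    - "traefik.http.services.my-app.loadbalancer.server.port=3000"
```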
I haven't found the answer elsewhere, so I'll ask here. I hope it's not a silly question, as I'm new both to docker compose and Azure.
I'm using docker compose to deploy a web app (actually BookStack) to Azure. The stack uses a few volumes. Everything works just fine and the volumes are persistent; only, I'm not able to locate the volumes inside the Azure portal. I really need to be able to access them in order to back them up or maybe migrate them.
version: '2'
services:
  mysql:
    image: mysql:5.7.21
    environment:
      - MYSQL_ROOT_PASSWORD=
      - MYSQL_DATABASE=
      - MYSQL_USER=
      - MYSQL_PASSWORD=
    volumes:
      - mysql-data:/var/lib/mysql
  bookstack:
    image: solidnerd/bookstack:latest
    depends_on:
      - mysql
    environment:
      - DB_HOST=
      - DB_DATABASE=
      - DB_USERNAME=
      - DB_PASSWORD=
    volumes:
      - uploads:/var/www/bookstack/public/uploads
      - storage-uploads:/var/www/bookstack/public/storage
    ports:
      - "8088:80"
volumes:
  mysql-data:
  uploads:
  storage-uploads:
Thanks in advance!
Jakub
You just need to change these lines to use the placeholder variable:
- uploads:/var/www/bookstack/public/uploads
- storage-uploads:/var/www/bookstack/public/storage
as follows:
- ${WEBAPP_STORAGE_HOME}/uploads:/var/www/bookstack/public/uploads
- ${WEBAPP_STORAGE_HOME}/storage-uploads:/var/www/bookstack/public/storage
You may want to take a look at the FTP file structure to determine where you want to map the files to :)