Help Debugging
I have been trying to create a reverse proxy with NGINX. For now I'm just trying to get it to redirect traffic on my local network. I think I'm close, but I'm stuck. Any advice is appreciated!
The expected behavior is that a request to http://api.dev.tagnoo.com routes traffic to one container, while http://app.dev.tagnoo.com routes traffic to another.
The actual behavior is that I can't access anything despite my containers running and Nginx seeming to be working. I have no idea how to debug this.
Recreating My Pain
I spin up containers with the following commands:
docker-compose pull --include-deps "$@"
docker-compose up -d --remove-orphans --build "$@"
My docker-compose.yaml file looks like this
services:
  lb:
    image: nginx:1.19.7-alpine
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./src/nginx.conf:/etc/nginx/nginx.conf
      - ./src/fullchain.pem:/etc/ssl/private/fullchain.pem
      - ./src/privkey.pem:/etc/ssl/private/privkey.pem
    networks:
      default:
        aliases:
          - api.dev.tagnoo.com
          - app.dev.tagnoo.com
          - dev.tagnoo.com
  postgres:
    image: postgres:13.3-alpine
    environment:
      POSTGRES_PASSWORD: postgres
    ports:
      - 5432:5432
    volumes:
      - pg-data:/var/lib/postgresql/data
  api: &api
    image: 410ventures/tagnoo-api:latest
    environment: &api_environment
      TEST_VAR: 'test123'
      VERSION: development
      WATCH: 1
  api-test:
    <<: *api
    command: ["true"]
    environment:
      <<: *api_environment
      TAGNOO_API_URL: http://localhost
      POSTGRES_URL: pg://postgres:postgres@postgres/tagnooTest
      REDIS_KEY_PREFIX: 'api-test:'
  api-app-test:
    <<: *api
    command: ["true"]
    environment:
      <<: *api_environment
      TAGNOO_API_URL: http://api-app-test
      POSTGRES_URL: pg://postgres:postgres@postgres/tagnooAppTest
      WATCH: 1
  app: &app
    image: 410ventures/tagnoo-app:latest
    environment: &app_environment
      VERSION: development
      WATCH: 1
  app-build:
    <<: *app
    command: ["true"]
  app-livereload:
    <<: *app
    command: ["true"]
  app-test:
    <<: *app
    command: ["true"]
    environment:
      <<: *app_environment
      TAGNOO_APP_URL: http://localhost
      TAGNOO_API_URL: http://api-app-test
volumes:
  pg-data:
This works as expected:
[screenshot: container info]
My nginx.conf file looks like this
events {}

http {
    server_tokens off;

    map $http_upgrade $connection_upgrade {
        '' close;
        default upgrade;
    }

    proxy_http_version 1.1;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header Host $host;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;

    ssl_certificate /etc/ssl/private/fullchain.pem;
    ssl_certificate_key /etc/ssl/private/privkey.pem;

    proxy_read_timeout 24h;

    # proxy_pass directives are passed hosts via a $proxy_pass_host variable to
    # allow nginx to start up before the hosts are actually available. Putting the
    # hosts directly in the proxy_pass directive will fail start up unless all
    # hosts are available. A resolver is required to use variables in proxy_pass
    # directives, so use the docker internal DNS IP here.
    resolver 127.0.0.11;

    server {
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl http2;
        server_name app.dev.tagnoo.com;

        location /livereload {
            set $proxy_pass_host app-livereload:35729;
            proxy_pass http://$proxy_pass_host;
        }

        location / {
            set $proxy_pass_host app;
            proxy_pass http://$proxy_pass_host;
        }
    }

    server {
        listen 443 ssl http2;
        server_name api.dev.tagnoo.com;

        location / {
            set $proxy_pass_host api;
            proxy_pass http://$proxy_pass_host;
        }
    }
}
I have privkey.pem and fullchain.pem files in my root directory. I self-signed them using openssl, but I assume they should still work for my local network if I click through the browser's SSL warnings.
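For reference, a self-signed pair like that can be generated with a single openssl command (a sketch; assumes OpenSSL 1.1.1+ for the -addext flag, with the hostnames taken from the compose aliases above):

# generate a key and a self-signed cert covering the dev hostnames
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout privkey.pem -out fullchain.pem \
  -subj "/CN=dev.tagnoo.com" \
  -addext "subjectAltName=DNS:dev.tagnoo.com,DNS:*.dev.tagnoo.com"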
What I've tried
I've tried accessing the containers (to no avail) through:
http://api.dev.tagnoo.com
https://api.dev.tagnoo.com/healthz (an endpoint I use to test connection)
https://api.dev.tagnoo.com
localhost:80
0.0.0.0:80
Here are the logs for the responses I get (in no particular order):
[screenshot: docker compose logs for lb]
I haven't set up any DNS records for my domain, tagnoo.com, but I don't think that should matter because this is just a local environment at the moment. But I'm not sure if that's true.
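One quick sanity check, assuming dig is available, is to see what the hostname actually resolves to; an empty answer would line up with the ENOTFOUND errors below:

dig +short api.dev.tagnoo.com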
I haven't been able to find any more guidance on debugging NGINX beyond this point.
Summary
I'm mainly concerned that my NGINX config file is not doing what it should. I'm also not sure if my docker-compose file is set up correctly for the reverse-proxy.
My containers are running, but the reverse proxy to them is broken somehow. Whenever I attempt access, requests fail with a 301/302 redirect followed by getaddrinfo ENOTFOUND api.dev.tagnoo.com.
Here are some questions I have on the matter:
Is there anything else I need to do to set up this reverse proxy? Am I missing a step?
Are the fullchain.pem and privkey.pem files the reason NGINX is failing? If so, how do I create these?
Is docker-compose configured correctly?
How can I debug this further?
Any advice/tips would be greatly appreciated!
The issue was that I had not yet set up A and AAAA records to resolve the *.dev subdomain to localhost. I added those and created a valid (not self-signed) certificate for the host machine.
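For a purely local setup, /etc/hosts entries would be a lighter-weight alternative to public A/AAAA records (a sketch; /etc/hosts does not support wildcards, so each subdomain has to be listed explicitly):

# /etc/hosts
127.0.0.1 dev.tagnoo.com api.dev.tagnoo.com app.dev.tagnoo.com
::1       dev.tagnoo.com api.dev.tagnoo.com app.dev.tagnoo.com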
Related
I have a fairly standard ReactJS frontend app (on port 3000) served by a NodeJS backend server (on port 5000). Both apps are Dockerized, and I have configured NGINX to proxy requests between the frontend and the server.
Dockerfile for front end (with NGINX "baked in"):
FROM node:lts-alpine as build
WORKDIR /app
COPY ./package.json ./
COPY ./package-lock.json ./
RUN npm install
COPY . .
RUN npm run build
FROM nginx
EXPOSE 3000
EXPOSE 443
EXPOSE 80
COPY ./cert/app.crt /etc/nginx/
COPY ./cert/app.key /etc/nginx/
ENV HTTPS=true
ENV SSL_CRT_FILE=/etc/nginx/app.crt
ENV SSL_KEY_FILE=/etc/nginx/app.key
RUN rm /etc/nginx/conf.d/default.conf
COPY ./default.conf /etc/nginx/nginx.conf
COPY --from=build /app/build/ /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]
Dockerfile for server:
FROM node:lts-alpine as build
WORKDIR /app
EXPOSE 5000
ENV NODE_TLS_REJECT_UNAUTHORIZED=0
ENV DANGEROUSLY_DISABLE_HOST_CHECK=true
ENV NODE_CONFIG_DIR=./config/
COPY ./package.json ./
COPY ./package-lock.json ./
RUN npm install
COPY . .
CMD [ "npm", "start" ]
The docker-compose.yml for this setup is
version: '3.8'
services:
  client:
    container_name: client
    depends_on:
      - server
    stdin_open: true
    environment:
      - CHOKIDAR_USEPOLLING=true
      - HTTPS=true
      - SSL_CRT_FILE=/etc/nginx/app.crt
      - SSL_KEY_FILE=/etc/nginx/app.key
    build:
      dockerfile: Dockerfile
      context: ./client
    expose:
      - "8000"
      - "3000"
    ports:
      - "3000:443"
      - "8000:80"
    volumes:
      - ./client:/app
      - /app/node_modules
      - /etc/nginx
    networks:
      - internal-network
  server:
    container_name: server
    build:
      dockerfile: Dockerfile
      context: "./server"
    expose:
      - "5000"
    ports:
      - "5000:5000"
    volumes:
      - /app/node_modules
      - ./server:/app
    networks:
      - internal-network
networks:
  internal-network:
    driver: bridge
And crucially, the NGINX default.conf is
worker_processes auto;

events {
    worker_connections 1024;
}

pid /var/run/nginx.pid;

http {
    include mime.types;

    upstream loadbalancer {
        server server:5000 weight=3;
    }

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;
        port_in_redirect off;
        absolute_redirect off;
        return 301 https://$host$request_uri;
    }

    server {
        listen [::]:443 ssl;
        listen 443 ssl;
        server_name example.app* example.co* example.uksouth.azurecontainer.io* localhost*;
        error_page 497 https://$host:$server_port$request_uri;
        error_log /var/log/nginx/client-proxy-error.log;
        access_log /var/log/nginx/client-proxy-access.log;

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:AES128:AES256:RC4-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK;
        ssl_prefer_server_ciphers on;
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 24h;
        keepalive_timeout 300;
        add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains';
        ssl_certificate /etc/nginx/app.crt;
        ssl_certificate_key /etc/nginx/app.key;

        root /usr/share/nginx/html;
        index index.html index.htm index.nginx-debian.html;

        location / {
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
            try_files $uri $uri/ /index.html;
        }

        location /tours {
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_pass http://loadbalancer;
        }
    }
}
With this configuration I have two problems:
Running docker-compose up -d builds and deploys the two Docker containers locally. When I use https://localhost:3000/id, this works and the data is retrieved and shown in the browser correctly. When I type http://localhost:3000/id, however, it gets redirected to http://localhost:443/id, which does not work. I have tried the NGINX directives port_in_redirect off; and absolute_redirect off;, but they have not helped. How can I make sure that the redirect does not alter the port number? (This is likely not going to be an issue in production, where explicit port numbers are not used.)
The bigger problem: the deployment to Azure is done using a docker context and running docker-compose -f ./docker-compose-azure.yml up. This runs and creates two Docker containers and a side-car process. The docker-compose-azure.yml file is
version: '3.8'
services:
  client:
    image: dev.azurecr.io/example-client
    depends_on:
      - server
    stdin_open: true
    environment:
      - CHOKIDAR_USEPOLLING=true
      - HTTPS=true
      - SSL_CRT_FILE=/etc/nginx/app.crt
      - SSL_KEY_FILE=/etc/nginx/app.key
    restart: unless-stopped
    domainname: "example-dev"
    expose:
      - "3000"
    ports:
      - target: 3000
        #published: 3000
        protocol: tcp
        mode: host
    networks:
      - internal-network
  server:
    image: dev.azurecr.io/example-server
    restart: unless-stopped
    ports:
      - "5000:5000"
    networks:
      - internal-network
networks:
  internal-network:
    driver: bridge
If I don't use HTTPS and just a simple reverse proxy, the two issues outlined above go away. But with the configuration above, calls to the Azure FQDN/URL fail: HTTPS requests time out with ERR_CONNECTION_TIMED_OUT, and over HTTP the site cannot be found. What am I doing wrong here?
Thanks for your time.
I think Jan Garaj's answer has touched upon all the important bits. Here is my take, trying to give a targeted answer.
HTTP to HTTPS redirect
Currently the return 301 statement uses the $host variable, which only holds the hostname and not the port. To capture both, you can use the $http_host variable instead. (source)
server {
    listen [::]:80;
    # 307 to preserve POST data
    return 307 https://$http_host$request_uri;
}
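To illustrate the difference between the two variables for a request carrying an explicit port:

# given a request with the header "Host: localhost:3000"
#   $host      -> localhost         (port stripped)
#   $http_host -> localhost:3000    (the Host header verbatim)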
Problems with the Azure config
In the Azure config, you have this bit:
ports:
  - target: 3000
    #published: 3000
    protocol: tcp
    mode: host
which identifies 3000 as the internal client port listening for requests. But remember that you have an NGINX proxy inside that only listens on ports 80 and 443 (the server blocks in the NGINX config). This is the reason you get the ERR_CONNECTION_TIMED_OUT error: the requests are sent to port 3000, where nothing is listening.
Since you want an HTTPS deployment, you can set this to 443 and NGINX will take care of the request.
Enabling HTTP redirect on Azure
The final bit is to configure the Azure deployment so that an HTTP request to your URL gets redirected to its HTTPS counterpart. We already have the NGINX redirect block for port 80.
BUT, it will not help. Since we specify the target to be 443 inside the container, the HTTP request will try to hit 443 and get refused. This article also mentions the same towards the end:
Use your browser to navigate to the public IP address of the container group. The IP address shown in this example is 52.157.22.76, so the URL is https://52.157.22.76. You must use HTTPS to see the running application, because of the Nginx server configuration. Attempts to connect over HTTP fail.
This could be solved if it were possible to add another port, port 80, to the Azure config:
ports:
  - port: 443
    protocol: TCP
  - port: 80
    protocol: TCP
I am not sure if Azure allows this, but if it does, then that's the final solution.
I think you need to check/update your NGINX configuration file and also make sure the SSL certificate files are available.
# http block would be
server {
    listen 80 default_server;
    return 301 https://$server_name$request_uri;
}
and in the https server block, you need to update the location blocks
location /tours {
    proxy_pass http://server:5000;
    proxy_set_header Connection "";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
}

location / {
    try_files $uri $uri/ /index.html;
}
Updated
Your Nginx config file would be
worker_processes auto;

events {
    worker_connections 1024;
}

pid /var/run/nginx.pid;

http {
    include mime.types;

    server {
        listen [::]:443 ssl;
        listen 443 ssl;
        server_name my-redirected-domain.com my-azure-domain.io localhost;
        access_log /var/log/nginx/client-proxy.log;

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:AES128:AES256:RC4-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK;
        ssl_prefer_server_ciphers on;
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 24h;
        keepalive_timeout 300;
        add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains';
        ssl_certificate /etc/nginx/viewform.app.crt;
        ssl_certificate_key /etc/nginx/viewform.app.key;

        root /usr/share/nginx/html;
        index index.html index.htm index.nginx-debian.html;

        location / {
            try_files $uri $uri/ /index.html;
        }

        location /tours {
            proxy_pass http://server:5000;
            proxy_set_header Connection "";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }

    server {
        listen 80 default_server;
        return 301 https://$server_name$request_uri;
    }
}
Use port 443 everywhere to avoid any confusion with port remapping (that can be an advanced setup):
1.) Define the client container to be running on port 443:
version: '3.8'
services:
  client:
    ...
    ports:
      - port: 443
        protocol: TCP
2.) Define NGINX to be running on port 443 with a proper TLS setup, as you have in your updated nginx.conf.
Deploy and open https://<public IP> (you will very likely need to add a security exception in the browser).
BTW: Azure has quite a good article about NGINX with TLS (though a more advanced setup is used):
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-container-group-ssl
IMHO a better redirect from HTTP to HTTPS is:
server {
    listen [::]:80;
    return 301 https://$host$request_uri;
}
I'm trying to publish my first GraphQL project on a VPS, using docker-compose.
It consists of a web app, running on Node.js, and a GraphQL API, also running on Node.js, with Apollo Express and Prisma.
The idea is to have the app and the API running in different containers and to use an nginx container to proxy-pass the requests to the right container (/ goes to the web app, /api goes to the API).
I got it working and it seems to be fine, but it needs to run on HTTPS. So I've set up a Let's Encrypt certificate and configured it in nginx, and that works too, except for one thing: subscriptions.
If I try to connect to the websocket using ws://mydomain/api, it's refused because the app is running on HTTPS. But if I try to connect on wss://mydomain/api, I get:
WebSocket connection to 'wss://mydomain/api' failed: Error during WebSocket handshake: Unexpected response code: 400
I've read a lot of docs and tutorials and it seems to me I'm doing it right, but it just won't work and I don't know what else to try.
Here is the relevant docker-compose.yml code:
version: "3"
services:
api:
build:
context: ./bin/api
container_name: 'node10-api'
restart: 'always'
entrypoint: ["sh", "-c"]
command: ["yarn && yarn prisma deploy && yarn prisma generate && yarn start"]
restart: always
ports:
- "8383:8383"
links:
- prisma
volumes:
- /local/api:/api
app:
build:
context: ./bin/app
container_name: 'node12-app'
restart: 'always'
entrypoint: ["sh", "-c"]
command: ["yarn && yarn build && yarn express-start"]
restart: always
ports:
- "3000:3000"
links:
- api
volumes:
- /local/app:/app
nginx:
container_name: 'nginx'
restart: always
image: nginx:1.15-alpine
ports:
- '80:80'
- '443:443'
volumes:
- ./data/nginx:/etc/nginx/conf.d
- ./data/certbot/conf:/etc/letsencrypt
- ./data/certbot/www:/var/www/certbot
- ./www:/etc/nginx/html
And here is the nginx conf:
upstream app {
    ip_hash;
    server app:3000;
}

upstream api {
    server api:8383;
}

server {
    listen 80;
}

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    listen 443 ssl;
    server_name mydomain;

    ssl_certificate /etc/letsencrypt/live/mydomain/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        proxy_pass http://app;
    }

    location /api {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://api;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }
}
And finally, the server initialization:
const app = express();

app.use(cookieParser());
app.use(process.env.URL_BASE_PATH + '/' + process.env.UPLOAD_URL_DIR, express.static(process.env.UPLOAD_PATH));
app.use(process.env.URL_BASE_PATH + '/assets', express.static('assets'));
app.use(process.env.URL_BASE_PATH + '/etc', router);
app.use(createLocaleMiddleware());
app.use(helmet());
app.disable('x-powered-by');

console.log(process.env.URL_BASE_PATH);

if (process.env.URL_BASE_PATH === '') server.applyMiddleware({app, cors: corsOptions});
else server.applyMiddleware({app, cors: corsOptions, path: process.env.URL_BASE_PATH});

const httpServer = http.createServer(app);
server.installSubscriptionHandlers(httpServer);

// STARTING
httpServer.listen({port: process.env.SERVER_PORT}, () => {
    console.log(`🚀 Server ready`);
});
where server is an ApolloServer.
Everything works but the wss connection: the app can connect to the API via https://mydomain/api normally, and a regular ws connection works too if I run the app over http.
It's just wss that I can't get to work.
Any clues? What am I doing wrong here?
I found my own solution: the docker/nginx configs were right, but Apollo was expecting the websocket connection on wss://mydomain/graphql, even though the GraphQL server is running on https://mydomain/api.
I couldn't find a way to change that, so I added this to the nginx conf:
location ^~ /graphql {
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Frame-Options SAMEORIGIN;
    proxy_pass http://api;
}
And it finally worked.
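For what it's worth, Apollo Server 2 also appears to expose a subscriptions path option, which might have allowed moving the websocket endpoint to /api instead of adding the extra nginx block (a sketch; typeDefs, resolvers and corsOptions are assumed to be defined as in the project):

const server = new ApolloServer({
    typeDefs,
    resolvers,
    // hypothetical: serve the websocket on the same base path as the HTTP API
    subscriptions: { path: process.env.URL_BASE_PATH || '/api' },
});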
I am trying to run a React/NodeJS app with a Node.js server and client setup. I have everything running fine locally using Docker/docker-compose. But when I deploy it to AWS ECS, I get the following error:
apiRequest.js:52 Mixed Content: The page at 'https://myAppUrl/auth/login' was loaded over HTTPS, but requested an insecure resource 'http://0.0.0.0:4000/auth/login'. This request has been blocked; the content must be served over HTTPS.
Docker-compose file
version: "3"
services:
client:
image: docker-registry:dev-test7
api:
image: docker-registry:dev-test4
ports:
- "4000:4000"
nginx:
image: docker-registry:nginx:dev-test5
ports:
- "80:80"
depends_on:
- client
- api
Nginx default.conf
upstream client {
    server 0.0.0.0:3000;
}

upstream api {
    server 0.0.0.0:5000;
}

server {
    listen 80;

    location / {
        proxy_pass http://client;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }

    location /api {
        rewrite /api/(.*) /$1 break;
        proxy_pass http://api;
    }
}
The app deploys fine to AWS ECS, and it runs fine locally with docker-compose up. I thought that since the app was calling the Node.js APIs locally, it wouldn't need to use HTTPS. I have tried many different variations of the URL, such as using the domain or the IP, but it's still not working.
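A hedged guess at the cause, illustrated with a hypothetical client-side call: the React bundle runs in the visitor's browser, not inside the compose network, so once the page is served over HTTPS the browser blocks any plain http:// request it makes. Calling the API through the nginx /api location with a relative URL keeps the request on the page's own origin and scheme:

// hypothetical login call from the React app; '/api/...' is proxied by nginx,
// so the browser sees an https:// request to the page's own host
fetch('/api/auth/login', { method: 'POST' })
    .then((res) => res.json());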
I set up the following application:
version: '3'
services:
  nginx:
    image: myregistry.azurecr.io/nginx:latest
    container_name: nginx
    ports:
      - 80:80
      - 443:443
  app2:
    image: myregistry.azurecr.io/app2:latest
    container_name: app2
    expose:
      - 8080
nginx.conf:
events {
}

http {
    server {
        listen 80 default_server;

        location / {
            auth_basic "Restricted";
            auth_basic_user_file /etc/nginx/.htpasswd;
            proxy_pass http://app2:8080;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
And it works well: a reverse proxy asks for username/password before serving the content.
Now, app2 changes constantly, and I have set up a script that uploads a new image to the Azure registry.
And here comes my pain: each time I upload a new version, I keep getting 502 Bad Gateway errors from nginx for about 2-3 minutes.
After that, the application is available again. How come? Is there a way to prevent it? Where is the 24/7 promised to me by Azure? :(
When you update some part of the container settings, in this case the image, the container restarts, and the downtime depends on how long your application takes to start back up. Consider using rolling updates with Docker Swarm for less downtime (a sketch below), or optimizing your application's startup.
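A minimal sketch of what a rolling update could look like under Docker Swarm (assumes the stack is deployed with docker stack deploy rather than plain docker-compose up):

version: '3.8'
services:
  app2:
    image: myregistry.azurecr.io/app2:latest
    deploy:
      replicas: 2
      update_config:
        order: start-first   # start the new task before stopping the old one
        delay: 10s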
NGINX also caches DNS lookups, so it may keep pointing at the stopped container for quite a while until the cache updates. See this documentation entry for more details.
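A sketch of how to make nginx re-resolve the name at request time instead of caching it from startup, assuming Docker's embedded DNS at 127.0.0.11 (the same trick used in the first nginx.conf on this page):

resolver 127.0.0.11 valid=10s;

server {
    listen 80 default_server;

    location / {
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;
        # using a variable forces a fresh DNS lookup per request
        set $upstream_app2 app2:8080;
        proxy_pass http://$upstream_app2;
    }
}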
This is the first time I've asked anything on Stack Overflow. Basically, I have a bunch of Docker containers set up that work well with nginx and Google DDNS. Recently I tried to add a Node.js container for a project, and I keep getting a connection-refused error. The weird part is that the IP address I'm getting on the upstream server has nothing to do with my node container. Here are my settings for everything:
docker-compose for nodejs:
version: '3.6'
services:
  ddnsTestNode: # Change this line
    image: 'dragoncube/google-domains-ddns'
    container_name: ddnsTestNode # Change this line
    volumes:
      - type: bind
        source: /media/MainData/ddns/test # Change this line
        target: /config/google-domains-ddns.conf
      - type: bind
        source: /etc/localtime
        target: /etc/localtime
    networks:
      - mainNetwork
  testnode:
    image: "node:8"
    user: "node"
    container_name: testnode
    working_dir: /home/node/app
    environment:
      - NODE_ENV=development
    volumes:
      - /path/to/saved/node/app:/home/node/app
    ports:
      - 8081:8081
    expose:
      - "8081"
    command: "npm start"
networks:
  mainNetwork:
    external: true
for NGINX (just the corresponding server):
server {
    listen 443;
    listen [::]:443;
    server_name MY_SERVER_HIDDEN_FOR_QUESTION;

    ssl_certificate /etc/nginx/cert.crt;
    ssl_certificate_key /etc/nginx/cert.key;
    ssl on;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;
    client_max_body_size 10000G;

    location / {
        # Fix the "It appears that your reverse proxy set up is broken" error.
        proxy_pass http://testnode:8081/;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_cache_bypass $http_upgrade;
    }
}
Thanks for your help in advance. I really tried looking for an answer, but couldn't find anything on my specific problem. Every other container I have, such as Seafile or GitLab, works with my setup, but a basic node container doesn't.
I figured out what the problem was: it turns out I didn't specify
networks:
  - mainNetwork
in the testnode service in my docker-compose file.
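For reference, the corrected testnode service then looks like this (the same settings as above, with the network attached):

  testnode:
    image: "node:8"
    user: "node"
    container_name: testnode
    working_dir: /home/node/app
    environment:
      - NODE_ENV=development
    volumes:
      - /path/to/saved/node/app:/home/node/app
    ports:
      - 8081:8081
    expose:
      - "8081"
    command: "npm start"
    networks:
      - mainNetwork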