Secure websocket with Apollo Express, Nginx and docker-compose - node.js

I'm trying to publish my first GraphQL project on a VPS, using docker-compose.
It consists of a web app running on Node.js, and a GraphQL API, also running on Node.js, with Apollo Express and Prisma.
The idea is to have the app and the API running in different containers and use an nginx container to proxy the requests to the right container (/ goes to the web app, /api goes to the API).
I got it working and it seems to be fine, but it needs to run on HTTPS. So I've set up a Let's Encrypt certificate and configured it in nginx, and that works too, except for one thing: subscriptions.
If I try to connect to the websocket using ws://mydomain/api, the connection is refused because the app is running on HTTPS. But if I try to connect on wss://mydomain/api, I get:
WebSocket connection to 'wss://mydomain/api' failed: Error during WebSocket handshake: Unexpected response code: 400
I have read a lot of docs and tutorials and it seems to me I'm doing it right, but it just won't work and I don't know what else to try.
Here is the relevant docker-compose.yml code:
version: "3"
services:
api:
build:
context: ./bin/api
container_name: 'node10-api'
restart: 'always'
entrypoint: ["sh", "-c"]
command: ["yarn && yarn prisma deploy && yarn prisma generate && yarn start"]
restart: always
ports:
- "8383:8383"
links:
- prisma
volumes:
- /local/api:/api
app:
build:
context: ./bin/app
container_name: 'node12-app'
restart: 'always'
entrypoint: ["sh", "-c"]
command: ["yarn && yarn build && yarn express-start"]
restart: always
ports:
- "3000:3000"
links:
- api
volumes:
- /local/app:/app
nginx:
container_name: 'nginx'
restart: always
image: nginx:1.15-alpine
ports:
- '80:80'
- '443:443'
volumes:
- ./data/nginx:/etc/nginx/conf.d
- ./data/certbot/conf:/etc/letsencrypt
- ./data/certbot/www:/var/www/certbot
- ./www:/etc/nginx/html
And here is the nginx conf:
upstream app {
    ip_hash;
    server app:3000;
}

upstream api {
    server api:8383;
}

server {
    listen 80;
}

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    listen 443 ssl;
    server_name mydomain;
    ssl_certificate /etc/letsencrypt/live/mydomain/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        proxy_pass http://app;
    }

    location /api {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://api;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }
}
And finally, the server initialization:
const app = express();
app.use(cookieParser());
app.use(process.env.URL_BASE_PATH + '/' + process.env.UPLOAD_URL_DIR, express.static(process.env.UPLOAD_PATH));
app.use(process.env.URL_BASE_PATH + '/assets', express.static('assets'));
app.use(process.env.URL_BASE_PATH + '/etc', router);
app.use(createLocaleMiddleware());
app.use(helmet());
app.disable('x-powered-by');
console.log(process.env.URL_BASE_PATH);

// Mount the Apollo HTTP middleware on URL_BASE_PATH (or the default path if unset)
if (process.env.URL_BASE_PATH === '') server.applyMiddleware({app, cors: corsOptions});
else server.applyMiddleware({app, cors: corsOptions, path: process.env.URL_BASE_PATH});

// Wrap the Express app in an HTTP server so Apollo can attach the
// subscription (websocket) handlers to it
const httpServer = http.createServer(app);
server.installSubscriptionHandlers(httpServer);

// STARTING
httpServer.listen({port: process.env.SERVER_PORT}, () => {
  console.log(`🚀 Server ready`);
});
Where server is an ApolloServer.
Everything works but the wss connection: the app can connect to the API at https://mydomain/api normally, and a regular ws connection works too, if I run the app on HTTP.
It's just wss that I can't get to work.
Any clues? What am I doing wrong here?

I found my own solution: the docker/nginx configs were right, but Apollo was expecting the websocket connection on wss://mydomain/graphql, even though the GraphQL server is running on https://mydomain/api.
I failed to find a way to change that, so I added this to the nginx conf:
location ^~ /graphql {
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Frame-Options SAMEORIGIN;
    proxy_pass http://api;
}
And it finally worked.
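For anyone hitting the same issue: Apollo Server 2.x does appear to expose a subscriptions.path option, which should let the websocket endpoint match the HTTP one and avoid the extra nginx location entirely. A minimal sketch, assuming apollo-server-express 2.x and that typeDefs and resolvers are defined elsewhere:

const { ApolloServer } = require('apollo-server-express');

const server = new ApolloServer({
  typeDefs,
  resolvers,
  subscriptions: {
    // mount the websocket handler on the same path as the HTTP endpoint
    // (e.g. /api) instead of the default /graphql
    path: process.env.URL_BASE_PATH || '/graphql',
  },
});

With this, server.installSubscriptionHandlers(httpServer) attaches the websocket handler at that path, so the existing location /api block should cover wss as well.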

Related

What is the difference between request.ip and request.raw.ip in Fastify?

Request
The first parameter of the handler function is Request.
Request is a core Fastify object containing the following fields:
raw - the incoming HTTP request from Node core
ip - the IP address of the incoming request
It contains more fields; I am only showing the ones relevant to my question.
You can check out more fields here -> https://www.fastify.io/docs/latest/Request/
Difference between request.ip and request.raw.ip in fastify.
There is a difference only if your server runs behind a proxy/balancer.
Let's make an example:
Client IP: 123.123.123.123
Balancer IP: 10.10.10.10
The request.raw.connection.remoteAddress will be 10.10.10.10.
The request.ip will be 123.123.123.123 (only if the trustProxy option is enabled), otherwise it will be the same as the raw one.
Many of the other fields are just shortcuts, such as hostname or url.
IP example:
Running the Fastify server locally:
{"ip":"127.0.0.1","ipRaw":"","ipRemote":"127.0.0.1"}
Running behind an nginx with trustProxy enabled (192.168.128.2 is nginx):
{"ip":"192.168.128.1","ipRaw":"","ips":["192.168.128.2","192.168.128.1"],"ipRemote":"192.168.128.2"}
Running behind an nginx without trustProxy (returns the caller IP):
{"ip":"192.168.128.2","ipRaw":"","ipRemote":"192.168.128.2"}
You can play with it:
const fastify = require('fastify')({
  logger: true,
  trustProxy: true
})

fastify.get('/', async (request, reply) => {
  return {
    ip: request.ip,
    ipRaw: request.raw.ip || '',
    ips: request.ips,
    ipRemote: request.raw.connection.remoteAddress
  }
})

fastify.listen(5000, '0.0.0.0')
version: "3.9"
services:
localnode:
build:
context: ./
dockerfile: Dockerfile-fastify
ports:
- "5000:5000"
nginx:
build:
context: ./
dockerfile: Dockerfile
ports:
- "8080:80"
Fastify Dockerfile:
FROM node:alpine
COPY package.json ./
RUN npm install
COPY ./proxy.js .
EXPOSE 5000
CMD ["node", "proxy.js"]
nginx Dockerfile:
FROM nginx
EXPOSE 8080
COPY default.conf /etc/nginx/conf.d/
And the nginx config:
server {
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header Connection "";

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://localnode:5000;
    }
}
PS: these Dockerfiles are not production-ready.

Nginx config for multiple react apps (docker-compose)

For development, I'm trying to run two React apps locally on the same port (so both can share localStorage), where app1 runs on localhost:8000 and app2 runs on localhost:8000/app2. However, with the setup below I have the issue that all requests to localhost:8000/app2/... are routed to app1.
Is there a way around this?
nginx.conf
Update: moved /app2 block in front of / (see comment).
server {
    listen 8000;
    server_name localhost;

    location /app2 {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://app2:3001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://app1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
docker-compose.yml
version: "3"
services:
nginx:
image: nginx:latest
ports:
- "8000:8000"
volumes:
- ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
app1:
container_name: "app1"
image: node:12
ports:
- "3000:3000"
build:
context: "./app1"
volumes:
- "./app1:/src/app"
app2:
container_name: "app2"
image: node:12
ports:
- "3001:3001"
build:
context: "./app2"
volumes:
- "./app2:/src/app"
Dockerfile app1
FROM node:12
WORKDIR /src/app
COPY package*.json ./
RUN npm install
EXPOSE 3000
CMD ["npm", "start", "--port", "3000"]
The app2 Dockerfile is the same, except it has EXPOSE 3001 and npm start --port 3001.
I'll update this old question with an answer: the problem was not related to nginx or docker-compose. Instead, I was running two create-react-app applications in development mode and had forgotten to set { "homepage": "/app2" } in package.json, or PUBLIC_URL=/app2 as an environment variable, to serve the app from a subdirectory.
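A minimal sketch of that fix, assuming a standard create-react-app setup (the homepage field makes CRA emit asset and route URLs under /app2):

app2/package.json (fragment, hypothetical values):
{
  "name": "app2",
  "homepage": "/app2"
}

Alternatively, PUBLIC_URL=/app2 can be passed to the app2 service as an environment variable in docker-compose.yml.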

How to resolve mixed content that was loaded over HTTPS, but requested an insecure resource

I am trying to run a React/NodeJS app with a Node.js and client setup. I have everything running fine locally using Docker/docker-compose. But when I deploy it to AWS ECS, I get the following error:
apiRequest.js:52 Mixed Content: The page at 'https://myAppUrl/auth/login' was loaded over HTTPS, but requested an insecure resource 'http://0.0.0.0:4000/auth/login'. This request has been blocked; the content must be served over HTTPS.
Docker-compose file
version: "3"
services:
client:
image: docker-registry:dev-test7
api:
image: docker-registry:dev-test4
ports:
- "4000:4000"
nginx:
image: docker-registry:nginx:dev-test5
ports:
- "80:80"
depends_on:
- client
- api
Nginx default.conf
upstream client {
    server 0.0.0.0:3000;
}

upstream api {
    server 0.0.0.0:5000;
}

server {
    listen 80;

    location / {
        proxy_pass http://client;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }

    location /api {
        rewrite /api/(.*) /$1 break;
        proxy_pass http://api;
    }
}
The app deploys fine to AWS ECS and runs fine locally with docker-compose up. I thought that since the app was calling the Node.js APIs locally, it wouldn't need to use HTTPS. I have tried many different variations of the URL, such as using the domain or the IP, but it's still not working.
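No answer is recorded in this thread, but the usual fix for this class of error (an assumption on my part, not from the original post) is to stop hard-coding http://0.0.0.0:4000 in the client bundle and call a relative path instead, so the request inherits the page's HTTPS origin and nginx's location /api block forwards it to the api container. A sketch of such a client-side call:

// hypothetical client code: a relative URL keeps the request on
// https://myAppUrl, so the browser never issues a plain-http request
const login = async (credentials) => {
  const res = await fetch('/api/auth/login', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(credentials),
  });
  return res.json();
};

Note also that, as posted, the upstream blocks point at 0.0.0.0 rather than the compose service names, and the api upstream port (5000) does not match the published port (4000); those would likely need fixing too.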

Nginx error failed (111: Connection refused) while connecting to upstream, docker-compose nodejs

I'm trying to build multiple services and reverse proxy them with nginx.
So service1 is:
http://api/service1 (nginx) => docker (http://service1:4001/) => express (http://localhost:4000)
service2 is:
http://api/service2 (nginx) => docker (http://service2:4002/) => express (http://localhost:4000)
It's my first time experimenting with nginx from scratch and I'm stuck: I can't reach any of my services from http://localhost:80/service1 or http://api/service1. Also, do you think this is a good starting architecture for microservices, for dev and production?
I also have doubts about the network in my docker-compose file: is it right to define that network, or should I let Docker use the default one?
(All of the containers are running fine.)
docker-compose.yml :
version: '3'
services:
  mongo:
    container_name: mongo
    image: mongo:latest
    ports:
      - '27017:27017'
    volumes:
      - './mongo/db:/data/db'
  nginx:
    build: ./nginx
    container_name: nginx
    links:
      - service1
      - service2
    ports:
      - '80:80'
      - '443:443'
    depends_on:
      - mongo
    networks:
      - api
  service1:
    build: ./services/service1
    container_name: service1
    links:
      - 'mongo:mongo'
    volumes:
      - './services/service1:/src/'
    ports:
      - '4001:4000'
    command: yarn dev
    networks:
      - api
  service2:
    build: ./services/service2
    container_name: service2
    links:
      - 'mongo:mongo'
    volumes:
      - './services/service2:/src/'
    ports:
      - '4002:4000'
    command: yarn dev
    networks:
      - api
networks:
  api:
nginx.conf:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        server_name api;
        charset utf-8;

        location /service1 {
            proxy_pass http://service1:4001;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }

        location /service2 {
            proxy_pass http://service2:4002;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }
}
Service Dockerfile:
FROM node:latest
RUN mkdir /src
WORKDIR /src
COPY package.json /src/package.json
RUN npm install
COPY . /src/
EXPOSE 4000
nginx Dockerfile:
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
I'm trying to reach http://localhost:80/service1/, which should normally get me to http://service1:4001, but I'm getting this error:
[error] 7#7: *2 connect() failed (111: Connection refused) while connecting to upstream, client: 172.23.0.1, server: bts-api, request: "GET / HTTP/1.1", upstream: "http://172.23.0.2:4001/", host: "localhost:80"
172.23.0.1 - - [15/Apr/2020:22:01:44 +0000] "GET / HTTP/1.1" 502 157 "-" "PostmanRuntime/7.24.1"
I'm also trying to reach http://api/service1/ (api is defined in nginx.conf as the server_name), but I don't get any response or ping.
Please add the proxy_redirect parameter and use the container's internal access port in your nginx.conf, as below:
server {
    listen 80;
    server_name api;
    charset utf-8;

    location /service1 {
        proxy_pass http://service1:4000;
        proxy_redirect http://service1:4000 http://www.api/service1;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location /service2 {
        proxy_pass http://service2:4000;
        proxy_redirect http://service2:4000 http://www.api/service2;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Finally, it turned out to be due to my container names being $(project)-$(container-name), so in nginx I replaced accounts => $(project)-accounts.
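For illustration (the real names are elided in the answer above), the change amounts to pointing proxy_pass at the prefixed container name; "myproject" below is a hypothetical project name:

location /service1 {
    # "myproject-service1" is hypothetical; use the actual name shown by `docker ps`
    proxy_pass http://myproject-service1:4000;
}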

Nginx Request redirection one container to another

I am running two CentOS docker containers using the following compose file:
version: "2"
services:
nginx:
build:
context: ./docker-build
dockerfile: Dockerfile.nginx
restart: always
ports:
- "8080:8080"
command: "/usr/sbin/nginx"
volumes:
- ~/my-dir:/my-dir
data:
build:
context: ./docker-build
dockerfile: Dockerfile.data
restart: always
ports:
- "8081:8081"
command: "/usr/sbin/nginx"
volumes:
- ~/my-dir-1:/my-dir-1
I have installed nginx (via the Dockerfiles) in both containers to serve specific directories.
I am trying to redirect the request http://host-IP:8080/my-data/ to the data container using nginx.
Below is my Nginx configuration for nginx container
/etc/nginx/conf.d/default.conf
server {
    listen 8080;

    location / {
        root /my-dir/;
        index index.html index.htm;
    }
}
I am able to access the my-dir directory via the http://host-IP:8080 URL and my-dir-1 via http://host-IP:8081. How can I configure nginx to redirect requests to the data container via the http://host-IP:8080/my-data URL?
I don't really get the use case of your app or why you are doing it this way. But you can do this with a proxy. This is untested code (look at the docs), but something like this:
http {
    upstream data_container {
        server data:8081;
    }

    server {
        listen 8080;

        location / {
            root /my-dir/;
            index index.html index.htm;
        }

        location /my-data {
            proxy_pass http://data_container$request_uri;
        }
    }
}
nginx.conf file:
http {
    server {
        listen 80;

        location /api {
            proxy_pass http://<SERVICE_NAME>:8080/my-api/;
        }

        location /web {
            proxy_pass http://<SERVICE_NAME>:80/my-web-app/;
        }
    }
}

events { worker_connections 1024; }
NB: Here /my-api and /my-web-app are the application context paths. SERVICE_NAME is the name specified for each service in the docker-compose.yml file.
Dockerfile for nginx
FROM nginx
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Now access the URLs via:
http://localhost/api/
http://localhost/web/
If you're looking for a WebSocket conf, here it is:
server {
    server_name _;

    location /ws {
        proxy_pass http://localhost:8888;
        # this is the key to WebSocket support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location / {
        proxy_pass http://localhost:8889;
    }
}
Happy coding :)
