What is the difference between request.ip and request.raw.ip in Fastify?

Request
The first parameter of the handler function is Request.
Request is a core Fastify object containing, among others, the following fields:
raw - the incoming HTTP request from Node core
ip - the IP address of the incoming request
It contains more fields; I am only showing the ones relevant to my question.
You can check out more fields here -> https://www.fastify.io/docs/latest/Request/

Difference between request.ip and request.raw.ip in Fastify
There is a difference only if your server runs behind a proxy/balancer.
Let's make an example:
Client IP: 123.123.123.123
Balancer IP: 10.10.10.10
The request.raw.connection.remoteAddress will be 10.10.10.10.
The request.ip will be 123.123.123.123 (only if the trustProxy option is enabled, which tells Fastify to read the X-Forwarded-For header); otherwise it will be the same as the raw one.
Note that request.raw is Node core's http.IncomingMessage, which has no ip property, so request.raw.ip is always undefined.
Many other ones are just shortcuts such as hostname or url.
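The selection logic described above can be sketched roughly like this. This is only an illustration of the idea, not Fastify's actual implementation (real header parsing handles IPv6, ports, and multiple headers):

```javascript
// Rough sketches of the idea – not Fastify's real implementation.

// request.ip: with trustProxy, the left-most X-Forwarded-For entry
// (the original client) wins; otherwise the socket peer is reported.
function clientIp(headers, remoteAddress, trustProxy) {
  const xff = headers['x-forwarded-for'];
  if (!trustProxy || !xff) return remoteAddress;
  // X-Forwarded-For is "client, proxy1, proxy2, ..."
  return xff.split(',')[0].trim();
}

// request.ips: the whole hop chain, closest hop first.
function ipChain(headers, remoteAddress) {
  const xff = headers['x-forwarded-for'];
  const forwarded = xff ? xff.split(',').map(s => s.trim()).reverse() : [];
  return [remoteAddress, ...forwarded];
}

// The balancer example above (client 123.123.123.123, balancer 10.10.10.10):
const headers = { 'x-forwarded-for': '123.123.123.123' };
console.log(clientIp(headers, '10.10.10.10', true));   // 123.123.123.123
console.log(clientIp(headers, '10.10.10.10', false));  // 10.10.10.10
console.log(ipChain(headers, '10.10.10.10'));          // [ '10.10.10.10', '123.123.123.123' ]
```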
IP example:
Running the Fastify server locally:
{"ip":"127.0.0.1","ipRaw":"","ipRemote":"127.0.0.1"}
Running behind nginx with trustProxy enabled (192.168.128.2 is nginx):
{"ip":"192.168.128.1","ipRaw":"","ips":["192.168.128.2","192.168.128.1"],"ipRemote":"192.168.128.2"}
Running behind nginx without trustProxy (returns the caller's IP, i.e. the proxy):
{"ip":"192.168.128.2","ipRaw":"","ipRemote":"192.168.128.2"}
You can play with it:
const fastify = require('fastify')({
  logger: true,
  trustProxy: true
})

fastify.get('/', async (request, reply) => {
  return {
    ip: request.ip,
    ipRaw: request.raw.ip || '', // always '' – Node's raw request has no ip property
    ips: request.ips,
    ipRemote: request.raw.connection.remoteAddress // request.raw.socket.remoteAddress on newer Node versions
  }
})

fastify.listen(5000, '0.0.0.0')
The docker-compose.yml:
version: "3.9"
services:
  localnode:
    build:
      context: ./
      dockerfile: Dockerfile-fastify
    ports:
      - "5000:5000"
  nginx:
    build:
      context: ./
      dockerfile: Dockerfile
    ports:
      - "8080:80"
The Fastify Dockerfile (Dockerfile-fastify):
FROM node:alpine
COPY package.json ./
RUN npm install
COPY ./proxy.js .
EXPOSE 5000
CMD ["node", "proxy.js"]
The nginx Dockerfile:
FROM nginx
EXPOSE 8080
COPY default.conf /etc/nginx/conf.d/
and the nginx config (default.conf):
server {
  proxy_http_version 1.1;
  proxy_set_header Host $host;
  proxy_set_header Connection "";
  location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://localnode:5000;
  }
}
PS: these Docker files are not production-ready.

Related

docker-compose publishing port to host

I'm playing a bit with Docker, trying to expose an nginx container to the outside, serving two Express apps (not exposed).
I'm facing a problem: it looks like I can't access my nginx container from my host.
Here are my different files.
Each app is, at the moment, fairly simple:
const express = require('express');
const app = express();
const port = 80;

app.get('/', (req, res) => {
  const os = require('os');
  const hostname = os.hostname();
  res.json({ hostname: hostname, message: 'Hello from auth' }); // or 'Hello from health'
});

app.listen(port, () => {
  console.log(`Example app listening on port ${port}`);
});
The Nginx config file I'm using:
server {
  location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://health/;
  }
  location /api/auth {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://auth/;
  }
}
and finally the docker-compose file
version: "3.8"
services:
  auth:
    build:
      context: ./auth
  health:
    build:
      context: ./health
  nginx:
    restart: always
    build:
      context: ./nginx
    ports:
      - "8080:80"
When I open a CLI in the nginx container, I'm able to query both my Express apps through nginx.
But when I try from my host web browser, it looks like I can't connect to the nginx container.
Any ideas?
PS
Here are the Dockerfiles:
express app
FROM node:14-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 80
CMD [ "node", "index.js" ]
nginx
FROM nginx
COPY default.conf /etc/nginx/conf.d/default.conf

How to dockerize an nginx reverse proxy routing to one React and one Express app? I am getting Bad Gateway

I have created Dockerfiles for one React app, one Express API backend server, and one nginx server which I want as a reverse proxy for the React and Express apps. I am getting 502 Bad Gateway and I am unable to figure out why nginx is not able to forward the requests.
docker-compose.yml
version: "3"
services:
  reverse-proxy:
    image: mern-reverse-proxy
    build: ./reverse-proxy
    container_name: mern-app-reverse-proxy
    ports:
      - "80:80"
  client:
    image: mern-react
    build: ./client
    volumes:
      - ./client/:/usr/src/app
      - /usr/src/app/node_modules
    container_name: mern-app-client
    environment:
      CHOKIDAR_USEPOLLING: "true"
    depends_on:
      - reverse-proxy
  server:
    image: mern-express
    build: ./server
    volumes:
      - ./server/:/usr/src/app
      - /usr/src/app/node_modules
    container_name: mern-app-server
    environment:
      CHOKIDAR_USEPOLLING: "true"
    depends_on:
      - reverse-proxy
react-dockerfile
FROM node:16-alpine3.11
WORKDIR /usr/src/app
COPY ./package.json ./
COPY ./yarn.lock ./
RUN yarn install
COPY . .
EXPOSE 80
CMD ["yarn", "start"]
express-dockerfile
FROM node:16-alpine3.11
WORKDIR /usr/src/app
COPY ./package.json ./
COPY ./yarn.lock ./
RUN yarn install
COPY . .
EXPOSE 80
CMD ["yarn", "start"]
nginx-dockerfile
FROM nginx
RUN rm /etc/nginx/conf.d/default.conf
COPY ./nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
nginx.conf
worker_processes 1;

events { worker_connections 1024; }

http {
  sendfile on;
  server {
    listen 80;
    location /api {
      proxy_pass http://server;
      proxy_redirect off;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Host $server_name;
    }
    location / {
      proxy_pass http://client;
      proxy_redirect off;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Host $server_name;
    }
  }
}
Is it because of the ports or the proxy_pass in nginx.conf? I have exposed port 80 in all three containers because of the reverse proxy, but haven't mapped the ports of the client and server services since they are on the same network. I would be very grateful if somebody could help. Thanks for your time :)

Nginx config for multiple react apps (docker-compose)

For development, I'm trying to run two react apps, locally, on the same port (so both can share localStorage), where app1 runs on localhost:8000 and app2 runs on localhost:8000/app2. However, with the setup below I have the issue that all requests to localhost:8000/app2/... are routed to app1.
Is there a way around this?
nginx.conf
Update: moved /app2 block in front of / (see comment).
server {
  listen 8000;
  server_name localhost;
  location /app2 {
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host $http_host;
    proxy_pass http://app2:3001;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
  }
  location / {
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host $http_host;
    proxy_pass http://app1:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
  }
}
docker-compose.yml
version: "3"
services:
  nginx:
    image: nginx:latest
    ports:
      - "8000:8000"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
  app1:
    container_name: "app1"
    image: node:12
    ports:
      - "3000:3000"
    build:
      context: "./app1"
    volumes:
      - "./app1:/src/app"
  app2:
    container_name: "app2"
    image: node:12
    ports:
      - "3001:3001"
    build:
      context: "./app2"
    volumes:
      - "./app2:/src/app"
Dockerfile app1:
FROM node:12
WORKDIR /src/app
COPY package*.json ./
RUN npm install
EXPOSE 3000
CMD ["npm", "start", "--port", "3000"]
The app2 Dockerfile is the same, except it has EXPOSE 3001 and npm start --port 3001.
I'll update this old question with an answer: the problem was not related to nginx or docker-compose. Instead, I was running two create-react-app applications in development mode and had forgotten to set { "homepage": "/app2" } in package.json, or to set PUBLIC_URL=/app2 as an environment variable, to serve the app from a subdirectory.
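For completeness, the environment-variable variant can live directly in the compose file. A sketch of the relevant fragment, reusing the app2 service name from the file above:

```yaml
  app2:
    environment:
      # Tell create-react-app's dev server the app is served from /app2
      - PUBLIC_URL=/app2
```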

Secure websocket with Apollo Express, Nginx and docker-compose

I'm trying to publish my first GraphQL project on a VPS, using docker-compose.
It consists of a web app, running on Node.js, and a GraphQL API, also running on Node.js, with Apollo Express and Prisma.
The idea is to have the app and the API running in different containers and use an nginx container to proxy the requests to the right container (/ goes to the web app, /api goes to the API).
I got it working and it seems to be fine, but it needs to run on https. So I've set up a Let's Encrypt certificate and configured it in nginx, and that is working too, except for one thing: subscriptions.
If I try to connect to the websocket using ws://mydomain/api, it's refused because the app is running on https. But if I try to connect on wss://mydomain/api, I get:
WebSocket connection to 'wss://mydomain/api' failed: Error during WebSocket handshake: Unexpected response code: 400
I have read a lot of docs and tutorials, and it seems to me I'm doing it right, but it just won't work and I don't know what else to try.
Here is the relevant docker-compose.yml code (the duplicated restart keys from the original have been collapsed):
version: "3"
services:
  api:
    build:
      context: ./bin/api
    container_name: 'node10-api'
    restart: always
    entrypoint: ["sh", "-c"]
    command: ["yarn && yarn prisma deploy && yarn prisma generate && yarn start"]
    ports:
      - "8383:8383"
    links:
      - prisma
    volumes:
      - /local/api:/api
  app:
    build:
      context: ./bin/app
    container_name: 'node12-app'
    restart: always
    entrypoint: ["sh", "-c"]
    command: ["yarn && yarn build && yarn express-start"]
    ports:
      - "3000:3000"
    links:
      - api
    volumes:
      - /local/app:/app
  nginx:
    container_name: 'nginx'
    restart: always
    image: nginx:1.15-alpine
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - ./data/nginx:/etc/nginx/conf.d
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
      - ./www:/etc/nginx/html
And here is the nginx conf:
upstream app {
  ip_hash;
  server app:3000;
}
upstream api {
  server api:8383;
}
server {
  listen 80;
}
map $http_upgrade $connection_upgrade {
  default upgrade;
  '' close;
}
server {
  listen 443 ssl;
  server_name mydomain;
  ssl_certificate /etc/letsencrypt/live/mydomain/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/mydomain/privkey.pem;
  include /etc/letsencrypt/options-ssl-nginx.conf;
  ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
  location / {
    proxy_pass http://app;
  }
  location /api {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
    proxy_pass http://api;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
  }
}
And finally, the server initialization:
const app = express();
app.use(cookieParser());
app.use(process.env.URL_BASE_PATH + '/' + process.env.UPLOAD_URL_DIR, express.static(process.env.UPLOAD_PATH));
app.use(process.env.URL_BASE_PATH + '/assets', express.static('assets'));
app.use(process.env.URL_BASE_PATH + '/etc', router);
app.use(createLocaleMiddleware());
app.use(helmet());
app.disable('x-powered-by');

console.log(process.env.URL_BASE_PATH);
if (process.env.URL_BASE_PATH === '') server.applyMiddleware({ app, cors: corsOptions });
else server.applyMiddleware({ app, cors: corsOptions, path: process.env.URL_BASE_PATH });

const httpServer = http.createServer(app);
server.installSubscriptionHandlers(httpServer);

// STARTING
httpServer.listen({ port: process.env.SERVER_PORT }, () => {
  console.log(`🚀 Server ready`);
});
where server is an ApolloServer.
Everything works but the wss connection: the app can connect to the API using https://mydomain/api normally, and a regular ws connection works too if I run the app on http.
It is just wss that I can't get to work.
Any clues? What am I doing wrong here?
I found my own solution: the docker/nginx configs were right, but Apollo was expecting the websocket connection on wss://mydomain/graphql, even though the GraphQL server is running on https://mydomain/api.
I failed to find a way to change that, so I added this to the nginx conf:
location ^~ /graphql {
  proxy_set_header Upgrade $http_upgrade;
  proxy_set_header Connection "upgrade";
  proxy_set_header Host $http_host;
  proxy_set_header X-Real-IP $remote_addr;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header X-Frame-Options SAMEORIGIN;
  proxy_pass http://api;
}
And it finally worked
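An alternative to the nginx workaround, assuming Apollo Server 2.x (which the installSubscriptionHandlers call above suggests): the subscriptions path can be overridden in the ApolloServer constructor, so the websocket endpoint lives on /api like the HTTP one. A sketch only; typeDefs and resolvers stand in for the real schema:

```javascript
const { ApolloServer } = require('apollo-server-express');

const server = new ApolloServer({
  typeDefs,    // your schema (placeholder)
  resolvers,   // your resolvers (placeholder)
  // Serve the websocket endpoint on /api instead of the default /graphql,
  // so wss://mydomain/api matches the nginx /api location.
  subscriptions: { path: '/api' },
});
```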

How to resolve mixed content that was loaded over HTTPS, but requested an insecure resource

I am trying to run a React/Node.js app with a Node.js API and client setup. I have everything running fine locally using Docker/docker-compose. But when I deploy it to AWS ECS, I get the following error:
apiRequest.js:52 Mixed Content: The page at 'https://myAppUrl/auth/login' was loaded over HTTPS, but requested an insecure resource 'http://0.0.0.0:4000/auth/login'. This request has been blocked; the content must be served over HTTPS.
Docker-compose file
version: "3"
services:
  client:
    image: docker-registry:dev-test7
  api:
    image: docker-registry:dev-test4
    ports:
      - "4000:4000"
  nginx:
    image: docker-registry:nginx:dev-test5
    ports:
      - "80:80"
    depends_on:
      - client
      - api
Nginx default.conf
upstream client {
  server 0.0.0.0:3000;
}
upstream api {
  server 0.0.0.0:5000;
}
server {
  listen 80;
  location / {
    proxy_pass http://client;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $server_name;
  }
  location /api {
    rewrite /api/(.*) /$1 break;
    proxy_pass http://api;
  }
}
The app deploys fine to AWS ECS, and it runs fine locally with docker-compose up. I thought that since the app was calling the Node.js APIs locally, it wouldn't need to use https. I have tried many different variations of the URL, such as using the domain or the IP, but it is still not working.
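One common direction for this kind of error (a hedged suggestion, not part of the original thread): have the browser call the API through the same https origin via the nginx /api location, using a relative path, so the request inherits the page's scheme instead of hard-coding http://0.0.0.0:4000. A tiny illustration of why the relative form avoids the mixed-content block (the myapp.example host is a placeholder):

```javascript
// Mixed content happens when an https page requests an absolute http URL.
// A relative path resolved against the page's origin keeps the page's scheme.
function resolveApiUrl(pageOrigin, path) {
  return new URL(path, pageOrigin).toString();
}

// Page served over https – the resolved request stays on https:
console.log(resolveApiUrl('https://myapp.example', '/api/auth/login'));
// -> https://myapp.example/api/auth/login
```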
