How to set up nginx with docker compose - node.js

I have nginx config and docker compose but I am not able to link them together properly.
default.conf:
worker_processes auto;
events { worker_connections 1024; }
http {
    upstream docker_nginx {
        server nginx:6000;
    }
    server {
        location / {
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_pass http://localhost:4000;
            proxy_set_header Host $host;
        }
        location /jwt/user/ {
            proxy_set_header X-Forwarded-For $remote_addr;
            pproxy_pass http://localhost:4000;
            proxy_set_header Host $host;
        }
    }
}
Docker compose:
version: '2'
services:
  reverseproxy:
    image: nginx
    container_name: docker_nginx
    ports:
      - 6000:6000
    volumes:
      - ./services:/default.conf
    restart: always
  jwt:
    build: ./jwt
    image: jwt:1.0
    ports:
      - 4000:4000
    restart: always
    volumes:
      - ./jwt
I want to listen on port 6000 and route requests to a service on port 4000. I tried opening http://nginx:6000/ and http://localhost:6000/ but I get this:
This site can’t be reached. The web page at http://localhost:6000/ might be temporarily down or it may have moved permanently to a new web address.
ERR_UNSAFE_PORT

Let me point you to my repo; check my docker-compose.yml there. It runs several services that are proxied by Nginx and work together.
In that repo you'll find a default.conf file containing several Nginx upstream configurations; take a look at how I reference docker-compose services there.
Inside the client/ directory I also have another Nginx instance acting as the web server for a React.js project.
The server/ directory holds a Node.js API that connects to Redis and a PostgreSQL database, also built from docker-compose.yml.
I think this use case can be useful for you and other users!

There are a few things misconfigured here.
The config file is mapped to the wrong location; it should be ./services/default.conf:/etc/nginx/nginx.conf
You have to explicitly tell nginx (in the conf file) to listen on port 6000, or otherwise map your host port 6000 to port 80 inside the container (nginx's default): "6000:80". Note also that Chrome refuses to connect to port 6000 because it is on the browser's restricted-ports list; that is the ERR_UNSAFE_PORT you are seeing, so picking a different host port avoids that too.
And yes, as suggested in the other answer, instead of proxying to localhost:4000 you have to proxy to jwt:4000.
For more on nginx in Docker, see its docs: https://hub.docker.com/_/nginx/
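Putting the fixes above together, a minimal corrected docker-compose.yml might look like this (a sketch; it assumes your config file really lives at ./services/default.conf):

```yaml
version: '2'
services:
  reverseproxy:
    image: nginx
    container_name: docker_nginx
    ports:
      - "6000:80"   # host port 6000 -> nginx's default port 80 inside the container
    volumes:
      - ./services/default.conf:/etc/nginx/nginx.conf
    restart: always
  jwt:
    build: ./jwt
    image: jwt:1.0
    restart: always
```

With nginx proxying to http://jwt:4000 over the compose network, the jwt service no longer needs a host port mapping at all (and if Chrome blocks port 6000 on your machine, swap in another host port such as 8080).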

Should - ./services:/default.conf be something like - ./default.conf:/etc/nginx/sites-enabled/default.conf?
Also try replacing proxy_pass http://localhost:4000; with proxy_pass http://jwt:4000; and you probably don't need the upstream section.
The "proxy_pass" target is the address of the proxied service from within the network created by docker compose, and each service's hostname is the same as its service name in docker-compose.yml.
I have never used upstream, but if you want to use it, I think you have to put this:
upstream docker-jwt {
    server jwt:4000;
}
# and
proxy_pass http://docker-jwt;
but as I said above, I would skip that and just put http://jwt:4000 as the proxied address.
You also seem to have a typo here: pproxy_pass http://localhost:4000;
But if you leave the typo in and look at the docker-compose logs, you will hopefully see an error from nginx, which would at least confirm that the config file was mapped successfully.
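For reference, here is a sketch of the full config with those corrections applied (listening on 6000 and proxying to the jwt service by its compose name; the names are taken from the question):

```nginx
worker_processes auto;
events { worker_connections 1024; }
http {
    server {
        listen 6000;   # must match the container-side port published in docker-compose
        location / {
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header Host $host;
            # "jwt" resolves to the jwt service on the compose network
            proxy_pass http://jwt:4000;
        }
    }
}
```

Since this file contains the top-level worker_processes/events/http blocks, mount it at /etc/nginx/nginx.conf rather than into conf.d/.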

Related

Can't redirect traffic to localhost with nginx and docker

I'm new to Docker and nginx, so this may be a simple question, but I've been searching through questions and answers for a while and haven't found the correct solution.
I'm trying to run an nginx server through Docker to reroute all requests to my.example.com/api/... to localhost:3000/api/...
I have the following Dockerfile:
FROM nginx
COPY ./nginx.conf /etc/nginx/conf.d/default.conf
and the nginx.conf file:
server {
    server_name my.example.com;
    location / {
        proxy_pass http://localhost:3000/;
    }
}
When I make calls to the api on localhost:3000 directly this works fine, but when I try to run against my.example.com I get a network error that the host isn't found. To be clear: the domain I want to 'redirect' traffic from to localhost is a valid server address, but I want to mock its api for development.
This isn't working because your nginx is proxying the request to localhost, which is the container itself, but your app is running on the host's port 3000, outside the container. Check this article.
Change
proxy_pass http://localhost:3000/;
to
proxy_pass http://host.docker.internal:3000;
and add 127.0.0.1 example.com my.example.com to /etc/hosts
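One caveat, since the platform isn't stated in the question: host.docker.internal resolves automatically on Docker Desktop (Mac/Windows), but on Linux you have to add the mapping yourself; Docker 20.10+ supports the special host-gateway value for this. In docker-compose that looks something like (the service name "web" here is a placeholder; adjust to your compose file):

```yaml
services:
  web:
    build: .
    ports:
      - "80:80"
    extra_hosts:
      # maps host.docker.internal to the host's gateway IP inside the container
      - "host.docker.internal:host-gateway"
```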

Nginx 502 Bad Gateway when Jenkins pipeline is running a docker for React app

I have an Ubuntu 18.04 server running in a Droplet (DigitalOcean), secured with SSL and using an Nginx reverse proxy. Jenkins is also running on my server (not in any Docker container) and is configured to be accessed under the domain I created for it: jenkins.testdomain.com (all these steps following the DO docs).
So the goal is to manage the deployment of a Node.js/React application to my testdomain.com later; for now, I just want to create the dist folder generated after the 'npm build' within /var/lib/jenkins/workspace/, just that.
For now, I'm able to access my jenkins.testdomain.com site fine, trigger the pipeline after pushing to my repo, and start running the stages; but it's when the pipeline reaches the Deliver phase (read: the 'npm build' phase), sometimes the Build phase ('npm install'), that nginx starts to fail.
It's at this point, reading the Jenkins console output, that I see it get stuck and eventually show a 502 Bad Gateway error. I then need to run systemctl restart jenkins on my server console to have access again. After restarting, the pipeline resumes the work and seems to get the job done :/
In the /var/log/nginx/error.log for nginx I can read:
*1 connect() failed (111: Connection refused) while connecting to upstream, client: 85.146.85.194, server: jenkins.testdomain.com, request: "GET /job/Basic%20NodeJS-React%20app/8/console HTTP/1.1", upstream: "https://127.0.0.1:8080/job/Basic%20NodeJS-React%20app/8/console", host: "jenkins.testdomain.com", referrer: "https://jenkins.testdomain.com/job/Basic%20NodeJS-React%20app/8/"
*1 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream, client: 85.146.85.194, server: jenkins.testdomain.com, request: "GET /favicon.ico HTTP/1.1", upstream: "https://127.0.0.1:8080/favicon.ico", host: "jenkins.testdomain.com", referrer: "https://jenkins.testdomain.com/job/Basic%20NodeJS-React%20app/8/console" ...
In the Jenkinsfile of my node-js-react app (from jenkins repo), the agent looks like this:
pipeline {
    agent {
        docker {
            image 'node:6-alpine'
            args '-p 3000:80'
        }
    }
    environment {
        CI = 'true'
    }
    stages {
        // Build, Test, and Deliver stages
    }
}
And my jenkins.testdomain.com configuration (/etc/nginx/sites-available/jenkins.testdomain.com) is like this (pass tests from nginx -t):
server {
    listen 80;
    root /var/www/jenkins.testdomain.com/html;
    server_name jenkins.testdomain.com www.jenkins.testdomain.com;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Fix the "It appears that your reverse proxy set up is broken" error.
        proxy_pass http://localhost:8080;
        # High timeout for testing
        proxy_connect_timeout 1200s;
        proxy_send_timeout 1200s;
        proxy_read_timeout 1200s;
        proxy_redirect http://localhost:8080 https://jenkins.testdomain.com;
        # Required for new HTTP-based CLI
        proxy_http_version 1.1;
        proxy_request_buffering off;
        # Required for HTTP-based CLI to work over SSL
        proxy_buffering off;
    }
    # Certbot auto-generated lines...
}
Any help would be very welcome; I've been struggling with this for 3 days, playing around with the different proxy_* directives from nginx and so on...
Thanks in advance!
OK, just to add an update: some days after my latest post, I realized that the main and only reason the server was going down was a lack of resources in the Droplet.
I was using a Droplet with 1GB of RAM, 25GB HD, etc. (the most basic one), so I chose to upgrade it to at least 2GB of RAM, and indeed, that made it work as I was expecting. Everything works fine now and that issue hasn't happened again.
Hope it helps if someone experiences the same issue.
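If you want evidence of a memory shortage like this before resizing the Droplet (a diagnostic sketch, not part of the original answer), check available RAM and look for OOM-killer activity:

```shell
# Print total and currently available memory in MB
awk '/MemTotal|MemAvailable/ {printf "%s %.0f MB\n", $1, $2/1024}' /proc/meminfo

# Any hit here means the kernel killed a process (e.g. node during "npm build") for lack of RAM
dmesg 2>/dev/null | grep -i 'out of memory' || true
```

On a 1GB droplet, npm install/build inside the node container can easily exhaust MemAvailable, which matches the 502s going away after the resize.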

How to use nginx as proxy to s3 aws?

I'm trying to build nginx in Docker as a proxy to AWS S3.
The problem is there are variables that I don't understand: where do they come from?
First, this is how my bucket looks:
Inside this bucket I have a pic.png file.
When using nginx I start it from Docker using docker-compose:
web:
  image: nginx
  volumes:
    - ./example.com.conf:/etc/nginx/conf.d/default.conf
  ports:
    - '8080:80'
And I start the container using docker-compose up.
I have my aws_access_key and aws_secret_key from IAM.
This is how I defined my example.com.conf file:
server {
    listen 80;
    server_name localhost;
    location ~ '^/([^/]+)/(.*)$' {
        set $bucket 'my-bucket';
        set $key '';
        # Setup AWS Authorization header
        set $aws_signature '';
        # the only reason we need lua is to get the current date
        set_by_lua $now "return ngx.cookie_time(ngx.time())";
        # the access key
        set $aws_access_key 'AKIA6*******';
        set $aws_secret_key '1wLXpiNN0***********';
        # the actual string to be signed
        # see: http://docs.amazonwebservices.com/AmazonS3/latest/dev/RESTAuthentication.html
        set $string_to_sign "$request_method\n\n\n\nx-amz-date:$now\n/$bucket/$key";
        # create the hmac signature
        set_hmac_sha1 $aws_signature $aws_secret_key $string_to_sign;
        # encode the signature with base64
        set_encode_base64 $aws_signature $aws_signature;
        proxy_set_header x-amz-date $now;
        proxy_set_header Authorization "AWS $aws_access_key:$aws_signature";
        rewrite .* /$key break;
        # we need to set the host header here in order to find the bucket
        proxy_set_header Host $bucket.s3.amazonaws.com;
        rewrite .* /$key break;
        # another solution would be to use the bucket in the url
        # rewrite .* /$bucket/$key break;
        proxy_pass http://s3.amazonaws.com;
    }
}
But I get an error when I run Docker with nginx:
nginx: [emerg] unknown directive "set_by_lua" in /etc/nginx/conf.d/default.conf:13
So I'm not sure I'm doing this right.
I need an explanation and an example of how to do it right, please.
For example, what is $key? What should the request look like? http://localhost:8080/pic.png?
Try using an nginx image with Lua installed:
web:
  image: firesh/nginx-lua
  volumes:
    - ./example.com.conf:/etc/nginx/conf.d/default.conf
  ports:
    - '8080:80'
The problem is that set_by_lua needs nginx to be compiled with ngx_devel_kit.
UPDATE
It seems that you are missing many modules; I suggest you use this Dockerfile.
Example:
docker run -v /path/to/example.com.conf:/etc/nginx/conf.d/default.conf openresty/openresty:centos
An alternative to using lua for proxying S3 is to use njs to do the proxying. This project shows an example with nginx proxying S3 and running in Docker.
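A further alternative, assuming the objects you're serving can be made public: then you need no Lua, njs, or request signing at all, just a plain proxy with the right Host header. A minimal sketch (my-bucket is a placeholder; buckets outside us-east-1 need the regional endpoint):

```nginx
server {
    listen 80;
    location / {
        # Public objects need no Authorization header;
        # the Host header tells S3 which bucket to serve from.
        proxy_set_header Host my-bucket.s3.amazonaws.com;
        proxy_pass https://my-bucket.s3.amazonaws.com;
    }
}
```

With the 8080:80 mapping from the question's docker-compose, http://localhost:8080/pic.png would then map to pic.png in the bucket.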

Reroute specific outgoing url to another url - linux ubuntu 16.04

I am working on building a very old program that has many outdated links to dependencies.
These links might be inside other dependencies downloaded from the web, which takes out the option of changing the URL.
I am able to find all of these dependencies at other links, but changing the paths has become an endless task.
Is it possible to create a list of rules for outgoing URLs that map one to one?
For example:
http://Oldserver.com/this/is/one/OldDependency.jar -> http://Newserver.com/this/is/one/with/other/url/NewDependency.jar
It does not matter what tool is used for the routing, iptables or something else; I am willing to set anything up for this.
This needs to happen at the OS level because the paths are inside tar files.
I was able to get this working by using a local nginx.
The best solution here was a dockerized nginx container.
I will use the example above:
http://Oldserver.com/this/is/one/OldDependency.jar -> http://Newserver.com/this/is/one/with/other/url/NewDependency.jar
Steps:
Edit your hosts file to route the host to your localhost:
$ sudo vim /etc/hosts
Add this line to your hosts file:
127.0.0.1 Oldserver.com
Pull the nginx docker image:
docker pull nginx
Save this nginx configuration file to some path:
events {
    worker_connections 4096; ## Default: 1024
}
http {
    server {
        listen 80;
        server_name Oldserver.com;
        location = /this/is/one/OldDependency.jar {
            proxy_pass http://Newserver.com/this/is/one/with/other/url/NewDependency.jar;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
        location / {
            proxy_pass http://Oldserver.com;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
If you have more paths, add them above the wildcard location /.
The location / block forwards all paths not matched to the original server with the path preserved.
Set permissions on the config:
chmod 600 /some/path/to/nginx.conf
Start up an nginx docker container with the configuration file:
docker run --name proxy -v /some/path/to/nginx.conf:/etc/nginx/nginx.conf:ro -p 80:80 -d nginx
Now, every request to Oldserver.com will go through your nginx proxy and be rerouted if it matches any of your location configurations.
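As a quick sanity check (my own addition, assuming curl is available): you can exercise the mapping without touching /etc/hosts at all, since curl's --resolve flag overrides DNS for a single request:

```shell
# Point Oldserver.com at the local nginx container for this one request;
# -I fetches only the response headers
curl -sI --resolve Oldserver.com:80:127.0.0.1 http://Oldserver.com/this/is/one/OldDependency.jar
```

A 200 response confirms the location block matched before you bake the override into /etc/hosts.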
I had a similar problem to this, needing to rewrite an outgoing URL. In my case it was in a docker container running on Kubernetes.
In my case it was because of this issue: https://stackoverflow.com/a/63712440
The app runtime's (.NET Core 3.1) crypto code only checks the first URL in the list of certificate revocation endpoints. I was doing an SSL client certificate setup (mTLS).
The PKI cert I was issued listed an internal domain address first and second, and then a publicly addressable URL third:
X509v3 CRL Distribution Points:
    Full Name:
        URI:http://some.domain.1/CRL/Cert.crl
        URI:http://some.domain.2/CRL/Cert.crl
        URI:http://x509.domain.com/CRL.crl
Because the domain addresses use a 'CRL' folder in the path but the public URL does not, just mapping the public IP address to the local domain host via /etc/hosts (or k8s hostAliases) didn't work.
To solve this in k8s, I added a sidecar to my pod; here's the details:
First, start with an nginx.conf:
events { }
http {
    server {
        listen 80;
        location / {
            proxy_pass http://127.0.0.1:5555;
        }
        location /CRL/ {
            proxy_pass http://x509.domain.com/;
        }
    }
}
This kind of looks like a reverse proxy, but really it's just an actual proxy. My dotnet app serves on port 5555 inside the pod; 127.0.0.1 will route to the pod, not the nginx container. Note that the second proxy_pass value doesn't include the 'CRL' path; that allows the URL to be rewritten, not just redirected.
I then built an nginx docker image called crl-rewrite-proxy:
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
I then added the image into my pod yaml as a sidecar:
- name: crl-rewrite-proxy
  image: $(image):crl-rewrite-proxy
  ports:
    - containerPort: 80
And then added an alias for the internal domain address, so outgoing calls to it from the app would route back into the pod:
hostAliases:
  - ip: "127.0.0.1"
    hostnames:
      - "some.domain.1"
Lastly I defined an ingress in my k8s yaml, so the aliased calls will be routed to the sidecar:
- kind: Ingress
  apiVersion: networking.k8s.io/v1
  metadata:
    name: $(name)-crl-rewrite-proxy
    namespace: $(namespace)
  spec:
    rules:
      - host: $(name).$(host)
        http:
          paths:
            - path: /CRL
              pathType: ImplementationSpecific
              backend:
                service:
                  name: $(name)
                  port:
                    number: 80
The app runtime makes a call to http://some.domain.1/CRL/Cert.crl; the host alias routes that to 127.0.0.1:80; k8s passes that to the sidecar; the sidecar passes it to nginx; nginx rewrites the host+URL to a public IP on a different path; the resource then gets fetched successfully.
Thanks to thor above for the local setup, I used this to verify it would work locally before doing up the k8s bits.

Nginx & Node inside the same Docker container

I know that this is against the accepted convention, but I need to run Nginx alongside a Node.js server inside the same Docker container. I have no issues spinning up the container and getting Nginx and Node running side by side. Nginx listens on port 443, which is exposed by the container. The Node server listens on port 8080 and is reverse-proxied by Nginx:
location /node/index.js {
    proxy_pass http://127.0.0.1:8080;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X_Forwarded_For $proxy_add_x_forwarded_for;
}
Here is what I find:
Access to resources such as https://example.com/text.txt on the Nginx server is straightforward.
I start up the Node.js server (nodejs index.js & disown), which has been configured to output a few diagnostic messages to a log file. Those messages tell me that the server is working just fine.
I can telnet to port 8080 from inside the Docker container, once again indicating that the Node server is up and running.
Precisely the same configuration, but with Nginx running on a "real" server with Node on the same server, offers access to the Node server with no issues.
However, when I attempt to access the Node server running inside the Nginx Docker container, say https://example.com/node/index.js, I get a 404 error.
Examining my Nginx logs reveals that the request did reach Nginx inside its Docker host. However, examining the Node server log file indicates that the request never got forwarded.
It is not clear to me why this could be happening. From what I can tell, when Nginx is running inside a Docker container it is failing to act as a reverse proxy for Node running inside the same container.
For good measure I tried EXPOSEing the Node port, 8080, and starting up the Docker container with -p 8080:8080 -p 443:443, but that made no difference. I'd be most grateful to anyone who might be able to shed any light on what is going on here.
