Can't redirect traffic to localhost with nginx and docker - node.js

I'm new to Docker and nginx so this may be a simple question but I've been searching through question/answers for a while and haven't found the correct solution.
I'm trying to run an nginx server through docker to reroute all requests to my.example.com/api/... to localhost:3000/api/...
I have the following Dockerfile:
FROM nginx
COPY ./nginx.conf /etc/nginx/conf.d/default.conf
and the nginx.conf file:
server {
    server_name my.example.com;

    location / {
        proxy_pass http://localhost:3000/;
    }
}
When I make the calls to the api on localhost:3000 this works fine, but when I try to run against my.example.com I get a network error that the host isn't found. To be clear, the domain I want to 'redirect' traffic from to localhost is a valid server address, but I want to mock its api for development.

This isn't working because your nginx is proxying the request to localhost, which resolves to the container itself, but your app is running on the host's port 3000 -- outside of the container. Check this article.
Change
proxy_pass http://localhost:3000/;
to
proxy_pass http://host.docker.internal:3000/;
and add 127.0.0.1 example.com my.example.com to /etc/hosts.
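As a sketch (assuming the Node app listens on the host's port 3000), the resulting nginx.conf would look like this; note that on Linux, host.docker.internal only resolves if the container is started with --add-host=host.docker.internal:host-gateway (Docker 20.10+):

```nginx
server {
    server_name my.example.com;

    location / {
        # host.docker.internal resolves to the Docker host,
        # not to the nginx container itself
        proxy_pass http://host.docker.internal:3000/;
    }
}
```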

Related

Nginx nodejs problems

I am trying to use nginx to direct a website hosted on port 8080 to the domain example1.com, and another one on port 8081 that I want to redirect to the domain example2.com.
In the file /etc/nginx/sites-available/default I put this code:
location ~/example1/ {
    proxy_pass http://example1.com;
}
location ~/example2/ {
    proxy_pass http://example2.com;
}
but I couldn't make it work. I am running 2 node.js servers on the ports I mentioned (8080 and 8081).
What am I doing wrong and how do I fix it?
Because the downstream app servers listen on different ports than the one the request comes in on, you need to specify the ports in proxy_pass. So I think:
listen 8080;

location ~/example1/ {
    proxy_pass http://example1.com:8080;
}
location ~/example2/ {
    proxy_pass http://example2.com:8081;
}
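For reference, a more conventional layout for serving two domains from one box is one server block per domain, each proxying to the local port its Node app listens on. A sketch, assuming both Node servers run on the same host as nginx:

```nginx
server {
    listen 80;
    server_name example1.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}

server {
    listen 80;
    server_name example2.com;

    location / {
        proxy_pass http://127.0.0.1:8081;
    }
}
```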

How to setup nginx with docker compose

I have nginx config and docker compose but I am not able to link them together properly.
default.conf:
worker_processes auto;
events { worker_connections 1024; }
http {
    upstream docker_nginx {
        server nginx:6000;
    }
    server {
        location / {
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_pass http://localhost:4000;
            proxy_set_header Host $host;
        }
        location /jwt/user/ {
            proxy_set_header X-Forwarded-For $remote_addr;
            pproxy_pass http://localhost:4000;
            proxy_set_header Host $host;
        }
    }
}
Docker compose:
version: '2'
services:
  reverseproxy:
    image: nginx
    container_name: docker_nginx
    ports:
      - 6000:6000
    volumes:
      - ./services:/default.conf
    restart: always
  jwt:
    build: ./jwt
    image: jwt:1.0
    ports:
      - 4000:4000
    restart: always
    volumes:
      - ./jwt
I want nginx to listen on port 6000 and route requests to a service on port 4000. I tried http://nginx:6000/ and http://localhost:6000/ but I get this:
This site can’t be reached. The web page at http://localhost:6000/ might be temporarily down or it may have moved permanently to a new web address.
ERR_UNSAFE_PORT
Let me give you my repo; you can check my docker-compose.yml, which uses several services proxied through Nginx that work together.
In that link you'll find a default.conf file containing several Nginx upstream configurations; please take a look at how I used the docker-compose service references there.
Inside the client/ directory, I also have another Nginx acting as the web server of a react.js project.
The server/ directory has a Node.js API; it connects to Redis and a Postgres SQL database, also built from docker-compose.yml.
I think this use case can be useful for you and other users!
There are a few things which are misconfigured.
The config file is mapped to the wrong location. It should be ./services/default.conf:/etc/nginx/nginx.conf
You have to explicitly tell nginx (in the conf file) to listen on port 6000, or otherwise map your port 6000 to port 80 inside nginx (the default for nginx): "6000:80"
Yes, and as suggested in the other answer, instead of proxying to localhost:4000 you have to proxy to jwt:4000.
For more on nginx in docker see its docs: https://hub.docker.com/_/nginx/
Should - ./services:/default.conf be something like - ./default.conf:/etc/nginx/sites-enabled/default.conf?
Also try replacing proxy_pass http://localhost:4000; with proxy_pass http://jwt:4000; and you probably don't need the upstream section.
The proxy_pass target is the address of the proxied service from within the network created by docker compose, and each service's hostname is the same as its name in docker-compose.yml.
I have never used upstream, but if you want to use it, I think you have to put this:
upstream docker-jwt {
    server jwt:4000;
}
# and
proxy_pass http://docker-jwt;
but as I put above, I would skip that and just use http://jwt:4000 as the proxied address.
You also seem to have a typo here: pproxy_pass http://localhost:4000;
But if you leave the typo in and look at the docker compose logs, you will hopefully see an error from nginx, which means that you successfully mapped the config file.
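Putting the suggestions together, a minimal sketch of the two files (an illustration, assuming the conf is at ./services/default.conf and the jwt service serves plain HTTP on port 4000):

```nginx
# ./services/default.conf -- mounted over /etc/nginx/nginx.conf
worker_processes auto;
events { worker_connections 1024; }
http {
    server {
        listen 6000;

        location / {
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header Host $host;
            # 'jwt' is the compose service name, resolvable on the compose network
            proxy_pass http://jwt:4000;
        }
    }
}
```

```yaml
version: '2'
services:
  reverseproxy:
    image: nginx
    ports:
      # note: Chrome refuses to connect to port 6000 (ERR_UNSAFE_PORT),
      # so test with curl or map a different external port
      - "6000:6000"
    volumes:
      - ./services/default.conf:/etc/nginx/nginx.conf:ro
  jwt:
    build: ./jwt
    ports:
      - "4000:4000"
```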

How to connect nginx to local mongodb

I've got nginx to run my (node.js/react) application on the server. But I can't seem to connect to the database.
In the nginx.conf file I've added the following inside http.
http {
    ...
    server {
        listen 80;
        server_name localhost;
        ...
    }
    ...
}
And above the http section I have the following,
stream {
    server {
        listen 4000;
        proxy_connect_timeout 1s;
        proxy_timeout 3s;
        proxy_pass stream_mongo_backend;
    }

    upstream stream_mongo_backend {
        server 127.0.0.1:27017;
    }
}
I start the nginx server, the application runs on localhost, opens up the login page but I can't login because it's still not connected to the database (mongodb).
I'm not sure if I've got my port numbers wrong or if I'm missing some configuration line inside nginx.conf.
EDIT: Ok, I had it wrong before. I wasn't supposed to connect to mongodb at this point. I was supposed to connect to the backend server of my application which would run at 4000. So now I've added a new location for /api/ inside http and proxied all requests to 4000. I still have one question though. I have to run my backend server separately for this to work. For the frontend I've created a folder and put all my build files in there so nginx starts up the webserver from there. Doing the same for the backend did not start up the server. Is there a way to get nginx to start the backend server as well?
Also can I get the frontend to run directly without the build files ? Like node would with npm start?
The port number is right. Try opening a mongo shell and see if you are able to access a mongo instance; if not, you will need to run sudo service mongodb start to start it up.
Guess it's kinda late, but you don't need to set up nginx for your backend to connect to a local mongodb.
And you need to run the frontend and backend servers first, by yarn start, node run or something like that, if you want to run them without build files.
And then bypass the calls from port 80 to the local servers.
For example, your FE runs at port 3000, BE runs at port 5000.
Then your nginx should be:
http {
    ...
    server {
        listen 80;
        server_name localhost;

        location /api/ {
            proxy_pass http://localhost:5000;
        }
        location / {
            proxy_pass http://localhost:3000;
        }
        ...
    }
    ...
}

Reroute specific outgoing url to another url - linux ubuntu 16.04

I am working on building a very old program that has many outdated links to dependencies.
These links might be in other dependencies downloaded from the web which takes out the option to change the url
I am able to find all of these dependencies with other links, but changing the paths has become an endless task.
Is it possible to create a list of rules for outgoing urls that map one to one?
For example:
http://Oldserver.com/this/is/one/OldDependency.jar -> http://Newserver.com/this/is/one/with/other/url/NewDependency.jar
It does not matter what tool is used for the routing, iptables or something else. I am willing to set anything up for this.
This needs to happen on the OS level because the paths are inside tar files
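Conceptually, the rule list is just a one-to-one lookup from old URL to new URL, which the nginx answer below encodes as location blocks. A hypothetical Python sketch of that mapping (the function name is illustrative; the URLs are the ones from the example):

```python
# One-to-one rewrite rules for outgoing dependency URLs.
REWRITE_RULES = {
    "http://Oldserver.com/this/is/one/OldDependency.jar":
        "http://Newserver.com/this/is/one/with/other/url/NewDependency.jar",
}

def rewrite(url: str) -> str:
    """Return the replacement URL if a rule matches, otherwise the original."""
    return REWRITE_RULES.get(url, url)

print(rewrite("http://Oldserver.com/this/is/one/OldDependency.jar"))
# -> http://Newserver.com/this/is/one/with/other/url/NewDependency.jar
```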
I was able to get this working by using a local nginx.
The best solution here was a dockerized nginx container.
I will use the example above
http://Oldserver.com/this/is/one/OldDependency.jar -> http://Newserver.com/this/is/one/with/other/url/NewDependency.jar
Steps:
Edit your host file to route the host to your localhost
$ sudo vim /etc/hosts
Add this line to your hosts file
127.0.0.1 Oldserver.com
Pull the nginx docker container
docker pull nginx
Save this nginx configuration file to some path
events {
    worker_connections 4096;  ## Default: 1024
}

http {
    server {
        listen 80;
        server_name Oldserver.com;

        location = /this/is/one/OldDependency.jar {
            proxy_pass http://Newserver.com/this/is/one/with/other/url/NewDependency.jar;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        location / {
            proxy_pass http://Oldserver.com;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
If you have more paths, add them above the wildcard location /.
The location / forwards all paths not matched to the original server with the path preserved.
Set permissions on config
chmod 600 /some/path/to/nginx.conf
Start up an nginx docker container with the configuration file
docker run --name proxy -v /some/path/to/nginx.conf:/etc/nginx/nginx.conf:ro -p 80:80 -d nginx
Now, every request to the Oldserver.com will go through your nginx proxy and reroute if it matches any of your location configurations
I had a similar problem to this, needing to rewrite an outgoing url. In my case it was in a docker container running on kubernetes.
In my case it was because of this issue: https://stackoverflow.com/a/63712440
The app runtime (.net core 3.1) crypto code only checks the first url in the list of certificate revocation endpoints. I was doing an SSL client certificate setup (mTLS).
The PKI cert I was issued contained an internal domain address first and second, and then a publicly addressable url third:
X509v3 CRL Distribution Points:
    Full Name:
        URI:http://some.domain.1/CRL/Cert.crl
        URI:http://some.domain.2/CRL/Cert.crl
        URI:http://x509.domain.com/CRL.crl
Because the domain addresses use a 'CRL' folder in the path, but the public url does not, just mapping the public IP address to the local domain host via /etc/hosts (or k8s hostAliases) didn't work.
To solve this in k8s, I added a sidecar to my pod; here's the details:
First, start with an nginx.conf:
events { }

http {
    server {
        listen 80;

        location / {
            proxy_pass http://127.0.0.1:5555;
        }

        location /CRL/ {
            proxy_pass http://x509.domain.com/;
        }
    }
}
This kind of looks like a reverse proxy, but really it's just an actual proxy. My dotnet app serves on port 5555 inside the pod; 127.0.0.1 will route to the pod, not the nginx container. Note that the second proxy_pass value doesn't include the 'CRL' path; that allows the url to be rewritten, not just redirected.
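The prefix replacement that makes this work -- when proxy_pass carries a URI part, nginx substitutes it for the matched location prefix -- can be sketched in Python (an illustration, not nginx's actual implementation):

```python
def proxy_pass_target(request_path: str, location: str, uri: str) -> str:
    """Sketch of nginx prefix replacement: the location prefix of the
    request path is replaced by the proxy_pass URI part."""
    assert request_path.startswith(location)
    return uri + request_path[len(location):]

# The 'CRL' folder is dropped from the rewritten path, not just redirected:
print(proxy_pass_target("/CRL/Cert.crl", "/CRL/", "http://x509.domain.com/"))
# -> http://x509.domain.com/Cert.crl
```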
I then built an nginx docker image called crl-rewrite-proxy:
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
I then added the image into my pod yaml as a sidecar:
- name: crl-rewrite-proxy
  image: $(image):crl-rewrite-proxy
  ports:
    - containerPort: 80
And then added an alias for the internal domain address, so outgoing calls to it from the app would route back into the pod:
hostAliases:
  - ip: "127.0.0.1"
    hostnames:
      - "some.domain.1"
Lastly I defined an ingress in my k8s yaml, so the aliased calls will be routed to the sidecar:
- kind: Ingress
  apiVersion: networking.k8s.io/v1
  metadata:
    name: $(name)-crl-rewrite-proxy
    namespace: $(namespace)
  spec:
    rules:
      - host: $(name).$(host)
        http:
          paths:
            - path: /CRL
              pathType: ImplementationSpecific
              backend:
                service:
                  name: $(name)
                  port:
                    number: 80
The app runtime makes a call to http://some.domain.1/CRL/Cert.crl; the host alias routes that to 127.0.0.1:80; k8s passes that to the sidecar; the sidecar passes it to nginx; nginx rewrites the host and url to a public address on a different path; the resource then gets fetched successfully.
Thanks to thor above for the local setup, I used this to verify it would work locally before doing up the k8s bits.

nginx as reverse proxy for running apache

I have some trouble configuring nginx as a reverse proxy.
It is worth saying that I have a VPS with kloxo and webmin installed, running multiple domains.
I have installed nginx via EPEL and YUM, and this is my /etc/nginx/nginx.conf file, given in this link.
I changed the apache port to 8080 and restarted the service to apply the change, then started nginx, and there is a problem.
When I try reaching any of the domains on my centos vps, I get the APACHE START PAGE (WELCOME PAGE), and when I enter my VPS IP in the browser, like x.x.x.x, I get the NGINX START PAGE (WELCOME PAGE).
I want nginx to serve my static files and redirect dynamic requests to Apache for better performance.
There is an example from the book Nginx Http Server, page 235.
server {
    server_name .example.com;
    root /home/example.com/www;

    location ~* \.php.$ {
        # Proxy all requests with an URI ending with .php*
        # (includes PHP, PHP3, PHP4, PHP5...)
        proxy_pass http://127.0.0.1:8080;
    }

    location / {
        # Your other options here for static content
        # for example cache control, alias...
        expires 30d;
    }
}
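One caveat with the quoted pattern: `\.php.$` requires exactly one character after php, so a plain .php URI does not match it; `\.php.?$` (optional trailing character) matches .php as well as .php3/.php5, which is what the comment seems to intend. A quick Python check of the case-insensitive matching that `~*` performs:

```python
import re

# `.?` makes the trailing character optional, so plain `.php` matches too.
pattern = re.compile(r"\.php.?$", re.IGNORECASE)

for uri in ["/index.php", "/index.PHP5", "/app.php3", "/style.css"]:
    print(uri, "->", bool(pattern.search(uri)))
```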
