How do I access a server on localhost with nginx docker container? - node.js

I'm trying to use a dockerized version of nginx as a proxy server for my Node (ExpressJS) application. Without any nginx configuration, just publishing port 80 for the container, I am able to see the default nginx landing page, so I know that much is working.
Now I mount my sites-enabled directory, which contains the configuration for proxy_pass localhost:3000. I have my Node application running locally (not in any Docker container) and I can access it via port 3000 (i.e. localhost:3000). However, I would assume that with the nginx container running, mapped to port 80, and proxying localhost:3000, I would be able to see my very simple (hello world) application. Instead I receive a 502.
Do I need to pass something into Docker? Is this likely an nginx configuration error? Here is my nginx configuration:
server {
    listen 0.0.0.0:80;
    server_name localhost;

    location / {
        proxy_pass http://localhost:3000;
    }
}
I have tried the approach from this question, but it did not seem to help. That is, unless I'm doing something completely wrong.
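For reference, an nginx container in a setup like this is typically started along these lines; the host path shown is illustrative, not necessarily the one used above:

docker run -d -p 80:80 \
    -v $(pwd)/sites-enabled:/etc/nginx/conf.d:ro \
    nginx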

If you're using Docker for Mac 18.03 or newer, it automatically creates a special DNS name, host.docker.internal, that resolves to the host's internal IP. You can then use that DNS name from inside a container as a stand-in for localhost when proxying services running on the host machine.
For example, an nginx config file:
server {
    listen 0.0.0.0:80;
    server_name localhost;

    location / {
        proxy_pass http://host.docker.internal:3000;
    }
}

You can get your current IP address as shown here:
ifconfig en0 | grep inet | grep -v inet6 | awk '{print $2}'
Then you can use the --add-host flag with docker run:
docker run --add-host localnode:$(ifconfig en0 | grep inet | grep -v inet6 | awk '{print $2}') ...
Then, in your proxy_pass, use localnode instead of localhost.
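The location block would then look something like this (a sketch, assuming the app still listens on port 3000):

location / {
    proxy_pass http://localnode:3000;
}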

Yes. Docker needs to know about your host machine. You can set an alias for it with the --add-host switch. On a *nix box, to create an alias to the name "localbox", this would be:
docker run --add-host=localbox:<host_name> my_repo/my_image
On boot2docker it would be:
docker run --add-host=localbox:192.168.59.3 my_repo/my_image
where you should replace "192.168.59.3" with whatever boot2docker ip returns.
Then, always access your host machine through the alias localbox, so just change your nginx config to:
location / {
    proxy_pass http://localbox:3000;
}

On Linux, this works for me:
In the docker-compose.yml, mount an entrypoint script into the nginx container:
nginx:
  image: nginx:1.19.2
  # ...
  volumes:
    - ./nginx-entrypoint.sh:/docker-entrypoint.d/nginx-entrypoint.sh:ro
The entrypoint script maps the host's gateway address to a name inside the container:
#!/bin/sh
# Resolve the default gateway (the Docker host) and add a hosts entry for it.
apt update
apt install iproute2 -y
printf '%s\tdocker.host.internal\n' "$(ip route | awk '/default/ { print $3 }')" >> /etc/hosts
Then, instead of using localhost inside the container, you can use docker.host.internal.
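The proxy section of the nginx config then reads something like this (a sketch, assuming the app listens on port 3000 on the host):

location / {
    proxy_pass http://docker.host.internal:3000;
}

On Docker Engine 20.10 or newer you can skip the script entirely and have the engine add an equivalent entry via extra_hosts: ["host.docker.internal:host-gateway"] on the compose service.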

I had the same problem. Fixed it by using the local IP address of the Docker host instead of localhost.
So if the local IP address of your Docker host in your LAN is 192.168.2.2:
location / {
    proxy_pass http://192.168.2.2:3000;
}
Of course this solution only works well if you have assigned a static IP to your Docker host.
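If you are not sure what that address is, on a Linux host something like this prints the primary LAN address (interface names and tooling vary by distribution):

hostname -I | awk '{print $1}'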

And finally, if you are using nginx as a reverse proxy for multiple services, you can spin all of that up with docker-compose. Make sure to publish ports "80:80" only on the nginx service. For the other services, expose only the service port without mapping it to the host, like so:
web:
  # ...
  expose:
    - 8080
nginx:
  # ...
  ports:
    - "80:80"
and then, in the nginx configuration, use proxy_pass http://service-name:port.
You don't need an upstream block for the app at all.
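A fuller, hypothetical sketch of that layout, with the service name web and the build path chosen for illustration. docker-compose.yml:

services:
  web:
    build: ./web            # assumed build context for the app
    expose:
      - 8080
  nginx:
    image: nginx
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    ports:
      - "80:80"
    depends_on:
      - web

and the mounted nginx config:

server {
    listen 80;

    location / {
        # the Compose service name resolves to the web container on the shared network
        proxy_pass http://web:8080;
    }
}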

Related

Can't redirect traffic to localhost with nginx and docker

I'm new to Docker and nginx so this may be a simple question but I've been searching through question/answers for a while and haven't found the correct solution.
I'm trying to run an nginx server through docker to reroute all requests to my.example.com/api/... to localhost:3000/api/...
I have the following Dockerfile:
FROM nginx
COPY ./nginx.conf /etc/nginx/conf.d/default.conf
and the nginx.conf file:
server {
    server_name my.example.com;

    location / {
        proxy_pass http://localhost:3000/;
    }
}
When I make the calls to the API on localhost:3000 this works fine, but when I try to run against my.example.com I get a network error that the host isn't found. To be clear, the domain I want to 'redirect' traffic from (to localhost) is a valid server address, but I want to mock its API for development.
This isn't working because nginx is proxying the request to localhost, which is the container itself, but your app is running on the host's port 3000, outside the container. Check this article.
Change
proxy_pass http://localhost:3000/;
to
proxy_pass http://host.docker.internal:3000/;
and add 127.0.0.1 example.com my.example.com to /etc/hosts.
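Put together, the mocked setup could look roughly like this (a sketch; on Linux the name has to be added manually, e.g. with --add-host=host.docker.internal:host-gateway on Docker 20.10+):

server {
    server_name my.example.com;

    location / {
        # host.docker.internal resolves to the host from inside the container
        proxy_pass http://host.docker.internal:3000/;
    }
}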

Two Nginx server blocks, only one works with React app

I want to host several React apps on my Linux (CentOS) server with nginx.
Currently I have two server blocks in nginx.conf.
The first server block is where I proxy different requests to different servers.
The second server block is my React app.
I couldn't get my React app hosted by nginx.
Block 1
server {
    listen 80;
    server_name localhost;
    root /usr/share/nginx/html;
    ...
    location /app1/ {
        proxy_pass http://localhost:3010;
    }
    ...
}
Block 2
server {
    listen 3010;
    listen [::]:3010;
    server_name localhost;
    root /home/administrator/Projects/app1/build;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ =404;
    }
}
When I use telnet to check whether the servers are listening, only port 80 is listening.
No server is listening on port 3010.
How should I change my nginx configuration to make it work?
Update
I checked the nginx error log and I got:
1862#0: bind() to 0.0.0.0:3010 failed (13: Permission denied)
I've searched on it, and there are a lot of answers about non-root users not being able to listen on ports below 1024.
But the server is trying to bind to 3010.
Do I still have to run nginx as root?
This is probably related to the SELinux security policy. If you run the command below, you can see that http is only allowed to bind to a specific list of ports:
[root@localhost]# semanage port -l | grep http_port_t
http_port_t tcp 80, 81, 443, 488, 8008, 8009, 8443, 9000
You can either use one of the listed ports or add your custom port using the command below:
$ semanage port -a -t http_port_t -p tcp 3010
If the semanage command is not installed, you can run yum provides semanage to identify which package to install.
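Once the port has been added, the change can be verified roughly like this (3010 should now appear in the list, and nginx should bind after a restart):

$ semanage port -l | grep http_port_t
$ systemctl restart nginx
$ ss -lnt | grep 3010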

Nginx Load Balancer High availability running in Docker

I am running an nginx load balancer container for my backend web servers.
Since I have only one container running as the nginx load balancer, if that container dies or crashes, clients cannot reach the web servers.
Below is the nginx.conf:
events {}

http {
    upstream backend {
        server 1.18.0.2;
        server 1.18.0.3;
    }

    # This server accepts all traffic to port 80 and passes it to the upstream.
    # Notice that the upstream name and the proxy_pass need to match.
    server {
        listen 80;

        location / {
            proxy_pass http://backend;
        }
    }
}
Below is the Dockerfile:
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
I am running the nginx container like below:
docker run -d -it -p 8082:80 nginx-ls
I am accessing the above server at http://server-ip:8082
The container may crash or die at any time, and in that case my application is not reachable. So I tried to run another container like below, and I get the error below, since I am using the same port; obviously we cannot reuse the same port on the same host.
docker run -d -it -p 8082:80 nginx-ls
06a20239bd303fab2bbe161255fadffd5b468098424d652d1292974b1bcc71f8
docker: Error response from daemon: driver failed programming external connectivity on endpoint
suspicious_darwin (58eeb43d88510e4f67f618aaa2ba06ceaaa44db3ccfb0f7335a739206e12a366): Bind for
0.0.0.0:8082 failed: port is already allocated.
So I ran it on a different port, and it works fine:
docker run -d -it -p 8083:80 nginx-ls
But how do we tell/configure clients to use the container on port 8083 when the container on port 8082 is down?
Or is there another, better method to achieve an nginx load balancer with high availability?
Note: For some reasons, I cannot use docker-compose.

How to set up an Nginx Docker reverse proxy

I am trying to use an nginx reverse proxy container for my web application (another Docker container), which runs on a non-standard port. Unfortunately I cannot edit my web application container, as it is developed by a vendor, so I have a plain request: I need to set up nginx as a frontend on 80/443 and forward all requests to 10.0.0.0:10101 (the web app container).
I have tried jwilder/nginx-proxy and the default Docker nginx container, but was not able to get the right configuration; any lead would be great.
At the moment I haven't shared any conf files; I can share them on demand. Here are the environment details:
OS - Ubuntu
Azure
Use the nginx proxy_pass feature.
Assuming you have both containers linked and the web app container's name is webapp, use this configuration on the nginx container:
upstream backend {
    server webapp:10101;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
    }
}
NOTE: I am skipping some configuration, as this is just an example.
Put the configuration in an nginx.conf file and then deploy the container like this:
docker run -d -v $(pwd)/nginx.conf:/etc/nginx/nginx.conf -p 80:80 nginx
Then you'll be able to access your webapp on http://localhost
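The "linked" assumption above can be satisfied with a user-defined bridge network; a rough sketch (the image name vendor/webapp is made up for illustration):

docker network create appnet
docker run -d --name webapp --network appnet vendor/webapp
docker run -d --network appnet -p 80:80 \
    -v $(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro nginx

On a shared network the name webapp resolves to the application container, which is what the upstream block above relies on.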

linux systemd service on port 80

I'm trying to create a systemd service on CentOS 7:
[Unit]
Description=Node.js Weeki Server
Requires=After=mongod.service
[Service]
ExecStart=/usr/bin/node /var/node/myapp/bin/www
Restart=always
StandardOutput=syslog # Output to syslog
StandardError=syslog # Output to syslog
SyslogIdentifier=nodejs-weeki
User=weeki
Environment=NODE_ENV=production PORT=80
[Install]
WantedBy=multi-user.target
When I use port 8080 the service starts successfully, but when I change the port to 80, the service fails to start.
I tried to open the firewall with the command:
firewall-cmd --zone=public --add-port=80/tcp --permanent
But it is still not working.
See the good advice that you got in the comments from arkascha.
First of all: what's the error?
To test whether the problem is that the user cannot bind to low ports, try using ports like 81, 82, 83, etc. If you still cannot bind to those ports, then you likely don't have the permission. If you can, then it's not about permissions and the port is already used by some other process.
To see if that user can open a given port, try running netcat:
nc -l 80
where 80 is the port number. Try low ports like 80, 81, 82 and high ports like 8080, 8081, 8082.
To see if anything is listening to that port try running:
curl http://localhost:80/
or:
nc localhost 80
To see open ports on your system run:
netstat -lnt
To see if other instances of your program are running, try:
ps aux | grep node
ps aux | grep npm
ps aux | grep server.js
If all else fails, you can restart and see if the problem remains:
sudo shutdown -r now
That should give you a clean state with no old processes hanging around.
Update
Here is what you can do to listen on port 80 without running as root.
There are a few things that you can do:
Drop privileges
You can start as root and drop privileges as soon as you open a port:
app.listen(80, function () {
    try {
        // Drop the group first, then the user - once the uid is no longer root,
        // changing the gid would fail.
        process.setgid('weeki');
        process.setuid('weeki');
        console.log('Listening on port 80');
        console.log('User:', process.getuid(), 'Group:', process.getgid());
    } catch (e) {
        console.log('Cannot drop privileges');
        process.exit(1);
    }
});
Pros: You don't need to use anything other than your Node program.
Cons: You need to start as root.
See:
https://nodejs.org/api/process.html#process_process_setuid_id
https://nodejs.org/api/process.html#process_process_setgid_id
Reverse proxy
Your Node app can listen on a high port like 3000, and you run nginx or another web server that listens on port 80 and proxies requests to port 3000.
Example nginx config:
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Pros: You don't need to start as root. You can host multiple domains on the same server. You can serve static content directly by nginx without hitting your Node app.
Cons: You need to install and run another software (like nginx).
Route tables
You can redirect the incoming traffic on port 80 to port 3000 with iptables:
iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3000
(you need to run it as root)
Pros: No new software to install. No need to run the Node app as root.
Cons: Static content is served by your Node app. Hosting more than one app per server is not practical.
See:
https://help.ubuntu.com/community/IptablesHowTo
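To confirm the redirect rule is in place after adding it, listing the nat table should show the REDIRECT entry:

iptables -t nat -L PREROUTING -n --line-numbers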
Allow low port for non-root
This is not always available, but it is also an option. You can use the CAP_NET_BIND_SERVICE capability of the Linux kernel:
CAP_NET_BIND_SERVICE
Bind a socket to Internet domain privileged ports (port
numbers less than 1024).
Pros: No need to run other software. No need to start Node app as root. No need to mess with route tables.
Cons: Not practical to host more than one app per server. Needs using capabilities that may not be available on every system.
See:
http://man7.org/linux/man-pages/man7/capabilities.7.html
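A common way to grant that capability is setcap from libcap (a sketch; the symlink is resolved first because the capability is set on the actual binary, and it is lost whenever the binary is replaced, e.g. on upgrade):

sudo setcap 'cap_net_bind_service=+ep' "$(readlink -f "$(which node)")"

After that, the Node app can bind to port 80 while running as the weeki user.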
A user needs root privileges to open ports below 1024.
