We have an NGINX and Gunicorn setup that has worked for years, but things have gone flaky all of a sudden on one of our Linux servers.
We have a number of Flask services that are run via Gunicorn. We also have an NGINX upstream defined for Gunicorn, which is running on port 8087.
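For reference, the upstream definition looks roughly like this (the names are illustrative; only the port is from our setup):

upstream gunicorn_app {
    server 127.0.0.1:8087;
}

server {
    listen 80;
    location / {
        proxy_pass http://gunicorn_app;
    }
}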
Whenever we try to hit one of our Flask services after Gunicorn starts/restarts, the HTTP request just hangs for a couple of hours before the issue magically resolves itself.
The output from netstat -anp | grep 8087 includes the following connections:
tcp 2 0 0.0.0.0:8087 0.0.0.0:* LISTEN 13175/python
tcp 0 0 127.0.0.1:54726 127.0.0.1:8087 ESTABLISHED 12978/nginx
If we stop NGINX altogether, the NGINX connection above goes away and we're able to make a speedy request to our Flask service.
Alternatively, if instead of stopping NGINX altogether, we run ps -ef | grep nginx and then kill the process that reads "nginx: worker process", NGINX re-spawns the worker process and we're then able to make a request to our Flask service.
I currently have two docker containers running:
ab1ae510f069 471b8de074c4 "docker-entrypoint.s…" 8 minutes ago Up 8 minutes 0.0.0.0:3001->3001/tcp hopeful_bassi
2d4797b77fbf 5985005576a6 "nginx -g 'daemon of…" 25 minutes ago Up 25 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp wizardly_cori
One is my client and the other (port 3001) is my server.
The issue I'm facing is I've just added SSL to my site, but now I can't access the server. My theory is that the server needs both port 443 and port 3001 open, but I can't have port 443 open on both containers. I can run the server via HTTP locally, so I think that also points to this conclusion.
Is there anything I can do to have both using https? The client won't talk to the server if the server uses http (for obvious reasons).
Edit:
I'm now not sure if it has to do with port 443, as I killed my client and tried to run just the server, but it still gave me connection refused:
docker run -dit --restart unless-stopped -p 3001:3001 -p 443:443 471b8de074c4
If you publish port 443 for a Docker container, a Docker-managed tool (the userland docker-proxy) is started. This (in any case, highly sub-optimal) tool forwards TCP requests arriving at your host's port 443 to the container.
If you want two containers to use port 443, Docker would have to start this port forwarder twice on the same port. As your docker output shows, that can happen only once. By digging (deeply) into the (nearly non-existent) Docker logs, you may also find the relevant error message.
The problem you've found is not Docker-specific; it is the same problem you would face in a container-less environment: you simply can't start multiple service processes listening on the same TCP port.
The solution also comes from the world before containers.
You need a central proxy service. It would listen on your port 443 and forward requests, depending on the requested virtual host, to the corresponding container.
Dig around the Docker registries; it is nearly certain that such an HTTPS reverse proxy image already exists. This third container will forward the requests wherever you want. Of course, you will need to configure it.
From that point on, you don't even need HTTPS inside your containers (although you can keep it if you want), which helps a lot in production, correctly certified SSL environments. Only your proxy needs the certificates. So:
               /---> container A (tcp:80)
tcp:443 -- proxy
               \---> container B (tcp:80)
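A minimal sketch of such a proxy, using NGINX (the hostnames, certificate paths, and container names are hypothetical; one server block per virtual host):

server {
    listen 443 ssl;
    server_name a.example.com;

    ssl_certificate     /etc/nginx/certs/a.example.com.crt;
    ssl_certificate_key /etc/nginx/certs/a.example.com.key;

    location / {
        # plain HTTP to the container; only the proxy terminates TLS
        proxy_pass http://container-a:80;
        proxy_set_header Host $host;
    }
}

A second server block with a different server_name and proxy_pass target would route b.example.com to container B.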
Apache is giving an error at boot (or when I try to start the service manually with systemctl):
make_sock: could not bind to address [::]:7301 # virtual host port
But it starts fine with the following command:
httpd -k start
3 things come to mind:
The port, 7301, is already in use by another process. Try netstat -apn | grep 7301 to see if that's the case, and if so, change the Apache port or kill the other process.
You have two conflicting Listen directives in your Apache conf file. For example, Listen *:7301 and Listen 1.2.3.4:7301 together would cause that error; please remove one of them.
You have configured Apache to use an interface that is not active or does not have IPv6 enabled.
Edit:
You have SELinux active on your host, and it's preventing Apache from using a non-default port (anything other than standard ports like 80).
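If SELinux is the culprit, the fix is to whitelist the port for Apache; a sketch (http_port_t is the standard SELinux port type for Apache, 7301 is the port from the question):

# list the ports Apache is currently allowed to bind to
semanage port -l | grep http_port_t
# allow Apache to bind to TCP port 7301
semanage port -a -t http_port_t -p tcp 7301

This would also explain why httpd -k start from your shell works while systemctl fails: the two can start httpd under different SELinux contexts.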
I have an app set to listen on port 66.
First I tried to run it with sudo node myapp.js. I was able to access it at the correct URL (ip:66). Then I stopped the app (Ctrl+C) and started it with PM2: sudo pm2 start app.js. The status is online; however, that same URL is now inaccessible.
Running sudo pm2 logs while the app is started with PM2 gives me the error EACCES for port 66. No one else is running the app, and I am sure I am using only one console and killing the node process before starting it with PM2.
PM2 was installed globally. The server is Debian Stretch, and the Node.js version is 8.x.
I am logged in as a normal user and using sudo to run the app.
On Linux systems, normal (non-root) users are not allowed to listen on ports below 1024. There are several ways around this.
You can change this rule to allow non-root users to open such ports, but this is a security risk and is not recommended, so I won't add a link to that solution.
You can also listen on a port greater than 1024 and then use a forwarding rule in your firewall to route port 66 to the port you opened (see the sketch after the link):
https://www.systutorials.com/816/port-forwarding-using-iptables/
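A sketch of such a rule with iptables, assuming the app was changed to listen on the hypothetical port 3000:

# redirect TCP traffic arriving on port 66 to port 3000
iptables -t nat -A PREROUTING -p tcp --dport 66 -j REDIRECT --to-ports 3000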
My (and PM2's) preferred solution is to listen on a port greater than 1024 and use a reverse proxy like NGINX to route requests to the apps running on that server (a sketch follows the link):
http://pm2.keymetrics.io/docs/tutorials/pm2-nginx-production-setup
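A minimal sketch of that NGINX setup, again assuming the app now listens on the hypothetical port 3000 (NGINX's master process runs as root, so it can bind port 66 itself):

server {
    listen 66;
    location / {
        proxy_pass http://127.0.0.1:3000;  # the app on its unprivileged port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}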
I'm moving my NodeJS application to docker, and the last problem that I have encountered is debugging the application.
My setup: OS X, boot2docker, Docker (a CentOS-based image), WebStorm as IDE and debugger.
Here's what I have by now:
Forward 5858 from docker to boot2docker:
docker run -p 5858:5858 ...
Forward 5858 port from boot2docker to host:
VBoxManage controlvm boot2docker-vm natpf1 "boot2docker5858,tcp,127.0.0.1,5858,,5858"
This same setup works to forward my application ports to the host machine.
Port 5858, on the other hand, doesn't seem to respond when accessed from outside the Docker container.
Inside the docker container it works just fine.
Any idea what can be done to make this work?
Well, I have finally figured it out.
As it turns out, the Node debugger listens only on 127.0.0.1:5858.
To make it reachable on all interfaces, I installed HAProxy in the container, which forwards requests from 0.0.0.0:5859 to 127.0.0.1:5858.
Here's the HAProxy configuration, if anybody ever needs it:
listen l1 0.0.0.0:5859
    mode tcp
    timeout client 180000
    timeout server 180000
    timeout connect 4000
    server srv1 127.0.0.1:5858
And then add to your Dockerfile. Note that HAProxy has to be started when the container runs, not at image build time (a RUN instruction only executes during the build), so launch it together with the app:

COPY haproxy.conf /haproxy.conf
# placeholder start command; substitute your app's actual entrypoint
CMD haproxy -D -f /haproxy.conf && node --debug app.js
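For what it's worth, newer Node versions can bind the debugger to all interfaces directly, which avoids the extra proxy; a sketch (app.js is a placeholder, 9229 is the default inspector port):

node --inspect=0.0.0.0:9229 app.js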
I have a test app using Express that crashes on server.listen(80) with ERROR: listen EADDRINUSE. I tried to kill all Node processes with killall -9 node, but there were none. I also have Apache running on the same server, but I have two IPs and have configured Apache to serve only one of them, and yesterday everything worked fine. Some process is blocking port 80 on the IP reserved for Node, and it's not Node. What should I do?
UPDATE
That was my own lame mistake. I defined node_ip and node_port but accidentally omitted node_ip in server.listen, so the server tried to bind 0.0.0.0:80, which collides with Apache already bound to port 80 on the other IP.
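In other words, a sketch of the fix (the IP is hypothetical):

const express = require('express');
const app = express();

const node_ip = '203.0.113.10';  // the IP reserved for Node (hypothetical)
const node_port = 80;

// Passing only the port binds 0.0.0.0:80, which collides with Apache on the
// other IP; passing the host as well restricts the bind to Node's own IP.
app.listen(node_port, node_ip, () => {
  console.log(`listening on ${node_ip}:${node_port}`);
});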
You could use
lsof -i :80
to see what process is running on that port.
If you want to check it with netstat instead, e.g.:
netstat -tulpn | grep :80
You can use tcpkill, e.g.:
tcpkill -i eth0 port 80
Note that tcpkill cuts active connections on that port; it does not stop the process listening on it.