How to make Flask app use public IP on Azure VM [duplicate]

Setting up Flask with uWSGI and Nginx can be difficult. I tried following this DigitalOcean tutorial and still had trouble. Even with buildout scripts it takes time, and I need to write instructions to follow next time.
If I don't expect a lot of traffic, or the app is private, does it make sense to run it without uWSGI? Flask can listen to a port. Can Nginx just forward requests?
Does it make sense to not use Nginx either, and just run the bare Flask app on a port?

When you "run Flask" you are actually running Werkzeug's development WSGI server, and passing your Flask app as the WSGI callable.
The development server is not intended for use in production. It is not designed to be particularly efficient, stable, or secure. It does not support all the possible features of an HTTP server.
Replace the Werkzeug dev server with a production-ready WSGI server such as Gunicorn or uWSGI when moving to production, no matter where the app will be available.
The answer is similar for "should I use a web server". WSGI servers happen to have HTTP servers but they will not be as good as a dedicated production HTTP server (Nginx, Apache, etc.).
Flask documents how to deploy in various ways. Many hosting providers also have documentation about deploying Python or Flask.
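As a minimal sketch of that switch (the module name app.py, the worker count, and the port are assumptions, not anything Flask or Gunicorn prescribes), the same app object the dev server runs can be handed to Gunicorn:
# app.py -- a Flask app exposed as a module-level WSGI callable
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "hello"

# Development: `flask run` (the Werkzeug dev server).
# Production: point a WSGI server at the same callable instead, e.g.
#   gunicorn --workers 4 --bind 127.0.0.1:8000 app:app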

First create the app:
import flask
app = flask.Flask(__name__)
Then set up the routes, and then when you want to start the app:
import gevent.pywsgi

host, port = "127.0.0.1", 5000  # whatever you want the app to listen on
app_server = gevent.pywsgi.WSGIServer((host, port), app)
app_server.serve_forever()
Run this script to start the application, rather than having to tell gunicorn or uWSGI to run it.
I wanted the utility of Flask to build a web application, but had trouble composing it with other elements. I eventually found that gevent.pywsgi.WSGIServer was what I needed. After the call to app_server.serve_forever(), call app_server.stop() when you want to exit the application.
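For instance, a minimal sketch of that shutdown pattern, wrapping the serve_forever() call from the snippet above:
try:
    # Blocks here until the server is stopped.
    app_server.serve_forever()
except KeyboardInterrupt:
    # Ctrl+C (or a signal handler calling stop()) shuts the server down cleanly.
    app_server.stop()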
In my deployment, my application is listening on localhost:port using Flask and gevent, and then I have Nginx reverse-proxying HTTPS requests to it.

You definitely need a production WSGI server such as Gunicorn, because Flask's development server is meant for ease of development, without much scope for fine-tuning and optimization.
For example, Gunicorn has a variety of configuration options depending on the use case you are trying to solve, while the Flask development server does not. These development servers also show their limitations as soon as you try to scale and handle more requests.
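As a sketch of what such Gunicorn configuration looks like (the file name gunicorn.conf.py is conventional, and the values below are illustrative starting points, not recommendations):
# gunicorn.conf.py -- pass it explicitly with: gunicorn -c gunicorn.conf.py app:app
import multiprocessing

bind = "127.0.0.1:8000"                        # listen locally; the proxy or load balancer faces the internet
workers = multiprocessing.cpu_count() * 2 + 1  # common rule of thumb; tune for your workload
worker_class = "gevent"                        # async workers (requires gevent to be installed)
timeout = 30                                   # kill and restart workers that are silent for 30 seconds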
As for whether you need a reverse proxy server such as Nginx, it depends on your use case.
If you are deploying your application behind a newer AWS load balancer such as an Application Load Balancer (NOT a Classic Load Balancer), that by itself will suffice for most use cases. There is no need to put effort into setting up NGINX if you have that option.
One purpose of a reverse proxy is to handle slow clients, meaning clients which take a long time to send a request. The reverse proxy buffers each request until it has been received in full from the client and then hands it off asynchronously to Gunicorn, so workers are not tied up waiting on slow clients. This improves the performance of your application considerably.

Related

NGINX, The Edge, HAProxy

I was going through the Uber Engineering website, where I came across this paragraph, and it confused me a lot. If anyone can make it clear for me, I would be thankful:
The Edge: The frontline API for our mobile apps consists of over 600 stateless endpoints that join together multiple services. It routes incoming requests from our mobile clients to other APIs or services. It’s all written in Node.js, except at the edge, where our NGINX front end does SSL termination and some authentication. The NGINX front end also proxies to our frontline API through an HAProxy load balancer.
This is the link.
NGINX is already a reverse proxy + load balancer, so where does the HAProxy load balancer come into the picture, and where exactly does it fit? And what is "the edge" they talk about? Either the writer used confusing wording, or I don't understand the English.
Please help.
It seems like they're using HAProxy strictly as a load balancer, and NGINX strictly to terminate SSL and handle some authentication. It isn't necessary in most cases to use HAProxy along with NGINX; as you mentioned, NGINX has load-balancing capabilities, but being Uber, they probably ran into some unique problems that required the use of both. According to the information I've read, such as http://www.loadbalancer.org/blog/nginx-vs-haproxy/ and https://thehftguy.com/2016/10/03/haproxy-vs-nginx-why-you-should-never-use-nginx-for-load-balancing/, NGINX works extremely well as a web server, including the use case where it serves as a reverse proxy for a Node application, but its load-balancing capabilities are basic and not nearly as performant as HAProxy's. Additionally, HAProxy exposes many more metrics for monitoring and has more advanced routing capabilities.
Load balancing is not the core feature of NGINX. In the context of a node.js application, usually what you would see NGINX used for is to serve as a reverse proxy, meaning that NGINX is the web server, and http requests come through it. Then, based on the hostname and other rules, it forwards on the HTTP request to whatever port your node.js application is running on. As part of this flow, often NGINX will handle SSL termination, so that this computationally-intensive task is not being handled by node.js. Additionally, NGINX is often used to serve static assets for node.js apps, as it is more efficient, especially when compressing assets.

How to use Nginx to load pages through express router

So I'm building an end-to-end application (with a Node.js/MySQL back end, a React front end, and the Express router), but I'm having trouble setting up a local development server. I don't need it to be accessed from the outside world; I just need to be able to load different pages through the Express router. I don't have any dev-ops experience for this, so I'm trying to use nginx to point to the router, which I can't figure out. Is there an easier way to do this?
I also need to run this on a Windows machine, which just makes everything slightly more complicated.
It's not entirely clear from your description how your application is set up and what the role of Nginx is.
So I'll start from the beginning...
Nginx is primarily an HTTP server which can also function as a proxy for HTTP requests. If you've written a Node.js application using Express, you have written an HTTP server which can handle any routes you have set up and can also serve your static assets (i.e. HTML pages, images, front-end JavaScript, CSS, etc.). In this case, there is no need for Nginx - if you wrote something like the Express "Hello World" app, then you will see a message like "Example app listening on port 3000" and you can connect to your app by visiting http://localhost:3000 in your browser.
That's it - there's literally nothing else to your app and there is no need for Nginx (or any other HTTP server) to run your application.
Now that's not to say that there is no role for Nginx in your application, but it may not be as an HTTP server. One possibility is that you may want to set up Nginx as a proxy, to handle certain routes by sending the requests to your Node application. For example, I set up an application some time ago which uses Nginx to proxy API routes for my application to a Node application and to serve static assets directly. This may be what you have in mind - if it is, you will need to configure different routes in Nginx to serve different things (and unfortunately there's not enough information in your question to give suggestions on this).
As an aside, you're probably going to find this much easier to set up using Linux - perhaps the Windows Subsystem for Linux, a virtual machine running Linux, or Docker.
You'll probably want to use
https://github.com/facebook/create-react-app
create-react-app my-app will set up everything you need (webpack, etc.), and then
npm start will start a local development server.
It should work on Windows, but I don't know, because I wouldn't use or recommend Windows ;-)

Do I need a different server to run node.js

Sorry if this is the wrong question for this forum, but I am simply stuck and need some advice. I have a shared hosting service and a cloud-based hosting server with Node.js installed. I want to host my website as normal, but I also want to add real-time chat and location tracking using Node.js. I am confused by what I am reading in several places: Node.js is itself a server, but it is not designed to host websites? So I have to run two different servers, one for the website and one to run Node.js? When I set up the cloud server with a Node.js script running, I can no longer access the web pages.
What's the best way for me to achieve this? I am just going round in circles. Also, is there a way I can set up a server on my PC and run and test both of these together beforehand, so I can see what is needed and get it working? It would stop me ordering servers I don't need.
Many thanks for any help or advice.
Node can serve web pages using a framework like Express, but it can cause conflicts if run on the same port as another web server program (Apache, etc.). One solution could be to serve your web pages through your web server on port 80 (or 443 for HTTPS) and run your Node server on a different port in order to send information back and forth.
There are a number of ways you can achieve this but here is one popular approach.
You can use NGINX as your front facing web server and proxy the requests to your backend Node service.
In NGINX, for example, you will configure your upstream service as follows:
upstream lucyservice {
    server 127.0.0.1:8000;  # the local port your Node service listens on
    keepalive 64;
}
The 8000 you see above is just an example; you may be running your Node service on a different port.
Further in your config (in the server config section) you will proxy the requests to your service as follows:
location / {
    proxy_pass http://lucyservice;
}
Your Node service can be running under a process manager like forever or pm2. You can have multiple Node services running in a cluster, depending on how many processors your machine has.
So to recap: your front-facing web server will handle all traffic on port 80 (HTTP) and/or 443 (HTTPS) and will proxy the requests to your Node service running on whatever port(s) you define. All of this can happen on a single server, or on multiple servers if you need.

Is Gunicorn alone enough to host a Flask application, or is it a must to configure Nginx as well?

I have many Flask applications running as independent modules. (No web pages are served here - no CSS or any script files.) They are purely REST-based web services which do some processing and return a response.
I would like to take these Flask applications to production grade, so I need to replace the internal Flask server with some other production-grade server. On the Internet, I found that we should go with a WSGI server (Gunicorn, with NGINX as a proxy server). As we don't have any static web pages to serve here, I'm confused whether I should configure NGINX, or whether Gunicorn alone with async workers is enough to handle the load in production.
Note: We have a huge load in production, as it will be processing over 100k images.
You need to use Gunicorn with Nginx. Gunicorn is a Python WSGI server.
Quoting from the Gunicorn website: "It is best to use Gunicorn behind an HTTP proxy server. We strongly advise you to use nginx."
reference
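As a rough sketch of the Gunicorn side for REST-only Flask services like these (the module name api.py, the port, and the worker count are assumptions to adapt to your setup):
# api.py -- one of the REST-only Flask services
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/process", methods=["POST"])
def process():
    # ... do the image processing here ...
    return jsonify({"status": "ok"})

# Run it behind Nginx with several worker processes, e.g.:
#   gunicorn --workers 8 --bind 127.0.0.1:8001 api:app
# For CPU-heavy image processing, more worker processes usually help more than
# async worker classes, which mainly benefit I/O-bound workloads.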
I also found that unicorn is a better alternative to Gunicorn. Reference

Resources