Flask-Security force SECURITY_POST_LOGIN_VIEW to use HTTPS - python-3.x

I've run into an issue where the post-login redirect in Flask-Security is not sticking to HTTPS and instead issues a plain HTTP request. In some instances this causes an error.
Ideally my nginx config would redirect all requests on :80 to :443 automatically, but apparently that is problematic as well. While I sort out the nginx issue, I would really like to force Flask-Security to always use HTTPS.
My current var for this is just:
SECURITY_POST_LOGIN_VIEW = '/logged-in'
The documentation says an endpoint name can be used as well, but it does not say what the format for that is. Do you just provide the endpoint name or is it wrapped in a url_for()?
Is there a way to force Flask-Security to always use HTTPS, either in this particular instance or as a whole?

I had the same issue before. In fact, because of this limitation, I started to use Flask-JWT instead of Flask-Security. Here is the link to the project: https://pythonhosted.org/Flask-JWT/

I don't have an answer about Flask-Security itself, but you can force all HTTP traffic to redirect to HTTPS with Google's Flask-Talisman. That will fix the problem no matter what library you're using.
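For reference, here is a minimal sketch of wiring Flask-Talisman into an app (app is assumed to be your existing Flask instance; force_https defaults to True, so it is spelled out only for clarity):

# Minimal sketch, assuming flask-talisman is installed (pip install flask-talisman).
from flask import Flask
from flask_talisman import Talisman

app = Flask(__name__)

# force_https=True (the default) redirects plain HTTP requests to HTTPS;
# Talisman also sets HSTS and a default Content-Security-Policy, which you
# may want to tune for your site.
Talisman(app, force_https=True)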

An old but important question. I spent too much time working through this, but here goes:
The answer is that Flask's url_for() returns a relative URL such as '/logged-in'.
Werkzeug, by default (via its autocorrect_location_header = True Response option), turns that into a fully qualified URL.
Where does it get the scheme and server?
It gets them by calling werkzeug.wsgi.get_current_url(), which takes the scheme from environ["wsgi.url_scheme"].
Assuming you are using uWSGI (https://uwsgi-docs.readthedocs.io/en/latest/), it seems to look at the variables UWSGI_SCHEME and HTTP_X_FORWARDED_PROTO; if neither is set, it looks at the variable HTTPS, and otherwise sets wsgi.url_scheme = "http".
Most examples of setting up uwsgi+python say to place this (and others) in your
uwsgi_params file that is included in your nginx config:
uwsgi_param HTTPS $https if_not_empty;
I believe that simply setting:
uwsgi_param UWSGI_SCHEME https;
in your nginx config would force Flask to believe the request was HTTPS, regardless.
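For context, here is a sketch of where that directive would sit in an nginx server block that talks to uWSGI (the server_name and socket path are placeholders):

server {
    listen 443 ssl;
    server_name example.com;               # placeholder; ssl_certificate lines omitted

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/tmp/yourapp.sock;  # placeholder socket
        # Make Werkzeug/Flask see every request as https:
        uwsgi_param UWSGI_SCHEME https;
    }
}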
I use an AWS ALB, which seems to set all the relevant X-Forwarded-* headers, so things just work.
If you need to handle both HTTP and HTTPS and your load balancer doesn't set the headers, then the Werkzeug folks have an answer, the ProxyFix middleware - https://werkzeug.palletsprojects.com/en/0.15.x/middleware/proxy_fix
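Here is a minimal sketch of the ProxyFix approach (assuming Werkzeug >= 0.15 and a proxy that sets X-Forwarded-Proto / X-Forwarded-Host; app is your Flask instance):

# Minimal sketch: trust one hop of X-Forwarded-* headers set by the proxy.
from flask import Flask
from werkzeug.middleware.proxy_fix import ProxyFix

app = Flask(__name__)

# x_proto=1 trusts X-Forwarded-Proto from one proxy, so redirects and
# url_for(..., _external=True) use https when the original request did;
# x_host=1 does the same for the original Host header.
app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_proto=1, x_host=1)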

The endpoint name is the name of the view function.
Basically, if your desired route decorates the show_home function:
app.config['SECURITY_POST_LOGIN_VIEW'] = 'show_home'

@app.route('/your-route')
def show_home():
    ...
PS: I am not sure what the situation was when the question was posted, but this describes Flask-Security-Too==4.0.

Related

Apache & NodeJS Express: how to use the same 404 page

I'm using a proxy on Apache to ProxyPass a subfolder of the domain to a Node.js Express app on a local port. Is there a way, from the Express side, to pass back purely a status so that Apache uses its own error page? (I want them to look the same.)
As far as I know, I could use an absolute path on the server to that page, but that may not stay consistent if I change the Apache settings. Is there any way to tell Apache, via the proxied response, to show its own error page, whatever it has been set to?
Maybe there is no such way; I just wonder if there is. The best I have come up with so far is to redirect to the same URL prefixed with /e/, which would work, but then /e/ remains in the URL - not bad, but maybe someone has a better hint.
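(Not from the original thread, but one direction that may be worth checking: Apache's mod_proxy has a ProxyErrorOverride directive that makes Apache replace error responses coming back from the backend with its own ErrorDocument pages. A rough sketch, where /myapp/ and the port are assumed values:)

ProxyPass        /myapp/ http://127.0.0.1:3000/
ProxyPassReverse /myapp/ http://127.0.0.1:3000/
# Serve Apache's own ErrorDocument pages for 4xx/5xx responses from Express:
ProxyErrorOverride On
ErrorDocument 404 /errors/404.html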

I need to remove or ignore the X-Frame-Options header. Should I use a proxy?

Premise
I need a way to remove the X-Frame-Options header from the responses from a few websites before those responses reach my browser.
I am doing this so that I can properly render my custom kiosk webpage, which has iframes that point to websites that don't want to show up in frames.
What I have tried
I have tried setting up a proxy using squid and configuring its reply_header_access option to deny X-Frame-Options headers as the server receives them, but that is for some reason not working as anticipated. I have verified that I am indeed going through the Squid proxy, and I have verified that the X-Frame-Options header persists despite my squid.conf file containing the following:
reply_header_access X-Frame-Options deny all
and having built squid (using Homebrew on my Mac) with the --enable-http-violations option.
Having chased down a lot of what might have gone wrong with this approach, I have decided that the reply_header_access option must not do exactly what I thought it does (modify headers before returning them to the client).
So, I tried using another proxy server. After reading a Stack Overflow question asking about a situation roughly similar to mine, I decided I might try using the node-http-proxy library. However, I have never used Node before, so I got lost pretty quickly and am stuck at a point where I am not sure how to implement the library for my specific purpose.
Question
Using Node seems like a potentially very easy solution, so how can I set up a proxy using Node that removes the X-Frame-Options header from responses?
Alternatively, why is Squid not removing the header even though I tried to set it up to do so?
Final alternative: Is there an easier way to reach my ultimate goal of rendering any page I want within an iframe?
I used a proxy, specifically mitmproxy with the following script:
drop_unwanted_headers.py:
from mitmproxy import http


def requestheaders(flow: http.HTTPFlow) -> None:
    # Drop any "Sec-*" request headers before the request is forwarded upstream.
    for each_key in list(flow.request.headers):
        if each_key.casefold().startswith("sec-"):
            flow.request.headers.pop(each_key)


def responseheaders(flow: http.HTTPFlow) -> None:
    # Strip the headers that stop the page from rendering inside an iframe.
    if "x-frame-options" in flow.response.headers:
        flow.response.headers.pop("x-frame-options")
    if "content-security-policy" in flow.response.headers:
        flow.response.headers.pop("content-security-policy")
To run it, do:
mitmproxy --script drop_unwanted_headers.py
Also ensure that your system's or browser's proxy settings point to the machine where the proxy server is running (possibly localhost) and to the correct port (mitmproxy listens on 8080 by default).

Is there a proxy webserver that routes requests dynamically based on URLs?

I am looking for a way to dynamically route requests through a proxy webserver. I will explain exactly what I need and what I have found so far.
I would like to have a lightweight webserver (I'm thinking about Node.js or nginx) set up as a proxy webserver with a public IP. It would route requests to different local webservers based on URLs - not only on the hostname but on the full URL.
My idea is that this proxying webserver would use either a local in-memory cache, memcached, or Redis to look up the key-value mapping from URL to local webserver.
I have found these projects:
https://github.com/nodejitsu/node-http-proxy
https://www.steve.org.uk/Software/node-reverse-proxy/
https://github.com/hipache/hipache
They all seem to do similar things, but not exactly what I am looking for, which is:
- URL-based proxying (absolute URLs routing to different local webservers)
- use of memory-based configuration storage / cache
- dynamically changing the configuration via an API without reloading the proxy webserver
Is there a better-suited project, or is there a way to configure one of the three projects above to fit my requirements?
Thank you for your time and effort in advance.
I think this does exactly what you want: https://openresty.org/en/dynamic-routing-based-on-redis.html
It's basically nginx with precompiled modules. You can set up the same thing yourself with nginx + the Lua module + Redis (plus, of course, the necessary Lua rocks); OpenResty just makes it easier.
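A rough sketch of that nginx + Lua + Redis setup (not taken from the OpenResty article; the key scheme host + uri -> "ip:port" and the Redis address are assumptions for illustration):

location / {
    set $upstream "";

    access_by_lua_block {
        local redis = require "resty.redis"
        local red = redis:new()
        red:set_timeout(100)  -- milliseconds

        local ok, err = red:connect("127.0.0.1", 6379)
        if not ok then
            return ngx.exit(ngx.HTTP_BAD_GATEWAY)
        end

        -- Key: host + path of the request; value: "ip:port" of the backend.
        local target, err = red:get(ngx.var.host .. ngx.var.uri)
        if not target or target == ngx.null then
            return ngx.exit(ngx.HTTP_NOT_FOUND)
        end

        ngx.var.upstream = target
    }

    # Note: proxy_pass with a variable needs a resolver directive if the
    # looked-up value is a hostname rather than an ip:port pair.
    proxy_pass http://$upstream;
}

Because the lookup happens per request against Redis, the routing table can be changed by any API that writes to Redis, without reloading nginx.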

In Node.js, finding the original client URL when app is behind a reverse proxy

I'm working on a Node.js/Express application that, when deployed, sits behind a reverse proxy.
For example: http://myserver:3000/ is where the application actually sits, but users access it at https://proxy.mycompany.com/myapp.
I can get the original user agent request's host from a header passed through easily enough (if the reverse proxy is configured properly), but how do I get that extra bit of path and protocol information from the original URL accessed by the browser?
When my application has to generate redirects, it needs to know that the end user's browser expects the request to go to not only to proxy.mycompany.com over https, but also under a sub-path of myapp.
So far all I can get access to is proxy.mycompany.com, which isn't enough to create a valid redirect.
For dev purposes I'm using a reverse proxy setup in nginx, but my company is using Microsoft's ISA as well as HAProxy.
Generally this is done with x-forwarded-* headers which are inserted by the reverse proxy itself. For example:
x-forwarded-host: foo.com
x-forwarded-proto: https
Take a look here:
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/x-forwarded-headers.html
Probably you can configure nginx to insert whatever x- header you want, but the convention (standard?) seems to be the above.
If you're reverse proxying into a sub-path such as /myapp, that definitely complicates matters. Presumably that sub-path should be a configuration option available to both nginx and your app.
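For illustration, the nginx side of that might look roughly like this (X-Forwarded-Prefix is a convention rather than a standard header, and /myapp is just this question's example path):

location /myapp/ {
    proxy_pass http://myserver:3000/;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    # Non-standard, but lets the app reconstruct the public sub-path:
    proxy_set_header X-Forwarded-Prefix /myapp;
}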

nodejs get request header from Nginx

I'm using nginx to proxy my Node.js app. In my app, I always read a "client_id" from a request header. When I test locally, everything works correctly. But when I push to the server and proxy through nginx, the client_id is lost. I can see that when nginx does the proxying, it removes my custom header "client_id".
What I want to ask is:
is there a way to make sure nginx passes my client_id through to Node.js?
is there a way to make nginx pass along whatever custom headers it receives?
Thanks @Peter Lyons, I just found the reason. Yes, nginx does pass all headers to the destination server by default. The exception is that, by default, nginx blocks all headers whose names contain an underscore "_".
I don't know why nginx does this, but in this case the underscore is the reason I can't get my header "client_id".
There are two ways to solve it:
1. Change the header name to avoid the underscore; in this case, change "client_id" to "clientId" or "client-id".
2. In nginx.conf, inside the http block, set underscores_in_headers on;, for example:
http {
....
underscores_in_headers on;
....
}
By default, nginx's HttpProxyModule has proxy_pass_request_headers enabled, and thus will pass on the client request headers to the destination server.
My first suggestion is to try renaming your header to "X-Client-Id" to use the extension namespace HTTP has reserved for non-standard headers such as yours, and see if nginx forwards that. If not, have a look at the proxy_set_header directive.
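For example, a proxy_set_header sketch (assuming the header is renamed to X-Client-Id; $http_x_client_id is how nginx exposes that incoming request header, and the upstream address is a placeholder):

location / {
    proxy_pass http://127.0.0.1:3000;
    # Explicitly forward the renamed header; with proxy_pass_request_headers
    # left at its default this is usually not even necessary.
    proxy_set_header X-Client-Id $http_x_client_id;
}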
Side note: using a custom header at all, and specifically one called "client_id" is almost a sure sign you are reinventing the wheel or don't understand industry standards for using cookies and sessions. Unless you are really sure you need this, you may want to step back and rethink your underlying problem.
