Nginx - Rate limit when origin server response code is 401 - node.js

I would like nginx to rate limit by client IP when the origin server responds with a 401 status code. How would I go about this? I already have a limit_req_zone set up for normal API calls, which looks something like this: limit_req_zone $binary_remote_addr zone=api:10m rate=5r/s; but I would like to further rate limit offenders that make unauthorized calls to my API endpoints.
Edit:
I tried mapping the 401 response status to IP addresses and rate limiting based on the mapped variable, but that doesn't seem to do anything. See the code below.
map $status $limit {
    default '';
    401     $binary_remote_addr;
}

limit_req_zone $limit zone=api:10m rate=5r/s;

location /api {
    limit_req zone=api burst=5;
    ...
}

This is quite tricky because $status is still empty at the point where the limit_req key is evaluated: the status is only known after nginx has processed the request, for example once the proxy_pass directive has received a response from the upstream.
The closest I could get to rate limiting by status is the following:
...
...
...
limit_req_zone $binary_remote_addr zone=api:10m rate=5r/s;
...
...
...
server {
    location /mylocation {
        proxy_intercept_errors on;
        proxy_pass http://example.org;
        error_page 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 421 422 423 424 426 428 429 431 451 500 501 502 503 504 505 506 507 508 510 511 @custom_error;
    }

    location @custom_error {
        limit_req zone=api burst=5 nodelay;
        return <some_error_code>;
    }
}
...
The drawback is that with this approach you must return a status code different from the one in the proxied response.
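To get closer to the original goal of punishing repeat offenders, the same workaround can be combined with a second, stricter zone that is only consulted from the named location. This is a rough sketch rather than a tested configuration; the zone name unauth, the 1r/m rate and the 429 response are illustrative assumptions, not values from the config above.

limit_req_zone $binary_remote_addr zone=api:10m rate=5r/s;
# stricter zone, only touched when the upstream answered 401 (name and rate are assumptions)
limit_req_zone $binary_remote_addr zone=unauth:10m rate=1r/m;

server {
    location /api {
        limit_req zone=api burst=5 nodelay;
        proxy_intercept_errors on;
        proxy_pass http://example.org;
        error_page 401 = @unauthorized;
    }

    location @unauthorized {
        # every intercepted 401 is counted against the stricter zone;
        # once a client exceeds it, limit_req rejects the request outright
        limit_req zone=unauth burst=2 nodelay;
        # per the drawback noted above, a code other than the upstream's 401 is returned
        return 429;
    }
}

Clients whose requests are never answered with 401 never reach @unauthorized, so the stricter zone does not affect legitimate traffic.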

Related

Invoking requests.get() within flask application sub-class is causing uwsgi segmentation fault and 502 on nginx

I'm facing an issue with my current flask app setup and would really appreciate some input on this. Thank you!
Flow
user --> nginx --> uwsgi --> flask app --> https call to external system (response is processed and relevant data returned to client)
Workflow
Intent: My Flask view/route invokes another class, which makes an HTTPS (GET) call to an external system to retrieve data. This data is then processed (analyzed) and an appropriate response is sent to the user.
Actual: The user receives a 502 Bad Gateway from the web server when invoking the Flask endpoint. This only happens with nginx and uWSGI in front of the Flask application; initial tests directly against Flask's built-in server on the host appeared to work.
Note: the analytics step does take some time, so I increased all relevant timeouts (to no avail).
Configurations
Nginx (tried with and without TLS)
worker_processes 4;
error_log /path/to/error.log;
pid /path/to/nginx.pid;

events {
    worker_connections 1024;
}

http {
    default_type application/json;
    access_log /path/to/access.log;
    sendfile on;
    keepalive_timeout 0; # multiple values tried

    # HTTPS server
    server {
        listen 5555 ssl;
        server_name my_host.domain.com;

        ssl_certificate /path/to/server.crt;
        ssl_certificate_key /path/to/server.key;
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout 5m;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        location /my_route {
            uwsgi_connect_timeout 60s;
            uwsgi_read_timeout 300s;
            client_body_timeout 300s;
            include uwsgi_params;
            uwsgi_pass unix:/path/to/my/app.sock;
        }
    }
}
uWSGI (threads reduced to 1 as part of troubleshooting attempts)
[uwsgi]
module = wsgi:app
# also added as part of troubleshooting steps
harakiri = 300
logto = /path/to/logs/uwsgi_%n.log
master = true
processes = 1
threads = 1
socket = app.sock
chmod-socket = 766
vacuum = true
socket-timeout = 60
die-on-term = true
Code Snippets
Main Flask Class (view)
@app.route(my_route, methods=['POST'])
def my_view():
    request_json = request.json
    app.logger.debug(f"Request Received: {request_json}")
    schema = MySchema()
    try:
        schema.load(request_json)
        var1 = request_json["var1"]
        var2 = request_json["var2"]
        var3 = request_json["var3"]
        var4 = request_json["var4"]
        # begin
        execute = AnotherClass(client_home, config, var1, var2, var3, var4, mime_type)
        return jsonify(execute.result)
    except ValidationError as exception:
        error_message = json.dumps(exception.messages)
        abort(Response(error_message, 400, mimetype=mime_type))
Class which executes HTTPS GET on external system
custom_adapter = HTTPAdapter(max_retries=3)
session = requests.Session()
session.proxies = self.proxies
session.mount("https://", custom_adapter)
try:
    json_data = json.loads(session.get(process_endpoint, headers=self.headers, timeout=(3, 6)).text)
Errors
Nginx
[error] 22680#0: *1 upstream prematurely closed connection while
reading response header from upstream, client: client_ip, server:
server_name, request: "POST /my_route HTTP/1.1", upstream:
"uwsgi://unix:/path/to/my/app.sock:", host: "server_name:5555"
User gets a 502 on their end (Bad Gateway)
uWSGI
2020-04-24 16:57:23,873 - app.module.module_class - DEBUG - Endpoint:
https://external_system.com/endpoint_details 2020-04-24 16:57:23,876 -
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1):
external_system.com:443 !!! uWSGI process #### got Segmentation Fault
!!!
* backtrace of #### /path/to/anaconda3/bin/uwsgi(uwsgi_backtrace+0x2e) [0x610e8e]
/path/to/anaconda3/bin/uwsgi(uwsgi_segfault+0x21) [0x611221]
/usr/lib64/libc.so.6(+0x363f0) [0x7f6c22b813f0]
/path/to/anaconda3/lib/python3.7/lib-dynload/../../libssl.so.1.0.0(ssl3_ctx_ctrl+0x170)
[0x7f6c191b77b0]
/path/to/anaconda3/lib/python3.7/site-packages/cryptography/hazmat/bindings/_openssl.abi3.so(+0x5a496)
[0x7f6c16de2496]
....
end of backtrace * DAMN ! worker 1 (pid: ####) died :( trying respawn ... Respawned uWSGI worker 1 (new pid: ####)
SOLVED
Steps taken
update cryptography
update requests
update urllib3
add the missing TLS ciphers to the Python HTTPAdapter (following an external guide)

proxy_pass does not support # in nginx

I am new to nginx and I am trying to create a clean URL pattern: when I open "http://xx.xx.xx.xx:61001/employee" in the browser, it should route to "http://localhost:8080/emp/#/details". Unfortunately I am getting a 404 error in the browser, even though my application is up and running. The # (special character) seems to be causing an issue in nginx. Can someone help me here?
Below my configuration:
location /employee {
    proxy_pass http://localhost:8080/emp/#/details;
}
I am getting a 404 error in the browser.
This is my full server config:
server {
    listen 8081;
    server_name xx.xx.xxx.xxx;
    location / { root html; index index.html index.htm; }
    location /employee { proxy_pass http://localhost:8080/emp/#/details; }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html { root html; }
}
You are currently listening on port 8081; change it to 80.
server {
    listen 80;
    server_name xx.xx.xxx.xxx;
    location / { root html; index index.html index.htm; }
    location /employee { proxy_pass http://localhost:8080/emp/#/details; }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html { root html; }
}
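As a side note on the underlying problem: the fragment part of a URL (everything after #) is never sent to the server by the browser, so nginx cannot forward a request to /emp/#/details via proxy_pass. A possible workaround, not part of the answer above, is to redirect the browser and let it resolve the fragment itself. This sketch assumes the backend is reachable by the client at the address used; otherwise substitute the public host name.

location /employee {
    # the fragment (#/details) is resolved client-side and cannot be proxied;
    # redirect the browser instead so it applies the #/details route itself
    return 302 http://localhost:8080/emp/#/details;
}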

Why can't I record requests that result in a 404 error?

curl -I -w %{http_code} http://quotes.money.163.com/f10/gszl_600024.html
HTTP/1.1 404 Not Found
Server: nginx
curl -I -w %{http_code} http://quotes.money.163.com/f10/gszl_600023.html
HTTP/1.1 200 OK
Server: nginx
This shows that http://quotes.money.163.com/f10/gszl_600024.html does not exist (its HTTP status code is 404), while http://quotes.money.163.com/f10/gszl_600023.html does exist (its HTTP status code is 200).
I want to write a spider that records requests resulting in a 404 error.
Add HTTPERROR_ALLOWED_CODES in middlewares.py.
HTTPERROR_ALLOWED_CODES = [404,403,406, 408, 500, 503, 504]
Add log setting in settings.py.
LOG_LEVEL = "CRITICAL"
LOG_FILE = "mylog"
Create a spider.
import scrapy
from info.items import InfoItem
import logging

class InfoSpider(scrapy.Spider):
    handle_httpstatus_list = [404]
    name = 'info'
    allowed_domains = ['quotes.money.163.com']
    start_urls = [r"http://quotes.money.163.com/f10/gszl_600023.html",
                  r"http://quotes.money.163.com/f10/gszl_600024.html"]

    def parse(self, response):
        item = StockinfoItem()
        if response.status == 200:
            logging.critical("url whose status is 200 : " + response.url)
        if response.status == 404:
            logging.critical("url whose status is 404 : " + response.url)
Open mylog file after running the spider.
2019-04-25 08:47:57 [root] CRITICAL: url whose status is 200 : http://quotes.money.163.com/
2019-04-25 08:47:57 [root] CRITICAL: url whose status is 200 : http://quotes.money.163.com/f10/gszl_600023.html
Why is there a 200 status for http://quotes.money.163.com/?
When you request http://quotes.money.163.com/f10/gszl_600024.html in a browser, there is no content on the server for that URL, so it redirects to http://quotes.money.163.com/ after 5 seconds, and the HTTP code for http://quotes.money.163.com/ is 200. That is why there are two 200 status lines in the log.
What confuses me is that there is no log entry such as
2019-04-25 08:47:57 [root] CRITICAL: url whose status is 404 : http://quotes.money.163.com/f10/gszl_600024.html
in the log file mylog.
How can I make if response.status == 404: logging.critical("url whose status is 404 : " + response.url) execute in my Scrapy 1.6?
You have a redirect from the 404 page to the main page, so you can set dont_redirect and it will show you the response you need. Try this:
class InfoSpider(scrapy.Spider):
    handle_httpstatus_list = [404]
    name = 'info'
    allowed_domains = ['quotes.money.163.com']
    start_urls = [
        r"http://quotes.money.163.com/f10/gszl_600023.html",
        r"http://quotes.money.163.com/f10/gszl_600024.html"
    ]

    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(url, meta={'dont_redirect': True})

    def parse(self, response):
        if response.status == 200:
            logging.critical("url whose status is 200 : " + response.url)
        if response.status == 404:
            logging.critical("url whose status is 404 : " + response.url)
So, now I get in my log:
2019-04-25 08:09:23 [root] CRITICAL: url whose status is 200 : http://quotes.money.163.com/f10/gszl_600023.html
2019-04-25 08:09:23 [root] CRITICAL: url whose status is 404 : http://quotes.money.163.com/f10/gszl_600024.html

Nginx: handle 500 Internal Server Error security issue

I am trying to fix a security vulnerability where a 500 Internal Server Error discloses the location of a file on the server.
My issue is similar to the one shown in this screenshot: https://cdn-images-1.medium.com/max/1600/1*2DAwIEJhgLQd82t5WTgydA.png
(from https://medium.com/volosoft/running-penetration-tests-for-your-website-as-a-simple-developer-with-owasp-zap-493d6a7e182b)
I tried proxy_intercept_errors on; and an error_page 500 redirect, but it didn't help. Any help with this?
This is a basic example of implementing proxy_intercept_errors on;
upstream foo {
    server unix:/tmp/foo.sock;
    keepalive 60;
}

server {
    listen 8080 default_server;
    server_name _;

    location = /errors/5xx.html {
        internal;
        root /tmp;
    }

    location / {
        proxy_pass http://foo;
        proxy_http_version 1.1;
        proxy_redirect off;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_intercept_errors on;
        error_page 500 501 502 503 504 505 404 =200 /errors/5xx.html;
    }
}
Notice the:
error_page 500 501 502 503 504 505 404 =200 /errors/5xx.html;
This intercepts the listed 5xx errors plus the 404 and returns them to the client with a 200.
Also, note that the /errors/5xx.html location uses root /tmp;, so you still need to create the file /tmp/errors/5xx.html:
$ mkdir /tmp/errors
$ echo "intercepting errors" > /tmp/errors/5xx.html
You don't necessarily need a file to reply to the request; you could also use something like this:
location = /errors/5xx.html {
    internal;
    default_type text/plain;
    return 200 'Hello world!';
}
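If the goal is only to hide the backend's error page (and the file paths it may disclose) without changing the status code, a variant of the same configuration is to drop the =200, so the client keeps the upstream's 5xx status but receives the generic body. This is a sketch that reuses the internal /errors/5xx.html location from above; it is not part of the original answer.

location / {
    proxy_pass http://foo;
    proxy_intercept_errors on;
    # no "=200": the original 5xx status is preserved,
    # only the response body is replaced with the generic page
    error_page 500 501 502 503 504 505 /errors/5xx.html;
}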
In your case, the 404 Not Found could be handled differently, for example:
upstream failover {
    server server2:8080;
}

server {
    listen 80;
    server_name example.com;
    root /tmp/test;

    location ~* \.(mp4)$ {
        try_files $uri @failover;
    }

    location @failover {
        proxy_pass http://failover;
    }
}
In this case, if a file ending with .mp4 is not found, nginx tries another server; then, if required, you can still intercept the error there.
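Intercepting the error on the failover leg could look roughly like the sketch below. It assumes the internal /errors/5xx.html location shown earlier is also defined in this server block; it is an illustration, not a tested configuration.

location @failover {
    proxy_pass http://failover;
    proxy_intercept_errors on;
    # if the second server also fails (or also returns 404), serve the generic page
    error_page 404 500 502 503 504 /errors/5xx.html;
}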

How to set variable value based on a subrequest

I am trying to set the value of a variable based on whether the user is correctly logged in. I will use this variable for conditional SSI. I am using the auth_request module to authorise the user, and this authorisation happens for all pages. The problem I am facing is that for 401/403 errors, nginx passes the 401 on to the client. What I want to do is show some pages anyway (even if the authorization fails), but set the variable (to the status of the subrequest) for conditional SSI.
Config File
location / {
    set $auth_status 100; # default value
    error_page 401 403 $show_anyway;
    error_page 500 = /auth_fallback;
    auth_request /auth_public;
    auth_request_set $auth_status $upstream_status;
    auth_request_set $show_anyway $request_uri;
}

location = /auth_public {
    proxy_pass http://localhost:8081;
    proxy_pass_request_body off;
    #return 200; # tried this, but the subrequest doesn't fire
}
error_page 401 403 =200 ... can help. I tested the following config:
ssi on;

location / {
    set $auth_status 100;
    auth_request /auth.php;
    auth_request_set $auth_status $upstream_status;
    error_page 401 403 =200 @process;
}

location @process {
    try_files $uri =404;
}

location = /auth.php {
    fastcgi_pass 127.0.0.1:9001;
    fastcgi_index index.php;
    include fastcgi_params;
}
A note about try_files inside @process: $uri/ is dropped to prevent an internal redirect by the index directive (if an index file is found). If an internal redirect passes back to location /, the last error code will be used.
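Putting the accepted idea back into the original configuration could look roughly like the sketch below. It keeps the default $auth_status value from the question and the =200 @process trick from the answer; the Content-Length reset follows the usual auth_request example and is an assumption here, not part of the original answer.

ssi on;

location / {
    set $auth_status 100;                             # default if the subrequest never runs
    auth_request /auth_public;
    auth_request_set $auth_status $upstream_status;   # 200, 401, 403, ... from the subrequest
    error_page 401 403 =200 @process;                 # show the page anyway on auth failure
}

location @process {
    try_files $uri =404;                              # $uri only, per the note above
}

location = /auth_public {
    proxy_pass http://localhost:8081;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
}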
