When I try to upload a file with Flask rest API I get: upstream prematurely closed connection while reading response header from upstream, client - python-3.x

I've been experiencing some issues when I try to upload a file through my web API using Python and Flask. Whenever I try to do so I get the following error message (it happens with any kind of file, regardless of size):
2019/12/18 16:30:04 [error] 111206#111206: *12251 upstream prematurely closed connection while reading response header from upstream,
client: xxx.61.xxx.70, server: *mydomain.com*, request: "POST /upload HTTP/1.1", upstream: "uwsgi://unix:/home/climpia/tracking/tracking.sock:", host: "*api.mydomain.com*", referrer: "https://client.mydomain.com/?id_ruta=14130"
Connection reset by xxx.251.xx.158 port 22
I am not sure where the issue is, since I have already changed some options in the uwsgi.ini and nginx config files, but I still get the same error message. I have checked the code (I am using Python and Flask) and it seems fine.
This is the uwsgi.ini:
[uwsgi]
module = wsgi
master = true
processes = 5
socket-timeout = 65
http-keepalive = 256
socket = tracking.sock
chmod-socket = 660
vacuum = true
#location of log files
logto = /tmp/%n.log
die-on-term = true
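For what it's worth, "upstream prematurely closed connection" usually means the uWSGI worker died or timed out mid-request. A few uwsgi.ini options commonly raised for large or slow uploads (a sketch under assumptions about this setup, not a confirmed fix; the values are illustrative):

```ini
[uwsgi]
# per-request kill-and-respawn timeout; raise it if uploads are slow (illustrative value)
harakiri = 300
# request-header buffer; the default 4k can be too small for some requests
buffer-size = 32768
# cap on POST body size in bytes (0 = unlimited); ~512MB here to match nginx
limit-post = 536870912
```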
This is the nginx config file for this project:
server {
    server_name mydomain.com api.mydomain.com;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/home/user/tracking/tracking.sock;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/mydomain.com-0001/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mydomain.com-0001/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    sendfile on;
    client_max_body_size 512M;
    keepalive_timeout 0;
}
server {
    if ($host = api.mydomain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    if ($host = mydomain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name mydomain.com api.mydomain.com;
    return 404; # managed by Certbot
}
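On the nginx side, the only upload-related knobs in the config above are client_max_body_size and keepalive_timeout; if the backend is slow to answer after a large upload, the uwsgi timeouts may also matter (a sketch with illustrative values, not a confirmed fix):

```nginx
location / {
    include uwsgi_params;
    uwsgi_pass unix:/home/user/tracking/tracking.sock;
    # give the backend more time to read and answer large uploads (illustrative values)
    uwsgi_read_timeout 300s;
    uwsgi_send_timeout 300s;
}
```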
This is the Python code:
# tracking.py
from flask import Flask, request, session, g, redirect, \
    url_for, abort, render_template, flash, jsonify, \
    make_response, send_from_directory
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy import text
from flask_cors import CORS
from datetime import datetime
from werkzeug.utils import secure_filename
import os

app = Flask(__name__, static_folder='static')
app.config.from_object(__name__)
cors = CORS(app, resources={r"/*": {"origins": "https://client.mydomain.com"}})
#cors = CORS(app, resources={r"/*": {"origins": "*"}})
app.config.from_envvar('APP_CONFIG_FILE', silent=True)

DB_URL = 'postgresql+psycopg2://user:pwsd@127.0.0.1/NAME DB'
UPLOAD_FOLDER = 'static/upload'
ALLOWED_EXTENSIONS = {'qpj', 'cpg', 'prj', 'dbf', 'shx', 'shp', 'txt', 'json'}

app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER
app.config['SEND_FILE_MAX_AGE_DEFAULT'] = 0
app.config['SQLALCHEMY_DATABASE_URI'] = DB_URL
#app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False # silence the deprecation warning
#app.config['SQLALCHEMY_ECHO'] = True
app.config['DEBUG'] = True
db = SQLAlchemy(app)

@app.route('/upload', methods=['POST'])
def upload_file():
    response_object = {'status': 'nothing has been uploaded'}
    if request.method == 'POST':
        if 'flecheo' not in request.files:
            print('no FLECHEO in the request')
        files = request.files.getlist('flecheo')
        for f in files:
            print(f.filename)
            filename = secure_filename(f.filename)
            app.logger.info('FileName: ' + filename)
            f.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
        response_object['message'] = 'The shapefile files were uploaded'
    return jsonify(response_object)
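As an aside (a sketch, not part of the original code): ALLOWED_EXTENSIONS is defined above but never enforced, and note that request.files['flecheo'] returns a single FileStorage while request.files.getlist('flecheo') returns every file uploaded under that key. A minimal, framework-free helper for the extension check:

```python
ALLOWED_EXTENSIONS = {'qpj', 'cpg', 'prj', 'dbf', 'shx', 'shp', 'txt', 'json'}

def allowed_file(filename):
    """True if the filename has an extension listed in ALLOWED_EXTENSIONS."""
    return '.' in filename and filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS

# In the view this would gate each file before saving, e.g.:
#     for f in request.files.getlist('flecheo'):
#         if f.filename and allowed_file(f.filename):
#             f.save(os.path.join(app.config['UPLOAD_FOLDER'], secure_filename(f.filename)))
```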
This is the JavaScript:
function uploadFiles(){
    var capa = document.getElementsByName('capa');
    var flecheo = document.getElementById('getFiles').files;
    formData = new FormData();
    for (i = 0; i < capa.length; i++) {
        if (capa[i].checked)
            formData.append('capa', capa[i].value);
    }
    for (i = 0; i < flecheo.length; i++) {
        console.log(flecheo[i]);
        formData.append('flecheo', flecheo[i]);
    }
    for (var key of formData.entries()) {
        console.log(key[0] + ', ' + key[1]);
    }
    var contentType = {
        headers: {
            'content-type': 'multipart/form-data'
        }
    };
    axios.post('https://api.mydomain.com/upload', formData, contentType)
        .then(response => {
            console.log(response);
        }).catch(error => {
            console.log(error);
        });
}
Any help is very much appreciated!

Related

What could be the misconfiguration in my api_gateway.conf file that is leading me to the error SSL_do_handshake() failed

I am getting this message in nginx (1.18) error.log file
*39 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream
I saw a lot of answers here, but none of them solve my problem.
I am trying to implement an api gateway. It should be the simplest thing in the world...
api_gateway.conf
include api_keys.conf;
server {
    access_log /var/log/nginx/api_access.log; # Each API may also log to a separate file

    auth_request /_validate_apikey;

    root /var/www/api;
    index index.html index.htm index.nginx-debian.html;

    listen 443 ssl;
    server_name api.example.com.br;

    location /microservices/ {
        proxy_pass https://127.0.0.1:10001/;
    }
    location /ms-email-sender/ {
        proxy_pass https://127.0.0.1:10002/;
    }

    # Error responses
    error_page 404 = @400;         # Treat invalid paths as bad requests
    proxy_intercept_errors on;     # Do not send backend errors to client
    include api_json_errors.conf;  # API client-friendly JSON errors
    default_type application/json; # If no content-type, assume JSON

    # API key validation
    location = /_validate_apikey {
        internal;
        if ($http_apikey = "") {
            return 401; # Unauthorized
        }
        if ($api_client_name = "") {
            return 403; # Forbidden
        }
        return 204; # OK (no content)
    }

    ssl_certificate /etc/letsencrypt/live/api.optimusdata.com.br/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/api.optimusdata.com.br/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = api.optimusdata.com.br) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name api.optimusdata.com.br;
    listen 80;
    return 404; # managed by Certbot
}
My services are written in Node.js.
I tried to put some directives under the location block, like
proxy_ssl_verify off;
I changed a lot of things in the api_gateway.conf.
I saw several tutorials on the web, and all of them look quite like this one.
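One pattern worth double-checking (an assumption about this setup, not a confirmed diagnosis): "wrong version number" while SSL handshaking to upstream typically means nginx is speaking TLS to a backend that answers in plain HTTP. If the Node.js services on ports 10001/10002 don't terminate TLS themselves, the proxy_pass scheme would need to be http, e.g.:

```nginx
location /microservices/ {
    # backend speaks plain HTTP, so no TLS between nginx and it
    proxy_pass http://127.0.0.1:10001/;
}
```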

Creating an HTTP proxy to handle mTLS connections with Hyper

I need some help to create a proxy in hyper that resolves a mTLS connection. I used this example https://github.com/hyperium/hyper/blob/0.14.x/examples/http_proxy.rs as a starting point, and worked through adding tokio-rustls to support the mTLS connection. Here is the code that handles the http tunnel.
async fn tunnel(
    mut upgraded: Upgraded,
    destination: (String, u16),
    certificates: Certificates,
) -> std::io::Result<()> {
    let root_cert_store = prepare_cert_store(&certificates);
    let tls_client_config = tls_config(certificates, root_cert_store);
    let tls_connector = TlsConnector::from(Arc::new(tls_client_config));

    let target = TcpStream::connect(target_address(&destination)).await?;
    let domain = rustls::ServerName::try_from(destination.0.as_str()).expect("Invalid DNSName");
    let mut tls_target = tls_connector.connect(domain, target).await?;
    debug!("TLS connection ready");

    let (wrote, recvd) = tokio::io::copy_bidirectional(&mut upgraded, &mut tls_target).await?;
    debug!("Client wrote {} and received {} bytes", wrote, recvd);
    Ok(())
}
To make the proxy connection I made a really small snippet in Kotlin:
fun main() {
    val fm = FuelManager.instance
    fm.proxy = Proxy(Proxy.Type.HTTP, InetSocketAddress("127.0.0.1", 8100))
    repeat(1) {
        val response = "https://nginx-test".httpGet()
            .responseString()
        println("Response: $response")
        println("Response Data: ${response.second.data.toString(Charset.defaultCharset())}")
    }
}
It just sets the proxy address and makes a call to a local nginx server where the mTLS auth is expected to occur.
This Kotlin code throws the following error: Unsupported or unrecognized SSL message
And the Nginx logs the request like this:
172.18.0.1 - - [01/Feb/2023:17:51:08 +0000] "\x16\x03\x03\x01\xBF\x01\x00\x01\xBB\x03\x03\xD90\xF6+\xCEIvRr\xEF\x84{\x82\xD0\xA0\xFB8\xAD\xEB\x11\x1D\xC4 " 400 157 "-" "-" "-"
I'm assuming that the message is being delivered encrypted to the nginx server, and that I can't really just copy the bytes from one connection to another like this: let (wrote, recvd) = tokio::io::copy_bidirectional(&mut upgraded, &mut tls_target).await?;
Maybe I should wrap the "upgraded" connection in some sort of TlsAcceptor so it can decrypt the bytes before writing them, but I could not figure out how to do it.
Does anyone have any thoughts on this?
Here is my nginx config:
server {
    listen 80;
    location / {
        return 301 https://$host$request_uri;
    }
}
server {
    listen 443 ssl;
    server_name nginx.test;

    ssl_certificate /etc/ssl/server.crt;
    ssl_certificate_key /etc/ssl/server.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;

    ssl_client_certificate /etc/nginx/client_certs/ca.crt;
    ssl_verify_client optional;
    ssl_verify_depth 2;

    location / {
        if ($ssl_client_verify != SUCCESS) { return 403; }
        proxy_set_header SSL_Client_Issuer $ssl_client_i_dn;
        proxy_set_header SSL_Client $ssl_client_s_dn;
        proxy_set_header SSL_Client_Verify $ssl_client_verify;
        return 200 "yay";
    }
}

Receiving a 502 CORS error despite having enabled CORS in the server

I have two servers, running on the same virtual machine:
https://xxx.domain1.com (the front-end)
https://yyy.domain1.com (the back-end, only called from the front-end)
Both run under nginx, and they run correctly on my development Ubuntu 20.04.1 machine.
Now I'm moving them to AWS: I created a Linux machine with the same OS and transferred both servers.
So now I have
https://xxx.domain2.com (the front-end)
https://yyy.domain2.com (the back-end, only called from the front-end)
The second server will always be called only by the first one. It should be considered hidden.
I ran them, but when accessing the front-end for the login, I received the following error:
OPTIONS https://xxx.domain2.com/login CORS Missing Allow Origin
Now, in the server https://yyy.domain2.com I always specified
const router = express();
router.use(cors())
and the full nginx config file is as follows
server {
    server_name xxx.domain2.com;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:3000;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/xxx.domain2.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/xxx.domain2.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    server_name yyy.domain2.com;
    add_header Access-Control-Allow-Origin "xxx.domain2.com";
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:3001;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/xxx.domain2.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/xxx.domain2.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = xxx.domain2.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;
    server_name xxx.domain2.com;
    return 404; # managed by Certbot
}
server {
    if ($host = yyy.domain2.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;
    server_name yyy.domain2.com;
    return 404; # managed by Certbot
}
Please note that I added the line
add_header Access-Control-Allow-Origin "xxx.domain2.com";
that I don't have on my development server.
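Two details about that added line (assumptions worth verifying, since CORS is normally handled here by the Express cors() middleware itself): the Access-Control-Allow-Origin value must be a full origin including the scheme, and nginx's add_header is dropped on non-2xx/3xx responses unless "always" is given. As nginx config that would read:

```nginx
add_header Access-Control-Allow-Origin "https://xxx.domain2.com" always;
```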
====================== FIRST ADDENDUM ==========================
This is the offending client code; the error is in response to the const res = await axios.put(cfLogin, { 'cf': cf }); request below.
const handleSubmitCf = async (e) => {
    e.preventDefault();
    setSudo(false)
    try {
        const res = await axios.put(cfLogin, { 'cf': cf });
        if (res.status === 200 || res.status === 201)
        {
            nextPhase();
            setResponse(res.data.data1);
        }
        setErrore('');
    }
    catch (error) { setErrore(error.response.data); };
}

Istio retry isn't triggered when an error occurs at transport layer

Over the last few days, I was trying to understand the Istio retry policy. Then I found a field named "retry-on". The default value of this field is below (I use Istio version 1.14.3).
RetryOn: "connect-failure,refused-stream,unavailable,cancelled,retriable-status-codes"
link to the source code
I want to know which cases are included in "connect-failure". The documentation explains it like this:
connect-failure
Envoy will attempt a retry if a request is failed because of a connection failure to the upstream server (connect timeout, etc.). (Included in 5xx)
NOTE: A connection failure/timeout is at the TCP level, not the request level. This does not include upstream request timeouts specified via x-envoy-upstream-rq-timeout-ms or via route configuration or via virtual host retry policy.
link to doc
So, I think it will retry if any error occurs at the TCP level in the transport layer. I tried to prove that by creating 2 pods in the Kubernetes cluster. The first is Nginx, forwarding every HTTP request to the second. The second is a NodeJS TCP server that breaks the TCP connection if you send an HTTP request with the "/error" path. Both are shown below.
Nginx
user nginx;
error_log /var/log/nginx/error.log warn;

events {
    worker_connections 65535;
    use epoll;
    multi_accept on;
}

http {
    log_format main escape=json
        '{'
        '"clientIP":"$remote_addr",'
        '"time-local":"$time_local",'
        '"server-port":"$server_port",'
        '"message":"$request",'
        '"statusCode":"$status",'
        '"dataLength":"$body_bytes_sent",'
        '"referer":"$http_referer",'
        '"userAgent":"$http_user_agent",'
        '"xForwardedFor":"$http_x_forwarded_for",'
        '"upstream-response-time":"$upstream_response_time",'
        '"correlation-id":"$http_x_correlation_id",'
        '"user-tier":"$http_x_neo_user_tier",'
        '"session-id":"$http_x_session_id"'
        '}';
    access_log /var/log/nginx/access.log main;

    client_max_body_size 100m;
    client_header_timeout 5m; # default 60s
    client_body_timeout 5m;   # default 60s
    send_timeout 5m;          # default 60s
    proxy_connect_timeout 5m;
    proxy_send_timeout 5m;
    proxy_read_timeout 5m;

    server {
        listen 8080;
        location / {
            proxy_pass http://ice-node-service.neo-platform.svc.cluster.local:8080;
        }
    }
}
NodeJS
var net = require('net');
var server = net.createServer();
server.listen(8080, '127.0.0.1');

server.addListener('close', () => {
    console.log('close');
})

server.addListener('connection', socket => {
    console.log('connect');
    socket.addListener('data', data => {
        try {
            const [method, path] = data.toString().split("\n")[0].split(" ")
            console.log(method, path);
            if (path === "/error") {
                socket.destroy(new Error("force error"))
            } else {
                socket.write(respond())
                socket.end()
            }
        } catch (e) {
            console.log(e);
        }
    })
})

server.addListener('error', err => {
    console.log('error', err);
})

server.addListener('listening', () => {
    console.log('listening');
})

function respond() {
    const body = `<html><body>Hello</body></html>`
    return `HTTP/1.1 200 OK
Date: ${new Date().toGMTString()}
Server: Apache
Last-Modified: Tue, 01 Dec 2009 20:18:22 GMT
ETag: "51142bc1-7449-479b075b2891b"
Accept-Ranges: bytes
Content-Length: ${body.length + 2}
Content-Type: text/html

${body}\r\n`
}
So, I tried to send a request through Nginx to the NodeJS server on the "/error" path. I expected Istio to resend the request when the TCP connection broke, but it wasn't retried, and I want to know why.
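For experimenting outside the cluster, the NodeJS server above can be mirrored in plain Python (a sketch; the RST is forced with SO_LINGER, which is an assumption about what socket.destroy() does rather than an exact equivalent):

```python
import socket
import struct
import threading

def serve_once(port):
    """Accept one connection; reset it on /error, otherwise answer 200 OK."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    request_line = conn.recv(4096).split(b"\r\n")[0]
    parts = request_line.split(b" ")
    path = parts[1] if len(parts) > 1 else b"/"
    if path == b"/error":
        # SO_LINGER with timeout 0 makes close() send a TCP RST instead of FIN
        conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))
        conn.close()
    else:
        body = b"<html><body>Hello</body></html>"
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: %d\r\n\r\n%s" % (len(body), body))
        conn.close()
    srv.close()
```

Pointing a plain HTTP client at /error then shows whether the client (or the proxy in front of it) actually retries after the reset.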

Invoking requests.get() within flask application sub-class is causing uwsgi segmentation fault and 502 on nginx

I'm facing an issue with my current flask app setup and would really appreciate some input on this. Thank you!
Flow
user --> nginx --> uwsgi --> flask app --> https call to external system (response is processed and relevant data returned to client)
Workflow
Intent: My Flask view/route invokes another class, within which an HTTPS (GET) call is made to an external system to retrieve data. This data is then processed (analyzed) and an appropriate response is sent to the user.
Actual: The user receives a 502 Bad Gateway from the webserver upon invoking the Flask endpoint. This only happens when placing the nginx and uWSGI server in front of my Flask application; initial tests directly on the server with Flask's built-in server appeared to work.
Note: the analytics bit does take up some time, so I increased all relevant timeouts (to no avail).
Configurations
Nginx (tried with and without TLS)
worker_processes 4;
error_log /path/to/error.log;
pid /path/to/nginx.pid;

events {
    worker_connections 1024;
}

http {
    default_type application/json;
    access_log /path/to/access.log;
    sendfile on;
    keepalive_timeout 0; # multiple values tried

    # HTTPS server
    server {
        listen 5555 ssl;
        server_name my_host.domain.com;

        ssl_certificate /path/to/server.crt;
        ssl_certificate_key /path/to/server.key;
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout 5m;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        location /my_route {
            uwsgi_connect_timeout 60s;
            uwsgi_read_timeout 300s;
            client_body_timeout 300s;
            include uwsgi_params;
            uwsgi_pass unix:/path/to/my/app.sock;
        }
    }
}
uWSGI (threads reduced to 1 as part of troubleshooting attempts)
[uwsgi]
module = wsgi:app
# harakiri added as part of troubleshooting steps
harakiri = 300
logto = /path/to/logs/uwsgi_%n.log
master = true
processes = 1
threads = 1
socket = app.sock
chmod-socket = 766
vacuum = true
socket-timeout = 60
die-on-term = true
Code Snippets
Main Flask Class (view)
@app.route(my_route, methods=['POST'])
def my_view():
    request_json = request.json
    app.logger.debug(f"Request Received: {request_json}")
    schema = MySchema()
    try:
        schema.load(request_json)
        var1 = request_json["var1"]
        var2 = request_json["var2"]
        var3 = request_json["var3"]
        var4 = request_json["var4"]
        # begin
        execute = AnotherClass(client_home, config, var1, var2, var3, var4, mime_type)
        return jsonify(execute.result)
    except ValidationError as exception:
        error_message = json.dumps(exception.messages)
        abort(Response(error_message, 400, mimetype=mime_type))
Class which executes the HTTPS GET on the external system
custom_adapter = HTTPAdapter(max_retries=3)
session = requests.Session()
session.proxies = self.proxies
session.mount("https://", custom_adapter)
try:
    json_data = json.loads(session.get(process_endpoint, headers=self.headers, timeout=(3, 6)).text)
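A side note on the adapter (a sketch, assuming requests/urllib3 are installed; not the original code): passing max_retries=3 as a bare integer only retries connection errors. A urllib3 Retry object makes the policy explicit, including which HTTP statuses get retried:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Explicit retry policy: 3 attempts, exponential backoff, retry common gateway errors
retry_policy = Retry(total=3, backoff_factor=0.5, status_forcelist=[502, 503, 504])
custom_adapter = HTTPAdapter(max_retries=retry_policy)

session = requests.Session()
session.mount("https://", custom_adapter)
```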
Errors
Nginx
error] 22680#0: *1 upstream prematurely closed connection while
reading response header from upstream, client: client_ip, server:
server_name, request: "POST /my_route HTTP/1.1", upstream:
"uwsgi://unix:/path/to/my/app.sock:", host: "server_name:5555"
User gets a 502 on their end (Bad Gateway)
uWSGI
2020-04-24 16:57:23,873 - app.module.module_class - DEBUG - Endpoint:
https://external_system.com/endpoint_details 2020-04-24 16:57:23,876 -
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1):
external_system.com:443 !!! uWSGI process #### got Segmentation Fault
!!!
* backtrace of #### /path/to/anaconda3/bin/uwsgi(uwsgi_backtrace+0x2e) [0x610e8e]
/path/to/anaconda3/bin/uwsgi(uwsgi_segfault+0x21) [0x611221]
/usr/lib64/libc.so.6(+0x363f0) [0x7f6c22b813f0]
/path/to/anaconda3/lib/python3.7/lib-dynload/../../libssl.so.1.0.0(ssl3_ctx_ctrl+0x170)
[0x7f6c191b77b0]
/path/to/anaconda3/lib/python3.7/site-packages/cryptography/hazmat/bindings/_openssl.abi3.so(+0x5a496)
[0x7f6c16de2496]
....
end of backtrace * DAMN ! worker 1 (pid: ####) died :( trying respawn ... Respawned uWSGI worker 1 (new pid: ####)
SOLVED
Steps taken
update cryptography
update requests
update urllib3
add missing TLS ciphers to Py HTTP Adapter (follow this guide)
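In similar uWSGI + requests/cryptography segfaults, the crash has been traced to OpenSSL state initialized in the master process before fork. Besides the library updates above, these uwsgi.ini options are often suggested (an assumption based on the backtrace, not part of the confirmed fix here):

```ini
[uwsgi]
# initialize the app after fork() so OpenSSL state is not shared across workers
lazy-apps = true
# required when the app spawns threads (requests/urllib3 may)
enable-threads = true
```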
