Invoking requests.get() within flask application sub-class is causing uwsgi segmentation fault and 502 on nginx - python-3.x

I'm facing an issue with my current flask app setup and would really appreciate some input on this. Thank you!
Flow
user --> nginx --> uwsgi --> flask app --> https call to external system (response is processed and relevant data returned to client)
Workflow
Intent: My Flask view/route invokes another class, within which an HTTPS (GET) call is made to an external system to retrieve data. This data is then processed (analyzed) and an appropriate response is sent to the user.
Actual: The user receives a 502 Bad Gateway from the web server upon invoking the Flask endpoint. This only happens when placing nginx and uWSGI in front of my Flask application; initial tests directly against Flask's built-in server worked.
Note: the analytics step does take some time, so I increased all relevant timeouts (to no avail).
Configurations
Nginx (tried with and without TLS)
worker_processes 4;
error_log /path/to/error.log;
pid /path/to/nginx.pid;

events {
    worker_connections 1024;
}

http {
    default_type application/json;
    access_log /path/to/access.log;
    sendfile on;
    keepalive_timeout 0; # multiple values tried

    # HTTPS server
    server {
        listen 5555 ssl;
        server_name my_host.domain.com;

        ssl_certificate /path/to/server.crt;
        ssl_certificate_key /path/to/server.key;
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout 5m;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        location /my_route {
            uwsgi_connect_timeout 60s;
            uwsgi_read_timeout 300s;
            client_body_timeout 300s;
            include uwsgi_params;
            uwsgi_pass unix:/path/to/my/app.sock;
        }
    }
}
uWSGI (threads reduced to 1 as part of troubleshooting attempts)
[uwsgi]
module = wsgi:app
harakiri = 300 # also added as part of troubleshooting steps
logto = /path/to/logs/uwsgi_%n.log
master = true
processes = 1
threads = 1
socket = app.sock
chmod-socket = 766
vacuum = true
socket-timeout = 60
die-on-term = true
Code Snippets
Main Flask Class (view)
@app.route(my_route, methods=['POST'])
def my_view():
    request_json = request.json
    app.logger.debug(f"Request Received: {request_json}")
    schema = MySchema()
    try:
        schema.load(request_json)
        var1 = request_json["var1"]
        var2 = request_json["var2"]
        var3 = request_json["var3"]
        var4 = request_json["var4"]
        # begin
        execute = AnotherClass(client_home, config, var1, var2, var3, var4, mime_type)
        return jsonify(execute.result)
    except ValidationError as exception:
        error_message = json.dumps(exception.messages)
        abort(Response(error_message, 400, mimetype=mime_type))
Class which executes HTTPS GET on external system
custom_adapter = HTTPAdapter(max_retries=3)
session = requests.Session()
session.proxies = self.proxies
session.mount("https://", custom_adapter)
try:
    json_data = json.loads(session.get(process_endpoint, headers=self.headers, timeout=(3, 6)).text)
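For context, a self-contained version of that call might look like the following sketch (fetch_external and the exception handling are illustrative additions; process_endpoint, headers, and proxies mirror the class attributes in the snippet above):

# Illustrative, self-contained version of the snippet above; the
# function name and the error handling are assumptions.
import requests
from requests.adapters import HTTPAdapter

def fetch_external(process_endpoint, headers, proxies=None):
    session = requests.Session()
    session.proxies = proxies or {}
    session.mount("https://", HTTPAdapter(max_retries=3))
    try:
        response = session.get(process_endpoint, headers=headers, timeout=(3, 6))
        response.raise_for_status()
        return response.json()
    except requests.RequestException as exc:
        # Surface transport-level failures instead of crashing the worker
        raise RuntimeError(f"External call failed: {exc}") from exc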
Errors
Nginx
[error] 22680#0: *1 upstream prematurely closed connection while
reading response header from upstream, client: client_ip, server:
server_name, request: "POST /my_route HTTP/1.1", upstream:
"uwsgi://unix:/path/to/my/app.sock:", host: "server_name:5555"
User gets a 502 on their end (Bad Gateway)
uWSGI
2020-04-24 16:57:23,873 - app.module.module_class - DEBUG - Endpoint: https://external_system.com/endpoint_details
2020-04-24 16:57:23,876 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): external_system.com:443
!!! uWSGI process #### got Segmentation Fault !!!
*** backtrace of #### ***
/path/to/anaconda3/bin/uwsgi(uwsgi_backtrace+0x2e) [0x610e8e]
/path/to/anaconda3/bin/uwsgi(uwsgi_segfault+0x21) [0x611221]
/usr/lib64/libc.so.6(+0x363f0) [0x7f6c22b813f0]
/path/to/anaconda3/lib/python3.7/lib-dynload/../../libssl.so.1.0.0(ssl3_ctx_ctrl+0x170) [0x7f6c191b77b0]
/path/to/anaconda3/lib/python3.7/site-packages/cryptography/hazmat/bindings/_openssl.abi3.so(+0x5a496) [0x7f6c16de2496]
....
*** end of backtrace ***
DAMN ! worker 1 (pid: ####) died :( trying respawn ...
Respawned uWSGI worker 1 (new pid: ####)

SOLVED
Steps taken:
update cryptography
update requests
update urllib3
add missing TLS ciphers to the Python HTTPAdapter (follow this guide; see the sketch below)
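For reference, a minimal sketch of that last step (the cipher string below is an example, not the one from the original post): build a custom SSL context with the extra ciphers and hand it to the adapter's pool manager.

# Minimal sketch: pin an explicit cipher list on the adapter's SSL
# context. The CIPHERS value is an example; adjust it for your target.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.ssl_ import create_urllib3_context

CIPHERS = "ECDHE+AESGCM:DHE+AESGCM:HIGH:!aNULL:!MD5"

class CipherAdapter(HTTPAdapter):
    def init_poolmanager(self, *args, **kwargs):
        kwargs["ssl_context"] = create_urllib3_context(ciphers=CIPHERS)
        return super().init_poolmanager(*args, **kwargs)

session = requests.Session()
session.mount("https://", CipherAdapter(max_retries=3))

Every HTTPS request made through the session then negotiates with the pinned cipher list.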

Related

Creating a HTTP proxy to handle mTLS connections with Hyper

I need some help creating a proxy in hyper that resolves an mTLS connection. I used this example https://github.com/hyperium/hyper/blob/0.14.x/examples/http_proxy.rs as a starting point and worked through adding tokio-rustls to support the mTLS connection. Here is the code that handles the HTTP tunnel.
async fn tunnel(
    mut upgraded: Upgraded,
    destination: (String, u16),
    certificates: Certificates,
) -> std::io::Result<()> {
    let root_cert_store = prepare_cert_store(&certificates);
    let tls_client_config = tls_config(certificates, root_cert_store);
    let tls_connector = TlsConnector::from(Arc::new(tls_client_config));

    let target = TcpStream::connect(target_address(&destination)).await?;
    let domain = rustls::ServerName::try_from(destination.0.as_str()).expect("Invalid DNSName");
    let mut tls_target = tls_connector.connect(domain, target).await?;
    debug!("TlS Connection ready");

    let (wrote, recvd) = tokio::io::copy_bidirectional(&mut upgraded, &mut tls_target).await?;
    debug!("Client wrote {} and received {} bytes", wrote, recvd);
    Ok(())
}
To make the proxy connection I made a really small snippet in Kotlin:
fun main() {
    val fm = FuelManager.instance
    fm.proxy = Proxy(Proxy.Type.HTTP, InetSocketAddress("127.0.0.1", 8100))
    repeat(1) {
        val response = "https://nginx-test".httpGet()
            .responseString()
        println("Response: $response")
        println("Response Data: ${response.second.data.toString(Charset.defaultCharset())}")
    }
}
It just sets the proxy address and makes a call to a local nginx server where the mTLS auth is expected to occur.
This Kotlin code throws the following error: Unsupported or unrecognized SSL message
And the Nginx logs the request like this:
172.18.0.1 - - [01/Feb/2023:17:51:08 +0000] "\x16\x03\x03\x01\xBF\x01\x00\x01\xBB\x03\x03\xD90\xF6+\xCEIvRr\xEF\x84{\x82\xD0\xA0\xFB8\xAD\xEB\x11\x1D\xC4 " 400 157 "-" "-" "-"
I'm assuming that the message is being delivered to the Nginx server still encrypted, and that I can't simply copy the bytes from one connection to the other like this: let (wrote, recvd) = tokio::io::copy_bidirectional(&mut upgraded, &mut tls_target).await?;
Maybe I should wrap the "upgraded" connection in some sort of TlsAcceptor so it can decrypt the bytes before writing them (sketched below), but I could not figure out how to do it.
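For what it's worth, here is the idea sketched in Python rather than Rust, purely to illustrate the architecture (certificate paths and hostnames are placeholders): to see plaintext, the proxy must terminate the client's TLS itself, which is the role a TlsAcceptor would play on the upgraded connection, and open a second (m)TLS session toward the target, relaying decrypted bytes between the two.

# Conceptual sketch only (Python stand-in for the Rust code): terminate
# TLS from the client, re-originate mTLS to the target, relay plaintext.
import socket
import ssl
import threading

def pipe(src, dst):
    # Copy bytes until EOF, then half-close the peer.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def relay(client_sock, target_host):
    # 1) Terminate TLS from the client with the proxy's own certificate
    #    (what wrapping "upgraded" in an acceptor would achieve).
    server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    server_ctx.load_cert_chain("proxy.crt", "proxy.key")
    client_tls = server_ctx.wrap_socket(client_sock, server_side=True)

    # 2) Re-originate a fresh mTLS session toward the real target.
    client_ctx = ssl.create_default_context()
    client_ctx.load_cert_chain("client.crt", "client.key")  # mTLS identity
    target_tls = client_ctx.wrap_socket(
        socket.create_connection((target_host, 443)),
        server_hostname=target_host,
    )

    # 3) Relay *decrypted* bytes in both directions.
    t = threading.Thread(target=pipe, args=(client_tls, target_tls))
    t.start()
    pipe(target_tls, client_tls)
    t.join()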
Does anyone have any thoughts on this?
Here is my nginx config:
server {
    listen 80;
    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name nginx.test;

    ssl_certificate /etc/ssl/server.crt;
    ssl_certificate_key /etc/ssl/server.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;

    ssl_client_certificate /etc/nginx/client_certs/ca.crt;
    ssl_verify_client optional;
    ssl_verify_depth 2;

    location / {
        if ($ssl_client_verify != SUCCESS) { return 403; }
        proxy_set_header SSL_Client_Issuer $ssl_client_i_dn;
        proxy_set_header SSL_Client $ssl_client_s_dn;
        proxy_set_header SSL_Client_Verify $ssl_client_verify;
        return 200 "yay";
    }
}

Istio retry isn't triggered when an error occurs at transport layer

Over the last few days, I have been trying to understand the Istio retry policy. I found a field named "retry-on"; its default value is below (I use Istio version 1.14.3).
RetryOn: "connect-failure,refused-stream,unavailable,cancelled,retriable-status-codes"
link to the source code
I want to know which cases are included in "connect-failure". The documentation explains it like this:
connect-failure
Envoy will attempt a retry if a request is failed because of a connection failure to the upstream server (connect timeout, etc.). (Included in 5xx)
NOTE: A connection failure/timeout is at the TCP level, not the request level. This does not include upstream request timeouts specified via x-envoy-upstream-rq-timeout-ms or via route configuration or via virtual host retry policy.
link to doc
So, I think it will retry if any error occurs in the TCP protocol at the transport layer. I tried to prove that by creating two pods in the Kubernetes cluster: the first is Nginx, forwarding every HTTP request to the second; the second is a NodeJS TCP server that breaks the TCP connection if you send an HTTP request with the "/error" path. Both are shown below.
Nginx
user nginx;
error_log /var/log/nginx/error.log warn;

events {
    worker_connections 65535;
    use epoll;
    multi_accept on;
}

http {
    log_format main escape=json
        '{'
        '"clientIP":"$remote_addr",'
        '"time-local":"$time_local",'
        '"server-port":"$server_port",'
        '"message":"$request",'
        '"statusCode":"$status",'
        '"dataLength":"$body_bytes_sent",'
        '"referer":"$http_referer",'
        '"userAgent":"$http_user_agent",'
        '"xForwardedFor":"$http_x_forwarded_for",'
        '"upstream-response-time":"$upstream_response_time",'
        '"correlation-id":"$http_x_correlation_id",'
        '"user-tier":"$http_x_neo_user_tier",'
        '"session-id":"$http_x_session_id"'
        '}';
    access_log /var/log/nginx/access.log main;

    client_max_body_size 100m;
    client_header_timeout 5m; # default 60s
    client_body_timeout 5m; # default 60s
    send_timeout 5m; # default 60s
    proxy_connect_timeout 5m;
    proxy_send_timeout 5m;
    proxy_read_timeout 5m;

    server {
        listen 8080;
        location / {
            proxy_pass http://ice-node-service.neo-platform.svc.cluster.local:8080;
        }
    }
}
NodeJS
var net = require('net');
var server = net.createServer();
server.listen(8080, '127.0.0.1');

server.addListener('close', () => {
    console.log('close');
})

server.addListener('connection', socket => {
    console.log('connect');
    socket.addListener('data', data => {
        try {
            const [method, path] = data.toString().split("\n")[0].split(" ")
            console.log(method, path);
            if (path === "/error") {
                socket.destroy(new Error("force error"))
            } else {
                socket.write(respond())
                socket.end()
            }
        } catch (e) {
            console.log(e);
        }
    })
})

server.addListener('error', err => {
    console.log('error', err);
})

server.addListener('listening', () => {
    console.log('listening');
})

function respond() {
    const body = `<html><body>Hello</body></html>`
    return `HTTP/1.1 200 OK
Date: ${new Date().toGMTString()}
Server: Apache
Last-Modified: Tue, 01 Dec 2009 20:18:22 GMT
ETag: "51142bc1-7449-479b075b2891b"
Accept-Ranges: bytes
Content-Length: ${body.length + 2}
Content-Type: text/html

${body}\r\n`
}
So, I sent a request through Nginx to the NodeJS server on the "/error" path. I expected Istio to resend the request when the TCP connection broke, but it wasn't retried. I want to know why.
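(For anyone reproducing this by hand, a trivial sketch of the manual test, hitting the NodeJS service directly at the upstream host from the nginx config above; only the path differs between success and failure.)

# Repro sketch: /error should surface a broken connection rather than
# an HTTP status; any other path returns the canned 200.
import requests

try:
    r = requests.get("http://ice-node-service.neo-platform.svc.cluster.local:8080/error", timeout=5)
    print(r.status_code)
except requests.ConnectionError as exc:
    print("connection broken as expected:", exc)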

Error: unknown directive "rtmp_stat" (NGINX fails when using NOALBS .conf file)

I've never used Linux or a Raspberry Pi before, but I bought one in order to create my own RTMP server. I made one with NGINX and got it working. Now I'm trying to incorporate NOALBS for more functionality. I renamed my working .conf to nginx-old.conf and copied in the NOALBS .conf, and now NGINX fails with this error:
nginx: [emerg] unknown directive "rtmp_stat" in /etc/nginx/nginx.conf:26
nginx: configuration file /etc/nginx/nginx.conf test failed
nginx.service: Control process exited, code=exited, status=1/FAILURE
nginx.service: Failed with result 'exit-code'.
My .conf is as follows:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    sendfile off;
    tcp_nopush on;
    directio 512;
    include mime.types;
    default_type application/octet-stream;
    ignore_invalid_headers on;
    log_format compression '';

    server {
        listen 80;
        server_name localhost;

        # This URL provides RTMP statistics in XML
        location /stat {
            if ($request_method = "GET") {
                add_header "Access-Control-Allow-Origin" *;
            }
            rtmp_stat all;
            # Use this stylesheet to view XML as web page
            # in browser
            rtmp_stat_stylesheet /stat.xsl;
        }

        location /stat.xsl {
            # XML stylesheet to view RTMP stats.
            # Copy stat.xsl wherever you want
            # and put the full directory path here
            # root /path/to/stat.xsl/;
            root /var/www/html/stat.xsl;
        }

        location /control {
            rtmp_control all;
        }
    }
}
rtmp {
    log_format compression '';

    server {
        listen 1935;
        ping 30s;
        notify_method get;
        chunk_size 8192;
        ack_window 8192;
        sync 4ms;
        interleave on;
        access_log logs/rtmp-access.log compression;

        # Stream to "rtmp://IPHERE/publish/live".
        application publish {
            live on;
            wait_video on;
            wait_key on;
            exec_options on;
            publish_notify on;
            play_restart on;
            drop_idle_publisher 4s;
            idle_streams off;
            sync 4ms;
            interleave on;
        }
    }
}
The only thing I've edited is actually adding the stat.xsl path and my Pi's IP address (which I've scrubbed here). Can anyone help me out?
Add this at the top of your nginx.conf; the modules-enabled include loads the RTMP module that provides the rtmp_stat directive:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

Nginx Proxy + NodeJS WebSocket + >17KB messages. No traffic. Who is the culprit?

Either it is impossible to increase the buffer size enough to avoid dropping frames,
OR
I am unable to manage WS fragmentation correctly.
Summary
My goal:
A very simple thing: a WebSocket tunnel that can transfer at least 2-3 MB of data per tunnel. I need to send directory structures, so the data can be quite large.
The problem:
Sending WebSocket messages over 17KB from A to B causes a "communication loss" or packet drop; the connection/tunnel remains up, but no new messages can be sent over the same tunnel from A to B. Conversely, B to A continues to work.
I must restart the tunnel to get functionality back.
Restarting the tunnel whenever a packet threshold is reached could be a workaround, but clearly I need to send more than the threshold at one time.
The "signal path":
GoLang app(Client) ---> :443 NGINX Proxy(Debian) ---> :8050 NodeJS WS Server
The tests:
Sending X messages/chunks of 1000 bytes each | messages are received up to the 17th chunk; the following ones are not received (see below)
The analyses:
Wireshark on Go app shows the flow of all packets
tcpdump, on Debian machine, set to listen on eth (public), shows the flow of all packets
tcpdump, on Debian machine, set to listen on lo interface (for rev proxy scanning), shows the flow of all packets
NodeJS/fastify-websocket ws.on('message', (msg)=>{console.log(msg)}) shows up to the 17th chunk
Code & Config:
GoLang app relevant part
websocket.DefaultDialer = &websocket.Dialer{
    Proxy:            http.ProxyFromEnvironment,
    HandshakeTimeout: 45 * time.Second,
    WriteBufferSize:  1000, // also tried with 2000, 5000, 10000, 11000
}
c, _, err := websocket.DefaultDialer.Dial(u.String(), nil)
wsConn = c

bufferChunk := 1000
bufferSample := ""
for j := 7; j <= bufferChunk; j++ {
    bufferSample = bufferSample + "0"
}

i := 1
for {
    sendingBytes := i * bufferChunk
    fmt.Println(strconv.Itoa(sendingBytes) + " bytes sent")
    wsConn.WriteMessage(websocket.TextMessage, []byte(bufferSample))
    i++
    time.Sleep(1000 * time.Millisecond)
}
NGINX conf:
upstream backend {
    server 127.0.0.1:8050;
}

server {
    server_name my.domain.com;

    large_client_header_buffers 8 32k;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_buffers 8 2m;
        proxy_buffer_size 10m;
        proxy_busy_buffers_size 10m;

        proxy_pass http://backend;
        proxy_redirect off;
        #proxy_buffering off; ### ON/OFF IT'S THE SAME

        # enables WS support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade"; ### "upgrade" it's the same
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/my.domain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/my.domain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = my.domain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name my.domain.com;
    listen 80;
    return 404; # managed by Certbot
}
NodeJS code:
//index.js
const config = require("./config.js");
const fastify = require('fastify')();
const WsController = require("./controller");

fastify.register(require('fastify-websocket'), {
    /* these options are the same as the native nodeJS WS */
    options: {
        maxPayload: 10 * 1024 * 1024,
        maxReceivedFrameSize: 131072,
        maxReceivedMessageSize: 10 * 1024 * 1024,
        autoAcceptConnections: false
    }
});

fastify.ready(err => {
    if (err) throw err
    console.log("Server started")
    fastify.websocketServer
        .on("connection", WsController)
})

//controller.js
module.exports = (ws, req) => {
    ws.on("message", (msg) => {
        log("msg received"); //it is shown as long as the tunnel does not "fill" up to 17KB
    })
}
SOLVED
After updating fastify and fastify-websocket, the problem disappeared. What a shame!
I came up with this solution by creating a new cloud instance and installing everything from scratch.
Just npm update.
Thank you all for your support

How to run flask appbuilder with uWSGI and Nginx

I built a web server with Flask-AppBuilder. I can run this project with the command python3 run.py or fabmanage run, but it always stops responding after some hours without interaction, so I am trying to run it with Nginx.
Here is my config:
uwsgi.ini:
[uwsgi]
base = /root/flask_spider/gttx_spider/web
all_base = /root/flask_spider/gttx_spider/
app = run
module = %(app)
chdir = %(base)
virtualenv = %(all_base)/venv
socket = %(all_base)/uwsgi_gttx_spider.sock
logto = /var/log/uwsgi/%n.log
master = true
processes = 500
chmod-socket = 666
vacuum = true
callable = app
nginx.conf
server {
listen 82;
server_name gttx_spider;
charset utf-8;
client_max_body_size 75M;
location / {
include uwsgi_params;
uwsgi_pass unix:/root/flask_spider/gttx_spider/uwsgi_gttx_spider.sock;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
}
}
and modified run.py:
from app import app
app.run(host='0.0.0.0')
#app.run(host='0.0.0.0', port=8080,debug=True)
and then:
sudo ln -s /root/flask_spider/gttx_spider/nginx.conf /etc/nginx/conf.d/
sudo /etc/init.d/nginx restart
uwsgi --ini uwsgi_gttx_spider.ini
When I access IP:82, I get this log in nginx:
[error] 11104#11104: *3 upstream timed out (110: Connection timed out) while reading response header from upstream
When I access IP:5000, I get this log in uwsgi:
2018-09-10 19:36:25,747:INFO:werkzeug: * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
2018-09-10 19:36:38,434:INFO:werkzeug:115.192.37.57 - - [10/Sep/2018 19:36:38] "GET / HTTP/1.1" 302 -
2018-09-10 19:36:38,466:INFO:werkzeug:115.192.37.57 - - [10/Sep/2018 19:36:38] "GET /home/ HTTP/1.1" 200 -
Also, I tried this:
mv web/run.py web/run_bak.py
vi run.py
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=8090)
Accessing IP:82 then returns 'Hello World' and everything is fine.
The difference is that werkzeug runs the Flask-AppBuilder project on port 5000. How do I get it to serve through the uwsgi socket instead? Please help, thanks!
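For what it's worth, a likely cause given the logs above (an observation, not a confirmed fix): run.py calls app.run() at import time, so when uWSGI imports the module it starts werkzeug's development server on port 5000 and never hands the WSGI callable back, which would explain why nginx times out on the socket while IP:5000 answers. A minimal sketch of a uWSGI-friendly run.py:

# run.py (sketch): keep app.run() behind a __main__ guard so uWSGI can
# import the callable without starting werkzeug's development server.
from app import app

if __name__ == "__main__":
    # Only reached when run directly, e.g. `python3 run.py`
    app.run(host='0.0.0.0', port=8080, debug=True)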
