Uploading a file larger than MAX_CONTENT_LENGTH in Flask results in a browser connection reset

I'm trying to limit the upload file size, so I set app.config['MAX_CONTENT_LENGTH'] to the maximum value I want.
I used this code to display the error:
@app.errorhandler(413)
def request_entity_too_large(error):
    return 'File Too Large', 413
When using curl, the error displays correctly.
But when I check with Firefox or Safari, I get a browser error saying the connection was dropped/reset.
Firefox
The connection was reset
The connection to the server was reset while the page was loading.
The site could be temporarily unavailable or too busy. Try again in a few moments.
If you are unable to load any pages, check your computer's network connection.
If your computer or network is protected by a firewall or proxy, make sure that Firefox is permitted to access the Web.
Safari
Can't open the page
...server unexpectedly dropped the connection...
The server log for all of these requests:
192.168.1.1 - - [23/May/2015 15:50:34] "POST / HTTP/1.1" 413 -
Why doesn't the error display correctly?

It's an issue with Flask's development server, and you needn't worry about it: running the application with a production server will solve the problem.
In this snippet posted by Armin Ronacher, he says:
You might notice that if you start not accessing .form or .files on incoming POST requests, some browsers will honor this with a connection reset message. This can happen if you start rejecting uploads that are larger than a given size.
Some WSGI servers solve that problem for you, others do not. For instance the builtin Flask webserver is pretty dumb and will not attempt to fix this problem.
See update here.
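The fix that the "smarter" WSGI servers in the quote apply is to read and discard the rest of the request body before answering, so the client finishes sending and sees the 413 response instead of a reset. A minimal sketch of that draining step (illustrative names, not Flask's or Werkzeug's actual code):

```python
import io

def drain_body(stream, chunk_size=64 * 1024):
    """Read and discard whatever remains of a request body so the
    client can finish sending before we return the 413 response."""
    discarded = 0
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        discarded += len(chunk)
    return discarded

# An in-memory buffer stands in for the client's socket stream.
leftover = io.BytesIO(b'x' * 200_000)
print(drain_body(leftover))  # 200000
```

Without this step, the server sends its response and closes while the browser is still mid-upload, which is what Firefox and Safari report as a reset.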

Grey Li's answer explains why the problem happens, but we can still fix it by handling RequestEntityTooLarge, which is raised by Werkzeug before Flask's abort(413):
from werkzeug.exceptions import RequestEntityTooLarge

@app.errorhandler(413)
@app.errorhandler(RequestEntityTooLarge)
def app_handle_413(e):
    return 'File Too Large', 413

Related

Error 503 (Service Unavailable) on old server after moving with cPanel Transfer Tool

I used cPanel's Transfer Tool to move my websites to a new IP address. It was a temporary move and I wanted to revert back to my old server today. First thing I noticed was the transfer tool changed all the A records for all sites. I changed these back using swapip, and then tried accessing the sites. They load for a very long time and finally fail with:
Service Unavailable The server is temporarily unable to service your
request due to maintenance downtime or capacity problems. Please try
again later.
Additionally, a 503 Service Unavailable error was encountered while
trying to use an ErrorDocument to handle the request.
From numerous threads, I gathered that a 503 usually occurs when System PHP-FPM is on. However, I didn't enable this either before or after moving. I didn't change any settings except the DNS, so I'm guessing it should be a DNS issue, though I'm not sure DNS issues can cause 503 errors. I've been struggling with this for a day now.
Checking Apache Error log, I see attempts to connect to the server I temporarily moved to:
[proxy_http:error] [pid 1659:tid 47454830633216] (110)Connection timed out: AH00957: HTTPS: attempt to connect to [new.ip.address]:443
After a few days of digging, I found my mistake and how to rectify it, thanks to cPanel Support. I thought it worthwhile sharing in case anyone else faces the same problem:
First, I should have disabled the live transfer feature before doing the transfer; it prevents the tool from disabling IPs and proxying domains.
Since I hadn't, I had to revert the changes the tool had made, which basically involved running the following commands:
$ whmapi1 unset_all_service_proxy_backends username=$USER
$ /scripts/xfertool --unblockdynamiccontent $username
$ whmapi1 unset_manual_mx_redirects domain=domain.tld
The link above explains what each of these scripts does.

NGINX | recv() failed (104: Connection reset by peer) while reading response header from upstream

I have been getting this error message and have tried almost everything I found on the internet that claims to fix this issue, but I've had no success :(
recv() failed (104: Connection reset by peer) while reading response header from upstream
I am using WordPress on a VPS, with 8-10 domains on the same WordPress install. I've seen no problem with any of the other sites, but for the master site of the WordPress install, whenever I go to any page I get 502 Bad Gateway.
None of the pages work. From what I can see, whenever any page on the site is opened, PHP-FPM crashes with the error below:
child 991195 exited on signal 11 (SIGSEGV) after 18.490300 seconds from start
and this keeps happening every time a page on this site is opened.
Please help me with a foolproof way of identifying the root cause of this issue and how to fix it.
Many Thanks for your help.

Azure web sites - 500 internal server error (The specified network name is no longer available)

I am running a service on Azure Web Sites using PHP. From time to time, the server completely stops responding and returns a 500 HTTP message. So far, I have gathered these relevant details on the error:
ModuleName: FastCgiModule
Notification: EXECUTE_REQUEST_HANDLER
HttpStatus: 500
HttpReason: Internal Server Error
HttpSubStatus: 0
ErrorCode: The specified network name is no longer available. (0x80070040)
ConfigExceptionInfo:
The only info I could find was that this might be DoS-attack prevention, where the server stops executing the scripts (for some limited time?). For now I work around it by restarting the server, which is not good at all.
As I am unable to find the exact cause, I am looking for a better solution than manual restarting, or even a hint on how to debug the problem. Thanks

Error 324 (net::ERR_EMPTY_RESPONSE): The server closed the connection without sending any data

I am writing a thread-pooled web server, but I have run into a very strange problem that I've been unable to solve despite much head-banging.
When I run the web server, it sometimes shows
Error 324 (net::ERR_EMPTY_RESPONSE): The server closed the connection without sending any data.
and sometimes it runs fine. I don't understand why this is happening.
Please help me overcome this strange problem.
EDIT NO. 1
When I run my HTTP web server in Google Chrome, it gives the 324 error more often than in Firefox. Why?
I guess you are running locally.
Are you building around the Apache MPM thread pool, or is it something else?
In general:
1. [localhost case] Check your .htaccess file if you are running the server locally [using your own custom server, or if your server is built around Apache MPM]. If you are not using Apache and it's a purely custom build from scratch, your server is closing the connection without returning any data.
2. [Intranet case] Check your router and .htaccess.
3. [Web server case] Check your router and .htaccess.
4. Please read the link below.
5. [For everybody else] Simply refresh the page.
The links below have detailed help:
How To Fix Error 324 (net::ERR_EMPTY_RESPONSE)
Apache MPM threadpool
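For the custom-server case in point 1, the failure mode is easy to reproduce: a server that accepts a connection, reads the request, and closes without writing a single byte back is exactly what the browser reports as ERR_EMPTY_RESPONSE. A minimal sketch with a throwaway local server (all names illustrative):

```python
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 0))        # let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def close_without_reply():
    conn, _ = server.accept()
    conn.recv(4096)                  # consume the request...
    conn.close()                     # ...then hang up with no response

worker = threading.Thread(target=close_without_reply)
worker.start()

client = socket.create_connection(('127.0.0.1', port))
client.sendall(b'GET / HTTP/1.1\r\nHost: localhost\r\n\r\n')
reply = client.recv(4096)            # b'' -- connection closed, no data sent
worker.join()
client.close()
server.close()
```

If your handler threads can hit an unhandled exception and die between accept() and write(), the client sees exactly this empty reply, which would also explain why it happens only sometimes.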

HTTPS error "data length too long" in s3_pkt.c from Socket.io

We’re trying to get Socket.io flashsockets to work in Internet Explorer 9 over HTTPS/WSS. The flashsockets work over HTTP, but HTTPS is giving us problems. We’re using socket.io version 0.8.7 and socket.io-client version 0.9.1-1.
We’re running our websocket server via SSL on port 443. We’ve specified the location of our WebsocketMainInsecure.swf file (these are cross-domain ws requests) in the correct location, and we’re loading the file in the swfobject embed over HTTPS.
We opened up port 843 in our security group for our EC2 instance and the cross origin policy file is successfully being rendered over HTTP. It does not seem to render over HTTPS (Chrome throws an SSL connection error).
We’ve tried two versions of the WebsocketMainInsecure.swf file. The first is the file provided by Socket.io, which is built off of WebsocketMainInsecure.as that does not include the line
Security.allowInsecureDomain("*");
This throws the error SCRIPT16389: Unspecified error. at the WebSocket.__flash.setCallerUrl(location.href) line.
We figured it was because the SWF file was not permitting HTTPS requests, so we replaced the WebSocketMainInsecure.swf file with the one found at this repo: https://github.com/gimite/web-socket-js because it includes the
Security.allowInsecureDomain("*");
line in the actionscript code. When we used this, we saw that the flashsocket connection kept disconnecting and reconnecting in an infinite loop. We tracked the error down to the transport.js file in the socket.io library in the onSocketError function on the Transport prototype. It throws the error:
[Error: 139662382290912:error:1408F092:SSL routines:SSL3_GET_RECORD:data length too long:s3_pkt.c:503:]
We even tried updating both socket.io and socket.io-client to version 0.9.6 and we still got the Access is denied error.
This error has been very difficult to debug, and now we’re at a loss as to how to get flashsockets to work. We’re wondering if it might have to do with using an older version of socket.io, or maybe that our policy file server doesn’t accept HTTPS requests, or maybe even the way in which the WebSocketMainInsecure.swf file from the web-socket-js github repo was built relative to what socket.io-client expects.
I'm unsure whether it works, but here's my idea/suggestion:
Idea:
I assume that you (possibly) tried to access a URL which is too long. This can happen when data is transmitted via GET parameters. HTTP itself sets no official limit on URL length, but many servers cap the request line at anywhere from a few hundred bytes to a few kilobytes.
Details: The first line of a GET request looks like "GET /path/to?param1=data1&param2=data2&... HTTP/1.1", and the whole line must fit within whatever limit the server enforces. POST requests carry their data in the body, so they have no such limitation.
However, your error seems to originate from some SSL implementation (OpenSSL?), referring to s3_pkt.c at line 503 (I found a file like this here: http://www.opensource.apple.com/source/OpenSSL/OpenSSL-7.1/openssl/ssl/s3_pkt.c, though it seems to be different). I don't know the details and am just speculating: the OpenSSL implementation may have limited support for very long GET requests and simply rejects them this way...
I see these possibilities now:
1. Solution: use POST instead of GET requests to transmit longer datasets. See if this works...
2. Try replacing your OpenSSL installation or libopenssl on the server; it's possibly broken or outdated.
3. Try requesting some help from the OpenSSL developers...
Hope that helps...
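If you want to test the URL-length theory before blaming SSL, you can measure the request line a GET would actually produce; a quick sketch (the 8190-byte cap mirrors Apache's default LimitRequestLine and is only an illustrative threshold, not an HTTP-spec limit):

```python
from urllib.parse import urlencode

REQUEST_LINE_LIMIT = 8190  # Apache's default LimitRequestLine; illustrative threshold

def request_line_length(path, params):
    """Length in bytes of the HTTP/1.1 request line a GET would send."""
    line = f"GET {path}?{urlencode(params)} HTTP/1.1\r\n"
    return len(line.encode('ascii'))

size = request_line_length('/socket.io/1/', {'payload': 'x' * 9000})
print(size, size > REQUEST_LINE_LIMIT)  # well past the cap -> move the data into a POST body
```

If the measured line stays well under the server's limit, the URL-length idea can be ruled out and the SSL layer is the more likely culprit.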
Try building OpenSSL with SSL_OP_MICROSOFT_BIG_SSLV3_BUFFER (credit to Steven Henson and Jaaron Anderson from OpenSSL mailing list).
