I have Project Gazelle installed on my Ubuntu 12.04 Linux server. I have uploaded a .iso and want to download it, but Transmission gives me this error:
Tracker gave an error: Tracker gave HTTP response code 404
This isn't a fatal error; it's usually alleviated by having multiple trackers for each torrent. In this instance it most likely means that the host responding to the tracker GET request isn't serving tracker responses at the path requested. A common request path is something like /announce.
You might assume the error is at the server end and give them some time to resume functionality; ignore the error completely, since Transmission can find peers through other trackers and other means such as DHT; or check for proxies and other HTTP caches on the path to the origin server that might be causing an incorrect response.
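For context, a tracker announce is just an HTTP GET against a path like /announce, so a 404 means the host answered but nothing is serving tracker responses at that path. A sketch of how a client builds that request (the tracker URL, hash, and peer id below are made-up placeholders):

```python
from urllib.parse import urlencode

def build_announce_url(tracker, info_hash, peer_id, port=6881):
    """Build the HTTP GET URL a BitTorrent client sends to a tracker."""
    params = {
        'info_hash': info_hash,   # 20-byte SHA-1 of the torrent's info dict
        'peer_id': peer_id,       # 20-byte client identifier
        'port': port,
        'uploaded': 0,
        'downloaded': 0,
        'left': 0,
        'event': 'started',
    }
    # urlencode percent-escapes the raw bytes of info_hash and peer_id
    return tracker + '?' + urlencode(params)

url = build_announce_url('http://tracker.example.com/announce',
                         b'\xaa' * 20, b'-TR2840-000000000000')
print(url)
```

If fetching a URL like this by hand returns 404, the tracker is reachable but misconfigured (or announcing at a different path), which matches the non-fatal diagnosis above.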
Related
When a process is stuck downloading a remote file (I can see from jstack that it is blocked in a socket read), is there any Linux command to tell what the actual URL of the remote file being downloaded is?
Tools like lsof seem to give only the remote host, not the path of the particular remote file.
Considering that the client has already sent the HTTP request to the server and is now waiting for the response, the exact URL requested is no longer available on the network. Nor is the URL available in the socket state, because sockets deal only with network- and transport-layer information, not with application-level information (i.e. HTTP). If you are lucky you can find the original URL somewhere inside the memory of the application, but since the application no longer needs it (the request has been sent), it may not even be known to the application any more.
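A small self-contained demonstration of this point: once the GET line has been sent, the operating system keeps only the transport-layer endpoints for the socket, never the request path (the tiny local "server" here exists only to make the example runnable):

```python
import socket
import threading

# Tiny local listener so the example is self-contained.
srv = socket.socket()
srv.bind(('127.0.0.1', 0))
srv.listen(1)
host, port = srv.getsockname()

def accept_once():
    conn, _ = srv.accept()
    conn.recv(1024)   # the GET line arrives here, then it is gone
    # never respond, so the client would block in a read, as in the question

threading.Thread(target=accept_once, daemon=True).start()

cli = socket.create_connection((host, port))
cli.sendall(b'GET /some/remote/file.iso HTTP/1.1\r\nHost: example\r\n\r\n')

# Everything the OS (and hence lsof) can tell you about this socket:
print(cli.getpeername())   # just (address, port) -- no URL anywhere
```

This is why lsof stops at host:port; to recover the path you would have to capture the request itself (e.g. with tcpdump) before or while it is sent.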
I'm trying to limit the upload file size, so I set app.config['MAX_CONTENT_LENGTH'] to the maximum value I want, and I used this code to display the error:
@app.errorhandler(413)
def request_entity_too_large(error):
    return 'File Too Large', 413
When using curl, the error is displayed correctly.
I checked using Firefox and Safari; in both I get a browser error about the connection being dropped/reset.
Firefox
The connection was reset
The connection to the server was reset while the page was loading.
The site could be temporarily unavailable or too busy. Try again in a few moments.
If you are unable to load any pages, check your computer's network connection.
If your computer or network is protected by a firewall or proxy, make sure that Firefox is permitted to access the Web.
Safari
Can't open the page
...server unexpectedly dropped the connection...
The server log for all of those requests:
192.168.1.1 - - [23/May/2015 15:50:34] "POST / HTTP/1.1" 413 -
Why doesn't the error display correctly?
It's an issue with Flask's development server, and you needn't worry about it; running the application with a production server will solve the problem.
In this snippet posted by Armin Ronacher, he said:
You might notice that if you start not accessing .form or .files on incoming POST requests, some browsers will honor this with a connection reset message. This can happen if you start rejecting uploads that are larger than a given size.
Some WSGI servers solve that problem for you, others do not. For instance the builtin Flask webserver is pretty dumb and will not attempt to fix this problem.
See update here.
Grey Li's answer explains why the problem happens.
But we can still fix it by handling RequestEntityTooLarge, which is raised by werkzeug before abort(413) in Flask:
from werkzeug.exceptions import RequestEntityTooLarge

@app.errorhandler(413)
@app.errorhandler(RequestEntityTooLarge)
def app_handle_413(e):
    return 'File Too Large', 413
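Putting the question and both answers together, a minimal runnable sketch (the route, the 1 KiB limit, and the response text are illustrative choices, not anything prescribed by Flask):

```python
from flask import Flask, request
from werkzeug.exceptions import RequestEntityTooLarge

app = Flask(__name__)
app.config['MAX_CONTENT_LENGTH'] = 1024  # illustrative 1 KiB limit

@app.errorhandler(413)
@app.errorhandler(RequestEntityTooLarge)
def app_handle_413(e):
    return 'File Too Large', 413

@app.route('/', methods=['POST'])
def upload():
    # Parsing the body is what enforces MAX_CONTENT_LENGTH;
    # werkzeug raises RequestEntityTooLarge if the limit is exceeded.
    _ = request.form
    return 'upload accepted'
```

With the development server, browsers may still show a connection-reset page for oversized uploads, as Grey Li's answer explains; under a production WSGI server (or Flask's test client) the handler's 413 response comes through.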
We've had an issue on and off for some time now...
We have an app which syncs to a server; we know the request is getting to the server, as we have been running Wireshark and can see the incoming request.
Now this is where I need some correcting if I'm wrong somewhere...
My understanding is that the traffic goes from the network card on the machine to HTTP.sys, which forwards it to IIS6, which then sends it to my ISAPI executable, which in turn provides the response; that response goes back through IIS6 and out through the network card, over the WWW, back to the device.
Now, about these requests that are going missing: like I said, we can see that the request has reached the network card thanks to Wireshark, but we don't know what happened to it from that point. There's no error in the HTTP.sys log, nothing in the IIS log, and likewise nothing in the log for our ISAPI.
The fact that the HTTP.sys log is empty suggests to me that the kernel thinks it has successfully passed the request on to IIS6, but I don't know whether IIS6 logs when it first receives a request or only once it has successfully responded to it. Has anyone got any ideas on this one? It's a very strange one.
I am running a service on Azure Web Sites using PHP. From time to time, the server completely stops responding and returns a 500 HTTP error. So far, I have been able to gather these relevant details on the error:
ModuleName: FastCgiModule
Notification: EXECUTE_REQUEST_HANDLER
HttpStatus: 500
HttpReason: Internal Server Error
HttpSubStatus: 0
ErrorCode: The specified network name is no longer available. (0x80070040)
ConfigExceptionInfo:
The only info I was able to find was that this might be a denial-of-service protection under which the server stops executing the scripts (for some limited time?). For now I work around it by restarting the server, which is not good at all.
As I am unable to find the exact cause, I am looking for a better solution than manual restarting, or even a hint on how to debug the problem. Thanks
We’re trying to get Socket.io flashsockets to work in Internet Explorer 9 over HTTPS/WSS. The flashsockets work over HTTP, but HTTPS is giving us problems. We’re using socket.io version 0.8.7 and socket.io-client version 0.9.1-1.
We’re running our WebSocket server over SSL on port 443. We’ve placed our WebSocketMainInsecure.swf file (these are cross-domain WS requests) in the correct location, and we’re loading the file in the swfobject embed over HTTPS.
We opened up port 843 in our security group for our EC2 instance and the cross origin policy file is successfully being rendered over HTTP. It does not seem to render over HTTPS (Chrome throws an SSL connection error).
We’ve tried two versions of the WebSocketMainInsecure.swf file. The first is the file provided by Socket.io, built from a WebSocketMainInsecure.as that does not include the line
Security.allowInsecureDomain("*");
This throws SCRIPT16389: Unspecified error. at the WebSocket.__flash.setCallerUrl(location.href) line.
We figured it was because the SWF file was not permitting HTTPS requests, so we replaced the WebSocketMainInsecure.swf file with the one found in this repo: https://github.com/gimite/web-socket-js, because it includes the
Security.allowInsecureDomain("*");
line in the ActionScript code. When we used this, the flashsocket connection kept disconnecting and reconnecting in an infinite loop. We tracked the error down to the onSocketError function on the Transport prototype in socket.io's transport.js. It throws the error:
[Error: 139662382290912:error:1408F092:SSL routines:SSL3_GET_RECORD:data length too long:s3_pkt.c:503:]
We even tried updating both socket.io and socket.io-client to version 0.9.6 and we still got the Access is denied error.
This error has been very difficult to debug, and now we’re at a loss as to how to get flashsockets to work. We’re wondering if it might have to do with using an older version of socket.io, or maybe that our policy file server doesn’t accept HTTPS requests, or maybe even the way in which the WebSocketMainInsecure.swf file from the web-socket-js github repo was built relative to what socket.io-client expects.
I'm unsure whether this works, but here's my idea/suggestion:
Idea:
I assume that you (possibly) tried to access a URL which is too long. This happens when a lot of data is transmitted via GET parameters.
Details: the HTTP specification places no hard limit on the length of the request line, but many server and library implementations enforce one (limits of a few kilobytes are common), and a request that exceeds it may be rejected or mishandled. The first line of a GET request looks like "GET /path/to?param1=data1&param2=data2&... HTTP/1.1", and all of that has to fit within whatever limit the server enforces. For POST requests there's no such limitation, because the data travels in the request body.
However, your error seems to originate from some SSL implementation (OpenSSL?), referring to s3_pkt.c at line 503 (I found a file like this here: http://www.opensource.apple.com/source/OpenSSL/OpenSSL-7.1/openssl/ssl/s3_pkt.c, though it seems to be a different version). I don't know the details and am just speculating, but the "data length too long" message suggests OpenSSL received an SSL record larger than it was prepared to buffer and rejected it.
I see these possibilities now:
1. Use POST instead of GET requests to transmit longer datasets, and see if this works.
2. Try replacing the OpenSSL installation or libssl used on the server; it's possibly broken or outdated.
3. Try to request some help from the OpenSSL developers.
Hope that helps...
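To illustrate suggestion 1, here is the same payload expressed as a GET request line versus a POST body (the endpoint and parameter name are made up for the sketch):

```python
from urllib.parse import urlencode

payload = {'data': 'x' * 10000}          # a deliberately oversized parameter

# As a GET, everything lands on the request line the server must buffer:
get_request_line = 'GET /endpoint?' + urlencode(payload) + ' HTTP/1.1'
print(len(get_request_line))             # well past common request-line limits

# As a POST, the request line stays short; the data moves to the body,
# whose size is governed only by body-size limits, not line limits:
post_request_line = 'POST /endpoint HTTP/1.1'
post_body = urlencode(payload)
```

If the SSL layer is choking on an oversized request line, moving the data into a POST body sidesteps the problem without touching OpenSSL at all.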
Try setting the SSL_OP_MICROSOFT_BIG_SSLV3_BUFFER option (via SSL_CTX_set_options) in your OpenSSL setup (credit to Steven Henson and Jaaron Anderson from the OpenSSL mailing list).