I am using the Acunetix tool to scan my website, but it gives an error saying it cannot connect to the website. I am trying to connect to my localhost, but it keeps throwing this error:
04.11 10:53.48, [Error] Server "http://192.168.24.199/" is not responsive.
04.11 10:53.48, [Error] Response body length exceeds configured limit
It's a .NET website running on IIS.
Any suggestions will be appreciated.
What is happening here is that the HTTP response body length exceeds Acunetix's maximum configured limit.
To change this, navigate to Configuration > Scan Settings > HTTP Options and change the 'HTTP response size limit in kilobytes' to a larger value.
Note: having said this, I would look into why your app is returning a response body larger than 5 MB (Acunetix's default limit); it's not normal to have such large responses (Acunetix automatically ignores common file types like PDFs, images, spreadsheets, and so on).
There is a strange problem we have when we deploy our application to the Azure environment. When I run the application on my laptop, with no Azure, no Docker, or anything else, I don't face any issues when sending requests (which are a little bit big).
Our test and production environments are all on Azure right now, so when the application is deployed there, I get this strange error:
log4javascript error: AjaxAppender.append: XMLHttpRequest request to URL ./common/logToServer.jsp?controllerName=6c3eaf3e-897d-4b30-a15e-62f9d3d3ce78 returned status code 413
Now I know what the HTTP 413 error code is, but I am not sure why my local setup is not showing the same error, which leads me to believe that it might be some Azure configuration I need to change. But I don't know which one.
It is a simple Java web application using Servlets, running on Tomcat.
log4javascript is a logging framework for JavaScript with no runtime dependencies. As the error message states, the issue is caused by the length of the payload, which is too large.
The HTTP status code 413 ("Payload Too Large") indicates that the request entity is larger than the limits defined by the server; the server might close the connection.
Fix:
In your application.properties file, add these two lines:
# maximum size of the request body/payload
server.tomcat.max-swallow-size=***MB
# maximum size of the entire POST request
server.tomcat.max-http-post-size=***MB
NOTE:
*** is your desired integer value in megabytes.
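For example, assuming you settle on a 10 MB limit (an illustrative value only), the entries would read:
# illustrative 10 MB limits; adjust to your actual payload sizes
server.tomcat.max-swallow-size=10MB
server.tomcat.max-http-post-size=10MB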
See the reference article for more information and the full solution.
Our application returns huge JSON data, but if the response size is greater than approximately 850 MB, we get an HTTP 500 error. I tried to find a solution; here are some links I have looked at:
https://serverfault.com/questions/514927/file-uploads-and-client-max-body-size-in-nginx-gunicorn-django
nginx - client_max_body_size has no effect
ngnix + gunicorn throws truncated response body
Setting client_max_body_size to some value seems to be a possible solution for this, but I am not able to figure out how to write it in the startup command.
My startup command is:
gunicorn --bind=0.0.0.0 --workers=4 --timeout=3000 app:app
My application is in Flask, hosted in Azure Web App Service.
The client_max_body_size setting is meant to solve errors from uploading files that are too large; the typical error code in that case is 413.
So I don't think configuring client_max_body_size will solve your problem.
Do you think it is normal for a web app's JSON response data to exceed 100 MB? Besides, your data may exceed 850 MB. Even if the Azure Web App server can withstand that load, are you sure the client's browser can handle it as well?
You can refer to the article "HOW BIG IS TOO BIG FOR JSON?". The author tested the impact of the size of the returned JSON data on the browser, which should be very helpful to you.
Suggestions:
1. Use paging to filter the data and return only what the client actually needs (see the sketch below).
2. In the SQL query, return only the necessary fields.
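As a minimal sketch of suggestion 1, assuming a Flask endpoint and a stand-in fetch_records helper (replace it with your real query using OFFSET/LIMIT or keyset pagination in SQL), paging could look like this:

from flask import Flask, jsonify, request

app = Flask(__name__)

def fetch_records(offset, limit):
    # Stand-in for a real database query; apply OFFSET/LIMIT in SQL instead.
    data = [{"id": i} for i in range(10_000)]
    return data[offset:offset + limit]

@app.route("/records")
def records():
    # Read paging parameters from the query string, with sane bounds.
    page = max(int(request.args.get("page", 1)), 1)
    page_size = min(int(request.args.get("page_size", 100)), 1000)
    rows = fetch_records(offset=(page - 1) * page_size, limit=page_size)
    return jsonify({"page": page, "page_size": page_size, "items": rows})

This keeps each response small instead of serializing hundreds of megabytes of JSON in a single request.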
Like in this closed issue, https://github.com/Azure/azure-functions-host/issues/5540, I am having trouble figuring out which setting I should change to allow 100 MB files to be uploaded.
The weird thing is that the system is deployed in Azure, where big files are allowed, but no one has made any changes to settings that should affect this.
So is there some local.settings.json setting that I am missing whose default is different when hosting in Azure compared to localhost?
Error:
Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Exception while executing function: MessageReceiver --->
System.InvalidOperationException: Exception binding parameter 'request' --->
Microsoft.AspNetCore.Server.Kestrel.Core.BadHttpRequestException: Request body too large.
There is KestrelServerLimits.MaxRequestBodySize (https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.server.kestrel.core.kestrelserverlimits.maxrequestbodysize?view=aspnetcore-3.1), but I can't figure out how to set it when running Azure Functions. I can't set it in the startup, and putting [DisableRequestSizeLimit] or [RequestSizeLimit(100000000)] on top of my Azure Function has no effect.
A bug has been reported with problems on Windows: https://github.com/Azure/azure-functions-core-tools/issues/2262
The HTTP request length is limited to 100 MB (104,857,600 bytes), and the URL length is limited to 4 KB (4,096 bytes). These limits are specified by the httpRuntime element of the runtime's Web.config file.
If a function that uses the HTTP trigger doesn't complete within 230 seconds, the Azure Load Balancer will time out and return an HTTP 502 error. The function will continue running but will be unable to return an HTTP response. For long-running functions, we recommend that you follow async patterns and return a location where you can ping the status of the request. For information about how long a function can run, see Scale and hosting - Consumption plan.
For more details, you could refer to this article.
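For illustration only, the httpRuntime element mentioned above looks like the following in a classic ASP.NET Web.config (the value is in kilobytes, so 102400 corresponds to the 100 MB cap). In Azure Functions this file belongs to the runtime and is managed by the platform, so it is shown here only to clarify where the limit comes from:

<configuration>
  <system.web>
    <!-- maxRequestLength is in KB: 102400 KB = 100 MB -->
    <httpRuntime maxRequestLength="102400" />
  </system.web>
</configuration>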
I'm looking into IIS Request Filtering by content length. I've set the maximum allowed content length:
appcmd set config /section:requestfiltering /requestlimits.maxallowedcontentlength:30000000
My question is about when the filter will occur.
Will IIS first read the entire request into memory and then throw an error, or will it raise an issue as soon as it reaches the threshold?
The IIS Request Filtering module is processed very early in the request pipeline. Unwanted requests are quickly discarded before proceeding to application code which is slower and has a much larger attack surface. For this reason, some have reported performance increases after implementing Request Filtering settings.
Limitations
Request Filtering Limitations include the following:
Stateless - Request Filtering has no knowledge of application or session state. Each request is processed individually regardless of whether a session has or has not been established.
Request Header Only - Request Filtering can only inspect the request header. It has no visibility into the request body or any part of the response.
Basic Logic - Regular expressions and wildcard matches are not available. Most settings consist of establishing size constraints while others perform simple string matching.
maxAllowedContentLength
Request Filtering checks the value of the Content-Length request header. If the value exceeds what is set for maxAllowedContentLength, the client will receive an HTTP 404.13.
The IIS 8.5 STIG recommends a value of 30000000 or less.
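Equivalent to the appcmd command in the question, the same limit can be declared in web.config; the snippet below is a standard requestFiltering configuration, with 30000000 bytes (roughly 28.6 MB) used purely as an example value:

<system.webServer>
  <security>
    <requestFiltering>
      <!-- maxAllowedContentLength is specified in bytes -->
      <requestLimits maxAllowedContentLength="30000000" />
    </requestFiltering>
  </security>
</system.webServer>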
IISRFBaseline
The above information is based on my PowerShell module IISRFBaseline. It helps establish an IIS Request Filtering baseline by leveraging Microsoft LogParser to scan a website's content directory and IIS logs.
Many of the settings have a dedicated markdown file providing more information about the setting. The one for maxAllowedContentLength can be found at the following:
https://github.com/phbits/IISRFBaseline/blob/master/IISRFBaseline-maxAllowedContentLength.md
Update - #johnny-5 comment
The filtering happens immediately, which makes sense because Request Filtering only has visibility into the request header. This was confirmed via the following methods:
Failed Request Tracing - the Request Filtering module responded to the request with an HTTP 413 Request entity too large.
http.sys event tracing - the request is accepted and handed off to the IIS website. Shortly thereafter is an entry showing the HTTP 413 response. The time between was not nearly long enough for the upload to complete.
Packet capture - Using Microsoft Network Monitor, the HTTP conversation shows IIS immediately responded with an HTTP 413 Request entity too large.
The part you're rightfully concerned with is that IIS still accepts the upload regardless of file size. I found the limiting factor to be connectionTimeout which has a default setting of 120 seconds. If the file is "completed" before the timeout then an HTTP 413 error message is displayed. When a timeout occurs, the browser shows a connection reset since the TCP connection is destroyed by IIS after sending a TCP ACK/RST.
To test this further, the timeout was increased to connectionTimeout=6000. Then a large upload was submitted, and the following IIS components were stopped one at a time. After each stop, the upload was checked via Network Monitor and confirmed to still be running.
Website
Application Pool (Stop-WebAppPool -Name AppPoolName)
World Wide Web Publishing Service (Stop-Service -Name W3SVC)
With all three stopped I verified there was no IIS process still running and yet bytes were still being uploaded. This leads me to conclude that the connection is maintained by http.sys. The fact that connectionTimeout is closely tied to http.sys seems to support this. I do not know if the uploaded bytes go to a buffer or are simply discarded. The event tracing messages didn't provide anything helpful in this context.
Leaving out the Content-Length request header will result in an RFC protocol error (i.e. HTTP 400 Bad request) generated by http.sys since the size of the HTTP payload isn't being declared.
I am performing a simple page = requests.get() call in a loop to extract results from a website. After 100-150 requests, it starts returning error 500.
I've tried two things: adding a one-second delay between my requests (to reduce the risk that I'm causing a DoS), and deleting my page variable in case the connection is being kept in memory and the server is running out of ports. However, the documentation seems to say that requests.get does not keep connections open.
What else could it be? Why does the script stop working after a few hundred requests?
Error code 500 means that the server had a problem handling your request. The reason is difficult to determine if you don't control the server, but some kind of rate limiting might be the cause.
As per the HTTP specification:
6.6.1. 500 Internal Server Error
The 500 (Internal Server Error) status code indicates that the server encountered an unexpected condition that prevented it from fulfilling the request.
It's probably not a problem with your code, but some kind of rate limiting, resource shortage, or other problem on the server side.
In a perfect world, the web server would return HTTP code 429 ("Too Many Requests") in the case of rate limiting.
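As a minimal sketch of handling this on the client side (the URL below is hypothetical), you could back off and retry when the server starts returning 500 or 429 instead of hammering it:

import time
import requests

URL = "https://example.com/results"  # hypothetical endpoint

def get_with_backoff(url, max_retries=5):
    # Retry on 429/500 with exponential backoff instead of retrying immediately.
    delay = 1.0
    for _ in range(max_retries):
        response = requests.get(url, timeout=30)
        if response.status_code not in (429, 500):
            return response
        time.sleep(delay)  # wait before retrying
        delay *= 2         # exponential backoff
    return response

for i in range(500):
    page = get_with_backoff(URL)
    if page.status_code != 200:
        break  # give up if the server still refuses after the retries
    # ... process page.text ...
    time.sleep(1)  # stay polite between successful requests

Using a requests.Session instead of bare requests.get would also reuse the underlying TCP connection rather than opening a new one per request, which reduces load on both ends.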