To change the maximum upload size in IIS, you can adjust the AspMaxRequestEntityAllowed value in the IIS metabase, as explained on various sites.
My question is:
Is it possible to read the current maximum allowed upload size from classic ASP code?
Explanation:
I'm adjusting some upload code, and one new feature is to notify the client side of the maximum upload size, so that files sent with Flash or FormData can be validated up front and not even submitted if they are too large and the upload would fail. I could simply hope that users, when configuring the application, correctly write their server's maximum into the configuration file; but it would be much better if I could read the real value from IIS.
You can use WMI to do that. For instance, using VBScript:
Dim provider, setting, maxUploadSize
' Connect to the IIS 6 WMI provider (the caller needs admin rights).
Set provider = GetObject("winmgmts:/root/MicrosoftIISv2")
' Query the metabase path your application runs under.
Set setting = provider.Get("IIsWebVirtualDirSetting='W3SVC/1/ROOT'")
' Maximum number of bytes allowed in the body of an ASP request.
maxUploadSize = setting.AspMaxRequestEntityAllowed
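Note that the metabase path matters: AspMaxRequestEntityAllowed is inheritable and can be set at the server, site, or virtual-directory level, so query the same path your application actually runs under (W3SVC/1/ROOT is the root of the first web site).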
Related
I am connected to Azure Blob Storage and we need to upload files bigger than 256 MB.
I followed the documentation, created a page blob, and then uploaded files through Put Page.
The problem is that I don't know in advance how big the data is going to be, so I set it to the max size of 8 TB.
Is there a reason why I shouldn't do this?
As far as I know, the max size is only the maximum possible size for the blob and shouldn't cause any issues with memory.
Correct me if I'm wrong please.
Thanks
In my experience, and according to the REST API reference for Put Page, the API requires the Content-Length header to be set to the exact number of bytes being transmitted in the request body, not an approximate or possible value (see the request headers table in the Put Page reference).
Otherwise, Azure Storage will refuse the request and respond with an error after validating it. I think that is the real reason for the errors you might get from Common REST API Error Codes and Blob Service Error Codes.
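For illustration, a minimal Node.js/TypeScript sketch of a Put Page call over a SAS URL (the SAS URL, API version, and sizes are assumptions, not values from the question):

import * as https from "node:https";

// Minimal sketch of Put Page against an existing page blob, assuming a
// pre-generated SAS URL (account/container/blob in it are placeholders).
function putPage(sasUrl: string, offset: number, data: Buffer): Promise<number> {
  const url = new URL(sasUrl);
  url.searchParams.set("comp", "page");
  return new Promise((resolve, reject) => {
    const req = https.request(url, {
      method: "PUT",
      headers: {
        "x-ms-version": "2020-10-02",
        "x-ms-page-write": "update",
        // Page ranges must be 512-byte aligned.
        "x-ms-range": `bytes=${offset}-${offset + data.length - 1}`,
        // Content-Length must be the exact byte count of this request body,
        // not an estimate and not the blob's total size.
        "Content-Length": data.length,
      },
    }, (res) => resolve(res.statusCode ?? 0));
    req.on("error", reject);
    req.end(data);
  });
}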
I'm trying to log the total size of a request sent out from an ASP.NET Core website hosted in Azure. The goal is to be able to attribute some sense of what the data out cost is for specific functionality within the application. (Specifically we are looking at some functionality that uses Azure Blob Storage and allows downloads of the blobs but I don't think that's relevant to the question.)
I think the solution is some simple middleware that logs the sizes out to a storage mechanism (I'm less concerned with that part), but I'm not sure what to put inside the middleware.
Does anyone know how you would "construct" the total size of the response being sent out of the web app?
I would presume it will include HttpContext.Response.Body.Length, but I'm pretty confident that doesn't include the headers. I'm also not sure whether that's the compressed size of the response body or not.
Thanks
In my Lotus Notes web application, I have file upload functionality. I want to validate the attachment file size before uploading, which I did through WebQuerySave. My problem is that whenever the attached file size exceeds the limit configured in the server document, the server throws an error page like "HTTP: 500 Invalid POST Request Exception".
I tried some methods to resolve this, but they’re not working:
In domcfg.nsf, I mapped the target form called "CustomGeneralErrorForm".
I created a "$$ReturnGeneralError" form to show the error page.
In Notes.ini, I added "HTTPMultiErrorPage=/error.html"
How can I resolve this issue?
I suppose there's no way. I've tried several times to catch that error, but I think the only way is to test the file size with JavaScript. Obviously this works only in HTML5 browsers, as you can find in this post:
Using jQuery, Restricting File Size Before Uploading
So... you have to write code that detects browser features: use JavaScript in HTML5 browsers and find alternative ways for older browsers.
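For example, a minimal sketch of that HTML5 check in TypeScript (the input id and the 10 MB limit are assumptions; adapt them to your form):

// Assumes an <input type="file" id="file"> element and a 10 MB limit.
const MAX_BYTES = 10 * 1024 * 1024;
const input = document.getElementById("file") as HTMLInputElement;
input.addEventListener("change", () => {
  const file = input.files?.[0];
  if (file && file.size > MAX_BYTES) {
    alert("This file is too large to upload.");
    input.value = ""; // clear the selection so the form can't submit it
  }
});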
For older browsers you can, for example, use a Flash plugin and post to server-side code, depending on your backend.
Uploadify (http://www.uploadify.com/) is a good option that worked for me the first time, but do an internet search and choose the best one for you.
This way you can stop users' large posts, but if you need to upload genuinely large files (>10 MB by default) you must set up a secondary Internet Site server document with a greater post size limit.
I have an image upload view on my client (Ember.js) that sends the resized image to a Node.js REST API.
It works well, but it is easy for an expert user to force the upload of a non-resized image.
I would like to keep the resize process on the client, because this lets users select heavyweight images that are resized locally and only uploaded once they are lightweight.
If someone else uses something like this, I'm interested in how to make it as safe as possible.
As a rule of thumb when developing web applications: never, ever trust any data coming from the client side; always check it on the server side!
Use authentication; this ensures users can only upload data to their own account and can't fiddle with other people's files.
Add some message passing between your server and client. A simple example would be:
i. first send a POST API request (containing the image information and the targeted compressed size) to your server, indicating that your client is starting to compress the picture
ii. when uploading, add metadata to include the complete compressed image, and have your server check whether the uploaded image is within the accepted threshold, else discard it (see the sketch below)
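A minimal Express/TypeScript sketch of that handshake (the route names, the in-memory token store, and the 20 MB parser cap are illustrative assumptions, not a definitive design):

import express from "express";
import crypto from "node:crypto";

const app = express();
app.use(express.json());

// In-memory store of announced uploads; use persistent storage in practice.
const pending = new Map<string, { maxBytes: number }>();

// Step i: the client announces the upload and its targeted compressed size.
app.post("/upload/intent", (req, res) => {
  const token = crypto.randomUUID();
  pending.set(token, { maxBytes: Number(req.body.targetBytes) });
  res.json({ token });
});

// Step ii: accept the actual upload only if it matches the declared intent.
app.post("/upload/:token", express.raw({ type: "image/*", limit: "20mb" }), (req, res) => {
  const intent = pending.get(req.params.token);
  if (!intent || !Buffer.isBuffer(req.body) || req.body.length > intent.maxBytes) {
    res.status(413).send("Rejected: unknown upload or over the declared threshold");
    return;
  }
  pending.delete(req.params.token);
  res.sendStatus(204); // a real application would persist req.body here
});

app.listen(3000);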
You could make the message passing more elaborate to enhance security!
This would be my simple take on security; anyone else got a better solution? :)
The approaches here also work for file uploads. You can use a combination of checking:
the content-length header (i.e. req.headers['content-length'] > x), and/or
the stream size as it's being read by the server (i.e. req.on('data')).
If the stream data exceeds a certain size you can respond accordingly. Check out something like Multer for file uploads, specifically the limits section; a sketch follows below. The best approach would probably be the second option.
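A minimal Multer sketch using the limits option, which does the stream-size accounting for you (the 5 MB cap and the field name "upload" are assumptions):

import express from "express";
import multer from "multer";

const app = express();
const upload = multer({
  storage: multer.memoryStorage(),
  limits: { fileSize: 5 * 1024 * 1024 }, // reject file parts larger than 5 MB
});

app.post("/upload", upload.single("upload"), (req, res) => {
  res.sendStatus(204);
});

// Multer reports an oversized file as a MulterError with code LIMIT_FILE_SIZE,
// so translate that into a 413 response in the error handler.
app.use((err: unknown, req: express.Request, res: express.Response, next: express.NextFunction) => {
  if (err instanceof multer.MulterError && err.code === "LIMIT_FILE_SIZE") {
    res.status(413).send("File too large");
  } else {
    next(err);
  }
});

app.listen(3000);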
We've hit a problem with some forms in the admin portion of our web app. There are a handful of forms that contain a large number of fields (it can range anywhere from one input field to the hundreds).
We've found that as these forms grow, there is a point where the server will throw 500 errors when a form is posted.
After running a test, I was able to find that the server can handle forms with 100 fields in them; once 101 or more fields are used, we get the errors.
We run ColdFusion, and we have determined that ColdFusion is not throwing this error. We never see it logged in ColdFusion, so we assume IIS throws the error before it even hands the request to the ColdFusion server.
I'm assuming there is some setting in IIS 7.5 where we can raise this limit. I've searched the web, but all I can find is how to raise the byte-size limits of the post data, not any limit on the number of fields allowed.
So, am I right in assuming that this can be changed, and if so, how can it be done?
This is an issue introduced with security hotfix APSB12-06. While it is a ColdFusion error, people have reported receiving it in Tomcat, before it supposedly even hit the CF server.
There is a setting in neo-runtime.xml which defines the postParametersLimit; it defaults to 100.
The full notes are located here, but here is the short version.
This hotfix adds a new setting in ColdFusion, Post Parameter Limit, which limits the number of parameters in a POST request. The default value is 100. If a POST request contains more parameters than specified, the server doesn't process the request and throws an exception. This protects against DoS attacks using hash collisions. The setting is different from Post Size Limit (ColdFusion Administrator > Settings > Maximum size of post data) and isn't exposed in the ColdFusion Administrator console, but you can easily change it in the neo-runtime.xml file, as described below.
Customers who want to change postParametersLimit should go to {ColdFusion-Home}/lib for a server installation, or {ColdFusion-Home}/WEB-INF/cfusion/lib for a multiserver or J2EE installation. Open the file neo-runtime.xml and find the line:
<var name='postSizeLimit'><number>100.0</number></var>
After it, add the line below, replacing 100 with the desired number:
<var name='postParametersLimit'><number>100.0</number></var>
CF10+ exposes this setting in the CF Administrator settings page as Maximum number of POST request parameters, under Server Settings -> Settings.
On our 9.0.1 server, we just increased the setting to 10000 and have seen no adverse effects.
I believe you are bumping up against a security feature of ColdFusion. Which ColdFusion version are you running? ColdFusion Security Hotfix APSB12-06 introduced a fix to protect against DoS attacks using hash collisions. From that page:
This hotfix implements a new setting in ColdFusion, Post Parameter Limit. This limits the number of parameters in a post request. The default value is 100. If a post request contains more parameters as specified, server will not process the request and throws an exception. This is done to protect against DoS attack using Hash Collision. This setting is different from Post Size Limit (ColdFusion Administrator > Settings > Maximum size of post data). We are not exposing this setting in ColdFusion Administrator console, but this limit can be easily changed in neo-runtime.xml file. See point 5 below.
Also on that page are instructions on how to increase that limit. Basically you have to make a change in your neo-runtime.xml file.
Customers who want to change postParameterLimit, go to {ColdFusion-Home}/lib for Server Installation or {ColdFusion-Home}/WEB-INF/cfusion/lib for Multiserver or J2EE installation. Open file neo-runtime.xml, after line:
<var name='postSizeLimit'><number>100.0</number></var>
add the below line and you can change 100 with desired number.
<var name='postParametersLimit'><number>100.0</number></var>