I am currently working on a web application where I need to store large files (MP4 videos that are sometimes larger than 100 MB). But when I try to upload them from a static Angular website hosted in an S3 bucket to an API hosted on AWS Elastic Beanstalk, I get an error that I don't understand.
Click here to see the error
What I tried:
There is no problem when uploading a PDF. It works perfectly.
There is no problem when uploading a very short MP4 (3 s, 453 KB). It works fine, a little slower than a PDF, but still quick (about 3 seconds). This is why I think the problem could come from the file size.
I read on the Internet that there is a setting called client_max_body_size when using Nginx (which AWS does). I tried to increase this default limit by adding this to my project:
myrootproject/.ebextensions/nginx.conf
Into nginx.conf:
files:
  "/etc/nginx/conf.d/proxy.conf":
    content: |
      client_max_body_size 4G;
but nothing changed... or at least it didn't have the desired effect; it still isn't working.
Additional information
When I do this locally, it works fine.
When I upload from the hosted website (S3 bucket) to a locally hosted API, it works fine.
It takes a really long time to get a response from the server (only when this error occurs). I have the feeling that the request never even reaches my Node.js code, because if an error were emitted there, I would have handled it.
Here are screenshots of my request, in case they help:
Request (first part)
Request (second part)
I really need help on this one, hoping you can give it to me!
P.S.: I wrote this post with the help of a translator. If some parts read strangely, my apologies.
The file myrootproject/.ebextensions/nginx.conf does not take effect, probably because you are using Amazon Linux 2 (AL2) - I assume that you are using AL2. That config file works only on AL1.
For AL2, the nginx settings should be provided in .platform/nginx/conf.d/, not .ebextensions. Therefore, for example, you could create the following config file:
.platform/nginx/conf.d/mynginx.conf
with the content of:
client_max_body_size 4G;
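After adding the file, redeploy the environment so nginx picks up the new configuration. If large uploads still time out before reaching the Node.js backend, the same file can also raise the proxy timeouts; a minimal sketch, where the timeout values are assumptions to tune:

# .platform/nginx/conf.d/mynginx.conf
client_max_body_size 4G;
proxy_read_timeout 300s;   # assumed value, tune as needed
proxy_send_timeout 300s;   # assumed value, tune as needed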
Related
I have searched for a definitive answer online and on Stack Overflow, but I have not found a clear, step-by-step way to handle uploading large files (50 MB+) to a Wagtail CMS website.
My setup is nginx, gunicorn and PostgreSQL on an Ubuntu server.
When trying to upload a large file from the "documents" section of the admin (e.g. /admin/documents/multiple/add/), the progress bar moves as it does when uploading a normal file, but then the admin shows an error: "Sorry, upload failed."
I am basically having the same problem as this question, only without that specific error message.
I have set client_max_body_size to 100000M (nginx) and the MAX_UPLOAD_SIZE setting (wagtail/django) to a large amount as well.
How can I resolve the issue and successfully upload my large files (.zip and .xyz) to the Wagtail website? Any help and/or suggestions are appreciated. Thanks.
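For reference, since client_max_body_size is already far above the file sizes involved, another usual suspect in an nginx + gunicorn stack is the gunicorn worker timing out while the upload is handled. A sketch of the relevant pieces; the 300-second timeouts and the paths are assumptions to tune, not values from the question:

# nginx: inside the server/location block that proxies to gunicorn
client_max_body_size 100000M;   # already set per the question
proxy_read_timeout 300s;        # assumed value
proxy_send_timeout 300s;        # assumed value

# gunicorn: raise the worker timeout so it is not killed mid-request
# gunicorn myproject.wsgi:application --bind 127.0.0.1:8000 --timeout 300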
Sorry, it might be a very novice problem, but I am new to Node and web apps and have been stuck on this for a couple of days.
I have been working with an API called "Face++" that requires the user to upload images in order to detect faces. So basically users need to upload images to my web app's backend, and my backend makes an API request with that image. I managed to upload the files to my Node backend using the tutorial provided below, but now I am struggling with how to use those image files. I really don't know how to access them. I thought just passing the filepath/filename would work, but it did not. I am really new to web apps.
I used tutorial from here: https://coligo.io/building-ajax-file-uploader-with-node/
to upload my files on the back end.
thanks
You can also use the Face++ REST API node client
https://www.npmjs.com/package/faceppsdk
As per the documentation, it requires a live URL on the web. So you have to upload your files to a remote location (you could upload the files to an Amazon S3 bucket).
You can also check the sample code in the documentation, which shows how to upload directly to Face++.
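For example, a minimal Node sketch of calling the Face++ detect endpoint with the form-data package; the endpoint and parameter names come from the public Face++ v3 documentation, while the environment variables and the file path are placeholders you must supply:

const fs = require('fs');
const FormData = require('form-data'); // npm install form-data

const form = new FormData();
form.append('api_key', process.env.FACEPP_API_KEY);       // placeholder credential
form.append('api_secret', process.env.FACEPP_API_SECRET); // placeholder credential
// either send the file you stored on disk after the upload...
form.append('image_file', fs.createReadStream('/path/to/uploads/photo.jpg'));
// ...or, if the file lives on S3, send its public URL instead:
// form.append('image_url', 'https://my-bucket.s3.amazonaws.com/photo.jpg');

form.submit('https://api-us.faceplusplus.com/facepp/v3/detect', (err, res) => {
  if (err) return console.error('Face++ request failed:', err);
  let body = '';
  res.on('data', (chunk) => (body += chunk));
  res.on('end', () => console.log('Detected faces:', JSON.parse(body).faces));
});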
So we've got a webpage made with node.js, express and mongodb.
I've got 2 servers in a DMZ, not joined to AD.
One of the servers will serve the webpage, while the other will purely receive and serve video files.
The servers are running IIS and iisnode.
Currently the page is using multer for uploading files, which works fine for uploading to the same server the code is running on.
Uploading to a separate server is proving to be harder though, and my googling isn't getting me any closer to a solution.
I want the uploads and downloads to go directly between the client and the videoserver, not through the webserver.
Any tips on how to approach this?
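One common pattern is to run a small upload endpoint on the video server itself and let the browser POST files straight to it, with CORS limited to the webpage's origin. A sketch, assuming the video server can host a Node process (under iisnode or standalone); the origin, port and destination folder are placeholders:

// runs on the video server, not the web server
const express = require('express');
const multer = require('multer');
const cors = require('cors');

const app = express();
// only accept browser requests coming from the page served by the web server
app.use(cors({ origin: 'https://webserver.example.com' }));

// multer writes incoming files to disk on this server
const upload = multer({ dest: 'D:/videos/incoming/' });

app.post('/upload', upload.single('video'), (req, res) => {
  // the file is already on disk at req.file.path
  res.json({ stored: req.file.filename, size: req.file.size });
});

app.listen(3000);

The page then posts its FormData directly to the video server's /upload route, so the bytes never pass through the web server.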
I'm trying to setup a facebook share on https://donate.mozilla.org/en-US/thunderbird/share/
The og:url points to just /thunderbird, which is the URL I would want shared. As best I can tell, the og tags are all there.
When I try to update the data on https://developers.facebook.com/tools/debug/og/object/ and fetch new scrape information, I get one of two errors. Initially, it takes a long time and then responds with Curl Error : OPERATION_TIMEOUTED Operation timed out after 10000 milliseconds with {some number less than 10000} bytes received; subsequent fetch attempts respond with Curl Error : PARTIAL_FILE transfer closed with 17071 bytes remaining to read.
We're using AWS CloudFront and Node.js with hapijs.
It responds with a 206 Partial Content, which should be fine. The og tags are all at the beginning of the file.
I found this: docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/RangeGETs.html
There it says a range request is used to get the file in chunks, not to get just part of the file and give up. So maybe that's causing unexpected behavior. Maybe CloudFront is sending it back in chunks, and Facebook stops listening after the first response? I don't know. Just trying to find a theory that fits the facts.
We already have a working share for donate.mozilla.org/en-US/share/, but that might be old data from when we were not using hapijs and were instead using expressjs, which I don't think supported range requests and would instead return a 200.
I'm mostly a front end dev, so a lot of this is out of my comfort zone but I have already learned a lot :)
Edit: I also want to point out that we use Heroku for hosting, and if I set up a test with just Heroku and without CloudFront (donate.mofostaging.net/en-US/thunderbird/), it fetches the tags successfully. So I suspect it's a bug in how Facebook and hapijs interact with CloudFront.
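One quick way to see roughly what the scraper sees is to make the same kind of range request yourself and inspect what comes back through CloudFront. A small Node sketch; the byte range is arbitrary:

// request the page with a Range header, the way a scraper might,
// and print the status plus the range-related response headers
const https = require('https');

https.get(
  {
    host: 'donate.mozilla.org',
    path: '/en-US/thunderbird/',
    headers: { Range: 'bytes=0-16383' },
  },
  (res) => {
    console.log(res.statusCode); // 206 if the range is honoured, 200 if it is ignored
    console.log(res.headers['content-range'], res.headers['content-length']);
    res.resume(); // discard the body; only the headers matter here
  }
);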
In my Lotus Notes web application, I have file upload functionality. Here I want to validate the attachment file size before uploading, which I did through WebQuerySave. My problem is that whenever the attached file size exceeds the limit configured in the server document, it throws the server error page “HTTP: 500 Invalid POST Request Exception”.
I tried some methods to resolve this, but they’re not working:
In domcfg.nsf, I mapped the target form called "CustomGeneralErrorForm".
I created "$$ReturnGeneralError" from to show error page.
In Notes.ini, I added "HTTPMultiErrorPage=/error.html"
How can I resolve this issue?
I suppose there's no way. I've tried several times to catch that error, but I think the only way is to test the file size with JavaScript. Obviously this works only with HTML5 browsers, as you can see in this post:
Using jQuery, Restricting File Size Before Uploading
So... you have to write code to detect browser features: use JavaScript for HTML5 browsers and find alternative ways for older browsers.
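For HTML5 browsers the check itself is only a few lines; a sketch, assuming a plain file input with id "attachment" and a 10 MB limit (both the id and the limit are placeholders):

// run when a file is chosen: reject oversized files before the form is posted
var MAX_BYTES = 10 * 1024 * 1024; // 10 MB, matching the server's POST limit

document.getElementById('attachment').addEventListener('change', function () {
  var file = this.files && this.files[0];
  if (file && file.size > MAX_BYTES) {
    alert('This file is too large (' + Math.round(file.size / 1048576) + ' MB). ' +
      'Please attach a file smaller than 10 MB.');
    this.value = ''; // clear the input so the oversized file is not submitted
  }
});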
For example, you can use a Flash plugin and post to server-side code, depending on your backend.
Uploadify (http://www.uploadify.com/) is a very good option to try, but do an internet search and choose the best one for you.
This way you can stop users from posting large files, but if you need to accept large uploads (>10 MB by default) you must also set up a secondary Internet Site server document with a greater maximum POST size limit.