I am uploading some files to Amazon S3 (through the aws-sdk library in Node.js). When it comes to an image file, it looks like it is much bigger on S3 than the body.length printed in Node.js.
E.g. I've got a file with a body.length of 7050103, but the S3 browser shows that it is:
Size: 8.38 MB (8789522 bytes)
I know that there is some metadata here, but what metadata could take more than 1 MB?
What is the source of such a big difference? Is there a way to find out what size the file will be on S3 before sending it?
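For reference, user-defined metadata on S3 is capped at 2 KB, so metadata cannot account for a megabyte-scale difference; the object body stored on S3 is byte-for-byte what you send. A minimal sketch to verify what actually landed, written in Python with boto3 purely as an illustration (the question uses the Node.js aws-sdk; the bucket, key, and file names are placeholders):

import boto3

s3 = boto3.client("s3")

# Read the file exactly as it will be uploaded and note its byte length.
with open("image.jpg", "rb") as f:
    body = f.read()
print("local bytes:", len(body))

# Upload, then ask S3 how large the stored object actually is.
s3.put_object(Bucket="my-bucket", Key="image.jpg", Body=body)
head = s3.head_object(Bucket="my-bucket", Key="image.jpg")
print("bytes on S3:", head["ContentLength"])

If the two numbers match, the growth happened before the upload (e.g. in an image-processing step), not on S3.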
I uploaded the file via the S3 console and in that case there was no difference in size. I found out that the problem was in using the lwip library to rotate the image. I had a bug: I rotated even when the angle was 0, so I was rotating by 0 degrees. After such a rotation the image was bigger; I think the JPEG gets re-encoded at a different quality or something similar.
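A minimal sketch of that kind of fix, using Python and Pillow purely to illustrate the idea (the original code used lwip in Node.js; the function and file names are hypothetical): skip the rotate-and-save path entirely when the angle is 0, because re-saving a JPEG re-encodes it at a possibly different quality and can change its size even though nothing was rotated.

from PIL import Image  # Pillow, standing in for lwip for illustration only

def rotate_if_needed(src_path, dst_path, angle, quality=85):
    # Re-encoding is the size-changing step, so avoid it for a no-op rotation.
    if angle % 360 == 0:
        return src_path
    with Image.open(src_path) as im:
        im.rotate(angle, expand=True).save(dst_path, quality=quality)
    return dst_path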
Related
Hello, I am using the PRDownloader library to download files, but there seems to be some limitation with the file size.
implementation 'com.mindorks.android:prdownloader:0.6.0'
When I download files with a size of 20 MB there is no problem, they download fine, but if I download a file of 25 MB it doesn't download completely; it produces a file of 2.21 KB.
Why doesn't it let me download larger files?
How can I remove this limitation so that I can download larger files?
Thank you.
I'm using the puppeteer module to generate a PDF, but the PDF size is too large because of the images.
Each image on S3 needs to be compressed to approximately 20 KB before generating the PDF.
The objective is to keep the PDF report at an average of 5 MB, with an average of 300 images.
Which would be the better approach:
Compress the images first and then generate the PDF -> Is the sharp module right for this? (A sketch of this approach follows the list.)
Directly compress the PDF -> Which service should be used to compress the PDF?
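For the first approach, here is a rough sketch of squeezing each image toward the ~20 KB budget before the PDF is generated. It uses Python and Pillow purely to illustrate the idea (sharp in Node.js offers equivalent resize and quality controls); the function name, the maximum dimension, and the quality range are illustrative assumptions.

from io import BytesIO
from PIL import Image

def compress_to_target(raw_bytes, target_bytes=20 * 1024, max_side=800):
    # Downscale first, then lower the JPEG quality until the result fits the budget.
    im = Image.open(BytesIO(raw_bytes)).convert("RGB")
    im.thumbnail((max_side, max_side))
    for quality in range(85, 20, -5):
        buf = BytesIO()
        im.save(buf, format="JPEG", quality=quality, optimize=True)
        if buf.tell() <= target_bytes:
            break
    return buf.getvalue()

As a sanity check, 300 images at ~20 KB each is roughly 6 MB of image data before PDF overhead, which is in the neighbourhood of the 5 MB target.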
I have a gzipped CSV file in S3 with a compressed size of around 85 GB and an uncompressed size of around 805 GB. I need to decompress this file and persist the decompressed file back to S3. Due to limitations in my execution environment, I have to decompress the data on the fly and stream it back to S3. Currently I'm using smart-open and the following code snippet to do the work, and it handles the streaming and decompression pretty well. Since minimizing execution time is crucial for this task, I want to know whether there are ways to improve this code or a better mechanism to handle it.
import smart_open

# Larger multipart parts on the write side mean fewer part uploads to S3.
transport_params = {"min_part_size": 200 * 1024 * 1024}

with smart_open.open('s3://outputbucket/decompressed.csv', 'w', transport_params=transport_params) as fout:
    # smart_open decompresses the .gz transparently based on the file extension.
    for line in smart_open.open('s3://inputbucket/compressed.csv.gz', 'r', encoding='utf8'):
        fout.write(line)
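If profiling shows the per-line UTF-8 decode/encode round trip is a bottleneck, one possible variation (a sketch only; the bucket and key names are the same placeholders as above) is to copy the decompressed stream in large binary chunks instead of text lines:

import smart_open

CHUNK = 16 * 1024 * 1024  # 16 MB reads; tune to the memory budget of the environment
transport_params = {"min_part_size": 200 * 1024 * 1024}

# Reading the gzipped source in 'rb' mode still decompresses it transparently,
# but skips decoding every line to str and re-encoding it on write.
with smart_open.open('s3://inputbucket/compressed.csv.gz', 'rb') as fin, \
     smart_open.open('s3://outputbucket/decompressed.csv', 'wb',
                     transport_params=transport_params) as fout:
    while True:
        chunk = fin.read(CHUNK)
        if not chunk:
            break
        fout.write(chunk)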
TASK
I am trying to write an AWS Lambda function that, when any bitmap file is uploaded to my AWS bucket, reads that bitmap, resizes it to a preset size, and writes it back to the same bucket it was read from.
SCENARIO
My Ruby web app PUTs a bitmap file of about 8 MB and approximately 1920x1080 pixels to my AWS bucket.
Upon upload, the image should be read by my Lambda function, resized to 350 x 350, and written back to the bucket with the same file name and key.
PROBLEM
I have no experience with Node.js, so I cannot properly write this function myself. Can anyone advise me on the steps to complete this task, or point me to a similar function that outputs a resized BMP file?
Image resizing is one of the reference use cases for Lambda. You can use the Serverless Image Resizer, which is a really robust solution, or an older version of it here.
There are literally dozens of open-source image manipulation projects you can find on GitHub. A very simple standalone Lambda that supports BMPs out of the box can be found here.
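If Node.js is the sticking point, a Lambda in Python works just as well for this. Below is a minimal sketch, assuming Pillow is packaged with the function (for example as a Lambda layer) and the function is subscribed to the bucket's s3:ObjectCreated events; it is not the Serverless Image Resizer mentioned above, and the fixed 350x350 size matches the question.

import io

import boto3
from PIL import Image  # Pillow must be bundled with the deployment package or a layer

s3 = boto3.client("s3")

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Read the original bitmap, resize it, and re-encode it as BMP.
        obj = s3.get_object(Bucket=bucket, Key=key)
        im = Image.open(io.BytesIO(obj["Body"].read()))
        out = io.BytesIO()
        im.resize((350, 350)).save(out, format="BMP")

        # Writing back to the same key re-triggers the event, so in practice you
        # would guard against recursion (e.g. skip objects already at 350x350).
        s3.put_object(Bucket=bucket, Key=key, Body=out.getvalue(), ContentType="image/bmp")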
I'm trying to upload a 160 GB file from EC2 to S3 using
s3cmd put --continue-put FILE s3://bucket/FILE
but every time uploading interrupts with the message:
FILE -> s3://bucket/FILE [part 10001 of 10538, 15MB] 8192 of 15728640 0% in 1s 6.01 kB/s failed
ERROR: Upload of 'FILE' part 10001 failed. Aborting multipart upload.
ERROR: Upload of 'FILE' failed too many times. Skipping that file.
The target bucket does exist.
What is the reason for this issue?
Are there any other ways to upload the file?
Thanks.
You can have at most 10,000 parts per multipart upload, so the upload fails on part 10001. s3cmd defaults to 15 MB parts, and a 160 GB file needs roughly 10,900 of them; using larger parts (e.g. --multipart-chunk-size-mb=64) keeps the part count under the limit.
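If switching tools is an option, boto3's managed transfer lets you set the part size explicitly. A minimal sketch (the 64 MB value is just one choice; the file, bucket, and key names are the same placeholders as in the question):

import boto3
from boto3.s3.transfer import TransferConfig

# 160 GB / 15 MB is about 10,900 parts (over the 10,000-part cap);
# 64 MB parts bring that down to roughly 2,500.
config = TransferConfig(multipart_chunksize=64 * 1024 * 1024)

s3 = boto3.client("s3")
s3.upload_file("FILE", "bucket", "FILE", Config=config)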
"huge"---is it 10s or 100s of GBs? s3 limits the object size to 5GB and uploading may fail if it exceeds the size limitation.