I'm using the puppeteer module to generate a PDF, but the PDF size is too large because of the images.
Each image on S3 needs to be compressed to roughly 20 KB before generating the PDF.
The objective is a PDF report averaging about 5 MB, containing around 300 images on average.
Which would be the better approach?
Compress the images first and then generate the PDF -> is the sharp module right for this?
Compress the PDF directly -> which service should be used to compress a PDF?
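For the first option, the numbers roughly work out: 300 images at ~20 KB each is about 6 MB of image data, which is in the ballpark of the 5 MB target, and sharp can do this kind of per-image shrinking with its resize() and jpeg({ quality }) calls. As a rough sketch of the compress-then-embed idea (shown here with Python's Pillow purely for illustration; the dimensions, quality steps and target size are assumptions, not anything from the question):

from io import BytesIO
from PIL import Image

def compress_to_target(data: bytes, target_bytes: int = 20 * 1024) -> bytes:
    # Re-encode an image as JPEG, lowering the quality until it fits the target size.
    img = Image.open(BytesIO(data)).convert('RGB')
    img.thumbnail((800, 800))  # cap the dimensions first (assumed limit)
    for quality in range(80, 15, -10):
        buf = BytesIO()
        img.save(buf, format='JPEG', quality=quality, optimize=True)
        if buf.tell() <= target_bytes:
            break
    return buf.getvalue()

The sharp equivalent would chain a resize() with jpeg({ quality: ... }) and return the buffer, so compressing the images first and then generating the PDF looks like a workable approach.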
Hello, I am using the PRDownloader library to download files, but there seems to be some limitation with the file size.
implementation 'com.mindorks.android:prdownloader:0.6.0'
When I download files with a size of 20 MB there is no problem, they download fine, but if I download a file with a size of 25 MB it doesn't download completely; it downloads a file of only 2.21 KB.
Why doesn't it let me download larger files?
How can I remove this limitation, to be able to download larger files?
Thank you.
I am importing depth data from a Kinect V2, saved as .MAT files, into my Python 3.5 code using scipy.io.loadmat. When I print the .MAT data I get a uint16 array with values ranging from 0 to 8192. This is expected, as the Kinect V2 gives 13-bit depth data. Now, when I save this as a TIFF file using
cv2.imwrite('depth_mat.tif', depth_arr) and read it back using
depth_im = tifffile.imread('depth_mat.tif'), the range of values is scaled up. In my original .MAT file the maximum value is 7995, and after saving and reading the .TIFF file the maximum value becomes 63728. This throws off my calculations for mapping Kinect depth to real-world distance. Any insight about this would help me a lot.
I have to do some image processing in between, so it is necessary to preserve the original values. Also, if tifffile.imsave() is used instead of cv2.imwrite() to save the array, the image is entirely dark.
I am using Python 3.5 on a 64-bit Windows machine.
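One way to narrow down where the roughly 8x jump happens is to round-trip the raw array and compare values numerically rather than judging by how the image looks; note that a correct uint16 image whose values only reach ~8000 out of a possible 65535 will look almost black in most viewers, so a dark image is not by itself a sign of lost data. A minimal diagnostic sketch (the .MAT variable name is an assumption):

import numpy as np
import scipy.io
import tifffile

mat = scipy.io.loadmat('depth_mat.mat')
depth_arr = mat['depth'].astype(np.uint16)  # 'depth' is a hypothetical variable name inside the .MAT file

tifffile.imwrite('depth_raw.tif', depth_arr)  # imsave() in older tifffile versions
depth_im = tifffile.imread('depth_raw.tif')

print(depth_arr.max(), depth_im.max())      # should both print the original maximum (e.g. 7995)
assert np.array_equal(depth_arr, depth_im)  # values preserved exactly

If this round trip preserves the values, the scaling seen with the cv2.imwrite() / tifffile.imread() combination can be checked on the write side, for example by reading the cv2-written file back with cv2.imread('depth_mat.tif', cv2.IMREAD_UNCHANGED) and printing its maximum.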
I am about to publish a machine learning dataset. The dataset contains about 170,000 files (PNG images, 32 px x 32 px). I first wanted to share them as a zip archive (57.2 MB). However, extracting those files takes extremely long (more than 15 minutes; I'm not sure exactly when I started).
Is there a better format to share those files?
Try .tar.xz - better compression ratio but a little slower to extract than .tar.gz
I just did some benchmarks:
Experiments / Benchmarks
I used dtrx to extract each of the archives below and time dtrx filename to measure the extraction time.
Format      File size   Time to extract
.7z         27.7 MB     > 1 h
.tar.bz2    29.1 MB     7.18 s
.tar.lzma   29.3 MB     6.43 s
.xz         29.3 MB     6.56 s
.tar.gz     33.3 MB     6.56 s
.zip        57.2 MB     > 30 min
.jar        70.8 MB     5.64 s
.tar        177.9 MB    5.40 s
Interesting. The extracted content is 47 MB. Why is the .tar more than three times the size of its content? (See the rough calculation below.)
Anyway, I think tar.bz2 might be a good choice.
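As a back-of-the-envelope answer to that question: tar writes a 512-byte header per entry and pads each file's data up to a 512-byte boundary, so ~170,000 tiny PNGs carry a lot of fixed overhead (the numbers below are rough estimates based on the figures in this thread):

files = 170000
per_file = 512 + 512              # one header block + one padded data block per small file
print(files * per_file / 1e6)     # ~174 MB, close to the observed 177.9 MB

That zero padding is also what the tar.gz suggestion below gets rid of, since long runs of zero bytes compress to almost nothing.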
Just use tar.gz at the lowest compression level (just to get rid of the tar zeros between files). PNG files are already compressed, so there is no point in trying to compress them further. (Though you can use various tools to try to minimize the size of each PNG file before putting them into the distribution.)
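If the archive happens to be built from Python anyway (an assumption; the advice above applies equally to the command-line tar), the standard-library tarfile module can do the same thing:

import tarfile

# Pack the PNGs into a .tar.gz with the cheapest gzip setting; the PNG data is
# already compressed, so level 1 mainly squeezes out tar's zero padding.
with tarfile.open('dataset.tar.gz', 'w:gz', compresslevel=1) as tar:
    tar.add('images/')  # hypothetical directory containing the 170,000 PNGs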
I am uploading some files to Amazon S3 through the aws-sdk library for Node.js. When it comes to image files, they appear to be much bigger on S3 than the body.length printed in Node.js.
E.g. I've got a file with a body.length of 7050103, but the S3 browser shows that it is:
Size: 8.38 MB (8789522 bytes)
I know that there is some metadata involved, but what metadata could take more than 1 MB?
What is the source of such a big difference? Is there a way to find out what size it would be on S3 before sending the file?
I uploaded the file via the S3 console and in that case there was actually no difference in size. I found out that the problem was in using the lwip library for rotating the image. I had a bug: I rotated even when the angle was 0, so I was rotating by 0 degrees. After such a rotation the image was bigger; I think the re-compression to JPEG ends up at a different quality or something.
We are creating a single Zip file from multiple files using ZipOutputStream (on a 32-bit JDK).
If we create the Zip file from 5 PDF files (each PDF is 1 GB), it produces a corrupt Zip file. If I create the Zip file from 4 PDF files (each PDF is 1 GB), it produces a correct Zip file.
Is there any limitation on Zip file size on a 32-bit JDK?
The original ZIP format had a number of limits (uncompressed size of a file, compressed size of a file, and total size of the archive) set at 4 GB.
More info: http://en.wikipedia.org/wiki/ZIP_(file_format)
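That limit lines up with the sizes in the question (treating 1 GB as 10^9 bytes, which is an assumption about how the PDFs were measured):

limit = 4 * 1024**3           # the classic ZIP cap: 4 GiB = 4,294,967,296 bytes
print(4 * 10**9 < limit)      # True  -> four 1 GB PDFs stay just under the cap
print(5 * 10**9 < limit)      # False -> five 1 GB PDFs exceed it, hence the corrupt archive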
The original zip format had a 4 GiB limit on various things (uncompressed size of a file, compressed size of a file, and total size of the archive), as well as a limit of 65,535 entries in a zip archive. In version 4.5 of the specification (which is not the same as v4.5 of any particular tool), PKWARE introduced the "ZIP64" format extensions to get around these limitations, increasing the limits to 16 EiB (2^64 bytes).
The File Explorer in Windows XP does not support ZIP64, but the Explorer in Windows Vista does. Likewise, some libraries, such as DotNetZip and IO::Compress::Zip in Perl, support ZIP64. Java's built-in java.util.zip supports ZIP64 from Java 7.