Say I have a really large zip file (80GB) containing one massive CSV file (> 200GB).
Is it possible to fetch a subsection of the 80GB file data, modify the central directory, and extract just that bit of data?
Background on my problem:
I have a recurring process that sums a certain column of a large zipped CSV file stashed in the cloud.
What I do today is stream the file to disk, extract it, and then read the extracted file line by line. This makes it a very disk-bound operation. Disk IS the bottleneck for sure.
Sure, I can leverage other cloud services to get what I need faster but that is not free.
I'm curious whether I can see speed gains by just taking 1 GB subsections of the zip until there's nothing left to read.
What I know:
The Zip file is stored using the deflate compression algorithm (always)
In the API I use to get the file from the cloud, I can specify a byte range to filter to. This means I can seek through the bytes of the file without hitting disk! (A sketch of such a ranged fetch follows this list.)
According to the zip file spec, there are three major parts to a zip file, in order:
1: A header describing the file and its attributes
2: The raw file data in deflated format
3: The central directory listing where each file starts and stops, and at which bytes
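For concreteness, here is a minimal sketch of such a ranged fetch in Python, assuming the file is reachable over plain HTTP from a server that honours Range headers (the URL and helper name are made up; cloud SDKs expose the same idea through a Range parameter):

```python
import requests

def fetch_byte_range(url: str, start: int, end: int) -> bytes:
    """Fetch bytes [start, end] (inclusive) without touching disk."""
    resp = requests.get(url, headers={"Range": f"bytes={start}-{end}"}, timeout=60)
    resp.raise_for_status()  # a 206 Partial Content response is what we expect
    return resp.content

# Example: the End of Central Directory record (and, for small directories,
# the central directory itself) lives near the end of the file, so something
# like the last 64 KiB is a common first fetch:
# tail = fetch_byte_range("https://example.com/big.zip", total_size - 65536, total_size - 1)
```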
What I don't know:
How the deflate algorithm works exactly. Does it jumble the file up or does it just compress things in order of the original file? If it does jumble, this approach may not be possible.
Has anyone built a tool like this already?
You can always decompress starting from the beginning, going as far as you like, keeping only the last, say, 1 GB, once you get to where you want. You cannot just start decompressing somewhere in the middle. At least not with a normal .zip file that has not been very specially prepared somehow for random access.
The central directory has nothing to do with random access of a single entry. All it can do is tell you where an entry starts and how long it is (both compressed and uncompressed).
I would recommend that you reprocess the .zip file into a .zip file with many (~200) entries, each on the order of 1 GB uncompressed. The resulting .zip file will be very close to the same size, but you can then use the central directory to pick one of the 200 entries, randomly access it, and decompress just that one.
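A minimal sketch of that reprocessing in Python, assuming the CSV can simply be cut every N lines and has no header row to worry about (the chunk size and entry names here are arbitrary):

```python
import zipfile

CHUNK_LINES = 5_000_000  # tune so each entry lands around ~1 GB uncompressed

def split_csv_into_zip(csv_path: str, zip_path: str) -> None:
    """Rewrite one huge CSV as a zip with many independently deflated entries."""
    with open(csv_path, "rt", encoding="utf-8") as src, \
         zipfile.ZipFile(zip_path, "w", compression=zipfile.ZIP_DEFLATED) as zf:
        part, buf = 0, []
        for i, line in enumerate(src, 1):
            buf.append(line)
            if i % CHUNK_LINES == 0:
                zf.writestr(f"part-{part:04d}.csv", "".join(buf))
                part, buf = part + 1, []
        if buf:
            zf.writestr(f"part-{part:04d}.csv", "".join(buf))

def sum_column(zip_path: str, entry: str, col: int) -> float:
    """Decompress exactly one entry and sum one column from it."""
    total = 0.0
    with zipfile.ZipFile(zip_path) as zf, zf.open(entry) as member:
        for raw in member:  # ZipExtFile iterates line by line
            total += float(raw.decode("utf-8").rstrip("\r\n").split(",")[col])
    return total
```

Combined with ranged reads against the cloud store, the central directory then lets you pull down and decompress only the entry you need on each pass.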
Related
A web site currently loads a big array (70,000 elements, each a line of text) from a file as a script.
Is it worth zipping it (this reduces the size from 2 MB by a factor of 6) and unzipping it in the client?
If so, what is the simplest way to do it?
I don't know whether sending the longer data takes more time than unzipping does in typical cases.
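For reference, a quick way to check the ratio and the unzip cost locally is a Python sketch like the following, assuming the array lives in a file called data.js (a made-up name):

```python
import gzip
import time

with open("data.js", "rb") as f:
    raw = f.read()

packed = gzip.compress(raw, compresslevel=6)
print(f"original: {len(raw) / 1e6:.2f} MB, gzipped: {len(packed) / 1e6:.2f} MB "
      f"({len(raw) / len(packed):.1f}x smaller)")

start = time.perf_counter()
gzip.decompress(packed)
print(f"decompressing took {(time.perf_counter() - start) * 1000:.1f} ms")
```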
I have 100Ks+ of small JSON data files in one directory (not by choice). When accessing each of them, does a flat vs. pyramid directory structure make any difference? Does it help Node.js/Nginx/filesystem retrieve them faster, if the files would be grouped by e.g. first letter, in corresponding directories?
In other words, is it faster to get baaaa.json from /json/b/ (only b*.json here) than to get it from /json/ (all files), when it is safe to assume that the subdirectories each contain 33 times fewer files? Does it make finding each file 33x faster? Or is there any disk read difference at all?
EDIT (re jfriend00's comment): I am not sure what the underlying filesystem will be yet, but let's assume an S3 bucket.
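To make the layout concrete, this is the kind of first-letter bucketing I mean (a small Python sketch; the paths are hypothetical):

```python
import os

def bucketed_path(root: str, filename: str) -> str:
    """Map baaaa.json to <root>/b/baaaa.json instead of <root>/baaaa.json."""
    return os.path.join(root, filename[0].lower(), filename)

print(bucketed_path("/json", "baaaa.json"))  # -> /json/b/baaaa.json
```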
I have 1B+ gzip files (avg. 50 KB each) and I want to upload them to an S3 server. Since I need to pay for each write operation, transferring them to S3 as-is becomes a huge cost problem. Also, those files are very similar, and I want to compress them into one large file so that compression efficiency increases too.
I'm a newbie when it comes to writing shell scripts, but I'm looking for a way where I can:
Find all .gz files,
Decompress the first 1K of them,
Compress them into a single archive,
Delete this 1K batch,
Iterate to the next 1K files.
I would appreciate it if you could help me think more creatively about how to do this. The only way in my mind is to decompress all of them and then compress them in 1K chunks, but that is not possible as I don't have the disk space for it.
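For reference, here is a rough sketch of that batching loop in Python rather than shell; BATCH_SIZE and the archive names are just assumptions, and each small file is decompressed in memory so the disk never has to hold a fully decompressed copy:

```python
import glob
import gzip
import io
import os
import tarfile

BATCH_SIZE = 1000

def batch_gzip_files(src_dir: str, out_dir: str) -> None:
    files = sorted(glob.glob(os.path.join(src_dir, "*.gz")))
    for start in range(0, len(files), BATCH_SIZE):
        batch = files[start:start + BATCH_SIZE]
        archive = os.path.join(out_dir, f"batch-{start // BATCH_SIZE:06d}.tar.gz")
        with tarfile.open(archive, "w:gz") as tar:
            for path in batch:
                # Store the raw payload so similar files share one gzip stream.
                with gzip.open(path, "rb") as f:
                    data = f.read()
                info = tarfile.TarInfo(name=os.path.basename(path)[:-3])
                info.size = len(data)
                tar.addfile(info, io.BytesIO(data))
        for path in batch:
            os.remove(path)  # only delete once the batch archive exists
```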
Test with a few files to see how much additional space is used when decompressing. Try to make more free space (move 90% of the files to another host).
When the files are similar, the compression ratio for 10% of the files will still be high.
I guess that 10 chunks would fit, but it would be tight every time you want to decompress one. So I would go for 100 chunks.
But first think about what you want to do with the data in the future.
Never use it? Delete it.
Perhaps 1 time in the far future? Glacier.
Often? Use smaller chunks so you can find the right file easier.
There seems to be quite a lot of confusion about PAR files, and I'm struggling to find an answer to this.
I have several PAR files, each containing several GB of data. Considering PAR is a type of archive file (similar to tar, I assume), I would like to extract its contents using Linux. However, I can't seem to find out how to do this. I can only find how to repair files or create a PAR file.
I am trying to use the par2 command line tool to do this.
Any help would be appreciated.
TLDR: They're not really like .tar archives - they are generally created to support other files (including archives) to protect against data damage/loss. Without any of the original data, I think it is very unlikely any data can be recovered from these files.
.par files are (if they are genuinely PAR2 files) error recovery files for supporting a set of data stored separately. PAR files are useful, because they can protect the whole of the source data without needing a complete second copy.
For example, you might choose to protect 1GB of data using 100MB of .par files in the form of 10x 10MB files. This means that if any part of the original data (up to 100MB) is damaged or lost, it can be recalculated and repaired using the .par records.
This will still work if some of the .par files are lost, but the amount of data that can be recovered cannot exceed what .par files remain.
So...given that it is rare to create par files constituting 100% of the size of the original data, unless you have some of the original data as well, you probably won't be able to recover anything from the files.
http://www.techsono.com/usenet/files/par2
In the EXTRACT documentation there's the (awesome) auto-magic support for gzipped files (which we are using).
But should I assume it won't use more than one AU? If I understand correctly, the files need to be "splittable" to spread across AUs?
Or will it split across AUs once extracted on the fly, and/or do gzipped files have an index to indicate where they can be split somehow?
Or perhaps I'm muddling the vertex concept with AUs?
This is a good question :).
In general, if the file format is splittable (e.g., basically row-oriented, with rows being less than the row-size limit, which currently is 4 MB), then large files will be split into 1 GB per vertex.
However, GZip itself is not a splittable format. Thus we cannot split a GZip file during decompression, and we end up not splitting the processing of the decompressed file either (the current framework does not provide this). As a consequence, we limit the size of a GZip file to 4 GB. If you want to scale out with GZip files, we recommend splitting the data into several GZip files and then using file sets to scale out processing.
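For illustration, here is a minimal sketch of that splitting step in Python, assuming the source is a single large CSV on local disk (LINES_PER_PART is an arbitrary choice, not a platform limit):

```python
import gzip
import itertools

LINES_PER_PART = 10_000_000

def split_into_gzip_parts(csv_path: str, out_prefix: str) -> None:
    """Write the input as out_prefix-0000.csv.gz, out_prefix-0001.csv.gz, ..."""
    with open(csv_path, "rt", encoding="utf-8") as src:
        for part in itertools.count():
            lines = list(itertools.islice(src, LINES_PER_PART))
            if not lines:
                break
            with gzip.open(f"{out_prefix}-{part:04d}.csv.gz", "wt", encoding="utf-8") as out:
                out.writelines(lines)
```

Each part then stays well under the 4 GB limit, and a file set over the parts lets the processing scale out with one vertex per file.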