There seems to be quite some confusion about PAR files and I'm struggling to find an answer to this.
I have several PAR files, each containing several GB of data. Since PAR is a type of archive file (similar to tar, I assume), I would like to extract its contents on Linux. However, I can't seem to find out how to do this; I can only find how to repair files or create a PAR file.
I am trying to use the par2 command line tool to do this.
Any help would be appreciated.
TL;DR: They're not really like .tar archives; they are generally created alongside other files (including archives) to protect against data damage or loss. Without any of the original data, I think it is very unlikely any data can be recovered from these files.
.par files are (if they are genuinely PAR2 files) error-recovery files that support a set of data stored separately. PAR files are useful because they can protect the whole of the source data without needing a complete second copy.
For example, you might choose to protect 1GB of data using 100MB of .par files in the form of 10x 10MB files. This means that if any part of the original data (up to 100MB) is damaged or lost, it can be recalculated and repaired using the .par records.
This still works if some of the .par files are lost, but the amount of data that can be recovered cannot exceed the amount of .par data that remains.
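The size constraint above can be sketched in a couple of lines (this illustrates the arithmetic only, not the real PAR2 block algorithm; the function name is made up):

```python
def can_repair(damaged_bytes, par_bytes_remaining):
    """Repair is only possible while the damaged or missing data
    does not exceed the recovery data you still have."""
    return damaged_bytes <= par_bytes_remaining

MB = 10**6
# 1 GB of data protected by 100 MB of .par files (10 x 10 MB):
assert can_repair(80 * MB, 100 * MB)      # 80 MB damaged, all par files present
assert not can_repair(80 * MB, 50 * MB)   # same damage, but 5 of the 10 par files lost
```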
So, given that it is rare to create PAR files amounting to 100% of the size of the original data, unless you also have some of the original data, you probably won't be able to recover anything from these files.
http://www.techsono.com/usenet/files/par2
I have 100Ks+ of small JSON data files in one directory (not by choice). When accessing each of them, does a flat vs. pyramid directory structure make any difference? Does it help Node.js/Nginx/filesystem retrieve them faster, if the files would be grouped by e.g. first letter, in corresponding directories?
In other words, is it faster to get baaaa.json from /json/b/ (only b*.json there) than from /json/ (all files), when it is safe to assume that the subdirectories each contain 33 times fewer files? Does it make finding each file 33x faster, or is there any disk-read difference at all?
EDIT (replying to jfriend00's comment): I am not sure what the underlying filesystem will be yet. But let's assume an S3 bucket.
I've 1B+ gzip files (avg. 50 KB each) and I want to upload them to S3. Since I need to pay for each write operation, transferring them to S3 becomes a huge cost problem. Those files are also very similar, and I want to bundle them into one large file so that compression efficiency increases too.
I'm a newbie at writing shell scripts, but I'm looking for a way to:
Find all .gz files,
Decompress the first 1K of them,
Compress them into a single file,
Delete this 1K batch,
Move on to the next 1K files.
I'd appreciate any help thinking about this more creatively. The only way that comes to mind is decompressing all of them and recompressing them in 1K chunks, but that is not possible as I don't have the disk space for it.
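The batch loop described above can be sketched locally in Python (paths, the function name, and the batch size are illustrative; the S3 upload step is left out, and tar.gz is only a stand-in for whatever combined format you settle on):

```python
import gzip
import io
import os
import tarfile
from pathlib import Path

def bundle_gz_files(src_dir, out_dir, batch_size=1000):
    """Group .gz files into batches of `batch_size`; decompress each file,
    append its raw contents to one .tar.gz per batch, then delete the batch
    before moving on, so only one batch of extra disk space is ever needed."""
    files = sorted(Path(src_dir).glob("*.gz"))
    os.makedirs(out_dir, exist_ok=True)
    for i in range(0, len(files), batch_size):
        batch = files[i:i + batch_size]
        out = Path(out_dir) / f"batch_{i // batch_size:06d}.tar.gz"
        with tarfile.open(out, "w:gz") as tar:
            for f in batch:
                raw = gzip.decompress(f.read_bytes())   # undo the per-file gzip
                info = tarfile.TarInfo(f.stem)          # "a.json.gz" -> "a.json"
                info.size = len(raw)
                tar.addfile(info, io.BytesIO(raw))
        for f in batch:
            f.unlink()   # free disk space before touching the next batch
```

Recompressing the concatenated plain files lets the compressor exploit the similarity between files, which per-file gzip cannot.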
Test with a few files to see how much additional space is used when decompressing them. Try to make more free space (move 90% of the files to another host).
When files are similar, the compression ratio of 10% of the files will be high.
I guess that 10 chunks would fit, but it would be tight every time you want to decompress one, so I would go for 100 chunks.
But first think what you want to do with the data in the future.
Never use it? Delete it.
Perhaps 1 time in the far future? Glacier.
Often? Use smaller chunks so you can find the right file easier.
Say I have a really large zip file (80GB) containing one massive CSV file (> 200GB).
Is it possible to fetch a subsection of the 80GB file data, modify the central directory, and extract just that bit of data?
Background on my problem:
I have a cyclic process that does a summing on a certain column of a large zipped CSV file stashed in the cloud.
What I do today is stream the file to my disk, extract it, and then read the file line by line. This makes it a very disk-bound operation. Disk IS the bottleneck for sure.
Sure, I can leverage other cloud services to get what I need faster but that is not free.
I'm curious whether I can see speed gains by just taking 1 GB subsections of the zip until there's nothing left to read.
What I know:
The Zip file is stored using the deflate compression algorithm (always)
In the API I use to get the file from the cloud, I can specify a byte range to filter to. This means I can seek through the bytes of a file without hitting disk!
According to the zip file spec there are three major parts to a zip file, in order:
1: A header describing the file and its attributes
2: The raw file data in deflated format
3: The central directory, listing where each file starts and stops and at which bytes
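If that layout holds, a byte range at the end of the file is enough to navigate it: the End of Central Directory (EOCD) record sits within the last ~65 KB and stores the offset and size of the central directory. A rough sketch of that lookup (no ZIP64 handling, so it only covers archives under 4 GB; field offsets follow the zip APPNOTE, the function name is invented):

```python
import struct

EOCD_SIG = b"PK\x05\x06"  # End of Central Directory record signature

def locate_central_directory(tail: bytes):
    """Given the tail of a zip file (the EOCD lives in the final
    22 + 65535 bytes), return (cd_offset, cd_size, total_entries)."""
    pos = tail.rfind(EOCD_SIG)
    if pos == -1:
        raise ValueError("EOCD record not found; not a (non-ZIP64) zip?")
    # Skip the 4-byte signature plus 6 bytes of disk-number fields, then read:
    # total entry count (2 bytes), CD size (4 bytes), CD offset (4 bytes).
    entries, cd_size, cd_offset = struct.unpack_from("<HII", tail, pos + 10)
    return cd_offset, cd_size, entries
```

With a cloud API that supports byte-range GETs, you could fetch just that tail, parse the central directory it points at, and then range-request individual entries.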
What I don't know:
How the deflate algorithm works exactly. Does it jumble the file up or does it just compress things in order of the original file? If it does jumble, this approach may not be possible.
Has anyone built a tool like this already?
You can always decompress starting from the beginning, going as far as you like, keeping only the last, say, 1 GB, once you get to where you want. You cannot just start decompressing somewhere in the middle. At least not with a normal .zip file that has not been very specially prepared somehow for random access.
The central directory has nothing to do with random access of a single entry. All it can do is tell you where an entry starts and how long it is (both compressed and uncompressed).
I would recommend that you reprocess the .zip file into a .zip file with many (~200) entries, each on the order of 1 GB uncompressed. The resulting .zip file will be very close to the same size, but you can then use the central directory to pick one of the 200 entries, randomly access it, and decompress just that one.
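With the archive reprocessed that way, each entry is an independent deflate stream, so the central directory really can take you straight to one chunk. A small-scale sketch using Python's zipfile (entry names are invented; in the real setup each entry would be ~1 GB and its byte range would come from the cloud API rather than memory):

```python
import io
import zipfile

def read_one_entry(zip_bytes: bytes, name: str) -> bytes:
    """Decompress a single entry of a multi-entry zip; only that entry's
    deflate stream is read, not the whole archive."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as z:
        with z.open(name) as entry:
            return entry.read()

# Build a zip with several independent entries (stand-ins for 1 GB chunks):
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
    for i in range(5):
        z.writestr(f"chunk_{i:03d}.csv", f"col\n{i}\n" * 10)

chunk = read_one_entry(buf.getvalue(), "chunk_003.csv")
```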
I need to write a Spark app that uses temporary files.
I need to download many, many large files, read each with some legacy code, do some processing, delete the file, and write the results to a database.
The files are on S3 and take a long time to download. However, I can do many at once, so I want to download a large number in parallel. The legacy code reads from the file system.
I don't think I can avoid creating temporary files. What are the rules about Spark code reading and writing local files?
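I can't speak for every cluster setup, but a common pattern for this is to do the whole download/read/delete cycle inside a per-partition function, so each task uses its executor's local scratch space and cleans up after itself. A sketch with the S3 download and the legacy reader stubbed out as parameters (all names invented):

```python
import os
import tempfile

def process_partition(keys, download, legacy_read):
    """Runs on one executor: for each key, download to a local temp file,
    hand the path to the file-based legacy reader, then delete the file."""
    results = []
    for key in keys:
        fd, path = tempfile.mkstemp(suffix=".dat")  # executor-local scratch file
        try:
            with os.fdopen(fd, "wb") as f:
                f.write(download(key))        # e.g. an S3 GET under the hood
            results.append(legacy_read(path)) # legacy code wants a filesystem path
        finally:
            os.remove(path)                   # always clean up the temp file
    return results

# With Spark this would be wired up roughly as:
#   sc.parallelize(keys, numSlices=parallelism) \
#     .mapPartitions(lambda it: process_partition(it, download, legacy_read))
```

Partition count then controls how many downloads run in parallel across the cluster.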
This must be a common issue, but I haven't found any threads or docs that talk about it. Can someone give me a pointer?
Many thanks
P
Hi, I have a spec to fetch files from a server and predict the unused files in a directory. When I fetch the files from the server, it returns huge files, and the problem is that CPU usage increases while I am fetching the large files. I would like to avoid that. If anyone knows how to avoid this situation, please share it with me; it would be very helpful.
Thanks
You can split your large file on the server into several smaller pieces, fetch some metadata about the number of pieces, their sizes, etc., and then fetch them one by one from your client C# code, joining the pieces in binary mode back into the larger file.
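The split-and-join logic, sketched in Python rather than C# (piece size and names are illustrative; the point is that concatenating the pieces byte-for-byte in order reproduces the original file exactly):

```python
def split_bytes(data: bytes, piece_size: int):
    """Cut a blob into fixed-size pieces; the last piece may be shorter."""
    return [data[i:i + piece_size] for i in range(0, len(data), piece_size)]

def join_bytes(pieces):
    """Concatenate the pieces in order to rebuild the original blob."""
    return b"".join(pieces)

blob = bytes(range(256)) * 10      # a 2,560-byte stand-in for the large file
pieces = split_bytes(blob, 1000)   # metadata to ship: 3 pieces, sizes 1000/1000/560
assert join_bytes(pieces) == blob  # byte-for-byte round trip
```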