I wanted to start a thread on this. A lot of people ask how to do it in a specific context or with a specific language, but I'm wondering what the best strategy is in general.
I see two main practices:
Load small chunks (like 2048 samples) of the file into a buffer. This seems the most straightforward, but it hits the disk a lot, so I suspect it is not the best (there's a minimal sketch of this at the end of this post).
Load the whole file into one big buffer. Gentler on the hard drive, but it needs a lot of RAM if you use several long files. And if your file is very long, or has a lot of channels, I imagine the index variable could overflow: for example, if it's a 16-bit integer it may not be able to reach the end of the file (or am I being paranoid?).
And I'm thinking about hybrid approaches, like:
Using very big buffers without loading the whole file.
Storing the file on the hard drive in a custom format that is optimized for quick access.
So, what do you think? How do you deal with this?
I don't really care which is "best"; I'm more interested in the pros and cons of each.
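To make the first option concrete, here is a minimal sketch of chunked reading, assuming a headerless file of 16-bit PCM samples; the file name and the process() call are placeholders, and note the 64-bit sample counter (a 16-bit index really would overflow almost immediately):

    // Approach 1: stream the file in small chunks instead of loading it all.
    #include <cstdint>
    #include <cstdio>
    #include <fstream>
    #include <vector>

    int main() {
        std::ifstream in("audio.raw", std::ios::binary);   // placeholder file name
        if (!in) return 1;

        const std::size_t kChunkSamples = 2048;            // chunk size from the question
        std::vector<int16_t> buffer(kChunkSamples);
        std::uint64_t samplesRead = 0;                     // 64-bit, so no overflow worries

        for (;;) {
            in.read(reinterpret_cast<char*>(buffer.data()),
                    static_cast<std::streamsize>(buffer.size() * sizeof(int16_t)));
            std::size_t got = static_cast<std::size_t>(in.gcount()) / sizeof(int16_t);
            if (got == 0) break;                           // nothing left to read
            // process(buffer.data(), got);                // placeholder for the audio callback
            samplesRead += got;
        }
        std::printf("read %llu samples\n",
                    static_cast<unsigned long long>(samplesRead));
    }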
Answering part of my own question (the part about hybrid solutions).
Audacity uses a custom BlockFiles format for storage and playback. It combines both the idea of big (bigger-than-callback) buffers, which are around 1 MB, and the idea of a custom file type (.aup).
"BlockFiles balance two conflicting forces. We can insert and delete audio without excessive copying, and during playback we are guaranteed to get reasonably large chunks of audio with each request to the disk. The smaller the blocks, the more potential disk requests to fetch the same amount of audio data; the larger the blocks, the more copying on insertions and deletions." (from : http://www.aosabook.org/en/audacity.html)
From what I've read, it was primarly designed for speeding up the edition of very long files (for example inserting data at the beginning without having to move everything after).
But for playback of relatively short audio data (< 1 hour) I guess putting everything in RAM is just fine.
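To illustrate the block idea (this is just a rough sketch, not Audacity's actual code): keep the audio in a chain of roughly 1 MB blocks, so that an insertion near the front only touches one block instead of shifting the whole file.

    // Hypothetical block-based track: audio lives in a list of fixed-size blocks.
    #include <algorithm>
    #include <cstdint>
    #include <list>
    #include <vector>

    using Block = std::vector<int16_t>;
    constexpr std::size_t kBlockSamples = 512 * 1024;   // ~1 MB of 16-bit samples

    struct BlockTrack {
        std::list<Block> blocks;

        // Append samples, filling the last block before starting a new one.
        void append(const int16_t* data, std::size_t count) {
            std::size_t i = 0;
            while (i < count) {
                if (blocks.empty() || blocks.back().size() == kBlockSamples)
                    blocks.emplace_back();
                Block& b = blocks.back();
                std::size_t take = std::min(kBlockSamples - b.size(), count - i);
                b.insert(b.end(), data + i, data + i + take);
                i += take;
            }
        }
    };

Inserting in the middle would then split one block in two and link the new data in between, leaving every other block untouched, which is the balance the quote above describes.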
I have a player that plays encrypted video files and works like this:
I open an encrypted video file with it
it decrypts the video file and writes it to memory
and then plays the file from memory
I want to copy the decrypted video file from memory and play it with a normal video player like VLC, so I created a memory dump with Task Manager and hoped to find the video file in it. Sadly, I don't know enough to locate a video file in a large chunk of bits from memory. I tried to find MP4 patterns in a hex editor and tried every solution I found online, but nothing worked for me, so I hoped someone here might have an idea and be willing to help me get this done.
I uploaded its memory dump here (taken after opening a short encrypted video with it).
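In case it helps: a normal MP4 file has an "ftyp" box near its start, so a crude first check is to scan the dump for that four-byte signature. A minimal sketch (the dump file name is a placeholder, and it loads the whole dump into memory, so it is only suitable for a small test dump); finding the string doesn't guarantee a playable file is there, but not finding it is a strong hint that no complete MP4 ever sat in memory:

    // Crude scan of a memory dump for the MP4 "ftyp" box signature.
    #include <algorithm>
    #include <cstdio>
    #include <fstream>
    #include <iterator>
    #include <string>
    #include <vector>

    int main() {
        std::ifstream in("dump.bin", std::ios::binary);   // placeholder file name
        if (!in) return 1;

        std::vector<char> data((std::istreambuf_iterator<char>(in)),
                               std::istreambuf_iterator<char>());
        const std::string needle = "ftyp";

        for (std::size_t i = 0; i + needle.size() <= data.size(); ++i) {
            if (std::equal(needle.begin(), needle.end(), data.begin() + i))
                std::printf("possible MP4 header at offset 0x%zx\n", i);
        }
    }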
Most probably the software doesn't decrypt the whole video file in one go, but rather in a streaming fashion. That makes it impossible to catch a moment when the whole decoded video is present in the memory dump.
If the player software is open source, compile it with debug symbols and run it under a debugger. Otherwise, resort to reverse engineering.
I don't think the question is on-topic for Stack Overflow in general, not least because it asks specifically about reversing a software solution intended for digital rights management. However, I will still leave an answer.
First of all, as the comments suggest, the topic in question is the reversal of a specific solution from a commercial provider. The ability to recover a media file from a memory dump depends heavily on how that solution is implemented and on the methods the provider used to complicate reversal. Only the simplest, most straightforward solutions are easy to reverse; the more effort the developer puts into covering their traces, the harder (exponentially harder) the reversal becomes.
There is little chance of finding the original file in full in memory (through memory dump analysis); that would be unlikely for any media playback application, even one that does no decryption at all. Media playback is typically streaming: data is loaded from disk, storage, the network, etc. as needed for playback, not as a full download. Decryption is applied only to the pieces of data needed at that moment, and a decent DRM-enabled application will immediately erase the ephemeral clear data once it is no longer needed. That is, a memory dump would, at best, contain a ridiculously small amount of media data.
To capture or restore the original media file, one would typically have to insert themselves as a middleman into some media-streaming-related process and copy the data as it is being streamed during playback. A static memory dump is of little help here.
I need to implement an external multithreaded sort. I don't have experience with multithreaded programming, I'm not sure my algorithm is good enough, and I don't know how to complete it. My idea is:
A thread reads the next block of data from the input file
Sorts it using a standard algorithm (std::sort)
Writes it to another file
After this, I have to merge these files. How should I do this?
If I wait until the input file has been entirely processed before merging, I end up with a lot of temporary files.
If I try to merge files right after each sort, I can't come up with an algorithm that avoids merging files of very different sizes, which will lead to O(N^2) complexity.
Also, I suppose this is a very common task, but I cannot find a good ready-made algorithm on the internet. I would be very grateful for such a link, especially with a C++ implementation.
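For the merge step, the standard approach in external sorting is a single k-way merge over all the temporary files using a min-heap, which sidesteps the problem of merging files of very different sizes: every file is consumed exactly once, in one pass. A rough sketch, assuming the temporary files are already sorted and contain one integer per line (file names and record format are placeholders):

    // Final phase of an external sort: k-way merge of sorted temp files.
    #include <fstream>
    #include <functional>
    #include <queue>
    #include <string>
    #include <vector>

    struct HeapItem {
        long long value;
        std::size_t file;                     // which temp file the value came from
        bool operator>(const HeapItem& o) const { return value > o.value; }
    };

    void mergeSortedFiles(const std::vector<std::string>& names,
                          const std::string& outName) {
        std::vector<std::ifstream> files;
        std::priority_queue<HeapItem, std::vector<HeapItem>,
                            std::greater<HeapItem>> heap;          // min-heap by value

        for (std::size_t i = 0; i < names.size(); ++i) {
            files.emplace_back(names[i]);
            long long v;
            if (files[i] >> v) heap.push({v, i});   // prime the heap with one value per file
        }

        std::ofstream out(outName);
        while (!heap.empty()) {
            HeapItem top = heap.top();
            heap.pop();
            out << top.value << '\n';
            long long v;
            if (files[top.file] >> v) heap.push({v, top.file});   // refill from the same file
        }
    }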
Well, the answer isn't that simple, and it actually depends on many factors, amongst them the number of items you wish to process, and the relative speed of your storage system and CPUs.
But the question is why use multithreading at all here. Is the data too big to be held in memory? Are there so many items that even a quicksort can't sort them fast enough? Do you want to take advantage of multiple processors or cores? I don't know.
I would suggest that you first write some test routines to measure the time needed to read and write the input and output files, as well as the CPU time needed for sorting. Please note that I/O is generally A LOT slower than CPU execution (they usually aren't even comparable), and I/O may not be efficient if you read data in parallel (there is one disk head which has to move back and forth, so reads are effectively serialized; even a solid-state drive is still a single device with limited input and output channels). That is, the additional overhead of reading and writing temporary files may more than cancel out any benefit from multithreading. So I would say: first write an algorithm that reads the whole file into memory, sorts it and writes it out, and add some timers to check the relative speeds. If I/O is only around 30% of the total time (yes, that little!), multithreading is definitely not worth it, because with all that reading, merging and writing of temporary files this share will rise a lot more, so a solution that processes all the data at once would be preferable.
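Something along these lines would do as a first measurement; it reads, sorts and writes in one go and prints how long each phase took (file names and the one-number-per-line format are just assumptions):

    // Rough timing harness: how long do read, sort and write each take?
    #include <algorithm>
    #include <chrono>
    #include <cstdio>
    #include <fstream>
    #include <vector>

    int main() {
        using Clock = std::chrono::steady_clock;

        auto t0 = Clock::now();
        std::vector<long long> data;
        std::ifstream in("input.txt");        // placeholder input file
        long long v;
        while (in >> v) data.push_back(v);
        auto t1 = Clock::now();

        std::sort(data.begin(), data.end());  // pure CPU work
        auto t2 = Clock::now();

        std::ofstream out("output.txt");      // placeholder output file
        for (long long x : data) out << x << '\n';
        auto t3 = Clock::now();

        auto ms = [](Clock::time_point a, Clock::time_point b) {
            return std::chrono::duration_cast<std::chrono::milliseconds>(b - a).count();
        };
        std::printf("read: %lld ms, sort: %lld ms, write: %lld ms\n",
                    static_cast<long long>(ms(t0, t1)),
                    static_cast<long long>(ms(t1, t2)),
                    static_cast<long long>(ms(t2, t3)));
    }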
In conclusion, I don't see why you would use multithreading here; the only reason, in my opinion, would be if the data actually arrived in blocks, but even then keep in mind my considerations above about relative I/O and CPU speeds and the additional overhead of reading and writing temporary files. And a hint: your file access must be very efficient, e.g. reading and writing in larger blocks through application buffers rather than one item at a time (this saves on system calls); otherwise performance may suffer badly, especially if the file(s) are stored on a machine other than yours (e.g. a server).
Hope you find my suggestions useful.
Hi, sharp minds!
I need your expert guidance in making some choices.
The situation is like this:
1. I have approx. 500 flat files, each containing from 100 to 50,000 records that have to be processed.
2. Each record in the files mentioned above has to be replaced using a value from a separate huge file (2-15 GB) containing 100-200 million entries.
So I thought I would do the processing using multiple cores: one file per thread/fork.
Is that a good idea, given that each thread needs to read from the same huge file? Loading it into memory is a bit of a problem due to its size. Using file::tie is an option, but does that work with threads/forks?
I need your advice on how to proceed.
Thanks
Yes, of course, using multiple cores for a multi-threaded application is a good idea; that's what those cores are for. Though it sounds like your problem involves heavy I/O, so you might not use that much CPU anyway.
Also, since you are only going to read that big file, tie should work perfectly; I haven't heard of problems with that. But if you are going to search that big file for every record in your smaller files, then I guess it will take a long time no matter how many threads you use. If the data in the big file can be indexed by some key, then I would advise putting it in some NoSQL database and accessing it from your program. That would probably speed up your task even more than using multiple threads/cores.
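The question is about Perl, but just to illustrate the shape of the "index it by key" idea, here is a hedged C++ sketch that loads the huge file once into an embedded keyed store (SQLite is used only as a convenient stand-in for whatever database you pick; the table name and the key/value line format are made up):

    // One-time build of a keyed index over the huge lookup file.
    #include <fstream>
    #include <string>
    #include <sqlite3.h>

    int main() {
        sqlite3* db = nullptr;
        if (sqlite3_open("lookup.db", &db) != SQLITE_OK) return 1;

        sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS map(k TEXT PRIMARY KEY, v TEXT)",
                     nullptr, nullptr, nullptr);
        sqlite3_exec(db, "BEGIN", nullptr, nullptr, nullptr);

        sqlite3_stmt* ins = nullptr;
        sqlite3_prepare_v2(db, "INSERT OR REPLACE INTO map VALUES(?, ?)",
                           -1, &ins, nullptr);

        // Assumes whitespace-separated "key value" lines in the huge file.
        std::ifstream big("huge_file.txt");            // placeholder file name
        std::string key, value;
        while (big >> key >> value) {
            sqlite3_bind_text(ins, 1, key.c_str(), -1, SQLITE_TRANSIENT);
            sqlite3_bind_text(ins, 2, value.c_str(), -1, SQLITE_TRANSIENT);
            sqlite3_step(ins);
            sqlite3_reset(ins);
        }
        sqlite3_finalize(ins);
        sqlite3_exec(db, "COMMIT", nullptr, nullptr, nullptr);
        sqlite3_close(db);
    }

Each worker (thread or forked process) can then open lookup.db read-only and query by key instead of scanning the 2-15 GB file for every record.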
I'd like to know if there's a way to make a zip file, or any other compressed file (tar, gz, etc.), that will extract as quickly as possible. I'm just trying to move one folder to another computer, so I'm not concerned with the size of the file. However, I'm zipping up a large folder (~100 MB), and I was wondering if there's a method to extract a zip file more quickly, or if another format can decompress files faster.
Thanks!
The short answer is that compression is always a trade-off between speed and size: faster compression usually means a larger result, and unless you're using floppy disks to transfer the data, the time you gain by using a faster compression method is paid back as extra network time hauling the larger data about. Having said that, the speed and compression ratio of different methods vary depending on the structure of the file(s) you are compressing.
You also have to consider the availability of software: is it worth spending the time downloading and compiling a compression program? I guess if it's worth the time waiting for an answer here, then either you're using an RFC 1149 network or you're going to be doing this a lot.
In which case the answer is simple: test the programs yourself using a representative dataset.
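A quick-and-dirty way to run such a test, assuming the archives already exist and the usual command-line tools are installed (the commands and archive names below are only examples; substitute whatever archivers you actually use):

    // Time extraction commands on your own representative archive.
    #include <chrono>
    #include <cstdio>
    #include <cstdlib>
    #include <string>
    #include <vector>

    int main() {
        std::vector<std::string> commands = {
            "unzip -o archive.zip",        // placeholder archives and tools
            "tar xzf archive.tar.gz",
        };
        for (const std::string& cmd : commands) {
            auto t0 = std::chrono::steady_clock::now();
            int rc = std::system(cmd.c_str());
            auto t1 = std::chrono::steady_clock::now();
            auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
            std::printf("%s -> exit %d, %lld ms\n",
                        cmd.c_str(), rc, static_cast<long long>(ms));
        }
    }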
I have a program which reads data from two text files and then saves the result to another file. Since there is a lot of data to be read and written, which causes a performance hit, I want to parallelize the reading and writing operations.
My initial thought is to use two threads as an example: one thread reads/writes from the beginning of the file, and the other reads/writes from the middle. Since my files are formatted as lines, not bytes (each line may contain a different number of bytes), seeking by byte does not work for me. The only solution I could think of is to use getline() to skip over the preceding lines first, which is probably not efficient.
Is there any good way to seek to a specified line in a file? Or do you have any other ideas for parallelizing file reading and writing?
Environment: Win32, C++, NTFS, Single Hard Disk
Thanks.
-Dbger
Generally speaking, you do NOT want to parallelize disk I/O. Hard disks do not like random I/O, because they have to continuously seek around to get to the data. Assuming you're not using RAID, and you're using hard drives as opposed to some solid-state memory, you will see severe performance degradation if you parallelize I/O (and even with those technologies, you can still see some performance degradation when doing lots of random I/O).
To answer your second question: there really isn't a good way to seek to a certain line in a file; you can only explicitly seek to a byte offset using the read function (see this page for more details on how to use it).
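If you really do need line-based seeking (the caveats about parallel disk I/O still apply), one common workaround is to make a single sequential pass that records the byte offset of every line start; after that, any thread can jump straight to line N with an ordinary seek. A minimal sketch (the file name is a placeholder):

    // Build a line-start offset index, then seek directly to a given line.
    #include <cstdio>
    #include <fstream>
    #include <string>
    #include <vector>

    std::vector<std::streamoff> buildLineIndex(const std::string& path) {
        std::vector<std::streamoff> offsets;
        std::ifstream in(path, std::ios::binary);
        std::string line;
        std::streamoff pos = 0;
        while (in) {
            offsets.push_back(pos);                   // offset where the next line starts
            if (!std::getline(in, line)) { offsets.pop_back(); break; }
            pos = in.tellg();
            if (pos == -1) break;                     // tellg fails once EOF has been hit
        }
        return offsets;
    }

    int main() {
        std::vector<std::streamoff> index = buildLineIndex("data.txt");  // placeholder
        if (index.empty()) return 1;

        std::ifstream in("data.txt", std::ios::binary);
        std::size_t target = index.size() / 2;        // e.g. start at the middle line
        in.seekg(index[target]);
        std::string line;
        if (std::getline(in, line))
            std::printf("line %zu: %s\n", target, line.c_str());
    }

Building the index is itself a full sequential read, of course, so this only removes the cost of repeatedly skipping lines with getline(); it doesn't make parallel access to one disk any faster.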
Queuing multiple reads and writes won't help when you're running against one disk. If your app also does a lot of CPU work, then you could do your reads and writes asynchronously and let the CPU work while the disk I/O occurs in the background. Alternatively, get a second physical hard drive: read from one, write to the other. For modestly sized data sets that's often effective and quite a bit cheaper than writing code.
This isn't really an answer to your question but rather a re-design (which we all hate but can't help doing). As already mentioned, trying to speed up I/O on a hard disk with multiple threads probably won't help.
However, it might be possible to use another approach depending on data sensitivity, throughput needs, data size, etc. It would not be difficult to create a structure in memory that maintains a picture of the data and allows easy/fast updates of the lines of text anywhere in the data. You could then use a dedicated thread that simply monitors that structure and whose job it is to write the data to disk. Writing data sequentially to disk can be extremely fast; it can be much faster than seeking randomly to different sections and writing it in pieces.
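A hedged sketch of that dedicated-writer idea: worker threads hand finished lines to one thread that owns the output file and writes them sequentially (the queue contents and file name are placeholders):

    // Single writer thread drains a queue and writes sequentially to disk.
    #include <condition_variable>
    #include <fstream>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>

    class LineWriter {
    public:
        explicit LineWriter(const std::string& path)
            : out_(path), worker_(&LineWriter::run, this) {}

        ~LineWriter() {
            {
                std::lock_guard<std::mutex> lock(m_);
                done_ = true;
            }
            cv_.notify_one();
            worker_.join();                    // flushes whatever is still queued
        }

        void push(std::string line) {
            {
                std::lock_guard<std::mutex> lock(m_);
                queue_.push(std::move(line));
            }
            cv_.notify_one();
        }

    private:
        void run() {
            std::unique_lock<std::mutex> lock(m_);
            for (;;) {
                cv_.wait(lock, [this] { return done_ || !queue_.empty(); });
                while (!queue_.empty()) {
                    std::string line = std::move(queue_.front());
                    queue_.pop();
                    lock.unlock();             // write without holding the lock
                    out_ << line << '\n';
                    lock.lock();
                }
                if (done_) return;
            }
        }

        std::ofstream out_;
        std::mutex m_;
        std::condition_variable cv_;
        std::queue<std::string> queue_;
        bool done_ = false;
        std::thread worker_;                   // started last, after the members it uses
    };

    int main() {
        LineWriter writer("result.txt");       // placeholder output file
        for (int i = 0; i < 1000; ++i)
            writer.push("line " + std::to_string(i));
    }   // destructor drains the queue and joins the writer thread

In-memory updates then never block on the disk for long, and the file itself is written in one sequential stream, which is exactly the fast case the paragraph above describes.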