Byte downloading via torrent? - bittorrent

Is it possible to download only a few bytes using torrents? E.g. download all ID3 headers from all MP3 files in a torrent.

No.
But you can write a script that downloads only the pieces containing the part of the file that holds the data you want, and then extract it from those pieces. It is a fairly involved process, though: the piece size depends on how the torrent was created (larger torrents tend to use larger pieces), so you can end up downloading a whole song just to get its header. Whether that is acceptable depends on your application.
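To give a feel for the piece arithmetic such a script has to do, here is a rough sketch (illustrative only; a real script would take pieceLength and the per-file offsets from the .torrent metainfo, and the names and numbers below are made up):

// Map a byte range inside one file of a torrent to the piece indices
// that would have to be downloaded.
interface TorrentLayout {
  pieceLength: number;   // piece size declared in the metainfo
  fileOffset: number;    // byte offset of the file within the torrent payload
}

function piecesForRange(layout: TorrentLayout, start: number, length: number): number[] {
  // Absolute offsets of the wanted bytes within the whole payload.
  const absStart = layout.fileOffset + start;
  const absEnd = absStart + length - 1;

  const firstPiece = Math.floor(absStart / layout.pieceLength);
  const lastPiece = Math.floor(absEnd / layout.pieceLength);

  const pieces: number[] = [];
  for (let i = firstPiece; i <= lastPiece; i++) pieces.push(i);
  return pieces;
}

// Example: the 10-byte ID3v2 header of an MP3 that starts 40 MiB into the
// payload, with 4 MiB pieces, needs only piece 10 - but that is still a
// 4 MiB download for 10 bytes of metadata.
console.log(piecesForRange({ pieceLength: 4 * 1024 * 1024, fileOffset: 40 * 1024 * 1024 }, 0, 10));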

No, there is no such facility in existing torrent clients; you would have to build your own torrent client to do that.

Related

Simplest format to archive a directory

I have a script that needs to work on multiple platforms and machines. Some of those machines don't have any available archiving software (e.g. zip, tar). I can't download any software onto these machines.
The script creates a directory containing output files. I need to package all those files into a single file so I can download it easily.
What is the simplest possible archiving format to implement, so I can easily roll my own implementation in the script? It doesn't have to support compression.
I could make up something ad-hoc, e.g.
file1 base64EncodedContents
dir1/file1 base64EncodedContents
etc.
However, if a suitable format already exists, it would save me having to roll my own packing and unpacking - I'd only need the packing - which would be nice. Bonus points if it's zip compatible, so that I can zip it with compression where possible, implement my own uncompressed version otherwise, and not have to worry about which one it is on the other side.
The tar archive format is extremely simple - simple enough that I was able to implement a tar archiver in powershell in a couple of hours.
It consists of a sequence of file header, file data, file header, file data etc.
The header is plain ASCII (the numeric fields are stored as octal text), so it doesn't require any bit manipulation - you can literally append strings. Once you've written the header, you append the file bytes and pad with NUL characters until the total is a multiple of 512 bytes. You then repeat for the next file.
Wikipedia has more details on the exact format: https://en.wikipedia.org/wiki/Tar_(computing).
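To give an idea of how little is involved, here is a minimal sketch of writing one tar entry in TypeScript/Node (a sketch only: it assumes Node's Buffer API, hard-codes the mode and owner fields, and takes the field offsets from the ustar layout described on that page):

// Build one tar entry: a 512-byte ASCII header followed by the file data,
// padded with NULs to the next 512-byte boundary.
function octal(value: number, width: number): Buffer {
  // Numeric fields are octal text, NUL-terminated, padded to the field width.
  return Buffer.from(value.toString(8).padStart(width - 1, "0") + "\0", "ascii");
}

function tarEntry(name: string, data: Buffer): Buffer {
  const header = Buffer.alloc(512, 0);
  header.write(name, 0, 100, "ascii");                         // file name
  octal(0o644, 8).copy(header, 100);                           // mode (placeholder)
  octal(0, 8).copy(header, 108);                               // uid
  octal(0, 8).copy(header, 116);                               // gid
  octal(data.length, 12).copy(header, 124);                    // file size
  octal(Math.floor(Date.now() / 1000), 12).copy(header, 136);  // mtime
  header.fill(" ", 148, 156);                                  // checksum field counts as spaces while summing
  header.write("0", 156, 1, "ascii");                          // typeflag: regular file
  header.write("ustar\0" + "00", 257, 8, "ascii");             // ustar magic + version

  let sum = 0;
  for (const byte of header) sum += byte;
  header.write(sum.toString(8).padStart(6, "0") + "\0 ", 148, 8, "ascii");

  const padding = (512 - (data.length % 512)) % 512;
  return Buffer.concat([header, data, Buffer.alloc(padding, 0)]);
}

// An archive is just the entries concatenated, ended by two zero-filled
// 512-byte blocks.
const archive = Buffer.concat([
  tarEntry("dir1/file1.txt", Buffer.from("hello\n")),
  Buffer.alloc(1024, 0),
]);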

Does ActiveStorage use the checksum for anything?

In a very real sense, my question is actually 'can I skip generating a checksum', but answering that question rests on the above question.
To give you some background, I'm (finally) converting from Paperclip to ActiveStorage, and one of the pains of my particular conversion is that I'm storing a decent number of fairly large files -- in addition to normal-sized thumbnail images, I'm also storing large multimedia files, some in excess of 10 GB (currently poking at a 15 GB file).
The basic conversion process has me downloading each file to generate a checksum, plus a few other minor details that could be done with a HEAD request instead of downloading the full file. We also copy the file from its old 'home' to its new 'home', but that is done as an S3-to-S3 copy and doesn't take as long as downloading and uploading.
I'd love to skip the download & generate checksum process -- or at least, put it off for another day, as a cleanup step that isn't important to what we're actually doing.
So the question is: does the checksum actually do anything in ActiveStorage, or is it just a 'nice-to-have' feature that would allow me to, for example, publish the checksum if someone wanted to verify their version?
Found in the Rails code:
Prior to uploading, we compute the checksum, which is sent to the
service for transit integrity validation. If the checksum does not
match what the service receives, an exception will be raised.
You can create your own checksum without downloading the file:
Found in the Rails code:
def compute_checksum_in_chunks(io)
  OpenSSL::Digest::MD5.new.tap do |checksum|
    while chunk = io.read(5.megabytes)
      checksum << chunk
    end
    io.rewind
  end.base64digest
end

ZIP file format. How to read file properly?

I'm currently working on a Node.js project. I want to be able to read, modify and write a ZIP file without saving it to the FS (we receive it over TCP and send it back after modifications have been made), and so far that looks possible because of the simple ZIP file structure. Currently I refer to this documentation.
So ZIP file has simple structure:
File header 1
File data 1
File data descriptor 1
File header 2
File data 2
File data descriptor 2
...
[other not important yet]
First we need to read the file header, which contains the compressed size field; that would be the perfect way to know how much of file data 1 to read. But it's actually not. This field may contain 0 or 0xFFFFFFFF, and those values don't describe the actual length, in which case we have to read the file data without knowing its length. But how?
The compression/decompression algorithm descriptions look pretty complex to me, and I plan to use zlib for the compression itself anyway. So if something useful is described there, then I missed it.
Can someone explain the proper way to read those files?
P.S. Please avoid suggesting npm modules. I do not want to only solve the problem, but also to understand how things work.
Note - I'm assuming you want to read and process the zip file as it comes off the socket, rather than reading the complete zip file into memory before processing. Both options are valid.
I'd initially ignore the use cases where the compressed size has a value of 0 or 0xFFFFFFFF. The former is only present in zip files created in streaming mode, the latter in zip files larger than 4 GiB.
Dealing with them adds a lot of complexity - you can add support for them later, if necessary. Whether you ever need to support the 0/0xFFFFFFFF use cases depends on the nature of the zip files you intend to process.
When the compression method is deflated (8), use zlib for compression/decompression. You also need to support compression method stored (0). It gets used for very small files where compression isn't appropriate.
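Putting that together, here is a minimal sketch of reading a single entry in TypeScript/Node (a sketch only: it assumes the whole zip is already in a Buffer, skips the streaming and Zip64 cases as suggested above, and uses Node's built-in zlib):

import * as zlib from "zlib";

// Parse one local file header and return the decompressed entry.
function readEntry(buf: Buffer, offset: number) {
  if (buf.readUInt32LE(offset) !== 0x04034b50) throw new Error("not a local file header");

  const flags = buf.readUInt16LE(offset + 6);
  const method = buf.readUInt16LE(offset + 8);          // 0 = stored, 8 = deflate
  const compressedSize = buf.readUInt32LE(offset + 18);
  const nameLength = buf.readUInt16LE(offset + 26);
  const extraLength = buf.readUInt16LE(offset + 28);

  // Bit 3 set means a streaming entry: the real sizes only appear in the
  // data descriptor after the data, which is the case we are ignoring here.
  if (flags & 0x08) throw new Error("streaming entry not supported in this sketch");

  const name = buf.toString("utf8", offset + 30, offset + 30 + nameLength);
  const dataStart = offset + 30 + nameLength + extraLength;
  const data = buf.subarray(dataStart, dataStart + compressedSize);

  const content =
    method === 8 ? zlib.inflateRawSync(data)   // deflate data in zip has no zlib header, so use the raw variant
    : method === 0 ? Buffer.from(data)         // stored: bytes as-is
    : (() => { throw new Error(`unsupported compression method ${method}`); })();

  return { name, content, nextOffset: dataStart + compressedSize };
}

You can walk the file by feeding nextOffset back in until the next four bytes are no longer the local header signature (the central directory starts with 0x02014b50).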

How to load large PDF files fast? [duplicate]

I am hosting an ~18 MB PDF file in an S3 bucket and trying to fetch it, but it takes a long time on a somewhat slow network. I also tried converting the file to HTML and rendering that instead, but it comes out at around 48 MB, which makes the phone hang. I have moved the S3 bucket to the Singapore region to reduce latency and have also tried piping it through the server. Now the only option I have left is to break the PDF into one image per page and load them on request. Is there anything I am missing that would make the PDF load time bearable?
You have the following options, since the limitation is on the end-user devices:
Split large PDF files into several parts and allow users to download those parts separately.
Linearize the PDF files; this changes how they are loaded (the first pages can render before the whole file has arrived) but does not reduce the size, so you may still hit crashes on end-user devices.
Optimize the file size of the PDFs by re-compressing the images inside them.
Render low-resolution JPEG images of the PDF pages (with Ghostscript or ImageMagick), but please do not use JPEG as the main format: JPEG compression is designed for photographic content, not for text, so pages with text degrade badly.
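For the last two options, Ghostscript can do both jobs. A sketch of driving it from Node/TypeScript (assuming Ghostscript is installed and on the PATH as gs; the file names are placeholders):

import { execFileSync } from "child_process";

// Render each page to a moderate-resolution JPEG so the client can fetch
// pages on demand instead of the whole file.
execFileSync("gs", [
  "-dSAFER", "-dBATCH", "-dNOPAUSE",
  "-sDEVICE=jpeg",
  "-r96",                        // 96 dpi keeps files small; raise it for sharper text
  "-dJPEGQ=60",                  // JPEG quality
  "-sOutputFile=page-%03d.jpg",
  "large.pdf",
]);

// Or re-compress the images inside the PDF itself.
execFileSync("gs", [
  "-dSAFER", "-dBATCH", "-dNOPAUSE",
  "-sDEVICE=pdfwrite",
  "-dPDFSETTINGS=/ebook",        // downsamples embedded images to roughly 150 dpi
  "-sOutputFile=optimized.pdf",
  "large.pdf",
]);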

Concatenate chunk containing headers to another chunk in h264

I'm trying to extract thumbnails from a torrent stream by downloading the first couple of chunks to get the headers, plus another set of chunks from the middle, and then concatenating them into a single video file.
For this I'm using Node.js, but I'm having trouble with the concatenation part. Obviously the headers include the length of the video, so if I simply concatenate another chunk to the end of the headers chunk, it won't work.
In other words, I have 2 chunks of a video file: The first one contains the headers and some material and the other one is fully composed of a video stream. I want to combine the two to form a single video file
So my question is how can I make this work properly if at all?
I found out that this is actually not possible. It comes down to how the encoded stream works: the chunk from the middle references data that came before it, so I can't simply make a sparse file out of the two chunks.
