Convert zip file timestamps to human-readable format

How can the last modification date and time of files stored in a zip file be converted to a human-readable format?

I've got it!
The zip file timestamp is in the same format as the FAT (MS-DOS) timestamp. You can find out how to convert it on Wikipedia,
or in this other Stack Overflow question:
Unix timestamp to FAT timestamp
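
For illustration, a minimal Python sketch of that conversion (the bit layout is the standard MS-DOS date/time encoding; Python's zipfile module already exposes the decoded values as ZipInfo.date_time, and the archive name below is a placeholder):

import datetime
import zipfile

def dos_to_datetime(dos_date, dos_time):
    # Decode the two 16-bit MS-DOS fields stored in a zip entry.
    year   = ((dos_date >> 9) & 0x7F) + 1980
    month  = (dos_date >> 5) & 0x0F
    day    = dos_date & 0x1F
    hour   = (dos_time >> 11) & 0x1F
    minute = (dos_time >> 5) & 0x3F
    second = (dos_time & 0x1F) * 2          # seconds are stored in 2-second units
    return datetime.datetime(year, month, day, hour, minute, second)

# zipfile does this decoding for you:
with zipfile.ZipFile("archive.zip") as zf:
    for info in zf.infolist():
        print(info.filename, datetime.datetime(*info.date_time))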

Related

Combine audio files based on the timestamps in their filenames

I just received dozens of audio files that are recorded radio transmissions. Each transmission is its own file, and each file has its transmission time as part of its filename.
How can I programmatically combine the files into a single mp3 file, with each transmission starting at the correct time relative to the first?
Filename format:
PD_YYYY_MM_DD_HH_MM_SS.wav
Examples:
PD_2022_01_22_16_21_52.wav
PD_2022_01_22_16_21_55.wav
PD_2022_01_22_16_22_02.wav
PD_2022_01_22_16_22_05.wav
PD_2022_01_22_16_23_03.wav
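
One possible way to do this in Python, sketched with pydub (an assumption on my part, not something from the thread; pydub needs ffmpeg installed for the mp3 export): parse the timestamp out of each filename, compute each clip's offset from the earliest transmission, and overlay the clips onto a silent base track.

import glob
from datetime import datetime
from pydub import AudioSegment  # assumes pydub + ffmpeg are installed

files = sorted(glob.glob("PD_*.wav"))            # the naming scheme from the question
times = [datetime.strptime(f, "PD_%Y_%m_%d_%H_%M_%S.wav") for f in files]
clips = [AudioSegment.from_wav(f) for f in files]

# Offset of each clip, in milliseconds, relative to the first transmission.
offsets = [int((t - times[0]).total_seconds() * 1000) for t in times]

# Silent base track long enough to hold every clip, then overlay each clip in place.
total_ms = max(off + len(clip) for off, clip in zip(offsets, clips))
combined = AudioSegment.silent(duration=total_ms)
for clip, off in zip(clips, offsets):
    combined = combined.overlay(clip, position=off)

combined.export("combined.mp3", format="mp3")    # output name is a placeholder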

iOS - Convert Audio Format (opus to mp3)

Recently I started to develop an application that works with .opus files (an audio format).
I am working with an external SDK that can process an mp3/wav file; unfortunately, my local file is a .opus file, and I need to convert it to mp3/wav format in order to process it.
I have read and researched a lot around the network to find a solution.
I found the FFmpegWrapper library, which can convert between audio formats, but when I try to convert .opus to .mp3/.wav, I get this error: opus codec not supported in WAVE format
I do not know what can be done; I'd be happy for any help.
Any information about how to convert the .opus format to any other format will be appreciated.
Thanks
Have you tried using this pod: https://github.com/chrisballinger/Opus-iOS
You can use it to convert your Opus-encoded file to wav, then feed it into your SDK.
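
For reference, outside of iOS the same conversion is a one-liner with a full ffmpeg build; a minimal Python sketch (assuming ffmpeg is on the PATH, and with placeholder file names), shown only to illustrate the conversion itself rather than the pod above:

import subprocess  # assumes a full ffmpeg build is available on the PATH

def convert(path_in, path_out):
    # ffmpeg infers the output codec/container from the output file's extension.
    subprocess.run(["ffmpeg", "-y", "-i", path_in, path_out], check=True)

convert("recording.opus", "recording.wav")
convert("recording.opus", "recording.mp3")  # mp3 output needs libmp3lame in the build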

About hydra, crunch and 7-zip

I want to create a password list using crunch. As you know, the file will become more than 1 petabyte. I know that in 7-Zip you can archive a file in the 7z format and compress it a lot. If I create the text file and then compress it into a 7z archive, is it possible to get crunch to access the text file while it is compressed in the 7z format and keep adding to the list?
Also, is it possible to get hydra to read a password list from an archive in this 7z format?

Appending filename information to RDD initialized by sc.textFile

I have a set of log files I would like to read into an RDD.
These files are all gzip-compressed (.gz) and their filenames are date-stamped.
The source of these files is the page view statistics data for Wikipedia:
http://dumps.wikimedia.org/other/pagecounts-raw/
The file names look like this:
pagecounts-20090501-000000.gz
pagecounts-20090501-010000.gz
pagecounts-20090501-020000.gz
What I would like to do is read in all such files in a directory and prepend the date from the filename (e.g. 20090501) to each row of the resulting RDD.
I first thought of using sc.wholeTextFiles(..) instead of sc.textFile(..), which creates a PairRDD with the key being the file name with its path,
but sc.wholeTextFiles() doesn't handle compressed .gz files.
Any suggestions would be welcome.
The following seems to work fine for me in Spark 1.6.0:
sc.wholeTextFiles("file:///tmp/*.gz").flatMapValues(y => y.split("\n")).take(10).foreach(println)
Sample output:
(file:/C:/tmp/pagecounts-20160101-000000.gz,aa 271_a.C 1 4675)
(file:/C:/tmp/pagecounts-20160101-000000.gz,aa Battaglia_di_Qade%C5%A1/it/Battaglia_dell%27Oronte 1 4765)
(file:/C:/tmp/pagecounts-20160101-000000.gz,aa Category:User_th 1 4770)
(file:/C:/tmp/pagecounts-20160101-000000.gz,aa Chiron_Elias_Krase 1 4694)
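
To get from there to the goal in the question (prepending the date from the filename to every row), a hedged sketch of the same idea in PySpark, assuming the pagecounts-YYYYMMDD-HHMMSS naming scheme shown above (the path and regex are illustrative):

import re

pairs = sc.wholeTextFiles("file:///tmp/*.gz")  # (filename, whole file content) pairs

def tag_rows(pair):
    filename, content = pair
    date = re.search(r"pagecounts-(\d{8})-", filename).group(1)  # e.g. 20090501
    return [date + " " + line for line in content.split("\n") if line]

tagged = pairs.flatMap(tag_rows)
tagged.take(10)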

Find data file in Zip file using microcontroller

I need a microcontroller to read a data file that's been stored in a Zip file (actually a custom OPC-based file format, but that's irrelevant). The data file has a known name, path, and size, and is guaranteed to be stored uncompressed. What is the simplest & fastest way to find the start of the file data within the Zip file without writing a complete Zip parser?
NOTE: I'm aware of the Zip comment section, but the data file in question is too big to fit in it.
I ended up writing a simple parser that finds the file data in question using the central directory.
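
The answer doesn't include the parser itself, but the central-directory approach can be sketched as follows; this is a hedged Python illustration of the relevant record layouts from the public ZIP format specification (a C version for the microcontroller would follow the same steps), with placeholder archive and entry names:

import struct

def find_stored_entry(zip_path, wanted_name):
    # Return (data offset, size) of an uncompressed entry, located via the central directory.
    with open(zip_path, "rb") as f:
        # 1. Find the End Of Central Directory record (signature PK\x05\x06); it sits
        #    within the last 22 + 65535 bytes of the archive.
        f.seek(0, 2)
        f.seek(max(0, f.tell() - 65557))
        tail = f.read()
        eocd = tail.rfind(b"PK\x05\x06")
        cd_size, cd_offset = struct.unpack_from("<II", tail, eocd + 12)

        # 2. Walk the central directory entries (signature PK\x01\x02) to find the name.
        f.seek(cd_offset)
        cd = f.read(cd_size)
        pos = 0
        while cd[pos:pos + 4] == b"PK\x01\x02":
            uncomp_size, = struct.unpack_from("<I", cd, pos + 24)
            name_len, extra_len, comment_len = struct.unpack_from("<HHH", cd, pos + 28)
            local_off, = struct.unpack_from("<I", cd, pos + 42)
            name = cd[pos + 46:pos + 46 + name_len].decode("utf-8")
            pos += 46 + name_len + extra_len + comment_len
            if name != wanted_name:
                continue
            # 3. The data starts right after the local header's 30 fixed bytes plus its
            #    own (possibly different) filename and extra-field lengths.
            f.seek(local_off)
            local = f.read(30)
            lname_len, lextra_len = struct.unpack_from("<HH", local, 26)
            return local_off + 30 + lname_len + lextra_len, uncomp_size
    raise FileNotFoundError(wanted_name)

# offset, size = find_stored_entry("package.zip", "content/data.bin")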
