I am writing my own Opus Ogg writer following these specifications: RFC7845 and RFC3533.
Currently, I am facing an issue that I believe is related to how I am setting the lacing values (segment table).
My current setup is to basically read (using an existing Ogg reader) an Ogg file with a single Opus track and put that Opus track in another Ogg file that I create using my own Ogg writer.
So I have a function that takes the Opus content of each page from the original Ogg file and puts it into pages in my new Ogg file.
I am able to create the file successfully, but when I try playing it in VLC, it shows the correct timestamp but does not play any sound.
I noticed that the issue is being caused by the way my segment table (or lacing values) is set.
I am currently creating it by filling each segment with as much data as possible (i.e. 255 bytes) and letting only the last segment have a size < 255. This seems to be the way other implementations do it (see Rust implementation, C implementation).
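For a single packet, my 255-fill logic is roughly this (simplified sketch in JavaScript, not my actual writer):

// Build the lacing values for one packet: as many 255s as fit, then a final value < 255.
function lacingValuesFor(packetLength) {
  const lacing = [];
  let remaining = packetLength;
  while (remaining >= 255) {
    lacing.push(255);
    remaining -= 255;
  }
  // The final value terminates the packet (it is 0 when the length is an exact multiple of 255).
  lacing.push(remaining);
  return lacing;
}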
However, when I inspect the lacing values for a page containing that same Opus content in the original Ogg file, it is not filled with 255s. It's a different combination of segment sizes that still sums to the same page size, but it uses more segments (since it doesn't fill each segment to the maximum). When I reuse the exact segment combination from the original file, the file plays in VLC successfully.
That makes me conclude that my approach of creating as many 255-byte segments as possible is incorrect. Does anyone have any idea how to properly set the lacing values?
The most common situation in which an MP3 file's integrity is not correct is when the file has been only partially uploaded to the server. In that case, the indicated audio duration doesn't correspond to what is really in the MP3 file: we can hear the beginning, but at some point playback stops and the duration shown by the audio player is wrong.
I tried libraries like node-ffprobe, but it seems they just read metadata, without comparing it against the actual audio data in the file. Is there a way to efficiently detect a corrupted or incomplete MP3 file from Node.js?
Note: the client uploading the MP3 files is a hardware device (an audio recorder) uploading to an FTP server, not a browser, so I'm not able to send potentially more useful data from the client.
MP3 files don't normally have a duration. They're just a series of MPEG frames. Sometimes, there is an ID3 tag indicating duration, but not always.
Players can determine duration by choosing one of a few methods:
Decode the entire audio file. This is the slowest method, but if you're going to decode the file anyway, you might as well go this route as it gives you an exact duration.
Read the whole file, skimming through frame headers. You'll have to read the whole file from disk, but you won't have to decode it. Can be slow if I/O is slow, but gives you an exact duration.
Read the first frame's bitrate and estimate duration by file size. Definitely the fastest method, and the one most commonly used by players. Duration is an estimate only, and is reasonably accurate for CBR, but can be wildly inaccurate for VBR.
What I'm getting at is that these files might not actually be broken. They might just be VBR files that your player doesn't know the duration of.
If you're convinced they are broken (such as stopping in the middle of content), then you'll have to figure out how you want to handle it. There are probably only a couple ways to determine this:
Ideally, there's an ID3 tag indicating duration, and you can decode the whole file and determine its real duration to compare.
Usually, that ID3 tag won't exist, so you'll have to check to see if the last frame is complete or not.
Beyond that, you don't really have a good way of knowing if the stream is incomplete, since there is no outer container that actually specifies number of frames to expect.
The expression for calculating the filesize of an mp3 based on duration and encoding (from this answer) is quite simple:
x = length of song in seconds
y = bitrate in kilobits per second
(x * y) / 8 / 1024 = filesize (MB)
There is also a javascript implementation for the Web Audio API in another answer on that same question. Perhaps that would be useful in your Node implementation.
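As a quick sanity check in Node (a rough sketch; the filename and values are illustrative, and this is only meaningful for CBR or a known average bitrate):

const fs = require('fs');

const durationSeconds = 180; // e.g. from an ID3 tag or your own records
const bitrateKbps = 128;     // e.g. from the first frame header or ffprobe
// kilobits -> kilobytes (/ 8) -> megabytes (/ 1024)
const expectedMB = (durationSeconds * bitrateKbps) / 8 / 1024; // ~2.81 MB here
const actualMB = fs.statSync('recording.mp3').size / (1024 * 1024);
// A file much smaller than expected is a strong hint the upload was cut short.
console.log({ expectedMB, actualMB });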
mp3diags is some older open-source software for fixing MP3s, and it was great for batch processing stuff like this. The source is C++ and still available if you're feeling nosy and want to see how some of these features are implemented.
Worth a look since it has some features that might be useful in your context:
What is MP3 Diags and what does it do?
low quality audio
missing VBR header
missing normalization data
Correcting files that show incorrect song duration
Correcting files in which the player cannot seek correctly
As part of a project I am working on, there is a requirement to concatenate multiple pieces of audio data into one large audio file. The audio files are generated from four sources, and the individual files are stored in a Google Cloud storage bucket. Each file is an mp3 file, and it is easy to verify that each individual file is generating correctly (individually, I can play them, edit them in my favourite software, etc.).
To merge the audio files together, a Node.js server loads the files from Google Cloud Storage as array buffers using axios POST requests. From there, it puts each array buffer into a Node Buffer using Buffer.from(), so now we have an array of Buffer objects. Then it uses Buffer.concat() to concatenate the Buffer objects into one big Buffer, which we convert to Base64 data and send to the client server.
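In code, the merge step is roughly this (simplified sketch; fileUrls and the request shape are placeholders for our actual Cloud Storage calls, and this runs inside an async function):

const axios = require('axios');

const responses = await Promise.all(fileUrls.map((url) =>
  axios.post(url, {}, { responseType: 'arraybuffer' })
));
const buffers = responses.map((res) => Buffer.from(res.data));
const merged = Buffer.concat(buffers);         // one big Buffer
const base64Audio = merged.toString('base64'); // sent to the client server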
This is cool, but the issue arises when concatenating audio generated from different sources. The 4 sources I mentioned above are text-to-speech platforms: Google Cloud Voice, Amazon Polly, IBM Watson, and Microsoft Azure Text to Speech. Again, all individual files work, but when concatenating them together via this method there are some interesting effects.
When the sound files are concatenated, seemingly depending on which platform they originate from, the sound data either will or will not be included in the final sound file. Below is a 'compatibility' table based on my testing:
| Platform / | Google | Amazon | Microsoft | IBM |
|------------|--------|--------|-----------|-----|
| Google     | Yes    | No     | No        | No  |
| Amazon     |        | No     | No        | Yes |
| Microsoft  |        |        | Yes       | No  |
| IBM        |        |        |           | Yes |
The effect is as follows: when I play the large output file, it always starts by playing the first sound file included. From there, if the next sound file is compatible, it is heard; otherwise it is skipped entirely (no silence or anything). If a file was skipped, its 'length' (for example, 10 seconds of audio) is still included at the end of the generated output file. However, the moment my audio player hits the point where the last 'compatible' audio has played, it immediately skips to the end.
As a scenario:
Input:
sound1.mp3 (3s) -> Google
sound2.mp3 (5s) -> Amazon
sound3.mp3 (7s) -> Google
sound4.mp3 (11s) -> IBM
Output:
output.mp3 (26s) -> first 10s is sound1 and sound3, last 16s is skipped.
In this case, the output sound file would be 26 seconds long. For the first 10 seconds, you would hear sound1.mp3 and sound3.mp3 played back to back. Then at 10s (at least when playing this MP3 file in Firefox) the player immediately skips to the end at 26s.
My question is: Does anyone have any ideas why sometimes I can concatenate audio data in this way, and other times I cannot? And how come there is this 'missing' data included at the end of the output file? Shouldn't concatenating the binary data work in all cases if it works for some cases, as all the files have mp3 encoding? If I am wrong please let me know what I can do to successfully concatenate any mp3 files :)
I can provide my nodeJS backend code, but the process and methods used are described above.
Thanks for reading!
Potential Sources of Problems
Sample Rate
44.1 kHz is often used for music, as it's what is used on CD audio. 48 kHz is usually used for video, as it's what was used on DVDs. Both of those sample rates are much higher than is required for speech, so it's likely that your various text-to-speech providers are outputting something different. 22.05 kHz (half of 44.1 kHz) is common, and 11.025 kHz is out there too.
While each frame specifies its own sample rate, making it possible to generate a stream with varying sample rates, I've never seen a decoder attempt to switch sample rates mid-stream. I suspect that the decoder is skipping these frames, or maybe even skipping over an arbitrary block until it gets consistent data again.
Use something like FFmpeg (or FFprobe) to figure out what the sample rates of your files are:
ffmpeg -i sound2.mp3
You'll get an output like this:
Duration: 00:13:50.22, start: 0.011995, bitrate: 192 kb/s
Stream #0:0: Audio: mp3, 44100 Hz, stereo, fltp, 192 kb/s
In this example, 44.1 kHz is the sample rate.
Channel Count
I'd expect your voice MP3s to be in mono, but it wouldn't hurt to check to be sure. As with above, check the output of FFmpeg. In my example above, it says stereo.
As with sample rate, technically each frame could specify its own channel count but I don't know of any player that will pull off switching channel count mid-stream. Therefore, if you're concatenating, you need to make sure all the channel counts are the same.
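If you want just those two fields in machine-readable form, ffprobe can print them directly:

ffprobe -v error -show_entries stream=sample_rate,channels -of default=noprint_wrappers=1 sound2.mp3

which outputs something like sample_rate=44100 and channels=2.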
ID3 Tags
It's common for there to be ID3 metadata at the beginning (ID3v2) and/or end (ID3v1) of the file. It's less expected to have this data mid-stream. You would want to make sure this metadata is all stripped out before concatenating.
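If you end up doing the byte-level concatenation yourself, a rough sketch of stripping the tags in Node (ID3v2 is a 10-byte header at the start of the file with a synchsafe length; ID3v1 is a fixed 128-byte block at the end starting with 'TAG'):

// Drop a leading ID3v2 tag, if present.
function stripLeadingId3v2(buf) {
  if (buf.length < 10 || buf.toString('latin1', 0, 3) !== 'ID3') return buf;
  // Bytes 6-9 hold the tag size as a 28-bit synchsafe integer (7 bits per byte),
  // not counting the 10-byte header itself.
  const size = (buf[6] << 21) | (buf[7] << 14) | (buf[8] << 7) | buf[9];
  return buf.slice(10 + size);
}

// Drop a trailing ID3v1 tag, if present.
function stripTrailingId3v1(buf) {
  const start = buf.length - 128;
  if (start >= 0 && buf.toString('latin1', start, start + 3) === 'TAG') {
    return buf.slice(0, start);
  }
  return buf;
}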
MP3 Bit Reservoir
MP3 frames don't necessarily stand alone. If you have a constant bitrate stream, the encoder may still use less data to encode one frame, and more data to encode another. When this happens, some frames contain data for other frames. That way, frames that could benefit from the extra bandwidth can get it while still fitting the whole stream within a constant bitrate. This is the "bit reservoir".
If you cut a stream and splice in another stream, you may split up a frame and its dependent frames. This typically causes an audio glitch, but may also cause the decoder to skip ahead. Some badly behaving decoders will just stop playing altogether. In your example, you're not cutting anything so this probably isn't the source of your trouble... but I mention it here because it's definitely relevant to the way you're working these streams.
See also: http://wiki.hydrogenaud.io/index.php?title=Bit_reservoir
Solutions
Pick a "normal" format, resample and rencode non-conforming files
If most of your sources are all the exact same format and only one or two outstanding, you could convert the non-conforming file. From there, strip ID3 tags from everything and concatenate away.
To do the conversion, I'd recommend kicking it over to FFmpeg as a child process.
const { spawn } = require('child_process');

const ffmpeg = spawn('ffmpeg', [
  // Input (use '-' to read from STDIN instead of a file)
  '-i', inputFile,
  // Set sample rate
  '-ar', '44100',
  // Set audio channel count
  '-ac', '1',
  // Audio bitrate... try to match the others, but it's not as critical
  '-b:a', '64k',
  // Ensure we output an MP3
  '-f', 'mp3',
  // Output (as with input, use '-' to write to STDOUT instead of a file)
  outputFile
]);
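If your files are already in memory as Buffers (as in your current setup), you can skip temporary files entirely by passing '-' for both the input and output arguments above, then feeding the buffer to the child's STDIN and collecting STDOUT. A rough sketch (assuming ffmpeg is the child process above and inputBuffer holds one source MP3):

const chunks = [];
ffmpeg.stdout.on('data', (chunk) => chunks.push(chunk));
ffmpeg.stdout.on('end', () => {
  const normalized = Buffer.concat(chunks); // the re-encoded MP3, ready to concatenate
});
ffmpeg.stdin.write(inputBuffer);
ffmpeg.stdin.end();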
Best Solution: Let FFmpeg (or similar) do the work for you
The simplest, most robust solution to all of this is to let FFmpeg build a brand new stream for you. This will cause your audio files to be decoded to PCM, and a new stream made. You can add parameters to resample those inputs, and modify channel counts if needed. Then output one stream. Use the concat filter.
This way, you can accept audio files of any type, you don't have to write the code to hack those streams together, and once it's set up you won't have to worry about it.
The only downside is that it will require re-encoding everything, meaning another generation of quality loss. This would be required for any non-conforming files anyway, and it's just speech, so I wouldn't give it a second thought.
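As a rough sketch of what that can look like on the command line (filenames and values are illustrative; the aformat filters are there because the concat filter wants every input to share the same sample rate and channel layout):

ffmpeg -i sound1.mp3 -i sound2.mp3 \
  -filter_complex "[0:a]aformat=sample_rates=44100:channel_layouts=mono[a0];[1:a]aformat=sample_rates=44100:channel_layouts=mono[a1];[a0][a1]concat=n=2:v=0:a=1[out]" \
  -map "[out]" -b:a 64k output.mp3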
@Brad's answer was the solution! The first solution he suggested worked. It took some messing around to get FFmpeg working correctly, but in the end using the fluent-ffmpeg library worked.
Each file in my case was stored on Google Cloud Storage, not on the server's hard drive. This posed some problems for FFmpeg, which wants file paths when given multiple inputs, or an input stream (but only one stream is supported, since there is only one STDIN).
One solution is to put the files on the hard drive temporarily, but this would not work for our use case, as this function may see a lot of use and the hard drive adds latency.
So, instead we did as suggested and loaded each file into ffmpeg to convert it into a standardized format. This was a bit tricky, but in the end requesting each file as a stream, using that stream as an input for ffmpeg, then using fluent-ffmpeg's pipe() method (which returns a stream) as output worked.
We then bound an event listener to the 'data' event for this pipe, and pushed the data to an array (bufs.push(data)), and on stream 'end' we concatenated this array using Buffer.concat(bufs), followed by a promise resolve.
Then, once all of the request promises were resolved, we could be sure ffmpeg had processed each file; those buffers were then concatenated in the required groups as before using Buffer.concat(), converted to Base64 data, and sent to the client.
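In case it helps others, here is a simplified sketch of that per-file step (error handling trimmed; the 44100/mono/64k values are just the 'standardized format' we settled on, nothing special):

const ffmpeg = require('fluent-ffmpeg');

// Convert one input stream to a standardized MP3 and resolve with a Buffer.
function normalizeToMp3(inputStream) {
  return new Promise((resolve, reject) => {
    const bufs = [];
    const output = ffmpeg(inputStream)
      .audioFrequency(44100)
      .audioChannels(1)
      .audioBitrate('64k')
      .format('mp3')
      .on('error', reject)
      .pipe(); // with no arguments, pipe() returns a stream of the converted data
    output.on('data', (data) => bufs.push(data));
    output.on('end', () => resolve(Buffer.concat(bufs)));
  });
}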
This works great, and now it seems to be able to handle every combination of files/sources I can throw at it!
In conclusion:
The answer to the question was that the MP3 data must have been encoded differently (different channel counts, sample rates, etc.), and loading it through ffmpeg and outputting it in a 'unified' way made the MP3 data compatible.
The solution was to process each file in ffmpeg separately, pipe the ffmpeg output into a buffer, then concatenate the buffers.
Thanks @Brad for your suggestions and detailed answer!
I have around 15,000 music files stored on an Ubuntu server (16.04): around 50% FLAC and roughly 25% each MP3 and M4A (AAC).
I think maybe 3-5% are corrupted due to HDD hardware failure. The problems accumulated gradually for some time before I noticed. Files are now recovered to new drives using ddrescue.
Original storage was two copies of each file on separate devices, and both drives gradually failed, but independently. Result is that a file which is bad in one copy may be ok in the other copy.
I am trying to find command-line validation tools to use in a script to identify which titles have at least one good copy. In cases where both copies are bad, I will need to re-rip from CD.
For FLAC, I have looped the command flac -t in a script which generates lists of good files and the bad files. I believe the flac -t command decodes without sending audio to any play device, and calculates an MD5 hash on the decoded audio and compares this to an original hash included in the file’s metadata. This is pretty fast and works fine.
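The loop is roughly this one-liner (relying on flac exiting with a non-zero status when the test fails):

for f in *.flac; do flac --test --silent "$f" 2>/dev/null && echo "$f" >> good_flac.txt || echo "$f" >> bad_flac.txt; done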
I would like to achieve similar validation with the mp3 and the m4a files, but have not been able to find a suitable tool. I have looked at mp3val, but testing this against an mp3 where I deliberately damaged data in the audio does not show an error.
From what I can find researching mp3 and m4a it seems there is no hash stored, so I am not sure what other approaches to validation might be possible.
Ideally I would like to sort into definitely good / definitely bad. If this can't be done, I would still benefit from sorting into possibly good / definitely bad, or definitely good / possibly bad.
Can anyone suggest a Linux tool that could achieve this, for either or both of MP3 and M4A/AAC?
I'm using gstreamer (gst-launch-1.0 actually) to receive audio and encode it using flacenc. At this point, for testing, the command line looks like this:
gst-launch-1.0 -q autoaudiosrc ! flacenc ! fdsink
This is actually launched by a separate program that gets the FLAC native format data via the child process's stdout.
Now, what I want to be able to do, for archiving purposes, is segment this audio stream into multiple files of limited duration, e.g. one file per minute. I have written code that does the minimal work necessary to parse the stream, segment audio frames, buffer them, and output fully-formed FLAC files. However, in the long term, I'm concerned about the CPU load once I'm archiving hundreds of streams.
The main problem is the frame number. It has a variable length encoding, and even worse, this requires two CRCs to be recomputed for every frame. Wouldn't it be nice if I could either:
Have gstreamer reset the frame number every so often, or even better
Have gstreamer start a whole new file mid-stream?
The latter case would be ideal. If I just dumped this to a file, it wouldn't be a valid FLAC file. After the first segment, the reader would find a file header where it expects a frame header and puke. But I can handle that in my receiving code.
I'm working on trying to figure out how to use various mux and split filters, but most combinations I have tried have resulted in errors of this ilk:
WARNING: erroneous pipeline: could not link flacenc0 to splitmuxsink0
I am also aware that I can use the gstreamer library and probably do stuff like this in my own code where I keep the audio source going and keep bringing the FLAC encoder up and down. A few months ago, I tried to figure out in general how to write programs that link to the gstreamer API and just got thoroughly lost. I was probably not looking at the right docs.
I've also so far found clever ways to do what I wanted with the gstreamer command line. For instance, I managed to get metadata inserted into an MPEG-TS stream from a FIFO. So maybe I can manage to solve this problem the same way, with some help from kind stackoverflow users. :)
CLARIFICATION: I don't want gstreamer to write multiple files. I want it to generate multiple files but have them concatenated going through stdout and have a completely separate program split them into files.
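To illustrate, the splitting program could be as simple as this rough sketch (hypothetical: it just starts a new file whenever the "fLaC" stream marker shows up on STDIN, and it ignores the small chance of those four bytes occurring inside frame data):

const fs = require('fs');

const MAGIC = Buffer.from('fLaC');
let pending = Buffer.alloc(0);
let out = null;
let index = 0;

process.stdin.on('data', (chunk) => {
  pending = Buffer.concat([pending, chunk]);
  let pos;
  // Once a file is open, search from offset 1 so we don't re-match the marker
  // that started the current file.
  while ((pos = pending.indexOf(MAGIC, out ? 1 : 0)) !== -1) {
    if (out) out.end(pending.slice(0, pos)); // finish the previous file
    out = fs.createWriteStream(`segment-${index++}.flac`);
    pending = pending.slice(pos);
  }
  if (out && pending.length > MAGIC.length) {
    // Flush everything except a small tail, in case a marker straddles two chunks.
    out.write(pending.slice(0, pending.length - MAGIC.length));
    pending = pending.slice(pending.length - MAGIC.length);
  }
});

process.stdin.on('end', () => {
  if (out) out.end(pending);
});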
The default muxer selected by splitmuxsink is mp4mux, which does not support FLAC. Setting muxer=matroskamux, as an example, would let you use splitmuxsink, though you'll get FLAC contained in Matroska, which may or may not be what you want.
While this likely doesn't work yet, you could try making flacparse usable as a muxer in splitmuxsink in order to avoid the container.
Meanwhile, you can always use a container for the split, and then remove the container using the sink property. The following is an example pipeline that generates 5-second FLAC files.
gst-launch-1.0 audiotestsrc ! flacenc ! flacparse ! sm.audio_0 \
splitmuxsink name=sm muxer=matroskamux \
location=audio%05d.flac \
max-size-time=5000000000 \
sink="matroskademux ! filesink"