Understanding Azure Media Services encoding permutations that increase file size

How can a system improve video quality automatically? For example, a dark line on my face in a video can't be removed by the system automatically, which makes sense. Here I'm trying to understand Azure Media Services encoding permutations.
When I uploaded a 55.5 MB MP4 file and encoded it with the "H264AdaptiveBitrateMP4Set720p" encoder preset, I received the following output files:
Now look at the video file highlighted in the green rectangle: its size looks reasonable given the input file size. But the video files highlighted in the red rectangles are supposedly improved files for adaptive streaming, which seems pointless if you compare it with my example of 'a dark line on my face'. Here are my questions, and I would love to read your input on them:
What are the exact reasons the encoder increases the file size?
Why should I pay more for bandwidth and storage for these large files, and how do I convince clients?
Is there any way to specify that such files should not be created when scheduling an encoding job?
Any input is highly appreciated.

1) The dark lines appearing on your face have nothing to do with encoding. Encoding simply means re-compressing the bits that make up the video with a different compression algorithm than the one used for the source video.
2) As you can see from the filenames of the generated files, they all have a different bitrate, denoted in kbps. This is the amount of data, i.e. the number of bits, that the decoder has to read to get one second's worth of video footage. The higher the bitrate, the better the quality of the video, because there is more detail, such as better light and color information, stored in every pixel and hence in every frame of such a video.
As a corollary, a higher-bitrate video is better suited to faster internet connections.
So Azure must have converted your source video into these four videos at different bitrates, all with the same video (H.264) and audio (AAC) encoding.
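As a rough back-of-the-envelope check, file size scales with bitrate times duration. The sketch below uses made-up numbers (your preset's exact layer bitrates aren't shown here), so treat it only as an illustration of why the higher layers come out larger than your 55.5 MB source:

```python
# Rough size estimate from bitrate and duration.
# The bitrates below are hypothetical layer values, not taken from the preset.
def estimated_size_mb(bitrate_kbps, duration_seconds):
    return bitrate_kbps * 1000 * duration_seconds / 8 / 1_000_000

print(estimated_size_mb(3400, 120))  # ~51 MB for a 2-minute clip at 3400 kbps
print(estimated_size_mb(1000, 120))  # ~15 MB for the same clip at 1000 kbps
```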
3) As for how to tell Azure not to create so many files, I do not know the exact answer. I am confident, though, that it is only a matter of configuration somewhere to tell it to skip the other bitrate conversions.
In summary:
a) To remove the dark mark on your face in the video, you have to edit the source video in a video editor; that has nothing to do with video encoding.
b) The file sizes are different due to different bit-rates, meaning differences in light and color information, i.e. shadow detail, stored in every pixel of every frame of the video footage.
For users with a faster internet connection, you could offer the option of downloading a higher-bitrate file. The higher-bitrate file will show slightly better quality even at the same video resolution, i.e. 720p in your case.

Related

I am trying to build a music visualizer but I am completely inexperienced

Where should I begin?
I am trying to build a real-time stem-split music visualizer for VJing and the like. What sets this apart is that I would like to split the input audio stream into its stems (either algorithmically or using something like Spleeter) and then use each stem data to control different aspects of the visualization.
For example:
The isolated drums to play a BPM-synced video.
I'm hoping to achieve this by making a short looping video at a fixed BPM (say, 60) and then, by detecting the BPM of the stream, adjusting the playback speed of the video so that it stays in sync.
The isolated synth stream could control DMX lights.
I want to try to encode this data in, say, the last row of pixels in the above video. By reading the colour, intensity, and movement data from those pixels, the moves and timings could be extracted and sent to the lights in real time. I'm doing this so that the user can encode all the data needed for a scene into one video file.
The isolated vocals could be synced and displayed on screen using MusixMatch.
The isolated bassline could be parsed into MIDI data and visualized on screen.
All of the above can be controlled live.
Now the problem is that I am relatively inexperienced with programming, and I am not sure where to start: which language to use, which IDE, how to display visuals, how to interact with audio input streams, how to use DMX, and how to visualize MIDI data. I know this is currently quite a bit out of my depth, but I'll manage with the right resources. Please give me some advice on where to begin for a project like this.

Is there a way to set the details of a file in Windows using python?

I want to be able to set the "Title" and "Comments" (listed in properties->details) of some mp3 files in Windows using python. Is this possible, perhaps with a library like PyWin32? Also, would these details be visible in other operating systems or are they Windows-specific? Thanks.
Simple Answer:
Yes, you can set 'Title' and 'Comments' (and many other fields) of an MP3 file in Windows using Python.
Also, the details are visible on all operating systems and are not Windows-specific.
First, you have to understand what an MP3 file is and how data is organized within it.
Detailed Answer:
Raw audio takes up a lot of space. For example, an audio signal of 10 seconds, sampled at 48 kHz with a bit depth of 16 bits per sample, will have a size of 10*48000*16 bits, which is close to 1 MB. So a 5-minute song would take almost 30 MB. But if you look around, most 5-minute MP3 songs are around 5 MB in size (of course, it depends on the sampling frequency, bit depth, and amount of compression used). How is that possible? It is possible because we compress the data using signal processing techniques, which is a big topic in itself that we will not discuss here. So, to create an MP3 file we need something called an encoder, which converts the raw audio data to compressed data, and every time you play an MP3 song a decoder is used, which converts the data from the compressed format back to raw audio, which is the only form you can actually listen to. Compression is done to save storage and also transmission bandwidth (basically, the amount of data to be transmitted over the internet).
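Just to spell out that arithmetic (a quick sanity check in Python, assuming mono audio as in the figures above):

```python
# Raw (uncompressed) audio size for mono, 48 kHz, 16 bits per sample.
bits_per_second = 48_000 * 16

print(10 * bits_per_second / 8 / 1_000_000)       # ~0.96 MB for 10 seconds
print(5 * 60 * bits_per_second / 8 / 1_000_000)   # ~28.8 MB for 5 minutes
```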
Now, coming to how data is organized inside an MP3 file: the file will obviously contain the compressed audio data. In addition, many MP3 files contain some metadata (like the Title and Comments you mentioned in your question). There are several formats for storing this metadata. A decoder reading an MP3 file must also support decoding the metadata; only then can you see that information, otherwise you can't. The metadata is operating-system independent and can be seen on any operating system, provided you have a proper decoder.
Finally, yes, you can edit the metadata on Windows (or on any OS, for that matter) using Python. If you want to do this in pure Python without any library, you need to understand how data is organized inside an MP3 file, find the metadata inside it, edit it, and store it back. But there are Python libraries and packages that support editing the metadata of MP3 files, and you can use them directly. Also, the metadata is independent of the OS; once you edit the properties, you should be able to see them on any OS, provided the decoder you use supports them.
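For example, here is a minimal sketch using the third-party mutagen library; the filename and text values are placeholders, and my assumption is that Windows Explorer reads 'Title' from the ID3 TIT2 frame and 'Comments' from the COMM frame:

```python
# Minimal sketch using mutagen (pip install mutagen).
# Assumes "song.mp3" already has an ID3 tag; otherwise create one first.
from mutagen.id3 import ID3, TIT2, COMM

tags = ID3("song.mp3")
tags.add(TIT2(encoding=3, text="My Title"))                         # Title
tags.add(COMM(encoding=3, lang="eng", desc="", text="My comment"))  # Comments
tags.save()
```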
Some links which will help you:
mp3 tag tool
Another stack overflow question which gives details about libraries that support viewing and editing of meta data using Python

How can I prepare video for Azure Media Services myself (on-premises) in variable bitrate?

I usually use Media Encoder Standard to encode 4K videos in the H264 Multiple Bitrate format. But it's becoming expensive (for me) because of the source 4K file size, and encoding in Azure takes up to 20 hours.
So I wonder whether there is a way to prepare it myself in this format: https://learn.microsoft.com/en-us/azure/media-services/media-services-mes-preset-h264-multiple-bitrate-4k ? I do video editing and color grading anyway.
OK, so the answer to this, as can be seen in the comment thread above, is to make several changes to your workflow to reduce the time and the costs:
Change your source content to be 4k 30p instead of 60p. There really is no need to have 60p for the type of content that you are filming. It's not really high action content.
This should cut your upload source data size in half...
Download the JSON for the 4K preset that you are using "H264 Multi Bitrate 4k" and customize it. Don't trust that we have given you the right settings for your cost demands or scenario. :-)
Change the frame rates in the preset, drop some of the bitrate layers, remove some layers as Anil suggested above. This should seriously reduce the encoding time, and your overall output costs. Just cut it down to the bare minimum and give it another shot.
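For illustration only, here is a rough Python sketch of trimming layers out of the downloaded preset; the field names ("Codecs", "H264Layers", "Bitrate") are my reading of the Media Encoder Standard preset schema, so verify them against the JSON file you actually downloaded:

```python
# Hedged sketch: drop the highest-bitrate layers from a downloaded MES preset.
# Field names are assumptions; check them against your actual preset JSON.
import json

with open("H264_Multiple_Bitrate_4K.json") as f:
    preset = json.load(f)

for codec in preset.get("Codecs", []):
    if "H264Layers" in codec:
        # Keep only layers at or below 10 Mbps (Bitrate values are in kbps).
        codec["H264Layers"] = [
            layer for layer in codec["H264Layers"]
            if layer.get("Bitrate", 0) <= 10000
        ]

with open("H264_Multiple_Bitrate_4K_trimmed.json", "w") as f:
    json.dump(preset, f, indent=2)
```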
If that does not work out for you, ping us again at amshelp#microsoft.com and we can help figure out other scenarios to assist.
Thanks for using Azure Media Services! And also thanks for contributing to the community.
John D.

How to determine if an audio track is a Dolby Pro Logic II mixdown

I'm trying to find out if there's a way to determine if an AAC-encoded audio track is encoded with Dolby Pro Logic II data. Is there a way of examining the file such that you can see this information? I have for example encoded a media file in Handbrake with (truncated to audio options) -E av_aac -B 320 --mixdown dpl2 and this is the audio track output that mediainfo shows:
Audio #1
ID : 2
Format : AAC
Format/Info : Advanced Audio Codec
Format profile : LC
Codec ID : 40
Duration : 2h 5mn
Bit rate mode : Variable
Bit rate : 321 Kbps
Channel(s) : 2 channels
Channel positions : Front: L R
Sampling rate : 48.0 KHz
Compression mode : Lossy
Stream size : 288 MiB (3%)
Title : Stereo / Stereo
Language : English
Encoded date : UTC 2017-04-11 22:21:41
Tagged date : UTC 2017-04-11 22:21:41
but I can't tell if there's anything in this output that would suggest that it's encoded with DPL2 data.
tl;dr: it's probably possible; it may be easier if you're a programmer.
Because the information encoded is just a stereo analog pair, there is no guaranteed way of detecting a Dolby Pro Logic II (DPL2) signal therein, unless you specifically store your own metadata saying "this is a DPL2 file." But you can probably make a pretty good guess.
All of the old analog Dolby Surround formats, including DPL2, store surround information in two channels by inverting the phase of the surround or surrounds and then mixing them into the original left and right channels. Dolby Surround type decoders, including DPL2, attempt to recover this information by inverting the phase of one of the two channels and then looking for similarities in these signal pairs. This is either done trivially, as in Dolby Surround, or else these similarities are artificially biased to be pushed much further to the left or right, or the left or right surround, as in DPL2.
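As a very rough illustration of that matrixing (this deliberately ignores the 90-degree phase-shift network and band-limiting that a real Dolby encoder applies, so it is a sketch of the idea, not the actual Dolby algorithm):

```python
import numpy as np

def matrix_encode(left, right, center, surround):
    """Simplified Dolby-Surround-style downmix of four channels into two (Lt/Rt)."""
    g = 1 / np.sqrt(2)                      # -3 dB mix level
    lt = left + g * center - g * surround   # surround mixed in out of phase...
    rt = right + g * center + g * surround  # ...relative to the other channel
    return lt, rt
```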
So the trick is to detect whether important data is being stored in the surround channel(s). I'll sketch out for you a method that might work, and I'll try to express it without writing code, but it's up to you to implement and refine it to your liking.
Crop the first N seconds or so of program content into a stereo file, where N is between one and thirty. Call this file Input.
Mix down the Input stereo channels to a new mono file at -3dB per channel. Call this file Center.
Split the left and right channels of Input into separate files. Call these Left and Right.
Invert the right channel. Call this file RightInvert.
Mix down the Left and RightInvert channels to a new mono file at -3dB per channel. Call this file Surround.
Determine the RMS and peak dB of the Surround file.
If the RMS or peak dB of the Surround file is below "a tolerance", stop; the original file is either mono or center-panned and hence contains no surround information. You'll have to experiment with several DPL2 and non-DPL2 sources to see what these tolerances are, but after a dozen or so files the numbers should become clear. I'm guessing around -30 dB or so.
Invert the Center file into a new file. Call this file CenterInvert.
Mix the CenterInvert file into the Surround file at 0 dB (both CenterInvert and Surround should be mono). Call this new file SurroundInvert.
Determine the RMS and peak dB of the SurroundInvert file.
If either the RMS or the peak dB of SurroundInvert is below "a tolerance," stop; your original source contains panned left or right front information, not surround information. You'll have to experiment with several DPL2 and non-DPL2 sources to see what these tolerances are, but after a dozen or so files the numbers should become clear -- I'm guessing around -35 dB or so.
If you've gotten this far, your original Input probably contains surround information, and hence is probably a member of the Dolby Surround family of encodings.
I've written this algorithm out such that you can do each of these steps with a specific command in sox. If you want to be fancier, instead of doing the RMS/peak value step in sox, you could run an ebur128 program and check your levels in LUFS against a tolerance. If you want to be even fancier, after you create the Surround and Center files, you could filter out all frequencies higher than 7kHz and do de-emphasis on them, just like a real DPL2 decoder would.
To keep this algorithm simple, I've sketched it out entirely in the amplitude domain. The calculation of the Surround file would probably be done a lot more accurately in the frequency domain, if you know how to calculate the magnitude and angle of FFT bins and you use windows of 30 to 100 ms. But this cheapo version above should get you started.
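If you would rather prototype this in code than chain sox commands, here is a rough Python (numpy + soundfile) sketch of steps 1 through 8, i.e. just the surround-level test; it assumes the track has already been decoded to a stereo WAV, and the -30 dB figure is the same guess as above:

```python
# Rough sketch of steps 1-8: build the Center and Surround mixes and measure their level.
import numpy as np
import soundfile as sf

data, rate = sf.read("input.wav")          # decoded stereo file, shape (samples, 2)
n = min(len(data), 30 * rate)              # first N seconds (N = 30 here)
left, right = data[:n, 0], data[:n, 1]

g = 1 / np.sqrt(2)                         # -3 dB per channel
center = g * left + g * right              # step 2
surround = g * left - g * right            # steps 4-5: invert right, then mix

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

def peak_db(x):
    return 20 * np.log10(np.max(np.abs(x)) + 1e-12)

print("Surround RMS %.1f dB, peak %.1f dB" % (rms_db(surround), peak_db(surround)))
# Step 8: if both are below roughly -30 dB, the source is probably mono or
# center-panned and carries no matrixed surround information.
```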
One last caution. AAC is a modern psychoacoustic codec, which means that it likes to play games with stereo phasing and imaging to achieve its compression. So I consider it likely that the mere act of encapsulating DPL2 into an AAC stream will hose some of the imaging present in DPL2. To be candid, neither DPL2 nor AAC belongs anywhere in this pipeline. If you must store an analog stream originally encoded with DPL2, do it in a lossless format like WAV or FLAC, not AAC.
As of this writing, operational concepts behind Dolby Pro Logic (I) are here. These basic concepts still apply to DPL2; operational concepts for DPL2 are here.
If the file has more than one channel, you can assume with some certainty that they are used for surround purposes, although they could just be multiple tracks.
In that case it falls to the playback system to handle the channels as it "thinks" best (if the file header doesn't say what to do).
But your file is stereo. If you want to know whether it is a virtual surround file, you can look in the header for an encoder field to see which encoder was used.
This may help somewhat, although not much. The encoder field is usually left empty, and in any case the encoder doesn't have to be the same as the tool that mixed down the surround data.
That is, the mixdown tool will first create raw PCM data and then feed it to some encoder to produce the compressed file (AAC or whatever).
Also, there are many applications, versions vary, and so does the encoder field, so tracking all of them would be nasty work.
However, you can, with over 60% certainty, deduce whether something is virtual surround or not by examining the data.
This would be advanced DSP and, for speed, even machine learning may be involved.
You would have to find out whether the stereo signals contain certain features of an HRTF (head-related transfer function).
This may be achieved by examining intensity-difference and delay features between the same sounds in the time domain, and harmonic features (characteristic frequency changes) in the frequency domain.
You would have to do both, because one without the other may just tell you that something is a very good stereo recording, not virtual surround.
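Just to make the time-domain part concrete, here is a crude sketch of measuring the inter-channel delay and intensity difference over a short window (numpy and soundfile assumed; this is nowhere near a full HRTF-feature detector):

```python
# Crude inter-channel delay and level-difference measurement on a short window.
import numpy as np
import soundfile as sf

data, rate = sf.read("stereo.wav")
win = rate // 10                                   # analyze the first 100 ms
left, right = data[:win, 0], data[:win, 1]

corr = np.correlate(left, right, mode="full")
lag = np.argmax(corr) - (len(right) - 1)           # lag of one channel vs. the other
print("inter-channel delay: %.2f ms" % (lag / rate * 1000))

ild = 20 * np.log10((np.sqrt(np.mean(left ** 2)) + 1e-12) /
                    (np.sqrt(np.mean(right ** 2)) + 1e-12))
print("intensity difference: %.1f dB" % ild)
```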
I don't know whether HRTF-specific features are already mapped somewhere, or whether you would need to work them out yourself.
It's a very complicated solution that takes a lot of time to build properly. Also, its performance would be problematic.
With this method you could also break the stereo mixdown back into nearly the original surround channels.
But for stereo-to-surround conversion, other methods are used, and they sound good.
If you are determined to perform such detection, dedicate half a year or more of hard work if no HRTF features are mapped (a few weeks if they are), brace yourself for a lot of stress, and I wish you luck. I have done something similar. It is a killer.
If you want an out-of-the-box solution, then the answer to your question is no, unless the header provides an encoder field and the encoder is distinctive and known to be used only for surround-to-stereo conversion.
I do not think anyone has done this from the actual data as I described, or if they have, it is part of a commercial product. Doing what you want is not usually needed, but it can be done.
Oh, and by the way, try googling HRTF inversion; it might give you some help.

How to decrease pitch of audio file in nodejs server side?

I have a .MP3 file stored on my server, and I'd like to modify it to be a bit lower in pitch. I know this can be achieved by increasing the length of the audio, however, I don't know of any libraries in node that can do this.
I've tried using the node web audio api, and soundbank-pitch-shift, but the former doesn't seem to have pitch-shifting capabilities (AFAIK), and the latter seems designed for client-side use.
I need the solution to stay within the realm of Node ONLY: that means no external programs, etc., and it needs to be automated as well, so I can't pitch shift manually.
An ideal solution would be a function that takes a file/filepath as input and then creates (or overwrites) another MP3 file with the pitch shifted by x amount, but really, any solution that produces something with a lower pitch than the original works.
I'm totally lost here. Please help.
An audio file is basically a list of numbers. Those numbers are read one at a time at a particular speed called the 'sample rate'. The sample rate is the number of audio samples read every second; e.g., if an audio file's sample rate is 44100, then 44100 samples (or numbers) are read every second.
If you are with me so far, the simplest way to lower the pitch of an audio file is to play the file back at a lower sample rate (which is normally fixed in place). In most cases you won't be able to do this, so you need to achieve the same effect by resampling the file, i.e. adding new samples in between the old samples to make the file literally longer. For this you would need to understand interpolation.
The drawback to this technique in either case is that the sound will also play back at a slower speed, as well as at a lower pitch. If it is a problem that the sound has slowed down as well as lowered in pitch as a result of your processing, then you will also have to use a timestretching algorithm to fix the playback speed.
You may also have problems doing this using MP3 files. In this case you may have to uncompress the data in the MP3 file before you can operate on it in such a way that changes the pitch of the file. WAV files are more ideal in audio processing. In any case, you essentially need to turn the file into a list of floating point numbers, and change those numbers to be effectively read back at a slower rate.
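Purely as a concept sketch of that first idea (in Python rather than Node, and assuming the MP3 has already been decoded to a WAV as suggested above): writing the same samples back with a lower declared sample rate lowers both the pitch and the playback speed by the same ratio.

```python
# Same samples, lower declared sample rate -> lower pitch and slower playback.
# Assumes the soundfile package and a decoded WAV named "input.wav".
import soundfile as sf

data, rate = sf.read("input.wav")
sf.write("lower_pitch.wav", data, int(rate * 0.8))   # ~20% lower pitch and speed
```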
Other methods of pitch shifting would probably need to involve the use of ffts, and would be a more complicated affair to say the least.
I am not familiar with nodejs I'm afraid.
I managed to get it working with help from Ollie M's answer and node-lame.
I hadn't known previously that sample rate could affect the speed, but thanks to Ollie, suddenly this problem became a lot more simple.
Using node-lame, all I did was take one of the examples (mp32wav.js) and change the sampleRate parameter of the format object so that it is lower than the base sample rate, which in my application was always a static 24,000. I could also make it dynamic, since node-lame can grab the parameters of the input file in the format object.
Ollie, however, perfectly describes the drawback of this method:
The drawback to this technique in either case is that the sound will also play back at a slower speed, as well as at a lower pitch. If it is a problem that the sound has slowed down as well as lowered in pitch as a result of your processing, then you will also have to use a timestretching algorithm to fix the playback speed.
I don't have a particular need to implement a time stretching algorithm at the moment (thankfully, because that's a whole other can of worms), since I have the ability to change the initial speed of the file, but others may in the future.
See https://www.npmjs.com/package/audio-decode, https://github.com/audiojs/audio-buffer, and the related packages linked at the bottom of the audio-buffer readme.
