Azure Media Services <Quality> in thumbnails

I am using Azure Media Services for video content management. As part of this I am also generating thumbnails, using an XML preset file as provided here.
But I am not able to understand the significance of Quality here, and I can't find much documentation for it. I understand the common definition of the word, but will it impact the time taken for encoding, cause any cost difference, etc.?

It only impacts the JPEG compression: 0-100%, similar to the 1-10 quality scale in Photoshop when saving a JPEG. A lower value just crunches your JPEG down with more compression and more artifacts. A typical setting of 70-80% usually gives good enough quality for a JPEG. If you are outputting PNG, you don't need to set it.
It does not affect the performance, cost, etc. of the Thumbnail job.
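For reference, this is roughly where Quality sits in a Media Encoder Standard thumbnail preset (the JSON form is shown here; the XML preset carries an equivalent value). The field names follow the documented JpgLayer schema, but treat the concrete values as illustrative:

    {
      "Version": 1.0,
      "Codecs": [
        {
          "Type": "JpgImage",
          "Start": "{Best}",
          "JpgLayers": [
            {
              "Type": "JpgLayer",
              "Quality": 80,
              "Width": "100%",
              "Height": "100%"
            }
          ]
        }
      ],
      "Outputs": [
        {
          "FileName": "{Basename}_{Index}{Extension}",
          "Format": { "Type": "JpgFormat" }
        }
      ]
    }

Here Quality only controls how aggressively each JPEG thumbnail is compressed; changing it should not change how long the job runs or what it costs.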

Related

What is the most lightweight method to load different image file formats in nodejs and read pixels?

Is there a unified and lightweight method for loading multiple common image file formats in NodeJS which provides read access to individual pixels?
It should support gif, jpeg, and png.
Preferably it would either support other image formats too or provide a way to add more. (webp, etc.)
It does not need to be able to save the file again after modifying pixels, provide metadata access, or anything else.
It doesn't need to be able to load images from URLs.
So far, the libraries I have found that support multiple image formats are heavyweight, for example providing full canvas support or full image-processing suites.
Is there a lightweight way to do this that I'm not finding?
I don't know why I couldn't find this one before posting here:
get-pixels
Given a URL/path, grab all the pixels in an image and return the result as an ndarray. Written in 100% JavaScript, works both in browserify and in node.js and has no external native dependencies.
Currently the following file formats are supported:
PNG
JPEG
GIF
It hasn't had any updates for two years but seems the most lightweight. I'm guessing people might mostly use Jimp these days; Jimp also appears to have no external native dependencies and is actively developed, but it includes a lot of image-processing functionality I don't need.
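For anyone landing here later, a minimal usage sketch for get-pixels (the callback signature and ndarray access follow its README; the file name is just an example):

    // npm install get-pixels
    const getPixels = require("get-pixels");

    getPixels("photo.png", function (err, pixels) {
      if (err) {
        console.error("Bad image path:", err);
        return;
      }
      // For still images pixels.shape is [width, height, channels];
      // animated GIFs get an extra leading frame dimension.
      const width = pixels.shape[0];
      const height = pixels.shape[1];
      // Read the RGBA components of the pixel at (x, y) = (0, 0)
      const r = pixels.get(0, 0, 0);
      const g = pixels.get(0, 0, 1);
      const b = pixels.get(0, 0, 2);
      const a = pixels.get(0, 0, 3);
      console.log(width + "x" + height, "first pixel RGBA:", r, g, b, a);
    });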

How can I prepare video for Azure Media Services myself (on-premises) in variable bitrate?

I usually use Media Encoder Standard to encode 4K videos in the H264 Multiple Bitrate format. But it is becoming expensive (for me) because of the source 4K file size, and encoding in Azure can take up to 20 hours.
So I wonder: is there a way to prepare the video myself for this format https://learn.microsoft.com/en-us/azure/media-services/media-services-mes-preset-h264-multiple-bitrate-4k ? I do video editing and color grading anyway.
Ok, so the answer to this, as can be seen in the comment thread above, is to make several changes to your workflow to reduce the time and the costs:
Change your source content to 4K 30p instead of 60p. There really is no need for 60p for the type of content you are filming; it's not high-action content.
This should roughly cut your upload source data size in half.
Download the JSON for the 4K preset that you are using ("H264 Multiple Bitrate 4K") and customize it. Don't trust that we have given you the right settings for your cost demands or scenario. :-)
Change the frame rates in the preset and drop some of the bitrate layers, as Anil suggested above (a trimmed example preset is sketched below). This should seriously reduce the encoding time and your overall output costs. Just cut it down to the bare minimum and give it another shot.
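To make that concrete, here is a rough sketch of a trimmed-down custom preset, starting from the published "H264 Multiple Bitrate 4K" JSON but keeping only two layers at 30 fps. The field names follow the MES preset schema; the specific bitrates and resolutions below are only examples to tune for your own content:

    {
      "Version": 1.0,
      "Codecs": [
        {
          "Type": "H264Video",
          "KeyFrameInterval": "00:00:02",
          "H264Layers": [
            {
              "Type": "H264Layer",
              "Profile": "Auto",
              "Level": "auto",
              "Bitrate": 14000,
              "MaxBitrate": 14000,
              "BufferWindow": "00:00:05",
              "Width": 3840,
              "Height": 2160,
              "FrameRate": "30",
              "BFrames": 3,
              "ReferenceFrames": 3,
              "AdaptiveBFrame": true
            },
            {
              "Type": "H264Layer",
              "Profile": "Auto",
              "Level": "auto",
              "Bitrate": 4500,
              "MaxBitrate": 4500,
              "BufferWindow": "00:00:05",
              "Width": 1920,
              "Height": 1080,
              "FrameRate": "30",
              "BFrames": 3,
              "ReferenceFrames": 3,
              "AdaptiveBFrame": true
            }
          ]
        },
        {
          "Type": "AACAudio",
          "Profile": "AACLC",
          "Channels": 2,
          "SamplingRate": 48000,
          "Bitrate": 128
        }
      ],
      "Outputs": [
        {
          "FileName": "{Basename}_{Width}x{Height}_{VideoBitrate}.mp4",
          "Format": { "Type": "MP4Format" }
        }
      ]
    }

Fewer layers and lower frame rates mean fewer output minutes for the encoder to produce, which is what drives both the encoding time and the output cost down.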
If that does not work out for you, ping us again at amshelp#microsoft.com and we can help figure out other scenarios to assist.
Thanks for using Azure Media Services! And also thanks for contributing to the community.
John D.

Understanding Azure Media Services encoding permutations that increase file size

How can the system improve video quality automatically? For example, a dark line on my face in the video can't be removed by the system automatically... which makes sense. Here I'm trying to understand Azure Media Services encoding permutations.
When I uploaded a 55.5 MB MP4 file and encoded it with the "H264AdaptiveBitrateMP4Set720p" encoder preset, I received the following output files:
Now look at the video file highlighted with the green rectangle: its size looks reasonable given the input file size. But if you look at the video files highlighted with the red rectangles, these are supposedly improved files for adaptive streaming, which looks useless if you compare it with my 'dark line on my face' example. Here are my questions, and I would love to read your input on this:
What are the exact reasons the encoder increases the file size?
Why should I pay more for bandwidth and storage for these larger files, and how do I convince clients?
Is there any way to specify that such files should not be created when scheduling encoding?
Any input is highly appreciated.
1) The dark lines appearing on your face have nothing to do with encoding. Encoding simply means re-arranging the bits that make up the video using a different compression algorithm than the one used in the source video.
2) As you can see from the filenames of the generated files, they all have a different bitrate, denoted in kbps. This is the amount of data, i.e. the number of bits, that the decoder has to process to get one second's worth of video footage. The higher the bitrate, the better the quality of the video, because more detail, such as light and color information, is preserved in every frame.
As a corollary, a higher-bitrate video is better suited to faster internet connections.
So Azure has converted your source video into these 4 different videos at different bitrates, all using the same video (H.264) and audio (AAC) encoding.
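A rough back-of-the-envelope calculation shows how bitrate translates into file size (the numbers here are purely illustrative, not taken from the job above):

    // Approximate output size: (bits per second / 8) * seconds = bytes
    const bitrateKbps = 3500;        // video bitrate of one adaptive layer
    const durationSeconds = 180;     // a 3-minute clip
    const sizeBytes = (bitrateKbps * 1000 / 8) * durationSeconds;
    console.log((sizeBytes / (1024 * 1024)).toFixed(1) + " MB"); // ~75 MB

So if a layer is encoded at a higher bitrate than your source file, its output will be larger than the upload even at the same resolution.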
3) As to how to tell Azure not to make so many files, I do not know the answer to that. I am pretty sure it is some configuration somewhere, but I honestly have no idea. I am confident, though, that it is only a matter of configuration to tell it to stop producing the other bitrate renditions.
In summary:
a) to clear off the dark thingy on your face in the video, you have to edit the source video in a video editor and that has nothing to do with video encoding.
b) The file sizes are different due to different bitrates, meaning differences in the light and color information, i.e. shadow detail, preserved in every frame of the video footage.
You could offer users with faster Internet connections the option of downloading a higher-bitrate file. The higher-bitrate file will show slightly better quality even at the same video resolution, i.e. 720p in your case.

How to reduce file size of jpeg without losing its quality

I want to reduce the size of the JPGs I use on my website. Is there a way to reduce the size of JPG files so that I can cut data transfer charges without losing much clarity? I am hoping to do this without uploading my files somewhere.
[Nearly] Any application of JPEG is going to distort the image from the original. You can adjust the compression settings to balance compression to image distortion. The amount of compression you can get without visible distortion depends upon the type of image. If you have a cartoon with sharp color transitions, you are going to quickly see distortions with JPEG.
Things you can do to change the compression (see the sketch after this list):
Change the quantization tables ("Quality" settings in many encoders)
Subsample the Cb and Cr components
Use optimal Huffman tables (has no visible effect on the image)
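As a rough illustration (not part of the original answer), the sharp library for Node.js exposes all three of these levers when re-encoding a JPEG; the option names are sharp's, and the values are just examples:

    // npm install sharp
    const sharp = require("sharp");

    sharp("input.jpg")
      .jpeg({
        quality: 75,                 // quantization level ("quality setting")
        chromaSubsampling: "4:2:0",  // subsample the Cb and Cr components
        optimiseCoding: true         // build optimal Huffman tables
      })
      .toFile("output.jpg")
      .then(function (info) {
        console.log("Re-encoded size:", info.size, "bytes");
      });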
The best way to shrink JPG files on the web is to use shrinkjpeg.com. The reason I recommend shrinkjpeg.com is that the site compresses JPG images without actually uploading the files to their server (who knew HTML5 magic could do this locally?).

Need to know standards for PNG files in web graphics?

I'm starting to venture out from using JPEG and GIF files to PNG, and I was wondering whether there are any standards for using PNG besides IE's lack of support for it. I also want to know whether there are any current articles about the settings I should be using when optimizing for the web. Right now I'm using Photoshop to do this; should I be using Fireworks instead?
Which optimizations you use depends on the type of image. If your image contains only a few colors, you might use PNG-8; otherwise you may need PNG-24. The same goes for the use of transparency/alpha blending.
The Photoshop Save for Web feature does a fine job, but when your website has a lot of visitors, you may benefit from using PNGCrush to compress your images further. You can use the YSlow plugin for Firefox to test how much bandwidth you can save by crushing your images.
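For reference, a typical PNGCrush invocation looks something like pngcrush -brute -reduce input.png output.png, where -brute tries many compression strategies and -reduce attempts lossless color-type/bit-depth reduction; check pngcrush's help output for the exact flags available in your version.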
Also, you can make use of CSS sprites if your design allows it. This results in fewer (but larger) images and therefore fewer requests and sometimes less bandwidth, but it doesn't depend on the type of images you use.
PNG is supported by IE, by the way. Only alpha transparency is not supported by IE 6, and there are CSS/JavaScript tricks to work around that, although they do not work for background images.
I wouldn't quit using JPG; it is very useful for photos. PNG files are convenient for small images like buttons and other graphical elements, and for images with large flat areas, like screenshots.
