I've found many websites that switch from one piece of content to another without making the user watch lots of images load; everything appears very light. Usually I see a loading animation of 5, 10 or 15 seconds (without a progress bar), which makes me think that's the moment when the website renders all the initial content. Have I just come across a good use of progressive JPEG, or do these sites use a special framework? If not, what's the right development approach for fast-loading images?
In these days of high-speed internet, the advantage of progressive JPEG is that, with the right settings, you can often get better compression than with sequential (baseline) JPEG. In the days of dialup modems, progressive JPEG and interlaced GIF and PNG let you get a preview of what the image you were downloading looked like (and you could stop the download if it was bad).
Progressive JPEG does allow what you are describing. It takes more processing because the decoder has to decompress the image once per screen update. To see the effect on screen, the decoder has to support re-decoding after each scan, and the application needs to interact with the decoder to update the display.
In summary, the fastest way to decode an image is to process the entire JPEG stream at once. A progressive display takes more processing but lets the user see what is coming down.
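For reference, most image libraries can emit either form with a single switch. A minimal sketch using Pillow (an assumption; any JPEG encoder exposes something similar):

```python
from io import BytesIO
from PIL import Image

# Encode the same picture as baseline and as progressive JPEG.
img = Image.new("RGB", (128, 128), (180, 60, 20))

baseline, progressive = BytesIO(), BytesIO()
img.save(baseline, "JPEG", quality=85)
img.save(progressive, "JPEG", quality=85, progressive=True)

# Both streams decode to the same picture; only the scan order differs,
# which is what lets a progressive-aware viewer paint a coarse preview early.
print(baseline.tell(), progressive.tell())
```

The relative sizes of the two streams depend on the image content, which is why progressive sometimes compresses better and sometimes worse.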
How can a system improve video quality automatically? For example, a dark line on my face in a video can't be removed by the system automatically, which makes sense. Here I'm trying to understand Azure Media Services encoding permutations.
When I uploaded a 55.5 MB MP4 file and encoded it with the "H264AdaptiveBitrateMP4Set720p" encoder preset, I received the following output files:
Now look at the video file highlighted with the green rectangle; its size looks reasonable given the input file size. But the video files highlighted with the red rectangles are additional files for adaptive streaming, which look useless if you compare them with my 'a dark line on my face' example. Here are my questions, and I would love to read your input on them:
What are the exact reasons the encoder increases the file size?
Why should I pay more for bandwidth and storage for these large files, and how do I convince clients?
Is there any way to specify that such files should not be created when scheduling the encoding?
Any input is highly appreciated.
1) The dark lines appearing on your face have nothing to do with encoding. Encoding simply means re-arranging bits that make up the video using a different compression algorithm than the one used in the source video.
2) As you can see from the filenames of the generated files, they each have a different bitrate, denoted in kbps. This is the amount of data, i.e. the number of bits, that the player has to download and decode to get one second's worth of video footage. The higher the bitrate, the better the quality of the video, because more detail, such as light and color information, is preserved in every frame.
As a corollary, a higher-bitrate video is better suited to faster internet connections.
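That bitrate-to-size relationship also answers the file-size question directly, since file size is essentially bitrate times duration. A rough sketch (pure arithmetic; the 3400 kbps / 2-minute figures are illustrative, not taken from your output):

```python
# Rough file-size estimate from bitrate and duration,
# ignoring container overhead and audio.
def estimated_size_mb(bitrate_kbps: int, duration_s: float) -> float:
    bits = bitrate_kbps * 1000 * duration_s
    return bits / 8 / 1_000_000  # bits -> bytes -> megabytes

# A 2-minute clip at a hypothetical 3400 kbps 720p rung:
size = estimated_size_mb(3400, 120)
print(round(size, 1))  # 51.0
```

So a rendition encoded at a higher bitrate than your source will come out larger than your source, even though it cannot contain more real detail.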
So, Azure must have converted your source video into these 4 different videos at different bitrates, all using the same video (H.264) and audio (AAC) encoding.
3) As to how to tell Azure not to make so many files, I do not know the answer. I am pretty sure it is a configuration setting somewhere, but I honestly have no idea. I am confident, though, that it is only a matter of configuration to tell it to skip the other bitrate conversions.
In summary:
a) To remove the dark line on your face in the video, you have to edit the source video in a video editor; that has nothing to do with video encoding.
b) The file sizes are different due to the different bitrates, meaning differences in the amount of light and color information, i.e. shadow detail, preserved in every frame of the video footage.
To users with a faster internet connection, you could offer the option of downloading a higher-bitrate file. The higher-bitrate file will show slightly better quality even at the same resolution, i.e. 720p in your case.
I have some large images on my web site, and so I saved them as progressive jpegs. That way the user should see that something is happening while they download. But nothing shows up for several seconds until the entire jpeg is downloaded. What am I doing wrong?
The site (the large image should be obvious)
http://www.heylookthatsme.com
A typical image on its own, displaying properly:
http://www.heylookthatsme.com/art/stories/Wonderland.jpg
EDIT: it looks like it may be due to my using tables for layout. A bit of Googling suggests that tables won't show anything until the last byte is downloaded. I'll try using pure DIVs and see if that fixes the problem.
Your progressive JPEGs can be perfectly correct, but the web browser does not have to display them scan-by-scan.
The display of the image is entirely up to the decoder. It's more complicated to implement, say, a web browser that continuously updates a progressive JPEG than to buffer it and display it all at once.
Remember, each browser can do it differently.
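If you want to rule out the files themselves, you can check the JPEG's start-of-frame marker directly: 0xC0 means baseline, 0xC2 means progressive. A byte-level sketch (Pillow is used only to generate test files and is an assumption; the parser itself is pure stdlib):

```python
from io import BytesIO
from PIL import Image

def jpeg_sof_marker(data: bytes) -> int:
    """Walk the JPEG segment headers and return the start-of-frame
    marker byte: 0xC0 = baseline, 0xC2 = progressive."""
    i = 2  # skip the SOI marker (FF D8)
    while i < len(data) - 3:
        marker = data[i + 1]
        if 0xC0 <= marker <= 0xC3:  # SOF0..SOF3 come before the scan data
            return marker
        seg_len = int.from_bytes(data[i + 2:i + 4], "big")
        i += 2 + seg_len  # jump over this segment to the next marker
    raise ValueError("no SOF marker found")

# Demo: generate one file of each kind and inspect them.
img = Image.new("RGB", (64, 64), (0, 120, 255))
prog, base = BytesIO(), BytesIO()
img.save(prog, "JPEG", progressive=True)
img.save(base, "JPEG")
print(hex(jpeg_sof_marker(prog.getvalue())), hex(jpeg_sof_marker(base.getvalue())))
```

If your files really do carry SOF2, then the delayed rendering is down to the browser (or, as your edit suggests, the table layout), not the encoding.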
What are some scalable and secure ways to provide a streaming video to a recipient with their name overlayed as a watermark?
Some of the comments here are very good. Using libavfilter is probably a good place to start. Watermarking every frame is going to be very expensive because it requires decoding and re-encoding the entire video for each viewer.
One idea I'd like to expand upon is watermarking only portions of the video. I will assume you're working with h.264 video, which requires far more CPU cycles to decode and encode than older codecs. I think per CPU core you could mark 1 or 2 streams in real time. If you can reduce your requirements to 10 seconds marked out of every 100, then you're talking about 10-20 per core, so about 100 per server. It's probably still not the performance you're looking for.
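The back-of-envelope math above can be sketched explicitly (every number here is one of the rough assumptions from the text, plus a hypothetical core count):

```python
# Capacity estimate for partial watermarking.
streams_per_core_full = 2      # assumed real-time h.264 decode+encode per core
marked_fraction = 10 / 100     # watermark only 10s out of every 100s
cores_per_server = 8           # hypothetical server size

streams_per_core = streams_per_core_full / marked_fraction
print(streams_per_core * cores_per_server)  # 160.0
```

Tweaking the marked fraction is the main lever: halving it roughly doubles the number of concurrent viewers a server can serve.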
I think some companies sell watermarking hardware for TV operators, but I doubt it's any cheaper than a rack of servers and far less flexible.
I think you want to use the ffmpeg libavfilter library. Basically, it allows you to overlay an image on top of a video. There is an example showing how to insert a transparent PNG logo in the bottom-left corner of the input. You can interface with the library from C/C++, or drive it from a shell on a command-line basis.
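As a sketch of the command-line route, here is how you might build an ffmpeg invocation that overlays a per-viewer name image using the overlay filter (the filenames and coordinates are hypothetical placeholders; this assumes an ffmpeg build with libavfilter):

```python
# Construct an ffmpeg command that stamps a watermark image onto a video,
# placed 10px in from the bottom-left corner.
def overlay_command(video: str, watermark: str, out: str) -> list[str]:
    return [
        "ffmpeg", "-i", video, "-i", watermark,
        "-filter_complex", "overlay=10:main_h-overlay_h-10",
        out,
    ]

cmd = overlay_command("lecture.mp4", "viewer_name.png", "marked.mp4")
print(" ".join(cmd))
```

You would then run this per viewer via subprocess, which is exactly why the per-viewer re-encode cost discussed above matters.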
In older versions of ffmpeg you will need to use an extension library called watermark.so, often located in /usr/lib/vhook/watermark.so.
Depending on what your content is, you may want to consider using invisible digital watermarking as well. It embeds a digital sequence into your video which is not visually detectable. Even if someone were to remove the visible watermark, the invisible watermark would still remain. If a user were to redistribute your video, invisible watermarking would indicate the source of the redistribution.
Of course there are also companies which provide video content management, but I get the sense you want to do this yourself. Doing the watermarking in real time is going to be very resource intensive, especially as you scale up. I would look to do some type of predictive watermarking.
I'm starting to venture out from using JPEG and GIF files to PNG. I was wondering whether there are any standards for using PNG, besides IE's incomplete support for it. I'd also like to know of any current articles about the settings I should use when optimizing for the web. Right now I'm using Photoshop to do this; should I be using Fireworks instead?
Which optimizations you use depends on the type of image. If your image contains only a few colors, you might use PNG-8; otherwise you may need PNG-24. The same goes for the use of transparency/alpha blending.
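The PNG-8 vs PNG-24 choice can be tried out programmatically. A small sketch with Pillow (an assumption; any image library with palette quantization works, and the flat-color test image is just an illustration):

```python
from io import BytesIO
from PIL import Image

# Save the same few-color image as truecolor PNG-24 and palette PNG-8.
img = Image.new("RGB", (256, 256), (200, 30, 30))

png24, png8 = BytesIO(), BytesIO()
img.save(png24, "PNG")                      # PNG-24 (truecolor)
img.quantize(colors=16).save(png8, "PNG")   # PNG-8 (16-entry palette)

print(png24.tell(), png8.tell())
```

For photographic content the 16-color quantization would visibly degrade the image, which is why the color count of the source should drive the choice.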
The Photoshop Save for Web feature does a fine job, but when your website has a lot of visitors, you may benefit from using PNGCrush to compress your images further. You can use the YSlow plugin for Firefox to test how much bandwidth you can save by crushing your images.
Also, you can make use of CSS sprites if your design allows it. This can result in fewer (but larger) images and therefore fewer requests and sometimes less bandwidth. But this doesn't depend on the type of images you use.
PNG is supported by IE, by the way. Only alpha transparency is not supported by IE 6, but there are CSS/JavaScript tricks to work around that, although they do not work for background images.
I wouldn't quit using JPEG. JPEG is very useful when it comes to photographs. PNG files are convenient for small images like buttons and graphical elements, and for images with large flat areas, like screenshots.
There is a 2D game based on Direct3D. The game has a lot of graphics and animations. What is the best way to extract animation image sequences from the running game (e.g. using a memory dump)? Are there any special tools for this purpose?
It depends on what you call 'the best'.
FRAPS (http://www.fraps.com/) allows you to capture screenshots, from which you can extract the frames.
Alternatively, you may be able to use a graphical debugging tool like PIX (http://msdn.microsoft.com/en-us/library/bb173085(VS.85).aspx) to capture the graphics commands and pull the textures out directly (games often disable PIX support in release builds, though).
Or, try to pull the images directly out of the game's files (they have to be loaded from somewhere, and file formats are usually pretty easy to reverse engineer).
NB: I'm assuming that by '2D game' you mean 2D gameplay, not actual 3D assets.
I don't know if it works in full-screen mode, but with a desktop screen-recorder tool like CamStudio you can record the animation in uncompressed AVI format.
With an extra tool for video processing you can do whatever you want with the captured frames.
There is a tool that can extract resource files from many popular games and binary formats: Game File Explorer.
It saves you the trouble of screen grabbing.