http requests and file sizes - sprite

Hi all, I'm in the process of learning about sprites and how they can speed up your pages.
I've used SpriteMe to create an overall sprite image which is 130 KB; it is made up of 14 images with a combined size of about 65 KB.
So is it better to have one HTTP request for a 130 KB file, or 14 requests for a total of 65 KB?
Also, there is a detailed image that was put into the sprite and caused its size to go up by about 60 KB; this used to be a separate JPG of only 30 KB. Would I be better off keeping it separate and suffering the additional request?

It depends. 14 requests would usually be improved by combining items into one request, because limiting HTTP requests is where you want to get to: one larger request is less blocking, and makes your page more responsive, than 14 smaller requests.
I say it depends because sometimes it makes sense to split sprites into groups of related items, as opposed to having one big sprite for everything. This will depend on the complexity of the project; if you are dealing with a simple set of images, one sprite works very well.
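To make the trade-off concrete, here is a back-of-the-envelope estimate. All the numbers (latency, bandwidth, parallel connections) are assumptions for illustration, not measurements; only the 130 KB / 14 images / 65 KB figures come from the question.

```python
import math

def fetch_time_ms(num_requests, total_kb, latency_ms=100,
                  bandwidth_kb_per_s=500, parallel=6):
    """Rough fetch time: serial rounds of parallel requests plus transfer."""
    round_trips = math.ceil(num_requests / parallel)
    transfer_ms = total_kb * 1000 / bandwidth_kb_per_s
    return round_trips * latency_ms + transfer_ms

print(fetch_time_ms(1, 130))   # one 130 KB sprite -> 360.0 ms
print(fetch_time_ms(14, 65))   # 14 images, 65 KB total -> 430.0 ms
```

Under these assumed numbers the single sprite wins despite being twice the size, because each extra round of requests costs a full round trip of latency.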

Related

How to reduce the response time of a web page to a minimum in Django?

Currently my pages take about 1.5 s to load if they contain images; pages with image, audio and video files take around 2 to 2.5 s to load. What I want to know is whether there is a way to bring that loading time down to a minimum.
I am using Django, and Django templates, to create this web application.
Since you are talking about images, audio and video, I would first experiment with the size and quality of the images and run some tests. For audio and video you might want to use the attribute preload="none"; this is on the template side. Another option is to load images/audio/video on scroll, or with paging. It is not good practice to send everything at once, e.g. show 10-15 media items per page.
On the view side, check the queries you are executing and the data structures you are using (memory consumption matters; avoid huge lists and JSON blobs). The same goes for custom filters, if applicable.
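The paging advice above can be sketched framework-agnostically; the slicing is the same thing Django's `Paginator` does under the hood. The media filenames are made up for illustration.

```python
def paginate(items, page, per_page=15):
    """Return the 1-based `page` slice of `items`."""
    start = (page - 1) * per_page
    return items[start:start + per_page]

# Hypothetical media list: 40 items served 15 at a time instead of all at once
media = [f"clip_{i}.mp4" for i in range(40)]
first_page = paginate(media, 1)   # 15 items
last_page = paginate(media, 3)    # the remaining 10 items
```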

First Contentful Paint measured at almost 6 s although content is visible in the second screenshot

I did everything I could to optimize my Wordpress site and got rid of most of the recommendations by PageSpeed Insights. I use WP Rocket caching plugin, Optimole image optimization and Cloudflare CDN.
Google PageSpeed Insights got somewhat better but still, especially on mobile, results are far from good - although all of the recommendations that were there in the beginning and that I could get rid of (without custom coding and without breaking the site) are now gone.
There is one thing that strikes me as odd about the PageSpeed Insights results: First Contentful Paint is measured at somewhere between 5 and 6 seconds, although the screenshots of the page that Google presents clearly show contentful paint in the second frame already. See image 1.
Any ideas on this?
The only remaining points in your suggestions are 1. Remove unused CSS, and 2. Defer non-critical resources (I think, because the text is in German).
Point 2 affects time to first paint the most.
The screenshots you see in PSI are not in real time.
Also there is a slight discrepancy (bug?) between the screenshots and actual performance, as PSI uses a simulated slow-down of the page rather than an applied slowdown: it loads the page at full speed, then adjusts the figures to account for the bandwidth and round-trip time to the server that higher latency would cause.
If you run a Lighthouse audit (Google Chrome -> F12 -> Audits -> Run audits) with throttling set to 'applied' rather than 'simulated', you will see it is about 5 seconds before a meaningful paint.
Lighthouse is the engine that powers Page Speed Insights now so the same information should be in both.
With regards to your site speed: you have a load of blank SVGs being loaded for some reason, and your CSS and JS files need combining (do it manually; plugins don't tend to do a good job here) to reduce the number of requests your site makes. The browser can only make about 8 requests at a time, and on 4G the round-trip latency to your server adds up quickly, e.g. 40 files = 5 rounds of 8 requests; at 100 ms latency that is 500 ms of dead time waiting for responses.
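The arithmetic in that last point, spelled out (8 parallel connections and 100 ms latency are the assumed figures from the answer):

```python
import math

files, parallel, latency_ms = 40, 8, 100
rounds = math.ceil(files / parallel)   # 5 rounds of 8 requests each
dead_time_ms = rounds * latency_ms     # 500 ms spent waiting, before any bytes arrive
print(rounds, dead_time_ms)
```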

Loading website Images faster

Is it possible to make a website background image load faster than it does now? My website's background image is 1258x441 and its file size is 656 KB. It is taking too long to load the complete background image when accessing my website. Is there any way, other than compressing it (the image is already compressed), to improve its loading speed?
Choose one (2 is my main suggestion, together with 3):
1. Compress the image even more. Because it is a background, quality matters less; users do not focus on the background as much as they do on the content.
2. Since the background is partially covered by content, you can paint the part of the background that is not visible (behind the content) black (or any other solid colour). This will make the image compress more effectively, sparing some space.
3. Save the image with JPG progressive compression. The background will then display in gradually higher quality as the image loads.
4. Get rid of the background image. (Obvious.) :)
5. Today's connections are fast; don't change anything.
Also: if your PNG image has any repeating parts, you can slice the image into three parts and spare a lot of space.
The speed of loading the background image is determined (ignoring latency) by the bandwidth of your connection and the size of the image. So if you have, say, 128 KB/s and a 4096 KB image, you need at least
4096 KB / 128 KB/s = 32 s
for it to load.
Since you can't change the bandwidth, the only thing you can do is change the size of the picture. That is, if you can't compress it more, lower the resolution.
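The estimate above as a tiny helper. The 4096 KB / 128 KB/s values are from the example; applying the same formula to the 656 KB background from the question is my own extrapolation.

```python
def min_load_time_s(size_kb, bandwidth_kb_per_s):
    """Lower bound on load time, ignoring latency."""
    return size_kb / bandwidth_kb_per_s

print(min_load_time_s(4096, 128))  # 32.0 s, matching the example above
print(min_load_time_s(656, 128))   # about 5.1 s for the 656 KB background
```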
If you don't want to lose precision, you could put different layers of background in your website with different qualities, the better ones over the bad ones. Then when your page is loading, the low quality images load fast and you get some background. Then over time the better quality images load and the background is improved.
Load your images into a database and retrieve them whenever required. The benefit of this? The database is loaded once when you initialise it and can retrieve the information whenever required; it is fast compared to other techniques.

Should AspBufferLimit ever need to be increased from the default of 4 MB?

A fellow developer recently requested that the AspBufferLimit in IIS 6 be increased from the default value of 4 MB to around 200 MB for streaming larger ZIP files.
Having left the Classic ASP world some time ago, I was scratching my head as to why you'd want to buffer a BinaryWrite and simply suggested setting Response.Buffer = false. But is there any case where you'd really need to make it 50x the default size?
Obviously, memory consumption would be the biggest worry. Are there other concerns with changing this default setting?
Increasing the buffer like that is a supremely bad idea. You would allow every visitor to your site to use up to that amount of RAM. If your BinaryWrite/Response.Buffer=false solution doesn't appease him, you could also suggest that he call Response.Flush() now and then. Either would be preferable to increasing the buffer size.
In fact, unless you have a very good reason you shouldn't even pass this through the asp processor. Write it to a special place on disk set aside for such things and redirect there instead.
One downside of turning off the buffer (you could use Flush, but I really don't see why you'd do that in this scenario) is that the client doesn't learn the content length at the start of the download. Hence the browser's download dialog at the other end is less meaningful: it can't tell how much progress has been made.
A better (IMO) alternative is to write the desired content to a temporary file (perhaps using GUID for the file name) then sending a Redirect to the client pointing at this temporary file.
There are a number of reasons why this approach is better:
The client gets good progress info in the save dialog or the application receiving the data.
Some applications can make good use of byte-range fetches, which only work well when the server is delivering "static" content.
The temporary file can be re-used to satisfy requests from other clients.
There are a number of downsides, though:
If it takes some time to create the file content, writing to a temporary file adds latency before any data is received, increasing the total download time.
If strong security is needed on the content, having a static file lying around may be a concern, although the use of a random GUID filename mitigates that somewhat.
There is a need for some housekeeping of old temporary files.
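The temp-file-and-redirect pattern above, sketched in Python rather than Classic ASP for brevity. The `downloads` directory and URL prefix are assumptions; in the real setup this directory would be served as static content by IIS.

```python
import os
import uuid

DOWNLOAD_DIR = "downloads"  # assumed directory, served as static content

def stage_download(content: bytes) -> str:
    """Write content to a GUID-named file; return the URL to redirect to."""
    os.makedirs(DOWNLOAD_DIR, exist_ok=True)
    name = f"{uuid.uuid4()}.zip"
    with open(os.path.join(DOWNLOAD_DIR, name), "wb") as f:
        f.write(content)
    return f"/{DOWNLOAD_DIR}/{name}"  # the request handler would send a 302 here

url = stage_download(b"PK\x03\x04")  # placeholder bytes, not a real ZIP
```

Because the server then delivers a static file, the client gets a Content-Length up front and byte-range requests work, which is the whole point of the approach.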

JavaME - LWUIT images eat up all the memory

I'm writing a MIDlet using LWUIT, and images seem to eat up incredible amounts of memory. All the images I use are PNGs packed inside the JAR file. I load them using the standard Image.createImage(URL) method. The application has a number of forms, each with a couple of labels and buttons; however, I am fairly certain that only the active form is kept in memory (I know it isn't very trustworthy, but Runtime.freeMemory() seems to confirm this).
The application worked well at 240x320 resolution, but moving it to 480x640 and using appropriately larger UI images started causing out-of-memory errors. What the application does, among other things, is download remote images. It seems to work fine until it gets to this point: after downloading a couple of PNGs and returning to the main menu, the out-of-memory error is encountered. Naturally, I looked into the amount of memory the main menu uses, and it was pretty shocking. It's just two labels with images and four buttons. Each button has three images, used for style.setIcon, setPressedIcon and setRolloverIcon. The images range in size from 15 to 25 KB, but after removing two of the three images used for every button (8 images in total), Runtime.freeMemory() showed a stunning 1 MB decrease in memory usage.
The way I see it, I either have a whole lot of memory leaks (which I don't think I do, but memory leaks aren't exactly known to be easily tracked down), I am doing something terribly wrong with image handling or there's really no problem involved and I just need to scale down.
If anyone has any insight to offer, I would greatly appreciate it.
Mobile devices are usually very low on memory. So you have to use some tricks to conserve and use memory.
We had the same problem at a project of ours and we solved it like this.
for downloaded images:
Make a cache where you put your images. If you need an image, check whether it is in the cache map: if it isn't, download it and put it there; if it is, use it. If memory is full, remove the oldest image from the cache map and try again.
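That download cache can be sketched as a small LRU map. This is a Python illustration of the idea, not Java ME code; the capacity of 8 entries is an arbitrary stand-in for "memory is full" (a real MIDlet would evict on low memory instead).

```python
from collections import OrderedDict

class ImageCache:
    """LRU cache for downloaded images, evicting the oldest entry when full."""
    def __init__(self, capacity=8):
        self.capacity = capacity
        self._items = OrderedDict()

    def get(self, url, download):
        if url in self._items:
            self._items.move_to_end(url)      # mark as most recently used
            return self._items[url]
        if len(self._items) >= self.capacity:
            self._items.popitem(last=False)   # evict the oldest image
        self._items[url] = download(url)      # cache miss: fetch and store
        return self._items[url]

cache = ImageCache(capacity=2)
cache.get("a.png", lambda u: b"A")
cache.get("b.png", lambda u: b"B")
cache.get("a.png", lambda u: b"A")  # hit: refreshes a.png
cache.get("c.png", lambda u: b"C")  # full: evicts b.png, the oldest entry
```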
for other resource images:
keep them in memory only for as long as they are visible; once they are not, break the reference and the GC will do the cleanup for you.
Hope this helps.
There are a few things that might be happening here:
You might have seen the memory used before garbage collection, which doesn't correspond to the actual memory used by your app.
Some third party code you are running might be pooling some internal datastructures to minimize allocation. While pooling is a viable strategy, sometimes it does look like a leak. In that case, look if there is API to 'close' or 'dispose' the objects you don't need.
Finally, you might really have a leak. In this case you need to get more details on what's going on in the emulator VM (though keep in mind that it is not necessarily the same as the phone VM).
Make sure that your emulator uses JRE 1.6 as the backing JVM. If you need it to use the runtime libraries from an earlier JDK, use -Xbootclasspath:<path-to-rt.jar>.
Then, after your application gets in the state you want to see, do %JAVA_HOME%\bin\jmap -dump:format=b,file=heap.bin <pid> (if you don't know the id of your process, use jps)
Now you've got a dump of the JVM heap. You can analyze it with jhat (comes with the JDK, a bit difficult to use) or some third party profilers (my preference is YourKit - it's commercial, but they have time-limited eval licenses)
I had a similar problem with LWUIT at Java DTV. Did you try flushing the images when you don't need them anymore (getAWTImage().flush())?
Use EncodedImage and resource files when possible (resource files use EncodedImage by default; read the javadoc for details). Other comments are also correct that you need to actually observe the amount of memory; even high-RAM Android/iOS devices run out of memory pretty fast with multiple images.
Avoid scaling which effectively eliminates the EncodedImage.
Have you considered that loading the same image from the JAR many times may be creating many separate image objects (with identical contents) instead of reusing one instance per image? This is my first guess.