Is it possible to set the order in which images are downloaded by the browser? I've got about 20 images on a page, and looking at the network tab in Chrome's developer tools they seem to be downloaded in no particular order, and the order differs every time the page is loaded.
I've looked at some lazy-loading JS plugins, but these all seem to share the same core issue: they remove the image src and replace it later, thus forcing the browser to take the images in order. This is terrible for SEO, as a search engine's bot will disregard an image with no src.
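(For reference, the swap those plugins do looks roughly like this; a minimal sketch using the common data-src convention, and exactly the pattern I'd like to avoid:)

// The plugin rewrites <img src="photo.jpg"> to <img data-src="photo.jpg">,
// so a bot that doesn't run JavaScript never sees a real src.
document.querySelectorAll('img[data-src]').forEach(function (img) {
  img.src = img.dataset.src;       // put the real URL back when it's this image's turn
  img.removeAttribute('data-src');
});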
Is there any other way to do this?
Related
I'm using React/Next.js, and a lot of the images come from a CMS in the form of a link. However, this causes them to load really slowly (300 ms+ even for 40 kB images), so I want to prerender/preload/presave them on every build.
Is there any way I can do this, given the links to the images?
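(For concreteness, what I imagine is a build step that saves them into public/ so Next.js serves them from my own host; a rough sketch, where imageUrls stands in for the CMS links:)

// download-images.js, run as: node download-images.js && next build
const fs = require('fs');
const path = require('path');
const https = require('https');

const imageUrls = []; // filled with the links coming from the CMS

fs.mkdirSync(path.join('public', 'cms'), { recursive: true });

for (const url of imageUrls) {
  const dest = path.join('public', 'cms', path.basename(new URL(url).pathname));
  https.get(url, function (res) {
    res.pipe(fs.createWriteStream(dest)); // saved locally, served from our own host from now on
  });
}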
So I have this WordPress blog set up on a VPS with LiteSpeed and Cloudflare. The website loads some banners from a Revive installation on the same VPS, only that domain isn't behind Cloudflare.
Although the PageSpeed and YSlow scores are good, I still get a 3 to 5 second page load. You can see the results here:
https://gtmetrix.com/reports/www.survivalsullivan.com/WIZjVt68
Although individual resources seem to load fast (including the Revive banners), there seem to be inexplicable "delays" in the waterfall... I'm no whiz at website optimization, but I do have some experience.
Am I missing something? I couldn't find a decent resource on how to read the waterfall, although I figured out most of it. Thanks!
Overall you got pretty good results!
First of all, deal with all the images GTmetrix flags: optimize them using Photoshop, JPEGmini, or sprites.
If you haven't already, install the BJ Lazy Load and Above The Fold plugins.
Install and configure W3 Total Cache, which will fix the YSlow settings that are still not green in GTmetrix.
I assume you use some kind of theme/page builder? See if you can reduce the number of DOM elements on the page. Use DOM Monster! to see how deeply nested your page is.
For example, if you need to display an image, don't nest it in a div inside a column inside a row inside a container div.
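Something like this (hypothetical markup, purely to show the difference):

<!-- heavy: four wrappers for a single image -->
<div class="container">
  <div class="row">
    <div class="column">
      <div><img src="photo.jpg" alt=""></div>
    </div>
  </div>
</div>

<!-- lighter: the image on its own, styled directly -->
<img class="column-image" src="photo.jpg" alt="">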
If your website is going to be used by users in multiple countries, I would suggest paying for MaxCDN; it also integrates with the W3 Total Cache plugin.
If you use Google Fonts, try adding them locally to your stylesheet instead of GETting them from Google's servers.
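That is, download the font files, host them yourself, and declare them with @font-face; a sketch, with the family and file path as examples:

/* served from your own domain instead of fonts.googleapis.com */
@font-face {
  font-family: 'Open Sans';
  font-style: normal;
  font-weight: 400;
  src: url('/fonts/open-sans-regular.woff2') format('woff2');
}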
I'm trying to redesign a small portion of a vastly huge site, and I was told that I can load custom images in Inspect Element (Chrome) if they are located in the same path as the stylesheet to which the site is remapped (all done through CSS via content: url('...');). But the webpage is still looking for them in its own resources. So is there a way to use a locally stored image with Inspect Element?
When you're passing images in locally, you can use, for example:
file:///C:/Users/[username]/Desktop/picture.png
So if I were to change a background image, I would use:
background-image:url("file:///C:/Users/Julia/Desktop/background.png");
But note that a lot of sites don't allow you to load local resources, so an error may appear in the DevTools console when you try.
Actually, for me it only works with background rather than background-image: I'm not sure why background-image fails, but the plain background shorthand does work. So put in:
file:///Users/[YourName]/Documents/picture.png
Like this:
background:url("file:///Users/[YourName]/Documents/picture.png")
I just decided to contribute because I wanted to try this out myself and found this post as the first answer, despite it being 2-3 years old; it might still be helpful for other users.
Also, this was done in Opera; I haven't tried other browsers, but it works for me. Note that I didn't include the drive name. You can also simply get the URL by dragging the image into the browser (if it displays it rather than downloading it, as older browsers tend to do) and copying the link from the address bar.
We have a web application with over 560 pages. I would like a way to catalog the site somehow so that I can review the pages (without having to find each one in the menu or enter the URL). I'd be very glad for ideas on the best way to go about this.
I'd be happy to end up with 560 image files or PDFs, or one large PDF, or whatever. I can easily put together a script with all the URLs, but how to pull those up, take a snapshot of some sort, and save that to a file or files is where I need help.
The site is written in Java (server) and JavaScript (client).
I found a great plugin for Firefox that made this relatively painless. The plugin is called Screenshot Pimp (hate the name, love what it does). It takes a snapshot of your browser contents and immediately saves it to a file on your hard drive.
So then I wrote a script that pulled each page up in an iframe, with the URL showing above it, and took snapshots of each page. It took a couple of hours to cycle through the whole set of 560+ pages, but it worked great, and now I have a catalog of all the pages.
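The script itself was trivial; roughly the following, running in a small driver page that has a label div and an iframe (the delay is whatever your slowest page needs):

var urls = [/* the 560+ page URLs */];
var i = 0;

function showNext() {
  if (i >= urls.length) return;
  document.getElementById('label').textContent = urls[i]; // the URL displayed above the frame
  document.getElementById('frame').src = urls[i];
  i++;
  setTimeout(showNext, 12000); // time for the page to render and the snapshot to be saved
}
showNext();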
Sometimes when you view a file on a page on its own, the browser has some default way of viewing it, such as placing it in an image or video tag or invoking some plugin. Other times, it just downloads the file.
Sometimes this is because of headers set by the server, but let's ignore that for now. For some file types, it doesn't matter what headers were set; the browser will try to download them regardless.
Some of the types that the browser will view are listed in navigator.mimeTypes. However, this is not authoritative: the iPad can view Microsoft Office files, but it does not report this.
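The obvious check only covers what the browser chooses to advertise:

// True only if the browser advertises a viewer for the type;
// false is inconclusive, as the iPad/Office example shows.
function reportsViewer(mimeType) {
  var entry = navigator.mimeTypes[mimeType];
  return !!(entry && entry.enabledPlugin);
}
reportsViewer('application/pdf'); // true where a PDF viewer plugin is registered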
Is there any simple way to figure out what the browser is going to do with a file before it does it?