It looks like I can't get the image links to optimize my sites like I could in the previous version. Is there a way to get these links?
Thanks!
Hey Ben, I am having the same issue. What he is talking about is that before PageSpeed Insights moved to Lighthouse, as it is now, we were given the exact resources we needed, in the form Google requested them to be in. So if we had an image that was 4 MB and 2000x2000, but the viewport where the image appeared was, say, 300x300, Google would provide that picture in a zip folder along with all the other photos in the same boat. If JavaScript or CSS needed minification, it also provided those files for us. I do not see that option at all any longer, and it's really disappointing, as it saved me two steps of optimization in regards to page speed. Hoping we can get it back!?
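In the meantime, one way to reproduce that step yourself is to resize each image down to roughly the dimensions it is actually displayed at before uploading it. A minimal Node.js sketch using the sharp library (file names and the 300x300 target below are just placeholders for your own values):

    // Shrink an oversized source image down to roughly the size it is
    // actually displayed at, and recompress it as JPEG.
    const sharp = require('sharp');

    sharp('hero-original.jpg')
      .resize(300, 300, { fit: 'inside', withoutEnlargement: true })
      .jpeg({ quality: 80 })
      .toFile('hero-300.jpg')
      .then(info => console.log('wrote', info.size, 'bytes'))
      .catch(err => console.error(err));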
So I have this WordPress blog set up on a VPS with LiteSpeed and Cloudflare. The website loads some banners from a Revive installation on the same VPS server, only that domain isn't behind Cloudflare.
Although the PageSpeed and YSlow scores are good, I still get a 3 to 5 second page load. You can see the results here:
https://gtmetrix.com/reports/www.survivalsullivan.com/WIZjVt68
Although individual resources seem to load fast (including the Revive banners), there seem to be inexplicable "delays" in the waterfall... I'm no whiz at website optimization, but I do have some experience.
Am I missing something? I couldn't find a decent resource on how to read the waterfall, although I figured out most of it. Thanks!
Overall you got pretty good results!
First of all, deal with all those images GTmetrix flags: optimize them using Photoshop, JPEGmini, or sprites.
If you haven't already, install the BJ Lazy Load plugin and an above-the-fold optimization plugin.
Install and configure W3 Total Cache, which will fix the YSlow items that are still not green in GTmetrix.
I assume you use some kind of theme / page builder? See if you can reduce the number of DOM elements on the page. Use DOM Monster! to see how deeply nested your page is.
For example, if you need to display an image, don't nest it in a div inside a column inside a row inside a container div.
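Roughly the difference I mean (class names here are only illustrative):

    <!-- heavily nested: four wrappers just to show one image -->
    <div class="container">
      <div class="row">
        <div class="column">
          <div class="img-wrap">
            <img src="photo.jpg" alt="Photo">
          </div>
        </div>
      </div>
    </div>

    <!-- flatter: one wrapper (or none) does the same job -->
    <figure class="photo">
      <img src="photo.jpg" alt="Photo">
    </figure>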
If your website is going to be used by visitors in multiple countries, I would suggest paying for MaxCDN. It also integrates with the W3 Total Cache plugin.
If you use Google Fonts, try adding them locally to your stylesheet instead of GETing them from Google's servers.
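Roughly, that means downloading the font files and declaring them in your own CSS instead of linking to fonts.googleapis.com. A sketch (the family name and file paths are just examples):

    /* self-hosted font instead of a request to fonts.googleapis.com */
    @font-face {
      font-family: 'Open Sans';
      src: url('/fonts/open-sans-regular.woff2') format('woff2'),
           url('/fonts/open-sans-regular.woff') format('woff');
      font-weight: 400;
      font-style: normal;
      font-display: swap;
    }

    body {
      font-family: 'Open Sans', sans-serif;
    }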
We have a web application with over 560 pages. I would like a way to catalog the site somehow so that I can review the pages (without having to find each one in the menu or enter the URL). I'd be very glad for ideas on the best way to go about this.
I'd be happy to end up with 560 image files or PDFs, or one large PDF or whatever. I can easily put together a script with all the URLs, but how to pull those up and take a snapshot of some sort and save that to a file or files is where I need help.
The site is written in Java (server) and JavaScript (client).
I found a great plugin for Firefox that made this relatively painless. The plugin is called Screenshot Pimp (hate the name, love what it does). It takes a snapshot of your browser contents and immediately saves it to a file on your hard drive.
So then I wrote a script that would pull each page up in an iframe with the URL showing above it, and took snapshots of each page. It took a couple of hours to cycle through the whole set of 560+ pages, but it worked great, and now I have a catalog of all the pages.
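In case it helps anyone doing the same thing, the cycling part was conceptually something like this (the URLs, element IDs, and delay are placeholders; the actual snapshot of each page was taken by the plugin):

    // Cycle a list of page URLs through an iframe, showing the current URL above it.
    // Assumes a page with <div id="current-url"> and <iframe id="viewer">.
    var urls = ['/page1.jsp', '/page2.jsp', '/page3.jsp']; // really 560+ entries
    var index = 0;

    function showNext() {
      if (index >= urls.length) return;
      document.getElementById('current-url').textContent = urls[index];
      document.getElementById('viewer').src = urls[index];
      index++;
      setTimeout(showNext, 10000); // give each page time to load before the snapshot
    }

    showNext();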
A client would like to have his website show up on Google like the following one:
https://www.google.ch/search?q=mlzd&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:fr:official&client=firefox-a
With:
a list of links to the different sections of the website.
the Google feature with the arrow (on rollover of the links) to get previews of the website's pages.
So, here are my questions:
Do you know how to implement these links? Is it done with tags?
Do I then have to do something to get the Google image previews, or is it automatic?
Will the Google image previews work with a Flash website? If not, will the preview be a screenshot of the website with Flash disabled?
Thanks a lot for your answers!!
David
The links are called Sitelinks, and they are automatic. You can't do anything to specifically enable them.
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=47334
The preview is automatic, but you do need to make sure that you 1) allow Google to cache the URL (i.e. avoid "nocache") and 2) don't block the previews in any way: https://sites.google.com/site/webmasterhelpforum/en/faq-instant-previews
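Concretely, the kind of thing to check for and remove is a robots meta tag that blocks caching or snippets, for example:

    <!-- directives like these stop Google from caching the page or showing a preview -->
    <meta name="robots" content="noarchive, nosnippet">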
See the link in the previous answer.
This is sort of a statistics question. I am looking for a website analyser, though not quite like Google Analytics. I want the analyser to crawl the website itself and record all the data on a page: images, image sizes, and so on.
Even if it is just a library, that's a start for me.
Thanks
You could try wget to download all the images on a site. I doubt it's the best way to do this, though. Chrome's Inspect Element function has information on the sizes of all the images on a page, if that's more what you're looking for.
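If you do go the wget route, something along these lines would pull down just the image files (the domain, depth, and extension list are placeholders):

    # recursively fetch a site (2 levels deep) but keep only image files,
    # dumping them all into ./images without recreating the directory tree
    wget -r -l 2 -nd -P images -A jpg,jpeg,png,gif https://example.com/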
I'm trying to build a batch image downloader in Chrome. Basically, I will overlay a small download square on each image on the page, and the user clicks on it to download that image. Or the user can click once to download all the images on a page. I'm currently stuck on figuring out how to download the images. The best I can come up with is to use XHR to send the image to another server; the user can then retrieve it there.
If anyone has a solution for me, it would be much appreciated!
Jason
I believe you can XHR the images, and using the File API you can store them locally.
Take a look at the following site: http://www.html5rocks.com/features/file. There are additional resources in the right-hand column with detailed examples and tutorials, such as http://www.html5rocks.com/tutorials/file/filesystem/
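A rough sketch of that approach, assuming the prefixed Chrome FileSystem API covered in those tutorials (the URL, filename, and quota below are just examples):

    // Fetch an image as a Blob and write it into Chrome's sandboxed filesystem.
    function saveImage(url, filename) {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', url, true);
      xhr.responseType = 'blob';
      xhr.onload = function () {
        window.webkitRequestFileSystem(window.TEMPORARY, 5 * 1024 * 1024, function (fs) {
          fs.root.getFile(filename, { create: true }, function (fileEntry) {
            fileEntry.createWriter(function (writer) {
              writer.onwriteend = function () {
                console.log('Saved', fileEntry.toURL());
              };
              writer.write(xhr.response); // xhr.response is a Blob
            });
          });
        });
      };
      xhr.send();
    }

    saveImage('http://example.com/photo.jpg', 'photo.jpg');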
Mohamed Mansour
This code will do the trick for you: https://gist.github.com/1049553
It's a very simple use of a 'feature' in Chrome when you open an image in a new tab.