How to profile browser page load using JavaScript (library)? - browser

I've been doing a lot of research on this, but I figure I could crowd-source what I have so far and see if anyone can offer additions. I want to be able to determine page load time using JS, not just as a single number, but as a breakdown.
First what I found was a new W3C Specification (Draft):
https://dvcs.w3.org/hg/webperf/raw-file/tip/specs/NavigationTiming/Overview.html
This would be perfect; however, it's limited to Chrome and IE, and it's still inconsistent between the two browsers.
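That said, where the API is available you can already pull a useful breakdown out of it. A minimal sketch (the groupings are my own; it has to run after the load event, since loadEventEnd isn't populated until then):

// Rough page-load breakdown from the Navigation Timing draft API.
window.addEventListener('load', function () {
  // loadEventEnd is only filled in after the load handler returns,
  // so defer the read with a zero-delay timeout.
  setTimeout(function () {
    if (!window.performance || !performance.timing) return; // unsupported browser
    var t = performance.timing;
    console.log('DNS lookup:       ' + (t.domainLookupEnd - t.domainLookupStart) + ' ms');
    console.log('TCP connect:      ' + (t.connectEnd - t.connectStart) + ' ms');
    console.log('Request/response: ' + (t.responseEnd - t.requestStart) + ' ms');
    console.log('DOM processing:   ' + (t.domComplete - t.domLoading) + ' ms');
    console.log('Full page load:   ' + (t.loadEventEnd - t.navigationStart) + ' ms');
  }, 0);
});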
But now I have found Real User Monitoring (RUM) by New Relic, which is based on a JavaScript library by Steve Souders. From what I can tell, they can determine the same data that I saw in the new W3C draft.
It seems that they are using HTTP Archive: http://code.google.com/p/httparchive/
However, I cannot seem to find any information there on page performance or load, so I'm not sure I'm looking at the correct library.
Of course, if there is anything else out there that could provide more information on page profiling, I'd welcome it.

Have a look at Boomerang.js (https://github.com/yahoo/boomerang) by Yahoo.
It should allow you to roll your own RUM, and it degrades gracefully, so you should still get some information from browsers without navigation.timing.
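Getting started is mostly a matter of loading the script and pointing it at a collector. A minimal sketch, assuming boomerang.js is already included on the page and that /beacon is an endpoint you've set up to receive the data:

// beacon_url is where boomerang POSTs its timing beacons (assumed endpoint).
BOOMR.init({
  beacon_url: '/beacon'
});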
Also, if you've got access to Windows, have a play with dynaTrace's tools - they give quite a good insight into what is going on during page load (in IE and FF).

Related

Getting CSP reports on www.pagespeed-mod.com

I have switched to using Content-Security-Policy for my website. I'm starting to see reports about the following not being allowed: https://www.pagespeed-mod.com/v1/taas
Does anyone know why the website is trying to load this file? I'm using Google Analytics and Tag Manager, but I don't think I have any page speed mod installed. Maybe this is an extension in the user's browser? Or something that happens when they open developer tools? Another source I could think of is automatic optimization through Cloudflare, which I'm also using.
Extra info: The source of loading this script is https://3001.scriptcdn.net/code/static/1 which doesn't reveal much about who made that.
I had exactly the same issue, and it was preventing me from using the Element Inspector/debugger. It appears to be some Chrome extension you have installed gone rogue; check whether you have an extension called "Auto Refresh Plus" installed, like I did.
I also see reports on https://www.pagespeed-mod.com/v1/taas being blocked with the same source of loading. It seems to happen in short periods on the various resources I have reports from. This indicates that it is related to the user/browser and not related to the site itself.
The same can be seen with translators, extensions, security proxies etc. I have given up trying to attribute the source of anything that is likely not caused by legitimate site content.
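If the noise bothers you, one pragmatic option is to filter known extension/proxy hosts out of your report handler before they hit your logs. A minimal sketch; the host list is just the two domains mentioned above, and you'd extend it as reports come in:

// Standard CSP report bodies look like:
// {"csp-report": {"blocked-uri": "...", "source-file": "...", ...}}
var EXTENSION_NOISE = ['pagespeed-mod.com', 'scriptcdn.net'];

function isExtensionNoise(reportBody) {
  var r = reportBody['csp-report'] || {};
  var blocked = r['blocked-uri'] || '';
  var source = r['source-file'] || '';
  return EXTENSION_NOISE.some(function (host) {
    return blocked.indexOf(host) !== -1 || source.indexOf(host) !== -1;
  });
}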

Why do user agents / browsers lie

I have read several articles on feature detection and that it is more reliable than browser detection because browsers lie.
I couldn't find any information on why they lie. Does anyone know the reason why they would do that?
As far as I understand it, webmasters do browser sniffing to find the capabilities of a browser and limit what they send to it. If a browser lies about its capabilities, it will receive more from the webmaster. You can read more here:
http://farukat.es/journal/2011/02/499-lest-we-forget-or-how-i-learned-whats-so-bad-about-browser-sniffing
http://webaim.org/blog/user-agent-string-history/
The reason is simple:
Because web sites look at the user agent string and make assumptions about the browser, which are then invalid when the browser is updated to a new version.
This has been going on almost since the beginning of the web. Browser vendors don't want their new versions to break the web, so they tweak the UA string to fool the code on existing sites.
Ultimately, if everyone used the UA string responsibly and updated their sites whenever new browser versions come out, then browsers wouldn't need to lie. But you have to admit, that's asking quite a lot.
Feature detection works better because when a new browser version comes out with that feature, the detection will pick it up automatically, without either the browser or the site owner needing to do anything special.
Of course, there are times when feature detection doesn't work perfectly, for example when a feature exists but has bugs in a particular browser. In that case, yes, you may want to do browser detection as a fallback. But in most cases, feature detection is a much better option.
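To make the contrast concrete, here's a small sketch (showMap and showManualLocationForm are hypothetical page functions):

// Feature detection: ask the browser for the capability directly.
if ('geolocation' in navigator) {
  navigator.geolocation.getCurrentPosition(showMap);
} else {
  showManualLocationForm();
}

// The brittle alternative sniffs the UA string, and breaks (or gets lied to)
// whenever a new browser version shows up:
// if (navigator.userAgent.indexOf('Chrome') !== -1) { ... }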
Another, more modern reason is simply to dodge demands to install mobile apps (where product owners control what I can and can't do with the content. No thanks!).
Today Reddit started to block viewing subreddits when it detects a mobile browser in the User-Agent, so I had to change mine just to be able to view content.

Retrieve Google results without using the Custom Search API

Recently I've been working on an idea that requires me to query Google Images and retrieve links for images matching a search term. My most promising candidate for a usable Google Images API was the Google Web Search API, but it looks like it's going out of service as of tomorrow:
https://developers.google.com/web-search/docs/
The API that replaced it is the Google Custom Search API, but it's a little discouraging to use:
Google API Custom Search with Python - Programmatic Search Results
100 searches a day is a very strict limit; that's only about four per hour. I also don't want the hassle of creating some custom search bar that I'm never going to use except through Python.
I decided to turn to parsing the HTML directly from the results page. This presents a problem, though, because nowhere inside the page's HTML is there any direct link to the images, only referrer URLs. This is true of both the JavaScript-enabled and JavaScript-disabled versions of Google Images (so even if Python spoofs JavaScript as enabled, it gets nothing). I'm not sure where to go from here. Could anyone refer me to some obscure, updated library that I've somehow overlooked, or give me some pointers?
You could use Selenium WebDriver to actually execute the JavaScript and click on the images in the thumbnail view. Once an image has been opened, the link is in the DOM and you can scrape it from there. All WebDriver does is open an actual browser and simulate a user. You can even run it headless if you use xvfbwrapper. The downside is that, even then, you will need all the dependencies of the browser you are using installed on your server.
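For illustration, a sketch using the Node selenium-webdriver bindings (the same idea works with the Python bindings; the selectors here are assumptions and will break whenever Google changes its markup):

const { Builder, By, until } = require('selenium-webdriver');

(async function firstImageLinks() {
  const driver = await new Builder().forBrowser('firefox').build();
  try {
    await driver.get('https://www.google.com/search?tbm=isch&q=kittens');
    // Clicking a thumbnail makes the full-size URL appear in the DOM.
    const thumb = await driver.wait(until.elementLocated(By.css('img')), 10000);
    await thumb.click();
    // Scrape whatever absolute image URLs are now present.
    const images = await driver.findElements(By.css('img'));
    for (const img of images) {
      const src = await img.getAttribute('src');
      if (src && src.indexOf('http') === 0) console.log(src);
    }
  } finally {
    await driver.quit();
  }
})();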
However, scraping Google is against their terms of service, and they will make an effort to block you as quickly as possible. So unless you get past the CAPTCHAs (which are linked to sessions), you probably won't be able to make a whole lot of searches before being blocked this way, either.

Is always using the Google Chrome Frame meta tag for standards-compliant pages a good idea?

I was thinking of always adding the meta tag to all my websites.
That will trigger Google Chrome Frame to load for users who already have it installed. I can see the benefits, but are there any concerns or facts that I should know about before I do that?
Is testing in Google Chrome enough, or is testing in Google Chrome Frame explicitly required?
Thanks
Note: please do not mention the currently known "print" and "download" issues. I'm sure those will get fixed soon :)
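For context, the opt-in is a single tag, <meta http-equiv="X-UA-Compatible" content="chrome=1">, and Google also ships a small loader (CFInstall.js) for prompting IE users who don't have the plugin yet. A minimal sketch:

// Assumes CFInstall.min.js has been included, e.g. from
// http://ajax.googleapis.com/ajax/libs/chrome-frame/1/CFInstall.min.js
CFInstall.check({
  mode: 'overlay'  // 'inline' and 'popup' are the other documented modes
});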
The only argument against Chrome Frame that I have seen so far is Microsoft's: "Google Chrome Frame running as a plugin has doubled the attack area for malware and malicious scripts."
Also, you may run into problems with frames. If you have Chrome Frame on your page and someone has that page iframed on their site, you may run into some problems. More info:
http://groups.google.com/group/google-chrome-frame/browse_thread/thread/d5ffe442658bc60e/e6d7a4c1c179c931?lnk=gst&q=iframe
You should only need to test in Chrome Frame for (X)HTML, CSS, and JavaScript, the basic stuff. If you are using AJAX (while trying not to break the back button), or are worried about caching, cookies (accessed via JavaScript), or other potentially browser-specific interactions, I suggest testing on the IE+CF platform, at least until the CF team announces 100% interoperability between CF and IE.
Check out the CF Google group for more issues.
Are there any concerns or facts you should know? Yes: Not everyone has Google Chrome Frame installed.
You are adding a new user agent that you will need to test and debug against, without removing the need to test and debug the user experience for other browsers (notably plain IE by itself).
If you don't make the IE user experience equivalent to the Google Chrome experience, then you are alienating a significant percentage of users. Depending on your website and its expected users, the impact of this may range from undesirable to unacceptable. If you do make the user experience equivalent, then there is no point in adding the meta tag.

Why does google.com look different on BlackBerry & PhoneGap vs. BlackBerry & browser

I'm trying to get PhoneGap up and running on the BlackBerry Storm (9530 simulator). I had been testing my webapp from within BB's built-in browser, and it was looking OK, but then it totally bit once I tried to look at some code from within PhoneGap, even though I was pointing PhoneGap at the same URL (I hadn't yet gotten to the point of running code locally on the device).
I tried a test case on Google and got similar results; see below. I suspect that I'm missing something basic here. I would have expected both images to be nearly identical.
Browser
http://www.eleganttechnologies.com/outside/ImgDeviceBB9530WebGoogle.jpg
Phonegap
http://www.eleganttechnologies.com/outside/ImgDeviceBB9530PgGoogle.jpg
[Update]
To shed some light on what is happening, I ran the browser and the embedded browser (PhoneGap) against the W3C mobile web acid test: http://www.w3.org/2008/06/mobile-test/
I definitely notice differences between the two, but I don't yet know the 'why' or the 'how-to-address'.
Acid via built-in browser
(source: eleganttechnologies.com)
BTW - I ran this earlier today and got a couple more green squares than just now.
Acid via browser embedded into phonegap
http://www.eleganttechnologies.com/outside/ImgDeviceBb9530PgAcid.jpg
Disclaimer: I don't know anything about PhoneGap, but I have a pretty good theory. By default, the embedded browser control on BlackBerry uses an older version of the rendering engine than the BlackBerry browser itself does.
At the BlackBerry developer conference last year, a talk was given about this, and there's an undocumented option to use the newer rendering engine.
The option ID is 17000 (yes, a magic number, which could change; use at your own risk, etc.), and it should be set to true. I'm not sure how you'd pass this option through PhoneGap (I'm not familiar with the toolkit), but using the BlackBerry APIs it's something like:
BrowserContent content;
... // obtain your BrowserContent instance as usual
// 17000 is the undocumented "use the newer rendering engine" option ID
content.getRenderingOptions().setProperty(RenderingOptions.CORE_OPTIONS_GUID, 17000, true);
I don't know the specifics of the browsers you are using, but I do know that most of the big sites will detect your OS + browser combination to decide what HTML to show you.
If Google is seeing a different user agent, you might get a generic mobile version of the HTML instead of the BlackBerry-specific HTML you get in the built-in browser.
If you have access to a web server, try hitting it with both browser setups and see if there is any difference in the log file. That might tell you something interesting.
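A quicker client-side variant of the same check, if you don't have server access, is to render the UA string in each environment and compare:

// Drop this into a page you load in both the BB browser and PhoneGap.
document.write('<pre>' + navigator.userAgent + '</pre>');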
As we can see in your Acid tests, one browser (the built-in one) is reporting correctly as a BlackBerry9530, and the other (PhoneGap) is not presenting the user-agent ["Testing with ."].
In this case, Google is providing you with the default view of its homepage, whereas when you report yourself as a BlackBerry device, you get the BlackBerry-specific rendering.
By the sounds of things, using PhoneGap is removing the default user-agent (most probably because it's not recognising your device). As PhoneGap is open source, the best bet is to get in there, debug it, and find out what happens to the user-agent when the HTTP requests leave the device, then track it back from there.
Maybe one browser has capabilities that another one does not?
Hm. Looking at the screenshot, I would say that the second page is probably missing some resources: images, scripts, and CSS files, which would explain the different look and feel. Knowing how the BlackBerry BrowserField API works, I would guess that the implementation using the BrowserField was not done correctly. Just my guess. In addition, when the browser field is initialized, the caller needs to configure it properly by enabling the appropriate browser features: scripts, styles, and so on. The API is done in a very weird way; I have gotten myself into this trap once. When setting the options, you cannot just create one mask (like CSS | WML | SCRIPT) and make one call. Options are numeric and, I believe, non-overlapping, but you still need to call the API to set each option independently.
Also, the way asynchronous loading of resources works for the BrowserField takes some time to understand.
Just my $0.02.
