I have searched and found so many answers but nothing that fits my requirement. I will try to explain here and see if any of you guys have some tips.
I want to click on a link manually, and from there I want some kind of tool or service to start recording time at my click and stop when the desired page has loaded. This way I can find out the exact user-interface response time.
All the online web-testing services ask for the main URL. In my case the main URL has a gazillion links, and I want to use only one link as a standard sample, which is a dynamic link.
For example:
- I click on my friend's name on Facebook
- From my click to the time the page is loaded, is there a tool that does the stopwatch thing?
The end goal:
I will be stress-testing a server under extensive load, and the client wants to see the response time of simple random pages when the load is at 500, 1000, 2000 and so on.
Please help!
Thank you.
With a simple load-testing tool and the developer tools in the Chrome browser, you can get a clear picture of page load times under load. You can also see how long each request took and the time from start to finish.
Just start the load test and try it from Chrome.
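For example, once a page has finished loading during the test, a snippet like this pasted into the DevTools console reads the browser's own timing data. It only uses the standard Navigation Timing and Resource Timing APIs; treat it as a rough sketch, not part of any particular load-testing tool:

    // Paste into the Chrome DevTools console after the page has loaded.
    var t = window.performance.timing;
    console.log('Total page load:', t.loadEventEnd - t.navigationStart, 'ms');

    // Per-request durations, start to finish (Resource Timing API).
    window.performance.getEntriesByType('resource').forEach(function (entry) {
        console.log(Math.round(entry.duration) + ' ms', entry.name);
    });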
You can also use an automated latency monitor like SmokePing.
You can use HttpWatch or YSlow to find the client-side page load times.
HttpWatch and Fiddler helped. It didn't go exactly as I had thought, but it was pretty close and satisfactory. Thanks, guys.
You could try WebPageTest (WPT). It has both public and private instances and serves exactly what you want to do; it also supports scripted steps executed via the browser's JS. The nicest thing I find in WPT is that you can use the public instance to measure the actual user experience from world locations other than yours, or you can set up a private instance.
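For the click-and-measure scenario described in the question, a WPT script might look roughly like this (commands and parameters are tab-separated, and the clickAndWait selector here is a made-up example; check the WPT scripting documentation for the exact commands your instance supports):

    logData	0
    navigate	http://www.example.com/
    logData	1
    clickAndWait	innerText=Friend Name

Only the click-triggered navigation after logData 1 ends up in the measured results, which gives you the click-to-loaded time for that one link.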
I'm new to Chrome extension development, and I'm struggling a bit with the architecture to put in place.
I would like to develop an extension (browser_action) that, when the button is clicked, opens a window populated with information from the web traffic.
I figured out I could use the WebRequest API to get info about the traffic.
I could create a popup window, but it's displayed only when I click the extension button, and it hides as soon as I click somewhere else.
I tried creating a background window, but it does not show up.
I'd be very grateful if anyone could help me with the initial setup of my application.
Thanks in advance
You need both.
Take a look at the Architecture Overview, or maybe this question.
The lifetime of the popup is indeed equal to how long it stays on screen. It's the UI part, but putting logic there is usually bad.
A background page is permanently there but invisible. It's typically the "brain" of an extension, taking care of heavy lifting and routing messages to other parts.
In short:
You need a background script to collect webRequest information for you in some format.
You need a popup page to show it. Keep in mind it's not guaranteed to be present at a given time and can close at any time.
It's probably best to use Messaging to request the information from the background page. If you need real-time updates, you can use long-lived connections.
In your case you can also tightly couple the two and call chrome.runtime.getBackgroundPage() to directly reference stuff in it.
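A rough sketch, assuming a Manifest V2 layout with a persistent background page and the "webRequest" and "<all_urls>" permissions declared in manifest.json; names like requestLog and "getTraffic" are placeholders, not part of any Chrome API:

    // background.js - lives for the whole session and records traffic
    var requestLog = [];

    chrome.webRequest.onCompleted.addListener(
        function (details) {
            requestLog.push({
                url: details.url,
                statusCode: details.statusCode,
                timeStamp: details.timeStamp
            });
        },
        { urls: ["<all_urls>"] }
    );

    // Hand the collected data to whoever asks for it (e.g. the popup)
    chrome.runtime.onMessage.addListener(function (message, sender, sendResponse) {
        if (message.type === "getTraffic") {
            sendResponse(requestLog);
        }
    });

    // popup.js - runs only while the popup is open
    chrome.runtime.sendMessage({ type: "getTraffic" }, function (entries) {
        var list = document.getElementById("traffic");  // a <ul> in popup.html
        entries.forEach(function (entry) {
            var item = document.createElement("li");
            item.textContent = entry.url + " (" + entry.statusCode + ")";
            list.appendChild(item);
        });
    });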
I'm trying the hello-world hosted app but am getting this error on deployment:
Google Chrome could not load the webpage because
myPortalapps-12812b1f934c6c.myPortal.apps.com took too long to respond
I can ping myPortalapps.myPortal.apps.com but not myPortalapps-12812b1f934c6c.myPortal.apps.com
I also had some similar problems with SharePoint web apps; this forum post helped me out a lot:
When troubleshooting performance issues where more than one person/computer is impacted, the first place I like to start is by running a sniffer like Fiddler: http://www.fiddler2.com/fiddler2/version.asp

Fiddler will let you know exactly how long it takes to load the page, and break down each and every resource that is also loaded in order to render the page. This is also a great tool for determining what is and what is not being cached.

I take the output of this and see if there is anything being loaded that I'm not expecting. Every once in a while I'll see where a user might reference an image housed on an external site or server. This can have serious consequences for load times.

I also look at the actual SharePoint page to see if there are any hidden web parts loading list data. Most users accidentally click "Close" and not "Delete", so those web parts or list views are still there. In some cases there could be significant data being loaded and just not displayed.

Likewise, I'll also take a look to see whether any audiences are being used, since audiences can be used to show/hide content.
I have recently been messing around with Node.js, loading websites and saving screenshots. To be more specific, I have used PhantomJS to load the website and save a screenshot. I have also used CasperJS and ZombieJS, but none of these tools really allow you to mess around with the website's resources before loading. Is it even possible?
To be clear, I would like to load any website, let's say stackoverflow.com, calculate the load time, and save a screenshot. That's easy, but on the second run I want to load the same website, remove the jQuery resource for example, and then calculate the load time of that.
It looks like PhantomJS and CasperJS have callbacks like onResourceRequested and onResourceReceived, but there is no method to abort a request. Is it possible? I would not want to proxy the requests through some PHP script that does this, but that is the alternative.
So, it looks like this is not possible, but it is a feature on the PhantomJS roadmap: http://code.google.com/p/phantomjs/issues/detail?id=230
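For reference, the part that is already doable, timing the load, watching requests via the callbacks mentioned above, and saving a screenshot, might look like this sketch (plain PhantomJS; the file names are arbitrary):

    // load-and-shoot.js - run with: phantomjs load-and-shoot.js
    var page = require('webpage').create();
    var start = Date.now();

    page.onResourceRequested = function (requestData) {
        // Every request is visible here, but there is no way to abort it.
        console.log('Requested: ' + requestData.url);
    };

    page.open('http://stackoverflow.com/', function (status) {
        console.log('Status: ' + status);
        console.log('Load time: ' + (Date.now() - start) + ' ms');
        page.render('screenshot.png');
        phantom.exit();
    });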
I've been doing a lot of research on this, but I figure I could crowd-source what I have and see if anyone can offer additions. I want to be able to determine page load time using JS, not just page load as a single number, but as a breakdown.
First what I found was a new W3C Specification (Draft):
https://dvcs.w3.org/hg/webperf/raw-file/tip/specs/NavigationTiming/Overview.html
This would be perfect; however, it's limited to Chrome and IE, and it's still inconsistent between the browsers.
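In the browsers that do implement the draft, the breakdown can be read straight from window.performance.timing; a rough sketch of the phases:

    // Run after the load event; all timing values are epoch milliseconds.
    var t = window.performance.timing;
    var breakdown = {
        dns:      t.domainLookupEnd - t.domainLookupStart,
        connect:  t.connectEnd - t.connectStart,
        ttfb:     t.responseStart - t.requestStart,
        download: t.responseEnd - t.responseStart,
        domReady: t.domContentLoadedEventEnd - t.responseEnd,
        total:    t.loadEventEnd - t.navigationStart
    };
    console.log(breakdown);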
But now I have found Real User Monitoring (RUM) by New Relic, which is based on a JavaScript library by Steve Souders. From what I can tell, they can determine the same data that I saw in the new W3C draft.
It seems that they are using HTTP Archive: http://code.google.com/p/httparchive/
However, I cannot seem to find any information on page performance or load, so I wasn't sure if I was looking at the correct library.
Of course, if there is anything else out there that could provide more information on page profiling, I welcome it.
Have a look at Boomerang.js (https://github.com/yahoo/boomerang) by Yahoo.
It should allow you to roll your own RUM and degrades gracefully, so you should still get some information from browsers without navigation.timing.
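A minimal Boomerang setup is just an init call after the script is included; the beacon URL below is a placeholder for wherever you choose to collect the timing beacons:

    // After including boomerang.js on the page:
    BOOMR.init({
        beacon_url: "http://yoursite.example/beacon"  // placeholder collector endpoint
    });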
Also, if you've got access to Windows, have a play with dynaTrace's tools; they give quite a good insight into what is going on during page load (in IE and FF).
http://developers.facebook.com/tools/lint?url=http%3A%2F%2Fnetworks.co.id%2Fblog.php%3Fid%3D2
The problem is that Facebook does read my URL correctly, like this.
Everything goes fine until I scroll down and check the Like button on my page.
Does Facebook cache them, or is there a better explanation?
Thanks for taking the time to look into this. :D
It's absolutely a matter of caching; just try adding a dummy parameter to your URL to fool Facebook and you'll see. :-)
For you and @Michael Irigoyen: it's always good practice to do this whenever you feel that FB is showing something you didn't expect, or if by mistake (or intentionally) you clicked the share button and the page was not 100% ready to publish. And trust me, this happens A LOT! ;-)
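For example, with the blog URL from the question, a throwaway query-string parameter (the name is arbitrary) makes Facebook treat it as a new object and scrape it fresh:

    http://networks.co.id/blog.php?id=2&fbrefresh=1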
Actually, Facebook scrapes pages and updates the cache every 24 hours: https://developers.facebook.com/docs/reference/plugins/like/#scraperinfo.
In my experience, I've found that Facebook does indeed cache all the <og:*> tags the first time a page is shared on Facebook. I ran into a similar issue when creating a "Share to Facebook" link on our website. I was trying to tweak the title and the description to be exactly how I wanted them, and I'd always have to switch the knowledge base article I was working with to one I hadn't shared previously in order to see the changes I had made.
That being said, I have no clue for how long Facebook keeps that stuff in cache. I didn't do any sort of testing on that.
I don't know if this is a cache-related problem, but the reason could lie in the fact that you are limited in changing the og:title: you can only do it before you reach 50 likes, so I think Facebook keeps a history in case you try to change the title after the limit is reached.
Of course, this is only a supposition.