I have been working on automating console-error testing for a website. Checking for console errors manually with the browser dev tools on every page is very time consuming; for example, a site with 200 navigation links is far too much manual effort.
Using WebDriver is a potential solution, but not in our case, given our requirements and resources.
I was hoping to pull the console errors into the website's log and then configure a notification on top of that, but I'm unable to get the console errors into the log.
Is there any way to pull these errors into a log?
Or maybe a better solution for automating it altogether?
You can use the code below to get the browser logs (note: Chrome with Selenium in Java). You may also need to enable browser logging via LoggingPreferences when creating the driver.
import java.util.Date;
import org.openqa.selenium.logging.LogEntries;
import org.openqa.selenium.logging.LogEntry;
import org.openqa.selenium.logging.LogType;

LogEntries logEntries = driver.manage().logs().get(LogType.BROWSER);
for (LogEntry entry : logEntries) {
    System.out.println(new Date(entry.getTimestamp()) + " " + entry.getLevel() + " " + entry.getMessage());
}
There's no way to have a browser visit 200+ links that is somehow faster than a script driving a headless browser. As posed, the question makes little sense: it isn't automation unless you are actually automating something.
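For what it's worth, here is a minimal sketch of that headless approach using Puppeteer; the URL list, wait strategy, and output format are assumptions, and you would feed in your own navigation links:

import puppeteer from "puppeteer";

(async () => {
  // Placeholder list of pages to check; in practice, list or crawl your 200 navigation links.
  const urls = ["https://example.com/", "https://example.com/about"];

  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  // Collect console errors and uncaught exceptions, keyed by page URL.
  const errors: Record<string, string[]> = {};
  const record = (message: string) => {
    (errors[page.url()] = errors[page.url()] || []).push(message);
  };
  page.on("console", (msg) => {
    if (msg.type() === "error") record(msg.text());
  });
  page.on("pageerror", (err) => record(String(err)));

  for (const url of urls) {
    await page.goto(url, { waitUntil: "networkidle2" });
  }

  console.log(JSON.stringify(errors, null, 2));
  await browser.close();
})();

From here you could diff the output against a baseline or feed it into whatever notification channel you already use.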
Another approach would be to use a JavaScript error-reporting tool like Sentry.
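As a rough sketch of that route (the DSN below is a placeholder for your own project's DSN), initializing the Sentry browser SDK on each page is enough to have unhandled errors reported automatically, and alert notifications can then be configured inside Sentry:

import * as Sentry from "@sentry/browser";

// Placeholder DSN; use the one from your Sentry project settings.
Sentry.init({ dsn: "https://examplePublicKey@o0.ingest.sentry.io/0" });

// Unhandled exceptions are captured automatically; you can also report things explicitly:
Sentry.captureMessage("Something worth investigating");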
I know this sounds a lot like other issues here on Stack Overflow, but bear with me; it's not (as far as I can tell).
I have a scraping app (using Puppeteer) that I use to scrape a public Amazon page.
It works great locally: I've debugged it by setting headless: false, watched it run, and it gives me back the expected result.
The same app fails on Heroku, but the problem is not with launching or using Puppeteer (I have several indications of that); it is probably that I'm being identified as a robot.
The error returned is:
waiting for selector `#link_continue input` failed: timeout 30000ms exceeded
It's important to note that this is a generic Puppeteer error indicating that the selector I'm waiting for simply never appears on the page.
I know it should: it's a selector on the first page I navigate to, and it works locally (as mentioned before); the selector always exists if the page loads.
I had exactly the same error when I tried to run the scraper on my local machine before setting a User-Agent header. Back then I could use headless: false, so I saw with my own eyes that I was being rejected for robot-like operations on their page and redirected to an error page that did not contain this selector.
For this reason I suspect it recognizes me as a robot, but I don't know how to debug it, and it's driving me crazy.
Now, if you'd like to reproduce the problem:
You need to wait for the mentioned selector on this site:
https://sellercentral.amazon.com/hz/fba/profitabilitycalculator/index
and then deploy it to Heroku and try to run it two or three times.
Two questions:
How can I proceed from here? I'm 99.9% sure it's the same issue I had previously, but I can't verify it... any suggestions?
Assuming this is actually the problem, can anyone suggest an easy-to-use/deploy hosting provider that also allows easy VPN configuration? I don't think Heroku lets you do that unless you have an enterprise account.
Thanks
I would like to point out that Amazon is very good at blocking IPs. It is very likely that they have already blacklisted the IP ranges of cloud providers like Heroku, Azure, etc. I have previously observed services like Cloudflare and Akamai blacklisting these well-known IPs.
In this scenario, rotating proxies could help you avoid getting blocked.
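As a hedged sketch of that (the proxy address, credentials, and User-Agent string below are placeholders, not recommendations), you can route Puppeteer's traffic through a rotating proxy and set a realistic User-Agent before navigating:

import puppeteer from "puppeteer";

(async () => {
  // --proxy-server is a standard Chromium flag; the address is a placeholder for your proxy provider.
  const browser = await puppeteer.launch({
    headless: true,
    args: ["--proxy-server=http://rotating-proxy.example.com:8080"],
  });
  const page = await browser.newPage();

  // If the proxy requires authentication (placeholder credentials).
  await page.authenticate({ username: "proxyUser", password: "proxyPass" });

  // A realistic desktop User-Agent, since the default headless one is easy to flag.
  await page.setUserAgent(
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0 Safari/537.36"
  );

  await page.goto("https://sellercentral.amazon.com/hz/fba/profitabilitycalculator/index");
  await page.waitForSelector("#link_continue input", { timeout: 30000 });

  await browser.close();
})();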
How do I correctly handle the login/authentication scenario for an Azure web app in my VS2015 web performance test?
I created an XML file as a data source for the WAAD username and password. I bind the username and password to the Form Post Parameters: login and passwd respectively at request: https://login.microsoftonline.com/xxxx/login
But when I run the test, the Web Browser tab shows this error:
We can't sign you in
Your browser is currently set to block JavaScript. You need to allow
JavaScript to use this service.
To learn how to allow JavaScript or to find out whether your browser
supports JavaScript, check the online help in your web browser.
I also get a number of errors like this:
The value of the ExpectedResponseUrl property
Validation xxxx.azurewebsites.net/xxxx/docs/xxxx.aspx does
not equal the actual response URL
login.microsoftonline.com/xxxx/wsfed. QueryString
parameters were ignored.
Any idea how I can successfully log in to the Azure web app via the web performance test?
There are several methods of login and authentication that can be used, and just binding values to form post parameters may not be sufficient or correct. You will find the login form has hidden session identities that must be passed along with the login data. I find it helps to record the test twice, using inputs and activities that are as nearly identical as possible. The two recordings can then be compared to find the dynamic data that needs to be handled.
In a comment the questioner added "I noticed these parameters, n1-43 are different but I have no idea what they represent. How do I handle them?". I can have no idea what they represent as I do not know the website you are testing. You could ask the website developers. Or, better, treat them as dynamic data. Find where the values come from, save them into context variables and use them as needed. This is basic web test development. Here and here are two good articles on what to do.
The message about JavaScript not being supported can be ignored. Visual Studio web tests do not support JavaScript or any other "active" parts of a web page; they only handle the HTML part. Your job as a tester is to simulate what the JavaScript does for the specific user journeys you are testing. That simulation is generally just filling in the correct values (via context parameters) in the recorded requests.
Unexpected response URLs can be due to earlier failures, such as the login not working. I suggest not worrying about them until all of the other test problems are solved. Then, if you still need help, ask another new question.
Is there a way, through browser tools like Firebug or another browser plugin, to do traces or log to the console from a CFC file?
I'm completely new to CF so sorry if this seems like a stupid question.
If you want logs to be visible in the browser, ColdFire is your best choice. With it, you can see all of ColdFusion's extended debugging information even on a production site. Unless you have the proper authentication via ColdFire, the server won't spit out the extended info.
As @gillesc recommended, you can use LogBox, which is extracted from the ColdBox framework. The ColdBox framework has a debugging mode that allows you to trace messages to the bottom of the page or to a separate window. This is useful even on production sites, since you can observe the tracer messages from other users.
Finally, you can simply print to the console using writeDump(var="my log message", output="console") for quick debugging, or use the <cflog> tag to save log messages to a named log file which you can monitor using tail. For a dead simple solution, you can save the log file to the root of your site and simply press F5 to see the new log entries; however, I do not recommend this practice (unless you are saving credit card information and share that file with me :).
Hope this reply helps.
Aaron
There is a cftrace tag that will allow you to log output to the console, among other spots in your application and development environment.
<cftrace category="init data" type="Information" var="myvartooutput" />
Calling this tag will output the relevant content in a few places:
The console in ColdFusion Builder, if you are using that IDE
In Dreamweaver, the Adobe docs mention a server debug tab/view (I don't use DW, so am not sure)
At the end of the request in the debug output
In cftrace.log, which is in your log directory (/COLDFUSION/INSTALL/DIR/logs/cftrace.log)
You can also use the tag cflog to write data to one of the standard log files or you may choose to have it write the desired data to a custom log file.
<cflog file="customlog" application="no" text="Output #somevar#!" />
If "customlog" does not exist, CF will create it for you (in the same location noted above).
Hope that helps!
EDIT: I offered this more as an alternative to using Firebug... if you want the logs/traces but are not necessarily wedded to a browser plug-in.
If you've got CF Builder you can actually set up a debugger, but it's terribly slow. Here's the documentation on that: http://help.adobe.com/en_US/ColdFusionBuilder/Using/WS0ef8c004658c1089-31c11ef1121cdfd6aa0-7fff.html
There's also ColdFire, which is a Firebug add-on. Never used it before but I hear good things: https://github.com/nmische/ColdFire/
Try the ColdFire extension for Firebug:
http://coldfire.riaforge.org/
How can I perform a POST from a Chrome extension? Let's say I want to send the current tab's page title to a web page.
You can do POST XHRs from Chrome extensions to any URL, as long as you have host permissions defined in your manifest. See these docs.
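For example, here is a hedged sketch from a background script (the endpoint https://example.com/collect is a placeholder, and a Manifest V2 style permissions entry is assumed):

// manifest.json (excerpt):
//   "permissions": ["tabs", "https://example.com/"]

// background script: POST the active tab's title to a placeholder endpoint.
chrome.tabs.query({ active: true, currentWindow: true }, (tabs) => {
  const title = tabs[0]?.title ?? "";
  fetch("https://example.com/collect", {   // a plain XMLHttpRequest works just as well
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ title }),
  });
});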
In a Chrome extension, the best way to do what I think you want is via a content script (see the documentation). A word of warning, however: pinging your server with a POST request every time someone with your extension installed opens a web page is going to be extremely heavy going on your servers, especially if you have a lot of installs. A possible solution is to use the content script to keep a tally of the sites a user visits and save this data in an HTML5 database (which Chrome supports), then have background.html send the data at given intervals in bulk with an AJAX request; this will significantly cut down the number of times your server is pinged. A rough sketch of that batching pattern follows.
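Hedged sketch of the batching idea, simplified to an in-memory buffer in the background page rather than a local database; the endpoint and the 15-minute interval are assumptions:

// content script: report the current page to the background page.
chrome.runtime.sendMessage({ type: "pageVisit", title: document.title, url: location.href });

// background page: buffer the visits and flush them in bulk.
const buffer: Array<{ title: string; url: string }> = [];

chrome.runtime.onMessage.addListener((msg) => {
  if (msg.type === "pageVisit") {
    buffer.push({ title: msg.title, url: msg.url });
  }
});

// Flush every 15 minutes to a placeholder endpoint.
setInterval(() => {
  if (buffer.length === 0) return;
  fetch("https://example.com/collect-batch", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buffer.splice(0, buffer.length)),
  });
}, 15 * 60 * 1000);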
I'm trying to automate some data scraping from a website. However, because the user has to go through a login screen, a wget cron job won't work, and because I need to make an HTTPS request, a simple Perl script won't work either. I've tried looking at the "DejaClick" add-on for Firefox to simply replay a series of browser events (logging into the website, navigating to where the interesting data is, downloading the page, etc.), but the add-on's developers for some reason didn't include saving pages as a feature.
Is there any quick way of accomplishing what I'm trying to do here?
A while back I used mechanize (wwwsearch.sourceforge.net/mechanize) and found it very helpful. It is built on urllib2, and from what I read now it should also work with HTTPS requests, so my comment above could hopefully prove wrong.
You can record your actions with the IRobotSoft web scraper. See the demo here: http://irobotsoft.com/help/
Then use the saveFile(filename, TargetPage) function to save the target page.