I am trying to get data from a website into a table in Excel. I am just using the regular button (Get Data > From Web) in Excel, with no code. It works fine for two websites, but for a different website I am getting the following error:
Details: "The remote server returned an HTTP status code '404' when trying to access 'https://smarkets.com/listing/sport/football/premier-league-2017-2018'."
The webpage certainly exists - I am guessing this is a deliberate strategy by the website to prevent data harvesting.
Does anyone have any idea how I can get round it, either through the Get Data route or a VBA approach?
Thanks
JL
I inspected traffic with Fiddler and Postman to no avail, and in the end contacted the team directly for an answer.
The short answer, from their API team, is no.
Eventually our API, which may be suitable for your needs, will be
available to everyone.
The API is in a closed alpha stage, as I mentioned in the comments. More information here: API feed.
API/Odds Feed
We're currently working on a new streaming API that is faster and more scalable. The API is currently in a closed alpha stage. Unfortunately there is no timeframe on when we'll be able to release it to the public.
We will prioritise market makers when issuing streaming API accounts. If you would like to gain alpha access to this service, you can apply by outlining your proposal here.
You can gain access to their XML feed at odds.smarkets.com/oddsfeed.xml.
The feed is updated every few seconds but the information is delayed
by 30 seconds.
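If the regular Get Data route keeps returning 404, it can help to fetch the feed programmatically and see what actually comes back before mapping it into a table. A minimal Python sketch, assuming the feed URL above is publicly reachable and making no assumptions about its element names:

    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "https://odds.smarkets.com/oddsfeed.xml"  # feed mentioned above

    # Some sites reject requests without a browser-like User-Agent, so set one.
    req = urllib.request.Request(FEED_URL, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req, timeout=30) as resp:
        raw = resp.read()

    root = ET.fromstring(raw)
    print("root element:", root.tag)
    # Print a few child elements to see how the feed is structured
    # before deciding how to lay it out in Excel.
    for child in list(root)[:5]:
        print(child.tag, child.attrib)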
I am trying to automate some simple updating of a Google spreadsheet, and I'm using the gspread library to do so. One task that is critical and not currently supported by gspread is the ability to add comments to a specific cell (there's an open issue for this, and even a gist solution, but I was getting a 404 error when trying to use it).
I know that the Google Drive API (v3) supports adding comments as described here, but I'm having issues with authenticating and could use some help.
What I have/know:
I have already set up OAuth 2.0 and registered for the API through Google, and I have the client_secret.json in my directory, but my knowledge of web requests and responses is limited, so going through the Drive API documentation hardly makes sense to me. I know that in order to create the comments I will have to make use of anchors and specify the cell location using column/row numbers.
What I'm stuck on:
When using the Google API Explorer, I'm getting a 400 error with the message: The 'fields' parameter is required for this method. How can I make the POST request using my authentication? I think from there I'd be able to actually add the comments myself.
I'm getting a 400 error with the message: The 'fields' parameter is required for this method
The error is asking you to specify which properties you want returned (these properties are listed in the Drive API files resource).
You can just pass '*' to indicate you want the complete response returned. That's the quick fix.
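In the Python client that quick fix translates to passing fields='*' on the comments.create call. A minimal sketch, assuming google-api-python-client plus google-auth-oauthlib and a placeholder spreadsheet file ID, reusing the client_secret.json from the question:

    from google_auth_oauthlib.flow import InstalledAppFlow
    from googleapiclient.discovery import build

    SCOPES = ["https://www.googleapis.com/auth/drive"]
    FILE_ID = "your-spreadsheet-file-id"  # placeholder, not a real ID

    # One-off OAuth flow using the client_secret.json mentioned in the question.
    flow = InstalledAppFlow.from_client_secrets_file("client_secret.json", SCOPES)
    creds = flow.run_local_server(port=0)

    drive = build("drive", "v3", credentials=creds)

    # fields="*" asks for the full comment resource back, which is the
    # quick fix for the 400 "The 'fields' parameter is required" error.
    comment = drive.comments().create(
        fileId=FILE_ID,
        body={"content": "Automated comment"},
        fields="*",
    ).execute()
    print(comment["id"])

The Comment resource also has an anchor field for tying a comment to a region of the file; its exact format for a Sheets cell is not clearly documented, so expect some experimentation there.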
How do I correctly handle the login/authentication scenario for an Azure web app in my VS2015 web performance test?
I created an XML file as a data source for the WAAD username and password. I bind the username and password to the form post parameters login and passwd respectively, on the request to https://login.microsoftonline.com/xxxx/login.
But when I run the test, the Web Browser tab shows this error:
We can't sign you in
Your browser is currently set to block JavaScript. You need to allow
JavaScript to use this service.
To learn how to allow JavaScript or to find out whether your browser
supports JavaScript, check the online help in your web browser.
I also get a number of errors like this:
Validation: The value of the ExpectedResponseUrl property xxxx.azurewebsites.net/xxxx/docs/xxxx.aspx does not equal the actual response URL login.microsoftonline.com/xxxx/wsfed. QueryString parameters were ignored.
Any idea how I can successfully log in to the Azure web app via the web performance test?
There are several methods of login and authentication that can be used. Just binding values to form post parameters may not be sufficient or correct. You will find the login form has hidden session identities that must be passed as well as the login data. I find that recording a test two times using as nearly as possible the same inputs and doing the same activities helps. These two tests can then be compared to find the dynamic data that needs to be handled.
In a comment the questioner added: "I noticed these parameters, n1-43, are different but I have no idea what they represent. How do I handle them?" I can have no idea what they represent as I do not know the website you are testing. You could ask the website developers. Or, better, treat them as dynamic data. Find where the values come from, save them into context variables, and use them as needed. This is basic web test development. Here and here are two good articles on what to do.
The message about JavaScript not being supported can be ignored. Visual Studio web tests do not support JavaScript or any other "active" parts of a web page, they only support the html part. Your job as a tester is to simulate what the JavaScript does for the specific user journeys you are testing. That simulation is generally just filling in the correct values (via context parameters) in the recorded requests.
Unexpected response URLs can be due to earlier failures, such as the login not working. I suggest not worrying about them until all of the other test problems are solved. Then, if you need help, ask another new question.
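To make the correlation idea concrete outside Visual Studio, here is a rough Python sketch against a hypothetical login page (the URL and the hidden field name are made up for illustration); a web performance test does the same thing with extraction rules and context parameters:

    import re
    import requests

    LOGIN_URL = "https://example.com/login"  # hypothetical URL for illustration

    session = requests.Session()

    # 1. GET the login page and extract a hidden field the server expects back.
    page = session.get(LOGIN_URL).text
    match = re.search(r'name="__RequestVerificationToken" value="([^"]+)"', page)
    token = match.group(1) if match else ""

    # 2. POST the credentials together with the extracted dynamic value; this is
    #    what saving the value into a context parameter achieves in the web test.
    resp = session.post(LOGIN_URL, data={
        "login": "user@example.com",
        "passwd": "secret",
        "__RequestVerificationToken": token,
    })
    print(resp.status_code, resp.url)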
I'm trying to set up a Facebook share on https://donate.mozilla.org/en-US/thunderbird/share/
The og:url points to just /thunderbird, which is the URL I would want shared. As best I can tell, the og tags are all there.
When I try to update the data on https://developers.facebook.com/tools/debug/og/object/ and fetch new scrape information, I get one of two errors. Initially it takes a long time and then responds with Curl Error : OPERATION_TIMEOUTED Operation timed out after 10000 milliseconds with {some number less than 10000} bytes received; subsequent fetch attempts respond with Curl Error : PARTIAL_FILE transfer closed with 17071 bytes remaining to read.
We're using AWS CloudFront and Node.js with hapi.js.
It responds with a 206 Partial Content, which should be fine. The og tags are all at the beginning of the file.
I found this: docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/RangeGETs.html
There it says a range request is used to get the file in chunks, not to get just part of the file and give up. So maybe that's causing unexpected behavior. Maybe CloudFront is sending it back in chunks, and Facebook stops listening after the first response? I don't know; I'm just trying to find a theory that fits the facts.
We already have a working share for donate.mozilla.org/en-US/share/, but that might be old data from when we were not using hapi.js and were instead using Express, which I don't think supported range requests and would instead return a 200.
I'm mostly a front-end dev, so a lot of this is out of my comfort zone, but I have already learned a lot :)
Edit: I also want to point out that we use Heroku for hosting, and if I set up a test with just Heroku and without CloudFront (donate.mofostaging.net/en-US/thunderbird/), it fetches the tags successfully. So I suspect it's a bug in how Facebook and hapi.js interact with CloudFront.
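One way to compare how the two hosts answer a scraper-style request is to send an explicit Range header and look at the status code and response headers, roughly as the Facebook crawler would. A small diagnostic sketch using the two URLs mentioned above:

    import requests

    URLS = [
        "https://donate.mozilla.org/en-US/thunderbird/share/",  # behind CloudFront
        "https://donate.mofostaging.net/en-US/thunderbird/",    # Heroku only
    ]

    for url in URLS:
        # Ask for the first 32 KB, similar to a scraper that only needs the <head>.
        resp = requests.get(url, headers={"Range": "bytes=0-32767"}, timeout=10)
        print(url)
        print("  status:", resp.status_code)  # 200 = range ignored, 206 = partial content
        print("  content-length:", resp.headers.get("Content-Length"))
        print("  accept-ranges:", resp.headers.get("Accept-Ranges"))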
I've been hacking my way through the internet for the past half hour looking for an answer to this issue. I want to build an online system for school testing running on Node.js. The app's front end would request questions (and their corresponding answers) from the back end, and this information would be delivered to the front end. The whole point is that the app should calculate test scores instantly and display them.
Now, in the browser's network tab, we can see server responses, so if I were to build an app that sends down both questions and answers, any student could just peek at the answers in the dev console and get a perfect score.
One way would have been to deliver the questions alone, send the responses back to the server for scoring, and then return the score, but that doesn't feel "real-time".
The request information is fine; the problem is the response being readable in the browser's dev console.
So, how can I safely transport this information to the front end while hiding it from that view? Or does anyone have ideas on how I can implement this real-time concept without giving up security?
Thanks.
You can do something like serializing it to binary, and when they send their answer, deserialize it back and check it against the correct answer. That way, even if they look at the network tab, they will only see binary data they cannot easily read.
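A minimal sketch of that idea in Python (the question's stack is Node, so treat this purely as an illustration of the shape): the server ships each question with its answer packed into an opaque blob, then grades by unpacking the blob the client sends back. Note that this only obscures the answer; a determined student could still decode it.

    import base64
    import json
    import zlib

    def pack(question: str, answer: str) -> str:
        """Serialize a question/answer pair into an opaque base64 blob."""
        raw = json.dumps({"q": question, "a": answer}).encode("utf-8")
        return base64.b64encode(zlib.compress(raw)).decode("ascii")

    def grade(blob: str, submitted_answer: str) -> bool:
        """Deserialize the blob the client sent back and check the answer."""
        raw = zlib.decompress(base64.b64decode(blob))
        return json.loads(raw)["a"] == submitted_answer

    # Server side: send {"question": ..., "blob": pack(...)} to the front end.
    blob = pack("2 + 2 = ?", "4")
    # Client side: POST back the blob along with the student's answer.
    print(grade(blob, "4"))   # True
    print(grade(blob, "5"))   # False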
I use the Google Weather API on my web site, and today I got an error because the API link doesn't return any data.
When I check the link directly, I get an Error 403.
Here is the link.
Can anyone please tell me a solution for this and provide me another link for the API?
Every now and then the API stops working for short periods of time; in the last few days a 403 has been thrown more often. For my site, last night it happened 13 times. But the site immediately tries again, and on the second or third attempt the data loads without problems. As the API is unofficial, I'm not sure what's causing the 403.
Make sure you cache the data, as the API will block your IP temporarily when you make too many requests. In my case, I cache for 20 minutes, and if no data can be retrieved, the site will not try more than 10 times to reload the API. Once I forgot to turn caching back on after debugging, and as my site made many hundreds of requests (with every visitor), the IP was blocked within an hour. If I remember correctly, the error was not a 403. Fortunately, the block lasted less than half a day.
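A rough sketch of that cache-and-retry approach in Python; the feed URL is a placeholder since the actual API link isn't included here, and the 20-minute TTL and 10-try limit mirror the numbers above:

    import time
    import requests

    FEED_URL = "https://example.com/weather-feed"  # placeholder for the API link
    CACHE_TTL = 20 * 60   # 20 minutes, as described above
    MAX_TRIES = 10

    _cache = {"data": None, "fetched_at": 0.0}

    def get_weather():
        # Serve from the cache while it is still fresh.
        if _cache["data"] is not None and time.time() - _cache["fetched_at"] < CACHE_TTL:
            return _cache["data"]

        # Otherwise retry a limited number of times, then give up.
        for _ in range(MAX_TRIES):
            resp = requests.get(FEED_URL, timeout=5)
            if resp.status_code == 200:
                _cache["data"] = resp.text
                _cache["fetched_at"] = time.time()
                return resp.text
        return _cache["data"]  # return stale data (or None) rather than hammering the API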
The Google Weather API that you are using is currently returning an intermittent 403 Forbidden response. See Google Weather API 403 Error.
The reason for the intermittent 403 response is not known, but it has been a problem since the 7th of August 2012.