I'm going through the documentation for the Fusion Tables API and I am having trouble executing HTTP requests.
For example, I copied and pasted this request
GET https://www.googleapis.com/fusiontables/v2/tables/1e7y6mtqv891111111111_aaaaaaaaa_CvWhg9gc
into my URL bar. What I want is to see the table with
table ID = "1e7y6mtqv891111111111_aaaaaaaaa_CvWhg9gc"
Instead, Google just does a search on the text above.
Any thoughts?
Thanks in advance
You cannot just copy and paste that, with "GET" in front, into your address bar.
If you would just like to see the result of the GET operation, you can paste
https://www.googleapis.com/fusiontables/v2/tables/1e7y6mtqv891111111111_aaaaaaaaa_CvWhg9gc
into your URL bar, as your browser will do the "GET" for you.
For making a POST request, however, you may need an extension or a tool like Postman, but for a simple "GET" request this will do.
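If you want to make the same request from code instead of the address bar, a minimal sketch with node-fetch might look like the following (this just reuses the table ID from the question; note the Fusion Tables API will likely also expect an API key or OAuth token, which is not handled here):

const fetch = require('node-fetch');

const url = 'https://www.googleapis.com/fusiontables/v2/tables/1e7y6mtqv891111111111_aaaaaaaaa_CvWhg9gc';

fetch(url)                                // this is the GET the browser does for you when you type a URL
  .then(res => res.json())                // parse the JSON body of the response
  .then(table => console.log(table))      // the table resource, if the request is authorized
  .catch(err => console.error('Request failed:', err));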
I would suggest reading up on HTTP GET/POST operations:
http://www.w3schools.com/tags/ref_httpmethods.asp
We are developing a chatbot to handle internal and external processes for a local authority. We are trying to display contact information for a particular service from our API endpoint. The HTTP request is successful and delivers, in part, exactly what we want, but there is still some unnecessary noise we can't exclude.
We specifically just want the text out of the response ("Response").
Logically, we thought all we needed to do was drill down into ${dialog.api_response.content.Response}, but that fails the HTTP request, while ${x.content} returns successfully but includes Tags, response and the fields within [1].
Is there something simple we've missed in Composer to access what we're after, or do we need to change the way our endpoint is responding [2]? Unfortunately the MS documentation for FrwrkComp is lacking, to say the very least.
N.B. The response is currently set up as a (syntactically) SSML response; this is just a test case using an existing resource.
[1] Response in the Emulator (screenshot)
[2] Snippet from FwrkComp (screenshot)
It turns out it was the first thing I tried, just made syntactically correct. For the code given, it was as simple as:
${dialog.api_response.content[0].Response}
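For context, this works because the endpoint apparently returns an array under content, so the first element has to be indexed before Response can be read. A hypothetical shape of the returned JSON (field names taken from the question, values made up) would be:

[
  {
    "Tags": ["contact", "example-service"],
    "Response": "<speak>You can contact the service on 01234 567890.</speak>"
  }
]

With that shape, ${dialog.api_response.content[0].Response} picks out just the SSML text and leaves the rest behind.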
I am trying to scrape dynamic websites and was using Puppeteer with Node.js before I realized I can just fetch the website's API directly and not have to render stuff that I don't need. By looking in the "Network" tab of Chrome's developer tools I could find the exact endpoints that return the data I need. It works for most of the sites I am trying to scrape, but for some, especially POST requests, the API returns a "403: Forbidden" error code.
The API returns success if I do a fetch request directly from the Chrome console, but as soon as I try from a different tab, Postman, or Node using node-fetch, I get "403: Forbidden".
I have tried copying the exact headers that are sent naturally from the website, and I have tried explicitly setting the "origin" and "referer" headers to the website's address but to no avail.
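For reference, this is roughly what I am trying from Node (the endpoint, payload and header values below are placeholders; the real ones are copied from the browser's Network tab):

const fetch = require('node-fetch');

fetch('https://example-site.com/api/search', {        // placeholder endpoint
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Origin': 'https://example-site.com',              // set explicitly, as described above
    'Referer': 'https://example-site.com/',
    'User-Agent': 'Mozilla/5.0 (copied from the browser request)',
  },
  body: JSON.stringify({ query: 'something' }),        // placeholder payload
})
  .then(res => {
    console.log(res.status);                           // 403 here, 200 when run from the site's own console
    return res.text();
  })
  .then(body => console.log(body))
  .catch(err => console.error(err));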
Is this simply a security measure that is impossible to breach or is there a way to trick the API into thinking that the request is coming from their own website?
I'm setting up a website that will be mobile focused, and one of the features I want to implement is for users to be able to log an entry just by scanning a QR code.
From what I've read, it is not really possible to make a POST request directly from a QR code, so I was thinking of two different options:
1. Make a GET request and then redirect that inside my server to a POST route in my routes.
So the URL would be something like https://example.com/user/resources/someresourceid123/logs/new, which would then trigger a POST request to https://example.com/user/resources/someresourceid123/logs/ and create the new entry, before sending a response to the user. I'm not really sure this is the best approach, or whether it's possible at all.
My POST request only requires the resourceid which I should be able to get from req.params and the userid which I get from my req.user.
2. Do my logic and log the entry to my DB using the GET request to https://example.com/user/resources/someresourceid123/logs/new.
This would mean that my controller for that request does everything needed from the GET request, without having to make an additional POST request afterwards. I should be able to get both the resourceid and userid from the req object, but I'm not sure whether being a GET request limits what I can do with it.
If any of those are possible, which would be the best approach?
I'd propose going with the second option, simply for the sake of performance. But you need to make sure your requests are not cached by any proxy, which can easily happen with GET requests.
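A minimal sketch of that second option in Express (the route mirrors the URL from the question; createLogEntry is a hypothetical DB helper, and req.user is assumed to be populated by your auth middleware):

const express = require('express');
const app = express();

// Option 2: the GET handler triggered by the QR code does the logging itself.
app.get('/user/resources/:resourceid/logs/new', async (req, res) => {
  const resourceId = req.params.resourceid;       // resource id from the scanned URL
  const userId = req.user && req.user.id;         // user id from the authenticated session

  res.set('Cache-Control', 'no-store');           // keep browsers/proxies from caching this GET

  await createLogEntry({ resourceId, userId });   // hypothetical DB helper
  res.send('Entry logged');
});

app.listen(3000);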
I created a userscript for myself which is active on all webpages I visit. It sends data to my debugger/app via jQuery's post ($.post).
I noticed one site not allowing me to send data, even though it worked before, and after a quick look it appears there is some kind of error via xhr-src. The response headers include an 'X-Content-Security-Policy' which lists a bunch of sites (Google being one). So when I try to do a post to localhost:myport/ it violates the policy and thus doesn't post.
What can I do to get this working again? I can't exactly edit the headers (unless I write my own HTTP proxy?). Would I be able to create an iframe using localhost:1234/workaround and post via that? The issue is I still don't know whether that's a violation too, or how to pass it the data.
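For illustration, the policy I'm seeing looks something like this (site list shortened), and as far as I understand it my localhost origin would have to appear under xhr-src for the post to be allowed:

X-Content-Security-Policy: allow 'self'; xhr-src 'self' https://*.google.com

// The call my userscript makes, which the policy above blocks:
$.post('http://localhost:1234/workaround', { page: location.href });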
We need the user to be able to enter URLs in our media section. However, since users can make mistakes when entering URLs, we need to ensure that the URLs are valid Flickr URLs.
Valid URL, e.g.:
http://www.flickr.com/photos/53067560#N00/sets/72157606175084388/
Invalid URL, e.g.:
http://www.flickr.com/photos/53067560#N00/sets/12345/
Flickr offers API services, but I could not find one that would validate a URL.
The easiest way would be to make an HTTP HEAD request to the URL; the URL is valid if it returns an HTTP response code of 200.
You could also use a regex to ensure that it matches the expected pattern, which could cut down the number of requests you have to make.
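A rough sketch of both checks in Node (node-fetch assumed; the regex is only an approximation of the set-URL pattern shown in the question, and the user id in the usage line is a made-up placeholder):

const fetch = require('node-fetch');

// Very rough approximation of the photo-set URLs shown above.
const flickrSetPattern = /^https?:\/\/(www\.)?flickr\.com\/photos\/[^\/]+\/sets\/\d+\/?$/;

async function isValidFlickrUrl(url) {
  if (!flickrSetPattern.test(url)) return false;     // cheap pattern check first, no request needed
  const res = await fetch(url, { method: 'HEAD' });  // HEAD: status code only, no body
  return res.status === 200;                         // 200 means the set page actually exists
}

isValidFlickrUrl('http://www.flickr.com/photos/12345678@N00/sets/72157606175084388/')
  .then(valid => console.log(valid));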
You could make an HTTP HEAD request to the URL and check the HTTP response code. Any response code greater than or equal to 200 and less than 300 is OK. In all likelihood, good URLs will return a 200 code.
I am pretty sure you can at least use `flickr.photos.getInfo` to see if the photo in question exists. The rest of the URL can be validated according to the result received.
You can't check for every possible permutation, of course, but when it comes to the photo id, I am pretty certain you can rely on the Flickr API... or else they would be in trouble themselves, no?
Of course, you can double-check by issuing an HTTP GET request on the URL and verifying that the resulting HTTP code is 200 (or, more generally, >= 200 and < 300).
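Something like this (untested) for the photo-id case; you need your own Flickr API key, and the id would first have to be parsed out of the submitted URL:

const fetch = require('node-fetch');

// Ask the Flickr REST API whether a photo id exists.
// Flickr answers with stat "ok" when it does and stat "fail" when it does not.
async function photoExists(photoId, apiKey) {
  const url = 'https://api.flickr.com/services/rest/'
    + '?method=flickr.photos.getInfo'
    + '&api_key=' + encodeURIComponent(apiKey)
    + '&photo_id=' + encodeURIComponent(photoId)
    + '&format=json&nojsoncallback=1';
  const res = await fetch(url);
  const data = await res.json();
  return data.stat === 'ok';
}

photoExists('1234567890', 'YOUR_API_KEY').then(exists => console.log(exists));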