Some on my team believe this cannot be true, but I'd like a definitive answer.
I have seen, but cannot currently find, an article showing how to see in the logs that API Gateway has spawned its CloudFront distribution. If someone could show me how to do that, I'd have my answer. (Just answering the question would be OK, too :-)
Thanks!
Thank you Michael for your comments showing me how to do this.
I sent requests to demo APIs in both us-west-1 and us-gov-west-1, and the response headers indicated a CloudFront distribution in front of the API in us-west-1, but not in front of the API in us-gov-west-1. So, my teammates are correct.
No CloudFront distribution for GovCloud-based, API-Gateway-generated APIs.
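For anyone who wants to reproduce the check, here is a minimal sketch, assuming two deployed test endpoints (the URLs below are placeholders); an edge-optimized API Gateway endpoint fronted by CloudFront normally returns Via and X-Amz-Cf-Id response headers:

```python
import requests

# Hypothetical endpoint URLs -- substitute your own deployed stages.
ENDPOINTS = [
    "https://abc123.execute-api.us-west-1.amazonaws.com/prod/ping",
    "https://abc123.execute-api.us-gov-west-1.amazonaws.com/prod/ping",
]

for url in ENDPOINTS:
    resp = requests.get(url)
    # CloudFront stamps these headers on responses it forwards;
    # their absence suggests no distribution sits in front.
    via = resp.headers.get("Via", "<missing>")
    cf_id = resp.headers.get("X-Amz-Cf-Id", "<missing>")
    print(f"{url}\n  Via: {via}\n  X-Amz-Cf-Id: {cf_id}")
```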
I have searched all over the internet for an answer, and although I can find a million people with the same question, I cannot find an official solution to the problem I'm experiencing.
I always get "Cannot display preview. You can post as is, or try another link." displayed.
I've stripped a page down to only the required Open Graph meta tags, so I know they work (they pass multiple OG validators). I've disabled any kind of robots blocking and any kind of redirects, disabled the firewall on a test server, and made sure the LinkedIn bot requests are hitting the server. All I ever see in the browser console is a status 500 returned from LinkedIn's preview-generator API.
We are hosting on Windows Server with IIS 8.5. If I create a demo and host it somewhere else, it works, which makes me think the issue is server-related or in the IIS settings.
Reading "Linkedin post's picture doesn't appear in summary", it seems like a similar issue. We are not serving over SSL, so it's nothing to do with that.
I have already asked this question on LinkedIn's forum but am having no luck, so I'm hoping someone on here can help, or someone from LinkedIn's tech team can.
Thanks
We had this issue as well, and it turned out that parts of our system that use user-generated themes were not adding the "Content-Type" header to the response.
So examine the response headers coming from your server and make absolutely sure they are correct, and that they include the correct "Content-Type" (with the correct encoding) and "Content-Length".
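If you want to verify what the bot actually receives, a minimal sketch along these lines may help (the URL is a placeholder, and the User-Agent string is only an approximation of LinkedIn's crawler):

```python
import requests

# Placeholder URL -- point this at the page you are sharing on LinkedIn.
url = "http://example.com/page-to-share"

# Approximate the crawler's User-Agent to catch UA-dependent behavior.
resp = requests.get(url, headers={"User-Agent": "LinkedInBot/1.0"})

print("Status:", resp.status_code)
# These two headers are the usual culprits when previews fail.
print("Content-Type:", resp.headers.get("Content-Type", "<missing>"))
print("Content-Length:", resp.headers.get("Content-Length", "<missing>"))
```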
I have a very interesting requirement that I am not too sure of the answer. I am turning to Stack Overflow in the hope that someone is able to share their experiences and propose a solution.
Setup
I have a front-facing website powered by Ghost running on a standard MEAN stack environment, and all traffic is handled via CloudFlare.
Problem
I have recently become aware that I am receiving a large number of requests in the CloudFlare dashboard that do not appear in my Google Analytics. I am aware that some people may have JS disabled; however, we are talking about orders of magnitude of difference between the two. I would very much like to know why.
Hypothesis
I suspect that someone is port scanning, or attempting to find vulnerabilities in my platform. Or it could be a simple case of links going astray. Either way, I am not sure.
Solutions
This is the part I am not sure about. What would be the best approach to record and retain HTTP requests? One consideration I have had is to use Morgan to filestream requests into a .log file and review them at a later date, but I wonder if there is a more elegant solution.
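To make the review step concrete, here is the kind of minimal script I have in mind for going through a Morgan combined-format log afterwards (the file name is assumed); it just tallies user agents so crawler traffic stands out:

```python
import re
from collections import Counter

# Assumes Morgan's "combined" format and this log path.
LOG_FILE = "access.log"

# In combined log format, the user agent is the last quoted field.
UA_PATTERN = re.compile(r'"([^"]*)"\s*$')

counts = Counter()
with open(LOG_FILE) as fh:
    for line in fh:
        match = UA_PATTERN.search(line)
        if match:
            counts[match.group(1)] += 1

# The heavy hitters at the top are usually crawlers and scanners.
for agent, n in counts.most_common(10):
    print(f"{n:6d}  {agent}")
```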
I welcome any thoughts you may have.
Thanks
Google Analytics is a fair bit more conservative than Cloudflare. One reason, as you mentioned, is that Cloudflare has access to raw HTTP logs, instead of having to use JavaScript to identify page views. Since Cloudflare only counts HTTP requests, port scanning would not be recorded as a hit.
However, even with bots accounted for, Cloudflare may still record views that Google Analytics can't; for example, AJAX content requests. The Google Analytics beacon runs only once, when the page is loaded, so Google Analytics records a single page view, whereas Cloudflare sees the page load plus the AJAX call as 2 HTTP requests in its raw logs.
For details, please see the following blog post; it goes into detail as to how Google Analytics and Cloudflare analytics can differ: Understanding Analytics: When Is a Page View Not a Page View?
I realise there are numerous questions regarding this issue, and I have read them all, but I still cannot get this to work!
I have:
Created my project in the API console
Enabled Places API in services
Created a new iOS API key (I have repeated this step twice now)
Tried the request with sensor=true, sensor=false and no sensor param at all
Tried HTTP and HTTPS
Those are all the fixes I found within the existing questions regarding this issue, have I missed anything? Here is a sample URL I am using to test:
https://maps.googleapis.com/maps/api/place/textsearch/json?sensor=true&query=Test%20sd&key=MYKEY
And yes, I am replacing 'MYKEY' with my actual API key :).
I am developing an iOS app using MonoTouch, but I don't really see how that is relevant, as I can't get this to work in the browser either.
Any help would be hugely appreciated! Been stuck on this all day now.
I believe you want to be using your "Simple API Access" key (not an Android/iOS key). The documentation mentions this as the last step.
https://developers.google.com/places/documentation/#Authentication
I tried your sample URL with my Simple API Access key and it was successful.
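If it helps to test outside the browser, here is a minimal sketch of that same request (substitute your Simple API Access key for the placeholder); the status field in the JSON response tells you whether the key was accepted:

```python
import requests

# Replace with your Simple API Access key (not the iOS/Android key).
API_KEY = "MYKEY"

resp = requests.get(
    "https://maps.googleapis.com/maps/api/place/textsearch/json",
    params={"query": "Test sd", "sensor": "true", "key": API_KEY},
)
data = resp.json()

# "OK" means the key worked; "REQUEST_DENIED" usually means the wrong
# key type was used or the Places API is not enabled for the project.
print("Status:", data.get("status"))
for result in data.get("results", [])[:5]:
    print("-", result.get("name"))
```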
The GitHub API documentation says that the URL
https://api.github.com/users
will give all users in the order they signed up, but I only seem to get the first 135.
Any ideas how to get the real full list?
Please use the since parameter in your GET request.
https://api.github.com/users?since=XXX
It's probably done this way to limit the resources needed to handle such a request; without such a limit, it would just be asking for a DoS attack.
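For example, here is a minimal sketch that pages through the list, feeding the last id of each batch back in as the next since value (unauthenticated rate limits will cap how far you can go):

```python
import requests

since = 0
users = []

# Fetch a few pages; each response is a batch of users ordered by id.
for _ in range(3):
    resp = requests.get("https://api.github.com/users", params={"since": since})
    batch = resp.json()
    if not batch:
        break
    users.extend(batch)
    # The next page starts after the last id seen in this batch.
    since = batch[-1]["id"]

print(f"Fetched {len(users)} users, last id {since}")
```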
If you check the response headers for that request, GitHub provides pagination links under the Link header:
Link: <https://api.github.com/users?since=135>; rel="next", <https://api.github.com/users{?since}>; rel="first"
I believe that, since API v3, GitHub has been moving towards a hypermedia API.
Github Hypermedia API
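If you are in Python, the requests library parses that header for you: response.links exposes the rel="next" URL directly, so a sketch like this can follow the pagination without constructing since by hand:

```python
import requests

url = "https://api.github.com/users"
total = 0

# Follow rel="next" links from the Link header for a few pages.
for _ in range(3):
    resp = requests.get(url)
    total += len(resp.json())
    next_link = resp.links.get("next")  # parsed from the Link header
    if next_link is None:
        break
    url = next_link["url"]

print(f"Saw {total} users across the pages followed")
```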
EDIT
This is beyond the scope of this question, but it's related. To learn more about hypermedia APIs and REST, take a look at these slides by Steve Klabnik:
http://steveklabnik.github.com/hypermedia-presentation/#1
Both of the existing answers are 100% correct, but I would advise you to use a wrapper for whatever language you happen to be working in. There are plenty of them, and there is an official one for Ruby (Octokit). Here is a list of all of them.
You can filter on type:user like this:
https://api.github.com/search/users?q=type:user
See Also: GitHub API get total number of users/organizations
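As a side note, the search response also includes a total_count field, so a minimal sketch like this (mind the search API's stricter rate limit) gives the overall count rather than the list itself:

```python
import requests

resp = requests.get(
    "https://api.github.com/search/users",
    params={"q": "type:user"},
)
data = resp.json()

# total_count is the number of matches; items holds the current page.
print("Total users:", data.get("total_count"))
```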
We have started developing an application for a location-aware emergency service. Users can connect through a computer, smartphone, or even through WAP. We want to use cloud servers (GAE or AWS), and we want to optimize the site for the user's device.
I cannot find out exactly how to detect the device or browser the user is using. With Apache, by analyzing the browser request, we could determine the browser type, but how do we learn that on cloud servers like GAE or AWS? Is there any other way to learn which browser or device the user is using? Also, is it possible to know the user's IP address on GAE or AWS?
Thanks in advance.
I have no experience with programming in the cloud, but the request headers (among them User-Agent) come from the client and should be present as usual.
For GAE / Python, the answer is in this question: User-Agent in Google App Engine python
For GAE / Java, a hint is in the GAE docs. There must be a request object containing all the headers.
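For GAE / Python, a minimal webapp2 handler sketch (the route and handler names here are made up) showing both the User-Agent header and the client IP:

```python
import webapp2

class DeviceInfoHandler(webapp2.RequestHandler):
    def get(self):
        # Request headers are passed through from the client as usual.
        user_agent = self.request.headers.get("User-Agent", "unknown")
        # The client's IP address, as seen by App Engine.
        ip_address = self.request.remote_addr
        self.response.write(
            "User-Agent: %s\nIP: %s\n" % (user_agent, ip_address)
        )

app = webapp2.WSGIApplication([("/deviceinfo", DeviceInfoHandler)])
```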