Hello there and thanks for reading my question.
I am looking into Amazon CloudFront (CF) at the moment, and before I can proceed past entering payment details I need to pin down the exact steps for setting up CF with our own origin server. The basic steps I have been able to find through Googling are:
Register with CF
Set up a CF distribution (this is where you register your origin server)
Update your resource references on your site
The problem I am having is with step 2. Although Amazon describes it as a simple API call, I am still not quite sure what this means or what I would have to do to perform the call.
A lot of bloggers/forum posters suggest using third-party software like CloudBerry. The problem is that CloudBerry charges for the CF/origin-server part, and I only need to do it once (everything else after that can be handled by the AWS Management Console).
I have looked at loads of other similar pieces of software, but they either fail on download or install, or lack the functionality I am looking for in the Windows version!
Now, this page describes how to set up the origin server manually (http://docs.amazonwebservices.com/AmazonCloudFront/latest/DeveloperGuide/), but I am still not sure exactly how this is done.
Has anyone done this before who can offer some guidance or steps on how to do it?
Many thanks in advance!
Greg
I had success using the Fog gem. Once you establish a connection to the Amazon API, creating a distribution is painless.
# Connect to the CloudFront API with your AWS credentials
cdn = Fog::AWS::CDN.new(
  :aws_access_key_id     => YOUR_ID,
  :aws_secret_access_key => YOUR_SECRET_KEY
)

# Create the distribution; the options hash describes your origin
cdn.post_distribution(YOUR_OPTIONS_HASH)
And with that you should receive a 201.
The documentation is great, too.
CloudBuddy (http://m1.mycloudbuddy.com/downloads.html) is free, and you can use it to set up the CloudFront custom origin. Windows only, unfortunately, but you only have to use it once, right? :-D
This page walks through the custom origin server in a bit more detail: http://docs.amazonwebservices.com/AmazonCloudFront/latest/DeveloperGuide/index.html?CreatingDistributions.html. The API call you need to make creates a new distribution that points at your custom origin server. Basically, you craft the request as described and POST it to Amazon's web services.
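If it helps to see it in code rather than raw XML, here is a rough sketch of that same "create a distribution pointing at a custom origin" call using the modern boto3 SDK (Python). This is an illustration, not the REST call the docs describe; the domain names and IDs are placeholders, and the exact required fields may vary by SDK version.

import boto3

# Placeholder names throughout; swap in your own origin details.
cloudfront = boto3.client("cloudfront")

response = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": "my-first-distribution",  # any unique string
        "Comment": "CDN for www.example.com",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "my-custom-origin",
                "DomainName": "origin.example.com",  # your origin server
                "CustomOriginConfig": {
                    "HTTPPort": 80,
                    "HTTPSPort": 443,
                    "OriginProtocolPolicy": "http-only",
                },
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "my-custom-origin",
            "ViewerProtocolPolicy": "allow-all",
            "TrustedSigners": {"Enabled": False, "Quantity": 0},
            "ForwardedValues": {
                "QueryString": False,
                "Cookies": {"Forward": "none"},
            },
            "MinTTL": 0,
        },
    },
)

# The distribution's *.cloudfront.net domain name to use in step 3
print(response["Distribution"]["DomainName"])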
Amazon just updated their CloudFront Management console to support features that were previously only available through their API, so you should be able to configure it without using the API.
http://aws.amazon.com/about-aws/whats-new/2010/11/09/cloudfront-adds-support-for-custom-origins-and-sla/?ref_=pe_2170_19753730
Suppose I want to host a simple Node API (or any API, for that matter), such that requesting the hosted URI path returns an object to the client.
For example:
If a client makes a request to the API, it must return the client's IP address along with the request type.
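Something like this minimal sketch, just to make the behavior concrete (Python/Flask as a stand-in for the Node version; names are illustrative only):

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/", methods=["GET", "POST"])
def whoami():
    # Return the caller's IP address and the HTTP method used
    return jsonify({
        "ip": request.remote_addr,
        "method": request.method,
    })

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)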
I'm particularly worried about the hosting part, not the implementation of the API itself, which I'm already well aware of.
Any suggestion or resource links are appreciated.
PS: I'm not very familiar with hosting servers or DevOps.
I am not sure if I understood your question properly.
"PS: I'm not very familiar with hosting servers or DevOps."
You should definitely be familiar with system administration on the relevant operating system and environment, which you haven't mentioned.
I almost downvoted your post because it's not even a real question. I'd recommend consulting your favourite search engine first.
Try Heroku; they have nice documentation and free hosting:
https://devcenter.heroku.com/articles/getting-started-with-nodejs#introduction
Are there any limits on using the dbpedia-spotlight APIs?
I found the endpoint documented here (https://www.dbpedia-spotlight.org/api), but the page does not mention any limits, such as rate limits, or say anything about the endpoint's availability.
Regarding availability, you can check it at status.dbpedia-spotlight.org. The API is totally open, without any rate limit, but please try not to overload our demo server. If you need batch processing, you can install it locally using our Docker images.
I guess the API has several limits. I could not find any documentation about this, but in my experience I discovered:
Your API requests can contain up to 930 words (I recommend 900 or fewer).
I did not use it for many requests; my guess is that the restriction is around 19 or fewer requests.
I have no problem with the limits and I'm grateful for the service, but it would be useful if this were clearly stated in the documentation on the web page.
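To stay under that word limit with longer texts, something like this sketch works (Python with the requests library; the endpoint URL and parameters come from the public Spotlight docs, and the 900-word chunk size follows my observation above):

import requests

ENDPOINT = "https://api.dbpedia-spotlight.org/en/annotate"

def annotate(text, confidence=0.5, chunk_words=900):
    """Annotate text in <= chunk_words pieces; returns all resources found."""
    words = text.split()
    resources = []
    for start in range(0, len(words), chunk_words):
        chunk = " ".join(words[start:start + chunk_words])
        resp = requests.post(
            ENDPOINT,
            data={"text": chunk, "confidence": confidence},
            headers={"Accept": "application/json"},
        )
        resp.raise_for_status()
        resources.extend(resp.json().get("Resources", []))
    # Note: naive chunking can split an entity across two chunks.
    return resources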
I'm a novice MTurk user. I created HITs for crowdsourcing using an external question hosted on a server. I wanted to know if there is a web interface where I can see the progress of my HITs. I tried looking at https://requester.mturk.com/manage and https://requestersandbox.mturk.com/manage, but I cannot see the HITs created programmatically using boto3. Should I look somewhere else? If not, what's the way to get this information?
I share your pain right now. As of June 2020, this situation hasn't changed. HITs that are NOT created through the MTurk web interface STILL do not display on the web interface. It's terrible. We have 3 options for seeing and managing the HITs:
Use scripting and boto3. <-- Best option for now.
Use the AWS CLI.
Use the AWS shell (aws-shell).
I think the best option is to write scripts that do exactly what you need. Chances are you'll need to do things more efficiently than you could using the AWS CLI alone. aws-shell isn't easy enough to use, and it also looks unsupported for over a year at this point (judging by its official GitHub issue tracker).
For what you're asking specifically, you'll need to use the list_hits() method and possibly list_assignments_for_hit(). See https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/mturk.html
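As a concrete starting point, here's a minimal sketch of those calls (the endpoint URL is the production requester endpoint; use the sandbox endpoint, https://mturk-requester-sandbox.us-east-1.amazonaws.com, while testing):

import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester.us-east-1.amazonaws.com",
)

# Page through every HIT on the account and show its progress
next_token = None
while True:
    kwargs = {"MaxResults": 100}
    if next_token:
        kwargs["NextToken"] = next_token
    page = mturk.list_hits(**kwargs)
    for hit in page["HITs"]:
        print(hit["HITId"], hit["Title"], hit["HITStatus"])
        # Submitted work for this HIT
        assignments = mturk.list_assignments_for_hit(HITId=hit["HITId"])
        print("  submitted assignments:", assignments["NumResults"])
    next_token = page.get("NextToken")
    if not next_token:
        break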
Also, I'm very new to this, so if it sounds like I barely or only sorta know what I'm talking about, that's correct. But I also wished there had been a straightforward answer to this question a couple of weeks ago when I was sitting here dumbfounded.
I just stood up a website using the Windows Azure Web Sites preview. After doing so, I ran YSlow to make sure the score was what I expected, and I got a message that reads "Use cookie-free domains". This is just an informational website; we don't even use session state. So I checked the HTTP request, and there's a cookie in there named "ARRAffinity". Some quick Googling turns up this link:
http://social.msdn.microsoft.com/Forums/en-US/windowsazurewebsitespreview/thread/0ba2c2f6-d5a1-40b6-8d0d-e44b58b65753/
Does this mean that Azure Web Sites always use sticky IP? This is kind of shocking, since Web Roles use round-robin behavior.
Yes, Windows Azure Web Sites does sticky load balancing using the ARRAffinity cookie, and it applies to the free, shared, and reserved models.
I imagine this is done to more easily support the custom gallery applications that might not run correctly without a proper state server, and it also allows for easier scaling without forcing applications to be stateless.
I'm starting a blog and I'm in the process of choosing where to host it. For now I want a free solution like Blogger or WordPress.com.
The problem I'm facing is that I want to use files I have in an S3 bucket on my blog, but none of the blog solutions I found support any kind of server-side code, which means that in order to use S3 query string authentication I would have to put sensitive information in the client. For obvious reasons I don't want to do that.
So, I'm looking for ideas on how I can safely include content from S3 in a free blog host.
I'm not aware of any blog software that supports Amazon S3 by default. So your best shot is to get cheap hosting (hosting is really cheap these days, a few dollars a month). Then you can install a plugin which supports Amazon S3.
I think we might need a bit more detail here. For example, if you just want to link to files on S3 from your blog, you can make the files globally readable on S3 and then just link to them, with no authentication necessary.
If you want to do something more complex, maybe look into hosting WordPress yourself using WordPress.org, at which point you can run server-side code yourself, perhaps as a plugin. Or maybe there's an existing WordPress plugin that would suit you -- there's definitely a plugin which copies WordPress file uploads to S3 and then serves them from there, rather than from your blog host, for example. It's not a free solution, but hosting starts pretty cheap.
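For reference, the query string authentication the asker mentions is exactly the kind of server-side step a free blog host won't run for you. Here's a sketch of what it looks like with boto3 (Python); the bucket and key names are placeholders:

import boto3

s3 = boto3.client("s3")  # credentials come from the environment/config

# Generate a time-limited signed URL server-side, so the secret key
# never has to be embedded in the client.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-blog-assets", "Key": "images/header.png"},
    ExpiresIn=3600,  # link valid for one hour
)
print(url)  # embed this URL in the page at render time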
For fairly obvious reasons of security, there aren't any blog service providers I can think of who provide server-side code access.