Adding proper permissions to an AWS S3 bucket to allow SEO search verification

I'm trying to verify my site for SEO purposes with Google using https://www.google.com/webmasters/tools/home?hl=en. I am using AWS S3 to host my content and AWS CloudFront to serve it through the CDN. I'm following this checklist: http://www.wikihow.com/Get-Your-Website-Indexed-by-Google and am on Step 4.
The steps Google lists to verify are:
1. Download this HTML verification file. [googlelongstringofcharacters.html]
2. Upload the file to https://www.dynamicdentaledu.com/
3. Confirm successful upload by visiting https://www.dynamicdentaledu.com/googlelongstringofcharacters.html in your browser.
4. Click Verify below.
To stay verified, don't remove the HTML file, even after verification succeeds.
I've added the HTML file to my site's root. When I click Confirm in step 3, I get an error, so I skipped that step and clicked the Verify button in step 4. Google says:
Verification failed for https://www.dynamicdentaledu.com/ using the
HTML file method (less than a minute ago). We were unable to connect
to your server.
I think this is due to the permissions and bucket policy I have on the S3 bucket. The bucket policy is:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::dynamicdentaledu.com/*"
    }
  ]
}
How can I enable Google to access what it needs?
EDIT: Following AWS's bucket policy examples, I changed the policy to:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::dynamicdentaledu.com/*"
    }
  ]
}
I am now getting:
<Error>
  <Code>NoSuchKey</Code>
  <Message>The specified key does not exist.</Message>
  <Key>googlelongstringofcharacters.html</Key>
  <RequestId>42DD1F1F0D5E06F7</RequestId>
  <HostId>
    zbmsLAEMz3ed2zKx3gKCHjrtHxeWmaLl16JJs6012zFcLQdnMH48mFJY1YOETD3WMS/8NwkU3SY=
  </HostId>
</Error>

You have three issues.
By default, CloudFront will keep returning a cached error to the browser for 5 minutes after you fix the problem. When the origin server returns an error, there is usually no reason for CloudFront to continually retry, but in a case like this you may want to set the Error Caching Minimum TTL for 403 and 404 errors to 0 seconds in CloudFront. See my answer to Amazon CloudFront Latency for a further explanation of this.
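For reference, here is a sketch of what the relevant piece of the CloudFront distribution config might look like with the error caching TTLs zeroed out (the rest of the distribution settings are omitted; field names follow the CloudFront API, but treat this as illustrative):
"CustomErrorResponses": {
  "Quantity": 2,
  "Items": [
    { "ErrorCode": 403, "ErrorCachingMinTTL": 0 },
    { "ErrorCode": 404, "ErrorCachingMinTTL": 0 }
  ]
}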
You did not need to change your bucket policy. If your site was otherwise working and you uploaded this new object with the "make everything public" option selected (equivalent to setting x-amz-acl: public-read if using the API), that should have been sufficient, though the 5-minute timer mentioned above could have complicated your troubleshooting. Note also that your bucket permissions allow Everyone to List the contents of your bucket. That is not actually causing the problem here, but it is a configuration that is potentially too permissive and worth mentioning: it allows anyone to download a complete list of all your files, which seems like a bad idea in most cases.
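If you want to re-upload the file with the correct ACL programmatically, a minimal sketch with the AWS SDK for JavaScript (v2) might look like this (the bucket and key are taken from the question; everything else is an assumption):
// Re-upload the verification file with a public-read ACL
var AWS = require('aws-sdk');
var fs = require('fs');
var s3 = new AWS.S3();

s3.putObject({
  Bucket: 'dynamicdentaledu.com',
  Key: 'googlelongstringofcharacters.html', // placeholder for the real file name
  Body: fs.readFileSync('googlelongstringofcharacters.html'),
  ACL: 'public-read', // same effect as the x-amz-acl: public-read header
  ContentType: 'text/html'
}, function (err) {
  if (err) console.error('Upload failed:', err);
});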
You didn't upload the file with the correct name. <Code>NoSuchKey</Code> is never returned for any reason other than, simply enough, that there is no object with this key (path/filename.ext) in the bucket. It cannot be caused by policy, permissions, ACLs, etc. Check in the S3 console: the file is either not named as you intended, or is not in the right place at the root of the bucket. The long string of characters is not, as far as I am aware, a secret value, only an obscure/unpredictable one, so if the information here doesn't help you resolve this, showing a screenshot of the console including this object and its properties should not pose any security issue for you. This may be necessary for further troubleshooting, should that be required.
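If you prefer to check from code rather than the console, a quick existence check with the AWS SDK for JavaScript (v2) might look like this sketch (bucket and key from the question; treat the rest as illustrative):
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

// headObject fails with a NotFound error when no object exists under that exact key
s3.headObject({
  Bucket: 'dynamicdentaledu.com',
  Key: 'googlelongstringofcharacters.html' // placeholder for the real file name
}, function (err, data) {
  if (err) console.log('Object missing or inaccessible:', err.code);
  else console.log('Object exists, size:', data.ContentLength);
});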

Related

How do I load an image uploaded to S3 in my website?

I'm sure this question has been debated a lot, but I searched for some hours and I can't seem to find a solution. Most guides are old, or I can't make them work when I try.
I'm working on a Node.js + React application and I'm using S3 to store images. I want the React app to load the images, so I need to provide it a URL. I have worked with S3 (and RoR) in the past, and I remember I used to create a temporary URL that was refreshed from time to time.
Tutorials I found say I have to create a URL like this:
https://<bucket>.s3.amazonaws.com/<image>
So it's not a temporary URL, and it doesn't work either way: I get an "Access Denied" XML response in return. I have even granted permissions on this file (I selected "Make it public"), but it doesn't seem to work.
So how should I load this image in React? How do I retrieve it from the service? I'm using the official SDK.
Upload your image to S3 as public, and check that the image has public rights by copying and pasting the URL returned by S3 into a web browser. If the image renders, use an image tag in the way shown below, inserting your S3 URL as the src:
<img src = "https://yourS3Url/image.jpg" alt = "HTML IMAGE"/>
Try this way. Add this policy to the S3 bucket:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::BUCKETNAME/*"
    }
  ]
}
Then add this CORS configuration:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>9000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
Upload your files, and then you can access them:
<img src = "https://yourS3Url/image.jpg" />
What is needed is a "pre-signed URL"; making buckets public isn't recommended. In terms of Node.js, it is something like:
// Assumes the AWS SDK for JavaScript (v2) is installed and configured
var AWS = require('aws-sdk');
var s3Client = new AWS.S3();

// Returns a time-limited URL granting read access to a private object
var url = s3Client.getSignedUrl('getObject', {
  Bucket: bucket,
  Key: key,
  Expires: timeInSeconds // e.g. 900 for 15 minutes
});
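On the React side, the signed URL can then be used directly in an img tag. A minimal sketch follows (the /api/image-url endpoint and its response shape are assumptions about your backend):
// Illustrative component: fetch a signed URL from the backend, then render it
function S3Image({ imageKey }) {
  const [url, setUrl] = React.useState(null);

  React.useEffect(() => {
    fetch('/api/image-url?key=' + encodeURIComponent(imageKey))
      .then(function (res) { return res.json(); })
      .then(function (data) { setUrl(data.url); });
  }, [imageKey]);

  return url ? <img src={url} alt="from S3" /> : null;
}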

Get the route URL for an HTTP-triggered function in an ARM template

I'm trying to figure out how to get the route for an HTTP-triggered Azure Function within an ARM template.
Thanks to a blog post I managed to find the listsecrets command, but when I execute this action via PowerShell, the output doesn't give me the trigger_url I was expecting. The URL does not reflect the configured route of the function; it shows the default trigger URL, as if no route had been configured.
Is there any way I can get hold of the configured route instead, since I can't seem to use the trigger_url?
My configured route has parameters in the path as well, e.g.:
{
  "name": "req",
  "type": "httpTrigger",
  "direction": "in",
  "authLevel": "function",
  "methods": [
    "POST"
  ],
  "route": "method/{userId}/{deviceId}"
}
The output of listsecrets is:
trigger_url: https://functionapp.azurewebsites.net/api/method?code=hostkey
Is there any other way to extract the host key and route?
Try playing with the API version, but I would suspect that this is not possible as of now.
Currently, the only way to get the route is by reading the function.json file and parsing that information out, which you can do by using Kudu's VFS API.
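A rough sketch of reading the route that way from Node.js follows (the function app name, function name, and deployment credentials are placeholders; the VFS path assumes the default wwwroot layout):
var https = require('https');

// Fetch function.json for one function through Kudu's VFS API
https.get({
  hostname: 'functionapp.scm.azurewebsites.net', // your app's SCM host
  path: '/api/vfs/site/wwwroot/MyFunction/function.json',
  auth: 'deployUser:deployPassword' // Kudu deployment credentials
}, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    var config = JSON.parse(body);
    var trigger = config.bindings.find(function (b) { return b.type === 'httpTrigger'; });
    console.log('Configured route:', trigger.route); // e.g. method/{userId}/{deviceId}
  });
});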
For the keys, I would actually recommend using the key management APIs instead of listSecrets. The latter is meant to address a small set of scenarios (primarily to enable some internal integrations), whereas the key management API is more robust and will continue to work with different secret storage providers (e.g. Azure Storage, which is what is used when slots are enabled, and which will eventually become the default).

User configurable url permissions in chrome extension

I am creating a Chrome extension that adds an action with a global hotkey to JIRA. I can hard-code the URL for my own instance of JIRA into my extension, but I would like this URL to be user-configurable, as other users will have different URLs for their own JIRA instances.
I would like to know if there is a better way of doing this than just giving my extension permission for all URLs and comparing, in my background script, the current URL to the one the user has chosen as a setting. Ideally my background/inject script would only run on the user's chosen URL. I looked at the permissions API but can't figure it out.
My current manifest.json looks as follows. I have hard-coded permission to my JIRA URL; is there a way to make this permission user-configurable?
{
  "permissions": [
    "contentSettings",
    "storage",
    "commands",
    "http://myjiraurl/*"
  ],
  "background": {
    "matches": [
      "http://myjiraurl/*"
    ],
    "scripts": ["src/background/background.js"]
  },
  "content_scripts": [
    {
      "matches": [
        "http://myjiraurl/*"
      ],
      "js": [
        "js/jquery.min.js",
        "src/inject/inject.js"
      ]
    }
  ]
}
Whatever you do, in order to inject into arbitrary hosts you have to have http://*/ and https://*/ in your permissions, preferably as optional permissions. Your proposal of comparing the current URL with the one in the settings sounds very reasonable: compare the two, make a chrome.permissions.request if they match, and silently stop running your code if the request fails.
I can think of one work-around to avoid that, but honestly it's not giving you much (see the sketch after this list):
1. Use http://*/ and https://*/ as optional permissions (you should really, really make them optional anyway).
2. Make the chrome.permissions.request call at the time the user is modifying the options.
3. In your background script, make a chrome.permissions.contains call for the current URL and silently stop running your code if there's no permission.
The result of the above is that instead of comparing the current URL to the one in the settings, the extension compares the current URL with whatever it has previously been granted permission for. I guess it simplifies the background script a bit (you don't have to read the URL stored in the settings), but it makes the options page code a bit more complicated.
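A minimal sketch of that flow (the URL value and storage key are illustrative):
// Options page: request the host permission when the user saves their JIRA URL
var origin = 'http://myjiraurl/*';
chrome.permissions.request({ origins: [origin] }, function (granted) {
  if (granted) chrome.storage.sync.set({ jiraUrl: origin });
});

// Background script: check before acting on the current URL
chrome.permissions.contains({ origins: [origin] }, function (hasPermission) {
  if (!hasPermission) return; // silently stop, as described above
  // ... run the JIRA action here
});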

AuthorizationError when confirming SNS subscription over HTTP

I'm writing a simple SNS client that is meant to subscribe itself to an SNS topic and then listen for notifications. I can successfully submit an sns.subscribe request, but when I pick up the SubscriptionConfirmation POST message from AWS and try to respond using sns.confirmSubscription, I get an AuthorizationError:
[AuthorizationError: User: arn:aws:iam::xxx:user/mv-user is not authorized to perform: SNS:ConfirmSubscription on resource: arn:aws:sns:us-east-1:xxx:*]
If I use exactly the same Token and TopicArn in a GET query to the server, the subscription confirmation works fine, with no authentication.
Any ideas why it's not working? My SNS topic is wide open with publish/subscribe permissions set to 'Everyone'.
For reference, my code is something like this:
var params = {
  TopicArn: topicArn, // e.g. arn:aws:sns:us-east-1:xxx:yyy
  Token: token // long token extracted from POST body
};
sns.confirmSubscription(params, function (err, data) {
  if (err) {
    // BOOOM - keep getting here with AuthorizationError
  } else {
    // Yay. Worked, but never seem to get here :(
  }
});
However, if I navigate to the URL similar to this in a browser (i.e. completely unauthenticated), it works perfectly:
http://sns.us-east-1.amazonaws.com/?Action=ConfirmSubscription&Token=<token>&TopicArn=arn%3Aaws%3Asns%3Aus-east-1%3Axxx%3Ayyy&Version=2010-03-31
The only differences seem to be the inclusion of 'Authorization' and 'Signature' headers in the programmatic version (checked using Wireshark).
Any ideas? Thanks in advance!
Update
In my code, if I just programmatically do a simple GET request to the SubscribeURL in the SubscriptionConfirmation message, this works fine. It just seems odd that the confirmSubscription API call doesn't work. I will probably stick to this workaround for now; a sketch of it follows.
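For illustration, the workaround amounts to something like this in Node.js (parsing the POST body is assumed to have happened already):
var https = require('https');

// message is the parsed SubscriptionConfirmation POST body from SNS
function confirmViaUrl(message) {
  https.get(message.SubscribeURL, function (res) {
    console.log('Confirmation response status:', res.statusCode);
  });
}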
Update 2
I also get the same error when calling sns.unsubscribe, although, again, calling the UnsubscribeURL in each notification works. It seems other people have run into that issue too, but I can't find any solutions.
I faced a similar issue while developing my application. The way I ended up solving it was the following:
1. Go to IAM and click on your user.
2. Go to the Permissions tab and click on "Attach Policy".
3. Use the filter to search for "AmazonSNSFullAccess".
4. Attach that policy to your user.
The above should take care of it.
If you want to be fancy, you can create a custom policy based on "AmazonSNSFullAccess" and apply it to your user instead. The custom policy would be something similar to the following:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "sns:ConfirmSubscription"
      ],
      "Effect": "Allow",
      "Resource": "YOUR_RESOURCE_ARN_SHOULD_BE_HERE"
    }
  ]
}
The error says it all:
[AuthorizationError: User: arn:aws:iam::xxx:user/mv-user is not authorized to perform: SNS:ConfirmSubscription on resource: arn:aws:sns:us-east-1:xxx:*]
is basically telling you that the IAM user you're using to call ConfirmSubscription doesn't have the proper permissions to do so. Your best bet is to update the permissions for that IAM user, specifically adding the ConfirmSubscription permission.
(Based on your comments, even though the documentation says otherwise, the error is pretty specific. It might be worth following up directly with AWS about this issue, since either the error message or the documentation is incorrect.)

Chrome extension bug that could be related to cross-origin permissions

We run an extension that requires fetching and searching for data on multiple websites.
We have been making cross-origin XMLHttpRequests using jQuery and have not faced an issue until now.
The asynchronous requests are being executed successfully, even though we have not explicitly requested cross-origin permissions as suggested here: https://developer.chrome.com/extensions/xhr
This is what the relevant portions of our manifest currently look like:
{
  "background": {
    "scripts": ["background.js"]
  },
  "permissions": ["storage"],
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": [
        "jquery-2.0.0.min.js", "jquery-ui-1.10.3.custom.min.js", "date.js",
        "file1.js", "file2.js",
        "fileN.js"
      ],
      "run_at": "document_idle",
      "all_frames": false
    }
  ],
  "content_security_policy": "script-src 'self' https://ssl.google-analytics.com; object-src 'self'",
  "web_accessible_resources": ["icona.png", "iconb.png", "iconc.png"],
  "manifest_version": 2
}
Even though the permissions do not explicitly request access to the URLs from which data is asynchronously fetched, the extension has worked fine.
Of late, we have had a few complaints from users that the extension no longer works and no data is displayed. We have not been able to replicate this issue in Chrome on Linux (Version 34.0.1847.132). The users facing this issue seem to be using Mac OS X or, less frequently, Windows.
We cannot figure out why this issue would be OS-specific, or whether that is just a curious correlation.
If the problem is indeed one of wrong permissions, can we set the permissions to
["http://*/", "https://*/"]
without having the extension disabled automatically, pending manual re-enabling by the user?
We already require access to all URLs through "matches": ["<all_urls>"]. Does this ensure that adding the permissions above will not trigger automatic disabling of the extension?
Chrome extensions do allow cross-origin requests, but you have to declare the hosts you want to access in the permissions section of your manifest. The matches section of content scripts doesn't give you host permissions.
You should add host permissions to your manifest (a sketch is below). I don't know what will happen on update. Considering that the user was already prompted to allow your extension access to all their web data, your extension may well not be disabled on update. You can simply test that by creating a testers-only extension on the web store with your original version, installing it, updating it, and seeing what happens.
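For reference, the permissions section might end up looking like this (a sketch; the rest of the manifest stays as-is):
"permissions": [
  "storage",
  "http://*/",
  "https://*/"
]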
