How to control document access to S3 with NodeJS / Express - node.js

I’ve done a fair amount of research on this question, and surprisingly haven’t found anything relevant. Perhaps I am missing the right keywords to search for!
My app has a requirement for users to upload files to it. These will go into Amazon S3. I have worked out how to upload to S3 using one single user permissioned through IAM.
A further requirement is that a user can only access the files they have uploaded. User 1 cannot access User 2’s documents. In due course, I would also like to enable User 1 to grant User 2 permissions to, say, a collection of User 1’s documents.
I’m struggling to work out how to implement this. There are a few options, I think:
1. One single bucket, one single IAM user. Permissions are controlled entirely through the backend in Express/NodeJS. This would be the simplest implementation for me, but I’m concerned that my permissions are not mirrored in S3. Is that a security risk?
2. Multiple buckets, creating IAM users on the fly through Express. I presume this is technically possible, but it would presumably lead to me storing IAM credentials in my app’s database. That sounds like a no-no to me.
3. Using Auth0 delegation and generating a temporary token in AWS (my app uses Auth0 to authenticate users). This sounds quite complicated and I can’t find a good enough tutorial to get me clued up on this. Perhaps this is the best way forward, but is it substantially different from (1)?
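To make the first option concrete, this is roughly the shape I have in mind — just a sketch, and all names here are hypothetical:

```javascript
// Every object key is namespaced by its owner's id, and the backend refuses
// to touch keys outside the caller's namespace.
function keyFor(userId, filename) {
  return `users/${userId}/${filename}`;
}

function mayAccess(userId, key) {
  return key.startsWith(`users/${userId}/`);
}

// An Express route would presign or stream only after this check, e.g.:
// app.get('/files/*', (req, res) => {
//   const key = req.params[0];
//   if (!mayAccess(req.user.id, key)) return res.sendStatus(403);
//   // ...fetch the object from S3 with the app's single IAM user...
// });
```

Sharing with another user could then presumably be modelled as a table of (owner, grantee, prefix) rows consulted by the same check.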
If anybody has any experience with this it would be much appreciated if you could point me in the right direction!

I have no prior experience with your current problem but I did some research because I am currently preparing for an AWS exam and the question interests me.
If your bucket has multiple folders inside (one per user), I found out that you can restrict access by specifying an IAM policy per user.
For instance:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUserToSeeBucketListInTheConsole",
      "Action": ["s3:ListAllMyBuckets", "s3:GetBucketLocation"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::*"]
    },
    {
      "Sid": "AllowRootAndHomeListingOfCompanyBucket",
      "Action": ["s3:ListBucket"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::my-company"],
      "Condition": {"StringEquals": {"s3:prefix": ["", "home/", "home/David"], "s3:delimiter": ["/"]}}
    },
    {
      "Sid": "AllowListingOfUserFolder",
      "Action": ["s3:ListBucket"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::my-company"],
      "Condition": {"StringLike": {"s3:prefix": ["home/David/*"]}}
    },
    {
      "Sid": "AllowAllS3ActionsInUserFolder",
      "Effect": "Allow",
      "Action": ["s3:*"],
      "Resource": ["arn:aws:s3:::my-company/home/David/*"]
    }
  ]
}
Or even better, you can use policy variables and create a single policy that applies to all your users:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGroupToSeeBucketListInTheConsole",
      "Action": ["s3:ListAllMyBuckets", "s3:GetBucketLocation"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::*"]
    },
    {
      "Sid": "AllowRootAndHomeListingOfCompanyBucket",
      "Action": ["s3:ListBucket"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::my-company"],
      "Condition": {"StringEquals": {"s3:prefix": ["", "home/"], "s3:delimiter": ["/"]}}
    },
    {
      "Sid": "AllowListingOfUserFolder",
      "Action": ["s3:ListBucket"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::my-company"],
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "home/${aws:username}/*",
            "home/${aws:username}"
          ]
        }
      }
    },
    {
      "Sid": "AllowAllS3ActionsInUserFolder",
      "Action": ["s3:*"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::my-company/home/${aws:username}/*"]
    }
  ]
}
Source: https://aws.amazon.com/fr/blogs/security/writing-iam-policies-grant-access-to-user-specific-folders-in-an-amazon-s3-bucket/
Hope it helps!

Related

AWS resource policy on API Gateway: anonymous is not authorized to perform invoke on resource with explicit deny

The resource policy below on AWS API Gateway generates this response when calling from outside as well as inside the VPC:
{"Message":"User: anonymous is not authorized to perform: execute-api:Invoke on resource: arn:aws:execute-api:ap-south-1:********2818:d5cbeh0e78/default/GET/autoimageresize-staging with an explicit deny"}
Resource Policy: whitelist VPC
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "execute-api:Invoke",
      "Resource": "arn:aws:execute-api:ap-south-1:********2818:d5cbeh0e78/*/*/*"
    },
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "execute-api:Invoke",
      "Resource": "arn:aws:execute-api:ap-south-1:********2818:d5cbeh0e78/*/*/*",
      "Condition": {
        "StringNotEquals": {
          "aws:SourceVpc": "vpc-********"
        }
      }
    }
  ]
}
whereas whitelisting the concerned IP works quite well with the resource policy below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "execute-api:Invoke",
      "Resource": "arn:aws:execute-api:ap-south-1:********2818:d5cbeh0e78/*/*/*"
    },
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "execute-api:Invoke",
      "Resource": "arn:aws:execute-api:ap-south-1:********2818:d5cbeh0e78/*/*/*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": "xx.xxx.xxx.xx"
        }
      }
    }
  ]
}
Can anyone suggest where I might be going wrong? Also, how can I verify what aws:SourceVpc value I am actually getting, if it isn't what I expect? One more thing: in the AWS docs I sometimes see the key written as SourceVpc and elsewhere as sourceVpc.
Thanks in advance.
I've found that you need two things to create a private REST API. First you need a Resource Policy which allows access from the VPC. Then you need to create a VPC Endpoint in the VPC that is trying to access the private REST API.
When troubleshooting/revising a Resource Policy, the following steps must be executed in order.
Save the Resource Policy.
Re-deploy the API (Resources - Actions | Deploy API)
Wait 10 - 15 seconds.
Failure to wait for the changes to propagate will result in confusing results.
When troubleshooting these types of problems, I haven't found the API Gateway logs to be that useful. There is usually a single entry that says "The client is not authorized to perform this operation" which is analogous to HTTP 403 (Forbidden).
An API Gateway API is public unless you create a private VPC endpoint for API Gateway.
Only when a private VPC endpoint exists can you:
use the aws:SourceVpc condition
use a SourceIp condition with private IP addresses
This is because the traffic then comes from the internal network.
Also note: once a private VPC endpoint is created, all the existing public APIs in the same account can be accessed only through their custom domains.
Finally, make sure to redeploy the API for the resource policy changes to take effect.
Reference:
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-resource-policies-aws-condition-keys.html
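One detail that would explain the symptom above: StringNotEquals is a negated condition operator, and negated operators also match when the key is absent from the request context. A request arriving over the public internet carries no aws:sourceVpc at all (that key only exists when traffic comes through a VPC endpoint), so the Deny matches every caller. A minimal model of that evaluation, with a hypothetical VPC id:

```javascript
// Models IAM's StringNotEquals semantics for the aws:sourceVpc key:
// when the key is missing from the request context, the negated operator
// still matches, so the Deny statement applies.
const ALLOWED_VPC = 'vpc-12345678'; // hypothetical

function denyMatches(requestContext) {
  const v = requestContext['aws:sourceVpc'];
  return v === undefined || v !== ALLOWED_VPC;
}
```

So without a VPC endpoint, even calls made from "inside" the VPC reach API Gateway over the public internet, carry no aws:sourceVpc, and hit the explicit deny.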

Securing S3 files: how to prevent the content from being displayed when the link is entered directly in a browser?

I am using Node.js with the multer-s3 npm package to upload my video/audio/image files to an Amazon S3 bucket.
I am using the policy below to enable permission for viewing my files through my mobile application:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my bucket/*"
    }
  ]
}
But the problem is that whenever I copy the link to one of my S3 files and paste it into a browser, the file gets downloaded (or shown).
How can I prevent this?
I don't want my files to be downloaded or shown when the link is entered in the address bar; they should only be shown or streamed through my mobile and web applications.
How can I achieve this?
You might want to consider serving your content through CloudFront in this case using either Signed URLs or Signed Cookies and use an Origin Access Identity to restrict access to your Amazon S3 content.
This way, only CloudFront can access your S3 content and only clients with valid signed URL/cookies can access your CloudFront distribution.
After you setup your Origin Access Identity in CloudFront, your bucket policy should be something like:
{
  "Version": "2012-10-17",
  "Id": "Policy1476619044274",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity <Your Origin Access Identity ID>"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
The format for specifying the Origin Access Identity in a Principal statement is:
"Principal": {
  "CanonicalUser": "<Your Origin Access Identity Canonical User ID>"
}
or
"Principal": {
  "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity <Your Origin Access Identity ID>"
}
See: Serving Private Content through CloudFront.

How do I connect my Alexa app to DynamoDB with the Alexa Node SDK?

I have created a Lambda function that attempts to make a connection to DynamoDB through the Alexa Skills Kit for Node. According to the documentation, all you need to connect to the database is:
alexa.dynamoDBTableName = 'YourTableName'; // That's it!
For some reason I get the following error
User: arn:aws:sts::XXXXXXXXXXX:assumed-role/lambda_basic_dynamo/MyApp is not authorized to perform: dynamodb:GetItem on resource: arn:aws:dynamodb:us-east-1:XXXXXXXXX:table/McCannHealth"
The weird thing is that I made a new role called lambda_full_access and changed the skill to use it, but it's still assuming another role. What am I doing wrong?
I don't know if you already figured it out, but you'd have to edit the permission JSON yourself. So when you're creating a new IAM role, open the "Advanced settings" and change the content of the JSON to:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:*",
        "cognito-identity:ListIdentityPools",
        "cognito-sync:GetCognitoEvents",
        "cognito-sync:SetCognitoEvents",
        "dynamodb:*",
        "events:*",
        "iam:ListAttachedRolePolicies",
        "iam:ListRolePolicies",
        "iam:ListRoles",
        "iam:PassRole",
        "kinesis:DescribeStream",
        "kinesis:ListStreams",
        "kinesis:PutRecord",
        "lambda:*",
        "logs:*",
        "s3:*",
        "sns:ListSubscriptions",
        "sns:ListSubscriptionsByTopic",
        "sns:ListTopics",
        "sns:Subscribe",
        "sns:Unsubscribe",
        "sns:Publish",
        "sqs:ListQueues",
        "sqs:SendMessage",
        "kms:ListAliases",
        "ec2:DescribeVpcs",
        "ec2:DescribeSubnets",
        "ec2:DescribeSecurityGroups",
        "iot:GetTopicRule",
        "iot:ListTopicRules",
        "iot:CreateTopicRule",
        "iot:ReplaceTopicRule",
        "iot:AttachPrincipalPolicy",
        "iot:AttachThingPrincipal",
        "iot:CreateKeysAndCertificate",
        "iot:CreatePolicy",
        "iot:CreateThing",
        "iot:ListPolicies",
        "iot:ListThings",
        "iot:DescribeEndpoint"
      ],
      "Resource": "*"
    }
  ]
}
The above gives full access to DynamoDB. JSON for other permissions is available in the AWS docs as well.
This is clearly a permission issue. You have selected the role "lambda_full_access". If you created that role yourself, please check that you gave it the dynamodb:GetItem permission. If you selected one of the default roles, you can edit that role and attach a custom policy like the one below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "YouID",
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:Scan"
      ],
      "Resource": [
        "YOUR DYNAMODB ARN HERE"
      ]
    }
  ]
}
Your role will then have full Lambda access and DynamoDB access for only GetItem and Scan. If you want more permissions, such as PutItem, you can add them.
Alternatively, you can create a custom role and attach policies for Lambda access, plus a custom policy with the settings given above.
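If you manage these policies from code rather than the console, the shape above is easy to generate; a purely illustrative helper (the function name is made up):

```javascript
// Build a least-privilege DynamoDB policy document for the given actions.
function dynamoPolicy(tableArn, actions) {
  return {
    Version: '2012-10-17',
    Statement: [{
      Effect: 'Allow',
      Action: actions.map((a) => `dynamodb:${a}`),
      Resource: [tableArn],
    }],
  };
}
```

For example, `dynamoPolicy(tableArn, ['GetItem', 'Scan', 'PutItem'])` produces the same document with PutItem added.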

Amazon S3 - Failed to load resource: the server responded with a status of 403 (Forbidden)

This is the first time I use Amazon S3. I've read questions and answers. They all seem similar to this problem but none of the answers fixed it for me.
I can successfully upload pictures, but I can't get them to display (403 Forbidden status).
This is the bucket's policy:
{
  "Version": "2012-10-17",
  "Id": "Policy1475848347662",
  "Statement": [
    {
      "Sid": "Stmt1475848335256",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::235314345576:user/userx"
      },
      "Action": [
        "s3:*"
      ],
      "Resource": "arn:aws:s3:::bucketdev/*"
    }
  ]
}
This is the CORS config:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
This is the user's policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:ListBucket",
        "s3:ListBucketVersions",
        "s3:GetObject",
        "s3:ListBucketMultipartUploads"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    }
  ]
}
Using this component: https://www.npmjs.com/package/react-dropzone-s3-uploader.
Can anyone help?
Thanks.
There are two things to note:
Where to assign permissions for access to Amazon S3
Which permissions to assign
Where to assign permissions for access to Amazon S3
Objects in Amazon S3 are private by default. There are three ways to assign permission to access objects:
Object ACLs (Access Control Lists): These are permissions on the objects themselves
Bucket Policies: This is a set of rules applied to the bucket as a whole, but it can also specify permissions related to a subset of a bucket (eg a particular path within the bucket)
IAM Policies that are applied to IAM Users, Groups or Roles: These permissions apply specifically to those entities
If your intention is to keep the content of the S3 bucket private but allow access to a specific user, then you should assign permissions to the IAM User (as you have done). It also means that you do not require a Bucket Policy since granting access via any one of the above methods is sufficient.
See documentation: Guidelines for Using the Available Access Policy Options
Also, a CORS policy is only required if an HTML page served from one domain refers to content from another domain. It is quite possible that you do not require the CORS policy -- do some testing to confirm whether this is the case.
Which permissions to assign
This is always confusing... Some permissions are associated with the Bucket, while some permissions are associated with the contents of the Bucket.
The following permissions from your policy should be at the Bucket level (arn:aws:s3:::MyBucket):
s3:CreateBucket
s3:DeleteBucket
s3:DeleteBucketPolicy
s3:GetBucketPolicy
s3:GetLifecycleConfiguration
s3:ListBucket
s3:ListBucketMultipartUploads
s3:PutBucketPolicy
s3:PutLifecycleConfiguration
Other API calls (eg GetObject) should be at the object-level (eg arn:aws:s3:::MyBucket/*).
See: Specifying Permissions in a Policy
Therefore, the policy associated with your IAM User should look more like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1",
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:ListBucket",
        "s3:ListBucketVersions",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": [
        "arn:aws:s3:::MY-BUCKET"
      ]
    },
    {
      "Sid": "Stmt2",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::MY-BUCKET/*"
      ]
    }
  ]
}
This grants GetObject permission to objects within the bucket, rather than on the bucket itself.
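The bucket-level vs object-level split can be captured as a small lookup. A sketch only — the list below is illustrative, not exhaustive:

```javascript
// S3 actions that apply to the bucket ARN itself; everything else here is
// treated as object-level and mapped to bucket/*.
const BUCKET_LEVEL = new Set([
  's3:CreateBucket', 's3:DeleteBucket', 's3:DeleteBucketPolicy',
  's3:GetBucketPolicy', 's3:GetLifecycleConfiguration', 's3:ListBucket',
  's3:ListBucketMultipartUploads', 's3:PutBucketPolicy',
  's3:PutLifecycleConfiguration',
]);

function resourceFor(action, bucket) {
  return BUCKET_LEVEL.has(action)
    ? `arn:aws:s3:::${bucket}`
    : `arn:aws:s3:::${bucket}/*`;
}
```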
In case somebody faces the same problem: make sure all files were actually uploaded to the bucket, because the "Add files" button does not upload nested folders. Better to use drag and drop.

AWS S3 deny all access except for 1 user - bucket policy

I have set up a bucket in AWS S3. I granted access to the bucket for my IAM user with an Allow policy (using the Bucket Policy Editor). I was able to save files to the bucket with that user. I have worked with buckets for media serving before, so the default behaviour seems to be to give public permission to view the files (images), which is fine for most websites.
In my new project I want to be able to access the S3 bucket with an IAM user but want to deny all other access. No public read access, no access whatsoever besides the IAM user who should have full access save/delete whatever.
What I read suggests creating a Deny policy using the NotPrincipal attribute; that way it will still allow that user but deny everyone else. For good measure I also added an Allow policy just for the user I want:
{
  "Version": "2012-10-17",
  "Id": "Policy**********",
  "Statement": [
    {
      "Sid": "Stmt**********",
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": "arn:aws:iam::*********:user/my_user"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::my_bucket/*"
    },
    {
      "Sid": "Stmt*************",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::**********:user/my_user"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::my_bucket/*"
    }
  ]
}
But this is denying access to everyone even my_user. Again I can confirm that I had access when I just used the Allow portion of the policy above, but then the public also has read access, which I am trying to turn off.
How can I set up my bucket policy to give full access to only the unique IAM user and deny all access to any other user?
Thanks.
It's quite simple:
By default, buckets have no public access
Do NOT add a Bucket Policy, since you do not want to grant public access
Instead, add a policy to the IAM User granting them access
The main thing to realise is that the IAM User cannot access content via unauthenticated URLs (eg s3.amazonaws.com/bucket/file.jpg) because S3 doesn't know who they are. When the IAM User accesses the content, they will need to use authenticated access so that S3 knows who they are, such as:
Accessing via the AWS Management Console using a Username + Password
Accessing via the AWS Command-Line Interface (CLI) using an Access Key + Secret Key
The policy on the IAM User would look something like:
{
  "Effect": "Allow",
  "Action": "s3:*",
  "Resource": [
    "arn:aws:s3:::my_bucket",
    "arn:aws:s3:::my_bucket/*"
  ]
}
If I understand correctly, you want to allow bucket access for only one IAM user. You can use a bucket policy; I found this in the NetApp documentation:
{
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::95390887230002558202:federated-user/Bob"
      },
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::examplebucket",
        "arn:aws:s3:::examplebucket/*"
      ]
    },
    {
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": "arn:aws:iam::95390887230002558202:federated-user/Bob"
      },
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::examplebucket",
        "arn:aws:s3:::examplebucket/*"
      ]
    }
  ]
}
