AWS Disallow Actions as a Root User with SCP - security

AWS best practices recommend securing AWS accounts by disallowing account access with root user credentials.
This is the template they provide:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "GRRESTRICTROOTUSER",
      "Effect": "Deny",
      "Action": "*",
      "Resource": [
        "*"
      ],
      "Condition": {
        "StringLike": {
          "aws:PrincipalArn": [
            "arn:aws:iam::*:root"
          ]
        }
      }
    }
  ]
}
The way I understand this is that if I attach this policy to my account, I will no longer have permissions as root. But I do. And if I didn't, it would mean I had locked myself out of any operation.
However, if I attach it to another account I created, that account's root user and any IAM users in that account no longer have permissions.
I am confused. Here are the docs for Disallow Creation of Access Keys for the Root User.
Update
I am implementing the policy through an Organizations SCP.
I think the policy is supposed to be implemented through Control Tower.
That is why I think what I am trying to achieve is not possible. I am still not clear about it, so this is not an answer.

It might be that the account where this SCP is not working is your management (formerly called master) account.
According to the docs:
Important:
SCPs don't affect users or roles in the management account. They affect only the member accounts in your organization.
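For illustration, here is a minimal sketch of creating the SCP and attaching it to a member account with boto3, assuming the Organizations client is used with management-account credentials and using a placeholder policy name and member account ID. Attached this way, the SCP constrains that member account's root user and IAM users, but never principals in the management account itself.
import json
import boto3

# Must run with credentials from the organization's management account.
orgs = boto3.client("organizations")

scp_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "GRRESTRICTROOTUSER",
        "Effect": "Deny",
        "Action": "*",
        "Resource": ["*"],
        "Condition": {"StringLike": {"aws:PrincipalArn": ["arn:aws:iam::*:root"]}},
    }],
}

# Create the SCP and attach it to a member account (or an OU id like ou-xxxx-xxxxxxxx).
policy = orgs.create_policy(
    Name="RestrictRootUser",                    # placeholder policy name
    Description="Deny actions taken with root user credentials",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)
orgs.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="111122223333",                    # placeholder member account id
)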


Why are my lambda/alexa-hosted skill permissions being denied?

My goal is to integrate an Alexa-hosted skill with AWS IoT. I'm getting an access denied exception running the following Python code from this thread:
import codecs
import csv
import urllib.request

import boto3

iota = boto3.client('iotanalytics')
response = iota.get_dataset_content(datasetName='my_dataset_name', versionId='$LATEST', roleArn = "arn:aws:iam::123456789876:role/iotTest")
contentState = response['status']['state']
if contentState == 'SUCCEEDED':
    url = response['entries'][0]['dataURI']
    stream = urllib.request.urlopen(url)
    reader = csv.DictReader(codecs.iterdecode(stream, 'utf-8'))
What's weird is that the get_dataset_content() method described here makes no mention of needing permissions or credentials. Despite this, I have also gone through the steps to use personal AWS resources with my Alexa-hosted skill, with no luck. As far as I can tell there is no place for me to specify the ARN of the role with the correct permissions. What am I missing?
Oh, and here's the error message the code above throws:
botocore.exceptions.ClientError: An error occurred (AccessDeniedException) when calling the GetDatasetContent operation: User: arn:aws:sts::123456789876:assumed-role/AlexaHostedSkillLambdaRole/a224ab4e-8192-4469-b56c-87ac9a34a3e8 is not authorized to perform: iotanalytics:GetDatasetContent on resource: arn:aws:iotanalytics:us-east-1:123456789876:dataset/my_project_name
I have created a role called demo, which has complete admin access. I have also given it the following trust relationship:
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "iotanalytics.amazonaws.com"
},
"Action": "sts:AssumeRole"
},
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::123456789876:role/AlexaHostedSkillLambdaRole"
},
"Action": "sts:AssumeRole"
}
]
}
The Trust relationships tab displays this as well:
Trusted entities
The identity provider(s) iotanalytics.amazonaws.com
arn:aws:iam::858273942573:role/AlexaHostedSkillLambdaRole
I ran into this today, and after an hour of pondering what was going on, I figured out my problem. I think it may be the same as what you were running into.
As it turns out, most of the guides out there don't mention the fact that you have to do some work to have the assumed role be the actual role that is used when you build up the boto3 resource or client.
This is a good reference for that - AWS: Boto3: AssumeRole example which includes role usage
Basically, from my understanding, if you do not do that, the boto3 commands will still execute under the same base role that the Alexa Lambda uses; you must first assume the role, and then use its temporary credentials.
Additionally, the role you're assuming must have the privileges it needs to do what you are trying to do, but that's the easy part.
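As a rough sketch of that pattern (assuming boto3, the role ARN from your code, and that the role's trust policy allows the skill's Lambda role to assume it), you first call STS and then build the client from the temporary credentials. Note that get_dataset_content itself takes no roleArn parameter; the role is applied through the client's credentials.
import boto3

# Assume the role that actually has the IoT Analytics permissions.
sts = boto3.client("sts")
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::123456789876:role/iotTest",  # role ARN from the question
    RoleSessionName="alexa-iot-analytics",             # placeholder session name
)
creds = assumed["Credentials"]

# Build the client from the temporary credentials instead of the Lambda's base role.
iota = boto3.client(
    "iotanalytics",
    region_name="us-east-1",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
response = iota.get_dataset_content(datasetName="my_dataset_name", versionId="$LATEST")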
As I look at your code, I see: roleArn = "arn:aws:iam::123456789876:role/iotTest"
Replace it with the correct ARN of a role that allows iotanalytics:GetDatasetContent.
In addition, I assume you didn't paste all of your code, since you are trying to access arn:aws:iotanalytics:us-east-1:123456789876:dataset/my_project_name.
I doubt that your account ID is really 123456789876; it looks like some more ARNs are missing from your code.

Can I use Azure Active Directory to hold my application's user store?

I'm designing a solution for an ERP requirement. The client insists on using AAD for single-point management of users across different applications.
Since AAD can act as an OAuth service, I intend to use it as my OAuth server and consume its tokens in my Web API services. But I was wondering how I can capture failed user login attempts, as I need to apply a locking mechanism.
Having found that AAD can also handle this locking mechanism through configuration, I'm now left with the question of whether I can just use AAD as my user store, meaning the users, their credentials, and their roles would be stored in AAD, while the permissions for each role and other data would be stored in my application's database.
Is this a feasible solution? Or is there a different way of handling this?
Note: We are using a NoSQL database.
Yes, this is a feasible solution. You can use application roles to assign roles to users.
You can define the application roles by adding them to the application manifest. Then you can assign these roles to a user.
"appRoles": [
{
"allowedMemberTypes": [
"User"
],
"description": "Creators can create Surveys",
"displayName": "SurveyCreator",
"id": "1b4f816e-5eaf-48b9-8613-7923830595ad",
"isEnabled": true,
"value": "SurveyCreator"
},
{
"allowedMemberTypes": [
"User"
],
"description": "Administrators can manage the Surveys in their tenant",
"displayName": "SurveyAdmin",
"id": "c20e145e-5459-4a6c-a074-b942bbd4cfe1",
"isEnabled": true,
"value": "SurveyAdmin"
}
],
The user list with roles listed.
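As a rough sketch of the split you describe (roles in AAD, permissions in your own database), the Web API can read the roles claim that AAD puts into the validated token and map it to permissions loaded from your NoSQL store. The permission names and the in-memory mapping below are hypothetical stand-ins for data that would live in your application's database.
# Hypothetical mapping that would live in the application's NoSQL database.
ROLE_PERMISSIONS = {
    "SurveyCreator": {"survey:create", "survey:read"},
    "SurveyAdmin": {"survey:create", "survey:read", "survey:delete"},
}

def permissions_for(claims: dict) -> set:
    """Collect permissions for every AAD app role found in the token's 'roles' claim."""
    granted = set()
    for role in claims.get("roles", []):
        granted |= ROLE_PERMISSIONS.get(role, set())
    return granted

# Example: claims decoded from a validated AAD access token for a SurveyAdmin user.
claims = {"roles": ["SurveyAdmin"], "name": "Jane"}
print(permissions_for(claims))  # {'survey:create', 'survey:read', 'survey:delete'}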

User: anonymous is not authorized to perform: es:ESHttpPost on resource:

I'm having this issue with my app.
My app is deployed to a Heroku server, and I'm using Elasticsearch, which is deployed on AWS.
When I try to access Elasticsearch locally, on the AWS domain, everything works.
But when I try to access it through my Heroku domain (both from Postman), I get a 503 error with this message:
2017-12-21T13:36:52.982331+00:00 app[web.1]: statusCode: 403,
2017-12-21T13:36:52.982332+00:00 app[web.1]: response: '{"Message":"User: anonymous is not authorized to perform: es:ESHttpPost on resource: houngrymonkey"}',
My access policy is:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:eu-central-1:[ACCOUNT_ID]:domain/[ES_DOMAIN]/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "[heroku static ip]"
        }
      }
    }
  ]
}
Can anyone tell me what my problem is here?
Thanks!
I've experienced the same issue with ES and Lambda. It's not exactly your case, but maybe it'll be helpful. Here is what I actually did to resolve the issue:
1) In the Lambda (Node.js v6.10) I added the following code:
var creds = new AWS.EnvironmentCredentials('AWS');
....
// inside "post to ES"-method
var signer = new AWS.Signers.V4(req, 'es');
signer.addAuthorization(creds, new Date());
....
// post request to ES goes here
With those lines my exception changed from
"User: anonymous..."
to
"User: arn:aws:sts::xxxx:assumed-role/yyyy/zzzzz"
That was exactly the case.
2) I've updated the ES policy in the following way:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:sts::xxxx:assumed-role/yyyy/zzzzz" (which was in the exception)
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:[region]:[account-id]:domain/[es-domain]/*"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:[region]:[account-id]:domain/[es-domain]/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "1.2.3.4/32",
            ....
          ]
        }
      }
    }
  ]
}
Hope that will help.
More solutions to the error mentioned in the title are described here:
If you are using a client that doesn't support request signing (such as a browser), consider the following:
Use an IP-based access policy. IP-based policies allow unsigned requests to an Amazon ES domain.
Be sure that the IP addresses specified in the access policy use CIDR notation. Access policies use CIDR notation when checking IP addresses against the access policy.
Verify that the IP addresses specified in the access policy are the same ones used to access your Elasticsearch cluster. You can get the public IP address of your local computer at https://checkip.amazonaws.com/.
Note: If you're receiving an authorization error, check to see if you are using a public or private IP address. IP-based access policies can't be applied to Amazon ES domains that reside within a virtual private cloud (VPC). This is because security groups already enforce IP-based access policies. For public access, IP-based policies are still available. For more information, see About access policies on VPC domains.
If you are using a client that supports request signing, check the following:
Be sure that your requests are correctly signed. AWS uses the Signature Version 4 signing process to add authentication information to AWS requests. Requests from clients that aren't compatible with Signature Version 4 are rejected with a "User: anonymous is not authorized" error. For examples of correctly signed requests to Amazon ES, see Making and signing Amazon ES requests; a small signing sketch also follows this list.
Verify that the correct Amazon Resource Name (ARN) is specified in the access policy.
If your Amazon ES domain resides within a VPC, configure an open access policy with or without a proxy server. Then, use security groups to control access. For more information, see About access policies on VPC domains.
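Regarding the signed-request check above, here is a minimal Signature Version 4 sketch in Python, assuming botocore and requests are available and using placeholder region, endpoint, and index values. With the signature headers attached, the service sees the real IAM principal instead of "anonymous".
import boto3
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

region = "eu-central-1"                                             # placeholder
endpoint = "https://search-mydomain.eu-central-1.es.amazonaws.com"  # placeholder
path = "/my-index/_doc"                                             # placeholder

# Sign the request with the caller's AWS credentials (service name "es").
credentials = boto3.Session().get_credentials()
request = AWSRequest(
    method="POST",
    url=endpoint + path,
    data='{"field": "value"}',
    headers={"Content-Type": "application/json"},
)
SigV4Auth(credentials, "es", region).add_auth(request)

# Send it; the SigV4 headers identify the caller, so the policy can match its ARN.
response = requests.post(request.url, data=request.data, headers=dict(request.headers))
print(response.status_code, response.text)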

How do I connect my alexa app to dynamo db with the alexa node sdk?

I have created a Lambda function that attempts to make a connection to DynamoDB through the Alexa Skills Kit SDK for Node. According to the documentation, all you need to connect to the database is:
alexa.dynamoDBTableName = 'YourTableName'; // That's it!
For some reason I get the following error
User: arn:aws:sts::XXXXXXXXXXX:assumed-role/lambda_basic_dynamo/MyApp is not authorized to perform: dynamodb:GetItem on resource: arn:aws:dynamodb:us-east-1:XXXXXXXXX:table/McCannHealth"
The weird thing is that I made a new role called lambda_full_access and changed the skill to use it, but it's still assuming another role. What am I doing wrong?
I don't know if you already figured it out, but you'd have to edit the permission JSON yourself. So when you're creating a new IAM role, open the "Advanced settings" and change the content of the JSON to:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:*",
        "cognito-identity:ListIdentityPools",
        "cognito-sync:GetCognitoEvents",
        "cognito-sync:SetCognitoEvents",
        "dynamodb:*",
        "events:*",
        "iam:ListAttachedRolePolicies",
        "iam:ListRolePolicies",
        "iam:ListRoles",
        "iam:PassRole",
        "kinesis:DescribeStream",
        "kinesis:ListStreams",
        "kinesis:PutRecord",
        "lambda:*",
        "logs:*",
        "s3:*",
        "sns:ListSubscriptions",
        "sns:ListSubscriptionsByTopic",
        "sns:ListTopics",
        "sns:Subscribe",
        "sns:Unsubscribe",
        "sns:Publish",
        "sqs:ListQueues",
        "sqs:SendMessage",
        "kms:ListAliases",
        "ec2:DescribeVpcs",
        "ec2:DescribeSubnets",
        "ec2:DescribeSecurityGroups",
        "iot:GetTopicRule",
        "iot:ListTopicRules",
        "iot:CreateTopicRule",
        "iot:ReplaceTopicRule",
        "iot:AttachPrincipalPolicy",
        "iot:AttachThingPrincipal",
        "iot:CreateKeysAndCertificate",
        "iot:CreatePolicy",
        "iot:CreateThing",
        "iot:ListPolicies",
        "iot:ListThings",
        "iot:DescribeEndpoint"
      ],
      "Resource": "*"
    }
  ]
}
The above gives full access to DynamoDB. The JSON for other permissions is available on AWS as well.
This is clearly a permission issue. You have selected the role "lambda_full_access". If you created that role yourself, please check that you have given it the dynamodb:GetItem permission. If you selected one of the default roles, you can either edit that role and attach a custom policy like the one below,
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "YouID",
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:Scan"
      ],
      "Resource": [
        "YOUR DYNAMODB ARN HERE"
      ]
    }
  ]
}
This means your role will now have full Lambda access plus DynamoDB access for only "GetItem" and "Scan". If you want more permissions, like "PutItem", you can add them.
Alternatively, you can create a custom role, attach policies for Lambda access, and create a custom policy with the settings given above.
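As a rough sketch of attaching such a policy programmatically (assuming boto3, the role name from the error message, and placeholder policy name and table ARN), you can put the inline policy on the execution role the skill actually assumes:
import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Scan"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/McCannHealth",  # placeholder ARN
    }],
}

# Attach the inline policy to the Lambda execution role seen in the error.
iam.put_role_policy(
    RoleName="lambda_basic_dynamo",          # the role from the error message
    PolicyName="AlexaSkillDynamoDBAccess",   # placeholder policy name
    PolicyDocument=json.dumps(policy_document),
)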

AWS S3 deny all access except for 1 user - bucket policy

I have set up a bucket in AWS S3. I granted access to the bucket for my IAM user with an Allow policy (using the Bucket Policy Editor). I was able to save files to the bucket with that user. I have been working with buckets for media serving before, so it seems the default is to give the public permission to view the files (images), which is fine for most web sites.
In my new project I want to be able to access the S3 bucket with an IAM user but want to deny all other access: no public read access, no access whatsoever besides the IAM user, who should have full access to save/delete whatever.
What it seems like I should do, I was reading about here: it says to create a Deny policy using the NotPrincipal attribute, so that it will still allow that user but deny everyone else. For good measure I also added an Allow policy just for the user I want:
{
  "Version": "2012-10-17",
  "Id": "Policy**********",
  "Statement": [
    {
      "Sid": "Stmt**********",
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": "arn:aws:iam::*********:user/my_user"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::my_bucket/*"
    },
    {
      "Sid": "Stmt*************",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::**********:user/my_user"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::my_bucket/*"
    }
  ]
}
But this is denying access to everyone, even my_user. Again, I can confirm that I had access when I used just the Allow portion of the policy above, but then the public also had read access, which I am trying to turn off.
How can I set up my bucket policy to give full access to only the one IAM user and deny all access to any other user?
Thanks.
It's quite simple:
By default, buckets have no public access
Do NOT add a Bucket Policy, since you do not want to grant public access
Instead, add a policy to the IAM User granting them access
The main thing to realise is that the IAM User cannot access content via unauthenticated URLs (eg s3.amazonaws.com/bucket/file.jpg) because S3 doesn't know who they are. When the IAM User accesses the content, they will need to use authenticated access so that S3 knows who they are, such as:
Accessing via the AWS Management Console using a Username + Password
Accessing via the AWS Command-Line Interface (CLI) using an Access Key + Secret Key
The policy on the IAM User would look something like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my_bucket",
        "arn:aws:s3:::my_bucket/*"
      ]
    }
  ]
}
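To illustrate the authenticated-access point above, here is a minimal boto3 sketch (the profile name and object key are placeholders) showing the IAM user reading and writing the bucket with its own credentials, while anonymous URL access stays denied:
import boto3

# Credentials profile configured for the IAM user (access key + secret key).
session = boto3.Session(profile_name="my_user")
s3 = session.client("s3")

# Authenticated requests succeed because S3 knows who is calling.
s3.put_object(Bucket="my_bucket", Key="example.txt", Body=b"hello")
obj = s3.get_object(Bucket="my_bucket", Key="example.txt")
print(obj["Body"].read())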
If I understand correctly, you want to allow access to the bucket for only one IAM user. We can use a bucket policy for that. I found this in the NetApp documentation:
{
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::95390887230002558202:federated-user/Bob"
      },
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::examplebucket",
        "arn:aws:s3:::examplebucket/*"
      ]
    },
    {
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": "arn:aws:iam::95390887230002558202:federated-user/Bob"
      },
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::examplebucket",
        "arn:aws:s3:::examplebucket/*"
      ]
    }
  ]
}
