Terraform - how to remove ability to edit via console? - terraform

I have looked in the Terraform documentation for a solution to this issue but have not found anything. My AWS account has thousands of EC2 instances, SQS queues, SNS topics, DynamoDB tables and plenty of other resources. Some of them are managed by Terraform and some are not. I want to make it so that a given Terraform resource cannot be edited via the console. A simple example of an ideal configuration is as follows:
resource "aws_sns_topic" "my_topic" {
name = "my_topic_name"
is_console_configurable = false
}
Is something like the above possible to do? Or what is the best way to go about solving this issue?
Thanks in advance

Terraform itself can't directly control what the AWS console allows or does not allow.
I think in order to get an effect like this you'd need to use very granular IAM policies so that the credentials that your team is using to log in to the AWS Console do not have access to make changes to the objects managed by Terraform. You'd then use different credentials to run Terraform which do have the necessary access.
Coordinating policies at such a fine level of detail will be complicated, though. I think the closest approximation of what you showed in your example would be an IAM policy containing "Deny" statements, which you would then associate with all of the principals associated with users who have AWS Console access.
resource "aws_sns_topic" "my_topic" {
name = "my_topic_name"
}
resource "aws_iam_policy" "disable_sns_console" {
name = "SNS Topic Disable Console"
# ...
policy = jsonencode({
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Resource": aws_sns_topic.my_topic.arn,
},
]
})
}
You'd need to find some suitable IAM user, role, or group object to attach this policy to and ensure that every credential used for console access is associated with whatever object confers this policy.
This sort of "default allow, deny specific objects" policy is tricky because it will "fail open" if you don't set it up correctly. However, if your goal is more to inspire good behavior than to implement an infallible security layer then perhaps this compromise is reasonable.

Related

Applying ServiceAccount specific OPA policies through Gatekeeper in kubernetes

We are trying to replace our existing PSPs in Kubernetes with OPA policies using Gatekeeper. I'm using the default templates provided by the Gatekeeper library (https://github.com/open-policy-agent/gatekeeper-library/tree/master/library/pod-security-policy) and have defined the corresponding constraints.
However, I can't figure out how I can apply a policy to a specific ServiceAccount.
For example, how do I apply the allow-privilege-escalation policy only to a ServiceAccount named awsnode?
With PSPs I create a Role/ClusterRole for the required podsecuritypolicies and a RoleBinding to allow the awsnode ServiceAccount to use the required PSP. I'm struggling to understand how to achieve the same using Gatekeeper OPA policies.
Thank you.
Apparently PSPs and Gatekeeper OPA policies are designed to achieve pod security at different levels. Here is the response from AWS support on the above question.
Gatekeeper constraint templates (and the corresponding constraint CRDs defined from the templates) apply to a larger scope of Kubernetes resources than just pods. Gatekeeper extends additional functionality that RBAC cannot provide at this stage.
Gatekeeper itself cannot be managed by RBAC (by means of using verbs to restrict access to Gatekeeper constraints), because no RBAC resource keyword exists for Gatekeeper policy constraints (at least, at the time of writing this).
The PodSecurity admission controller might be an option for someone looking for a replacement for PSPs that needs to be controlled by RBAC, if the cluster is on version 1.22 or above.
I think a possible solution to applying an OPA Gatekeeper policy (a ConstraintTemplate) to a specific ServiceAccount is to make the OPA/Rego policy code reflect that filter/selection logic. Since you said you're using pre-existing policies from the gatekeeper-library, maybe changing the policy code isn't an option for you. But if changing it is an option, your OPA/Rego policy can take the pod's serviceAccount field into account. Keep in mind that with OPA Gatekeeper, the input to the Rego policy code is the entire admission request, including the spec of the pod (assuming it's pod creations that you're trying to check).
So part of the input to the Rego policy code might look like this:
"spec": {
"volumes": [... ],
"containers": [
{
"name": "apache",
"image": "docker.io/apache:latest",
"ports": [... ],
"env": [... ],
"resources": {},
"volumeMounts": [... ],
"imagePullPolicy": "IfNotPresent",
"securityContext": {... }
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"serviceAccountName": "apache-service-account",
"serviceAccount": "apache-service-account",
So gatekeeper-library's allow-privilege-escalation references input.review.object.spec.containers and finds an array of containers like "apache". Similarly, you could modify the policy code to reference input.review.object.spec.serviceAccount and find "apache-service-account". From there, it's a matter of using that information to make sure the "violation" rule only matches if the service account is one you want the policy to apply to.
Beyond that, it's possible to take the expected service account name and make it a ConstraintTemplate parameter, to make your new policy more flexible and reusable.
Hope this helps!

Why are my lambda/alexa-hosted skill permissions being denied?

My goal is to integrate an Alexa-hosted skill with AWS IoT. I'm getting an access denied exception running the following Python code from this thread:
import codecs
import csv
import urllib.request
import boto3

iota = boto3.client('iotanalytics')
response = iota.get_dataset_content(datasetName='my_dataset_name', versionId='$LATEST',
                                    roleArn="arn:aws:iam::123456789876:role/iotTest")
contentState = response['status']['state']
if contentState == 'SUCCEEDED':
    url = response['entries'][0]['dataURI']
    stream = urllib.request.urlopen(url)
    reader = csv.DictReader(codecs.iterdecode(stream, 'utf-8'))
What's weird is that the get_dataset_content() method described here has no mention of needing permissions or credentials. Despite this, I have also gone through the steps to use personal AWS resources with my alexa-hosted skill with no luck. As far as I can tell there is no place for me to specify the ARN of the role with the correct permissions. What am I missing?
Oh, and here's the error message the code above throws:
botocore.exceptions.ClientError: An error occurred (AccessDeniedException) when calling the GetDatasetContent operation: User: arn:aws:sts::123456789876:assumed-role/AlexaHostedSkillLambdaRole/a224ab4e-8192-4469-b56c-87ac9a34a3e8 is not authorized to perform: iotanalytics:GetDatasetContent on resource: arn:aws:iotanalytics:us-east-1:123456789876:dataset/my_project_name
I have created a role called demo, which has complete admin access. I have also given it the following trust relationship:
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "iotanalytics.amazonaws.com"
},
"Action": "sts:AssumeRole"
},
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::123456789876:role/AlexaHostedSkillLambdaRole"
},
"Action": "sts:AssumeRole"
}
]
}
The trust relationships tab displays this as well:
Trusted entities
The identity provider(s) iotanalytics.amazonaws.com
arn:aws:iam::858273942573:role/AlexaHostedSkillLambdaRole
I ran into this today and, after an hour of pondering what was going on, I figured out my problem, and I think it may be the same as what you were running into.
As it turns out, most of the guides out there don't mention the fact that you have to do some work to have the assumed role be the actual role that is used when you build up the boto3 resource or client.
This is a good reference for that - AWS: Boto3: AssumeRole example which includes role usage
Basically, from my understanding, if you do not do that, the boto3 commands will still execute under the same base role that the Alexa Lambda uses - you must first assume the role and then build your client from the returned temporary credentials.
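For illustration, a minimal sketch of that pattern; the role ARN is the one from your question, the session name is arbitrary, and the role must both trust AlexaHostedSkillLambdaRole and allow iotanalytics:GetDatasetContent:
import boto3

# Role ARN taken from the question; it must trust AlexaHostedSkillLambdaRole
# and allow iotanalytics:GetDatasetContent on the dataset.
sts = boto3.client('sts')
assumed = sts.assume_role(
    RoleArn='arn:aws:iam::123456789876:role/iotTest',
    RoleSessionName='alexa-iotanalytics'  # arbitrary session name
)
creds = assumed['Credentials']

# Build the client from the temporary credentials; otherwise boto3 keeps
# using the Lambda's own execution role.
iota = boto3.client(
    'iotanalytics',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken']
)
response = iota.get_dataset_content(datasetName='my_dataset_name', versionId='$LATEST')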
Additionally, your role you're assuming must have the privileges that it needs to do what you are trying to do - but that's the easy part.
As I look at your code, I see: roleArn = "arn:aws:iam::123456789876:role/iotTest"
Replace it with the correct ARN of a role that allows iotanalytics:GetDatasetContent.
In addition, I assume you didn't paste all of your code, since you are trying to access arn:aws:iotanalytics:us-east-1:123456789876:dataset/my_project_name.
I have doubts that your account ID is really 123456789876, so it looks like some ARNs are missing from the code you posted.

Read and write from/to S3 bucket using access points with boto3

I have to access an S3 bucket using access points with boto3.
I have created an access point with a policy to allow reading and writing (<access_point_arn> is my access point ARN):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "<access_point_arn>/object/*"
    }
  ]
}
The official boto3 documentation mentions access points and says that the access point ARN has to be used in place of the bucket name (https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html). There are no examples on the official documentation site for developers (https://docs.aws.amazon.com/AmazonS3/latest/dev/using-access-points.html).
So based on this information I assume that the right way to use it is:
import boto3
s3 = boto3.resource('s3')
s3.Bucket('<access_point_arn>').download_file('hello.txt', '/tmp/hello.txt')
When I execute this code in Lambda with the AmazonS3FullAccess managed policy attached I am getting a ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
Both Lambda and S3 access point are connected to the same VPC.
My first guess is that you are missing permissions that have to be defined (1) on the bucket (bucket policy) and (2) on the IAM user or role which you are using in the boto3 SDK.
(1) From the documentation I can see that
For an application or user to be able to access objects through an access point, both the access point and the underlying bucket must permit the request.
You could, for instance, add a bucket policy that is delegating access control to access points so that you don't have to specify each principal that comes via the access points. An example is given in the linked docs.
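For illustration, a minimal sketch of such a delegating bucket policy applied with boto3; the bucket name and account ID are placeholders, and the s3:DataAccessPointAccount condition key is what delegates access control to access points owned by that account:
import json
import boto3

s3 = boto3.client('s3')

# Placeholder bucket name and account ID; the s3:DataAccessPointAccount
# condition delegates access control to access points owned by this account.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "*"},
            "Action": "*",
            "Resource": [
                "arn:aws:s3:::my-example-bucket",
                "arn:aws:s3:::my-example-bucket/*"
            ],
            "Condition": {
                "StringEquals": {"s3:DataAccessPointAccount": "111122223333"}
            }
        }
    ]
}

s3.put_bucket_policy(Bucket='my-example-bucket', Policy=json.dumps(bucket_policy))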
(2) As stated in your question, you are already using AmazonS3FullAccess policy in your LambdaExecutionRole. My only guess (i.e. what happened to me) is that there is, e.g., KMS encryption on the objects in your bucket and your role is missing permissions for kms actions. Try executing the function with Admin policy attached and see if it works. If it does, find out which specific permissions are missing.
Some further notes: I assume you
didn't restrict the access point to be available within a specific VPC only.
are blocking public access.
Alternatively, skip the access point and target the bucket directly, replacing the policy resource with
"Resource": "arn:aws:s3:region_name:<12-digit account_id>:bucket_name"
and downloading with
s3.Bucket('bucket_name').download_file('hello.txt', '/tmp/hello.txt')
Hope it helps...

Azure course that tries to use Azure Cloud Shell fails with "RequestDisallowedByPolicy" using free account

I was asked by Azure Support to post this question, just to see if anyone had a useful opinion.
I am stepping through MS Azure training courses. I created the usual free account to go through these. I've gone through a few dozen of them, and am now at this one:
https://learn.microsoft.com/en-us/learn/modules/secure-and-isolate-with-nsg-and-service-endpoints/3-exercise-network-security-groups?source=learn
This attempts to use the Azure PowerShell service. I had some trouble getting to the PowerShell page. It appears that if I'm not already logged into the portal, it goes into a semi-infinite loop, trying to get to the shell page, then trying to login, then the shell page, and finally it gives up and says "We couldn't sign you in. Please try again.".
However, I was able to work around this. If in a separate tab, I log into the Azure Portal, and then go back and follow the link to Azure Cloud Shell, it passes the login gate and sends me to the page where I choose Bash or PowerShell. The course specifies using Bash. When I select that, it then asks me to create a Storage object. When I confirm that, it gives me the following error (subscription id elided):
{
  "error": {
    "code": "RequestDisallowedByPolicy",
    "target": "cs733f82532facdx4f04x95b",
    "message": "Resource 'cs733f82532facdx4f04x95b' was disallowed by policy. Policy identifiers: '[{\"policyAssignment\":{\"name\":\"Enforce tag on resource\",\"id\":\"/subscriptions/xxxxx/providers/Microsoft.Authorization/policyAssignments/740514d625684aad84ef8ca0\"},\"policyDefinition\":{\"name\":\"Enforce tag on resource\",\"id\":\"/subscriptions/xxxxx/providers/Microsoft.Authorization/policyDefinitions/be3862a6-ca1e-40b0-a024-0c0c7d1e8b3e\"}}]'.",
    "additionalInfo": [
      {
        "type": "PolicyViolation",
        "info": {
          "policyDefinitionDisplayName": "Enforce tag on resource",
          "evaluationDetails": {
            "evaluatedExpressions": [
              {
                "result": "True",
                "expressionKind": "Field",
                "expression": "tags[Department]",
                "path": "tags[Department]",
                "targetValue": "false",
                "operator": "Exists"
              }
            ]
          },
          "policyDefinitionId": "/subscriptions/xxxxx/providers/Microsoft.Authorization/policyDefinitions/be3862a6-ca1e-40b0-a024-0c0c7d1e8b3e",
          "policyDefinitionName": "be3862a6-ca1e-40b0-a024-0c0c7d1e8b3e",
          "policyDefinitionEffect": "deny",
          "policyAssignmentId": "/subscriptions/xxxxx/providers/Microsoft.Authorization/policyAssignments/740514d625684aad84ef8ca0",
          "policyAssignmentName": "740514d625684aad84ef8ca0",
          "policyAssignmentDisplayName": "Enforce tag on resource",
          "policyAssignmentScope": "/subscriptions/xxxxx",
          "policyAssignmentParameters": {
            "tagName": {
              "value": "Department"
            }
          }
        }
      }
    ]
  }
}
I think the simple conclusion from this is that my free account doesn't have enough rights to do what is needed here. The documentation I've read seems to imply that I have to get additional rights on the account in order to do this. However, I'm just using a free account that I created to go through the Azure training courses. It doesn't really make sense to ask me to do this. I've seen other Azure courses create a temporary sandbox supposedly because they have particular objects pre-created in the sandbox, but I'm also thinking that the sandbox has particular permissions that are not available in the free account. It seems to me that the only reasonable fix for this problem is for that course to be refactored to use a temporary sandbox with the correct permissions.
I'm just looking for any opinions on this, and confirmations that this is what should be done.
It doesn't look like you are creating the resource (the Cloud Shell storage account) in your free subscription, unless you added the account to a work/corporate tenant.
From the information you provide, the subscription you are trying to use has a policy that enforces a Department tag, meaning any resource created in it must carry a Department tag.
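If you want to keep using that subscription, one possible workaround (untested against your tenant's exact policies) is to pre-create the Cloud Shell storage account yourself with the required tag and then point Cloud Shell at it via "Show advanced settings". A rough sketch using the azure-identity and azure-mgmt-storage packages; the resource group, account name, region and tag value are all assumptions:
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

credential = DefaultAzureCredential()
client = StorageManagementClient(credential, "<subscription-id>")

# Resource group, account name, region and tag value are assumptions; the
# Department tag is what the "Enforce tag on resource" assignment requires.
poller = client.storage_accounts.begin_create(
    "cloud-shell-rg",
    "csmyshellstorage",
    {
        "location": "eastus",
        "sku": {"name": "Standard_LRS"},
        "kind": "StorageV2",
        "tags": {"Department": "Training"},
    },
)
print(poller.result().provisioning_state)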

User is not authorized to perform: dynamodb:PutItem on resource

I am trying to access DynamoDB from my Node app deployed on AWS Elastic Beanstalk. I am getting an error:
User is not authorized to perform: dynamodb:PutItem on resource
It works perfectly fine locally, but when I deploy it to AWS it stops working.
A DynamoDB access denied error is generally a policy issue. Check the IAM/role policies that you are using. A quick check is to add the
AmazonDynamoDBFullAccess
policy to your role by going to the "Permissions" tab in the AWS console. If it works after that, it means you need to create a proper access policy and attach it to your role.
Check the access key you are using to connect to DynamoDB in your Node app on AWS. This access key will belong to a user that does not have the necessary privileges in IAM. So, find the IAM user, create or update an appropriate policy and you should be good.
For Beanstalk you need to set up user policies when you publish. Check out the official docs here.
And check out the example from here too, courtesy of @Tirath Shah.
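If you prefer to script that, here is a quick boto3 sketch; the role name assumes the default Elastic Beanstalk instance profile, so adjust it if your environment uses a custom one:
import boto3

iam = boto3.client('iam')

# aws-elasticbeanstalk-ec2-role is the default instance profile role name;
# swap it for whatever role your environment actually uses.
iam.attach_role_policy(
    RoleName='aws-elasticbeanstalk-ec2-role',
    PolicyArn='arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess'
)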
Granting full DynamoDB access using the AWS managed policy AmazonDynamoDBFullAccess is not recommended and is not a best practice.
Try adding your table ARN to the Resource key in your role's policy JSON:
"Resource": "arn:aws:dynamodb:<region>:<account_id>:table/dynamodb_table_name"
In my case (I try to write to a DynamoDB table through a SageMaker Notebook for experimental purposes), the complete error looks like this:
ClientError: An error occurred (AccessDeniedException) when calling the UpdateItem operation: User: arn:aws:sts::728047644461:assumed-role/SageMakerExecutionRole/SageMaker is not authorized to perform: dynamodb:UpdateItem on resource: arn:aws:dynamodb:eu-west-1:728047644461:table/mytable
I needed to go to AWS Console -> IAM -> Roles -> SageMakerExecutionRole, and Attach these two Policies:
AmazonDynamoDBFullAccess
AWSLambdaInvocation-DynamoDB
In a real-world scenario though, I'd advise following the least-permissions philosophy and applying a policy that only allows the specific write calls to go through, in order to avoid accidents (e.g. deleting a record from your table).
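For example, a rough boto3 sketch that attaches an inline least-privilege policy to the role from the error above; the inline policy name is made up, and the actions and table ARN should be adjusted to your case:
import json
import boto3

iam = boto3.client('iam')

# Role name and table ARN taken from the error message above; the inline
# policy name is made up. Only the write actions actually needed are allowed.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:PutItem", "dynamodb:UpdateItem"],
            "Resource": "arn:aws:dynamodb:eu-west-1:728047644461:table/mytable"
        }
    ]
}

iam.put_role_policy(
    RoleName='SageMakerExecutionRole',
    PolicyName='AllowWriteToMyTable',
    PolicyDocument=json.dumps(policy)
)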
Sign in to IAM > Roles and select the role your service uses. Make sure the DynamoDB resource in its policy is correct.
