Terraform import on aws_s3_bucket asking me for both acl="private" and grant blocks - terraform

I used terraform import to bring an aws_s3_bucket resource under management with minimal parameters.
Since the bucket is now in my state, I can reflect the real resource parameters in my configuration (the first terraform apply failed, but that's expected).
Some buckets have acl = "private", which gives me errors inviting me to add grant blocks. When I do, terraform of course reports two ConflictsWith errors, since acl and grant cannot be used together.
Conversely, if a bucket already has the proper grant blocks, terraform invites me to add an acl = "private" argument.
At the same time, I see strange behavior with force_destroy = false, which does not seem to be detected.
Can somebody help me? Maybe I'm doing something wrong.
Thanks.
Code example:
resource "aws_s3_bucket" "s3-bucket-example" {
  bucket        = "s3-bucket-example"
  force_destroy = false

  grant {
    permissions = [
      "READ",
      "READ_ACP",
      "WRITE",
    ]
    type = "Group"
    uri  = "http://acs.amazonaws.com/groups/s3/LogDelivery"
  }

  grant {
    id          = "xxxxxxxxxxxxxxx"
    permissions = [
      "FULL_CONTROL",
    ]
    type = "CanonicalUser"
  }
}
Result:
# aws_s3_bucket.s3-bucket-example will be updated in-place
~ resource "aws_s3_bucket" "s3-bucket-example" {
    + acl           = "private"
    + force_destroy = false
      id            = "s3-bucket-example"
      # (7 unchanged attributes hidden)
      # (4 unchanged blocks hidden)
  }

Your code is absolutely right and works fine.
I also checked the ACLs on the S3 bucket: the permissions are applied exactly as written in the Terraform code.
If you still have problems, please elaborate a bit in a comment.
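For reference, in the AWS provider versions where acl and grant both live on aws_s3_bucket (pre-v4), the two arguments are declared as conflicting, so a configuration has to commit to one form or the other. A minimal sketch, with the bucket name and canonical user ID as placeholders:

```hcl
# Option A: canned ACL only, no grant blocks
resource "aws_s3_bucket" "with_acl" {
  bucket = "s3-bucket-example"
  acl    = "private"
}

# Option B: explicit grant blocks only, no acl argument
# (this is the shape terraform import reflects for buckets
# that carry non-canned ACLs)
resource "aws_s3_bucket" "with_grants" {
  bucket = "s3-bucket-example"

  grant {
    id          = "xxxxxxxxxxxxxxx" # canonical user ID placeholder
    type        = "CanonicalUser"
    permissions = ["FULL_CONTROL"]
  }
}
```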

Related

Allocate AWS SSO Permission Set to Groups in Accounts

I am working to fully code our AWS SSO setup.
So far I have all permission sets coded via Terraform, and I use SCIM to pull in groups.
Allocating the permission sets to groups in accounts (I have over 100 accounts) is done by hand. I want to allocate permission sets to groups in selected accounts via IaC (Terraform), but I can't for the life of me find working code.
I've tried using:
aws_sso_permission_set_group_assignment,
aws_sso_permission_set_group_attachment,
aws_sso_group_permission_set_assignment,
aws_sso_group_permission_set_attachment,
aws_sso_permission_set_attachment,
aws_sso_permission_set_assignment
I found these in some old docs, but they don't work, giving: The provider hashicorp/aws does not support resource type.
Does anyone have any advice on how to remedy this, or how they managed to surmount this issue?
Here is an example of the code I tried:
resource "aws_sso_group_permission_set_attachment" "example" {
  group_id          = "93sd433ee-cd43e4b-cfww-434e-re33-707a0987eb"
  permission_set_id = "arn:aws:sso:::permissionSet/ssoins-63456a11we432d8/ps-1231ded3d42fcrr2"
  account_id        = "8765322052550"
}
resource "aws_sso_group_permission_set_attachment" "example" {
  permission_set_arn = "arn:aws:sso:::permissionSet/ssoins-63456a11we432d8/ps-1231ded3d42fcrr2"
  group_name         = "93sd433ee-cd43e4b-cfww-434e-re33-707a0987eb"
  account_id         = "8765322052550"
}
The aws_ssoadmin_account_assignment resource is what you are looking for; please go through all the available attributes in the resource to match your needs.
resource "aws_ssoadmin_account_assignment" "example" {
  instance_arn       = tolist(data.aws_ssoadmin_instances.example.arns)[0]
  permission_set_arn = "arn_of_the_permission_set" # replace with the actual permission set ARN
  principal_id       = "group_id"                  # replace with the group ID
  principal_type     = "GROUP"
  target_id          = "012347678910"              # replace with the account ID
  target_type       = "AWS_ACCOUNT"
}
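Since the question mentions assigning a group across 100+ accounts, the same resource can be fanned out with for_each. A sketch, where the permission set ARN, group ID, and account IDs are all placeholders:

```hcl
data "aws_ssoadmin_instances" "example" {}

locals {
  # Placeholder account IDs; replace with your own list
  account_ids = ["111111111111", "222222222222"]
}

resource "aws_ssoadmin_account_assignment" "developers" {
  for_each = toset(local.account_ids)

  instance_arn       = tolist(data.aws_ssoadmin_instances.example.arns)[0]
  permission_set_arn = "arn_of_the_permission_set" # placeholder
  principal_id       = "group_id"                  # placeholder group ID
  principal_type     = "GROUP"
  target_id          = each.value
  target_type        = "AWS_ACCOUNT"
}
```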

Programmatic way to provide access to an object in a GCS bucket for multiple users

I have a list of users to whom I want to give read access to an object stored in my GCS bucket.
I am able to do this manually by adding one user at a time, but I want to do it programmatically.
Please guide me if there is a way to do this.
If you are comfortable with Terraform and it's possible for you to use it, you can use the dedicated google_storage_object_access_control resource.
You can configure the user access as a variable in a map:
variables.tf file:
variable "users_object_access" {
  default = {
    user1 = {
      entity = "user-user1@gmail.com"
      role   = "READER"
    }
    user2 = {
      entity = "user-user2@gmail.com"
      role   = "OWNER"
    }
  }
}
Then, in the Terraform resource, you can use for_each over the user access map configured previously.
main.tf file:
resource "google_storage_object_access_control" "public_rule" {
  for_each = var.users_object_access
  object   = google_storage_bucket_object.object.output_name
  bucket   = google_storage_bucket.bucket.name
  role     = each.value["role"]
  entity   = each.value["entity"]
}
resource "google_storage_bucket" "bucket" {
  name     = "static-content-bucket"
  location = "US"
}
resource "google_storage_bucket_object" "object" {
  name   = "public-object"
  bucket = google_storage_bucket.bucket.name
  source = "../static/img/header-logo.png"
}
If it's for one particular object in a bucket, then it sounds more like an ACL approach.
gsutil will make things easier. You have a couple of options depending on your specific needs. If those users already have authenticated Google accounts, you can use the authenticatedRead predefined ACL:
gsutil acl set authenticatedRead gs://BUCKET_NAME/OBJECT_NAME
This gives the bucket or object owner OWNER permission, and gives all authenticated Google account holders READER permission.
Or, with ACL enabled, you can retrieve the ACL of that particular object, make some edits to the JSON file, and set the updated ACL back on the object.
Retrieve the ACL of the object:
gsutil acl get gs://BUCKET_NAME/OBJECT_NAME > acl.txt
Then make the permission edits by adding the required users/groups, and apply the updated ACL back to the object:
gsutil acl set acl.txt gs://BUCKET_NAME/OBJECT_NAME
You can apply the updated ACL to a particular object, bucket, or pattern (all images, etc).
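For illustration, the entry to append for each user in acl.txt would look roughly like the fragment below (alongside the existing owner entries; the email is a placeholder, and the exact fields returned by gsutil acl get may vary):

```json
[
  {
    "entity": "user-someone@example.com",
    "email": "someone@example.com",
    "role": "READER"
  }
]
```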

Terraform check if resource exists before creating it

Is there a way in Terraform to check if a resource in Google Cloud exists prior to trying to create it?
I want to check whether the resources below exist during a job in my CircleCI CI/CD pipeline, where I have access to terminal commands, bash, and gcloud commands. If the resources exist, I want to use them; if not, I want to create them. My goal is to create my infrastructure (resources) in GCP when it is needed, and otherwise reuse it, without getting Terraform errors in my CI/CD builds.
If I try to create a resource that already exists, Terraform apply will result in an error saying something like, "you already own this resource," and now my CI/CD job fails.
Below is pseudo code describing the resources I am trying to get.
resource "google_artifact_registry_repository" "main" {
  # this is the repo for hosting my Docker images
  # it does not have a data source afaik because it is beta
}
For my google_artifact_registry_repository resource, one approach is to do a terraform apply with a data source block and see whether a value is returned. The problem is that google_artifact_registry_repository does not have a data source block. Therefore, I must create this resource once using a resource block, and every CI/CD build thereafter can rely on it being there. Is there a workaround to read that it exists?
resource "google_storage_bucket" "bucket" {
  # bucket containing the folder below
}
resource "google_storage_bucket_object" "content_folder" {
  # folder containing the Terraform default.tfstate for my Cloud Run service
}
For my google_storage_bucket and google_storage_bucket_object resources: if I do a terraform apply with a data source block to see if these exist, one issue is that when the resources are not found, Terraform takes forever to return that status. It would be great if I could determine whether a resource exists within 10-15 seconds or so, and if not, assume it does not exist.
data "google_storage_bucket" "bucket" {
  # bucket containing the folder below
}
output "bucket" {
  value = data.google_storage_bucket.bucket
}
When the resource exists, I can use terraform output bucket to get that value. If it does not exist, Terraform takes too long to return a response. Any ideas?
Thanks to Marcin's advice, I have a working example of how to check whether a resource exists in GCP using Terraform's external data source. This is one way that works; I am sure there are other approaches.
I have a CircleCI config.yml with a job that uses run commands and bash. From bash, I init/apply a Terraform script that checks whether my resource exists, like so:
data "external" "get_bucket" {
  program = ["bash", "gcp.sh"]
  query = {
    bucket_name = var.bucket_name
  }
}
output "bucket" {
  value = data.external.get_bucket.result.name
}
Then in my gcp.sh, I use gsutil to get my bucket if it exists.
#!/bin/bash
eval "$(jq -r '@sh "BUCKET_NAME=\(.bucket_name)"')"
bucket=$(gsutil ls gs://$BUCKET_NAME)
if [[ ${#bucket} -gt 0 ]]; then
  jq -n --arg name "$BUCKET_NAME" '{name:$name}'
else
  jq -n --arg name "" '{name:$name}'
fi
Then in my CircleCI config.yml, I put it all together.
terraform init
terraform apply -auto-approve -var bucket_name=my-bucket
bucket=$(terraform output bucket)
At this point I check if the bucket name is returned and determine how to proceed based on that.
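The "determine how to proceed" step can be sketched as a small bash helper. The branching logic is pure shell, so the terraform call that would feed it is shown only in a comment:

```shell
#!/bin/bash
# Sketch: branch on the bucket name returned by the check above.
# In the pipeline the argument would come from:
#   bucket=$(terraform output -raw bucket)
decide() {
  local bucket_name="$1"
  if [ -n "$bucket_name" ]; then
    echo "exists"   # reuse the existing bucket
  else
    echo "missing"  # create it (run the terraform apply that creates it)
  fi
}

decide "my-bucket"   # prints "exists"
decide ""            # prints "missing"
```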
TF does not have any built-in tools for checking whether there are pre-existing resources, as this is not what TF is meant to do. However, you can create your own custom data source.
Using a custom data source you can program any logic you want, including checking for pre-existing resources and returning that information to TF for later use.
There is a way to read an existing resource instead of creating it, but you have to know in advance whether it exists: if you tell Terraform to read a resource that does not exist, it will give you an error.
I will demonstrate by creating/reading data from an Azure resource group. First, create a boolean variable azurerm_create_resource_group. Set it to true if you need to create the resource group, or to false if you just want to read data from an existing one.
variable "azurerm_create_resource_group" {
  type = bool
}
Next, read data about the resource group by supplying a ternary expression to count, then do the same for creating the resource:
data "azurerm_resource_group" "rg" {
  count = var.azurerm_create_resource_group == false ? 1 : 0
  name  = var.azurerm_resource_group
}
resource "azurerm_resource_group" "rg" {
  count    = var.azurerm_create_resource_group ? 1 : 0
  name     = var.azurerm_resource_group
  location = var.azurerm_location
}
The code will create the resource group or read data from it based on the value of var.azurerm_create_resource_group. Next, combine the data from both the data and resource sections into locals:
locals {
  resource_group_name = element(coalescelist(data.azurerm_resource_group.rg.*.name, azurerm_resource_group.rg.*.name, [""]), 0)
  location            = element(coalescelist(data.azurerm_resource_group.rg.*.location, azurerm_resource_group.rg.*.location, [""]), 0)
}
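Downstream resources then reference the locals, so they behave the same whether the resource group was created or only read. A sketch (the storage account is a hypothetical consumer, not part of the original answer):

```hcl
# Works regardless of which branch (data vs. resource) populated the locals
resource "azurerm_storage_account" "example" {
  name                     = "examplestorageacct" # placeholder, must be globally unique
  resource_group_name      = local.resource_group_name
  location                 = local.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}
```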
Another way of doing it might be using terraformer to import the infra code.
I hope this helps.
This works for me:
Create the data source:
data "gitlab_user" "user" {
  for_each = local.users
  username = each.value.user_name
}
Create the resource:
resource "gitlab_user" "user" {
  for_each       = local.users
  name           = each.key
  username       = data.gitlab_user.user[each.key].username != null ? data.gitlab_user.user[each.key].username : split("@", each.value.user_email)[0]
  email          = each.value.user_email
  reset_password = data.gitlab_user.user[each.key].username != null ? false : true
}
P.S.
Variable
variable "users_info" {
  type = list(
    object(
      {
        name         = string
        user_name    = string
        user_email   = string
        access_level = string
        expires_at   = string
        group_name   = string
      }
    )
  )
  description = "List of users and their access to the team's groups for newcomers"
}
Locals
locals {
  users = { for user in var.users_info : user.name => user }
}

COS access policies interface vs terraform

In the UI I can go to a COS bucket's Access Policies and easily assign a policy that then looks more or less like:
Cloud Object Storage service
serviceInstance string equals foo-bar, resource string equals foo-bar-pcaps, resourceType string equals bucket
I'm struggling to find a way to do the same via Terraform, because whenever I try with TF code like:
resource "ibm_iam_service_policy" "policy_pcaps" {
  iam_service_id = ibm_iam_service_id.serviceID_pcaps.id
  roles          = ["Writer"]
  resources {
    service  = "cloud-object-storage"
    resource = ibm_cos_bucket.pcaps.id
  }
}
I end up with:
Cloud Object Storage service
resource string equals crn:v1:bluemix:public:cloud-object-storage:global:a/27beaaea79a<redacted>34dd871b:8b124bc6-147c-47ba-bd47-<redacted>:bucket:foo-bar-pcaps:meta:rl:us-east
The problem is that the Writer policy required here does not work properly with those policy details.
How to achieve something similar to the first policy with Terraform?
Thanks
You can achieve this, similar to this example Service Policy, by using attributes.
I created a policy through the UI for Cloud Object Storage and specified that the policy contain a bucket name. Then I used:
ibmcloud iam access-group-policy GROUP_NAME POLICY_ID --output JSON
to get a better understanding of the policy.
With that, I created this sample Terraform snippet and tested it. It creates the IAM access group plus the policy:
resource "ibm_iam_access_group" "accgrp_cos" {
  name = "test_cos"
}
resource "ibm_iam_access_group_policy" "policy" {
  access_group_id = ibm_iam_access_group.accgrp_cos.id
  roles           = ["Writer"]
  resources {
    service = "cloud-object-storage"
    attributes = {
      resourceType = "bucket"
      resource     = "tf-test-cos"
    }
  }
}
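The question used ibm_iam_service_policy rather than an access group policy. Assuming its resources block accepts the same attributes map (an assumption worth verifying against the provider docs), the equivalent for the original service ID would be:

```hcl
resource "ibm_iam_service_policy" "policy_pcaps" {
  iam_service_id = ibm_iam_service_id.serviceID_pcaps.id
  roles          = ["Writer"]
  resources {
    service = "cloud-object-storage"
    attributes = {
      resourceType = "bucket"
      resource     = "foo-bar-pcaps"
    }
  }
}
```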

Terraform - assigning one group to each created IAM user

How can I assign the IAM users created with the code below to a group that already exists in AWS?
resource "aws_iam_user" "developer-accounts" {
  path          = "/"
  for_each      = toset(var.names)
  name          = each.value
  force_destroy = true
}
resource "aws_iam_user_group_membership" "developers-membership" {
  user   = values(aws_iam_user.developer-accounts)[*].name
  groups = [data.aws_iam_group.developers.group_name]
}
With the above code I'm getting:
Inappropriate value for attribute "user": string required.
The names variable used:
variable "names" {
  description = "account names"
  type        = list(string)
  default     = ["user-1", "user-2", "user-3",...etc]
}
Second part of the question: with the below I want to create a password for each user:
resource "aws_iam_user_login_profile" "devs_login" {
  for_each                = toset(var.names)
  user                    = each.value
  pgp_key                 = "keybase:macdrorepo"
  password_reset_required = true
}
Output:
output "all_passwordas" {
  value = values(aws_iam_user_login_profile.devs_login)[*].encrypted_password
}
How can I decode the passwords? The below is not working, as I'm sure I'm missing some kind of loop:
terraform output all_passwordas | base64 --decode | keybase pgp decrypt
For your first question, the following should do the trick. You need to iterate over all users again and attach the group to each of them:
resource "aws_iam_user_group_membership" "developers-membership" {
  for_each = toset(var.names)
  user     = aws_iam_user.developer-accounts[each.key].name
  groups   = [data.aws_iam_group.developers.group_name]
}
To answer your second question: you are trying to decrypt all user passwords at once, which will not work as expected. Instead, you need to decrypt each user's password one by one. You could use a tool like jq to loop over the terraform output -json output.
Just a small note. It's better to open two questions instead of adding multiple (unrelated) questions into one. I hope this answer helps.
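As a sketch of that jq loop (assuming the all_passwordas output from the question, and that jq and base64 are available): decrypt_all reads the JSON list on stdin and base64-decodes each entry; in practice each decoded blob would then be piped into keybase pgp decrypt, which is left as a comment so the snippet stays self-contained:

```shell
#!/bin/bash
# Loop over a JSON array of encrypted passwords, decoding one by one.
# Real usage would be: terraform output -json all_passwordas | decrypt_all
decrypt_all() {
  jq -r '.[]' | while read -r enc; do
    printf '%s' "$enc" | base64 --decode   # then: | keybase pgp decrypt
    echo
  done
}

# Demo with dummy base64 values (placeholders, not real PGP messages):
printf '%s' '["aGVsbG8=","d29ybGQ="]' | decrypt_all   # prints "hello" then "world"
```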
