Is it possible to use Terraform to create new users and add them to the AWS WorkSpaces directory? I have looked all over the HashiCorp docs as well as various forums, and I can't find out how to do this or whether it is even possible. Thanks in advance!
(Screenshot of the GUI where I am trying to add users.)
I am able to create an AWS workspace with the username "Administrator" using the below code.
resource "aws_workspaces_workspace" "workspace" {
directory_id = aws_workspaces_directory.directory.id
bundle_id = data.aws_workspaces_bundle.standard_amazon_linux2.id
user_name = "Administrator"
root_volume_encryption_enabled = true
user_volume_encryption_enabled = true
volume_encryption_key = "alias/aws/workspaces"
workspace_properties {
compute_type_name = "VALUE"
user_volume_size_gib = 10
root_volume_size_gib = 80
running_mode = "AUTO_STOP"
running_mode_auto_stop_timeout_in_minutes = 60
}
}
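For completeness, the workspace above assumes a bundle lookup along these lines; a minimal sketch, where the bundle name is an assumption:

# Sketch of the bundle lookup the workspace above references. The bundle
# name is an assumption; verify it with
# `aws workspaces describe-workspace-bundles --owner AMAZON`.
data "aws_workspaces_bundle" "standard_amazon_linux2" {
  owner = "AMAZON"
  name  = "Standard with Amazon Linux 2"
}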
I am trying to find a way to add users to Simple AD in AWS using Terraform, so that I can create a workspace for each user.
I am working to fully code the AWS SSO setup.
So far I have all permission sets coded via Terraform, and I am using SCIM to pull in groups.
Allocation of the permission sets to groups in accounts (I have over 100 accounts) is done by hand. I want to allocate permission sets to groups in selected accounts via IaC (Terraform), but I can't for the life of me find working code.
I've tried using:
aws_sso_permission_set_group_assignment
aws_sso_permission_set_group_attachment
aws_sso_group_permission_set_assignment
aws_sso_group_permission_set_attachment
aws_sso_permission_set_attachment
aws_sso_permission_set_assignment
I found these in some old docs, but they don't work :( giving "The provider hashicorp/aws does not support resource type".
Does anyone have any advice on how to remedy this, or on how they managed to surmount this issue?
Here is an example of the code I tried:
resource "aws_sso_group_permission_set_attachment" "example" {
group_id = "93sd433ee-cd43e4b-cfww-434e-re33-707a0987eb"
permission_set_id = "arn:aws:sso:::permissionSet/ssoins-63456a11we432d8/ps-1231ded3d42fcrr2"
account_id = "8765322052550"
}
resource "aws_sso_group_permission_set_attachment" "example" {
permission_set_arn = "arn:aws:sso:::permissionSet/ssoins-63456a11we432d8/ps-1231ded3d42fcrr2"
group_name = "93sd433ee-cd43e4b-cfww-434e-re33-707a0987eb"
account_id = "8765322052550"
}
The aws_ssoadmin_account_assignment resource is what you are looking for; please go through all the available attributes of the resource to match your needs.
resource "aws_ssoadmin_account_assignment" "example" {
instance_arn = tolist(data.aws_ssoadmin_instances.example.arns)[0]
permission_set_arn = "arn_of_the_permission_set" # replace this with actually permission set arn
principal_id = "group_id" # replace this with groupID
principal_type = "GROUP"
target_id = "012347678910" # replace with account ID
target_type = "AWS_ACCOUNT"
}
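Since you mention 100+ accounts, the same resource fans out naturally with for_each; a minimal sketch, where the account list and the ARN/group-ID placeholders are assumptions:

# Sketch: assign one permission set to one group across many accounts.
# The local account list below is an assumption; substitute your own IDs.
locals {
  target_account_ids = ["111111111111", "222222222222"]
}

resource "aws_ssoadmin_account_assignment" "per_account" {
  for_each = toset(local.target_account_ids)

  instance_arn       = tolist(data.aws_ssoadmin_instances.example.arns)[0]
  permission_set_arn = "arn_of_the_permission_set" # replace with the actual ARN
  principal_id       = "group_id"                  # replace with the group ID
  principal_type     = "GROUP"
  target_id          = each.value
  target_type        = "AWS_ACCOUNT"
}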
I'm trying to provision a Kubernetes cluster by creating all the certificates through Vault first. This makes things easier in the context of Terraform, because I can insert all of this information into the cloud-init config, so I don't have to rely on a node being ready and then transfer data from one node to another.
In any case, the problem that I have is that vault_pki_secret_backend_cert doesn't seem to support any change to the subject field except for common_name (https://registry.terraform.io/providers/hashicorp/vault/latest/docs/resources/pki_secret_backend_cert), whereas kubernetes relies on these types of certificates where the organization is specified. For example:
Subject: O = system:masters, CN = kube-etcd-healthcheck-client
I'm generating these certificates by directly using vault's intermediate certificate, so the private key is in vault. I cannot generate them separately, and I wouldn't want that anyway, because I'm trying to provision basically everything using terraform.
Any ideas how I can get around this issue?
I was able to find the answer eventually. The only way to do this with Terraform/Vault seems to be to configure the backend role and add the organization parameter to that role:
https://registry.terraform.io/providers/hashicorp/vault/latest/docs/resources/pki_secret_backend_role
For example, you define the role:
resource "vault_pki_secret_backend_role" "etcd_ca_clients" {
depends_on = [ vault_pki_secret_backend_intermediate_set_signed.kube1_etcd_ca ]
backend = vault_mount.kube1_etcd_ca.path
name = "kubernetes-client"
ttl = 3600
allow_ip_sans = true
key_type = "ed25519"
allow_any_name = true
allowed_domains = ["*"]
allow_subdomains = true
organization = [ "system:masters" ]
}
And here you tell vault to generate the certificate based on that role:
resource "vault_pki_secret_backend_cert" "etcd_healthcheck_client" {
for_each = { for k, v in var.kubernetes_servers : k => v if startswith(k, "etcd-") }
depends_on = [vault_pki_secret_backend_role.etcd_ca_clients]
backend = vault_mount.kube1_etcd_ca.path
name = vault_pki_secret_backend_role.etcd_ca_clients.name
common_name = "kube-etcd-healthcheck-client"
}
The limitation makes no sense whatsoever to me, but if you don't need a bulk of very different certificates, it's not all too bad and you don't have to repeat a lot of code.
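As the question notes, the issued material can then go straight into cloud-init. A minimal sketch, assuming the hashicorp/cloudinit provider; the target file paths are assumptions:

# Sketch: render the issued cert/key into user data via cloud-init.
data "cloudinit_config" "etcd" {
  for_each = vault_pki_secret_backend_cert.etcd_healthcheck_client

  part {
    content_type = "text/cloud-config"
    content = yamlencode({
      write_files = [
        {
          # Target paths are assumptions; adjust to your layout.
          path        = "/etc/kubernetes/pki/etcd-healthcheck-client.crt"
          content     = each.value.certificate
          permissions = "0600"
        },
        {
          path        = "/etc/kubernetes/pki/etcd-healthcheck-client.key"
          content     = each.value.private_key
          permissions = "0600"
        },
      ]
    })
  }
}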
I am not sure what I am missing, but somehow I am not able to start the job; it fails with insufficient permissions.
Here is the Terraform code I run:
resource "google_dataflow_job" "poc-pubsub-stream" {
project = local.project_id
region = local.region
zone = local.zone
name = "poc-pubsub-to-cloud-storage"
template_gcs_path = "gs://dataflow-templates-us-central1/latest/Cloud_PubSub_to_GCS_Text"
temp_gcs_location = "gs://${module.poc-bucket.bucket.name}/tmp"
enable_streaming_engine = true
on_delete = "cancel"
service_account_email = google_service_account.poc-stream-sa.email
parameters = {
inputTopic = google_pubsub_topic.poc-topic.id
outputDirectory = "gs://${module.poc-bucket.bucket.name}/"
outputFilenamePrefix = "poc-"
outputFilenameSuffix = ".txt"
}
labels = {
pipeline = "poc-stream"
}
depends_on = [
module.poc-bucket,
google_pubsub_topic.poc-topic,
]
}
(Screenshot of the permissions of the SA used in the Terraform code.)
Any thoughts on what I am missing?
The error describes being unable to get the machine type information because of insufficient permissions. To access machine type information and view other settings, add the roles/compute.viewer role to your service account.
Refer to this doc for more information about the permissions required to create a Dataflow job.
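If the IAM bindings are also managed in Terraform, the grant could look like the sketch below, reusing the question's local.project_id and google_service_account.poc-stream-sa names (assumed to exist):

# Sketch: grant roles/compute.viewer to the job's service account.
resource "google_project_iam_member" "dataflow_compute_viewer" {
  project = local.project_id
  role    = "roles/compute.viewer"
  member  = "serviceAccount:${google_service_account.poc-stream-sa.email}"
}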
It turned out my Dataflow job needed the following options to be provided. Per the docs they are optional, but in my case they had to be defined.
...
network    = data.terraform_remote_state.dev.outputs.network.network_name
subnetwork = data.terraform_remote_state.dev.outputs.network.subnets["us-east4/us-east4-dev"].self_link
...
I am building a small tool using Terraform to generate sandboxes on AWS. I will be the owner of every new sandbox, and each user will be added as an IAM user with the appropriate rights.
The input file, users.auto.tfvars, looks like this:

sandboxs_manager = "pierre-alexandre.mousset"

dev_team_members = [
  { name = "brian.davids", is_enabled = true },
  { name = "tom.hanks", is_enabled = true },
]
Which is going to create 2 different AWS accounts:

pierre-alexandre.mousset+brian.davids@company.com
pierre-alexandre.mousset+tom.hanks@company.com
What I am now trying to achieve is to create a simple S3 bucket in every new aws_organizations_account generated by Terraform.
This is how I am generating AWS accounts in my Terraform:
resource "aws_organizations_account" "this" {
for_each = local.all_user_names
name = "Dev Sandbox ${each.value}"
email = "${var.manager}+sbx_${each.value}#company.com"
role_name = "Administrator"
parent_id = var.sandbox_organizational_unit_id
}
Is there a way to loop over every ID generated by aws_organizations_account to create an S3 bucket in each of those newly created accounts?
Based on this GitHub issue, I would need to use a multi-provider setup, which is not yet supported by Terraform, and which would probably look something like this before generating my S3 buckets:
provider "aws" {
for_each = local.aws_accounts
alias = each.value.aws_account_id
assume_role {
role_arn = "arn:aws:iam::${aws_organizations_account.this.id}:role/TerraformAccessRole"
}
}
Is there any way to deal with this?
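One workaround that does work today is to declare an aliased provider per target account statically and hand it to a module that owns the bucket. A minimal sketch, assuming a single sandbox keyed by "brian.davids" and a hypothetical ./modules/sandbox-bucket module wrapping aws_s3_bucket:

# Sketch: one statically declared provider alias per sandbox account.
provider "aws" {
  alias = "sandbox_brian"

  assume_role {
    role_arn = "arn:aws:iam::${aws_organizations_account.this["brian.davids"].id}:role/Administrator"
  }
}

module "sandbox_bucket_brian" {
  source    = "./modules/sandbox-bucket" # hypothetical module wrapping aws_s3_bucket
  providers = { aws = aws.sandbox_brian }
}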
I have successfully created a project, user, and cluster via the MongoDB Terraform provider. However, I expected to see a database already created under my new cluster, which is not to be found. I am not sure what is missing or incorrect, and I could not find any example/info in the documentation that diverges from what I implemented myself. Here is the relevant info from my main.tf file:
# Create a db user
resource "mongodbatlas_database_user" "mongodb_user" {
  username      = "${var.database_username}"
  password      = "${random_string.master_password.result}"
  project_id    = "${mongodbatlas_project.mongodb.id}"
  database_name = "admin"

  roles {
    role_name     = "readWrite"
    database_name = "admin"
  }
}
# Create a project (group)
resource "mongodbatlas_project" "mongodb" {
  org_id = "${var.mongodb_atlas_org_id}"
  name   = "${var.project_name}-${var.stage}"
}
# Create a cluster
resource "mongodbatlas_cluster" "mongodb-cluster" {
  project_id = "${mongodbatlas_project.mongodb.id}"
  name       = "${var.cluster_name}-${var.stage}"

  num_shards                   = 1
  replication_factor           = 3
  backup_enabled               = true
  auto_scaling_disk_gb_enabled = true
  mongo_db_major_version       = "4.0"

  # Provider settings "block"
  provider_name               = "AWS"
  disk_size_gb                = 100
  provider_disk_iops          = 300
  provider_encrypt_ebs_volume = false
  provider_instance_size_name = "M40"
  provider_region_name        = "us-east-1"
}
Any help/advice is greatly appreciated.
Thank you
Database creation is a CRUD operation, and the MongoDB Atlas API does not support CRUD operations.
Also, Terraform is used to deploy your infrastructure, not the data inside it. You can create your own REST API which connects to the cluster created by Terraform, uses the user created by Terraform to connect, and then performs any CRUD operation you want.
Hope this answers your question.
Database and collection creation in MongoDB Atlas is a developer's job.
I suggest you define the user's permissions in Terraform (mongodbatlas_custom_db_role).
That way you can restrict the access, database name, and collection name; it's a good approach. 😉
https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/custom_db_role
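A minimal sketch of such a role, where the role name, database, collection, and action list are assumptions:

# Sketch: restrict a user-assignable role to one database/collection.
resource "mongodbatlas_custom_db_role" "app_rw" {
  project_id = mongodbatlas_project.mongodb.id
  role_name  = "appReadWrite" # assumed name

  actions {
    action = "FIND"
    resources {
      database_name   = "app_db"         # assumed database
      collection_name = "app_collection" # assumed collection
    }
  }

  actions {
    action = "INSERT"
    resources {
      database_name   = "app_db"
      collection_name = "app_collection"
    }
  }
}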