Configuring HKEY_CURRENT_USER with DSC Resource actually updates HKEY_USERS\.DEFAULT - dsc

The following DSC declaration writes to the registry key HKEY_USERS\.DEFAULT\Console instead of HKEY_CURRENT_USER\Console. Why?
Registry ConsoleFaceName
{
    Key       = 'HKEY_CURRENT_USER\Console'
    ValueName = "FaceName"
    ValueData = "Lucida Console"
    Ensure    = "Present"
}

The value ends up under .DEFAULT because the DSC Local Configuration Manager (LCM) runs as Local System, which does not have a current-user registry hive; for that account HKEY_CURRENT_USER resolves to HKEY_USERS\.DEFAULT.
If you want to update a particular user's hive, run the resource with PsDscRunAsCredential, where $Credential holds the credentials of the user whose value you want to change.
Registry ConsoleFaceName
{
    Key                  = 'HKEY_CURRENT_USER\Console'
    ValueName            = "FaceName"
    ValueData            = "Lucida Console"
    Ensure               = "Present"
    PsDscRunAsCredential = $Credential
}
Before doing this please read Securing the MOF File.
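For context, a minimal sketch of how $Credential might be wired in end to end. The configuration name, output path, and the plain-text-password setting below are illustrative assumptions, not a recommended setup; the linked article explains how to encrypt the credential properly.
Configuration ConsoleFontConfig
{
    param ([Parameter(Mandatory)] [PSCredential] $Credential)

    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost'
    {
        Registry ConsoleFaceName
        {
            Key                  = 'HKEY_CURRENT_USER\Console'
            ValueName            = 'FaceName'
            ValueData            = 'Lucida Console'
            Ensure               = 'Present'
            PsDscRunAsCredential = $Credential
        }
    }
}

# Illustration only: allows the credential to be compiled into the MOF as plain text.
# Use certificate-based MOF encryption in production (see "Securing the MOF File").
$configData = @{
    AllNodes = @(
        @{
            NodeName                    = 'localhost'
            PSDscAllowPlainTextPassword = $true
            PSDscAllowDomainUser        = $true
        }
    )
}

ConsoleFontConfig -Credential (Get-Credential) -ConfigurationData $configData -OutputPath .\ConsoleFontConfig
Start-DscConfiguration -Path .\ConsoleFontConfig -Wait -Verbose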

Related

Onepassword_item error terraform plan "status 401: Authentication: (Invalid token signature), Each header's KID must match the KID of the public key"

I have a Kubernetes cluster running on Google Cloud. This cluster has an op_connect_server up and running, and I am trying to use Terraform to create items in some specific vaults.
To be able to run it locally, I am port-forwarding port 8080 to my Kubernetes op_connect_server pod:
kubectl port-forward $(kubectl get pods -A | grep onepassword-connect | grep -v operator | awk '{print $2}') 8080:8080 -n tools
My Kubernetes cluster is a private one with a public address attached to it. To run it locally, I access its public address, and to run it on GitLab, I access its private address (because my GitLab pipeline machine runs inside the Kubernetes cluster and has access to the private address; this works for other features).
When I run it locally, everything works well. The items are created in the vault without any problems, and during terraform plan it can connect to the op_connect_server and check the items without any error.
On my terraform provider for one_password I am setting the token and the op_connect_server address.
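For reference, the provider block looks roughly like this (the URL and variable name here are placeholders, not my exact values):
provider "onepassword" {
  url   = "http://localhost:8080"  # Connect server address (port-forwarded locally)
  token = var.op_connect_token     # Connect API token
}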
When I run it in my pipeline (GitLab), I get the error: status 401: Authentication: (Invalid token signature), Each header's KID must match the KID of the public key.
This error happens during terraform plan, when checking for some onepassword_item. I tried to retrieve the same information using curl and I am able to do it, but for some reason it fails in Terraform.
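The curl check was roughly the following (host, vault UUID, and token are placeholders):
# List items in the vault through the Connect REST API
curl -H "Authorization: Bearer $OP_CONNECT_TOKEN" \
  "http://<connect-host>:8080/v1/vaults/<vault-uuid>/items"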
I already checked/tried the following:
Checked all variables (token, op_connect_server address, vault ID); they are the same in both environments (local and GitLab).
Tried using the same cluster endpoint (the public one) when running locally and from GitLab.
Deleted the cluster and created/ran everything from the GitLab pipeline.
The creation process works (op_connect_server and all items are created, and so on), but when I run it again, it fails with the same error message.
This is my code for creating the items:
resource "onepassword_item" "credentials" {
vault = ""
title = "Redis Database cache"
category = "database"
type = "other"
username = ""
database = "Redis Database"
hostname = module.beta_redis.database_host_access_private
port = module.beta_redis.database_host_access_port
password = module.beta_redis.auth_string
section {
label = "TLS"
field {
label = "tls_cert"
value = module.beta_redis.tls_cert
type = "CONCEALED"
}
field {
label = "tls_transit_encryption_mode"
value = module.beta_redis.tls_transit_encryption_mode
type = "CONCEALED"
}
field {
label = "tls_sha1_fingerprint"
value = module.beta_redis.tls_sha1_fingerprint
type = "CONCEALED"
}
}
My op_connect_server has these settings:
set {
  name  = "connect.credentials_base64"
  value = data.local_file.input.content_base64
  type  = "string"
}
set {
  name  = "connect.serviceType"
  value = "NodePort"
}
set {
  name  = "operator.create"
  value = "true"
}
set {
  name  = "operator.autoRestart"
  value = "true"
}
set {
  name  = "operator.clusterRole.create"
  value = "true"
}
set {
  name  = "operator.roleBinding.create"
  value = "true"
}
set {
  name  = "connect.api.name"
  value = "beta-connect-api"
}
set {
  name  = "operator.token.value"
  value = var.op_token_beta
}
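For context, these set blocks live inside the helm_release resource that installs the Connect chart, roughly like the sketch below; the release name, repository, and namespace shown here are placeholders rather than my exact values:
resource "helm_release" "op_connect" {
  name       = "onepassword-connect"
  repository = "https://1password.github.io/connect-helm-charts"
  chart      = "connect"
  namespace  = "tools"

  set {
    name  = "connect.serviceType"
    value = "NodePort"
  }
  # ... remaining set blocks as listed above
}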
My 1Password version is:
1.1.4
Does anyone have a clue why this could be happening, or how I can debug it?

Automating Permissions for Databricks SQL Tables or Views

Trying to automate the setup of Databricks SQL.
I have done it from the UI and it works, so this is a natural next step.
The one thing I am unsure about is how to automate granting access to SQL tables and/or views using REST. I am trying to avoid a Notebooks job.
I have seen the Microsoft documentation and downloaded the specification, but when I open it in Postman I only see permissions/objectType/objectId, and the only sample there is for "queries". It seems to apply only to Queries and Dashboards. Can't this be done for Tables and Views? There is no further documentation that I could find.
So, basically: how do I do something like
GRANT SELECT ON tablename TO group
using the REST API, without a Notebook job? I am interested to see whether I can just call a REST endpoint from our release pipeline (Azure DevOps).
As of right now, there is no REST API for setting Table ACLs. But it is available as part of Unity Catalog, which is currently in public preview.
If you can't use Unity Catalog yet, you can still automate the assignment of Table ACLs with the databricks_sql_permissions resource of the Databricks Terraform provider - it sets permissions by executing SQL commands on a cluster, but this is hidden from the administrator.
This is an extension to Alex Ott's answer, giving some details on what I tried to make the databricks_sql_permissions resource work for Databricks SQL, as was the OP's original question. All of this assumes that one does not want to / cannot use Unity Catalog, which follows a different permission model and has a different Terraform resource, namely the databricks_grants resource.
Alex's answer refers to table ACLs, which had me surprised, as the OP (and myself) were looking for Databricks SQL object security and not table ACLs in the classic workspace. But from what I understand so far, the two are closely interlinked, and the Terraform provider addresses table ACLs in the classic (i.e. non-SQL) workspace, which are mirrored to SQL objects in the SQL workspace. It follows that if you want to manage SQL permissions in Databricks SQL via Terraform, you need to enable table ACLs in the classic workspace (in the admin console). If you (for whatever reason) cannot enable table ACLs, it seems to me the only other option is SQL scripts in the SQL workspace, with the disadvantage of having to explicitly write out grants and revokes. Potentially an alternative is to throw away all permissions before running only grant statements, but this has other negative implications.
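For illustration, such a script would contain explicit statements along these lines (the group name is a placeholder, and the object names reuse the example further down):
-- Explicit object security in the SQL workspace, written out by hand
GRANT USAGE ON DATABASE test TO `data-readers`;
GRANT SELECT ON TABLE test.student TO `data-readers`;
REVOKE MODIFY ON TABLE test.student FROM `data-readers`;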
So here is my approach:
Enable table ACLs in the classic workspace (this has no implications in the classic workspace if you don't use table ACL-enabled clusters, afaik)
Use azurerm_databricks_workspace resource to register Databricks Azure infrastructure
Use databricks_sql_permissions Resource to manage table ACLs and thus SQL object security
Below is a minimal example that worked for me and may inspire others. It certainly does not follow Terraform config guidance but is merely used for minimal illustration.
NOTE: Due to a Terraform issue I had to ignore changes to the attribute public_network_access_enabled, see GitHub issue: "azurerm_databricks_workspace" forces replacement on public_network_access_enabled while it never existed #15222
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.0.0"
    }
    databricks = {
      source  = "databricks/databricks"
      version = "=1.4.0"
    }
  }
  backend "azurerm" {
    resource_group_name  = "tfstate"
    storage_account_name = "tfsa"
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
  }
}

provider "azurerm" {
  features {}
}

provider "databricks" {
  azure_workspace_resource_id = "/subscriptions/mysubscriptionid/resourceGroups/myresourcegroup/providers/Microsoft.Databricks/workspaces/mydatabricksworkspace"
}

resource "azurerm_databricks_workspace" "adbtf" {
  customer_managed_key_enabled          = false
  infrastructure_encryption_enabled     = false
  load_balancer_backend_address_pool_id = null
  location                              = "westeurope"
  managed_resource_group_name           = "databricks-rg-myresourcegroup-abcdefg12345"
  managed_services_cmk_key_vault_key_id = null
  name                                  = "mydatabricksworkspace"
  network_security_group_rules_required = null
  public_network_access_enabled         = null
  resource_group_name                   = "myresourcegroup"
  sku                                   = "premium"

  custom_parameters {
    machine_learning_workspace_id                         = null
    nat_gateway_name                                      = "nat-gateway"
    no_public_ip                                          = false
    private_subnet_name                                   = null
    private_subnet_network_security_group_association_id  = null
    public_ip_name                                        = "nat-gw-public-ip"
    public_subnet_name                                    = null
    public_subnet_network_security_group_association_id   = null
    storage_account_name                                  = "dbstorageabcde1234"
    storage_account_sku_name                              = "Standard_GRS"
    virtual_network_id                                    = null
    vnet_address_prefix                                   = "10.139"
  }

  tags = {
    creator = "me"
  }

  lifecycle {
    ignore_changes = [
      public_network_access_enabled
    ]
  }
}

data "databricks_current_user" "me" {}

resource "databricks_sql_permissions" "database_test" {
  database = "test"

  privilege_assignments {
    principal  = "myuser@mydomain.com"
    privileges = ["USAGE"]
  }
}

resource "databricks_sql_permissions" "table_test_student" {
  database = "test"
  table    = "student"

  privilege_assignments {
    principal  = "myuser@mydomain.com"
    privileges = ["SELECT", "MODIFY"]
  }
}

output "adb_id" {
  value = azurerm_databricks_workspace.adbtf.id
}
NOTE: Serge Smertin (Terraform Databricks maintainer) mentioned in GitHub issue [DOC] databricks_sql_permissions Resource to be deprecated? #1215 that the databricks_sql_permissions resource is deprecated, but I could not find any indication of that in the docs, only a recommendation to use another resource when leveraging Unity Catalog, which I'm not doing.
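For completeness: on Unity Catalog the equivalent of the table grant above would use databricks_grants; a rough sketch (the three-level catalog.schema.table name is an assumption):
resource "databricks_grants" "table_test_student_uc" {
  table = "main.test.student"

  grant {
    principal  = "myuser@mydomain.com"
    privileges = ["SELECT", "MODIFY"]
  }
}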

Creating user(s) in AWS Workspaces SimpleAD via Terraform

Is it possible to use Terraform to create new users and add them to the AWS WorkSpaces directory? I have looked all over the HashiCorp docs as well as various forums, and I can't seem to find out how to do this or whether it is even possible. Thanks in advance!
Pic of the GUI where I am trying to add user(s):
I am able to create an AWS workspace with the username "Administrator" using the below code.
resource "aws_workspaces_workspace" "workspace" {
directory_id = aws_workspaces_directory.directory.id
bundle_id = data.aws_workspaces_bundle.standard_amazon_linux2.id
user_name = "Administrator"
root_volume_encryption_enabled = true
user_volume_encryption_enabled = true
volume_encryption_key = "alias/aws/workspaces"
workspace_properties {
compute_type_name = "VALUE"
user_volume_size_gib = 10
root_volume_size_gib = 80
running_mode = "AUTO_STOP"
running_mode_auto_stop_timeout_in_minutes = 60
}
}
I am trying to find a way to add users to SimpleAD in AWS using Terraform, so that I can create a workspace for each user.

Unable to get machine type information for machine type n1-standard-2 in zone us-central-c because of insufficient permissions - Google Cloud Dataflow

I am not sure what I am missing, but I am not able to start the job; it fails with insufficient permissions:
Here is the Terraform code I run:
resource "google_dataflow_job" "poc-pubsub-stream" {
project = local.project_id
region = local.region
zone = local.zone
name = "poc-pubsub-to-cloud-storage"
template_gcs_path = "gs://dataflow-templates-us-central1/latest/Cloud_PubSub_to_GCS_Text"
temp_gcs_location = "gs://${module.poc-bucket.bucket.name}/tmp"
enable_streaming_engine = true
on_delete = "cancel"
service_account_email = google_service_account.poc-stream-sa.email
parameters = {
inputTopic = google_pubsub_topic.poc-topic.id
outputDirectory = "gs://${module.poc-bucket.bucket.name}/"
outputFilenamePrefix = "poc-"
outputFilenameSuffix = ".txt"
}
labels = {
pipeline = "poc-stream"
}
depends_on = [
module.poc-bucket,
google_pubsub_topic.poc-topic,
]
}
These are the permissions of the SA used in the Terraform code:
Any thoughts on what I am missing?
The error describes being unable to get the machine type information because of insufficient permissions. To access machine type information (and view other compute settings), add the roles/compute.viewer role to your service account.
Refer to this doc for more information about the permissions required to create a Dataflow job.
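A rough sketch of granting that role in the same Terraform configuration; it reuses the question's project local and service account names, so treat those references as assumptions:
# Grant the Dataflow job's service account read access to Compute Engine resources
resource "google_project_iam_member" "poc_stream_sa_compute_viewer" {
  project = local.project_id
  role    = "roles/compute.viewer"
  member  = "serviceAccount:${google_service_account.poc-stream-sa.email}"
}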
It seems my Dataflow job needed the following options to be provided. Per the docs they are optional, but in my case they had to be defined.
...
network = data.terraform_remote_state.dev.outputs.network.network_name
subnetwork = data.terraform_remote_state.dev.outputs.network.subnets["us-east4/us-east4-dev"].self_link
...

How can I pass credentials in Terraform?

I've got two options for passing credentials to a Terraform provider:
Set up environment variables like FOO_PROVIDER_USERNAME and FOO_PROVIDER_PASSWORD. Update: the provider reads them from the environment in its source code, so there are no username/password variables in the *.tf files.
Set them explicitly in the provider block:
provider "foocloud" {
username = "admin#foocloud.org"
password = "coolpass"
}
Shall I pick #1 or #2? My concern about #2 is that the username/password might be saved to the state file, which is a security concern.
EDIT: this is typically for managing secrets in resources:
A few weeks ago, I came across this great article by Yevgeniy Brikman:
https://blog.gruntwork.io/a-comprehensive-guide-to-managing-secrets-in-your-terraform-code-1d586955ace1
Out of the two options you mention, go with option 1 (like you said, option 2 will write them to the state file) but you should set the variables as sensitive.
Example:
# main.tf
# (resource type and name are placeholders)
resource "foocloud" "example" {
  name     = "foobar"
  username = var.username
  password = var.password
}

# variables.tf
variable "username" {
  description = "foobar"
  type        = string
  sensitive   = true
}

variable "password" {
  description = "foobar"
  type        = string
  sensitive   = true
}

# command line or in a text file
export TF_VAR_username=foo
export TF_VAR_password=bar
EDIT: in the case of authentication to cloud providers such as AWS you can use the credentials files among other options, as explained here:
https://blog.gruntwork.io/authenticating-to-aws-with-the-credentials-file-d16c0fbcbf9e
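For example, with AWS the provider can pick credentials up from that file via a named profile; a minimal sketch (region and profile name are placeholders):
# ~/.aws/credentials holds the access keys; no secrets appear in *.tf files
provider "aws" {
  region  = "us-east-1"
  profile = "my-profile"
}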
