Is anyone aware of permission limitations with the PagerDuty Terraform provider? With a base role of Observer in PagerDuty, it appears that certain objects (which my user created) can be deleted via the GUI, but not via the Terraform provider, even though I'm using the same user account. A PagerDuty extension is an example of an object where I'm hitting this issue.
The same test case works as expected with a user whose base role is Manager, though. Here's a quick Terraform file I threw together to verify this test case:
resource "pagerduty_schedule" "schedule" {
  name      = "terraform-test-schedule"
  time_zone = "America/Denver"
  teams     = ["PRDBAEK"]

  layer {
    name                         = "weekly"
    start                        = "2020-02-05T09:00:00-06:00"
    rotation_virtual_start       = "2020-02-05T09:00:00-06:00"
    rotation_turn_length_seconds = 604800
    users                        = ["PN94M6Q"]
  }
}
resource "pagerduty_escalation_policy" "escalation_policy" {
  name        = "terraform-test-ep"
  description = "terraform-test-ep"
  num_loops   = 0
  teams       = ["PRDBAEK"]

  rule {
    escalation_delay_in_minutes = 10
    target {
      type = "schedule_reference"
      id   = pagerduty_schedule.schedule.id
    }
  }
}
resource "pagerduty_service" "event" {
  name              = "terraform-test-service"
  description       = "terraform-test-service"
  alert_creation    = "create_alerts_and_incidents"
  escalation_policy = pagerduty_escalation_policy.escalation_policy.id

  incident_urgency_rule {
    type    = "constant"
    urgency = "severity_based"
  }

  alert_grouping_parameters {
    type = "intelligent"
    config {
      fields  = []
      timeout = 0
    }
  }

  auto_resolve_timeout    = "null"
  acknowledgement_timeout = "null"
}
# The original snippet references this data source but omits it:
data "pagerduty_extension_schema" "generic_v2_webhook" {
  name = "Generic V2 Webhook"
}

resource "pagerduty_extension" "test_extension" {
  name             = "terraform-test-extension"
  extension_schema = data.pagerduty_extension_schema.generic_v2_webhook.id
  endpoint_url     = "https://fakeurl.com"
  extension_objects = [
    pagerduty_service.event.id
  ]
  config = jsonencode({})
}
All objects can be created successfully. However, when I test a terraform destroy with an account whose base role is Observer, I get the following error; it can't delete the extension:
Error: DELETE API call to https://api.pagerduty.com/extensions/P53423F failed 403 Forbidden. Code: 2010, Errors: <nil>, Message: Access Denied
But using that same account, I can delete that extension in the GUI with no issues.
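One workaround sketch (not from the original post, and untested against this account setup): if a second API token tied to a Manager-role user is available, a provider alias can manage just the extension while everything else stays under the Observer token. The variable names here are assumptions.

provider "pagerduty" {
  token = var.observer_token
}

provider "pagerduty" {
  alias = "manager"
  token = var.manager_token
}

resource "pagerduty_extension" "test_extension" {
  provider         = pagerduty.manager
  name             = "terraform-test-extension"
  extension_schema = data.pagerduty_extension_schema.generic_v2_webhook.id
  endpoint_url     = "https://fakeurl.com"
  extension_objects = [pagerduty_service.event.id]
  config            = jsonencode({})
}

This only sidesteps the permission difference; it doesn't explain why the REST DELETE is rejected for Observer while the GUI delete succeeds.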
I followed the instructions from here: https://neo4j.com/docs/operations-manual/4.4/kubernetes/quickstart-cluster/server-setup/ and deployed a cluster of three core members using Terraform.
Helm chart used: https://github.com/neo4j/helm-charts/releases/tag/4.4.10
Neo4j version used: Neo4j 4.4.11 Enterprise
The code structure is as follows:
module/neo4j:
  main.tf
  variables.tf
  core-1/main.tf
  core-1/variables.tf
  core-1/core-1.values.yaml
  core-2/main.tf
  core-2/variables.tf
  core-2/core-2.values.yaml
  core-3/main.tf
  core-3/variables.tf
  core-3/core-3.values.yaml
So the root main.tf creates modules of each core. Nothing special, nothing fancy.
The helm deployment is as follows:
resource "helm_release" "neo4j-core-1" {
  name      = "neo4j-core-1"
  chart     = "https://github.com/neo4j/helm-charts/releases/download/${var.chart_version}/neo4j-cluster-core-${var.chart_version}.tgz"
  namespace = var.namespace
  wait      = false

  values = [
    templatefile("${path.module}/core-1.values.yaml", {
      share_secret  = var.share_secret_name
      share_name    = var.share_name
      share_dir     = var.share_dir
      image_name    = var.image
      image_version = var.image_version
    })
  ]

  timeout      = 600
  force_update = true
  reset_values = true

  set {
    name  = "neo4j.name"
    value = "neo4j-cluster"
  }
  set_sensitive {
    name  = "neo4j.password"
    value = var.password
  }
  set {
    name  = "dbms.mode"
    value = "CORE"
  }

  # backup configuration
  set {
    name  = "dbms.backup.enabled"
    value = true
  }

  set {
    name  = "neo4j.resources.memory" # sets both requests and limit
    value = var.memory
  }
  set {
    name  = "neo4j.resources.cpu" # sets both requests and limit
    value = var.cpu
  }
  set {
    name  = "dbms.memory.heap.initial_size"
    value = var.dbms_memory
  }
  set {
    name  = "dbms.memory.heap.max_size"
    value = var.dbms_memory
  }
  set {
    name  = "dbms.memory.pagecache.size"
    value = var.dbms_memory
  }

  set {
    name  = "causal_clustering.minimum_core_cluster_size_at_formation"
    value = 3
  }
  set {
    name  = "causal_clustering.minimum_core_cluster_size_at_runtime"
    value = 3
  }
  set {
    name  = "causal_clustering.discovery_type"
    value = "K8S"
  }

  dynamic "set" {
    for_each = local.nodes
    content {
      name  = "nodeSelector.${set.key}"
      value = set.value
    }
  }
}
The problem I am facing: the deployment succeeds only about 1 time in 10. Whenever it fails, it is due to a timeout of the Terraform helm_release of one or two core members, stating: Secret "neo4j-cluster-auth" exists.
Looking into the log of the one (or two) members that did deploy, startup failed because the cluster is missing members. (initialDelaySeconds has been configured for each core member, and I increased it for testing too.)
Kubernetes pod log:
2022-11-17 08:59:22.738+0000 ERROR Failed to start Neo4j on 0.0.0.0:7474.
java.lang.RuntimeException: Error starting Neo4j database server at /var/lib/neo4j/data/databases
at org.neo4j.graphdb.facade.DatabaseManagementServiceFactory.startDatabaseServer(DatabaseManagementServiceFactory.java:227) ~[neo4j-4.4.11.jar:4.4.11]
at org.neo4j.graphdb.facade.DatabaseManagementServiceFactory.build(DatabaseManagementServiceFactory.java:180) ~[neo4j-4.4.11.jar:4.4.11]
at com.neo4j.causalclustering.core.CoreGraphDatabase.createManagementService(CoreGraphDatabase.java:38) ~[neo4j-causal-clustering-4.4.11.jar:4.4.11]
at com.neo4j.causalclustering.core.CoreGraphDatabase.<init>(CoreGraphDatabase.java:30) ~[neo4j-causal-clustering-4.4.11.jar:4.4.11]
at com.neo4j.server.enterprise.EnterpriseManagementServiceFactory.createManagementService(EnterpriseManagementServiceFactory.java:34) ~[neo4j-enterprise-4.4.11.jar:4.4.11]
at com.neo4j.server.enterprise.EnterpriseBootstrapper.createNeo(EnterpriseBootstrapper.java:20) ~[neo4j-enterprise-4.4.11.jar:4.4.11]
at org.neo4j.server.NeoBootstrapper.start(NeoBootstrapper.java:142) [neo4j-4.4.11.jar:4.4.11]
at org.neo4j.server.NeoBootstrapper.start(NeoBootstrapper.java:95) [neo4j-4.4.11.jar:4.4.11]
at com.neo4j.server.enterprise.EnterpriseEntryPoint.main(EnterpriseEntryPoint.java:24) [neo4j-enterprise-4.4.11.jar:4.4.11]
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 'com.neo4j.dbms.ClusteredDbmsReconcilerModule#5c2ae7d7' was successfully initialized, but failed to start. Please see the attached cause exception "Failed to join or bootstrap a raft group with id RaftGroupId{00000000} and members RaftMembersSnapshot{raftGroupId=Not yet published, raftMembersSnapshot={ServerId{c72f54d8}=Published as : RaftMemberId{c72f54d8}}} in time. Please restart the cluster. Clue: not enough cores found".
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:463) ~[neo4j-common-4.4.11.jar:4.4.11]
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:110) ~[neo4j-common-4.4.11.jar:4.4.11]
at org.neo4j.graphdb.facade.DatabaseManagementServiceFactory.startDatabaseServer(DatabaseManagementServiceFactory.java:218) ~[neo4j-4.4.11.jar:4.4.11]
... 8 more
I tried different settings for the following two config parameters:
causal_clustering.discovery_type
causal_clustering.initial_discovery_members
First, the default discovery_type=K8S, which omits any initial_discovery_members setting.
Second, discovery_type=LIST with initial_discovery_members defined by name and port 5000.
Both settings led to successful clustering only about 1 time in 10.
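For reference, a sketch of the second variant as helm_release set blocks; the member hostnames below are assumptions based on the chart's default per-release service names and the default namespace, not values from the original post:

  set {
    name  = "causal_clustering.discovery_type"
    value = "LIST"
  }
  set {
    name  = "causal_clustering.initial_discovery_members"
    value = "neo4j-core-1.default.svc.cluster.local:5000,neo4j-core-2.default.svc.cluster.local:5000,neo4j-core-3.default.svc.cluster.local:5000"
  }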
Since the cluster members search for each other while being deployed, another thing I tried was arranging the Terraform dependencies so that two of the cluster members are built with wait = false and the third member gets a depends_on:
module "neo4j-cluster-core-3" {
  depends_on = [module.neo4j-cluster-core-1, module.neo4j-cluster-core-2]
  # ...
}
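A fuller sketch of that arrangement at the root main.tf (module sources and the overall shape are assumptions based on the directory structure described above):

module "neo4j-cluster-core-1" {
  source = "./modules/neo4j/core-1"
  # helm_release inside uses wait = false, so Terraform does not block
  # on this member becoming ready before starting the next one
}

module "neo4j-cluster-core-2" {
  source = "./modules/neo4j/core-2"
}

module "neo4j-cluster-core-3" {
  source     = "./modules/neo4j/core-3"
  depends_on = [module.neo4j-cluster-core-1, module.neo4j-cluster-core-2]
}

Note that depends_on only orders the Helm releases; with wait = false the first two releases are considered "complete" as soon as the manifests are applied, so this does not guarantee the first two pods are actually up when the third starts.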
I am attempting to build a series of synthetic browser tests in Datadog via Terraform using a map of URLs. Each test goes to a URL, types dummy credentials into the login form, attempts to log in, and asserts that there is an invalid username/password response. My code fails when I run terraform apply. I have referenced the documentation, but I have not been able to find examples of browser tests with a step type of typeText. Have I set up my params incorrectly?
Code:
resource "datadog_synthetics_test" "login_tests" {
  for_each = var.browser_test_urls
  type     = "browser"

  request_definition {
    method = "GET"
    url    = each.value
  }

  device_ids = ["laptop_large"]
  locations  = ["aws:us-east-1"]

  options_list {
    tick_every       = 1800
    follow_redirects = true
    retry {
      count    = 2
      interval = 60000
    }
  }

  name    = "Login Test for ${each.key}"
  message = "Login test failed for ${each.key} on url ${each.value}"
  status  = "paused"

  browser_step {
    name = "Type Username"
    type = "typeText"
    params {
      element = "#userItem"
      value   = "username"
    }
  }

  browser_step {
    name = "Type Password"
    type = "typeText"
    params {
      element = "#passItem"
      value   = "password"
    }
  }

  browser_step {
    name = "Click Login Button"
    type = "click"
    params {
      element = "#btlogin"
    }
  }

  browser_step {
    name = "Check for Invalid Login Message"
    type = "assertPageContains"
    params {
      check = "contains"
      value = "Invalid username or password!"
    }
  }
}
Error:
│ Error: error creating synthetics browser test from https://us3.datadoghq.com/api/v1/synthetics/tests/browser: 400 Bad Request: {"errors":["Invalid steps data:
Step 0 has invalid params: None is not of type 'object'"]}
│
│ with module.datadog.datadog_synthetics_test.login_tests["Test"],
│ on modules\datadog\browser_tests.tf line 1, in resource "datadog_synthetics_test" "login_tests":
│ 1: resource "datadog_synthetics_test" "login_tests" {
To anyone facing a similar issue, this is how I ended up solving it.
I created the synthetic test through the Datadog UI and then imported it into my Terraform state. From there I inspected the state file to see the value of the element property: it was not a simple CSS selector but a long JSON object with XPath-style locator data. I copied and pasted the entire string into my element property, and it worked like a charm!
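A sketch of that flow. The public ID and the element contents below are hypothetical placeholders; the real element value must be copied verbatim from your own state after the import:

terraform import 'module.datadog.datadog_synthetics_test.login_tests["Test"]' abc-123-def

  browser_step {
    name = "Type Username"
    type = "typeText"
    params {
      # Copied from the imported state: a JSON object with multi-locator
      # data, not a bare CSS selector like "#userItem".
      element = "{\"multiLocator\":{...},\"targetOuterHTML\":\"...\",\"url\":\"...\"}"
      value   = "username"
    }
  }

The 400 "None is not of type 'object'" error is consistent with this: the API expects the step's element to be an object, and a plain selector string does not satisfy that.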
I am configuring PagerDuty using terraform and part of that is assigning each user to a schedule.
In this scenario the users already exist in PagerDuty as they are pulled in from our SSO provider.
Initially this is how I looked at deploying the setup:
Use a data source to access each user's details (such as their ID):
users.tf
data "pagerduty_user" "user1" {
  email = "user1@test.com"
}

data "pagerduty_user" "user2" {
  email = "user2@test.co.nz"
}
Create and assign users to a schedule:
schedule.tf
resource "pagerduty_schedule" "schedule" {
  name      = "Rotation"
  time_zone = "Pacific/Auckland"

  layer {
    name                   = "On-Call"
    start                  = "2021-09-10T00:00:00-00:00"
    rotation_virtual_start = "2021-09-10T00:00:00-00:00"
    // One week rotation
    rotation_turn_length_seconds = 604800
    // The position of the user on the list determines their order in the layer.
    users = [data.pagerduty_user.user1.id, data.pagerduty_user.user2.id]
  }

  teams = [pagerduty_team.team.id]
}
This works correctly; however, each time I want to add a new user to the schedule, I have to add a duplicate data block for that user.
My question is: how can I avoid doing this?
My first thought was to use a for_each, so it would look like this:
variables.tf
variable "all_users" {
  description = "List of users"
  type        = map(any)
  default     = { user1 = "user1@test.com", user2 = "user2@test.com" }
}
users.tf
data "pagerduty_user" "users" {
  for_each = var.all_users
  email    = each.value
}
schedule.tf
resource "pagerduty_schedule" "schedule" {
  for_each  = data.pagerduty_user.users
  name      = "Rotation"
  time_zone = "Pacific/Auckland"

  layer {
    name                   = "On-Call"
    start                  = "2021-09-10T00:00:00-00:00"
    rotation_virtual_start = "2021-09-10T00:00:00-00:00"
    // One week rotation
    rotation_turn_length_seconds = 604800
    // The position of the user on the list determines their order in the layer.
    users = [data.pagerduty_user.users[each.key].id]
  }

  teams = [pagerduty_team.team.id]
}
The issue here is that two schedules are being created (which is the expected behavior of for_each on the resource).
So my question is: how can I create a list of user IDs that I can then pass to a single schedule?
I was able to achieve this with locals:
locals {
  users = [
    for user in data.pagerduty_user.users : user.id
  ]
}
So my final config ended up as:
variables.tf
variable "all_users" {
  description = "List of storage users"
  type        = list(any)
  default     = ["user1", "user2"]
}
users.tf
data "pagerduty_user" "users" {
  for_each = toset(var.all_users)
  email    = "${each.value}@test.com"
}
schedules.tf
locals {
  users = [
    for user in data.pagerduty_user.users : user.id
  ]
}

resource "pagerduty_schedule" "storage_schedule" {
  name      = "Storage Team Rotation"
  time_zone = "Pacific/Auckland"

  layer {
    name                   = "On-Call"
    start                  = "2021-09-10T00:00:00-00:00"
    rotation_virtual_start = "2021-09-10T00:00:00-00:00"
    // One week rotation
    rotation_turn_length_seconds = 604800
    // The position of the user on the list determines their order in the layer.
    users = local.users
  }

  teams = [pagerduty_team.storage.id]
}
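One caveat worth noting (my addition, not from the original answer): a for expression over data.pagerduty_user.users iterates the map in lexical key order, so the layer order follows sorted usernames, not the order of var.all_users. Since the position in the users list determines the rotation order, a sketch that preserves the list's order by indexing the data source with the original list:

locals {
  # var.all_users is a list, so this for expression preserves its order.
  users = [for u in var.all_users : data.pagerduty_user.users[u].id]
}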
I'm pretty new to Terraform. I wanted to know: is there a way to reuse a resource? Below is my main.tf, where I have a module declared.
module "deployments" {
  source            = "./modules/odo_deployments"
  artifact_versions = local.artifact_versions
}
In the modules/odo_deployments folder, I have two resources that do exactly the same thing except for a different ad. Is there a way I can use just one resource and pass arguments (the ad) to it like a function?
variable "artifact_versions" {
  description = "What gets injected by terraform at the ET level"
}

resource "odo_deployment" "incident-management-service-dev" {
  count = var.artifact_versions["incident-management-service"].version == "skip" ? 0 : 1
  ad    = "phx-ad-1"
  alias = "cloud-incident-management-application"

  artifact {
    url       = var.artifact_versions["incident-management-service"].uri
    build_tag = var.artifact_versions["incident-management-service"].version
    type      = var.artifact_versions["incident-management-service"].type
  }

  flags = ["SKIP_UP_TO_DATE_NODES"]
}

resource "odo_deployment" "incident-management-service-dev-ad3" {
  count = var.artifact_versions["incident-management-service"].version == "skip" ? 0 : 1
  ad    = "phx-ad-3"
  alias = "cloud-incident-management-application"

  artifact {
    url       = var.artifact_versions["incident-management-service"].uri
    build_tag = var.artifact_versions["incident-management-service"].version
    type      = var.artifact_versions["incident-management-service"].type
  }

  flags = ["SKIP_UP_TO_DATE_NODES"]
}
What I did to solve this: I added a locals block in main.tf and passed the local variable into the module like below:
locals {
  ad = ["phx-ad-1", "phx-ad-3"]
}

module "deployments" {
  source            = "./modules/odo_deployments"
  artifact_versions = local.artifact_versions
  ad                = local.ad
}
and in the resource, instead of hard-coding the ad value, I used:
  count = length(var.ad)
  ad    = var.ad[count.index]
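Putting the two fragments together, the single resource in the module would look roughly like this (a sketch; the variable declaration for ad and the combined count condition are my additions):

variable "ad" {
  type = list(string)
}

resource "odo_deployment" "incident-management-service-dev" {
  # Skip entirely when the version is "skip"; otherwise one instance per AD.
  count = var.artifact_versions["incident-management-service"].version == "skip" ? 0 : length(var.ad)
  ad    = var.ad[count.index]
  alias = "cloud-incident-management-application"

  artifact {
    url       = var.artifact_versions["incident-management-service"].uri
    build_tag = var.artifact_versions["incident-management-service"].version
    type      = var.artifact_versions["incident-management-service"].type
  }

  flags = ["SKIP_UP_TO_DATE_NODES"]
}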
I am trying to access the kubernetes_secret data.token attribute in Terraform, but I keep getting the error:
Resource 'data.kubernetes_secret.misp_whitelist_secret' does not have attribute 'data.token' for variable 'data.kubernetes_secret.misp_whitelist_secret.data.token'
What's the way to resolve this issue?
resource "kubernetes_service_account" "misp_whitelist_sa" {
  metadata {
    name = "misp-whitelist-sa"
  }
}

data "kubernetes_secret" "misp_whitelist_secret" {
  metadata {
    name      = "${kubernetes_service_account.misp_whitelist_sa.default_secret_name}"
    namespace = "${kubernetes_service_account.misp_whitelist_sa.metadata.0.namespace}"
  }
  depends_on = [
    "kubernetes_service_account.misp_whitelist_sa",
  ]
}
And I'm trying to access data.token inside the Terraform google_cloudfunctions_function resource:
resource "google_cloudfunctions_function" "misp_whitelist_function" {
  name = "${var.cluster}-misp-whitelist"
  # ....<additional data> .....
  environment_variables = {
    CLUSTER = "${var.cluster}"
    PROJECT = "${var.project}"
    AUTH    = "${data.kubernetes_secret.misp_whitelist_secret.data.token}"
  }
}
The correct way to access the data secret key is:
AUTH = "${data.kubernetes_secret.misp_whitelist_secret.data["token"]}"
OK, I banged my head against a wall here for a really long time. The other answer is correct, but it skips a crucial step: you need to make sure the secret declares the correct type (and possibly the service-account annotation as well):
resource "kubernetes_secret" "vault" {
  metadata {
    name = "vault-token"
    annotations = {
      "kubernetes.io/service-account.name" = "vault"
    }
  }
  type = "kubernetes.io/service-account-token" // THIS!
}
Then, once you have the proper type specified, you can use the token:
output "token" {
  value = kubernetes_secret.vault.data.token
}
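For completeness, a sketch tying the two answers together (resource names here are hypothetical). On Kubernetes 1.24+, service accounts no longer get a token secret created automatically, so the secret must be declared explicitly with this type and annotation, and the token is then read by indexing the data map:

resource "kubernetes_service_account" "vault" {
  metadata {
    name = "vault"
  }
}

resource "kubernetes_secret" "vault" {
  metadata {
    name = "vault-token"
    annotations = {
      # Binds the token secret to the service account above.
      "kubernetes.io/service-account.name" = kubernetes_service_account.vault.metadata[0].name
    }
  }
  type = "kubernetes.io/service-account-token"
}

output "token" {
  value     = kubernetes_secret.vault.data["token"]
  sensitive = true
}

Note sensitive = true: newer Terraform versions refuse to output secret-derived values without it.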