Terraform Snowflake provider doesn't allow updating/removing a Snowflake task

I created a task in Snowflake with Terraform. It is created as expected, and the new task shows up in both Snowflake and the .tfstate. When I try to update the task (i.e. change the schedule) and apply the changes with terraform apply, Terraform tells me:
│ Error: error retrieving root task TASK_MO: failed to locate the root node of: []: sql: no rows in result set
│
│ with snowflake_task.load_from_s3["MO"],
│ on main.tf line 946, in resource "snowflake_task" "load_from_s3":
│ 946: resource "snowflake_task" "load_from_s3" {
I did this just after creation, so no manual changes were made in Snowflake.
My assumption is that it can't find the actual task in Snowflake.
My resource:
resource "snowflake_task" "load_from_s3" {
  for_each      = snowflake_stage.all
  name          = "TASK_${each.key}"
  database      = snowflake_database.database.name
  schema        = snowflake_schema.load_schemas["SRC"].name
  comment       = "Task to copy the ${each.key} messages from S3"
  schedule      = "USING CRON 0 7 * * * UTC"
  sql_statement = "COPY into ${snowflake_database.database.name}.${snowflake_schema.load_schemas["SRC"].name}.${each.key} from (select ${local.stages[each.key].fields}convert_timezone('UTC', current_timestamp)::timestamp_ntz,metadata$filename,metadata$file_row_number from #${snowflake_database.database.name}.${snowflake_schema.load_schemas["SRC"].name}.${each.key} (file_format => '${snowflake_database.database.name}.${snowflake_schema.load_schemas["SRC"].name}.${snowflake_file_format.generic.name}')) on_error=skip_file"
  enabled       = local.stages[each.key].is_enabled

  lifecycle {
    ignore_changes = [after]
  }
}
The resource in .tfstate:
{
  "index_key": "MO",
  "schema_version": 0,
  "attributes": {
    "after": "[]",
    "comment": "Task to copy the MO messages from S3",
    "database": "ICEBERG",
    "enabled": true,
    "error_integration": "",
    "id": "ICEBERG|SRC|TASK_MO",
    "name": "TASK_MO_FNB",
    "schedule": "USING CRON 0 8 * * * UTC",
    "schema": "SRC",
    "session_parameters": null,
    "sql_statement": "COPY into ICEBERG.SRC.MO from (select $1,convert_timezone('UTC', current_timestamp)::timestamp_ntz,metadata$filename,metadata$file_row_number from #ICEBERG.SRC.MO (file_format =\u003e 'ICEBERG.SRC.GENERIC')) on_error=skip_file",
    "user_task_managed_initial_warehouse_size": "",
    "user_task_timeout_ms": null,
    "warehouse": "",
    "when": ""
  },
  "sensitive_attributes": [],
  "private": "bnVsbA==",
  "dependencies": [
    "snowflake_database.database",
    "snowflake_file_format.generic",
    "snowflake_schema.load_schemas",
    "snowflake_stage.all"
  ]
},
The query being run on Snowflake that (I guess) should identify the existing task. This query indeed returns zero rows (which corresponds with the error message from Terraform).
SHOW TASKS LIKE '[]' IN SCHEMA "ICEBERG"."SRC"
Does anyone know what I can do to be able to update the task with Terraform?
Thanks, Chris

The issue is reported here - Existing Task in plan & apply change & error #1071. Upgrading the provider version to snowflake-labs/snowflake 0.37.0 should resolve the issue.
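For reference, a minimal sketch of pinning the provider to that release (this assumes your configuration uses the Snowflake-Labs/snowflake registry source; adjust the source address if yours differs):

terraform {
  required_providers {
    snowflake = {
      # Assumed registry source for the Snowflake-Labs provider;
      # 0.37.0 is the release the linked issue reports as containing the fix.
      source  = "Snowflake-Labs/snowflake"
      version = ">= 0.37.0"
    }
  }
}

After updating the constraint, run terraform init -upgrade so the newer provider version is actually downloaded before the next plan/apply.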

Related

Azure Databricks CLI: update workflow/job definition

I have created a pipeline in Azure DevOps to perform the following three steps:
Retrieve the job definition from one Databricks workspace and save it as a json (Databricks CLI config is omitted)
databricks jobs get --job-id $(job_id) > workflow.json
Use this json to update the workflow in a second (separate) Databricks workspace (Databricks CLI is first reconfigured to point to the new workspace)
databricks jobs reset --job-id $(job_id) --json-file workflow.json
Run the updated job in the second Databricks workspace
databricks jobs run-now --job-id $(job_id)
However, my pipeline fails at step 2 with the following error, even though the existing_cluster_id is already defined inside the workflow.json. Any idea?
Error: b'{"error_code":"INVALID_PARAMETER_VALUE","message":"One of job_cluster_key, new_cluster, or existing_cluster_id must be specified."}' 
Here is what my workflow.json looks like (hiding some of the details):
{
  "job_id": 123,
  "creator_user_name": "user1",
  "run_as_user_name": "user1",
  "run_as_owner": true,
  "settings": {
    "name": "my-workflow",
    "existing_cluster_id": "abc-def-123-xyz",
    "email_notifications": {
      "no_alert_for_skipped_runs": false
    },
    "webhook_notifications": {},
    "timeout_seconds": 0,
    "notebook_task": {
      "notebook_path": "notebooks/my-notebook",
      "base_parameters": {
        "environment": "production"
      },
      "source": "GIT"
    },
    "max_concurrent_runs": 1,
    "git_source": {
      "git_url": "https://my-org#dev.azure.com/my-project/_git/my-repo",
      "git_provider": "azureDevOpsServices",
      "git_branch": "master"
    },
    "format": "SINGLE_TASK"
  },
  "created_time": 1676477563075
}
I figured out that you don't need to retrieve the entire workflow definition json file, as shown in step 1, but only the "settings" part, i.e. modifying step 1 to this solved my issue:
databricks jobs get --job-id $(job_id) | jq .settings > workflow.json

Renaming files in a nested directory with Azure Data Factory

I have a daily export set up for several subscriptions - the files export like so,
with 7 different directories within daily. I'm simply trying to rename the files to get rid of the underscore for data flows.
My parent pipeline looks like so:
Get Metadata gets the folder names and ForEach invokes the child pipeline, like so.
Here are the screen grabs of the child pipeline:
Copy data within the foreach1 -- the source.
And now the sink - this is where I want to rename the file. The first time I debugged it, it simply copied the files to the correct place with a .txt extension; the next time it got the extension right, but it is not renaming the file.
I replaced #replace(item().name, '_', '-') with #replace(activity('FileInfo').output.itemName, '_','-') and got the following error:
The expression '#replace(activity('FileInfo').output.itemName, '_','-')' cannot be evaluated because property 'itemName' doesn't exist, available properties are 'childItems, effectiveIntegrationRuntime, executionDuration, durationInQueue, billingReference'.
so then I replaced that with
#replace(activity('FileInfo').output.childItems, '_', '-')
but that gives the following error
Cannot fit childItems return type into the function parameter string
I'm not sure where to go from here
Edit 7/14
Making the change from the answer below:
Here is my linked service for the sink dataset with the parameter renamedFile.
Here is the sink on the copy data1 for the child_Rename pipeline; it grayed out the file extension, as this was mentioned.
Now here is the sink container after running the pipeline.
This is the directory structure of the source data - it's dynamically created from scheduled daily Azure exports.
Here is the output of Get Metadata - FileInfo from the child pipeline:
{
  "childItems": [
    {
      "name": "daily",
      "type": "Folder"
    }
  ],
  "effectiveIntegrationRuntime": "integrationRuntime1 (Central US)",
  "executionDuration": 0,
  "durationInQueue": {
    "integrationRuntimeQueue": 0
  },
  "billingReference": {
    "activityType": "PipelineActivity",
    "billableDuration": [
      {
        "meterType": "AzureIR",
        "duration": 0.016666666666666666,
        "unit": "Hours"
      }
    ]
  }
}
allsubs - source container
daily - directory created by the scheduled export
sub1 - subN - the different subs with scheduled exports
previous-month -> this-month - monthly folders are created automatically
this_fileXX.csv -- files are automatically generated with the underscore in the name - it is my understanding that data flows cannot handle these characters in the file name
allsubs/
└── daily/
    ├── sub1/
    │   ├── previous-month/
    │   │   ├── this_file.csv
    │   │   └── this_file1.csv
    │   ├── previous-month/
    │   │   ├── this_file11.csv
    │   │   └── this_file12.csv
    │   └── this-month/
    └── subN/
        ├── previous-month/
        ├── previous-month/
        └── this-month/
            └── this_fileXX.csv
Edit 2 - July 20
I think I'm getting closer, but there are still some small errors I do not see.
The pipeline now moves all the files from the container allsubs to the container renamed-files, but it is not renaming the files - it looks like so.
Get Metadata - from the dataset allContainers it retrieves the folders with the Child Items.
Dataset allContainers shown (preview works, linked service works, no parameters in this dataset).
Next the ForEach activity calls the output of Get Metadata,
for the items: #activity('Get Metadata1').output.childItems
Next shown is the copy data within ForEach.
The source is the allContainers dataset with the wildcard file path selected, recursive selected, and, due to the following error, max concurrent connections set to 1 -- but this did not resolve the error.
error message:
Failure happened on 'Sink' side.
ErrorCode=AzureStorageOperationFailedConcurrentWrite,
'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,
Message=Error occurred when trying to upload a file.
It's possible because you have multiple concurrent copy activities
runs writing to the same file 'renamed-files/rlcosts51122/20220601-20220630/rlcosts51122_082dd29b-95b2-4da5-802a-935d762e89d8.csv'.
Check your ADF configuration.
,Source=Microsoft.DataTransfer.ClientLibrary,
''Type=Microsoft.WindowsAzure.Storage.StorageException,
Message=The remote server returned an error: (400) Bad
Request.,Source=Microsoft.WindowsAzure.Storage,StorageExtendedMessage=The specified block list is invalid.
RequestId:b519219f-601e-000d-6c4c-9c9c5e000000
Time:2022-07-
20T15:23:51.4342693Z,,''Type=System.Net.WebException,
Message=The remote server returned an error: (400) Bad
Request.,Source=Microsoft.WindowsAzure.Storage,'
Copy data source:
Copy data sink - the dataset is dsRenamesink; it's simply another container in a different storage account. The linked service is set up correctly, and it has the parameter renamedFile, but I suspect this is the source of my error. Still testing that.
Sink dataset dsRenamesink:
Parameter page:
Here's the sink in the copy data where the renamed file is passed the iterator from ForEach1, like so:
#replace(item().name,'_','renameworked')
So the underscore would be replaced with 'renameworked' - easy enough to test.
Debugging the pipeline:
The errors look to be consistent for the 7 failures, which were shown above as the 'failure happened on the sink side'.
However - going into the storage account sink, I can see that all of the files from the source were copied over to the sink, but the files were not renamed, like so.
Pipeline output:
Error messages:
{
"dataRead": 28901858,
"dataWritten": 10006989,
"filesRead": 4,
"filesWritten": 0,
"sourcePeakConnections": 1,
"sinkPeakConnections": 1,
"copyDuration": 7,
"throughput": 4032.067,
"errors": [
{
"Code": 24107,
"Message": "Failure happened on 'Sink' side. ErrorCode=AzureStorageOperationFailedConcurrentWrite,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Error occurred when trying to upload a file. It's possible because you have multiple concurrent copy activities runs writing to the same file 'renamed-files/rlcosts51122/20220601-20220630/rlcosts51122_082dd29b-95b2-4da5-802a-935d762e89d8.csv'. Check your ADF configuration.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=Microsoft.WindowsAzure.Storage.StorageException,Message=The remote server returned an error: (400) Bad Request.,Source=Microsoft.WindowsAzure.Storage,StorageExtendedMessage=The specified block list is invalid.\nRequestId:b519219f-601e-000d-6c4c-9c9c5e000000\nTime:2022-07-20T15:23:51.4342693Z,,''Type=System.Net.WebException,Message=The remote server returned an error: (400) Bad Request.,Source=Microsoft.WindowsAzure.Storage,'",
"EventType": 0,
"Category": 5,
"Data": {
"FailureInitiator": "Sink"
},
"MsgId": null,
"ExceptionType": null,
"Source": null,
"StackTrace": null,
"InnerEventInfos": []
}
],
"effectiveIntegrationRuntime": "AutoResolveIntegrationRuntime (Central US)",
"usedDataIntegrationUnits": 4,
"billingReference": {
"activityType": "DataMovement",
"billableDuration": [
{
"meterType": "AzureIR",
"duration": 0.06666666666666667,
"unit": "DIUHours"
}
]
},
"usedParallelCopies": 1,
"executionDetails": [
{
"source": {
"type": "AzureBlobFS",
"region": "Central US"
},
"sink": {
"type": "AzureBlobStorage"
},
"status": "Failed",
"start": "Jul 20, 2022, 10:23:44 am",
"duration": 7,
"usedDataIntegrationUnits": 4,
"usedParallelCopies": 1,
"profile": {
"queue": {
"status": "Completed",
"duration": 3
},
"transfer": {
"status": "Completed",
"duration": 2,
"details": {
"listingSource": {
"type": "AzureBlobFS",
"workingDuration": 0
},
"readingFromSource": {
"type": "AzureBlobFS",
"workingDuration": 0
},
"writingToSink": {
"type": "AzureBlobStorage",
"workingDuration": 0
}
}
}
},
"detailedDurations": {
"queuingDuration": 3,
"transferDuration": 2
}
}
],
"dataConsistencyVerification": {
"VerificationResult": "NotVerified"
},
"durationInQueue": {
"integrationRuntimeQueue": 0
}
}
All I wanted to do was remove the underscore from the file name to work with data flows... I'm not sure what else to try next.
Next attempt, July 20
It appears that now I have been able to copy and rename some of the files -
changing the sink dataset as follows:
#concat(replace(dataset().renamedFile,'_','-'),'',formatDateTime(utcnow(),'yyyyMMddHHmmss'),'.csv')
and removing this parameter from the sink in the copy activity.
Upon debugging this pipeline I get 1 file in the sink, and it is named correctly, but there is still something wrong.
Third attempt, 7/20
Further updating to be closer to the original answer:
Sink dataset
Copy data activity in the sink - concat works.
Now after debugging I'm left with 1 file for each of the subs - so there is something still not quite correct.
I reproduced the same thing in my environment.
Go to the sink dataset and open it. First create a parameter and add dynamic content; I used this expression: #dataset().sinkfilename
In the copy activity sink, under dataset properties, pass the filename value using the expression #replace(item().name,'_','-') to replace _ with -.
When you create a dataset parameter to pass the filename, the File extension property is automatically disabled.
When the pipeline runs you can see the file name has been renamed accordingly.

How to handle lists for an EKS assume role policy

I have 2 EKS clusters as part of our upgrade. I want to handle the assume role policy such that it has access to both EKS clusters. Both clusters are in the same AWS account.
I want my policy to look like the policy below, such that we are not updating any roles, only the assume policy, to handle both clusters.
locals.tf
eks_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::11111111111:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/yyyyyyyyyyyyyyyyyy",
        "Federated": "arn:aws:iam::11111111111:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/yyyyyyyyyyyyyyyyyy"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-east-1.amazonaws.com/id/xxxxxxxxxxxxxx:sub": "system:serviceaccount:%s:%s",
          "oidc.eks.us-east-1.amazonaws.com/id/xxxxxxxxxxxxx:sub": "system:serviceaccount:%s:%s"
        }
      }
    }
  ]
}
EOF
Launcher = "job-Launcher"
Role.tf
resource "aws_iam_role" "launcher" {
name = local.Launcher
assume_role_policy = format(local.eks_policy, "my-namepsace", local.Launcher)
tags = {
terraform = "true"
owner = "stg"
}
}
So I tried this in locals.tf:
count = length(var.federated)
eks_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::11111111111:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/${join(",",${element(var.federated, count.index)})}",
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-east-1.amazonaws.com/id/${join(",", ${element(var.federated, count.index)})}:sub": "system:serviceaccount:%s:%s"
        }
      }
    }
  ]
}
But I'm getting an error, as count cannot be used within locals.tf.
Can someone please help me?
Update 2:
How do we get something like this:
"Condition": {
"StringEquals": {
"oidc.eks.us-east-1.amazonaws.com/id/xxxxxxxxxxxxx:sub": "system:serviceaccount:ihr-system:ihr-system-external-dns18",
"oidc.eks.us-east-1.amazonaws.com/id/yyyyyyyyyyyyy:sub": "system:serviceaccount:ihr-system:ihr-system-external-dns"
}
}
I tried this:
federated = [
  "xxxxxxxxxxxxxxxxxxxxxx",
  "yyyyyyyyyyyyyyyyyyyyyy"
]
Condition : {
  "StringEquals" : {
    join("",[for oidc in local.federated:"oidc.eks.us-east-1.amazonaws.com/id/${oidc}:sub:","system:serviceaccount:%s:%s"])
  }
I'm getting a syntax error near "in" ("local expected"), and another error:
',' or '}' expected got '"system:serviceaccount.."'
for oidc in local.federated
Terraform's format function expects one argument per placeholder. From the documentation:
The specification is a string that includes formatting verbs that are introduced with the % character. The function call must then have one additional argument for each verb sequence in the specification. The verbs are matched with consecutive arguments and formatted as directed, as long as each given argument is convertible to the type required by the format verb.
With that said, you need to provide four arguments, even though they are the same local values in your case:
format(local.eks_policy, "my-namepsace", local.Launcher, "my-namepsace", local.Launcher)
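Applied to the role resource from the question, the call would look something like the following sketch (names and values are reused from the question as-is):

resource "aws_iam_role" "launcher" {
  name = local.Launcher
  # format() now receives one argument for each of the four %s verbs in local.eks_policy
  assume_role_policy = format(local.eks_policy, "my-namepsace", local.Launcher, "my-namepsace", local.Launcher)

  tags = {
    terraform = "true"
    owner     = "stg"
  }
}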
Depending on your use case, you might also consider defining a list of objects with configuration and building the policy statement with a loop in order to prepare the final string.
Update 1
An example with dynamic generation might look like this, where the role could be assumed by any account from local.params:
locals {
  # key = account ID, value could be whatever
  params = {
    "1111" = { foo = "bar" },
    "2222" = { x = "y" }
  }

  assume_role_str = jsonencode({
    # skipped beginning for brevity
    Effect = "Allow",
    Principal = {
      Federated : [for account in keys(local.params) : "arn:aws:iam::11111111111:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/${account}"]
    }
  })
}
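Putting the pieces together, here is a minimal sketch that builds both the Federated principals and the StringEquals condition from a list of OIDC provider IDs with for expressions, avoiding format() entirely. The OIDC IDs, account number, and namespace are illustrative placeholders; local.Launcher is taken from the question:

locals {
  Launcher  = "job-Launcher"
  federated = ["xxxxxxxxxxxxxx", "yyyyyyyyyyyyyy"]

  eks_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Principal = {
          # one federated principal per OIDC provider ID
          Federated = [
            for oidc in local.federated :
            "arn:aws:iam::11111111111:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/${oidc}"
          ]
        }
        Action = "sts:AssumeRoleWithWebIdentity"
        Condition = {
          # one StringEquals key per OIDC provider ID
          StringEquals = {
            for oidc in local.federated :
            "oidc.eks.us-east-1.amazonaws.com/id/${oidc}:sub" => "system:serviceaccount:my-namespace:${local.Launcher}"
          }
        }
      }
    ]
  })
}

resource "aws_iam_role" "launcher" {
  name               = local.Launcher
  assume_role_policy = local.eks_policy
}

Adding a second cluster then only means appending its OIDC provider ID to local.federated.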

How can I reference an aws_cognito_user_pools id in `Terraform`?

I have the below Terraform configuration for a Cognito client:
data "aws_cognito_user_pools" "re_user_pool" {
name = "${var.cognito_user_pool_name}"
}
resource "aws_cognito_user_pool_client" "app_client" {
name = "re-app-client"
user_pool_id = data.aws_cognito_user_pools.re_user_pool.id
depends_on = [data.aws_cognito_user_pools.re_user_pool]
explicit_auth_flows = ["USER_PASSWORD_AUTH"]
prevent_user_existence_errors = "ENABLED"
allowed_oauth_flows_user_pool_client = true
allowed_oauth_flows = ["code"]
allowed_oauth_scopes = ["phone", "openid", "email", "profile", "aws.cognito.signin.user.admin"]
supported_identity_providers = ["COGNITO", "Google"]
callback_urls = ["https://scnothzsf0.execute-api.ap-southeast-2.amazonaws.com/staging/signup"]
}
It references the Cognito user pool which already exists on AWS. The error happens on the line user_pool_id = data.aws_cognito_user_pools.re_user_pool.id, where the user pool id is used in aws_cognito_user_pool_client.
I get the error:
Error: Error creating Cognito User Pool Client: InvalidParameterException: 1 validation error detected: Value 're-user' at 'userPoolId' failed to satisfy constraint: Member must satisfy regular expression pattern: [\w-]+_[0-9a-zA-Z]+
on infra/cognito.tf line 5, in resource "aws_cognito_user_pool_client" "app_client":
5: resource "aws_cognito_user_pool_client" "app_client" {`
It seems the format of the ID is not correct. I have read this document https://www.terraform.io/docs/providers/aws/d/cognito_user_pools.html and it has a reference attribute ids - "The list of cognito user pool ids". I wonder why it gives a list of user pool ids. How can I reference this ID?
I also tried to reference it as user_pool_id = data.aws_cognito_user_pools.re_user_pool.ids[0] but got this error:
Error: Invalid index
on infra/cognito.tf line 8, in resource "aws_cognito_user_pool_client" "app_client":
8: user_pool_id = data.aws_cognito_user_pools.re_user_pool.ids[0]
This value does not have any indices.
The re_user_pool referenced above is defined here:
resource "aws_cognito_user_pool" "re_user_pool" {
name = "re-user"
}
I came across your question while working through this same problem. I see the question is several months old, but I'm still going to add an answer for anyone else that ends up here like I did.
First, the solution is to convert the ids value from a set to a list via the tolist function and then access it as you would any Terraform list.
Caveat: In my case, I have ensured I only have one user pool for a given name, but you could get multiple user pools if you haven't followed this convention. This solution will not be a complete solution for that situation, but perhaps it will still point in the right direction.
Example code:
data "aws_cognito_user_pools" "test" {
name = "a_name"
}
output "test" {
value = "${tolist(data.aws_cognito_user_pools.test.ids)[0]"
}
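Applied to the resource from the question, the reference would look something like this sketch (only the user_pool_id line changes; the remaining arguments stay as in the question):

resource "aws_cognito_user_pool_client" "app_client" {
  name = "re-app-client"
  # take the first (and, by the naming convention above, only) matching pool ID from the set
  user_pool_id = tolist(data.aws_cognito_user_pools.re_user_pool.ids)[0]
  # ... remaining arguments unchanged
}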
Second, how I arrived at it:
I added an output block so I could see what I was working with and I commented out the problematic lines in my terraform file so I could successfully execute terraform apply. Next I ran terraform apply followed by terraform output --json (note: the apply must be successful for output to have the latest values).
Example temporary output block:
output "test" {
value = "${data.aws_cognito_user_pools.test}" // output top-level object for debugging
}
Relevant terraform apply output:
test = {
  "arns" = [
    "<redacted>",
  ]
  "id" = "a_name"
  "ids" = [
    "us-east-1_<redacted>",
  ]
  "name" = "a_name"
}
Relevant terraform output --json output:
"test": {
"sensitive": false,
"type": [
"object",
{
"arns": [
"set",
"string"
],
"id": "string",
"ids": [
"set",
"string"
],
"name": "string"
}
],
"value": {
"arns": [
"<redacted>"
],
"id": "a_name",
"ids": [
"us-east-1_<redacted>"
],
"name": "a_name"
}
}
As you can see, the ids portion is a set of type string. I decided to try converting ids to a list to see if I could then access the 0 index, and it worked. I feel like this could be a Terraform bug, but I haven't filed an issue yet.

Terraform breaking Azure Logic App connections

I am creating an Azure Logic App (using it to unzip to a blob storage). For this I need the Logic App workflow and a connection to the blob storage. I create the empty Logic App workflow with Terraform and the actual Logic App implementation with Visual Studio, which I then deploy to the Logic App created with Terraform.
I use the following Terraform code to create the empty Logic App workflow:
resource "azurerm_logic_app_workflow" "logic_unzip" {
name = "ngh-${var.deployment}-unziplogic"
resource_group_name = "${azurerm_resource_group.rg.name}"
location = "${azurerm_resource_group.rg.location}"
}
As the Logic App needs a connection to the blob storage, I use the following template to create it:
resource "azurerm_template_deployment" "depl_connection_azureblob" {
name = "azureblob"
resource_group_name = "${azurerm_resource_group.rg.name}"
template_body = <<DEPLOY
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"connection_name": {"type": "string"},
"storage_name": {"type": "string"},
"storage_access_key": {"type": "string"},
"location": {"type": "string"},
"api_id": {"type": "string"}
},
"resources": [{
"type": "Microsoft.Web/connections",
"name": "[parameters('connection_name')]",
"apiVersion": "2016-06-01",
"location": "[parameters('location')]",
"scale": null,
"properties": {
"displayName": "[parameters('connection_name')]",
"api": {
"id": "[parameters('api_id')]"
},
"parameterValues": {
"accountName": "[parameters('storage_name')]",
"accessKey": "[parameters('storage_access_key')]"
}
},
"dependsOn": []
}]
}
DEPLOY
parameters = {
"connection_name" = "azureblob"
"storage_name" = "${azurerm_storage_account.sa-main.name}"
"storage_access_key" = "${azurerm_storage_account.sa-main.primary_access_key}"
"location" = "${azurerm_resource_group.rg.location}"
"api_id" = "${data.azurerm_subscription.current.id}/providers/Microsoft.Web/locations/${azurerm_resource_group.rg.location}/managedApis/azureblob"
}
deployment_mode = "Incremental"
}
Running plan and apply, these work perfectly. In Visual Studio I can then create the Logic App and use the azureblob connection to select the correct blob storage.
Now, when I have deployed the Logic App Workflow from Visual Studio and run terraform plan I get following changes:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  ~ azurerm_logic_app_workflow.logic_unzip
      parameters.$connections: "" => ""
      parameters.%:            "1" => "0"

Plan: 0 to add, 1 to change, 0 to destroy.
Running the apply command now will break the Logic App as it removes the bound connection. Clearly the Visual Studio deploy has created the binding between the Logic App and the connection.
How can I tell Terraform not to remove the connections (created by the Visual Studio deploy) from the Logic App?
Terraform is not aware of the resources deployed in the ARM template, so it detects the state change and tries to "fix" it. I don't see any Terraform resources for Logic App connections, so, seeing how it detects that parameters.$connections changed from 0 to 1, adding your connection directly to the workflow resource might work. However, the docs mention: "Any parameters specified must exist in the Schema defined in workflow_schema", and I don't see connections in the schema, which is a bit weird, so I assume I'm misreading the schema.
You can also use ignore_changes:
lifecycle {
  ignore_changes = [
    "parameters.$connections"
  ]
}
According to the comments and this further reading:
https://www.terraform.io/docs/configuration/resources.html#ignore_changes
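For a fuller picture, a minimal sketch of the workflow resource from the question with the lifecycle block in place. Note that the quoted "parameters.$connections" form above is the older 0.11-style syntax; on Terraform 0.12+ you would reference the attribute directly, shown here ignoring the whole parameters map:

resource "azurerm_logic_app_workflow" "logic_unzip" {
  name                = "ngh-${var.deployment}-unziplogic"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location

  lifecycle {
    # Ignore drift in parameters (which is where the Visual Studio deployment
    # stores the $connections binding) so terraform apply does not strip the
    # connection again.
    ignore_changes = [parameters]
  }
}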
