Spinnaker installation via Helm in Terraform

I am trying to install Spinnaker on Kubernetes with Terraform using the following resource:
resource "helm_release" "spinnaker" {
chart = "stable/spinnaker"
name = "spinnaker"
repository = "https://helmcharts.opsmx.com/"
version = "2.2.3"
I keep receiving the following error message:
Error: chart "stable/spinnaker" version "2.2.3" not found in https://helmcharts.opsmx.com/ repository
on spinnaker-helm/helm.tf line 1, in resource "helm_release" "spinnaker":
1: resource "helm_release" "spinnaker" {
Here is the Artifact Hub chart we are trying to refer to:
https://artifacthub.io/packages/helm/spinnaker/spinnaker
I am using Terraform 0.14 and Helm provider 1.2.3.
Thank you for your help.
Charles

The chart is "spinnaker/spinnaker" and not "stable/spinnaker".
In general stable repo was deprecated and wiped on Nov 2020.
Those are the available charts in the repot you are adding:
$ helm search repo spinnaker
NAME                       CHART VERSION  APP VERSION  DESCRIPTION
spinnaker/spinnaker        2.2.3          1.22.2       Open source, multi-cloud continuous delivery pl...
spinnaker/oes              3.4.0          3.4.0        OES is a non-forked version of OSS spinnaker
spinnaker/opsmx-autopilot  2.0.0          2.9.10       Autopilot is a release verification platform
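Applied to the question's resource, a corrected block might look like this (note that with the Terraform helm provider, when repository is set to a URL, chart takes the bare chart name, so "spinnaker" rather than "spinnaker/spinnaker"):
resource "helm_release" "spinnaker" {
  name       = "spinnaker"
  repository = "https://helmcharts.opsmx.com/"
  chart      = "spinnaker"
  version    = "2.2.3"
}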

Related

Terraform helm release timing out problem

I am trying to deploy a helm_release using a Terraform script. My .tf file looks like below:
resource "helm_release" "ingress-aws" {
name = "ingress-aws"
repository = "https://aws.github.io/eks-charts"
chart = "aws-load-balancer-controller"
version = "2.1.3"
namespace = kubernetes_namespace.aws-ingress-controller.metadata[0].name
#namespace = "ingress-aws"
atomic = true
timeout = 300
values = [<<EOF
clusterName: dcp-daas-test14-dcp-eks
serviceAccount:
name: aws-load-balancer
When I run my deployment pipeline, after reaching this stage it waits for 5 minutes (as timeout = 300 sec) and then errors out with something like this:
Error: release ingress-aws failed, and has been uninstalled due to atomic being set: timed out waiting for the condition
on ingress.tf line 64, in resource "helm_release" "ingress-aws":
64: resource "helm_release" "ingress-aws" {
I tried changing the version of the helm release and tried applying this helm_release on a newly created EKS cluster, yet the same issue occurs.
Any advice would be appreciated; ask in the comments if you want more info about the question.
Also, I tried installing helm release chart version v1.1.6 and it works fine, but when I go with v1.2.2 it gives me the above error. Not sure if it is caused by a bug in the latest version of the helm chart!
Thanks and cheers!
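Since atomic = true uninstalls the failed release before it can be inspected, one debugging approach is to temporarily set atomic = false, re-apply, and then look at the controller pods directly. A sketch, assuming the ingress-aws namespace and the chart's conventional label selector:
# is the controller pod Pending or CrashLooping?
kubectl get pods -n ingress-aws
# pod events usually name the blocker (missing IAM permissions, image pull failure, webhook cert, etc.)
kubectl describe pods -n ingress-aws -l app.kubernetes.io/name=aws-load-balancer-controller
kubectl logs -n ingress-aws -l app.kubernetes.io/name=aws-load-balancer-controller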

"Invalid legacy provider address" error on Terraform

I'm trying to deploy a Bitbucket pipeline using Terraform v0.14.3 to create resources in Google Cloud. After running the terraform command, the pipeline fails with this error:
Error: Invalid legacy provider address
This configuration or its associated state refers to the unqualified provider
"google".
You must complete the Terraform 0.13 upgrade process before upgrading to later
versions.
We updated our local version of Terraform to v0.13.0 and then ran terraform 0.13upgrade as referenced in this guide: https://www.terraform.io/upgrade-guides/0-13.html. A versions.tf file was generated requiring Terraform version >= 0.13, and our required provider block now looks like this:
terraform {
  backend "gcs" {
    bucket      = "some-bucket"
    prefix      = "terraform/state"
    credentials = "key.json" #this is just a bitbucket pipeline variable
  }

  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 2.20.0"
    }
  }
}

provider "google" {
  project     = var.project_ID
  credentials = "key.json"
  region      = var.project_region
}
We still get the same error when initiating the Bitbucket pipeline. Does anyone know how to get past this error? Thanks in advance.
Solution
If you are using a newer version of Terraform, such as v0.14.x, you should use the replace-provider subcommand:
terraform state replace-provider \
  -auto-approve \
  "registry.terraform.io/-/google" \
  "hashicorp/google"
#=>
Terraform will perform the following actions:
~ Updating provider:
- registry.terraform.io/-/google
+ registry.terraform.io/hashicorp/google
Changing x resources:
. . .
Successfully replaced provider for x resources.
Then initialize Terraform again:
terraform init
#=>
Initializing the backend...
Initializing provider plugins...
- Reusing previous version of hashicorp/google from the dependency lock file
- Using previously-installed hashicorp/google vx.xx.x
Terraform has been successfully initialized!
You may now begin working with Terraform. Try . . .
This should take care of installing the provider.
Explanation
Terraform only supports upgrading across one major release at a time. Your older state file was, more than likely, created using a version earlier than v0.13.x.
If you did not run the apply command before you upgraded your Terraform version, you can expect this error: the upgrade from v0.13.x to v0.14.x was not complete.
You can find more information here.
In our case, we were on AWS and had a similar error:
...
Error: Invalid legacy provider address
This configuration or its associated state refers to the unqualified provider
"aws".
The steps to resolve were:
ensure the syntax was upgraded by running terraform init again
check the warnings and resolve them
and finally, update the state file with the following method:
# update provider in state file
terraform state replace-provider -- -/aws hashicorp/aws
# reinit
terraform init
Specific to the OP's problem: if the issue still occurs, verify access to the bucket location both locally and from the pipeline, and also verify the version of Terraform running in the pipeline. Depending on the configuration, it may be that the remote state file cannot be updated.
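For example, assuming a GCS backend like the OP's (bucket and prefix taken from the question; adjust to your setup):
# compare the client version locally against the pipeline's output
terraform version
# confirm the state object is reachable with the credentials in use
gsutil ls gs://some-bucket/terraform/state/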
Same issue for me. I ran:
terraform providers
That gave me:
Providers required by configuration:
registry.terraform.io/hashicorp/google
Providers required by state:
registry.terraform.io/-/google
So I ran:
terraform state replace-provider registry.terraform.io/-/google registry.terraform.io/hashicorp/google
That did the trick.
To add on, I had installed Terraform 0.14.6 but the state seemed to be stuck on 0.12. In my case I had 3 references that were off; this article helped me pinpoint which ones (all the entries in "Providers required by state" which had a - in the link): https://github.com/hashicorp/terraform/issues/27615
I corrected it by running the replace-provider command for each entry that was off, then running terraform init. After doing this and running a git diff, I noted the tfstate had been updated and now uses 0.14.x Terraform instead of my previous 0.12.x, i.e.:
terraform providers
terraform state replace-provider registry.terraform.io/-/azurerm registry.terraform.io/hashicorp/azurerm
Explanation: Your Terraform project contains a tf.state file that is outdated and referring to an old provider address. The error message will look like this:
Error: Invalid legacy provider address
This configuration or its associated state refers to the unqualified provider
<some-provider>.
You must complete the Terraform <some-version> upgrade process before upgrading to later
versions.
Solution: In order to solve this issue you should change the tf.state references to link to the newer required providers, update the tf.state file, and initialize the project again. The steps are:
Create or edit the required_providers block with the relevant package name and version; I'd rather do it in a versions.tf file.
example:
terraform {
  required_version = ">= 0.14"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.35.0"
    }
  }
}
Run the terraform providers command to compare the required providers from configuration against the required providers saved in state.
example:
Providers required by configuration:
.
├── provider[registry.terraform.io/hashicorp/aws] >= 3.35.0
Providers required by state:
provider[registry.terraform.io/-/aws]
Switch and reassign the required provider source address in the Terraform state (using the terraform state replace-provider command) so we can tell Terraform how to interpret the legacy provider.
The terraform state replace-provider subcommand allows re-assigning
provider source addresses recorded in the Terraform state, and so we
can use this command to tell Terraform how to reinterpret the "legacy"
provider addresses as properly-namespaced providers that match with
the provider source addresses in the configuration.
Warning: The terraform state replace-provider subcommand, like all of
the terraform state subcommands, will create a new state snapshot and
write it to the configured backend. After the command succeeds the
latest state snapshot will use syntax that Terraform v0.12 cannot
understand, so you should perform this step only when you are ready to
permanently upgrade to Terraform v0.13.
example:
terraform state replace-provider registry.terraform.io/-/aws registry.terraform.io/hashicorp/aws
output:
~ Updating provider:
- registry.terraform.io/-/aws
+ registry.terraform.io/hashicorp/aws
Run terraform init to update the references.
While you were on TF 0.13, did you apply state at least once for the running project?
According to the TF docs: https://www.terraform.io/upgrade-guides/0-14.html
There is no separate automatic upgrade command in 0.14 (like there was in 0.13). The only way to upgrade is to apply state on the project at least once while still on 0.13, before moving from 0.13 to 0.14.
You can also try terraform init in the project directory.
My case was like this:
Error: Invalid legacy provider address
This configuration or its associated state refers to the unqualified provider
"openstack".
You must complete the Terraform 0.13 upgrade process before upgrading to later
versions.
To resolve the issue:
remove the .terraform folder
then execute the following command:
terraform state replace-provider -- -/openstack terraform-provider-openstack/openstack
After this command you will see the output below; enter yes:
Terraform will perform the following actions:
~ Updating provider:
- registry.terraform.io/-/openstack
+ registry.terraform.io/terraform-provider-openstack/openstack
Changing 11 resources:
openstack_compute_servergroup_v2.kubernetes_master
openstack_networking_network_v2.kube_router
openstack_compute_instance_v2.kubernetes_worker
openstack_networking_subnet_v2.internal
openstack_networking_subnet_v2.kube_router
data.openstack_networking_network_v2.external_network
openstack_compute_instance_v2.kubernetes_etcd
openstack_networking_router_interface_v2.internal
openstack_networking_router_v2.internal
openstack_compute_instance_v2.kubernetes_master
openstack_networking_network_v2.internal
Do you want to make these changes?
Only 'yes' will be accepted to continue.
Enter a value: yes
Successfully replaced provider for 11 resources.
I recently ran into this using Terraform Cloud for the remote backend. We had some older AWS-related workspaces set to version 0.12.4 (in the cloud) that errored out with "Invalid legacy provider address" and refused to run with the latest Terraform client 1.1.8.
I am adding my answer because it is much simpler than the other answers. We did not do any of the following:
terraform providers
terraform 0.13upgrade
remove the .terraform folder
terraform state replace-provider
Instead we simply:
in a clean folder (no local state, using local terraform.exe version 0.13.7), ran terraform init
made a small, insignificant change (to ensure apply would write state) to a .tf file in the workspace
in Terraform Cloud, set the workspace version to 0.13.7
using the local 0.13.7 terraform.exe, ran apply, which saved new state
Now we can use cloud and local terraform.exe version 1.1.8 with no more problems.
Note that we did also need to update a few AWS S3-related resources to the newer AWS provider syntax to get all our workspaces working with the latest provider.
We encountered a similar problem in our operational environments today. We successfully completed the terraform 0.13upgrade command. This indeed introduced a versions.tf file.
However, performing a terraform init with this setup was still not possible, and the following error popped up:
Error: Invalid legacy provider address
Further investigation in the state file revealed that, for some resources, the provider block was not updated. We hence had to run the following command to finalize the upgrade process.
terraform state replace-provider "registry.terraform.io/-/google" "hashicorp/google"
EDIT: Deployment to the next environment revealed that this was caused by conditional resources. To easily enable/disable some resources we leverage the count attribute and use either 0 or 1. For the resources with count = 0, which were unaltered by Terraform 0.13, the provider was not updated; see the sketch below.
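A minimal sketch of that pattern (hypothetical resource and variable names):
# with var.enable_bucket = false this resource has count = 0: it still
# appears in state, but the 0.13 upgrade left its provider reference untouched
resource "google_storage_bucket" "optional" {
  count    = var.enable_bucket ? 1 : 0
  name     = "my-optional-bucket"
  location = "US"
}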
I was using Terragrunt with remote S3 state and DynamoDB, and sadly this does not work for me. So I am posting it here; it might help someone else.
The long way to make this work, as terragrunt state replace-provider does not work for me:
download the state file from s3
aws s3 cp s3://bucket-name/path/terraform.tfstate terraform.tfstate --profile profile
replace the provider using terraform
terraform state replace-provider "registry.terraform.io/-/random" "hashicorp/random"
terraform state replace-provider "registry.terraform.io/-/aws" "hashicorp/aws"
upload the state file back to S3, as even terragrunt state push terraform.tfstate does not work for me
aws s3 cp terraform.tfstate s3://bucket-name/path/terraform.tfstate --profile profile
terragrunt apply
The command will throw an error with a digest value;
update the DynamoDB table's digest value with the one received in the previous command:
Initializing the backend...
Error refreshing state: state data in S3 does not have the expected content.
This may be caused by unusually long delays in S3 processing a previous state
update. Please wait for a minute or two and try again. If this problem
persists, and neither S3 nor DynamoDB are experiencing an outage, you may need
to manually verify the remote state and update the Digest value stored in the
DynamoDB table to the following value: fe2840edf8064d9225eea6c3ef2e5d1d
Finally, run terragrunt apply.
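For the DynamoDB update itself, something like the following should work (a hedged sketch: the table name is a placeholder, and the LockID shown follows the S3 backend's "<bucket>/<key>-md5" convention, so verify it against the existing item first):
aws dynamodb put-item \
  --table-name my-lock-table \
  --item '{"LockID": {"S": "bucket-name/path/terraform.tfstate-md5"},
           "Digest": {"S": "fe2840edf8064d9225eea6c3ef2e5d1d"}}' \
  --profile profile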
Another way this can get strange is if you are using Terraform workspaces, especially with remote state files.
When using a Terraform workspace, the order of operations is important:
terraform init - connecting to the default workspace
terraform workspace switch <env> - even if you have specified the workspace here, the init will have happened using the default workspace.
This is an assumption that Terraform makes, sometimes erroneously.
To fix this - you can run your init using:
TF_WORKSPACE=<your_env> terraform init
Or remove the default workspace.

Error while installing hashicorp/custom: provider registry

I assume I am using the latest version of azurerm:
provider "azurerm" {
version = "=2.34.0"
features {}
}
As soon as I add this resource to my tf script:
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/policy_assignment
I get this error when I do terraform init:
>terraform init
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/custom...
- Finding hashicorp/azurerm versions matching "2.34.0"...
- Installing hashicorp/azurerm v2.34.0...
- Installed hashicorp/azurerm v2.34.0 (signed by HashiCorp)
Error: Failed to install provider
Error while installing hashicorp/custom: provider registry
registry.terraform.io does not have a provider named
registry.terraform.io/hashicorp/custom
Am I missing any custom Terraform provider? Looking at the Terraform documentation in the link above, I expect the azurerm_policy_definition resource to be included in the azurerm provider.
Thanks to @ChristianPearce, who deserves the credit for this answer.
This is a common and potentially misleading error.
There could be many scripting issues that cause this error.
In my case, my resource type had a typo in it, like below:
resource "azurerm_virtual_network_typo_in_type" "main" {
This could also be a problem with terraform sub-modules as discussed in https://github.com/hashicorp/terraform/issues/25602.
For community providers, every module requires a required_providers block with an entry specifying the provider source.
So basically you need to have this in all your modules and in your main tf script (replace custom-prov-name with your actual provider):
terraform {
  required_providers {
    custom-prov-name = {
      source = ".../custom-prov-name"
    }
  }
}

Multiple provider versions with Terraform

Does anyone know if it is possible to have a Terraform script that uses multiple provider versions?
For example azurerm version 2.0.0 to create one resource, and 1.4.0 for another?
I tried specifying the providers, as documented here: https://www.terraform.io/docs/configuration/providers.html
However, it doesn't seem to work, as it tries to resolve a single provider that fulfills both 1.4.0 and 2.0.0.
It errors like:
No provider "azurerm" plugins meet the constraint "=1.4.0,=2.0.0".
I'm asking this because we have a large Terraform codebase and I would like to migrate bit by bit if doable.
A similar question was raised before: Terraform: How to install multiple versions of provider plugins?
But it got no valid answer.
How to use multiple versions of the same Terraform provider
This allowed us a smooth transition from helm2 to helm3, while enabling new deployments to use helm3 right away, therefore reducing the accumulation of tech debt.
Of course, you can do the same for most providers.
How we've solved this
So the idea is to download a specific version of our provider (helm 0.10.6 in my case) and move it to one of the filesystem mirrors Terraform uses by default. The key part is renaming the plugin binary: in the zip we find terraform-provider-helm_v0.10.6, but we rename it to terraform-provider-helm2_v0.10.6.
PLUGIN_PATH=/usr/share/terraform/plugins/registry.terraform.io/hashicorp/helm2/0.10.6/linux_amd64
mkdir -p $PLUGIN_PATH
# download the release zip to a temp file, extract the binary under the new name
curl -sLo _ 'https://releases.hashicorp.com/terraform-provider-helm/0.10.6/terraform-provider-helm_0.10.6_linux_amd64.zip'
unzip -p _ 'terraform-provider-helm*' > ${PLUGIN_PATH}/terraform-provider-helm2_v0.10.6
rm _
chmod 755 ${PLUGIN_PATH}/terraform-provider-helm2_v0.10.6
Then we declare our two provider plugins: we use the hashicorp/helm2 plugin from the filesystem mirror, and let Terraform directly download the latest hashicorp/helm provider, which uses helm3.
terraform {
  required_providers {
    helm2 = {
      source = "hashicorp/helm2"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.0.0"
    }
  }
}

# you will find the doc here https://registry.terraform.io/providers/hashicorp/helm/0.10.6/docs
provider "helm2" {
  install_tiller = false
  namespace      = "kube-system"
  kubernetes {
    ...
  }
}

# you will find the doc at latest version https://registry.terraform.io/providers/hashicorp/helm/latest/docs
provider "helm" {
  kubernetes {
    ...
  }
}
When initializing Terraform, you will see:
- Finding latest version of hashicorp/helm...
- Finding latest version of hashicorp/helm2...
- Installing hashicorp/helm v2.0.2...
- Installed hashicorp/helm v2.0.2 (signed by HashiCorp)
- Installing hashicorp/helm2 v0.10.6...
- Installed hashicorp/helm2 v0.10.6 (unauthenticated)
Using it
It's pretty straightforward from this point. By default, helm resources will pick our updated helm provider at v2.0.2. You must explicitly set provider = helm2 on old resources (helm_repository and helm_release in our case), as sketched below. Once migrated, you can remove it to use the default helm provider.
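For example (a hypothetical legacy resource; the name, repository, and chart are placeholders):
# pinned to the renamed Helm-2-era plugin until this release is migrated
resource "helm_release" "legacy_app" {
  provider   = helm2
  name       = "legacy-app"
  repository = "https://example.com/charts"
  chart      = "legacy-app"
}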
No, you cannot do what you want. Terraform expects your constraint to match one plugin version, as alluded to in:
Plugin Names and Versions
If multiple versions of a plugin are installed, Terraform will use the
newest version that meets the configuration's version constraints.
So your constraint cannot be parsed to match any one plugin, hence the error.

Installing Istio in Kubernetes with automatic sidecar injection: istio-initializer.yaml validation failure

I'm trying to install Istio with automatic sidecar injection into Kubernetes. My environment consists of three masters and two nodes and was built on Azure using the Azure Container Service marketplace product.
Following the documentation located here, I have so far enabled RBAC and DynamicAdmissionControl. I accomplished this by modifying /etc/kubernetes/istio-initializer.yaml on the Kubernetes master, adding the content outlined in red, and then restarting the master using the Unix command reboot.
The next step in the documentation is to apply the yaml using kubectl. I assume the documentation intends for the user to clone the Istio repository and cd into it before this step, but that is unmentioned.
git clone https://github.com/istio/istio.git
cd istio
kubectl apply -f install/kubernetes/istio-initializer.yaml
After which the following error occurs:
user@hostname:~/istio$ kubectl apply -f install/kubernetes/istio-initializer.yaml
configmap "istio-inject" configured
serviceaccount "istio-initializer-service-account" configured
error: error validating "install/kubernetes/istio-initializer.yaml": error validating data: found invalid field initializers for v1.ObjectMeta; if you choose to ignore these errors, turn validation off with --validate=false
If I attempt to execute kubectl apply with the mentioned flag, --validate=false, then this error is generated instead:
user@hostname:~/istio$ kubectl apply -f install/kubernetes/istio-initializer.yaml --validate=false
configmap "istio-inject" configured
serviceaccount "istio-initializer-service-account" configured
deployment "istio-initializer" configured
error: unable to recognize "install/kubernetes/istio-initializer.yaml": no matches for admissionregistration.k8s.io/, Kind=InitializerConfiguration
I'm not sure where to go from here. The problem appears to be related to the admissionregistration.k8s.io/v1alpha1 block in the yaml but I'm unsure what specifically is incorrect in this block.
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: InitializerConfiguration
metadata:
  name: istio-sidecar
initializers:
  - name: sidecar.initializer.istio.io
    rules:
      - apiGroups:
          - "*"
        apiVersions:
          - "*"
        resources:
          - deployments
          - statefulsets
          - jobs
          - daemonsets
Installed version of Kubernetes:
user@hostname:~/istio$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.6", GitCommit:"7fa1c1756d8bc963f1a389f4a6937dc71f08ada2", GitTreeState:"clean", BuildDate:"2017-06-16T18:21:54Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.6", GitCommit:"7fa1c1756d8bc963f1a389f4a6937dc71f08ada2", GitTreeState:"clean", BuildDate:"2017-06-16T18:21:54Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
I suspect this is a versioning mismatch. As a follow up question, is it possible to deploy a version of kubernetes >= 1.7.4 to Azure using ACS?
I'm fairly new to working with Kubernetes so if anyone could help I would greatly appreciate it. Thank you for your time.
This seems to be a versioning problem, as the alpha feature is supported for k8s versions 1.7 and later, as mentioned here (https://kubernetes.io/docs/admin/extensible-admission-controllers/#what-are-initializers):
1.7 introduces two alpha features, Initializers and External Admission
Webhooks, that address these limitations. These features allow admission
controllers to be developed out-of-tree and configured at runtime.
And it is possible to deploy a version of Kubernetes >= 1.7.4 to Azure. I'm not sure about the version deployed via the portal, but if you use acs-engine to generate the ARM template, it is possible to deploy a cluster with version 1.7.5.
You can refer to https://github.com/Azure/acs-engine for the procedure. Basically it involves three steps. First, create the JSON file by referring to the clusterDefinition section; to use version 1.7.5, specify the attribute "orchestratorRelease" as "1.7" and enable RBAC by setting the attribute "enableRbac" to true (a trimmed sketch follows below). Second, use acs-engine (version >= 0.6.0) to parse the JSON file into an ARM template (azuredeploy.json and azuredeploy.parameters.json should be created). Lastly, use the New-AzureRmResourceGroupDeployment command in PowerShell to deploy the cluster to Azure.
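A trimmed sketch of such a cluster definition (attribute placement follows the acs-engine docs as best I recall; a real file also needs masterProfile, agentPoolProfiles, linuxProfile, and servicePrincipalProfile sections):
{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "orchestratorRelease": "1.7",
      "kubernetesConfig": {
        "enableRbac": true
      }
    }
  }
}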
Hope this helps :)
