When running import "tfconfig" with Sentinel I get: Import "tfconfig" is not available - terraform

I am learning Terraform and Sentinel, and I can't get some of the basic functionality working.
I have this policy:
import "tfconfig"
default_foo = rule { tfconfig.variables.foo.default is "bar" }
default_number = rule { tfconfig.variables.number.default is 42 }
main = rule { default_foo and default_number }
and this variables file:
variable "foo" {
default = "bar"
}
variable "number" {
default = 42
}
But when I run:
sentinel apply policy.sentinel
I get the following error:
policy.sentinel:1:1: Import "tfconfig" is not available.
Any ideas? I have been looking for a solution for a number of hours now.
Thanks

In order to use the Terraform-specific imports in the Sentinel SDK, you need to use mock data to produce a data structure to test against.
When you run Terraform via Terraform Cloud, a successful plan will produce a Sentinel mocks file that contains the same data that Terraform Cloud would itself use when evaluating policies against that plan, and so you can check that mock data into your repository as part of your test suite for your policies.
You can use speculative plans (run terraform plan on the command line with the remote backend enabled) to create mock data for intentionally-invalid configurations that you want to test your policy against, without having to push those invalid configurations into your version control system.
You can use sentinel test against test cases whose JSON definitions include a mock object referring to those mock files, and then the policies evaluated by those test cases will be able to import tfconfig, tfplan and tfstate and get an equivalent result to if the policies were run against the original plan in Terraform Cloud.
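For example, a test case JSON for the policy above might look something like this (the mock file path is a placeholder; point it at the mock file your Terraform Cloud plan produced):

{
  "mock": {
    "tfconfig": "../../mock-tfconfig.sentinel"
  },
  "test": {
    "main": true
  }
}

By convention the Sentinel CLI looks for test cases under test/<policy-name>/*.json next to the policy file, and sentinel test runs them all.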

I had my mock data and everything, and I was still getting this error:
Runtime error while running the policy:
ensure-policy.sentinel:3:1: Import "tfplan" is not available
I realized I had forgotten the sentinel.json file at the root of my policies directory, which tells Sentinel where to look for the mock data:
https://www.terraform.io/docs/cloud/sentinel/mock.html
The linked page recommends a default directory setup roughly like the following (reconstructed from the docs, so treat the exact file names as illustrative), but you still need that .json file to tell Sentinel where to look:
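.
├── policy.sentinel
├── sentinel.json
├── mock-tfconfig.sentinel
├── mock-tfplan.sentinel
└── mock-tfstate.sentinel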
And here's roughly what the sentinel.json file should look like, mapping each import name to its mock file:
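{
  "mock": {
    "tfconfig": "mock-tfconfig.sentinel",
    "tfplan": "mock-tfplan.sentinel",
    "tfstate": "mock-tfstate.sentinel"
  }
}

With that file in the working directory, the Sentinel CLI should pick the mocks up automatically (or you can point at it explicitly with sentinel apply -config=sentinel.json).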

Related

Terraform - Azure - Extract API from one resource group and import into another resource group

I have 5 different APIs in my Dev environment. This environment was built manually.
However, for the subsequent environments like Test, Pre-Prod, etc., Terraform is being used.
Since I need to create each of the APIs in the subsequent environments, I am extracting each of these APIs as a JSON file, making minor tweaks to the API URLs, and importing it into the new environments.
The following is the process I am using right now:
Went to Resource groups in Azure
Then under API Management service > APIs, clicked on the necessary API
Now, clicked on the three dots next to the API that I need and clicked on Export
Selected OpenAPI v3 (JSON) format
Now I'm using the extracted JSON file with the Terraform code below to add it to the APIM:
resource "azurerm_api_management_api" "example" {
name = "example-api"
resource_group_name = azurerm_resource_group.example.name
api_management_name = azurerm_api_management.example.name
revision = "1"
display_name = "Example API"
path = "api/path"
protocols = ["https"]
service_url = "https://actualURL-of-the-API"
import {
content_format = "openapi+json"
content_value = file("extracted-filename.json")
}
}
The issue here is:
Even though the API gets added to the APIM, this doesn't carry over all the data, such as the Web service URL and the backend HTTP(S) endpoint.
How do I go about doing this?
Are you locked into exporting the JSON file and importing it on the other environments through Terraform?
I ask because I attempted something similar but decided to go another route.
Initially I created the API manually in a Dev environment. I then re-created the same API from the ground up using only Terraform, with no JSON export & import.
I then used that Terraform script to create my other environments.
That allowed me to bypass the import problem altogether, since nothing is imported; a rough sketch of the approach follows at the end of this answer.
I have found downsides to taking this approach: it is much less intuitive to author the API through the Terraform script than through the Azure GUI, and therefore more time consuming, especially since my initial API was discarded for the one generated with the Terraform script.
Additionally, I have had problems with Terraform diffs reporting changes when there are none (I suspect the same problem occurs when using the import method).
If you are wondering why I still decided to go this route, the reason was twofold: firstly, similar to you, I had trouble getting the export/import to generate the API that I wanted; secondly, I prefer not to rely on auto-generated files.
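A rough sketch of the from-scratch approach, reusing the resource names from the question; the operation shown is hypothetical, and you would declare one azurerm_api_management_api_operation per endpoint:

resource "azurerm_api_management_api" "example" {
  name                = "example-api"
  resource_group_name = azurerm_resource_group.example.name
  api_management_name = azurerm_api_management.example.name
  revision            = "1"
  display_name        = "Example API"
  path                = "api/path"
  protocols           = ["https"]
  service_url         = "https://actualURL-of-the-API"
  # No import block: the API surface is authored in Terraform instead.
}

resource "azurerm_api_management_api_operation" "get_widget" {
  operation_id        = "get-widget" # hypothetical operation
  api_name            = azurerm_api_management_api.example.name
  api_management_name = azurerm_api_management.example.name
  resource_group_name = azurerm_resource_group.example.name
  display_name        = "Get Widget"
  method              = "GET"
  url_template        = "/widgets/{id}"

  template_parameter {
    name     = "id"
    required = true
    type     = "string"
  }
}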

How to manage locally generated stateful files in Terraform

I have a Terraform (1.0+) script that generates a local config file from a template based on some inputs, e.g.:
locals {
  config_tpl = templatefile("${path.module}/config.tpl", {
    foo = "bar"
  })
}

resource "local_file" "config" {
  content  = local.config_tpl
  filename = "${path.module}/config.yaml"
}
This file is then used by a subsequent command run from a local-exec block, which in turn also generates local config files:
resource "null_resource" "my_command" {
provisioner "local-exec" {
when = create
command = "../scripts/my_command.sh"
working_dir = "${path.module}"
}
depends_on = [
local_file.config,
]
}
my_command.sh generates infrastructure for which there is no Terraform provider currently available.
All of the generated files should form part of the configuration state, as they are required later during upgrades and ultimately to destroy the environment.
I also would like to run these scripts from a CI/CD pipeline, so naturally you would expect the workspace to be clean on each run, which means the generated files won't be present.
Is there a pattern for managing files such as these? My initial thought is to create a cloud storage bucket, zip the files up, and store them there before pulling them back down whenever they're needed. However, this feels even more dirty than what is already happening, and it seems like there is the possibility of running into dependency issues.
Or, am I missing something completely different to solve issues such as this?
The problem you've encountered here is what the warning in the hashicorp/local provider's documentation is discussing:
Terraform primarily deals with remote resources which are able to outlive a single Terraform run, and so local resources can sometimes violate its assumptions. The resources here are best used with care, since depending on local state can make it hard to apply the same Terraform configuration on many different local systems where the local resources may not be universally available. See specific notes in each resource for more information.
The short and unfortunate answer is that what you are trying to do here is not a problem Terraform is designed to address: its purpose is to manage long-lived objects in remote systems, not artifacts on your local workstation where you are running Terraform.
In the case of your config.yaml file you may find it a suitable alternative to use a cloud storage object resource type instead of local_file, so that Terraform will just write the file directly to that remote storage and not affect the local system at all. Of course, that will help only if whatever you intend to have read this file is also able to read from the same cloud storage, or if you can write a separate glue script to fetch the object after terraform apply is finished.
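For example, a minimal sketch using the AWS provider (the bucket name and key are placeholders; the equivalent object resource for any other cloud works the same way):

resource "aws_s3_object" "config" {
  bucket  = "my-config-bucket" # placeholder bucket
  key     = "config.yaml"
  content = local.config_tpl # the rendered template from the question
}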
There is no straightforward path to treating the result of a provisioner as persistent data in the state. If you use provisioners then they are always, by definition, one-shot actions taken only during creation of a resource.

terraform interpolation with variables returning error [duplicate]

# Using a single workspace:
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "company"

    workspaces {
      name = "my-app-prod"
    }
  }
}
For the Terraform remote backend, would there be a way to use a variable to specify the organization / workspace name instead of the hardcoded values there?
The Terraform documentation didn't seem to mention anything related either.
The backend configuration documentation goes into this in some detail. The main point to note is this:
Only one backend may be specified and the configuration may not contain interpolations. Terraform will validate this.
If you want to make this easily configurable then you can use partial configuration for the static parts (e.g. the type of backend, such as S3) and then provide the rest of the config at run time: interactively, via environment variables, or via command line flags.
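For example, a minimal sketch with an S3 backend, where every value is supplied at init time (the bucket, key, and region below are placeholders):

terraform {
  backend "s3" {}
}

terraform init \
  -backend-config="bucket=my-state-bucket" \
  -backend-config="key=my-project/terraform.tfstate" \
  -backend-config="region=us-east-1"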
I personally wrap Terraform actions in a small shell script that runs terraform init with command line flags that uses an appropriate S3 bucket (eg a different one for each project and AWS account) and makes sure the state file location matches the path to the directory I am working on.
I had the same problems and was very disappointed with the need for additional init/wrapper scripts. Some time ago I started to use Terragrunt.
It's worth taking a look at Terragrunt because it closes the gap left by Terraform's lack of variable support in places like the remote backend configuration:
https://terragrunt.gruntwork.io/docs/getting-started/quick-start/#keep-your-backend-configuration-dry

Introduce new data source in Terraform

I am new to Terraform and have been trying to understand its constructs. Let's say I have a service which exposes REST APIs, and I want to call those REST APIs as part of my Terraform script. What are the steps I need to take?
My understanding is that I need to write a custom provider, but I am unable to connect the dots on how to add a new data source type for the new provider.
Also, assuming that we do have the required provider, what's the protocol that would be used for communicating with my service? Is it HTTP/S?
One more point to note is that my service is currently used for configuring storage in the backend.
Recent versions of Terraform (> 0.9, I believe) support external data sources. You don't have to create a custom provider. You can call any arbitrary shell or Python script that returns values that you can use as data.
data "external" "example" {
program = ["python", "${path.module}/example-data-source.py"]
query = {
# arbitrary map from strings to strings, passed
# to the external program as the data query.
id = "abc123"
}
}
In your case you could use a simple curl call in a bash script to hit your endpoint and return data to Terraform as a map of strings.
Do note the warnings at the top of that page.
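As a rough sketch (the service URL and response fields are made up for illustration), such a script follows the external data source protocol: it reads the query as a JSON object on stdin and must print a flat JSON object of string values on stdout:

#!/usr/bin/env bash
set -euo pipefail

# Terraform passes the query as a JSON object on stdin;
# extract the "id" argument shell-safely.
eval "$(jq -r '@sh "ID=\(.id)"')"

# Call the service and emit only string-valued fields back to Terraform.
curl -fsS "https://my-service.example.com/storage/${ID}" \
  | jq '{name: .name, status: .status}'

The values are then available in Terraform as data.external.example.result.name and data.external.example.result.status.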
This is considerably more difficult than it appears; it is impossible to debug the interaction between what Terraform is sending to my script and what the script is expecting. It just fails to parse the arguments and refuses to provide any feedback as to what is getting into the program.
