multiple terraform provider versions in local provider directory - terraform

I wanted to abandon the terraform-bundle binary in favour of the recommended solutions described here: https://github.com/hashicorp/terraform/tree/main/tools/terraform-bundle (we are upgrading Terraform to version 1.x at the moment),
but I'm facing a problem with specifying multiple versions of the same provider. In the past our code looked like this (when using terraform-bundle):
tf-bundle.hcl:
terraform {
  version = "0.14.11"
}
providers {
  aws = {
    source   = "hashicorp/aws"
    versions = ["3.75.0", "3.75.2"]
  }
}
and we were able to simply run terraform-bundle and have two versions of the aws provider saved locally. However, it seems this is not possible with the terraform providers mirror command. How can I store them locally without using the terraform-bundle binary?

A single Terraform configuration can only depend on one version of each provider at a time. There is no syntax for specifying multiple versions.
I'm assuming that you're trying to use terraform providers mirror to populate a mirror directory.
That command is designed to work with one configuration at a time and mirror the exact dependencies currently required for that configuration. You can run it multiple times in different working directories with the same target directory to mirror potentially many versions of the same provider, but each single configuration can only possibly contribute one version of each provider.
cd config1
terraform providers mirror ~/.terraform.d/plugins
cd ../config2
terraform providers mirror ~/.terraform.d/plugins
The second call to terraform providers mirror will update ~/.terraform.d/plugins to include additional provider packages if needed, while preserving the ones that were placed there by the first call.
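For example, each configuration would pin its own single exact version (the config1 and config2 directory names here are hypothetical, matching the commands above):
# config1/versions.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.75.0"
    }
  }
}
# config2/versions.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.75.2"
    }
  }
}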
If your goal is only to create a local directory to use as a "filesystem mirror" then you don't need the metadata JSON files that terraform providers mirror generates, since those are only for network mirrors.
In that case, the only requirement is to match one of the two supported directory structures, and so you can potentially construct such a directory without Terraform's help:
mkdir ~/.terraform.d/plugins
cd ~/.terraform.d/plugins
mkdir -p registry.terraform.io/hashicorp/aws/3.75.0/linux_amd64
wget https://releases.hashicorp.com/terraform-provider-aws/3.75.0/terraform-provider-aws_3.75.0_linux_amd64.zip
unzip terraform-provider-aws_3.75.0_linux_amd64.zip -d registry.terraform.io/hashicorp/aws/3.75.0/linux_amd64
wget https://releases.hashicorp.com/terraform-provider-aws/3.75.2/terraform-provider-aws_3.75.2_linux_amd64.zip
unzip terraform-provider-aws_3.75.2_linux_amd64.zip -d registry.terraform.io/hashicorp/aws/3.75.2/linux_amd64
After running the above commands, the ~/.terraform.d/plugins directory will contain two specific versions of the AWS provider. That is one of the implied local mirror directories, so unless you've overridden this behavior with a custom provider_installation block in your CLI configuration Terraform should then install that provider only from this local directory and not from the origin registry.
(I've assumed that you are using Linux above; if not then the details will be slightly different but the effect will be the same. Specifically, you'll need to use the correct platform name instead of linux_amd64 in the directory paths and URLs; you can see the platform name for your current platform by running terraform version.)
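If you ever do want to override the default installation behavior explicitly, the CLI configuration (for example ~/.terraformrc on Unix-like systems) can declare the mirror directory directly. The following is only an illustrative sketch; the include/exclude patterns and the placeholder home directory path are not required for the implied directory to work:
provider_installation {
  # install hashicorp/aws only from the local mirror directory...
  filesystem_mirror {
    path    = "/home/youruser/.terraform.d/plugins"  # placeholder absolute path
    include = ["hashicorp/aws"]
  }
  # ...and everything else directly from the origin registry as usual
  direct {
    exclude = ["hashicorp/aws"]
  }
}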

Related

Find file(s) in a directory on the node by pattern in Puppet

How can I find a file in a directory on the node by using shell-patterns or regex?
What I want to do:
I download a tar file to /tmp/myfiles on the appropriate client and unpack this archive. It contains several .deb files (about 10 of them). The file names change over time, because version numbers are embedded in the names.
The file names look like:
package1_8.0-22.65.linux.x86_64.deb
package2_6.5-23.89.linux.x86_64.deb
I need to identify some of them (not all) to be able to install them via package with provider => dpkg.
The packages (like package1, package2) do not occur multiple times with different version numbers, so matching could be done easily without having to compare version numbers:
Shell pattern: package1_*.linux.x86_64.deb
Regex: ^package1_.+\.linux\.x86_64\.deb$
Is there a command or module in Puppet to find files by match pattern in a directory?
Or can I grab the result of exec with command => "ls /tmp/myfiles/..." and evaluate it?
Supposing that you mean that the tar file is downloaded as part of the catalog run, what you describe is not possible with Puppet and its standard Package resource type. Keep in mind the catalog-request cycle:
1. The client gathers node facts.
2. The client submits a catalog request bearing the gathered facts.
3. The server uses the node's identity and facts, the manifests of the appropriate environment, and possibly the output of an external node classifier to build a catalog for the client.
4. The client applies the catalog.
To create Package resources during step (3), Puppet needs to know at least the names of the packages. As you describe it, the names of the packages cannot be determined until step (4), which is too late.
You could have Puppet use the packages downloaded on a previous run by defining a custom fact that gathers their names from the filesystem, but that's risky and failure prone.
I think the whole endeavor is highly suspect and likely to cause you trouble. It would probably be better to put your packages in a local repository at your site and use either node data or a metapackage or both to control which are installed. But if you insist on downloading a tar full of DEBs, then unpacking and installing them, then your best bet is probably to use an Exec for the installation. Something like this:
# Ensure that the download directory and the TAR file are in place
file { "${download_dir}":
  ensure => 'directory',
  # ...
}
file { "${download_dir}/packages.tar":
  ensure => 'file',
  # ...
}
-> exec { 'Install miscellaneous DEBs':
  path     => ['/bin', '/usr/bin', '/sbin', '/usr/sbin'],
  # unpack next to the archive so the globs below find the .deb files
  cwd      => "${download_dir}",
  command  => "rm '${download_dir}'/*.deb || :; tar xf '${download_dir}'/packages.tar && apt-get install '${download_dir}'/*.deb",
  provider => 'shell',
}

Shall I remove terraform lock file before running terraform init

I'm writing a small blog post regarding TF Provider migration that includes the following commands:
terraform state replace-provider foo/bar foo2/bar2
# Updating TF configuration file
terraform init
Shall I tell users to run
rm -rf .terraform/
rm .terraform.lock.hcl
before running
terraform init
The terraform state replace-provider command requires an initialized backend and so it will not work unless terraform init has been run first.
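In a minimal sketch, keeping the placeholder foo/bar and foo2/bar2 addresses from your post, the order of operations would therefore be roughly:
terraform init
terraform state replace-provider foo/bar foo2/bar2
# update the provider source address in the configuration's required_providers block
terraform init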
The .terraform directory contains some transient working directory state that Terraform should typically be able to reproduce if needed, but deleting it does mean that any module dependencies that don't have exact version constraints could potentially select a different version when installed again, that command-line-specified backend configuration arguments will be lost, and that the currently-selected workspace will be reset back to the default.
Therefore I would not suggest just casually recommending that users remove that directory, unless you're able to explain the potential consequences of doing so.
.terraform.lock.hcl is the dependency lock file and should be treated as part of the configuration, even though terraform init automatically updates it, since its purpose is to remember between runs which version of each provider was selected. That file should typically be kept under version control once created.
Along with recording which version was selected for each provider, the dependency lock file also provides a "trust on first use" mechanism for Terraform providers by saving their checksums: if you take steps to verify that the providers you installed are trustworthy, you can be sure that Terraform will not accept any package with a different checksum unless you specifically opt to upgrade using terraform init -upgrade.
In that case, removing .terraform.lock.hcl would defeat that mechanism by removing the record of the original checksum, meaning there would be no guarantee that terraform init would install the same package as before.
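For illustration, the lock file records one entry per provider, roughly like this (the version, constraints, and hashes below are placeholders, not real values):
provider "registry.terraform.io/hashicorp/aws" {
  version     = "3.75.2"
  constraints = ">= 3.75.0"
  hashes = [
    # checksums recorded on first install; future installs must match one of these
    "h1:placeholder-checksum",
    "zh:placeholder-checksum",
  ]
}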

Is there any way to have Terraform CLI run scripts in a directory other than PWD?

I understand that when I run terraform CLI within a directory, it takes all the terraform artifacts from the current directory.
Is it possible to have terraform apply and terraform plan look for scripts in a directory other than the current PWD, so that I don't have to keep changing the current directory?
At the time I'm writing this, Terraform v0.14.0 is at the release candidate stage and its final release is expected in the next few weeks.
That new version will introduce a new feature to allow requesting that Terraform switch directory itself before running any of its subcommands. For example:
terraform -chdir=subdirectory init
terraform -chdir=subdirectory apply
This is essentially the same as launching Terraform with the current working directory already set to the given subdirectory, but with two minor differences:
Terraform will set the path.cwd value to be the directory where Terraform was run from, rather than the subdirectory. As before, path.root is the directory containing the root module, so both directories are available.
If your CLI configuration includes paths that Terraform resolves during its startup then they will be resolved relative to the original working directory, because the directory switch applies only to the behavior of the given subcommand, not to actions Terraform takes prior to running the subcommand.
Neither of these differences is usually significant for common Terraform use, so in most cases you can think of the -chdir= option as having the same effect as switching into the target directory with cd first.
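If you want to observe that first difference yourself, a small sketch (hypothetical output names, placed in the subdirectory's root module) would be:
# subdirectory/outputs.tf
output "launched_from" {
  value = path.cwd   # the directory you ran terraform from, not the subdirectory
}
output "root_module_dir" {
  value = path.root  # the directory containing the root module
}
Running terraform -chdir=subdirectory apply would then show the two different paths.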

Terraform custom provider - Error asking for user input

I am quite new to Terraform and Go, and I am trying to implement a custom provider for a POC, to check whether we can leverage Terraform for our own use.
I was able to write and build the golang provider according to this video and some GitHub examples.
I created a Go workspace and set $GOPATH to the default, $HOME/go.
The Terraform packages are installed at $GOPATH/src/github/hashicorp.
The Terraform binary is installed at $HOME/dev, which is on the $PATH.
Following the video, I created the provider package at /terraform/builtin/providers/mycustomprovider
and built the package with go build into $GOPATH/bin.
When I run terraform plan, I get the following:
provider.incapsula: no suitable version installed
version requirements: "(any version)"
versions installed: none
I added the custom provider binary to terraform.d/plugins and tried to run 'terraform plan' again.
now I am getting the following error:
Error: Error asking for user input: 1 error(s) occurred:
* provider.incapsula: fork/exec ~/.terraform.d/plugins/darwin_amd64/terraform-provider-incapsula: permission denied
I tried chmod 666 and chown on the binary file, but with no luck; I am still getting the same error.
I tried to look for this kind of issue but couldn't find any reference.
I would appreciate any assistance.
Thanks!
The provider binary needs execute permissions, so try using 755 instead of 666. Also, if the binary isn't somewhere in your $PATH, you generally need to run terraform init -plugin-dir=.terraform/plugins/darwin_amd64 so Terraform picks up the provider and updates the md5 lock file.
So try chmod 755 <wherever the provider is> and if it's still not working, use terraform init with the -plugin-dir argument pointing to the plugin directory (your provider should already be in there).
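For example, something along these lines (using the path from your error message; adjust if your plugin lives elsewhere):
chmod 755 ~/.terraform.d/plugins/darwin_amd64/terraform-provider-incapsula
terraform init -plugin-dir="$HOME/.terraform.d/plugins/darwin_amd64"
terraform plan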

kerberos authentication in lambda function

I have an AWS Lambda function (Node.js) right now that writes some data to a test Kafka cluster. The one that's in production uses Kerberos for auth, so I was wondering if there is a way to set up my Lambda function to authenticate with Kerberos. I wasn't able to find much online regarding this...
There are two ways to handle this.
Call out to CLI utilities
This requires that you supply the contents of the krb5-workstation package and its dependency, libkadm5, in your deployment package or via a Layer.
Launch an EC2 instance from the Lambda execution environment's AMI
Update all packages: sudo yum update
Install the MIT Kerberos utilities: sudo yum install krb5-workstation
Make the Layer skeleton: mkdir bin lib
Populate the binaries: rpm -ql krb5-workstation | grep bin | xargs -I %% cp -a %% bin
Populate their libraries: rpm -ql libkadm5 | xargs -I %% cp -a %% lib
Prepare the Layer: zip -r9 krb5-workstation-layer.zip bin lib
Create the Layer and reference it from your Lambda function.
Invoke (e.g.) /opt/bin/kinit from inside your function.
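As a rough sketch of the last publishing step (the function name here is a placeholder, and the layer ARN comes from the publish command's output):
aws lambda publish-layer-version \
  --layer-name krb5-workstation \
  --zip-file fileb://krb5-workstation-layer.zip
aws lambda update-function-configuration \
  --function-name my-kafka-writer \
  --layers <LayerVersionArn from the previous command>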
Do it natively
It turns out that if your code calls gss_acquire_cred, which most code does, usually through bindings and an abstraction layer, you don't need the CLI utilities.
Supply a client keytab file to your function, either by bundling it with the deployment package or (probably better) fetching it from S3 + KMS.
Set the KRB5_CLIENT_KTNAME environment variable to the location of the keytab file.
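For example, assuming the keytab is bundled in the deployment package as client.keytab (package contents are mounted at /var/task in the Lambda environment; the function name is a placeholder):
aws lambda update-function-configuration \
  --function-name my-kafka-writer \
  --environment "Variables={KRB5_CLIENT_KTNAME=/var/task/client.keytab}"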
Requested addendum
In either case, if you find you have a need to specify additional Kerberos configuration, see the krb5.conf docs for details. If /etc is off the table, then "Multiple colon-separated filenames may be specified in [the] KRB5_CONFIG [environment variable]; all files which are present will be read."
Surprisingly, it seems this issue was not addressed by Amazon.
I have a scenario which is restricted to using Kerberos authentication for the DB servers.
Since there's no way to run kinit on a Lambda instance when it starts, it seems impossible.
Looks like it can be achieved in Azure Functions.
What neirbowj said will get you most of the way (and I don't know if this is particular to my use case, but it got me over the finish line):
You'll need an env var like this: KRB5CCNAME=FILE:/tmp/tgt. See https://blog.tomecek.net/post/kerberos-in-a-container/ for a better explanation than I have.
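In other words, something roughly like this before calling kinit (the principal and keytab path are placeholders; /tmp is the only writable path in the Lambda environment):
export KRB5CCNAME=FILE:/tmp/tgt
/opt/bin/kinit -kt /var/task/client.keytab user@EXAMPLE.COM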
