Unable to download terraform modules from azure repo (Private repo) - terraform

My terraform-modules repo location is like this:
https://teamabc.visualstudio.com/dummyproject/_git/terraform-modules?path=%2Fcompute&version=GBmaster
I have three directories/modules at root level, namely compute, resourcegroup and sqlserver.
However, when I run terraform init, Terraform is unable to download the required modules.
main.tf
module "app_vms" {
  source             = "https://teamabc.visualstudio.com/dummyproject/_git/terraform-modules?path=%2Fcompute&version=GBmaster"
  rg_name            = var.resource_group_name
  location           = module.resource_group.external_rg_location
  vnet_name          = var.virtual_network_name
  subnet_name        = var.sql_subnet_name
  app_nsg            = var.application_nsg
  vm_count           = var.count_vm
  base_hostname      = var.app_host_basename
  sto_acc_suffix     = var.storage_account_suffix
  vm_size            = var.virtual_machine_size
  vm_publisher       = var.virtual_machine_image_publisher
  vm_offer           = var.virtual_machine_image_offer
  vm_sku             = var.virtual_machine_image_sku
  vm_img_version     = var.virtual_machine_image_version
  username           = var.username
  password           = var.password
  allowed_source_ips = var.ip_list
}
module "resource_group" {
  source  = "https://teamabc.visualstudio.com/dummyproject/_git/terraform-modules?path=%2Fresourcegroup&version=GBmaster"
  rg_name = "test_rg"
}
module "azure_paas_sqlserver" {
  source = "https://teamabc.visualstudio.com/dummyproject/_git/terraform-modules?path=%2Fsqlserver&version=GBmaster"
}
It gives me a series of errors like the one below (a sample only, not all the errors, as they are the same):
Error: Failed to download module
Could not download module "sql_vms" (main.tf:1) source code from
"https://teamabc.visualstudio.com/dummyproject/_git/terraform-modules?path=%2Fcompute&version=GBmaster":
error downloading
'https://teamabc.visualstudio.com/dummyproject/_git/terraform-modules?path=%2Fcompute&version=GBmaster':
no source URL was returned
I tried removing the https:// part, but no luck. The repo does require a username and password to log in.
I'm wondering if I should create a public repo on GitHub instead, but pushes within the organization are supposed to use Azure Repos.
Update after the first comment
Thanks for the lead. I tried it, but still no luck.
My source URL now looks like this:
source = "git::https://teamabc:lfithww4xpp4eksvoimgzkpi3ugu6xvrkf26mfq3jth3642jgyoa#visualstudio.com/dummyproject/_git/terraform-modules?path=%2Fcompute&version=GBmaster"
I get the error below:
Error: Failed to download module
Could not download module "sql_vms" (main.tf:1) source code from
"git::https://teamabc:lfithww4xpp4eksvoimgzkpi3ugu6xvrkf26mfq3jth3642jgyoa#visualstudio.com/dummyproject/_git/terraform-modules?path=%2Fcompute&version=GBmaster":
error downloading
'https://teamabc:lfithww4xpp4eksvoimgzkpi3ugu6xvrkf26mfq3jth3642jgyoa#visualstudio.com/dummyproject/_git/terraform-modules?path=%2Fcompute&version=GBmaster':
/usr/bin/git exited with 128: Cloning into '.terraform/modules/sql_vms'...
fatal: repository
'https://teamabc:lfithww4xpp4eksvoimgzkpi3ugu6xvrkf26mfq3jth3642jgyoa#visualstudio.com/dummyproject/_git/terraform-modules?path=%2Fcompute&version=GBmaster/'
not found
Here:
teamabc.visualstudio.com is the parent Azure DevOps URL
dummyproject is the project name
Update after Charles's response
Error: Failed to download module
Could not download module "sql_vms" (main.tf:1) source code from
"git::https://teamabc:lfithww4xpp4eksvoimgzkpi3ugu6xvrkf26mfq3jth3642jgyoa#visualstudio.com/dummyproject/_git/terraform-modules?path=%2Fcompute&version=GBmaster.git":
error downloading
'https://teamabc:lfithww4xpp4eksvoimgzkpi3ugu6xvrkf26mfq3jth3642jgyoa#visualstudio.com/dummyproject/_git/terraform-modules?path=%2Fcompute&version=GBmaster.git':
/usr/bin/git exited with 128: Cloning into '.terraform/modules/sql_vms'...
fatal: repository
'https://teamabc:lfithww4xpp4eksvoimgzkpi3ugu6xvrkf26mfq3jth3642jgyoa#visualstudio.com/dummyproject/_git/terraform-modules?path=%2Fcompute&version=GBmaster.git/'
not found

You can take a look at the Generic Git Repository module source type; the URL should be a Git URL. In the end, it should look like this:
source = "git::https://teamabc:lfithww4xpp4eksvoimgzkpi3ugu6xvrkf26mfq3jth3642jgyoa@visualstudio.com/dummyproject/_git/terraform-modules?path=%2Fcompute&version=GBmaster.git"
Or you can select a branch from your Git repository like this:
source = "git::https://teamabc:lfithww4xpp4eksvoimgzkpi3ugu6xvrkf26mfq3jth3642jgyoa@visualstudio.com/dummyproject/_git/terraform-modules?path=%2Fcompute&version=GBmaster.git?ref=<branch>"
Finally, I got it working with a source of the form below:
git::https://<PAT TOKEN>@<Azure DevOps URL>/DefaultCollection/<PROJECT NAME>/_git/<REPO NAME>//<sub directory>
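Filled in with placeholder values (the PAT, organization URL, project, repo, and subdirectory below are stand-ins, not taken from the thread), a module block using this pattern might look like:

```hcl
module "app_vms" {
  # git:: forces Terraform's generic Git getter; the personal access token
  # goes before the host, separated by @ (standard Git URL userinfo syntax),
  # and the double slash (//) selects a subdirectory inside the repo.
  source = "git::https://<PAT_TOKEN>@teamabc.visualstudio.com/DefaultCollection/dummyproject/_git/terraform-modules//compute"

  rg_name = var.resource_group_name
}
```

A `?ref=<branch>` suffix can be appended to the source to pin a branch or tag instead of the default branch.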

Related

How to add a label to my vm instance in gcp via terraform/terragrunt

I have an issue in our environment where I cannot add a label to a VM instance in GCP via Terraform/Terragrunt after creation. We have a Google repository that is set up via Terraform; we use git to clone and update from a local repository, which activates a Cloud Build trigger to push the changes to the repo. We do not run terraform/terragrunt commands at all; it is all controlled via git. The labels are referenced in our compute module as shown:
variable "labels" {
  description = "Labels to add."
  type        = map(string)
  default     = {}
}
OK, on to the issue. We have in our environment a mix of lift-and-shift and cloud-native VM instances. We recently decided we wanted to add an additional label in the code to identify whether the instance is under Terraform control, i.e. terraform = "true"/"false":
labels = {
  application      = "demo-test"
  businessunit     = "homes"
  costcentre       = "90imt"
  createdby        = "ab"
  department       = "it"
  disasterrecovery = "no"
  environment      = "rnd"
  contact          = "abriers"
  terraform        = "false"
}
So I add the label and use the usual git commands to add/commit/push, which triggers the Cloud Build as usual. The problem is that the label does not appear in the console when viewing the instance.
It's as if Cloud Build or Terraform/Terragrunt isn't recognising it as a change. I can change the value of an existing label no problem, but I cannot seem to add or remove a label after the VM has been created.
It has been suggested to run terraform/terragrunt plan in VS Code, but as mentioned, this has all been set up to use git, so those commands do not work.
For example, when I run terragrunt init in the directory I get this error:
PS C:\Cloudrepos\placesforpeople> terragrunt init
time=2022-07-27T09:56:27+01:00 level=error msg=Error reading file at path C:/Cloudrepos/placesforpeople/terragrunt.hcl: open C:/Cloudrepos/placesforpeople/terragrunt.hcl: The system cannot find the
file specified.
time=2022-07-27T09:56:27+01:00 level=error msg=Unable to determine underlying exit code, so Terragrunt will exit with error code 1
PS C:\Cloudrepos\placesforpeople> cd org
PS C:\Cloudrepos\placesforpeople\org> cd rnd
PS C:\Cloudrepos\placesforpeople\org\rnd> cd adam_play_area
PS C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area> ls
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 20/07/2022 14:18 modules
d----- 20/07/2022 14:18 test_project_001
PS C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area> cd test_project_001
PS C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area\test_project_001> cd compute
PS C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area\test_project_001\compute> ls
Directory: C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area\test_project_001\compute
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 07/07/2022 15:51 start_stop_schedule
d----- 20/07/2022 14:18 umig
-a---- 07/07/2022 16:09 1308 .terraform.lock.hcl
-a---- 27/07/2022 09:56 2267 terragrunt.hcl
PS C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area\test_project_001\compute> terragrunt init
Initializing modules...
- data_disk in ..\compute_data_disk
Initializing the backend...
Successfully configured the backend "gcs"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/google from the dependency lock file
- Reusing previous version of hashicorp/google-beta from the dependency lock file
╷
│ Warning: Backend configuration ignored
│
│ on ..\compute_data_disk\backend.tf line 3, in terraform:
│ 3: backend "gcs" {}
│
│ Any selected backend applies to the entire configuration, so Terraform
│ expects provider configurations only in the root module.
│
│ This is a warning rather than an error because it's sometimes convenient to
│ temporarily call a root module as a child module for testing purposes, but
│ this backend configuration block will have no effect.
╵
╷
│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider
│ hashicorp/google: could not connect to registry.terraform.io: Failed to
│ request discovery document: Get
│ "https://registry.terraform.io/.well-known/terraform.json": Proxy
│ Authorization Required
╵
╷
│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider
│ hashicorp/google-beta: could not connect to registry.terraform.io: Failed
│ to request discovery document: Get
│ "https://registry.terraform.io/.well-known/terraform.json": Proxy
│ Authorization Required
╵
time=2022-07-27T09:57:40+01:00 level=error msg=Hit multiple errors:
Hit multiple errors:
exit status 1
PS C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area\test_project_001\compute>
But as mentioned, we don't use and have never used these commands to push changes.
I cannot work out why these labels won't add/remove after the VM has already been created.
I have tried making a change to an instance to trigger a change, such as increasing the disk size.
I have also tried creating a block in the module for all the labels needed, but this doesn't work because labels cannot be a block in this module:
labels {
  application      = var.labels.application
  businessunit     = var.labels.businessunit
  costcentre       = var.labels.costcentre
  createdby        = var.labels.createdby
  department       = var.labels.department
  disasterrecovery = var.labels.disasterrecovery
  environment      = var.labels.environment
  contact          = var.labels.contact
  terraform        = var.labels.terraform
}
Any ideas? I know you cannot add a label to a project after creation; does the same apply to VM instances? Is there any alternative method I can test?
As requested, this is the code for the VM instance:
terraform {
source = "../../modules//compute_instance_static_ip/"
}
# Include all settings from the root terragrunt.hcl file
include {
path = find_in_parent_folders("org.hcl")
}
dependency "project" {
config_path = "../project"
# Configure mock outputs for the terraform commands that are returned when there are
# no outputs available (e.g. the module hasn't been applied yet).
mock_outputs_allowed_terraform_commands = ["plan", "validate"]
mock_outputs = {
project_id = "project-not-created-yet"
}
}
prevent_destroy = false
inputs = {
gcp_instance_sa_email = "testprj-compute@gc-r-prj-testprj-0001-9627.iam.gserviceaccount.com" # This will tell GCP to use the default GCE service account
instance_name = "rnd-demo-test1"
network = "projects/gc-a-prj-vpchost-0001-3312/global/networks/gc-r-vpc-0001"
subnetwork = "projects/gc-a-prj-vpchost-0001-3312/regions/europe-west2/subnetworks/gc-r-snet-middleware-0001"
zone = "europe-west2-c"
region = "europe-west2"
project = dependency.project.outputs.project_id
os_image = "debian-10-buster-v20220118"
machine_type = "n1-standard-4"
boot_disk_size = 100
instance_scope = ["cloud-platform"]
instance_tags = ["demo-test"]
deletion_protection = "false"
metadata = {
windows-startup-script-ps1 = "Set-TimeZone -Id 'GMT Standard Time' -PassThru"
}
ip_address_region = "europe-west2"
ip_address_type = "INTERNAL"
attached_disks = {
data = {
size = 60
type = "pd-standard"
}
}
/* instance_schedule_policy = {
name = "start-stop"
#region = "europe-west2"
vm_start_schedule = "30 07 * * *"
vm_stop_schedule = "00 18 * * *"
time_zone = "GMT"
}
*/
labels = {
application = "demo-test"
businessunit = "homes"
costcentre = "90imt"
createdby = "ab"
department = "it"
disasterrecovery = "no"
environment = "rnd"
contact = "abriers"
terraform = "false"
}
}
The terragrunt validate-inputs result is below:
PS C:\Cloudrepos\placesforpeople\org\rnd> terragrunt validate-inputs
time=2022-07-27T14:25:19+01:00 level=warning msg=The following inputs passed in by terragrunt are unused:
prefix=[C:\Cloudrepos\placesforpeople\org\rnd]
time=2022-07-27T14:25:19+01:00 level=warning msg= - billing_account prefix=[C:\Cloudrepos\placesforpeople\org\rnd]
time=2022-07-27T14:25:19+01:00 level=warning msg= - host_project_id prefix=[C:\Cloudrepos\placesforpeople\org\rnd]
time=2022-07-27T14:25:19+01:00 level=warning prefix=[C:\Cloudrepos\placesforpeople\org\rnd]
time=2022-07-27T14:25:19+01:00 level=info msg=All required inputs are passed in by terragrunt. prefix=[C:\Cloudrepos\placesforpeople\org\rnd]
time=2022-07-27T14:25:19+01:00 level=error msg=Terragrunt configuration has misaligned inputs
time=2022-07-27T14:25:19+01:00 level=error msg=Unable to determine underlying exit code, so Terragrunt will exit with error code 1
PS C:\Cloudrepos\placesforpeople\org\rnd>
I have found the culprit!
In the compute instance module I discovered this block of code. I removed labels from ignore_changes and voilà, the extra labels now appear. Thanks for the assistance and advice on post formatting.
lifecycle {
  ignore_changes = [
    boot_disk.0.initialize_params.0.image,
    attached_disk, labels
  ]
}
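With labels removed from the list, the corrected block (same attributes as the original, minus labels) would be:

```hcl
lifecycle {
  # labels is no longer ignored, so label edits are detected and applied
  ignore_changes = [
    boot_disk.0.initialize_params.0.image,
    attached_disk,
  ]
}
```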

Failed to load manifest for dependency

I am trying to import some primitives into a pallet in Substrate, but when I execute cargo check I get this error: failed to load manifest for dependency 'name of primitives'.
Dex pallet: https://github.com/Kabocha-Network/cumulus/tree/v0.9.13-elio/pallets/dex
Can somebody please take a look and let me know? Thank you in advance.
If you run cargo check you get:
error: failed to load manifest for workspace member `/root/cumulus/pallets/dex`
Caused by:
failed to load manifest for dependency `acala-primitives`
Caused by:
failed to load manifest for dependency `module-evm-utiltity`
Caused by:
failed to read `/root/cumulus/primitives/modules/evm-utiltity/Cargo.toml`
Caused by:
No such file or directory (os error 2)
The problem is that /root/cumulus/primitives/modules/evm-utiltity/Cargo.toml is not found, either because you haven't included this pallet locally or because the pallet is misplaced and located somewhere else.
Simple solutions:
1. Locate and correct
Find where the pallet is and link to it correctly, or import the pallet to the location root/cumulus/primitives/modules/evm-utiltity/Cargo.toml so it can be found.
2. Link externally rather than importing pallets locally
You can link to the pallet from its external source rather than importing it locally; otherwise you will find you need to take on many more dependencies and store them locally, just like the /root/cumulus/primitives/modules/evm-utiltity/Cargo.toml mentioned in the error above.
What you can do instead is go directly to the runtime directory, which is /root/cumulus/parachain-template/runtime/Cargo.toml, and link to the external dex directly from github.com/acala-network/acala,
something like this:
[dependencies.pallet-dex]
default-features = false
git = 'https://github.com/Acala-Network/acala.git'
branch = 'polkadot-v0.9.13'
version = '3.0.0'
Or, if the project is still using the older dependency style, it will look like:
pallet-dex = { git = "https://github.com/Acala-Network/acala", default-features = false, branch = "polkadot-v0.9.13" }
and, more specifically for this error:
module-evm-utility = { git = "https://github.com/Acala-Network/acala", default-features = false, branch = "polkadot-v0.9.13" }
But if you link to pallet-dex from its external source, the error should disappear, and you will probably not need to link acala-primitives or module-evm-utility.
https://docs.substrate.io/how-to-guides/v3/basics/pallet-integration/
Also, evm-utiltity is not spelled correctly (it should be utility).
My fix for this error was setting the correct branch value, from .17 to .18, in my pallet's Cargo.toml file. For the sp-io dependency I had branch = "polkadot-v0.9.17", which didn't match the polkadot-v0.9.18 version every other dependency is on.
Original, with the problem on sp-io (last line):
[dependencies]
codec = { package = "parity-scale-codec", version = "3.0.0", default-features = false, features = [
"derive",
] }
scale-info = { version = "2.0.1", default-features = false, features = ["derive"] }
frame-support = { default-features = false, version = "4.0.0-dev", git = "https://github.com/paritytech/substrate.git", branch = "polkadot-v0.9.18"}
frame-system = { default-features = false, version = "4.0.0-dev", git = "https://github.com/paritytech/substrate.git", branch = "polkadot-v0.9.18" }
frame-benchmarking = { default-features = false, version = "4.0.0-dev", git = "https://github.com/paritytech/substrate.git", branch = "polkadot-v0.9.18", optional = true }
sp-io = { default-features = false, git = "https://github.com/paritytech/substrate.git", branch = "polkadot-v0.9.17" }
Fix (sp-io)
sp-io = { default-features = false, git = "https://github.com/paritytech/substrate.git", branch = "polkadot-v0.9.18" }
Now the "branch" matches with everything else and my errors are gone! Back to the Substrate Kitties tutorial I go!

Azure-ML Deployment does NOT see AzureML Environment (wrong version number)

I've followed the documentation pretty well as outlined here.
I've set up my Azure Machine Learning environment the following way:
from azureml.core import Workspace
# Connect to the workspace
ws = Workspace.from_config()
from azureml.core import Environment
from azureml.core import ContainerRegistry
myenv = Environment(name = "myenv")
myenv.inferencing_stack_version = "latest" # This will install the inference specific apt packages.
# Docker
myenv.docker.enabled = True
myenv.docker.base_image_registry.address = "myazureregistry.azurecr.io"
myenv.docker.base_image_registry.username = "myusername"
myenv.docker.base_image_registry.password = "mypassword"
myenv.docker.base_image = "4fb3..."
myenv.docker.arguments = None
# Environment variables (I need Python to look at these folders)
myenv.environment_variables = {"PYTHONPATH":"/root"}
# python
myenv.python.user_managed_dependencies = True
myenv.python.interpreter_path = "/opt/miniconda/envs/myenv/bin/python"
from azureml.core.conda_dependencies import CondaDependencies
conda_dep = CondaDependencies()
conda_dep.add_pip_package("azureml-defaults")
myenv.python.conda_dependencies=conda_dep
myenv.register(workspace=ws) # works!
I have a score.py file configured for inference (not relevant to the problem I'm having).
I then set up the inference configuration:
from azureml.core.model import InferenceConfig
inference_config = InferenceConfig(entry_script="score.py", environment=myenv)
I set up my compute cluster:
from azureml.core.compute import ComputeTarget, AksCompute
from azureml.exceptions import ComputeTargetException
# Choose a name for your cluster
aks_name = "theclustername"
# Check to see if the cluster already exists
try:
    aks_target = ComputeTarget(workspace=ws, name=aks_name)
    print('Found existing compute target')
except ComputeTargetException:
    print('Creating a new compute target...')
    prov_config = AksCompute.provisioning_configuration(vm_size="Standard_NC6_Promo")
    aks_target = ComputeTarget.create(workspace=ws, name=aks_name, provisioning_configuration=prov_config)
    aks_target.wait_for_completion(show_output=True)
from azureml.core.webservice import AksWebservice
# Example
gpu_aks_config = AksWebservice.deploy_configuration(autoscale_enabled=False,
                                                    num_replicas=3,
                                                    cpu_cores=4,
                                                    memory_gb=10)
Everything succeeds; then I try to deploy the model for inference:
from azureml.core.model import Model
model = Model(ws, name="thenameofmymodel")
# Name of the web service that is deployed
aks_service_name = 'tryingtodeply'
# Deploy the model
aks_service = Model.deploy(ws,
                           aks_service_name,
                           models=[model],
                           inference_config=inference_config,
                           deployment_config=gpu_aks_config,
                           deployment_target=aks_target,
                           overwrite=True)
aks_service.wait_for_deployment(show_output=True)
print(aks_service.state)
And it fails, saying that it can't find the environment. More specifically, my environment is at version 11, but it keeps trying to find an environment whose version number is one higher (i.e., version 12) than the current environment:
FailedERROR - Service deployment polling reached non-successful terminal state, current service state: Failed
Operation ID: 0f03a025-3407-4dc1-9922-a53cc27267d4
More information can be found here:
Error:
{
"code": "BadRequest",
"statusCode": 400,
"message": "The request is invalid",
"details": [
{
"code": "EnvironmentDetailsFetchFailedUserError",
"message": "Failed to fetch details for Environment with Name: myenv Version: 12."
}
]
}
I have tried manually editing the environment JSON to match the version that azureml is trying to fetch, but nothing works. Can anyone see anything wrong with this code?
Update
Changing the name of the environment (e.g., my_inference_env) and passing it to InferenceConfig seems to be on the right track. However, the error now changes to the following:
Running..........
Failed
ERROR - Service deployment polling reached non-successful terminal state, current service state: Failed
Operation ID: f0dfc13b-6fb6-494b-91a7-de42b9384692
More information can be found here: https://some_long_http_address_that_leads_to_nothing
Error:
{
"code": "DeploymentFailed",
"statusCode": 404,
"message": "Deployment not found"
}
Solution
The answer from Anders below is indeed correct regarding the use of Azure ML environments. However, the last error I was getting was because I was setting the container image using the digest value (a SHA) and NOT the image name and tag (e.g., imagename:tag). Note this line of code in the first block:
myenv.docker.base_image = "4fb3..."
I referenced the digest value, but it should be changed to:
myenv.docker.base_image = "imagename:tag"
Once I made that change, the deployment succeeded! :)
One concept that took me a while to get was the bifurcation of registering and using an Azure ML Environment. If you have already registered your env, myenv, and none of the details of your environment have changed, there is no need to re-register it with myenv.register(). You can simply get the already-registered env using Environment.get(), like so:
myenv = Environment.get(ws, name='myenv', version=11)
My recommendation would be to name your environment something new, like "model_scoring_env". Register it once, then pass it to the InferenceConfig.

"Failed to parse manifest" when compiling rustc using a locally-modified copy of the libc crate

I need to build the rustc compiler using a modified libc crate. I cloned the libc directory and made my changes; now how do I include the modified libc in my build?
This is my Cargo.toml
[patch.crates-io]
# Similar to Cargo above we want the RLS to use a vendored version of `rustfmt`
# that we're shipping as well (to ensure that the rustfmt in RLS and the
# `rustfmt` executable are the same exact version).
rustfmt-nightly = { path = "src/tools/rustfmt" }
# See comments in `src/tools/rustc-workspace-hack/README.md` for what's going on
# here
rustc-workspace-hack = { path = 'src/tools/rustc-workspace-hack' }
# See comments in `tools/rustc-std-workspace-core/README.md` for what's going on
# here
rustc-std-workspace-core = { path = 'src/tools/rustc-std-workspace-core' }
rustc-std-workspace-alloc = { path = 'src/tools/rustc-std-workspace-alloc' }
rustc-std-workspace-std = { path = 'src/tools/rustc-std-workspace-std' }
libc = {path = "../libc"}
[patch."https://github.com/rust-lang/rust-clippy"]
clippy_lints = { path = "src/tools/clippy/clippy_lints" }
[dependencies]
# libc = { version = "0.2", default-features = false, path = "../libc" }
This is the error I get:
mahto#hydlnxeng27:/local/mnt/workspace/mahto/rust$ ./x.py build --config config.toml src/libstd 2>&1 | tee build.log
Updating only changed submodules
Submodules updated in 0.04 seconds
error: failed to parse manifest at `/local/mnt/workspace/mahto/rust/Cargo.toml`
Caused by:
virtual manifests do not specify [dependencies]
failed to run: /local/mnt/workspace/mahto/rust/build/x86_64-unknown-linux-gnu/stage0/bin/cargo build --manifest-path /local/mnt/workspace/mahto/rust/src/bootstrap/Cargo.toml
Build completed unsuccessfully in 0:00:00
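As the error says, the root manifest of the rust repository is a virtual workspace manifest, and Cargo does not allow a [dependencies] section in one; the [patch.crates-io] override alone is what redirects libc. A minimal sketch of the relevant part (the path matches the question's layout):

```toml
[patch.crates-io]
# Point every crates-io `libc` dependency in the workspace at the local clone.
libc = { path = "../libc" }
```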
After commenting out the [dependencies] section in Cargo.toml, I get this new error:
error[E0433]: failed to resolve: unresolved import
error: aborting due to previous error
For more information about this error, try `rustc --explain E0433`.
error: could not compile `libc`.

Terraform CLI : Error: Failed to read ssh private key: no key found

I have the variable private_key_path = "/users/arun/aws_keys/pk.pem" defined in my terraform.tfvars file,
and I am doing SSH in my Terraform template; see the configuration below:
connection {
  type        = "ssh"
  host        = self.public_ip
  user        = "ec2-user"
  private_key = file(var.private_key_path)
}
The private key file is very much available at that path, but I still get the exception below from the Terraform CLI:
Error: Failed to read ssh private key: no key found
Is there anything else I am missing?
Generate the public and private key using Git Bash:
$ ssh-keygen.exe -f demo
Then reference the demo file, or copy the demo and demo.pub files to the specific directory.
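A frequent cause of "Failed to read ssh private key: no key found" is a key stored in the newer OpenSSH format ("BEGIN OPENSSH PRIVATE KEY") rather than PEM; if that applies to pk.pem, generating or converting the key in PEM format may help. A sketch, with demo as a placeholder filename:

```shell
# Generate an RSA key pair in classic PEM format; -m PEM forces the
# "-----BEGIN RSA PRIVATE KEY-----" header that Terraform's file()-based
# ssh connection can parse.
ssh-keygen -m PEM -t rsa -b 2048 -f demo -N ""

# Verify the header:
head -n 1 demo
```

An existing OpenSSH-format key can be converted in place with `ssh-keygen -p -m PEM -f demo`.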
