Trying to assign the Global Reader role and getting the below error in Terraform:
│ Error: Invalid resource type
│
│ on ../../modules/infrastructure/cloud-scanner-app/main.tf line 51, in resource "azuread_directory_role_assignment" "example":
│ 51: resource "azuread_directory_role_assignment" "example" {
│
│ The provider hashicorp/azuread does not support resource type "azuread_directory_role_assignment"
From the error message I guess that you are using a hashicorp/azuread provider version older than 2.25.0. The azuread_directory_role_assignment resource type was introduced in 2.25.0 (June 23, 2022). I suggest upgrading the provider to at least 2.26.1 because, as you can see in the changelog, some bug fixes for the azuread_directory_role_assignment resource type have since been made: https://github.com/hashicorp/terraform-provider-azuread/blob/main/CHANGELOG.md
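For reference, a minimal required_providers constraint that enforces that floor could look like the sketch below (adapt the constraint style to whatever pinning convention your project already uses):
terraform {
  required_providers {
    azuread = {
      source  = "hashicorp/azuread"
      version = ">= 2.26.1"
    }
  }
}
After changing the constraint, run terraform init -upgrade so the dependency lock file picks up the newer provider version.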
Recently, I've run into a problem when running terraform init. The error is shown below:
>terraform init
Initializing modules...
Initializing the backend...
Initializing provider plugins...
- Reusing previous version of hashicorp/template from the dependency lock file
- Reusing previous version of cloudflare/cloudflare from the dependency lock file
- Reusing previous version of hashicorp/aws from the dependency lock file
- Reusing previous version of hashicorp/null from the dependency lock file
- Installing hashicorp/template v2.2.0...
- Using previously-installed cloudflare/cloudflare v3.18.0
- Installing hashicorp/aws v3.30.0...
- Installing hashicorp/null v3.1.1...
╷
│ Error: Failed to install provider
│
│ Error while installing hashicorp/template v2.2.0: Get "https://releases.hashicorp.com/terraform-provider-template/2.2.0/terraform-provider-template_2.2.0_linux_amd64.zip": read tcp 10.250.192.121:45540->108.157.30.78:443: read:
│ connection reset by peer
╵
╷
│ Error: Failed to install provider
│
│ Error while installing hashicorp/aws v3.30.0: Get "https://releases.hashicorp.com/terraform-provider-aws/3.30.0/terraform-provider-aws_3.30.0_linux_amd64.zip": read tcp 10.250.192.121:37686->108.157.30.40:443: read: connection reset
│ by peer
╵
╷
│ Error: Failed to install provider
│
│ Error while installing hashicorp/null v3.1.1: Get "https://releases.hashicorp.com/terraform-provider-null/3.1.1/terraform-provider-null_3.1.1_linux_amd64.zip": read tcp 10.250.192.121:45552->108.157.30.78:443: read: connection reset
│ by peer
I suppose I need to add some additional network-related configuration on my server? (The IT team just added a new cert to the company network system.)
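The "connection reset by peer" while downloading from releases.hashicorp.com usually points to something on the network path (a proxy or TLS-inspecting appliance, which would fit the newly added company certificate) rather than to Terraform itself. If your IT team runs an internal provider mirror, one possible way to avoid the direct download is a provider_installation block in the CLI configuration file; this is only a sketch, and the mirror URL below is a placeholder you would need to replace with your company's endpoint:
# ~/.terraformrc (terraform.rc on Windows) -- sketch only; the mirror URL is a placeholder
provider_installation {
  network_mirror {
    url = "https://terraform-mirror.example.internal/"
  }
  direct {
    exclude = ["registry.terraform.io/*/*"]
  }
}
If there is no mirror, it is worth asking IT whether the new certificate also has to be trusted on your server (for Go-based tools such as Terraform, pointing SSL_CERT_FILE at the corporate CA bundle is a common approach), since half-configured TLS interception often produces exactly these resets.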
If I look at the outputs provided by the haskell.nix flake from an M1 computer, it starts building ghc-8.8.4, etc.:
❯ nix flake show github:input-output-hk/haskell.nix
github:input-output-hk/haskell.nix/1b54ea01568299a0eda578ae9395e20d5c699ee1
├───checks
│ ├───aarch64-darwin
trace: haskell-nix.haskellLib.cleanGit: /nix/store/jmx2m0ldgrjq7p3gb4yyca47nvbvspfl-source does not seem to be a git repository,
assuming it is a clean checkout.
trace: No index state specified for haskell-project, using the latest index state that we know about (2022-02-07T00:00:00Z)!
trace: No index state specified for haskell-project, using the latest index state that we know about (2022-02-07T00:00:00Z)!
trace: No index state specified for haskell-project, using the latest index state that we know about (2022-02-07T00:00:00Z)!
trace: WARNING: No materialized dummy-ghc-data for ghc-8.8.4-aarch64-darwin.
trace: To make this a fixed-output derivation but not materialized, set `sha256` to the output of the 'calculateMaterializedSha' script in 'passthru'.
trace: To materialize this entirely, pass a writable path as the `materialized` argument and run the 'updateMaterialized' script in 'passthru'.
[1/0/579 built, 0.1 MiB DL] building ghc-8.8.4 (buildPhase):.....
From an Intel Mac, I get:
❯ nix flake show github:input-output-hk/haskell.nix
github:input-output-hk/haskell.nix/1b54ea01568299a0eda578ae9395e20d5c699ee1
├───checks
│ ├───aarch64-darwin
trace: haskell-nix.haskellLib.cleanGit: /nix/store/jmx2m0ldgrjq7p3gb4yyca47nvbvspfl-source does not seem to be a git repository,
assuming it is a clean checkout.
trace: No index state specified for haskell-project, using the latest index state that we know about (2022-02-07T00:00:00Z)!
trace: No index state specified for haskell-project, using the latest index state that we know about (2022-02-07T00:00:00Z)!
trace: No index state specified for haskell-project, using the latest index state that we know about (2022-02-07T00:00:00Z)!
error: --- Error ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- nix
a 'aarch64-darwin' with features {} is required to build '/nix/store/sc11rh4l348yw3z4q4fy4byw324nm5yz-nix-tools-plan-to-nix-pkgs.drv', but I am a 'x86_64-darwin' with features {benchmark, big-parallel, nixos-test, recursive-nix}
(use '--show-trace' to show detailed location information)
Why do those flakes need to build anything just to show their outputs?
If I run this command on some other flake I have, I get the following after a few fetches:
❯ nix flake show github:edolstra/dwarffs
github:edolstra/dwarffs/69d73417d83ebeb7711912e33515d87049b39de0
├───checks
│ ├───aarch64-linux
│ │ ├───build: derivation 'dwarffs-0.1.20220128.69d7341'
│ │ └───test: derivation 'vm-test-run-unnamed'
│ ├───i686-linux
│ │ ├───build: derivation 'dwarffs-0.1.20220128.69d7341'
│ │ └───test: derivation 'vm-test-run-unnamed'
│ └───x86_64-linux
│ ├───build: derivation 'dwarffs-0.1.20220128.69d7341'
│ └───test: derivation 'vm-test-run-unnamed'
├───defaultPackage
│ ├───aarch64-linux: package 'dwarffs-0.1.20220128.69d7341'
│ ├───i686-linux: package 'dwarffs-0.1.20220128.69d7341'
│ └───x86_64-linux: package 'dwarffs-0.1.20220128.69d7341'
├───nixosModules
│ └───dwarffs: NixOS module
└───overlay: Nixpkgs overlay
haskell.nix depends heavily on what is commonly called "import from derivation" or IFD. These are expressions such as
foo = import "${mkDerivation bar}/expr.nix";
or
qux = builtins.readFile (somePackage + "/data.json");
These cannot be evaluated without building bar and somePackage.
haskell.nix does have a feature that lets you avoid such expressions altogether; it's called materialization.
In Terraform, I am getting a ShareBeingDeleted error while updating the protocol of an Azure storage share from SMB to NFS (and also from NFS to SMB).
I already deployed the Azure storage share with the SMB protocol using the source below:
resource "azurerm_storage_share" "example" {
name = "mystrgshr456"
storage_account_name = "mystrgacnt456"
quota = 100
enabled_protocol = "SMB"
}
Now I am planning to update the protocol from SMB to NFS, so I have updated the source code as shown below:
resource "azurerm_storage_share" "example" {
name = "mystrgshr456"
storage_account_name = "mystrgacnt456"
quota = 100
enabled_protocol = "NFS"
}
After changing the source, I ran the terraform apply --auto-approve command and got the error below:
$ terraform apply --auto-approve
azurerm_storage_share.example: Refreshing state... [id=https://mystrgacnt456.file.core.windows.net/mystrgshr456]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
-/+ destroy and then create replacement
Terraform will perform the following actions:
# azurerm_storage_share.example must be replaced
-/+ resource "azurerm_storage_share" "example" {
~ enabled_protocol = "SMB" -> "NFS" # forces replacement
~ id = "https://mystrgacnt456.file.core.windows.net/mystrgshr456" -> (known after apply)
~ metadata = {} -> (known after apply)
name = "mystrgshr456"
~ resource_manager_id = "/resourceGroups/tf-azure/providers/Microsoft.Storage/storageAccounts/mystrgacnt456/fileServices/default/fileshares/mystrgshr456" -> (known after apply)
~ url = "https://mystrgacnt456.file.core.windows.net/mystrgshr456" -> (known after apply)
# (2 unchanged attributes hidden)
- timeouts {}
}
Plan: 1 to add, 0 to change, 1 to destroy.
azurerm_storage_share.example: Destroying... [id=https://mystrgacnt456.file.core.windows.net/mystrgshr456]
azurerm_storage_share.example: Destruction complete after 3s
azurerm_storage_share.example: Creating...
azurerm_storage_share.example: Still creating... [10s elapsed]
azurerm_storage_share.example: Still creating... [20s elapsed]
azurerm_storage_share.example: Still creating... [30s elapsed]
╷
│ Error: creating Share "mystrgshr456" (Account "mystrgacnt456" / Resource Group "tf-azure"): shares.Client#Create: Failure sending request: StatusCode=409 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="ShareBeingDeleted" Message="The specified share is being deleted. Try operation later.\nRequestId:532d2678-b01a-00f4-4344-149210000000\nTime:2022-01-28T12:45:49.9175758Z"
│
│ with azurerm_storage_share.example,
│ on main.tf line 2, in resource "azurerm_storage_share" "example":
│ 2: resource "azurerm_storage_share" "example" {
│
╵
But the storage share protocol has been successfully updated in the Azure portal.
This is a bug in the Terraform azurerm provider, where recreating the storage share with an updated protocol gives an error like:
Error: creating Share "example" (Account "expstracc" / Resource Group "exap-rg"): shares.Client#Create: Failure sending request: StatusCode=409 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="ShareBeingDeleted" Message="The specified share is being deleted. Try operation later."
This happens even though the storage share protocol does get updated successfully, as can be seen in the Azure portal.
This will be fixed in this GitHub pull request on the azurerm_storage_share resource.
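Until that fix is released, a possible workaround (my assumption based on the behaviour described above, not an official recommendation) is to let the asynchronous share deletion finish and then reconcile the state manually: either re-run the apply once the old share is fully gone, or, if the share has already been recreated with the new protocol as seen in the portal, import it back into state using the share URL as the resource ID:
# retry once the backend deletion has completed
terraform apply --auto-approve
# or, if the share already exists with the new protocol but is missing from state:
terraform import azurerm_storage_share.example "https://mystrgacnt456.file.core.windows.net/mystrgshr456"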
When running terraform init, the following error is displayed:
Could not retrieve the list of available versions for provider
│ hashicorp/azurerm: could not connect to registry.terraform.io: Failed to
│ request discovery document: Get
│ "https://registry.terraform.io/.well-known/terraform.json": net/http:
│ request canceled while waiting for connection (Client.Timeout exceeded
│ while awaiting headers)
What could be the reason?
So I am following Terraform's official page to install and start with Terraform, but when I come to the terraform init command, I am getting the following error:
Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider kreuzwerker/docker: could not connect to registry.terraform.io:
│ Failed to request discovery document: Get "https://registry.terraform.io/.well-known/terraform.json": dial tcp: lookup
│ registry.terraform.io on 192.168.0.1:53: server misbehaving
The issue seems to be with network connectivity. If you are using a VPN, stop it and then retry the terraform init command.
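To confirm it really is a connectivity problem before changing anything else, a quick check (just a sketch; use whatever equivalent tools your OS provides) is to resolve the registry and fetch its discovery document directly:
nslookup registry.terraform.io
curl -v https://registry.terraform.io/.well-known/terraform.json
If the lookup against 192.168.0.1 fails or the curl also times out, the problem is in the local DNS/VPN path rather than in Terraform.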
If you happen to be using an Apple M1 chip, I posted this answer elsewhere:
I am using a MacBook with an M1 chip as well and kept facing the same error. To fix it, I had to uninstall Terraform with "brew uninstall terraform", follow these instructions https://benobi.one/posts/running_brew_on_m1_for_x86/, and then run "ibrew install hashicorp/tap/terraform".
Although "terraform version" shows the same output as before, it now works, for me at least. Hope this helps someone!
I encountered the same error when running terraform init for a new AWS resource. I got the error below:
Initializing modules...
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/aws versions matching "4.9.0"...
╷
│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider hashicorp/aws: could not connect to registry.terraform.io: Failed to request discovery document: Get
│ "https://registry.terraform.io/.well-known/terraform.json": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Here's how I solved it:
I think it was a temporary network issue.
All I had to do was upgrade the Terraform version from 1.1.7 to 1.1.8 using the command:
brew upgrade terraform
Then I ran terraform init again, waited a few minutes, and it was successful.