Rust Workspaces do not respect individual project targets defined in .cargo/config.toml

Consider the following project directory:
root/
├── project_one/
│   ├── .cargo/
│   │   └── config.toml
│   ├── Cargo.toml
│   └── target.json
└── Cargo.toml
Cargo.toml is the workspace manifest with members = [ "project_one" ]
project_one/Cargo.toml is the project manifest
project_one/target.json (an AVR ATmega328P target specification) defines the custom target properties consumed by rustc
root/project_one/.cargo/config.toml sets target = "target.json" under [build]
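For reference, the two files described above look roughly like this (a sketch containing only the entries mentioned in the description):

# root/Cargo.toml (workspace manifest)
[workspace]
members = ["project_one"]

# root/project_one/.cargo/config.toml
[build]
target = "target.json"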
Problem
project_one does not compile using the configured target unless I delete root/Cargo.toml.
The build fails with an error message indicating that information for a correct target platform is missing (error: language item required, but not found: 'eh_personality').
Potential Solution
At the time of writing, a very recent PR has been merged into the rust-lang/cargo master branch: https://github.com/rust-lang/cargo/pull/9030
Are there any other solutions to avoid having to wait for a fix?


Is "root" the part of "directory"?

When parsing a path, Node.js considers the root to be part of the directory:
/home/user/dir/file.txt
┌─────────────────────┬────────────┐
│          dir        │    base    │
├──────┬              ├──────┬─────┤
│ root │              │ name │ ext │
"  /    home/user/dir / file  .txt "
└──────┴──────────────┴──────┴─────┘

C:\path\dir\file.txt
┌─────────────────────┬────────────┐
│          dir        │    base    │
├──────┬              ├──────┬─────┤
│ root │              │ name │ ext │
" C:\      path\dir   \ file  .txt "
└──────┴──────────────┴──────┴─────┘
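Node's path.parse shows the same thing; a quick sketch of the documented output shapes:

const path = require('path');

// On a POSIX system the root "/" stays inside `dir`:
console.log(path.parse('/home/user/dir/file.txt'));
// { root: '/', dir: '/home/user/dir', base: 'file.txt', ext: '.txt', name: 'file' }

// With the Windows implementation the drive root stays inside `dir` as well:
console.log(path.win32.parse('C:\\path\\dir\\file.txt'));
// { root: 'C:\\', dir: 'C:\\path\\dir', base: 'file.txt', ext: '.txt', name: 'file' }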
Is that actually the case? I am developing a new library and wondering whether I should treat the root as part of the directory or not.
Well, "directory" is actually a fuzzy term. According to the definition:
In computing, a directory is a file system cataloging structure which
contains references to other computer files, and possibly other
directories.
Wikipedia
Nothing there answers my question. What else do we know?
When we use the cd (short for "change directory") command, we specify a path relative to the current location. Nothing related to the root.
The cd command works within a specific drive (at least on Windows). This indirectly suggests that the root and the directory can be combined but are initially separate.
More exact terms are the "absolute path of the directory" and the "relative path of the directory". But what is the "directory" itself?
In the Windows case, the drive name can differ between computers, but that does not affect the file structure inside the drive. Again, the root and the directory are separate in this case.

Terraform - Variable defined in "*.auto.tfvars" file, but still cannot be discovered

I have the following directory structure:
.
├── ./first_terraform.tf
├── ./modules
│   └── ./modules/ssh_keys
│       ├── ./modules/ssh_keys/main.tf
│       ├── ./modules/ssh_keys/outputs.tf
│       └── ./modules/ssh_keys/variables.tf
├── ./terraform.auto.tfvars
└── ./variables.tf
I am trying to pass a variable ssh_key to my child module, defined in ./modules/ssh_keys/main.tf:
resource "aws_key_pair" "id_rsa_ec2" {
key_name = "id_rsa_ec2"
public_key = file(var.ssh_key)
}
I have this variable declared in both the root-level and child-level variables.tf files. For its value, I have set it in terraform.auto.tfvars as below:
# SSH Key
ssh_key = "~/.ssh/haha_key.pub"
The declaration in both the root-level and child-level variables.tf files looks like this:
variable "ssh_key" {
type = string
description = "ssh key for EC2 login and checks"
}
My root terraform configuration has this module declared as:
module "ssh_keys" {
source = "./modules/ssh_keys"
}
I first ran terraform init -upgrade at the root level, then ran terraform refresh and got hit by the following error:
Error: Missing required argument

  on first_terraform.tf line 69, in module "ssh_keys":
  69: module "ssh_keys" {

The argument "ssh_key" is required, but no definition was found.
Just for reference, line 69 in my root level configuration is where the module declaration has been made. I don't know what I have done wrong here. It seems I have all the variables declared, so am I missing some relationship between root/child module variable passing etc.?
Any help is appreciated! Thanks
I think I know what I did wrong.
Terraform modules, as per the documentation, require the parent to pass variables down as part of the module invocation. For example:
module "foo" {
source = "./modules/foo"
var1 = value
var2 = value
}
The values for var1 and var2 above can come from an auto.tfvars file, from environment variables (recommended), or even from command-line -var-file calls. In fact, this is what Terraform calls "Calling a Child Module" here.
Once I did that, everything worked like a charm! I hope I did find the correct way of doing things.
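Applied to the configuration above, the module call simply needs to forward the variable; a sketch, assuming the root module's ssh_key variable is the one populated from terraform.auto.tfvars:

module "ssh_keys" {
  source  = "./modules/ssh_keys"

  # Explicitly pass the root module's value down to the child module.
  ssh_key = var.ssh_key
}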

Using terragrunt generate provider block causes conflicts with require providers block in module

I'm using Terragrunt with Terraform version 0.14.8.
My project uses mono repo structure as it is a project requirement to package Terragrunt files and Terraform modules together in a single package.
Folder structure:
project root:
├── environments
│   └── prd
│       ├── rds-cluster
│       │   └── terragrunt.hcl
│       └── terragrunt.hcl
└── modules
    ├── rds-cluster
    │   ├── README.md
    │   ├── main.tf
    │   ├── output.tf
    │   └── variables.tf
    └── secretsmanager-secret
        ├── README.md
        ├── main.tf
        ├── output.tf
        └── variables.tf
In prd/terragrunt.hcl I define the remote state block and the generate provider block.
remote_state {
  backend = "s3"
  ...
}

generate "provider" {
  path      = "provider.tf"
  if_exists = "overwrite_terragrunt"
  contents  = <<EOF
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

provider "aws" {
  region = "ca-central-1"
}
EOF
}
In environments/prd/rds-cluster/terragrunt.hcl, I defined the following:
include {
  path = find_in_parent_folders()
}

terraform {
  source = "../../../modules//rds-cluster"
}

inputs = {
  ...
}
In modules/rds-cluster/main.tf, I defined the following:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.0"
    }
  }
}
// RDS related resources...
My problem is that when I try to run terragrunt plan under environments/prd/rds-cluster, I get the following error message:
Error: Duplicate required providers configuration

  on provider.tf line 3, in terraform:
   3:   required_providers {

A module may have only one required providers configuration. The required
providers were previously configured at main.tf:2,3-21.
I can resolve this by declaring the version within the provider block as shown here. However, the version attribute in provider blocks was deprecated in Terraform 0.13; Terraform recommends using the required_providers sub-block under the terraform block instead.
Does anyone know what I need to do to use the new required_providers block for my aws provider?
As you've seen, Terraform expects each module to have only one definition of its required providers, which is intended to avoid a situation where it's unclear why Terraform is detecting a particular provider requirement when the declarations are spread among multiple files.
However, to support this sort of piecemeal code-generation use case, Terraform has an advanced feature called Override Files which allows you to explicitly mark certain files for a different mode of processing, where they selectively override particular definitions from other files rather than creating entirely new definitions.
The details of this mechanism depend on which block type you're overriding, but the section on Merging terraform blocks discusses the behavior relevant to your particular situation:
If the required_providers argument is set, its value is merged on an element-by-element basis, which allows an override block to adjust the constraint for a single provider without affecting the constraints for other providers.
In both the required_version and required_providers settings, each override constraint entirely replaces the constraints for the same component in the original block. If both the base block and the override block set required_version, then the constraints in the base block are entirely ignored.
The practical implication of the above is that if you have an override file with a required_providers block that includes an entry for the AWS provider then Terraform will treat it as a full replacement for any similar entry already present in a non-override file, but it won't affect other provider requirements entries which do not appear in the override file at all.
Putting all of this together, you should be able to get the result you were looking for by asking Terragrunt to name this generated file provider_override.tf instead of just provider.tf, which will then activate the override file processing behavior and thus allow this generated file to override any existing definition of AWS provider requirements, while allowing the configurations to retain any other provider requirements they might also be defining.
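In Terragrunt terms that just means changing the path in the generate block; a sketch of the adjusted block from environments/prd/terragrunt.hcl, unchanged apart from the file name and comment:

generate "provider" {
  # Naming the generated file *_override.tf activates Terraform's
  # override-file merging instead of creating a duplicate block.
  path      = "provider_override.tf"
  if_exists = "overwrite_terragrunt"
  contents  = <<EOF
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

provider "aws" {
  region = "ca-central-1"
}
EOF
}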

contextIsolation is off in this Electron file?

I'm trying to get used to using the 'electronegativity' tool to look around inside open-source Electron projects, just to get familiar with it.
So I'm running it on a nondescript open-source project (this one, if you would like to know: https://github.com/hello-efficiency-inc/raven-reader):
│ CONTEXT_ISOLATION_JS_CHECK │ /home/ask/Git/raven-reader/src/main/pocket.js │ 11:23 │ Review the use of the contextIsolation option │
│ HIGH | FIRM                │                                               │       │ https://git.io/Jeu1p                          │
This looks interesting: it tells me that there are some issues with context isolation in the pocket.js file at line 11.
This would, in my mind, indicate that I should find something like this:
contextIsolation: true
However, when I look in the code in the location that electronegativity indicates, I just find this:
const authWindow = new BrowserWindow({
  width: 1024,
  height: 720,
  show: true
})
And I can't find any mention of context isolation anywhere in the file. Does this mean that electronegativity is finding something that isn't there? Or is there something smelly in the creation of the BrowserWindow?
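For context, a minimal sketch of what an explicit setting would look like; the contextIsolation option lives under webPreferences in the Electron API, and the surrounding values are simply the ones from the snippet above:

const authWindow = new BrowserWindow({
  width: 1024,
  height: 720,
  show: true,
  webPreferences: {
    // Explicitly enable context isolation; older Electron versions
    // defaulted this to false when it was not set.
    contextIsolation: true
  }
})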

Terraform modules: correct references of variables?

I'm writing a terraform script to create an EKS cluster with its worker nodes on AWS. First time doing it so I'm a bit confused.
Here is the folder organisation:
├── Int AWS Account
│   ├── variables.tf
│   ├── eks-cluster.tf (refers to the modules)
│   └── others
│
├── Prod AWS Account
│   └── (will be the same as Int, with different settings in the variables)
│
├── ReadMe.md
│
├── data sources
│
└── Modules
    ├── cluster.tf
    ├── worker-nodes.tf
    └── worker-nodes-sg.tf
I am a bit confused regarding how to use and pass variables. Right now, I refer to ${var.name} inside the module folder; in eks-cluster.tf I either put a direct value (name = blabla, which I mostly avoid) or refer to the variable again and keep a variables file in the account folder.
Is that correct?
I'm not sure if I get your question correctly, but in general you would want your module files to reference variables only, as modules are intended to be generic so you can easily include them in different environments.
When including the module in eks_cluster_int.tf or eks_cluster_prod.tf you would then pass the values for all variables defined in the module itself. This way you can use the environment specific values in the same module.
module "cluster" {
source = "..."
var1 = value1 # directly passing value
var2 = ${var.int_specific_var} # can be defined in variables.tf of environment
...
}
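On the module side, each of those names must also be declared as a variable; a minimal sketch (the names mirror the generic call above and are illustrative, not taken from the original modules):

# Modules/variables.tf
variable "var1" {
  type = string
}

variable "var2" {
  type = string
}

The module's own .tf files then reference them as var.var1 and var.var2 instead of hard-coded values.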
Does this answer your question?
