How to import a Terraform data source for an aws_organizations_organization - terraform

I have a simple aws_organizations_organization data source:
data "aws_organizations_organization" "my_org" {
  name = "my_org"
}
I am trying to import the data source into my state.
Expected
I run terraform import data.aws_organizations_organization my_org, and the data source is imported properly.
Actual
I run terraform import data.aws_organizations_organization my_org and get the error:
Error: Invalid address
│
│ on <import-address> line 1:
│ 1: data.aws_organizations_organization
│
│ Resource specification must include a resource type and name.
Can someone explain to me what is wrong with this command? Thank you.
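For reference, the error is pointing at the address shape: terraform import expects an address of the form <resource type>.<resource name> plus an ID, and it only works on managed resources; a data source is read fresh on every plan/apply and is never imported. A minimal sketch of a command that would at least parse, with a hypothetical organization ID:
# Hypothetical sketch: a resource block "aws_organizations_organization" "my_org"
# could be imported like this; a data block cannot be imported at all.
terraform import aws_organizations_organization.my_org o-exampleorgid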

Related

How does Polars' global string cache work?

I am trying to replace the null values in a polars categorical series with another literal string. The solution I had worked in an older version of polars, before categoricals started using the global string cache by default.
let data = "Name
ONE
TWO
THREE
FOUR

FIVE";
let schema = Schema::new()
    .insert_index(0, "Name".to_string(), DataType::Categorical(None))
    .unwrap();
let buff = std::io::Cursor::new(data);
// toggle_string_cache(true); // this is the line that fixes the error
let frame = CsvReader::new(buff)
    .infer_schema(Some(0))
    .with_dtypes(Some(&schema))
    .finish()
    .unwrap();
println!("{:?}", frame);
// ┌───────┐
// │ Name │
// │ --- │
// │ cat │
// ╞═══════╡
// │ ONE │
// ├╌╌╌╌╌╌╌┤
// │ TWO │
// ├╌╌╌╌╌╌╌┤
// │ THREE │
// ├╌╌╌╌╌╌╌┤
// │ FOUR │
// ├╌╌╌╌╌╌╌┤
// │ null │
// ├╌╌╌╌╌╌╌┤
// │ FIVE │
// └───────┘
// toggle_string_cache(true); // this is the line that doesn't work as expected
let null_filled_frame = frame
    .clone()
    .lazy()
    .with_column(
        when(col("Name").is_null())
            .then(lit("Missing"))
            .otherwise(col("Name"))
            .alias("Name"),
    )
    .collect()
    .unwrap();
println!("{:?}", null_filled_frame);
// ┌─────────┐ // the expected result
// │ Name │
// │ --- │
// │ cat │
// ╞═════════╡
// │ ONE │
// ├╌╌╌╌╌╌╌╌╌┤
// │ TWO │
// ├╌╌╌╌╌╌╌╌╌┤
// │ THREE │
// ├╌╌╌╌╌╌╌╌╌┤
// │ FOUR │
// ├╌╌╌╌╌╌╌╌╌┤
// │ Missing │
// ├╌╌╌╌╌╌╌╌╌┤
// │ FIVE │
// └─────────┘
The polars CSV reader will enable the global string cache for the categorical columns and disable it afterwards, so it makes sense that I run into issues when I later want to change the categorical data: I get an error about mixing data from a globally cached and a non-cached categorical.
Enabling the global string cache before the CSV read allows me to modify the data as expected.
The central question is why I cannot re-enable the global cache after the CSV has been read but before performing my replacement operation.
When I do this, I get the error message "The two categorical arrays are not created under the same global string cache. They cannot be merged."
This is weird, because it implies that more than one global string cache is in use when I do it this way, which does not make sense. Could someone explain?
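For context, a minimal sketch of the ordering that does work, using the same toggle_string_cache call as in the snippet above: the cache is enabled before the CSV read creates any categorical data and stays on for the whole pipeline.
// Sketch, assuming the same polars version/API as above: enable the cache
// before any categorical data is created and leave it enabled throughout.
toggle_string_cache(true);
let frame = CsvReader::new(buff)
    .infer_schema(Some(0))
    .with_dtypes(Some(&schema))
    .finish()
    .unwrap();
// The column values and the lit("Missing") literal are now interned under the
// same cache, so the replacement no longer mixes cache states.
let null_filled_frame = frame
    .lazy()
    .with_column(
        when(col("Name").is_null())
            .then(lit("Missing"))
            .otherwise(col("Name"))
            .alias("Name"),
    )
    .collect()
    .unwrap();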

Failed to create my first OpenStack VM via Terraform

I am trying to create OpenStack VMs with Terraform for the first time, but so far no luck.
Here is what I have in my main.tf file:
...
terraform {
  required_providers {
    openstack = {
      source = "terraform-provider-openstack/openstack"
    }
  }
}

provider "openstack" {
  cloud = "osp_admin" # cloud defined in cloud.yml file
}

# Variables
variable "keypair" {
  type    = string
  default = "ubuntu" # name of keypair created
}

variable "network" {
  type    = string
  default = "Public_External_1" # default network to be used
}

variable "security_groups" {
  type    = list(string)
  default = ["default"] # name of default security group
}

# Data sources
## Get flavor id
data "openstack_compute_flavor_v2" "flavor" {
  name = "mt.small" # flavor to be used
}

## Get image id
data "openstack_images_image_v2" "image" {
  name        = "Debian-10" # name of image to be used
  most_recent = true
}
The "Debian-10" image has been created; I have verified it in the image list. Now, when I run this on the command line:
terraform plan
I get the following message in return:
data.openstack_images_image_v2.image: Reading...
data.openstack_compute_flavor_v2.flavor: Reading...
╷
│ Error: Error creating OpenStack compute client: Post "http://van3-st-vn-01.corp.<domain_name>.com:5000/v3/auth/tokens": OpenStack connection error, retries exhausted. Aborting. Last error was: EOF
│
│ with data.openstack_compute_flavor_v2.flavor,
│ on main.tf line 32, in data "openstack_compute_flavor_v2" "flavor":
│ 32: data "openstack_compute_flavor_v2" "flavor" {
│
╵
╷
│ Error: Error creating OpenStack image client: Post "http://van3-st-vn-01.corp.<domain_name>.com:5000/v3/auth/tokens": OpenStack connection error, retries exhausted. Aborting. Last error was: EOF
│
│ with data.openstack_images_image_v2.image,
│ on main.tf line 37, in data "openstack_images_image_v2" "image":
│ 37: data "openstack_images_image_v2" "image" {
│
╵
I am running Terraform on Ubuntu 22.04; here is the version output:
/usr/bin/terraform --version
Terraform v1.2.9
on linux_amd64
+ provider registry.terraform.io/terraform-provider-openstack/openstack v1.48.0
If I log in to OpenStack directly, I am able to create this instance with the same set of parameters.
Any ideas what I did wrong here?
Thanks,
Chun
Update:
curl -v http://van3-st-vn-01.corp.<domain_name>.com:5000/v3/auth/tokens
* Trying 10.95.36.130:5000...
* TCP_NODELAY set
* Connected to van3-st-vn-01.corp.<domain_name>.com (10.95.36.130) port 5000 (#0)
> GET /v3/auth/tokens HTTP/1.1
> Host: van3-st-vn-01.corp.<domain_name>.com:5000
> User-Agent: curl/7.68.0
> Accept: */*
>
* Empty reply from server
* Connection #0 to host van3-st-vn-01.corp.<domain_name>.com left intact
curl: (52) Empty reply from server
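One avenue worth checking, given the empty reply over plain HTTP: whether Keystone on port 5000 is actually serving TLS only, in which case an http:// auth URL would explain both the curl result and the "EOF" in the provider error. A hedged probe against the same host:
# If the identity endpoint only speaks TLS, a plain-HTTP request is often
# dropped with an empty reply / EOF. Try the same port over HTTPS:
curl -vk https://van3-st-vn-01.corp.<domain_name>.com:5000/v3
# If this returns a JSON version document, update the auth_url scheme in the
# clouds file entry for "osp_admin" accordingly.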

How to add a label to my VM instance in GCP via terraform/terragrunt

I have an issue in our environment where I cannot add a label to a VM instance in GCP via terraform/terragrunt after creation. We have a Google repository that was set up via Terraform, and we use git to clone and update from a local repository; this activates a Cloud Build trigger that pushes the changes to the repo. We do not use terraform/terragrunt commands at all; it is all controlled via git. The labels are referenced in our compute module as shown:
variable "labels" {
description = "Labels to add."
type = map(string)
default = {}
}
OK, onto the issue. We have in our environment a mix of lift-and-shift and cloud-native VM instances. We recently decided we wanted to add an additional label in the code to identify whether an instance is under Terraform control, i.e. terraform = "true"/"false":
labels = {
  application      = "demo-test"
  businessunit     = "homes"
  costcentre       = "90imt"
  createdby        = "ab"
  department       = "it"
  disasterrecovery = "no"
  environment      = "rnd"
  contact          = "abriers"
  terraform        = "false"
}
So I add the label and use the usual git commands to add/commit/push, which triggers the Cloud Build as usual. The problem is that the label does not appear in the console when viewing the instance.
It's as if Cloud Build or terraform/terragrunt isn't recognising it as a change. I can change the value of an existing label no problem, but I cannot add or remove a label after the VM has been created.
It has been suggested to run terraform/terragrunt plan in VS Code, but as mentioned, this has all been set up to use git, so those commands do not work.
For example, I run terragrunt init in the directory and get this error:
PS C:\Cloudrepos\placesforpeople> terragrunt init
time=2022-07-27T09:56:27+01:00 level=error msg=Error reading file at path C:/Cloudrepos/placesforpeople/terragrunt.hcl: open C:/Cloudrepos/placesforpeople/terragrunt.hcl: The system cannot find the
file specified.
time=2022-07-27T09:56:27+01:00 level=error msg=Unable to determine underlying exit code, so Terragrunt will exit with error code 1
PS C:\Cloudrepos\placesforpeople> cd org
PS C:\Cloudrepos\placesforpeople\org> cd rnd
PS C:\Cloudrepos\placesforpeople\org\rnd> cd adam_play_area
PS C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area> ls
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 20/07/2022 14:18 modules
d----- 20/07/2022 14:18 test_project_001
PS C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area> cd test_project_001
PS C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area\test_project_001> cd compute
PS C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area\test_project_001\compute> ls
Directory: C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area\test_project_001\compute
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 07/07/2022 15:51 start_stop_schedule
d----- 20/07/2022 14:18 umig
-a---- 07/07/2022 16:09 1308 .terraform.lock.hcl
-a---- 27/07/2022 09:56 2267 terragrunt.hcl
PS C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area\test_project_001\compute> terragrunt init
Initializing modules...
- data_disk in ..\compute_data_disk
Initializing the backend...
Successfully configured the backend "gcs"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/google from the dependency lock file
- Reusing previous version of hashicorp/google-beta from the dependency lock file
╷
│ Warning: Backend configuration ignored
│
│ on ..\compute_data_disk\backend.tf line 3, in terraform:
│ 3: backend "gcs" {}
│
│ Any selected backend applies to the entire configuration, so Terraform
│ expects provider configurations only in the root module.
│
│ This is a warning rather than an error because it's sometimes convenient to
│ temporarily call a root module as a child module for testing purposes, but
│ this backend configuration block will have no effect.
╵
╷
│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider
│ hashicorp/google: could not connect to registry.terraform.io: Failed to
│ request discovery document: Get
│ "https://registry.terraform.io/.well-known/terraform.json": Proxy
│ Authorization Required
╵
╷
│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider
│ hashicorp/google-beta: could not connect to registry.terraform.io: Failed
│ to request discovery document: Get
│ "https://registry.terraform.io/.well-known/terraform.json": Proxy
│ Authorization Required
╵
time=2022-07-27T09:57:40+01:00 level=error msg=Hit multiple errors:
Hit multiple errors:
exit status 1
PS C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area\test_project_001\compute>
But as mentioned, we don't use, and have never used, these commands to push the changes.
I cannot work out why these labels won't add/remove after the VM has already been created.
I have tried making a change to an instance to trigger a diff, such as increasing the disk size.
I have also tried to create a block in the module for all the labels needed, but this doesn't work, as you cannot have labels as a block in this module:
labels {
  application      = var.labels.application
  businessunit     = var.labels.businessunit
  costcentre       = var.labels.costcentre
  createdby        = var.labels.createdby
  department       = var.labels.department
  disasterrecovery = var.labels.disasterrecovery
  environment      = var.labels.environment
  contact          = var.labels.contact
  terraform        = var.labels.terraform
}
Any ideas? I know you cannot add a label to a project post-creation; does the same apply to VM instances? Is there any alternative method I can test?
As requested, this is the code for the VM instance:
terraform {
  source = "../../modules//compute_instance_static_ip/"
}

# Include all settings from the root terragrunt.hcl file
include {
  path = find_in_parent_folders("org.hcl")
}

dependency "project" {
  config_path = "../project"

  # Configure mock outputs for the terraform commands that are returned when
  # there are no outputs available (e.g. the module hasn't been applied yet).
  mock_outputs_allowed_terraform_commands = ["plan", "validate"]
  mock_outputs = {
    project_id = "project-not-created-yet"
  }
}

prevent_destroy = false

inputs = {
  gcp_instance_sa_email = "testprj-compute@gc-r-prj-testprj-0001-9627.iam.gserviceaccount.com" # this will tell GCP to use the default GCE service account
  instance_name         = "rnd-demo-test1"
  network               = "projects/gc-a-prj-vpchost-0001-3312/global/networks/gc-r-vpc-0001"
  subnetwork            = "projects/gc-a-prj-vpchost-0001-3312/regions/europe-west2/subnetworks/gc-r-snet-middleware-0001"
  zone                  = "europe-west2-c"
  region                = "europe-west2"
  project               = dependency.project.outputs.project_id
  os_image              = "debian-10-buster-v20220118"
  machine_type          = "n1-standard-4"
  boot_disk_size        = 100
  instance_scope        = ["cloud-platform"]
  instance_tags         = ["demo-test"]
  deletion_protection   = "false"

  metadata = {
    windows-startup-script-ps1 = "Set-TimeZone -Id 'GMT Standard Time' -PassThru"
  }

  ip_address_region = "europe-west2"
  ip_address_type   = "INTERNAL"

  attached_disks = {
    data = {
      size = 60
      type = "pd-standard"
    }
  }

  /*
  instance_schedule_policy = {
    name = "start-stop"
    #region = "europe-west2"
    vm_start_schedule = "30 07 * * *"
    vm_stop_schedule  = "00 18 * * *"
    time_zone         = "GMT"
  }
  */

  labels = {
    application      = "demo-test"
    businessunit     = "homes"
    costcentre       = "90imt"
    createdby        = "ab"
    department       = "it"
    disasterrecovery = "no"
    environment      = "rnd"
    contact          = "abriers"
    terraform        = "false"
  }
}
The terragrunt validate-inputs result is below:
PS C:\Cloudrepos\placesforpeople\org\rnd> terragrunt validate-inputs
time=2022-07-27T14:25:19+01:00 level=warning msg=The following inputs passed in by terragrunt are unused:
prefix=[C:\Cloudrepos\placesforpeople\org\rnd]
time=2022-07-27T14:25:19+01:00 level=warning msg= - billing_account prefix=[C:\Cloudrepos\placesforpeople\org\rnd]
time=2022-07-27T14:25:19+01:00 level=warning msg= - host_project_id prefix=[C:\Cloudrepos\placesforpeople\org\rnd]
time=2022-07-27T14:25:19+01:00 level=warning prefix=[C:\Cloudrepos\placesforpeople\org\rnd]
time=2022-07-27T14:25:19+01:00 level=info msg=All required inputs are passed in by terragrunt. prefix=[C:\Cloudrepos\placesforpeople\org\rnd]
time=2022-07-27T14:25:19+01:00 level=error msg=Terragrunt configuration has misaligned inputs
time=2022-07-27T14:25:19+01:00 level=error msg=Unable to determine underlying exit code, so Terragrunt will exit with error code 1
PS C:\Cloudrepos\placesforpeople\org\rnd>
I have found the culprit!
In the compute instance module I discovered this block of code. I removed labels from the ignore_changes list and voilà, the extra labels now appear. Thanks for the assistance and advice on post formatting.
lifecycle {
  ignore_changes = [
    boot_disk.0.initialize_params.0.image,
    attached_disk, labels
  ]
}
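Based on that finding, the corrected block simply drops labels from the list, e.g.:
lifecycle {
  ignore_changes = [
    # labels removed from this list so Terraform diffs them again
    boot_disk.0.initialize_params.0.image,
    attached_disk,
  ]
}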

The package import path is different for dynamic codegen and static codegen

Here is the structure of the src directory of my project:
.
├── config.ts
├── protos
│   ├── index.proto
│   ├── index.ts
│   ├── share
│   │   ├── topic.proto
│   │   ├── topic_pb.d.ts
│   │   ├── user.proto
│   │   └── user_pb.d.ts
│   ├── topic
│   │   ├── service.proto
│   │   ├── service_grpc_pb.d.ts
│   │   ├── service_pb.d.ts
│   │   ├── topic.integration.test.ts
│   │   ├── topic.proto
│   │   ├── topicServiceImpl.ts
│   │   ├── topicServiceImplDynamic.ts
│   │   └── topic_pb.d.ts
│   └── user
│       ├── service.proto
│       ├── service_grpc_pb.d.ts
│       ├── service_pb.d.ts
│       ├── user.proto
│       ├── userServiceImpl.ts
│       └── user_pb.d.ts
└── server.ts
share/user.proto:
syntax = "proto3";
package share;

message UserBase {
  string loginname = 1;
  string avatar_url = 2;
}
topic/topic.proto:
syntax = "proto3";
package topic;

import "share/user.proto";

enum Tab {
  share = 0;
  ask = 1;
  good = 2;
  job = 3;
}

message Topic {
  string id = 1;
  string author_id = 2;
  Tab tab = 3;
  string title = 4;
  string content = 5;
  share.UserBase author = 6;
  bool good = 7;
  bool top = 8;
  int32 reply_count = 9;
  int32 visit_count = 10;
  string create_at = 11;
  string last_reply_at = 12;
}
As you can see, I import the share package and use its UserBase message type inside the Topic message type. When I try to start the server, I get the error:
no such Type or Enum 'share.UserBase' in Type .topic.Topic
But when I change the package import path to a relative path, import "../share/user.proto";, it works fine and the server logs: Server is listening on http://localhost:3000.
The above is the usage of dynamic codegen.
Now I switch to using static codegen; here is the shell script for generating the code:
protoc \
  --plugin=protoc-gen-ts=./node_modules/.bin/protoc-gen-ts \
  --ts_out=./src/protos \
  -I ./src/protos \
  ./src/protos/**/*.proto
It seems the protocol buffer compiler doesn't support relative paths; I get the error:
../share/user.proto: Backslashes, consecutive slashes, ".", or ".." are not allowed in the virtual path
So I changed the package import path back to import "share/user.proto";. The code is generated correctly, but when I try to start my server, I get the same error:
no such Type or Enum 'share.UserBase' in Type .topic.Topic
It's weird.
Package versions:
"grpc-tools": "^1.6.6",
"grpc_tools_node_protoc_ts": "^4.1.3",
protoc --version
libprotoc 3.10.0
UPDATE:
repo: https://github.com/mrdulin/nodejs-grpc/tree/master/src
Your dynamic codegen is failing because you are not specifying the paths to search for imported .proto files. You can do this using the includeDirs option when calling protoLoader.loadSync, which works in a very similar way to the -I option you pass to protoc. In this case, you are loading the proto files from the src/protos directory, so it should be sufficient to pass the option includeDirs: [__dirname]. Then the import paths in your .proto files should be relative to that directory, just like when you use protoc.
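For illustration, a minimal sketch of that option with @grpc/proto-loader (loadSync and includeDirs are the proto-loader API; the file path follows the layout above and assumes this code lives in src/protos):
import * as grpc from "@grpc/grpc-js";
import * as protoLoader from "@grpc/proto-loader";

// includeDirs plays the role of protoc's -I flag: import statements inside
// the .proto files are resolved relative to these directories.
const packageDefinition = protoLoader.loadSync("topic/topic.proto", {
  includeDirs: [__dirname], // e.g. the src/protos directory
});
const topicPackage = grpc.loadPackageDefinition(packageDefinition);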
You are probably seeing the same error when you try to use the static code generation because it is actually the dynamic codegen error; you don't appear to be removing the dynamic codegen code when trying to use the statically generated code.
However, the main problem you will face with the statically generated code is that you are only generating the TypeScript type definition files. You also need to generate JavaScript files to actually run the code. The official Node gRPC plugin for protoc is distributed in the grpc-tools package. It comes with a binary called grpc_tools_node_protoc, which should be used in place of protoc and automatically includes the plugin. You will still need to pass a --js_out flag to generate that code.
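A hedged sketch of what the generation command could look like with grpc_tools_node_protoc (flag values are typical ones, not taken from the question; adjust the output style to taste):
# grpc_tools_node_protoc wraps protoc and bundles the Node codegen plugin,
# so it can emit runnable JS alongside the TypeScript definitions.
./node_modules/.bin/grpc_tools_node_protoc \
  --plugin=protoc-gen-ts=./node_modules/.bin/protoc-gen-ts \
  --js_out=import_style=commonjs,binary:./src/protos \
  --grpc_out=./src/protos \
  --ts_out=./src/protos \
  -I ./src/protos \
  ./src/protos/**/*.proto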

AWS SAM Nested Application in Python with Dynamorm

I am using AWS SAM to build a serverless application. I followed the instructions for building a nested application.
My application structure is basically the following:
.
├── MAKEFILE
├── README.md
├── __init__.py
├── apps
│   ├── __init__.py
│   ├── account
│   │   ├── __init__.py
│   │   ├── endpoints.py
│   │   ├── models.py
│   │   ├── requirements.txt
│   │   └── template.yaml
├── samconfig.toml
└── template.yaml
The requirements.txt in the apps/account/ folder lists the following Python packages: boto3, marshmallow, and dynamorm.
sam build and sam deploy work fine, and the Lambda functions are deployed correctly. However, I receive an error when calling the Lambda function; the logs show: Unable to import module 'endpoints': No module named 'dynamorm'.
Here are excerpts from my code:
endpoints.py
import json
import boto3
from models import Account

print('Loading function')

def account_info(event, context):
    apiKey = event["requestContext"]["identity"]["apiKeyId"]
    account_info = Account.get(id=apiKey)
    return {
        "statusCode": 200,
        "body": json.dumps(account_info)
    }
models.py
import datetime
from dynamorm import DynaModel, GlobalIndex, ProjectAll
from marshmallow import Schema, fields, validate, validates, ValidationError

class Account(DynaModel):
    # Define our DynamoDB properties
    class Table:
        name = 'XXXXXXXXXX'
        hash_key = 'id'
        read = 10
        write = 5

    class Schema:
        id = fields.String(required=True)
        name = fields.String()
        email = fields.String()
        phonenumber = fields.String()
        status = fields.String()
I am not sure what I am missing. Are there additional instructions for building a nested app in SAM?
Thank you so much for the help!
According to https://github.com/awslabs/aws-sam-cli/issues/1213, this feature is not supported yet.
In my case, I ran sam build on every nested stack and fixed the parent YAML template as follows (using the template.yaml generated by the sam build command); then it works. It's just a workaround, though, not a nice way.
XXX_APP:
  Type: AWS::Serverless::Application
  Properties:
    Location: nest_application/.aws-sam/build/template.yaml
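In other words, something along these lines (paths hypothetical, mirroring the layout above):
# Build each nested stack first so its requirements (dynamorm etc.) are
# vendored into .aws-sam/build, then build/deploy the parent, whose
# Location points at the nested stack's built template.
(cd apps/account && sam build)
sam build && sam deploy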
