How do I get instanceId dynamically in NixOps when setting a route? - nixos

I'm using NixOps 1.7 and deploying to AWS. I'm trying to set up some route tables for my subnet, and I'm using one of my boxes as a NAT. How can I dynamically get the instanceId of my NAT box? I tried this:
resources.vpcRoutes = {
  private-nat-route = { nodes, resources, ... }: {
    inherit region;
    routeTableId = resources.vpcRouteTables.private-route-table;
    destinationCidrBlock = "0.0.0.0/0";
    instanceId = nodes.natGateway.config.deployment.ec2.instanceId; # <- How do I get this?
  };
};
But I get this error:
attribute 'config' missing, at /root/deploy/network.nix:70:20
error: evaluation of the deployment specification failed
The box is up and I can SSH to it. I can also set the ID manually if I take it from the output of a nixops info command.
Relevant documentation:
https://releases.nixos.org/nixops/nixops-1.7/manual/manual.html#idm140737320214400
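A speculative workaround, assuming NixOps resolves machine references in resource options the same way it resolves the vpcRouteTables reference above (unverified against NixOps 1.7; nodes.<name>.config may simply not be populated when resource expressions are evaluated, which would explain the missing-attribute error), is to pass the machine resource itself instead of dereferencing nodes:

resources.vpcRoutes = {
  private-nat-route = { resources, ... }: {
    inherit region;
    routeTableId = resources.vpcRouteTables.private-route-table;
    destinationCidrBlock = "0.0.0.0/0";
    # assumption: machines can be referenced via resources.machines.<name>
    # and are resolved to their instance ID at deploy time
    instanceId = resources.machines.natGateway;
  };
};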

Related

onepassword_item error on terraform plan: "status 401: Authentication: (Invalid token signature), Each header's KID must match the KID of the public key"

I have a Kubernetes cluster running on Google Cloud. This cluster has an op_connect_server up and running, and I am trying to use Terraform to create the items in some specific vaults.
To be able to run it locally, I am port-forwarding port 8080 to my Kubernetes op_connect_server pod:
kubectl port-forward $(kubectl get pods -A | grep onepassword-connect | grep -v operator | awk '{print $2}') 8080:8080 -n tools
My Kubernetes cluster is private, with a public address attached to it. To run locally, I access its public address; to run on GitLab, I access its private address (my GitLab pipeline machine runs inside the Kubernetes cluster and has access to the private address; this works for other features).
When I run it locally, everything works well. The items are created on vault without any problems, and also during the terraform plan it can connect to the op_connect_server and check the items without any error.
In my Terraform provider configuration for 1Password, I set the token and the op_connect_server address.
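For reference, a minimal sketch of that provider wiring (the variable names are placeholders, not the asker's actual code):

provider "onepassword" {
  url   = var.op_connect_url   # the op_connect_server address, e.g. the port-forwarded endpoint
  token = var.op_connect_token # the Connect API token
}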
When I run it in my pipeline (GitLab), I get the error: status 401: Authentication: (Invalid token signature), Each header's KID must match the KID of the public key.
This error happens during terraform plan, when checking for an onepassword_item. I tried to retrieve the same information using curl and was able to, but for some reason it fails in Terraform.
I already checked/tried:
- Checked all variables (token, op_connect_server address, vault ID); they are the same in both environments (local and GitLab).
- Tried using the same cluster endpoint (the public one) when running locally and from GitLab.
- Deleted the cluster and created/ran everything from the GitLab pipeline. The creation process works (op_connect_server and all items are created, and so on), but when I run it again, it fails with the same error message.
This is my code for creating the items:
resource "onepassword_item" "credentials" {
vault = ""
title = "Redis Database cache"
category = "database"
type = "other"
username = ""
database = "Redis Database"
hostname = module.beta_redis.database_host_access_private
port = module.beta_redis.database_host_access_port
password = module.beta_redis.auth_string
section {
label = "TLS"
field {
label = "tls_cert"
value = module.beta_redis.tls_cert
type = "CONCEALED"
}
field {
label = "tls_transit_encryption_mode"
value = module.beta_redis.tls_transit_encryption_mode
type = "CONCEALED"
}
field {
label = "tls_sha1_fingerprint"
value = module.beta_redis.tls_sha1_fingerprint
type = "CONCEALED"
}
}
My op_connect_server has these settings:
set {
  name  = "connect.credentials_base64"
  value = data.local_file.input.content_base64
  type  = "string"
}
set {
  name  = "connect.serviceType"
  value = "NodePort"
}
set {
  name  = "operator.create"
  value = "true"
}
set {
  name  = "operator.autoRestart"
  value = "true"
}
set {
  name  = "operator.clusterRole.create"
  value = "true"
}
set {
  name  = "operator.roleBinding.create"
  value = "true"
}
set {
  name  = "connect.api.name"
  value = "beta-connect-api"
}
set {
  name  = "operator.token.value"
  value = var.op_token_beta
}
My 1Password version is 1.1.4.
Does anyone have any clue why this could be happening, or how I can debug it?
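One way to narrow this down might be to isolate the plan-time read into a minimal data lookup and run only that from the pipeline; a sketch, with a placeholder vault ID and the item title from the code above:

data "onepassword_item" "check" {
  vault = var.vault_id          # placeholder
  title = "Redis Database cache"
}

If this lookup alone reproduces the 401 from GitLab but not locally, the problem is in how the pipeline reaches the Connect server rather than in the item configuration.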

Unable to get machine type information for machine type n1-standard-2 in zone us-central-c because of insufficient permissions - Google Cloud Dataflow

I am not sure what I am missing, but somehow I am not able to start the job; it fails with insufficient permissions.
Here is the Terraform code I run:
resource "google_dataflow_job" "poc-pubsub-stream" {
project = local.project_id
region = local.region
zone = local.zone
name = "poc-pubsub-to-cloud-storage"
template_gcs_path = "gs://dataflow-templates-us-central1/latest/Cloud_PubSub_to_GCS_Text"
temp_gcs_location = "gs://${module.poc-bucket.bucket.name}/tmp"
enable_streaming_engine = true
on_delete = "cancel"
service_account_email = google_service_account.poc-stream-sa.email
parameters = {
inputTopic = google_pubsub_topic.poc-topic.id
outputDirectory = "gs://${module.poc-bucket.bucket.name}/"
outputFilenamePrefix = "poc-"
outputFilenameSuffix = ".txt"
}
labels = {
pipeline = "poc-stream"
}
depends_on = [
module.poc-bucket,
google_pubsub_topic.poc-topic,
]
}
My SA permissions used in the terraform code: (screenshot omitted)
Any thoughts on what I am missing?
The error describes being unable to get the machine type information because of insufficient permissions. To access machine type information, add the roles/compute.viewer role to your service account; it grants access to machine type information and lets you view other settings.
Refer to this doc for more information about the permissions required to create a Dataflow job.
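In Terraform, granting that role could look roughly like this (a sketch reusing the project local and service account from the question):

resource "google_project_iam_member" "dataflow_sa_compute_viewer" {
  project = local.project_id
  role    = "roles/compute.viewer"
  member  = "serviceAccount:${google_service_account.poc-stream-sa.email}"
}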
It seems my Dataflow job needed the following options. Per the docs they are optional, but in my case they needed to be defined.
...
network    = data.terraform_remote_state.dev.outputs.network.network_name
subnetwork = data.terraform_remote_state.dev.outputs.network.subnets["us-east4/us-east4-dev"].self_link
...

How to create an IBM Cloud VSI using a custom image

I want to use Terraform to create a new virtual server from an existing custom image, just as I would do manually at https://cloud.ibm.com/vpc-ext/compute/images.
I used an example code snippet and only replaced the name of the image (r010-...).
data "ibm_is_image" "centos" {
name = "r010-489ff05b-1494-4a05-8b12-c6f44a958859"
}
# Virtual Server Insance
resource "ibm_is_instance" "vsi1" {
name = "${local.BASENAME}-vsi1"
vpc = ibm_is_vpc.vpc-instance.id
keys = [data.ibm_is_ssh_key.ssh_key_id.id]
zone = local.ZONE
image = data.ibm_is_image.centos.id
profile = "cx2-2x4"
# References to the subnet and security groups
primary_network_interface {
subnet = ibm_is_subnet.subnet1.id
security_groups = [ibm_is_security_group.sg1.id]
}
}
The error message is:
Error: No image found with name r010-489ff05b-1494-4a05-8b12-c6f44a958859
It seems that only public images can be used.
Seems like you're using an id in place of a name here:
data "ibm_is_image" "centos" {
  name = "r010-489ff05b-1494-4a05-8b12-c6f44a958859"
}
Try using the name of the image instead.
Here is an example: https://github.com/IBM-Cloud/isv-vsi-product-deploy-sample/blob/main/image-map.tf
This Terraform file maps image IDs to regions; based on your VSI's region, it fetches the matching image ID.
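For illustration, a name-based lookup would look like this (the name below follows IBM's stock-image naming and is only a placeholder; use your custom image's name from the console):

data "ibm_is_image" "centos" {
  # placeholder image *name*; the console shows names, not the r010-... ID
  name = "ibm-centos-7-9-minimal-amd64-3"
}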
Your custom images are private. Visibility is an attribute you can specify when looking up the image with the ibm_is_image data source.
Thus, I recommend trying:
data "ibm_is_image" "centos" {
name = "r010-489ff05b-1494-4a05-8b12-c6f44a958859"
visibility = "private"
}
I confused name with id. The image name is expected rather than the id. Thanks!

terraform google cloud nat using reserved static ip

We have reserved static (whitelisted) IP addresses that need to be assigned to a CloudNAT on GCP by terraform. The IPs are reserved and registered with a service provider, which takes weeks to get approved and added to their firewalls, so dynamic allocation is not an option.
The main problem for us is that the google_compute_router_nat section requires nat_ip_allocate_option, but in this case the IP address has already been allocated, so it fails with an error stating exactly that. The only allocate options are AUTO_ONLY and MANUAL_ONLY; it seems an EXISTING or RESERVED option might be needed, unless I'm missing something obvious.
Here is the failing configuration:
resource "google_compute_address" "static_ip" {
name = "whitelisted-static-ip"
region = "${var.project_region}"
}
resource "google_compute_router_nat" "cluster-nat" {
name = "cluster-stg-nat"
router = "${google_compute_router.router.name}"
region = "${google_compute_router.router.region}"
nat_ip_allocate_option = "MANUAL_ONLY"
nat_ips = ["${google_compute_address.static_ip.self_link}"]
source_subnetwork_ip_ranges_to_nat = "LIST_OF_SUBNETWORKS"
subnetwork {
name = "${google_compute_subnetwork.service.self_link}"
source_ip_ranges_to_nat = ["ALL_IP_RANGES"]
}
}
Results in the following error:
Error: Error creating Address: googleapi: Error 409: The resource 'projects/staging-cluster/regions/us-central1/addresses/whitelisted-static-ip' already exists, alreadyExists
This happens because the static IP resource is already reserved under GCP External IP Addresses and registered with the service provider.
Changing the google_compute_address resource to a data object was the magic. I modified it to be:
data "google_compute_address" "static_ip" {
name = "whitelisted-static-ip"
region = "${var.project_region}"
}
Where the name of "whitelisted-static-ip" is what we assigned to the reserved external IP address when we created it. The updated router NAT resource then became:
resource "google_compute_router_nat" "cluster-nat" {
name = "${var.cluster_name}-nat"
router = "${google_compute_router.router.name}"
region = "${google_compute_router.router.region}"
nat_ip_allocate_option = "MANUAL_ONLY"
nat_ips = ["${data.google_compute_address.static_ip.self_link}"]
source_subnetwork_ip_ranges_to_nat = "LIST_OF_SUBNETWORKS"
subnetwork {
name = "${google_compute_subnetwork.service.self_link}"
source_ip_ranges_to_nat = ["PRIMARY_IP_RANGE"]
}
}
The only modification is to the nat_ips field, pointing it at the data object. A simple two-word change and we're good to go. Excellent!
It looks like the problem is with the google_compute_address resource, not the NAT: you are trying to create a resource that already exists. Instead, you should do one of the following:
- If you want Terraform to manage this resource for you, import it into Terraform; see https://www.terraform.io/docs/import/ and https://www.terraform.io/docs/providers/google/r/compute_address.html#import
- If you do not want Terraform to manage the IP address, use a data object instead of a resource object. This is essentially a read-only resource lookup that lets you reference the address in Terraform while managing it somewhere else. See https://www.terraform.io/docs/configuration/data-sources.html and https://www.terraform.io/docs/providers/google/d/datasource_compute_address.html
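If you take the import route, the command would look roughly like this, with the address path taken from the 409 error above (verify the project and region against your setup):

terraform import google_compute_address.static_ip projects/staging-cluster/regions/us-central1/addresses/whitelisted-static-ip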

Using terraform to create an aggregate ethernet interface on paloalto?

I've been trying to get Terraform to create a new AE interface, with no luck.
My .tf files are very basic, working against a factory-reset PA-3020 that only has the user, password, and IP preconfigured.
It's connecting correctly, as I've been able to create/modify other values such as a management profile.
Has anyone successfully been able to create an aggregate group in Palo Alto using Terraform? If so, how was that done?
provider "panos" {
hostname = "${var.pa-mgt-ip}"
username = "${var.pa-username}"
password = "${var.pa-password}"
}
resource "panos_ethernet_interface" "ae_int1" {
name = "ae1"
vsys = "vsys1"
mode = "layer3"
comment = "AE interface from TF"
}
resource "panos_ethernet_interface" "phy_int1" {
name = "ethernet1/3"
vsys = "vsys1"
mode = "aggregate-group"
aggregate_group = "${panos_ethernet_interface.ae_int1.name}"
comment = "AE1 physical interface from TF"
}
resource "panos_ethernet_interface" "phy_int2" {
name = "ethernet1/4"
vsys = "vsys1"
mode = "aggregate-group"
aggregate_group = "${panos_ethernet_interface.ae_int1.name}"
comment = "AE1 physical interface from TF"
}
The error is: ae1 'ae1' is not a valid reference, and the interface is not getting created. If I manually create the ae1 interface in the UI and set the group to ae1 for the physical interfaces in the TF file, they fail with the error aggregate-group is invalid.
Does panos not currently support creating AE interfaces? I couldn't find any issues on GitHub related to creating interfaces.
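One avenue worth checking, offered as an assumption rather than a confirmed fix: newer releases of the panos provider document a dedicated panos_aggregate_interface resource for AE interfaces, which would replace the panos_ethernet_interface block for ae1. A sketch (verify the resource name and arguments against the provider docs for your version):

resource "panos_aggregate_interface" "ae_int1" {
  name    = "ae1"
  vsys    = "vsys1"
  mode    = "layer3"
  comment = "AE interface from TF"
}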
