I'm rewriting a CloudFormation template in Terraform, and there is one CF resource whose Terraform equivalent I don't know.
The CF resource is AWS::SageMaker::Pipeline.
Below is a fragment of the template.yaml:
pipeline:
  Type: AWS::SageMaker::Pipeline
  Properties:
    PipelineName: !Ref pPipelineName
    PipelineDisplayName: !Ref pPipelineName
    PipelineDescription: !Ref pPipelineDescription
    PipelineDefinition:
      PipelineDefinitionBody: !Sub "{\"Version\":\"2020-12-01\",........}}]}"
    RoleArn: !Ref pPipelineRoleArn
    Tags:
      - Key: project
        Value: !Ref pProjectName
Has anyone defined this resource in Terraform?
It has not been officially added to the Terraform aws provider yet. However, HashiCorp recently released a provider for trying out new AWS features early: the AWS Cloud Control provider, awscc. An example of how to use this provider is given in [1]. There are also tutorials on HashiCorp Learn [2]. The resource you are looking for is documented in [3].
[1] https://registry.terraform.io/providers/hashicorp/awscc
[2] https://learn.hashicorp.com/tutorials/terraform/aws-cloud-control
[3] https://registry.terraform.io/providers/hashicorp/awscc/latest/docs/resources/sagemaker_pipeline
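Putting it together, the CloudFormation fragment above maps roughly to the following awscc resource. This is a sketch based on the attribute names documented in [3] — verify them against your provider version; the variable names are assumptions standing in for the CF parameters:

```hcl
resource "awscc_sagemaker_pipeline" "pipeline" {
  pipeline_name         = var.pipeline_name
  pipeline_display_name = var.pipeline_name
  pipeline_description  = var.pipeline_description
  role_arn              = var.pipeline_role_arn

  pipeline_definition = {
    # jsonencode stands in for the !Sub-embedded JSON string from the template
    pipeline_definition_body = jsonencode({
      Version = "2020-12-01"
      # ... pipeline steps elided, as in the original template ...
    })
  }

  tags = [{
    key   = "project"
    value = var.project_name
  }]
}
```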
I have been working on deploying a Terraform package using an Azure DevOps pipeline.
We keep our tf state file locally and have no plans to move it to an Azure storage account. Could you please help with how to define the attribute values in the terraform init step of the pipeline?
- task: TerraformTaskV2@2
  displayName: Terraform init
  inputs:
    provider: 'azurerm'
    command: 'init'
    workingDirectory: 'some directory'
    backendServiceArm: 'some service conn'
    backendAzureRmContainerName: ??
    backendAzureRmResourceGroupName: ??
    backendAzureRmStorageAccountName: ??
    backendAzureRmKey: ??
What should the values for resource group, storage account name, and container name be? If I don't specify them, the pipeline fails with the error below:
##[error]Error: Input required: backendAzureRmStorageAccountName
Any help on this is much appreciated. Thanks in advance.
I'm unsure whether you can use TerraformTaskV2 without a cloud provider's backend. The README for that task doesn't show options for a local backend; for terraform init it only lists the following:
... AzureRM backend configuration
... Amazon Web Services(AWS) backend configuration
... Google Cloud Platform(GCP) backend configuration
I haven't had experience with this yet, but you could look at the Azure Pipelines Terraform Tasks extension, which does explicitly support a local backend:
The Terraform CLI task supports the following terraform backends
local
...
Just a note on working in teams: if you're deploying infrastructure as a team, a local backend can lead to undefined state and undesirable outcomes. A good remote backend will, per the docs, "support locking the state while operations are being performed, which helps prevent conflicts and inconsistencies."
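For completeness, a terraform init step with that extension could look roughly like this. The task name TerraformCLI@0 is an assumption taken from the extension's docs — double-check it against the version you install:

```yaml
# Sketch: init with a local backend via the Azure Pipelines Terraform Tasks extension
- task: TerraformCLI@0
  displayName: Terraform init (local backend)
  inputs:
    command: 'init'
    workingDirectory: 'some directory'
    # no backendServiceArm / backendAzureRm* inputs needed:
    # with no backend block configured, terraform uses the local backend
```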
I'm building a Route53 record using the AWS::Route53::RecordSet resource within a CloudFormation template (sample below).
I looked at some samples and didn't quite understand the HostedZoneId and HostedZoneName parameters I should pass in the template below. Do I need to create some other resource before this? What do these parameters refer to?
Record:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneName: !Ref 'HostedZoneName'
    HostedZoneId: !Ref 'HostedZoneId'
    Comment: DNS name for my instance.
    Name: !Join ['', [!Ref 'Subdomain', ., !Ref 'HostedZoneName']]
    Type: CNAME
    ....
Some parts of my AWS infrastructure like S3 buckets/CloudFront distributions are deployed with Terraform and some other parts like serverless stuff are done with Serverless framework which is producing CloudFormation templates under the hood.
Changes in the Serverless/CloudFormation stacks produce changes in the API Gateway endpoint URLs, and running terraform plan against the S3/CloudFront configuration shows the difference in the CloudFront origin block:
origin {
  - domain_name = "qwerty.execute-api.eu-west-1.amazonaws.com"
  + domain_name = "asdfgh.execute-api.eu-west-1.amazonaws.com"
    origin_id   = "my-origin-id"
    origin_path = "/path"
My idea was to write the value to SSM on CloudFormation/Serverless deploy and read it in Terraform to stay in sync.
Reading from SSM in serverless.yml is pretty straightforward, but I was unable to find a way to update SSM when deploying the CloudFormation stack. Any ideas?
I found the serverless-ssm-publish plugin, which does the job of writing/updating SSM.
You just need to add this to serverless.yml:
plugins:
  - serverless-ssm-publish

custom:
  ssmPublish:
    enabled: true
    params:
      - path: /qa/service_name/apigateway_endpoint_url
        source: ServiceEndpoint
        description: API Gateway endpoint url
        secure: false
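On the Terraform side, the published parameter can then be read with the aws_ssm_parameter data source. A sketch — the regex for stripping the URL down to the host CloudFront expects is my assumption about the shape of ServiceEndpoint:

```hcl
# Read the endpoint URL published by serverless-ssm-publish
data "aws_ssm_parameter" "apigw_endpoint" {
  name = "/qa/service_name/apigateway_endpoint_url"
}

locals {
  # ServiceEndpoint is a full URL such as
  # https://asdfgh.execute-api.eu-west-1.amazonaws.com/path,
  # while CloudFront's origin domain_name wants only the host part
  apigw_host = regex("https://([^/]+)", data.aws_ssm_parameter.apigw_endpoint.value)[0]
}

# then, inside the aws_cloudfront_distribution origin block:
#   domain_name = local.apigw_host
```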
How can I find all VMs in my Azure subscription that have no tags?
I tried the Cloud Custodian policy below, but it doesn't seem to work:
policies:
  - name: az-vm-tag-complience
    resource: azure.vm
    filters:
      tag: none
This is not perfect but close:
policies:
  - name: find-vms-without-any-tags
    resource: azure.vm
    filters:
      - type: value
        key: "tags"
        value: absent
The caveat: a VM that once had tags which were later removed may still have an empty but present tags element, so this filter will not catch it.
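If the empty-but-present case matters, the value filter's special `empty` value may help — per my reading of the Cloud Custodian docs it matches null and empty containers, but verify this against your c7n version:

```yaml
policies:
  - name: find-vms-without-any-tags
    resource: azure.vm
    filters:
      - type: value
        key: "tags"
        value: empty   # intended to match both a missing and an empty tags element
```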
I'm working on integrating AKV and AKS, although I'm hitting a number of roadblocks.
At any rate, what I ultimately want to do is automate pulling credentials and API keys from it for local dev clusters too. That way, devs don't have to be bothered with "go here", "do x", etc. They just start up their dev cluster, and the keys and credentials are pulled automatically and can be managed from a central location.
The AKV and AKS integration, if I could get it working, makes sense because it is the same context. The local dev environments will be entirely different minikube clusters, so a different context.
I'm trying to wrap my brain around how to grab the keys in the local dev cluster:
Will the secrets-store.csi.k8s.io in the following be available to use for local dev clusters (as taken from the AKV-AKS integration documentation)?
apiVersion: v1
kind: Pod
metadata:
  name: nginx-secrets-store-inline
  labels:
    aadpodidbinding: azure-pod-identity-binding-selector
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: secrets-store-inline
          mountPath: "/mnt/secrets-store"
          readOnly: true
  volumes:
    - name: secrets-store-inline
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: azure-kvname
Or do I need to do something like the following as it is outlined here?
az keyvault secret show --name "ExamplePassword" --vault-name "<your-unique-keyvault-name>" --query "value"
Will the secrets-store.csi.k8s.io in the following be available to use for local dev clusters (as taken from the AKV-AKS integration documentation)?
No, it will not be available locally.
The secrets-store.csi.k8s.io driver uses a managed identity (MSI) to access Key Vault: essentially, it makes an API call to the Azure instance metadata endpoint to get an access token, then uses that token to fetch the secret automatically. It is only available in an Azure environment that supports MSI.
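To illustrate why: the token acquisition boils down to a call against the link-local IMDS endpoint, which only exists inside Azure. A sketch of the request the driver effectively makes (endpoint and api-version are the documented IMDS values; a local minikube node has no 169.254.169.254 metadata service, so this call can never succeed there):

```python
import urllib.parse

def imds_token_request(resource: str = "https://vault.azure.net") -> tuple[str, dict]:
    """Build the IMDS request used to obtain a managed-identity token.

    Inside Azure, GET-ing this URL with these headers returns a JSON body
    containing an access_token for the given resource; on a local cluster
    the endpoint is unreachable, which is why the CSI driver cannot work there.
    """
    base = "http://169.254.169.254/metadata/identity/oauth2/token"
    query = urllib.parse.urlencode({
        "api-version": "2018-02-01",
        "resource": resource,  # the token audience: Key Vault in this case
    })
    return f"{base}?{query}", {"Metadata": "true"}  # Metadata header is required

url, headers = imds_token_request()
```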
Or do I need to do something like the following as it is outlined here?
Yes, to get a secret from Azure Key Vault locally, your option is to do it manually, for example with the Azure CLI az keyvault secret show command you mentioned.