CloudFront cache displays mobile view for desktop users - Terraform

I have been using CloudFront caching for several weeks now (7-day cache).
Since then, several pages display the mobile version to desktop users, as if the cache were serving a single mobile version to every user.
Here is the Terraform configuration of CloudFront:
resource "aws_cloudfront_cache_policy" "proxy_hubspot_cache_policy" {
name = "custom-caching-policy"
comment = "Our caching policy for the Cloudfront proxy"
default_ttl = 604800 # seven day of cache
max_ttl = 604800
min_ttl = 604800
parameters_in_cache_key_and_forwarded_to_origin {
enable_accept_encoding_brotli = true
enable_accept_encoding_gzip = true
cookies_config {
cookie_behavior = "none"
}
headers_config {
header_behavior = "none"
}
query_strings_config {
query_string_behavior = "all"
}
}
}
resource "aws_cloudfront_origin_request_policy" "proxy_hubspot_request_policy" {
name = "custom-request-policy-proxy"
cookies_config {
cookie_behavior = "all"
}
headers_config {
header_behavior = "allViewer"
}
query_strings_config {
query_string_behavior = "all"
}
}
resource "aws_cloudfront_distribution" "proxy_cdn" {
enabled = true
price_class = "PriceClass_100"
origin {
origin_id = local.workspace["cdn_proxy_origin_id"]
domain_name = local.workspace["cdn_domain_name"]
custom_header {
name = "X-HubSpot-Trust-Forwarded-For"
value = "true"
}
custom_header {
name = "X-HS-Public-Host"
value = local.workspace["destination_url"]
}
custom_origin_config {
origin_protocol_policy = "https-only"
http_port = "80"
https_port = "443"
origin_ssl_protocols = ["TLSv1", "TLSv1.1", "TLSv1.2"]
}
}
default_cache_behavior {
viewer_protocol_policy = "redirect-to-https"
allowed_methods = ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"]
cached_methods = ["GET", "HEAD"]
target_origin_id = local.workspace["cdn_proxy_origin_id"]
cache_policy_id = aws_cloudfront_cache_policy.proxy_hubspot_cache_policy.id
origin_request_policy_id = aws_cloudfront_origin_request_policy.proxy_hubspot_request_policy.id
compress = true
}
logging_config {
include_cookies = true
bucket = data.terraform_remote_state.shared_infra.outputs.cloudfront_logs_s3_bucket_url
prefix = "proxy_${local.workspace["env_type"]}"
}
restrictions {
geo_restriction {
restriction_type = "none"
}
}
viewer_certificate {
acm_certificate_arn = aws_acm_certificate.proxy_certificate.arn
ssl_support_method = "sni-only"
}
aliases = [local.workspace["destination_url"]]
depends_on = [
aws_acm_certificate_validation.proxy_certificate_validation
]
}
resource "aws_cloudfront_monitoring_subscription" "monitor_www_proxy" {
distribution_id = aws_cloudfront_distribution.proxy_cdn.id
monitoring_subscription {
realtime_metrics_subscription_config {
realtime_metrics_subscription_status = "Enabled"
}
}
}
Any idea what could be wrong in the configuration?
Thanks a lot!

I believe the easiest way to get CloudFront to cache mobile pages separately from desktop pages is to configure the CloudFront-Is-Mobile-Viewer and CloudFront-Is-Desktop-Viewer headers as part of the cache key. Note all the device-detection headers that are available there, in case you also want a separate cache for tablet viewers, or iOS and Android caches, etc.
The Terraform config would look like:
resource "aws_cloudfront_cache_policy" "proxy_hubspot_cache_policy" {
name = "custom-caching-policy"
comment = "Our caching policy for the Cloudfront proxy"
default_ttl = 604800 # seven day of cache
max_ttl = 604800
min_ttl = 604800
parameters_in_cache_key_and_forwarded_to_origin {
enable_accept_encoding_brotli = true
enable_accept_encoding_gzip = true
cookies_config {
cookie_behavior = "none"
}
headers_config {
header_behavior = "whitelist"
headers {
items = ["CloudFront-Is-Mobile-Viewer", "CloudFront-Is-Desktop-Viewer"]
}
}
query_strings_config {
query_string_behavior = "all"
}
}
}
Note that these headers will also be forwarded to your backend origin once you implement this configuration, so you could change the logic of your application to render mobile vs. desktop based on the value of those headers instead of inspecting the User-Agent header.
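If you also need separate caches for tablet viewers or for iOS and Android specifically, as mentioned above, the whitelist could be extended with the other CloudFront device-detection headers. A sketch (pick only the headers you actually need, since every extra header multiplies the number of cache variants and lowers the hit ratio):
headers_config {
  header_behavior = "whitelist"
  headers {
    # Device-detection headers added by CloudFront when included in the policy
    items = [
      "CloudFront-Is-Mobile-Viewer",
      "CloudFront-Is-Desktop-Viewer",
      "CloudFront-Is-Tablet-Viewer",
      "CloudFront-Is-Android-Viewer",
      "CloudFront-Is-IOS-Viewer",
    ]
  }
}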

Related

Error when creating Kinesis Delivery Streams with OpenSearch

I created an OpenSearch domain using Terraform with the OpenSearch_2.3 engine. I also managed to create Kinesis data streams without any issues, but when I want to add a delivery stream I need to configure elasticsearch_configuration for it, as I want to send data to OpenSearch. However, I get an error and I am not sure what I am doing wrong: is something wrong with the aws_opensearch_domain resource, or is it Kinesis related?
resource "aws_opensearch_domain" "domain" {
domain_name = "test"
engine_version = "OpenSearch_2.3"
cluster_config {
instance_type = "r4.large.search"
}
tags = {
Domain = "TestDomain"
}
}
resource "aws_kinesis_stream" "stream" {
name = "terraform-kinesis-test"
shard_count = 1
retention_period = 48
stream_mode_details {
stream_mode = "PROVISIONED"
}
tags = {
Environment = "test"
}
}
resource "aws_kinesis_firehose_delivery_stream" "delivery_stream" {
name = "terraform-kinesis-firehose-delivery-stream"
destination = "elasticsearch"
s3_configuration {
role_arn = aws_iam_role.firehose_role.arn
bucket_arn = aws_s3_bucket.bucket.arn
buffer_size = 10
buffer_interval = 400
compression_format = "GZIP"
}
elasticsearch_configuration {
domain_arn = aws_opensearch_domain.domain.arn
role_arn = aws_iam_role.firehose_role.arn
index_name = "test"
type_name = "test"
processing_configuration {
enabled = "true"
processors {
type = "Lambda"
parameters {
parameter_name = "LambdaArn"
parameter_value = "${aws_lambda_function.lambda_processor.arn}:$LATEST"
}
}
}
}
}
Error: elasticsearch domain `my-domain-arn` has an unsupported version: OpenSearch_2.3
How is it not supported? (See Supported Versions.)
I am new to Kinesis and OpenSearch, pardon my lack of understanding.
A few weeks ago I had a similar problem, as I also thought 2.3 was supported. However, Kinesis Firehose does not actually support OpenSearch_2.3 (yet). I downgraded to OpenSearch_1.3 and it worked as expected. You can find more information in the upgrade guide:
Supported Upgrade Paths
resource "aws_opensearch_domain" "domain" {
domain_name = "test"
engine_version = "OpenSearch_1.3"
cluster_config {
instance_type = "r4.large.search"
}
tags = {
Domain = "TestDomain"
}
}
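As an aside, the question's Firehose config references aws_iam_role.firehose_role and aws_s3_bucket.bucket without showing them. A minimal sketch of what they could look like (resource names and the bucket name are assumptions, not the asker's actual code):
resource "aws_s3_bucket" "bucket" {
  bucket = "terraform-kinesis-firehose-backup-bucket" # hypothetical bucket name
}
resource "aws_iam_role" "firehose_role" {
  name = "firehose-delivery-role" # hypothetical role name
  # Allow Kinesis Data Firehose to assume this role
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "firehose.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}
The role would additionally need policies granting access to the S3 bucket and the OpenSearch domain.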

Terraform Azure CDN Custom Domain Certificate not supported for this profile

I am trying to enable HTTPS for a CDN endpoint custom domain. When trying to apply the code, I get the following error:
CertificateType value provided is not supported for this profile for enabling https.
The custom domain code:
resource "azurerm_cdn_endpoint_custom_domain" "endpointfrontend" {
name = "mykappdev"
cdn_endpoint_id = azurerm_cdn_endpoint.cdnendpoint.id
host_name = "${azurerm_dns_cname_record.cnamefrontend.name}.${data.azurerm_dns_zone.dnszone.name}"
cdn_managed_https {
certificate_type = "Dedicated"
protocol_type = "ServerNameIndication"
}
}
The rest of the cdn code:
resource "azurerm_cdn_profile" "cdnprofile" {
name = "mycdn${var.environment}"
location = data.azurerm_resource_group.rg.location
resource_group_name = data.azurerm_resource_group.rg.name
sku = "Standard_Microsoft"
}
resource "azurerm_cdn_endpoint" "cdnendpoint" {
name = "${var.environment}-example"
profile_name = azurerm_cdn_profile.cdnprofile.name
location = azurerm_cdn_profile.cdnprofile.location
resource_group_name = data.azurerm_resource_group.rg.name
is_https_allowed = true
origin {
name = "${var.environment}-origin"
host_name = azurerm_storage_account.frontend.primary_web_host
}
depends_on = [
azurerm_cdn_profile.cdnprofile
]
}
data "azurerm_dns_zone" "dnszone" {
name = "my.app"
resource_group_name = "rg-my"
}
Everything works fine when doing it via the UI, so the problem has to be in the code.
Edit the azurerm_cdn_endpoint block:
resource "azurerm_cdn_endpoint" "cdnendpoint" {
name = "${var.environment}-example"
profile_name = azurerm_cdn_profile.cdnprofile.name
location = azurerm_cdn_profile.cdnprofile.location
resource_group_name = data.azurerm_resource_group.rg.name
is_https_allowed = true
origin {
name = "${var.environment}-origin"
host_name = azurerm_storage_account.frontend.primary_web_host
}
### Code added
delivery_rule {
name = "EnforceHTTPS"
order = "1"
request_scheme_condition {
operator = "Equal"
match_values = ["HTTP"]
}
url_redirect_action {
redirect_type = "Found"
protocol = "Https"
}
}
### End code added
depends_on = [
azurerm_cdn_profile.cdnprofile
]
}
Also, you can check this blog post https://www.emilygorcenski.com/post/migrating-a-static-site-to-azure-with-terraform/
Hope this helps!
After enabling custom HTTPS once manually in the Azure portal and then disabling it there, it was possible to change it via Terraform.
I hope this helps!

Does the Terraform resource kubernetes_ingress_v1 have a "use_annotation" equivalent?

We're currently migrating our Terraform kubernetes_ingress resource to a kubernetes_ingress_v1 resource. Previously, we had these annotations on the ingress:
annotations = {
  "kubernetes.io/ingress.class"                    = "alb"
  "alb.ingress.kubernetes.io/scheme"               = "internet-facing"
  "alb.ingress.kubernetes.io/certificate-arn"      = var.create_acm_certificate ? aws_acm_certificate.eks_domain_cert[0].id : var.aws_acm_certificate_arn
  "alb.ingress.kubernetes.io/listen-ports"         = "[{\"HTTP\": 80}, {\"HTTPS\":443}]"
  "alb.ingress.kubernetes.io/actions.ssl-redirect" = "{\"Type\": \"redirect\", \"RedirectConfig\": { \"Protocol\": \"HTTPS\", \"Port\": \"443\", \"StatusCode\": \"HTTP_301\"}}"
  "alb.ingress.kubernetes.io/ssl-policy"           = "ELBSecurityPolicy-TLS-1-2-Ext-2018-06"
  "alb.ingress.kubernetes.io/healthcheck-path"     = "/healthz"
}
along with this segment several times in the spec:
path {
  backend {
    service_name = "ssl-redirect"
    service_port = "use-annotation"
  }
  path = "/*"
}
However, the kubernetes_ingress_v1 requires a format like:
path {
  backend {
    service {
      name = "ssl-redirect"
      port {
        number = <number_value>
      }
    }
  }
  path = "/*"
}
where port is an actual number and not "use-annotation". Is there any way to replicate this "use-annotation" behavior in a kubernetes_ingress_v1 resource? Or, even better, is there a simpler way to handle this ssl-redirect rule in a kubernetes_ingress_v1?
You can achieve that using the following syntax:
backend {
  service {
    name = "ssl-redirect"
    port {
      name = "use-annotation"
    }
  }
}
As you can see, you need to use the name argument instead of port.
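Applied to the path block from the question, a sketch of the full kubernetes_ingress_v1 form would then be (values copied from the question; only the backend syntax changes):
path {
  backend {
    service {
      name = "ssl-redirect"
      port {
        # Reference the annotation-defined action by name instead of a numeric port
        name = "use-annotation"
      }
    }
  }
  path = "/*"
}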

GKE Terraformed Cluster Release Channel Setting

According to the documentation I can set the release channel on a cluster. Yet it doesn't work at all: the setting shows up in the apply summary, but it isn't actually applied to the new cluster in the end. What am I missing? There are no examples given in the documentation, so I'm just having to guess here. In the console the release channel shows as not set, and I can't even set it manually.
I'm trying to set it to RAPID:
release_channel {
  channel = "RAPID"
}
Here's my full TF:
resource "google_container_cluster" "standard-cluster" {
enable_binary_authorization = false
enable_kubernetes_alpha = false
enable_legacy_abac = false
enable_shielded_nodes = false
initial_node_count = 0
location = local.ws_vars["zone"]
logging_service = "logging.googleapis.com/kubernetes"
monitoring_service = "monitoring.googleapis.com/kubernetes"
name = local.ws_vars["cluster-name"]
network = "projects/${local.ws_vars["project-id"]}/global/networks/${local.ws_vars["environment"]}"
project = local.ws_vars["project-id"]
subnetwork = "projects/${local.ws_vars["project-id"]}/regions/us-east4/subnetworks/${local.ws_vars["environment"]}"
release_channel {
channel = local.ws_vars["channel"]
}
ip_allocation_policy {
#cluster_ipv4_cidr_block = local.ws_vars["cidr-block"]
cluster_secondary_range_name = "subnet-pods"
services_secondary_range_name = "subnet-services"
}
addons_config {
horizontal_pod_autoscaling {
disabled = false
}
http_load_balancing {
disabled = false
}
network_policy_config {
disabled = false
}
}
database_encryption {
state = "DECRYPTED"
}
maintenance_policy {
daily_maintenance_window {
start_time = "01:00"
}
}
network_policy {
enabled = true
provider = "CALICO"
}
node_pool {
initial_node_count = 1
name = "scoped-two-cpu-high-mem-preemptible"
node_locations = [
local.ws_vars["zone"],
]
autoscaling {
max_node_count = 30
min_node_count = 0
}
management {
auto_repair = true
auto_upgrade = true
}
node_config {
disk_size_gb = 100
disk_type = "pd-standard"
guest_accelerator = []
image_type = "COS"
labels = {}
local_ssd_count = 0
machine_type = "n1-highmem-4"
metadata = {
"disable-legacy-endpoints" = "true"
workload_metadata_config = "GKE_METADATA_SERVER"
}
oauth_scopes = [
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
"https://www.googleapis.com/auth/ndev.clouddns.readwrite",
"https://www.googleapis.com/auth/service.management.readonly",
"https://www.googleapis.com/auth/servicecontrol",
"https://www.googleapis.com/auth/trace.append",
]
preemptible = true
service_account = "default"
tags = []
taint = []
shielded_instance_config {
enable_integrity_monitoring = true
enable_secure_boot = false
}
}
upgrade_settings {
max_surge = 1
max_unavailable = 0
}
}
private_cluster_config {
enable_private_endpoint = false
enable_private_nodes = true
master_ipv4_cidr_block = "172.16.0.0/28"
}
vertical_pod_autoscaling {
enabled = true
}
workload_identity_config {
identity_namespace = "${local.ws_vars["project-id"]}.svc.id.goog"
}
}
I think the key is in the error message the GUI is giving you. Setting the release channel to "RAPID" today would mean jumping to GKE 1.20, which is two major versions newer than your cluster, and that seems to be unsupported.
What happens if you set it to "STABLE"? That is still 1.18 and shouldn't fail to set up.
The answer was in two parts:
1. The state file had a version set that used to be supported but no longer is. The cluster kept being set to that old version, so the RAPID setting couldn't take effect.
2. GKE requires a minimum version that matches one of the versions supported by the chosen channel in order for the channel to be set correctly. This partly defeats the purpose of Terraform and infrastructure as code, since you eventually have to change that version in the Terraform just to apply the channel, which means the config is always drifting. This is an obvious flaw in GKE; ideally it should just set the version to whatever the channel supports.
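For illustration, pinning the cluster version alongside the channel would look something like the sketch below; the version string is hypothetical and has to be one that the chosen channel actually offers at apply time:
resource "google_container_cluster" "standard-cluster" {
  # ... rest of the configuration as above ...

  # Must be a version currently available in the RAPID channel
  min_master_version = "1.20.8-gke.900" # hypothetical version
  release_channel {
    channel = "RAPID"
  }
}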

How to implement for_each in Terraform AVI GSLB to create and attach multiple pools to the GSLB?

I have the configuration below, which creates an avi_gslbservice with a single pool created and attached to it. I would like to create and attach a second pool. Can anyone please guide me?
I am new to Terraform. I saw a few tutorials on the for_each function, but I am not able to figure out how to apply it to my need.
I have marked (with a comment) the block which creates the GSLB pool:
resource "avi_gslbservice" "avi_gslbservice" {
name = "helloworldssl-gslb"
tenant_ref = data.avi_tenant.avi_tenant.id
domain_names = ["xxxxxxxxx"]
health_monitor_refs = [avi_healthmonitor.avi_healthmonitor_gslb.id]
enabled = true
pool_algorithm = "GSLB_SERVICE_ALGORITHM_GEO"
ttl = "30"
created_by = "xxxxxx"
description = "xxxxxx"
down_response {
type = "GSLB_SERVICE_DOWN_RESPONSE_ALL_RECORDS"
}
**groups {
priority = 10
members {
ip {
type = "V4"
addr = ""
}
fqdn = "xxxxxxxxxxxxxx"
vs_uuid = ""
cluster_uuid = ""
ratio = 1
enabled = true
}
name = "helloworldssl-gslb-pool1"
algorithm = "GSLB_ALGORITHM_TOPOLOGY"
}**
}
Edit, Aug 8th 2021: for now I have a workaround of duplicating the whole groups block twice.
Here is how you do it:
dynamic "groups" {
for_each = var.avi_gslbservice_groups
content {
dynamic "members" {
for_each = groups.value.avi_gslbservice_groups_ip
content {
ip {
type = "V4"
addr = ""
}
fqdn = members.value["host"]
vs_uuid = ""
cluster_uuid = ""
ratio = 1
enabled = members.value["enabled"]
}
}
name = groups.value["name"]
priority = groups.value["priority"]
algorithm = groups.value["algorithm"]
}
}
The values come from a JSON file like the one below:
{
  "avi_gslbservice_groups": [
    {
      "name": "us-east-1",
      "priority": 7,
      "algorithm": "GSLB_ALGORITHM_ROUND_ROBIN",
      "avi_gslbservice_groups_ip": [
        {
          "host": "host1",
          "enabled": "true"
        },
        {
          "host": "host2",
          "enabled": "false"
        }
      ]
    },
    {
      "name": "us-east-2",
      "priority": 10,
      "algorithm": "GSLB_ALGORITHM_TOPOLOGY",
      "avi_gslbservice_groups_ip": [
        {
          "host": "host1",
          "enabled": "true"
        },
        {
          "host": "host2",
          "enabled": "false"
        }
      ]
    }
  ]
}
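The answer does not show the declaration of var.avi_gslbservice_groups; a minimal sketch of how it could be wired up (the variable type and the file name are assumptions):
variable "avi_gslbservice_groups" {
  # Shape matching the JSON document above
  type = list(object({
    name      = string
    priority  = number
    algorithm = string
    avi_gslbservice_groups_ip = list(object({
      host    = string
      enabled = string
    }))
  }))
}
If the JSON above is saved as, for example, groups.auto.tfvars.json (hypothetical file name), Terraform loads it automatically and its top-level avi_gslbservice_groups key populates the variable; alternatively, pass the file explicitly with -var-file.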
