When I run docker.io/jboss/drools-workbench-showcase locally, it works fine at localhost:8080/business-central, per the instructions.
When I try to run it in an Azure Container Instance using the following Terraform configuration, it times out.
resource "azurerm_resource_group" "drools-jrg" {
name = "drools-jrg"
location = "eastus"
}
resource "azurerm_container_group" "drools1-jrg" {
name = "drools1-jrg"
location = "${azurerm_resource_group.drools-jrg.location}"
resource_group_name = "${azurerm_resource_group.drools-jrg.name}"
ip_address_type = "public"
dns_name_label = "mydroolsurlyyy-${azurerm_resource_group.drools-jrg.name}"
os_type = "Linux"
container {
name = "drools-workbench-showcase-jrg-1"
image = "docker.io/jboss/drools-workbench-showcase"
cpu = "0.5"
memory = "1.5"
ports {
port = 8080
protocol = "TCP"
}
ports {
port = 8081
protocol = "TCP"
}
}
When I go to the portal, the container group appears to be created fine, and there are no errors on the CLI either. I also tried this manually in the portal. I'm including the Terraform config because I think it helps.
The only thing I'm wondering is whether it's not allowing HTTP over 8080?
Can anyone explain why Business Central can't be loaded at:
http://mydroolsurlyyy--drools-jrg.eastus.azurecontainer.io:8080/business-central
*DNS name changed
Looking at the Docker image, you need to expose ports 8080 and 8001, not 8081. Additionally, there is nothing wrong with changing the port. One piece of advice: request more CPU and memory, for example 2 for CPU and 4 for memory; then it will work well.
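For reference, the container block from the question with those two changes applied would look roughly like this (a sketch; everything else is kept from the original config):
container {
  name   = "drools-workbench-showcase-jrg-1"
  image  = "docker.io/jboss/drools-workbench-showcase"
  cpu    = "2"
  memory = "4"
  ports {
    port     = 8080
    protocol = "TCP"
  }
  ports {
    port     = 8001
    protocol = "TCP"
  }
}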
Related
I am trying to implement the AKS baseline with Terraform, but I can't get my Application Gateway to connect to the internal load balancer created by AKS.
My AKS config consists of a Solr instance and a service with the azure-load-balancer-internal annotation. AKS and the created LB are in the same subnet, while the Application Gateway has its own subnet, but they are all in the same VNET.
Kubernetes.tf
resource "kubernetes_service" "solr-service" {
metadata {
name = local.solr.name
annotations = {
"service.beta.kubernetes.io/azure-load-balancer-internal" : "true"
"service.beta.kubernetes.io/azure-load-balancer-internal-subnet" : "aks-subnet"
}
}
spec {
external_traffic_policy = "Local"
selector = {
app = kubernetes_deployment.solr.metadata.0.labels.app
}
port {
name = "http"
port = 80
target_port = 8983
}
type = "LoadBalancer"
load_balancer_ip = "192.168.1.200"
}
}
This config creates an internal load balancer in the MC_* resource group with the frontend IP 192.168.1.200. The health check in the metrics blade is returning 100, so it looks like the created internal load balancer is working as expected.
Now I am trying to add this load balancer as a backend_pool target in my Application Gateway.
application-gateway.tf
resource "azurerm_application_gateway" "agw" {
name = local.naming.agw_name
resource_group_name = azurerm_resource_group.this.name
location = azurerm_resource_group.this.location
sku {
name = "Standard_Medium"
tier = "Standard"
capacity = 1
}
gateway_ip_configuration {
name = "Gateway-IP-Config"
subnet_id = azurerm_subnet.agw_snet.id
}
frontend_port {
name = "http-port"
port = 80
}
frontend_ip_configuration {
name = "public-ip"
public_ip_address_id = azurerm_public_ip.agw_ip.id
}
backend_address_pool {
name = "lb"
ip_addresses = ["192.168.1.200"]
}
backend_http_settings {
name = "settings"
cookie_based_affinity = "Disabled"
port = 80
protocol = "Http"
request_timeout = 60
}
http_listener {
name = "http-listener"
frontend_ip_configuration_name = "public-ip"
frontend_port_name = "http-port"
protocol = "Http"
}
request_routing_rule {
name = local.request_routing_rule_name
rule_type = "Basic"
http_listener_name = "http-listener"
backend_address_pool_name = "lb"
backend_http_settings_name = "settings"
}
}
I would expect the Application Gateway to now be connected to the internal load balancer and to send all requests over to it. Instead I get the message that all backend pools are unhealthy, so it looks like the Gateway can't reach the provided IP.
I took a look at the Azure Git baseline, but as far as I can see, they use an FQDN instead of an IP. I am pretty sure it's just some minor configuration issue, but I just can't find it.
I already tried using the Application Gateway as the ingress controller (or HTTP routing) and that worked, but I would like to implement it with an internal load balancer. I also tried adding a health check to the backend node pool, which did not work.
EDIT: I changed the LB to public and added the public IP to the Application Gateway and everything worked, so it looks like this is the issue. But I don't get why the Application Gateway can't access the sibling subnet; I don't have any restrictions in place, and by default Azure allows communication between subnets.
My mistake was placing the internal load balancer into the same subnet as my Kubernetes nodes. When I changed the code and gave it its own subnet, everything worked fine. My final service config:
resource "kubernetes_service" "solr-service" {
metadata {
name = local.solr.name
annotations = {
"service.beta.kubernetes.io/azure-load-balancer-internal" : "true"
"service.beta.kubernetes.io/azure-load-balancer-internal-subnet" : "lb-subnet"
}
}
spec {
external_traffic_policy = "Local"
selector = {
app = kubernetes_deployment.solr.metadata.0.labels.app
}
port {
name = "http"
port = 80
target_port = 8983
}
type = "LoadBalancer"
load_balancer_ip = "192.168.3.200"
}
}
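For completeness, a minimal sketch of the dedicated subnet the annotation now points at (the resource names and address prefix are assumptions; adjust them to your VNET layout):
resource "azurerm_subnet" "lb_subnet" {
  name                 = "lb-subnet"
  resource_group_name  = azurerm_resource_group.this.name
  virtual_network_name = azurerm_virtual_network.this.name
  address_prefixes     = ["192.168.3.0/24"] # contains the load_balancer_ip 192.168.3.200
}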
I am new to Terraform and I am trying to deploy a machine (with a qcow2 image) on KVM automatically via Terraform.
I found this .tf file:
provider "libvirt" {
uri = "qemu:///system"
}
#provider "libvirt" {
# alias = "server2"
# uri = "qemu+ssh://root#192.168.100.10/system"
#}
resource "libvirt_volume" "centos7-qcow2" {
name = "centos7.qcow2"
pool = "default"
source = "https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2"
#source = "./CentOS-7-x86_64-GenericCloud.qcow2"
format = "qcow2"
}
# Define KVM domain to create
resource "libvirt_domain" "db1" {
name = "db1"
memory = "1024"
vcpu = 1
network_interface {
network_name = "default"
}
disk {
volume_id = "${libvirt_volume.centos7-qcow2.id}"
}
console {
type = "pty"
target_type = "serial"
target_port = "0"
}
graphics {
type = "spice"
listen_type = "address"
autoport = true
}
}
My questions are:
(source) Does the path to my qcow2 file have to be local on my computer?
I have a KVM host that I connect to remotely by its IP. Where should I put this IP in the .tf file?
When I did it manually, I ran virt-manager. Do I need to reference it anywhere here?
Thanks a lot.
No. It can also be an https URL, as in your snippet.
Do you mean the KVM host on which the VMs will be created? Then you need to configure remote KVM access on that host, and in the uri of the provider block you put its IP:
uri = "qemu+ssh://username#IP_OF_HOST/system"
You don't need virt-manager when you use Terraform. You should use Terraform resources for managing the VM.
https://registry.terraform.io/providers/dmacvicar/libvirt/latest/docs
https://github.com/dmacvicar/terraform-provider-libvirt/tree/main/examples/v0.13
I created an ECS cluster, along with a Load Balancer, to expose a basic hello-world Node app on Fargate using Terraform. Terraform manages to create my AWS resources just fine and deploys the correct image on ECS Fargate, but the task never passes the initial health check and restarts indefinitely. I think this is a port-forwarding problem, but I believe my Dockerfile, Load Balancer and Task Definition all expose the correct ports.
Below is the error I see when looking at my service's "events" tab on the ECS dashboard:
service my-first-service (port 2021) is unhealthy in target-group target-group due to (reason Request timed out).
Below is my Application code, the Dockerfile, and the Terraform files I am using to deploy to Fargate:
index.js
const express = require('express')
const app = express()
const port = 2021
app.get('/', (req, res) => res.send('Hello World!'))
app.listen(port, () => console.log(`Example app listening on port ${port}!`))
Dockerfile
# Use an official Node runtime as a parent image
FROM node:12.7.0-alpine
# Set the working directory to /app
WORKDIR '/app'
# Copy package.json to the working directory
COPY package.json .
# Install any needed packages specified in package.json
RUN yarn
# Copying the rest of the code to the working directory
COPY . .
# Make port 2021 available to the world outside this container
EXPOSE 2021
# Run index.js when the container launches
CMD ["node", "index.js"]
application_load_balancer_target_group.tf
resource "aws_lb_target_group" "target_group" {
name = "target-group"
port = 80
protocol = "HTTP"
target_type = "ip"
vpc_id = "${aws_default_vpc.default_vpc.id}" # Referencing the default VPC
health_check {
matcher = "200,301,302"
path = "/"
}
}
resource "aws_lb_listener" "listener" {
load_balancer_arn = "${aws_alb.application_load_balancer.arn}" # Referencing our load balancer
port = "80"
protocol = "HTTP"
default_action {
type = "forward"
target_group_arn = "${aws_lb_target_group.target_group.arn}" # Referencing our tagrte group
}
}
application_load_balancer.tf
resource "aws_alb" "application_load_balancer" {
name = "test-lb-tf" # Naming our load balancer
load_balancer_type = "application"
subnets = [ # Referencing the default subnets
"${aws_default_subnet.default_subnet_a.id}",
"${aws_default_subnet.default_subnet_b.id}",
"${aws_default_subnet.default_subnet_c.id}"
]
# Referencing the security group
security_groups = ["${aws_security_group.load_balancer_security_group.id}"]
}
# Creating a security group for the load balancer:
resource "aws_security_group" "load_balancer_security_group" {
ingress {
from_port = 80 # Allowing traffic in from port 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"] # Allowing traffic in from all sources
}
egress {
from_port = 0 # Allowing any incoming port
to_port = 0 # Allowing any outgoing port
protocol = "-1" # Allowing any outgoing protocol
cidr_blocks = ["0.0.0.0/0"] # Allowing traffic out to all IP addresses
}
}
ecs_cluster.tf
resource "aws_ecs_cluster" "my_cluster" {
name = "my-cluster" # Naming the cluster
}
ecs_service.tf
# Providing a reference to our default VPC (these are needed by the aws_ecs_service at the bottom of this file)
resource "aws_default_vpc" "default_vpc" {
}
# Providing a reference to our default subnets (NOTE: Make sure the availability zones match your zone)
resource "aws_default_subnet" "default_subnet_a" {
availability_zone = "us-east-2a"
}
resource "aws_default_subnet" "default_subnet_b" {
availability_zone = "us-east-2b"
}
resource "aws_default_subnet" "default_subnet_c" {
availability_zone = "us-east-2c"
}
resource "aws_ecs_service" "my_first_service" {
name = "my-first-service" # Naming our first service
cluster = "${aws_ecs_cluster.my_cluster.id}" # Referencing our created Cluster
task_definition = "${aws_ecs_task_definition.my_first_task.arn}" # Referencing the task our service will spin up
launch_type = "FARGATE"
desired_count = 1 # Setting the number of containers we want deployed to 1
# NOTE: The following 'load_balancer' snippet was added here after the creation of the application_load_balancer files.
load_balancer {
target_group_arn = "${aws_lb_target_group.target_group.arn}" # Referencing our target group
container_name = "${aws_ecs_task_definition.my_first_task.family}"
container_port = 2021 # Specifying the container port
}
network_configuration {
subnets = ["${aws_default_subnet.default_subnet_a.id}", "${aws_default_subnet.default_subnet_b.id}", "${aws_default_subnet.default_subnet_c.id}"]
assign_public_ip = true # Providing our containers with public IPs
}
}
resource "aws_security_group" "service_security_group" {
ingress {
from_port = 0
to_port = 0
protocol = "-1"
# Only allowing traffic in from the load balancer security group
security_groups = ["${aws_security_group.load_balancer_security_group.id}"]
}
egress {
from_port = 0 # Allowing any incoming port
to_port = 0 # Allowing any outgoing port
protocol = "-1" # Allowing any outgoing protocol
cidr_blocks = ["0.0.0.0/0"] # Allowing traffic out to all IP addresses
}
}
ecs_task_definition.tf
resource "aws_ecs_task_definition" "my_first_task" {
family = "my-first-task" # Naming our first task
container_definitions = <<DEFINITION
[
{
"name": "my-first-task",
"image": "${var.ECR_IMAGE_URL}",
"essential": true,
"portMappings": [
{
"containerPort": 2021,
"hostPort": 2021
}
],
"memory": 512,
"cpu": 256
}
]
DEFINITION
requires_compatibilities = ["FARGATE"] # Stating that we are using ECS Fargate
network_mode = "awsvpc" # Using awsvpc as our network mode as this is required for Fargate
memory = 512 # Specifying the memory our container requires
cpu = 256 # Specifying the CPU our container requires
execution_role_arn = "${aws_iam_role.ecsTaskExecutionRole.arn}"
}
resource "aws_iam_role" "ecsTaskExecutionRole" {
name = "ecsTaskExecutionRole"
assume_role_policy = "${data.aws_iam_policy_document.assume_role_policy.json}"
}
data "aws_iam_policy_document" "assume_role_policy" {
statement {
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["ecs-tasks.amazonaws.com"]
}
}
}
resource "aws_iam_role_policy_attachment" "ecsTaskExecutionRole_policy" {
role = "${aws_iam_role.ecsTaskExecutionRole.name}"
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}
Where am I going wrong here?
I had a similar issue when I was migrating from k8s to ECS Fargate.
My task could not start; it was a nightmare.
The same image in k8s was working great with the same health checks.
I can see that you are missing a healthCheck in the task_definition; at least that was the issue for me.
Here is my containerDefinition:
container_definitions = jsonencode([{
name = "${var.app_name}-container-${var.environment}"
image = "${var.container_repository}:${var.container_image_version}"
essential = true
environment: concat(
var.custom_env_variables,
[
{
name = "JAVA_TOOL_OPTIONS"
value = "-Xmx${var.container_memory_max_ram}m -XX:MaxRAM=${var.container_memory_max_ram}m -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4"
},
{
name = "SPRING_PROFILES_ACTIVE"
value = var.spring_profile
},
{
name = "APP_NAME"
value = var.spring_app_name
}
]
)
portMappings = [
{
protocol = "tcp"
containerPort = var.container_port
},
{
protocol = "tcp"
containerPort = var.container_actuator_port
}
]
healthCheck = {
retries = 10
command = [ "CMD-SHELL", "curl -f http://localhost:8081/actuator/liveness || exit 1" ]
timeout: 5
interval: 10
startPeriod: var.health_start_period
}
logConfiguration = {
logDriver = "awslogs"
options = {
awslogs-group = aws_cloudwatch_log_group.main.name
awslogs-stream-prefix = "ecs"
awslogs-region = var.aws_region
}
}
mountPoints = [{
sourceVolume = "backend_efs",
containerPath = "/data",
readOnly = false
}]
}])
Here is the healthCheck part:
healthCheck = {
retries = 10
command = [ "CMD-SHELL", "curl -f http://localhost:8081/actuator/liveness || exit 1" ]
timeout: 5
interval: 10
startPeriod: var.health_start_period
}
In order to start, the container needs a way to check whether the task is running OK, and I could only get that via curl. I have one endpoint that tells me whether it is live or not. You need to specify your own; it is just important that it returns 200.
Also, there is no curl command in the image by default; you need to add it in your Dockerfile. That was the next issue where I spent a few hours, as there was no clear error on ECS.
I added this line:
RUN apt-get update && apt-get install -y --no-install-recommends curl
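(The image in the question is node:12.7.0-alpine, which uses apk rather than apt-get, so the equivalent there would presumably be:)
RUN apk add --no-cache curl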
By the look of it, you are creating a new VPC with subnets, but there are no route tables defined and no internet gateway created and attached to the VPC. So your VPC is simply private: it is not accessible from the internet, nor can it access ECR to get your Docker image.
Maybe instead of creating a new VPC called default_vpc, you want to use the existing default VPC. If so, you have to use a data source:
data "aws_vpc" "default_vpc" {
default = true
}
to get subnets:
data "aws_subnet_ids" "default" {
vpc_id = data.aws_vpc.default_vpc.id
}
and modify the rest of the code to reference these data sources, as sketched below.
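For example, the load balancer's subnets argument from the question would then become (a sketch of just the changed reference):
resource "aws_alb" "application_load_balancer" {
  name               = "test-lb-tf"
  load_balancer_type = "application"
  subnets            = data.aws_subnet_ids.default.ids # instead of the aws_default_subnet references
  security_groups    = ["${aws_security_group.load_balancer_security_group.id}"]
}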
Also, for Fargate, you should remove:
"hostPort": 2021
Finally, you forgot to attach the security group to your ECS service. It should be:
network_configuration {
subnets = data.aws_subnet_ids.default.ids
assign_public_ip = true # Providing our containers with public IPs
security_groups = [aws_security_group.service_security_group.id]
}
I am new to Terraform/Azure and I have been working on an automation job for a few days, which has taken a lot of my time as I couldn't find any solutions on the internet.
So, if anyone knows how to pull and deploy a Docker image from an Azure Container Registry using Terraform, please share the details; any assistance will be much appreciated.
A sample code snippet would be of great help.
You can use the Docker provider and the docker_image resource from Terraform, which pulls the image to your local Docker host. There are several authentication options for authenticating to ACR.
provider "docker" {
host = "unix:///var/run/docker.sock"
registry_auth {
address = "<ACR_NAME>.azurecr.io"
username = "<DOCKER_USERNAME>"
password = "<DOCKER_PASSWORD>"
}
}
resource "docker_image" "my_image" {
name = "<ACR_NAME>.azurecr.io/<IMAGE>:<TAG>"
}
output "image_id" {
value = docker_image.my_image.name
}
Then use the image to deploy it with the docker_container resource. I have not used this resource myself, so I cannot give you a tested snippet, but there are examples in the documentation.
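For orientation only, an untested sketch based on the provider documentation might look roughly like this (the container name and port mapping are assumptions):
resource "docker_container" "my_container" {
  name  = "my-container"
  image = docker_image.my_image.name

  ports {
    internal = 80
    external = 8080
  }
}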
One option to pull (and run) an image from ACR with Terraform is to use Container Instances. A simple example for the image reference:
resource "azurerm_container_group" "example" {
name = "my-continst"
location = "location"
resource_group_name = "name"
ip_address_type = "public"
dns_name_label = "aci-uniquename"
os_type = "Linux"
container {
name = "hello-world"
image = "acr-name.azurecr.io/aci-helloworld:v1"
cpu = "0.5"
memory = "1.5"
ports {
port = 80
protocol = "TCP"
}
}
image_registry_credential {
server = "acr-name.azurecr.io"
username = ""
password = ""
}
}
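If the registry has the admin account enabled, the credentials can also be looked up with a data source instead of being pasted in (a sketch; the registry and resource group names are placeholders):
data "azurerm_container_registry" "acr" {
  name                = "acr-name"
  resource_group_name = "name"
}

image_registry_credential {
  server   = data.azurerm_container_registry.acr.login_server
  username = data.azurerm_container_registry.acr.admin_username
  password = data.azurerm_container_registry.acr.admin_password
}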
Not sure what I'm doing wrong. I only have one NAT rule in the Terraform config and I'm not using a NAT pool.
Error:
azurerm_virtual_machine_scale_set.development-eastus-ss: compute.VirtualMachineScaleSetsClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidRequestFormat" Message="Cannot parse the request." Details=[{"code":"InvalidJsonReferenceWrongType","message":"Reference Id /subscriptions/sub-id/resourceGroups/prod-eastus-rg/providers/Microsoft.Network/loadBalancers/development-eastus-lb/inboundNatRules/development-eastus-lb-nat-http is referencing resource of a wrong type. The Id is expected to reference resources of type loadBalancers/inboundNatPools. Path Properties.UpdateGroups[0].NetworkProfile.networkInterfaceConfigurations[0].properties.ipConfigurations[0].properties.loadBalancerInboundNatPools[0]."}]
NAT Rule:
resource "azurerm_lb_nat_rule" "development-eastus-lb-nat-http" {
name = "development-eastus-lb-nat-http"
resource_group_name = "${var.resource_group_name}"
loadbalancer_id = "${azurerm_lb.development-eastus-lb.id}"
protocol = "Tcp"
frontend_port = 80
backend_port = 8080
frontend_ip_configuration_name = "development-eastus-lb-frontend"
It looks like this is an issue with trying to bind a single NAT rule to a scale set. As the error suggests, it is expecting a NAT pool rather than a NAT rule. A NAT pool lets the load balancer and the scale set build a group of rules where the load balancer exposes a different frontend port per underlying VM, all mapping to the same port on the VM.
Think of RDP, where you want to be able to remote onto a specific VM: the pool enables this by giving you a unique port that maps onto that particular VM.
resource "azurerm_lb_nat_pool" "test" {
resource_group_name = "${azurerm_resource_group.test.name}"
loadbalancer_id = "${azurerm_lb.test.id}"
name = "SampleApplicationPool"
protocol = "Tcp"
frontend_port_start = 80
frontend_port_end = 81
backend_port = 8080
frontend_ip_configuration_name = "PublicIPAddress"
}
If, however, you are looking to run a service such as an HTTP website on a different port internally than externally, for example 8080 on the local network and port 80 on the external public network, then I would take a look at the LB rule, as it specifically allows you to set the ports, as shown below.
resource "azurerm_lb_rule" "test" {
resource_group_name = "${azurerm_resource_group.test.name}"
loadbalancer_id = "${azurerm_lb.test.id}"
name = "LBRule"
protocol = "Tcp"
frontend_port = 3389
backend_port = 3389
frontend_ip_configuration_name = "PublicIPAddress"
}
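Adapted to the HTTP scenario described above (port 80 externally, 8080 internally), the same rule would look like this (a sketch using the same placeholder names):
resource "azurerm_lb_rule" "http" {
  resource_group_name            = "${azurerm_resource_group.test.name}"
  loadbalancer_id                = "${azurerm_lb.test.id}"
  name                           = "HTTPRule"
  protocol                       = "Tcp"
  frontend_port                  = 80
  backend_port                   = 8080
  frontend_ip_configuration_name = "PublicIPAddress"
}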
Hopefully this helps.