How to create custom checks in tfsec - terraform

I have the following policy that I wish to implement in my IaC code scan using tfsec:
Custom Check: GCP Firewall rule allows all traffic on Telnet port (23)
Below is my custom check in JSON format:
{
  "checks": [
    {
      "code": "CUS003",
      "description": "Custom Check: GCP Firewall rule allows all traffic on Telnet port (23)",
      "requiredTypes": [
        "resource"
      ],
      "requiredLabels": [
        "google_compute_firewall"
      ],
      "severity": "WARNING",
      "matchSpec": {
        "name": "CUS003_matchSpec_name",
        "action": "and",
        "predicateMatchSpec": [
          {
            "name": "source_ranges",
            "action": "contains",
            "value": "0.0.0.0/0"
          },
          {
            "name": "ports",
            "action": "contains",
            "value": "23"
          }
        ]
      },
      "errorMessage": "[WARNING] GCP Firewall rule allows all traffic on Telnet port (23)",
      "relatedLinks": [
        "https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_firewall"
      ]
    }
  ]
}
I have tried using "not", "notContains", "equals", and combinations of "subMatch" and/or "predicateMatchSpec", but nothing worked.
To test it, I purposely created firewall rules that should fail the check and others that should pass it. When I get check failures, they are reported for all rules, not just the offending ones; similarly, when I get passes, all rules pass.
Docs that might be useful: tfsec custom checks
Any help is appreciated. Unfortunately "tfsec" isn't a tag, so I'm hoping it's a Terraform issue I'm facing.

Looking at it formatted, I think it's clear that source_ranges is a child of the google_compute_firewall resource, while the ports attribute is a child of the allow block. Your check assumes that ports is a sibling of source_ranges.
I think this check is achievable with the following: it does a predicate check that source_ranges contains 0.0.0.0/0 AND that there is a block called allow with an attribute ports containing 23.
{
  "checks": [
    {
      "code": "CUS003",
      "description": "Custom Check: GCP Firewall rule allows all traffic on Telnet port (23)",
      "requiredTypes": [
        "resource"
      ],
      "requiredLabels": [
        "google_compute_firewall"
      ],
      "severity": "WARNING",
      "matchSpec": {
        "action": "and",
        "predicateMatchSpec": [
          {
            "name": "source_ranges",
            "action": "contains",
            "value": "0.0.0.0/0"
          },
          {
            "name": "allow",
            "action": "isPresent",
            "subMatch": {
              "name": "ports",
              "action": "contains",
              "value": "23",
              "ignoreUndefined": true
            }
          }
        ]
      },
      "errorMessage": "[WARNING] GCP Firewall rule allows all traffic on Telnet port (23)",
      "relatedLinks": [
        "https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_firewall"
      ]
    }
  ]
}
I've tested it against the following body:
resource "google_compute_firewall" "default" {
  name    = "test-firewall"
  network = google_compute_network.default.name

  allow {
    protocol = "tcp"
    ports    = ["23", "8080", "1000-2000"]
  }

  source_ranges = ["0.0.0.0/0"]
  source_tags   = ["web"]
}

resource "google_compute_network" "default" {
  name = "test-network"
}
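For contrast, a rule like the following should pass the check, because the source_ranges predicate ("contains 0.0.0.0/0") fails even though Telnet is open. This is a hypothetical example; the internal CIDR range is an assumption, not from the original question:

```hcl
# Hypothetical compliant rule: Telnet (23) is only reachable from an
# assumed internal range, so the "contains 0.0.0.0/0" predicate does not
# match and CUS003 should not fire for this resource.
resource "google_compute_firewall" "internal_only" {
  name    = "test-firewall-internal"
  network = google_compute_network.default.name

  allow {
    protocol = "tcp"
    ports    = ["23"]
  }

  source_ranges = ["10.0.0.0/8"]
}
```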

Related

BICEP - Parameter file variable assignment

I was following the repo for a separate parameter file per environment, as described in https://github.com/Azure/bicep/discussions/4586.
I tried separate parameter files for dev, stage, and prod, but the value assignment for the main module's parameter is still flagged by IntelliSense, even though the same parameter exists in the respective parameter file.
The other approach I tried is loadJsonContent(), but it does not show auto-completion for items under the subnet block; it stops right after value.
Maybe I am overthinking this and not applying the correct approach. Perhaps I should ignore IntelliSense and try deploying with the parameter file, hoping it will pick the correct values during the deployment parameter check.
Here is my parameter file; the same structure applies to each environment's param JSON.
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "department": {
      "value": "finance"
    },
    "saAccountCount": {
      "value": 1
    },
    "vmCount": {
      "value": 1
    },
    "locationIndex": { // index 1 = westus2, 2 = westus, 3 = eastus, 4 = centralus, 5 = westus3
      "value": 1
    },
    "appRoleIndex": { // index 1 = app server, 2 = AD, 3 = tool server, 4 = dhcp server
      "value": 1
    },
    "appRole": {
      "value": {
        "Applicatoin Server": "ap",
        "Active Directory": "dc",
        "Tool server": "tool",
        "DHCP server": "dhcp"
      }
    },
    "environment": {
      "value": "dev"
    },
    "addressPrefixes": {
      "value": [
        "172.16.0.0/20"
      ]
    },
    "dnsServers": {
      "value": [
        "1.1.1.1",
        "4.4.4.4"
      ]
    },
    "locationList": {
      "value": {
        "westus2": "azw2",
        "westus": "azw",
        "Eastus": "aze",
        "CentralUS": "azc",
        "westus3": "azw3"
      }
    },
    "subnets": {
      "value": [
        {
          "name": "frontend",
          "subnetPrefix": "172.16.2.0/24",
          "delegation": "Microsoft.Web/serverfarms",
          "privateEndpointNetworkPolicies": "disabled",
          "serviceEndpoints": [
            {
              "service": "Microsoft.KeyVault",
              "locations": [ "*" ]
            },
            {
              "service": "Microsoft.Web",
              "locations": [ "*" ]
            }
          ]
        },
        {
          "name": "backend",
          "subnetPrefix": "172.16.3.0/24",
          "delegation": "Microsoft.Web/serverfarms",
          "privateEndpointNetworkPolicies": "enabled",
          "serviceEndpoints": [
            {
              "service": "Microsoft.KeyVault",
              "locations": [ "*" ]
            },
            {
              "service": "Microsoft.Web",
              "locations": [ "*" ]
            },
            {
              "service": "Microsoft.AzureCosmosDB",
              "locations": [ "*" ]
            }
          ]
        }
      ]
    }
  }
}
You appear to be attempting to deploy an Azure Resource Manager (ARM) template using a parameter file.
The parameter file passes values to the ARM template during deployment. It must use the same types as the template, and it can only include values for parameters the template declares; you will receive an error if the parameter file contains extra parameters that do not exist in the template.
In the same deployment, you can use both inline parameters and a local parameter file. If you specify a parameter's value both in the file and inline, the inline value takes priority.
Refer to: create a parameter file for an ARM template.
Regarding the separate parameter files for dev, stage, and prod: it's likely the parameter file is not correctly linked to the template. You can deploy the template with the parameter file to determine whether it picks the proper values during the deployment parameter check.
Regarding loadJsonContent(), it is possible the loaded JSON is not properly formatted; double-check its format.
As a workaround, I created a sample parameter.json file for a web app deployed to a production environment, and that worked for me.
Note: alternatively, you can use az deployment group create with a parameters file and deploy into Azure to avoid these conflicts.
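One thing worth checking for the IntelliSense issue is how main.bicep declares the parameters. A minimal sketch is below; the parameter names are taken from the parameter file above, while subnetType is a hypothetical user-defined type (it requires a Bicep version with user-defined type support). Plain array/object types compile fine but give the editor nothing to complete on for nested members:

```bicep
// Sketch: parameter declarations in main.bicep matching the parameter file.
// subnetType is an assumption for illustration, not from the question.
type subnetType = {
  name: string
  subnetPrefix: string
  delegation: string
  privateEndpointNetworkPolicies: string
  serviceEndpoints: array
}

param department string
param environment string
param addressPrefixes array
param dnsServers array
param subnets subnetType[]
```

With a typed subnets parameter, the editor can offer completion for fields like subnetPrefix inside each array item, which a bare `param subnets array` cannot provide.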

Azure Policy field type

I want to create a policy that audits/denies PostgreSQL databases which do not have firewall rules configured. The policy below is working, but the compliance state shows every single firewall rule in it...
{
  "mode": "All",
  "policyRule": {
    "if": {
      "allOf": [
        {
          "field": "type",
          "equals": "Microsoft.DBforPostgreSQL/servers/firewallRules"
        },
        {
          "field": "Microsoft.DBforPostgreSQL/servers/firewallRules/startIpAddress",
          "exists": "false"
        },
        {
          "field": "Microsoft.DBforPostgreSQL/servers/firewallRules/endIpAddress",
          "exists": "false"
        }
      ]
    },
    "then": {
      "effect": "[parameters('effect')]"
    }
  },
  "parameters": {
    "effect": {
      "type": "String",
      "metadata": {
        "displayName": "Effect",
        "description": "The effect determines what happens when the policy rule is evaluated to match"
      },
      "allowedValues": [
        "Audit",
        "Deny",
        "Disabled"
      ],
      "defaultValue": "Audit"
    }
  }
}
As soon as I change Microsoft.DBforPostgreSQL/servers/firewallRules to Microsoft.DBforPostgreSQL/servers, the policy cannot be created and fails with this error:
The policy definition '6bab4b2f-30b3-4f07-a92e-496b6309d14d' targets multiple resource types, but the policy rule is authored in a way that makes the policy not applicable to the target resource types 'Microsoft.DBforPostgreSQL/servers,Microsoft.DBforPostgreSQL/servers/firewallRules'. This is because the policy rule has a condition that can never be satisfied by the target resource types. If an alias is used, please make sure that the alias gets evaluated against only the resource type it belongs to by adding a type condition before it, or split the policy into multiple ones to avoid targeting multiple resource types.
Does anyone have an idea how to fix that?
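Following the error's own guidance (target a single resource type, or put a type condition before each alias), one way to express "audit servers that have no firewall rules" is an auditIfNotExists rule on the parent type with an existenceCondition on the child. This is a hedged sketch, not a tested definition:

```json
{
  "mode": "All",
  "policyRule": {
    "if": {
      "field": "type",
      "equals": "Microsoft.DBforPostgreSQL/servers"
    },
    "then": {
      "effect": "auditIfNotExists",
      "details": {
        "type": "Microsoft.DBforPostgreSQL/servers/firewallRules",
        "existenceCondition": {
          "allOf": [
            {
              "field": "Microsoft.DBforPostgreSQL/servers/firewallRules/startIpAddress",
              "exists": "true"
            },
            {
              "field": "Microsoft.DBforPostgreSQL/servers/firewallRules/endIpAddress",
              "exists": "true"
            }
          ]
        }
      }
    }
  }
}
```

This way the server itself is the resource reported as compliant or non-compliant, rather than every individual firewall rule, and the child aliases are only evaluated against the child type.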

Azure policy modify effect

I have an Azure custom policy that checks all storage accounts; if a storage account does not have a VNet and subnet set up as its selected network, the policy modifies it to have VNet integration according to the parameters I entered. The parameter I entered is an array of subnet info, as follows:
"allowedNetworks": {
"type": "array",
"metadata": {
"description": "The list of allowed virtual networks",
"displayName": "Allowed Networks"
},
"defaultValue": [
{
"id": "/subscriptions/xxx/resourceGroups/test3/providers/Microsoft.Network/virtualNetworks/rogertest3-vnet/subnets/default",
"action": "Allow",
"state": "Succeeded"
},
{
"id": "/subscriptions/xxx/resourceGroups/test3/providers/Microsoft.Network/virtualNetworks/rogertest3-vnet/subnets/AzureBastionSubnet",
"action": "Allow",
"state": "Succeeded"
}
]
}
and the effect is as follows:
"then": {
"effect": "[parameters('effect')]",
"details": {
"roleDefinitionIds": [
"/providers/microsoft.authorization/roleDefinitions/17d1049b-9a84-46fb-8f53-869881c3d3ab"
],
"conflictEffect": "audit",
"operations": [
{
"operation": "addOrReplace",
"field": "Microsoft.Storage/storageAccounts/networkAcls.virtualNetworkRules",
"value": "[parameters('allowednetworks')]"
},
{
"operation": "addOrReplace",
"field": "Microsoft.Storage/storageAccounts/networkAcls.defaultAction",
"value": "Deny"
}
]
}
}
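For context, the if-condition of such a policy (not shown in the question) typically gates on the storage account type and its network default action. A hypothetical sketch, assuming the same aliases used in the operations above:

```json
"if": {
  "allOf": [
    {
      "field": "type",
      "equals": "Microsoft.Storage/storageAccounts"
    },
    {
      "field": "Microsoft.Storage/storageAccounts/networkAcls.defaultAction",
      "notEquals": "Deny"
    }
  ]
}
```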
It works well; however, there are some behaviours around this modify effect I'm a bit confused about.
If I create a new storage account that falls under the scope of this policy, I notice it automatically adds the VNet integration, even if I select "all networks" at creation time.
If I try to manually change any storage account to "all networks", the UI quickly reverts to VNet integration, so the change does nothing, and no error message is shown. Doing it with PowerShell gives the same result.
This is a bit contradictory to my understanding of the modify effect: I thought modify is not mandatory and would only apply to storage accounts if you run a remediation.
Actually, it is by design; I just found out.
The modify effect gives a desired-state-configuration behaviour: when you create or update a resource, Policy evaluates it, and if the resource matches the rule, the modify operations are applied.

Key Vault ipRules property as parameter issue

I am trying to add firewall rules for Azure Key Vault using ARM templates. It works as expected if the ipRules property with multiple IPs is defined directly in the template (not as a parameter).
However, if I try to define it as a parameter, I get "Bad JSON content found in the request."
Property defined in Template ("apiVersion": "2019-09-01"):
"kv-ipRules": {
"type": "array",
"metadata": {
"description": "The address space (in CIDR notation) to use for the Azure Key Vault to be deployed as Firewall rules."
}
}
"networkAcls": {
"defaultAction": "Deny",
"bypass": "AzureServices",
"virtualNetworkRules": [
{
"id": "[concat(parameters('kv-virtualNetworks'), '/subnets/','kv-subnet')]",
"ignoreMissingVnetServiceEndpoint": false
}
],
"ipRules": "[parameters('kv-ipRules')]"
}
Property defined in Parameters:
"kv-ipRules": {
"value": [
"xx.xx.xx.xxx",
"yy.yy.yy.yyy"
]
}
Given the documentation (https://learn.microsoft.com/en-us/azure/templates/Microsoft.KeyVault/vaults?tabs=json#IPRule), each entry in ipRules must be an IPRule object with a value property rather than a bare IP string, so I would use:
"kv-ipRules": {
"value": [
{
"value": "xx.xx.xx.xxx"
},
{
"value": "yy.yy.yy.yyy"
}
]
}

How ignore/hide/deactivate my policy non-compliance state and get 100% compliance?

Below is my policy definition, and it is working correctly (the policy is responsible for assigning tags from the resource group to its resources):
{
  "properties": {
    "displayName": "inheritTags",
    "policyType": "Custom",
    "mode": "Indexed",
    "metadata": {
      "createdBy": "3332dc03-2402-46e3-9098-c7350b0bc8dd",
      "createdOn": "2019-11-25T14:49:57.8136557Z",
      "updatedBy": "3332dc03-2402-46e3-9098-c7350b0bc8dd",
      "updatedOn": "2019-11-26T19:43:48.752452Z"
    },
    "parameters": {},
    "policyRule": {
      "if": {
        "allOf": [
          {
            "value": "[resourceGroup().tags]",
            "exists": "true"
          },
          {
            "value": "[resourceGroup().tags]",
            "notEquals": ""
          }
        ]
      },
      "then": {
        "effect": "modify",
        "details": {
          "operations": [
            {
              "operation": "addOrReplace",
              "field": "tags",
              "value": "[resourceGroup().tags]"
            }
          ],
          "roleDefinitionIds": [
            "/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"
          ]
        }
      }
    }
  },
  "id": "/subscriptions/78afced4-1c58-4e66-8242-c042890d34c3/providers/Microsoft.Authorization/policyDefinitions/9f2a0e94-5ada-47b0-8125-42464f93cf37",
  "type": "Microsoft.Authorization/policyDefinitions",
  "name": "9f2a0e94-5ada-47b0-8125-42464f93cf37"
}
Nevertheless, on the Overview tab of the Policy page, the assigned policy definition is reported as Non-compliant:
The reason: different values, from comparing the previous state to the expected state. I know the source of these "issues" is keywords like exists, notEquals, and so on: https://learn.microsoft.com/en-us/azure/governance/policy/how-to/determine-non-compliance#compliance-reasons
How can I ignore those compliance messages and get the resources reported as compliant? The tags are correctly assigned, so what is the problem? Or maybe I have a wrong understanding of Azure policies?
The resources that come up as non-compliant do not have the same tags as the resource group. For some resources, modify won't work because they don't support tags, or don't support updating/adding tags. I can't tell what type of resources they are, so I can't say for sure that this is the issue. If those resources don't support tags, you can simply exclude them from the assignment so the compliance percentage doesn't include them (edit the assignment and add an exclusion).
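In the policy assignment resource itself, exclusions are expressed through the notScopes property (the portal's "add an exclusion" writes to it). A hedged sketch of an assignment fragment; all IDs below are placeholders, not values from the question:

```json
{
  "properties": {
    "displayName": "inheritTags-assignment",
    "policyDefinitionId": "/subscriptions/<subscription-id>/providers/Microsoft.Authorization/policyDefinitions/<definition-name>",
    "notScopes": [
      "/subscriptions/<subscription-id>/resourceGroups/<rg-with-untaggable-resources>"
    ]
  }
}
```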
