Unable to get memory usage of a virtual machine using the management API - Azure

I am not getting any timeseries data in the API call below for virtual machine memory usage.
I tried this:
Method: GET
Url:https://management.azure.com/subscriptions/XXXXXXXXXXXXXXXXXXXX/resourceGroups/XXXXXXXXXXXX/providers/Microsoft.Compute/virtualMachines/XXXXXXX/providers/microsoft.insights/metrics?timespan=2019-03-31T11:30:00.000Z/2020-09-14T11:00:00.000Z&interval=P1D&metricnames=\Memory\% Committed Bytes In Use&aggregation=Average&api-version=2018-01-01&metricnamespace=azure.vm.windows.guestmetrics
Authentication: Bearer token
**Response:**
{
"cost": 0,
"timespan": "2020-08-14T11:00:00Z/2020-09-14T11:00:00Z",
"interval": "P1D",
"value": [
{
"id": "/subscriptions/xxxxxxxxxxxxxxxxxx/resourceGroups/xxxxxxxxxxxxx/providers/Microsoft.Compute/virtualMachines/xxxxxxx/providers/Microsoft.Insights/metrics/\Memory\% Committed Bytes In Use",
"type": "Microsoft.Insights/metrics",
"name": {
"value": "\Memory\% Committed Bytes In Use",
"localizedValue": "\Memory\% Committed Bytes In Use"
},
"unit": "Unspecified",
"timeseries": [],
"errorCode": "Success"
}
],
"namespace": "azure.vm.windows.guestmetrics",
"resourceregion": "westus2"
}

Try this as a query against the Log Analytics Resource.
Reference: https://learn.microsoft.com/en-us/rest/api/loganalytics/dataaccess/query/get
let usedMemory = Perf
| where (ObjectName == 'Memory' and CounterName contains 'Committed Bytes')
| summarize UsedMemory = (avg(CounterValue)) by Computer;
let AvailMemory = InsightsMetrics
| extend localTimestamp = TimeGenerated - 7h
| where TimeGenerated > ago(1d)
| where Namespace == 'Memory' and Name == 'AvailableMB'
| extend AvailableMem = Val
| summarize arg_max(TimeGenerated, *) by Computer;
AvailMemory
| join kind=leftouter usedMemory on Computer
| extend FreeMemoryGB = round(AvailableMem/1024)
| parse Tags with * ':' TotalMemoryMB '}' Err
| project Computer, FreeMemoryGB, UsedMemory, TotalMemoryMB, localTimestamp, Namespace, Tags, AgentId, _ResourceId
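To run a query like this over REST (per the Log Analytics reference above) rather than in the portal, here is a minimal Python sketch; the workspace ID and token are placeholders, and requests is used purely for illustration:
import requests

# Hypothetical placeholders - substitute your own workspace ID and a bearer token
# issued for the https://api.loganalytics.io audience.
workspace_id = "<workspace-guid>"
token = "<bearer-token>"

query = """
Perf
| where ObjectName == 'Memory' and CounterName contains 'Committed Bytes'
| summarize UsedMemory = avg(CounterValue) by Computer
"""

# GET https://api.loganalytics.io/v1/workspaces/{workspaceId}/query?query=...
resp = requests.get(
    f"https://api.loganalytics.io/v1/workspaces/{workspace_id}/query",
    params={"query": query},
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()

# The result comes back as tables of columns/rows rather than the metrics 'timeseries' shape.
for table in resp.json()["tables"]:
    print([col["name"] for col in table["columns"]])
    for row in table["rows"]:
        print(row)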

Related

Create data frame from JSON Step Functions execution history (AWS)

I have Step Functions execution history in JSON format:
[{
"timestamp": "2022-07-18T13:03:03.346000+00:00",
"type": "ExecutionFailed",
"id": 3,
"previousEventId": 2,
"executionFailedEventDetails": {
"error": "States.Runtime",
"cause": "An error occurred while executing the state 'Workflow Choice state' (entered at the event id #2). Invalid path '$.contributor_id': The choice state's condition path references an invalid value."
}
}, {
"timestamp": "2022-07-18T13:03:03.306000+00:00",
"type": "ChoiceStateEntered",
"id": 2,
"previousEventId": 0,
"stateEnteredEventDetails": {
"name": "Workflow Choice state",
"input": "{\n \"Comment\": \"Insert your JSON here\"\n}",
"inputDetails": {
"truncated": false
}
}
}, {
"timestamp": "2022-07-18T13:03:03.252000+00:00",
"type": "ExecutionStarted",
"id": 1,
"previousEventId": 0,
"executionStartedEventDetails": {
"input": "{\n \"Comment\": \"Insert your JSON here\"\n}",
"inputDetails": {
"truncated": false
},
"roleArn": "arn:aws:iam::asdfg:role/step-all"
}
}]
We want to create a view like the one below.
The issue is that I am not able to get executionFailedEventDetails, stateEnteredEventDetails, and executionStartedEventDetails as new rows.
They come out as extra columns rather than as new rows.
The Step column should be the name held in stateEnteredEventDetails.
This is what I am doing:
import json
import pandas as pd
from tabulate import tabulate
raw = r"""[{
"timestamp": "2022-07-18T13:03:03.346000+00:00",
"type": "ExecutionFailed",
"id": 3,
"previousEventId": 2,
"executionFailedEventDetails": {
"error": "States.Runtime",
"cause": "An error occurred while executing the state 'Workflow Choice state' (entered at the event id #2). Invalid path '$.contributor_id': The choice state's condition path references an invalid value."
}
}, {
"timestamp": "2022-07-18T13:03:03.306000+00:00",
"type": "ChoiceStateEntered",
"id": 2,
"previousEventId": 0,
"stateEnteredEventDetails": {
"name": "Workflow Choice state",
"input": "{\n \"Comment\": \"Insert your JSON here\"\n}",
"inputDetails": {
"truncated": false
}
}
}, {
"timestamp": "2022-07-18T13:03:03.252000+00:00",
"type": "ExecutionStarted",
"id": 1,
"previousEventId": 0,
"executionStartedEventDetails": {
"input": "{\n \"Comment\": \"Insert your JSON here\"\n}",
"inputDetails": {
"truncated": false
},
"roleArn": "arn:aws:iam::asdfg:role/step-all"
}
}]"""
data = json.loads(raw, strict=False)
data = pd.json_normalize(data)
# print(data.to_csv(), index=False)
print(tabulate(data, headers='keys', tablefmt='psql'))
data.to_csv('file.csv',encoding='utf-8', index=False)
and the output is
+----+----------------------------------+--------------------+------+-------------------+-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------+----------------------------------------+---------------------------------------------------+----------------------------------------+-------------------------------------------------------+----------------------------------------+
| | timestamp | type | id | previousEventId | executionFailedEventDetails.error | executionFailedEventDetails.cause | stateEnteredEventDetails.name | stateEnteredEventDetails.input | stateEnteredEventDetails.inputDetails.truncated | executionStartedEventDetails.input | executionStartedEventDetails.inputDetails.truncated | executionStartedEventDetails.roleArn |
|----+----------------------------------+--------------------+------+-------------------+-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------+----------------------------------------+---------------------------------------------------+----------------------------------------+-------------------------------------------------------+----------------------------------------|
| 0 | 2022-07-18T13:03:03.346000+00:00 | ExecutionFailed | 3 | 2 | States.Runtime | An error occurred while executing the state 'Workflow Choice state' (entered at the event id #2). Invalid path '$.contributor_id': The choice state's condition path references an invalid value. | nan | nan | nan | nan | nan | nan |
| 1 | 2022-07-18T13:03:03.306000+00:00 | ChoiceStateEntered | 2 | 0 | nan | nan | Workflow Choice state | { | 0 | nan | nan | nan |
| | | | | | | | | "Comment": "Insert your JSON here" | | | | |
| | | | | | | | | } | | | | |
| 2 | 2022-07-18T13:03:03.252000+00:00 | ExecutionStarted | 1 | 0 | nan | nan | nan | nan | nan | { | 0 | arn:aws:iam::asdfg:role/step-all |
| | | | | | | | | | | "Comment": "Insert your JSON here" | | |
| | | | | | | | | | | } | | |
+----+----------------------------------+--------------------+------+-------------------+-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------+----------------------------------------+---------------------------------------------------+----------------------------------------+-------------------------------------------------------+----------------------------------------+
The details columns (from the 5th column onward) are dynamic, like all the columns; I have given an example with only 3 events here, but there can be any number of them.
Final Expected Output
Given file.json:
[{
"timestamp": "2022-07-18T13:03:03.346000+00:00",
"type": "ExecutionFailed",
"id": 3,
"previousEventId": 2,
"executionFailedEventDetails": {
"error": "States.Runtime",
"cause": "An error occurred while executing the state 'Workflow Choice state' (entered at the event id #2). Invalid path '$.contributor_id': The choice state's condition path references an invalid value."
}
}, {
"timestamp": "2022-07-18T13:03:03.306000+00:00",
"type": "ChoiceStateEntered",
"id": 2,
"previousEventId": 0,
"stateEnteredEventDetails": {
"name": "Workflow Choice state",
"input": "{\n \"Comment\": \"Insert your JSON here\"\n}",
"inputDetails": {
"truncated": false
}
}
}, {
"timestamp": "2022-07-18T13:03:03.252000+00:00",
"type": "ExecutionStarted",
"id": 1,
"previousEventId": 0,
"executionStartedEventDetails": {
"input": "{\n \"Comment\": \"Insert your JSON here\"\n}",
"inputDetails": {
"truncated": false
},
"roleArn": "arn:aws:iam::asdfg:role/step-all"
}
}]
Doing
import pandas as pd
df = pd.read_json('file.json')
df = df.melt(['timestamp', 'type', 'id', 'previousEventId'], var_name='step', value_name='details').dropna()
print(df.to_markdown(index=False))
Output (Markdown was just the easiest for me to show all):
| timestamp | type | id | previousEventId | step | details |
|:---------------------------------|:-------------------|-----:|------------------:|:-----------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2022-07-18 13:03:03.346000+00:00 | ExecutionFailed | 3 | 2 | executionFailedEventDetails | {'error': 'States.Runtime', 'cause': "An error occurred while executing the state 'Workflow Choice state' (entered at the event id #2). Invalid path '$.contributor_id': The choice state's condition path references an invalid value."} |
| 2022-07-18 13:03:03.306000+00:00 | ChoiceStateEntered | 2 | 0 | stateEnteredEventDetails | {'name': 'Workflow Choice state', 'input': '{\n "Comment": "Insert your JSON here"\n}', 'inputDetails': {'truncated': False}} |
| 2022-07-18 13:03:03.252000+00:00 | ExecutionStarted | 1 | 0 | executionStartedEventDetails | {'input': '{\n "Comment": "Insert your JSON here"\n}', 'inputDetails': {'truncated': False}, 'roleArn': 'arn:aws:iam::asdfg:role/step-all'} |
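If the Step column should show the state name held inside stateEnteredEventDetails (as mentioned in the question) rather than the raw key, a small follow-up sketch on the melted frame above; step_name is a hypothetical column name, and rows whose details carry no 'name' simply get None:
# 'details' holds the nested dict for each event; pull out the state name where present.
df['step_name'] = df['details'].apply(lambda d: d.get('name') if isinstance(d, dict) else None)
print(df[['timestamp', 'type', 'step', 'step_name']].to_markdown(index=False))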

Azure Resource Graph query: rewrite the default tag response

I am trying to alter the default response of an Azure Resource Graph query into something similar to what the Azure portal uses. My query is:
resourcecontainers
| where type == "microsoft.resources/subscriptions"
| project name, tags
The response for tags looks like this:
"tags": {
"TagA": "TeamA",
"TagB": "TeamB",
"TagC": "TeamC"
},
I would like to alter this into:
"tags": [
{
"name": "TagA",
"value": "TeamA"
},
{
"name": "TagB",
"value": "TeamB"
},
{
"name": "TagC",
"value": "TeamC"
}
]
How can I do this? All the examples I have found are either for a single tag or for a static set of tags. Mine would need to support a dynamic number of tags.
As you have confirmed, to achieve the above requirement we can use the query below:
resourcecontainers
| where type =~ 'microsoft.resources/subscriptions'
| mvexpand parsejson(tags)
| extend tagname = tostring(bag_keys(tags)[0])
| extend tagvalue = tostring(tags[tagname])
| project name, tagname, tagvalue
For more information, please refer to this blog.
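If it is acceptable to reshape the result on the client side instead, here is a minimal Python sketch; the tags dictionary below is assumed to come from the Resource Graph response for one subscription:
# Hypothetical tags bag as returned by the query in the question.
tags = {"TagA": "TeamA", "TagB": "TeamB", "TagC": "TeamC"}

# Reshape the property bag into the portal-style list of name/value objects.
tags_list = [{"name": k, "value": v} for k, v in tags.items()]
print(tags_list)
# [{'name': 'TagA', 'value': 'TeamA'}, {'name': 'TagB', 'value': 'TeamB'}, {'name': 'TagC', 'value': 'TeamC'}]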

Creating Dashboard using Kusto query in ARM Template format

Hope you all are doing good.
I've been working on a Kusto query which runs against an Azure Kubernetes cluster. It works fine on its own, but when I try to incorporate it within an ARM template to create a dashboard, it throws an error along the lines of "expected token 'RightParenthesis', actual 'Identifier'".
The query, as run in the Log Analytics workspace, is given below:
let clusterName = 'AKS';
Perf
| where TimeGenerated > ago(2m)
| where ObjectName == "K8SNode"
| where CounterName == "cpuAllocatableNanoCores"
| where InstanceName contains clusterName
| summarize arg_max(TimeGenerated, *) by Computer
| summarize TotalCores=sum(CounterValue), x="join"
| join kind = inner (
KubePodInventory
| where TimeGenerated > ago(2m)
| where ClusterName contains clusterName
| summarize arg_max(TimeGenerated, *) by ContainerName
| project Name, ContainerName, Namespace
| join kind = inner (
Perf
| where TimeGenerated > ago(2m)
| where ObjectName == 'K8SContainer'
| where CounterName == 'cpuUsageNanoCores'
| where InstanceName contains clusterName
| extend ContainerNameParts = split(InstanceName, '/')
| extend ContainerNamePartCount = array_length(ContainerNameParts)
| extend PodUIDIndex = ContainerNamePartCount - 2, ContainerNameIndex = ContainerNamePartCount - 1
| extend ContainerName = strcat(ContainerNameParts[PodUIDIndex], '/', ContainerNameParts[ContainerNameIndex])
| summarize arg_max(TimeGenerated, *) by ContainerName
| project ContainerName, CounterValue
)
on ContainerName
| project Name, Namespace, CounterValue
| summarize CoresPerNamespace=sum(CounterValue) by Namespace, x="join"
)
on x
| project UtcTime=now(), Namespace, CpuUtilizationPerNamespace=(CoresPerNamespace / TotalCores) * 100
==========================
Later I tried to incorporate this within an ARM template, and it throws an error.
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"dashboardName": {
"defaultValue": "AKS-Temp-Dashboard",
"type": "String",
"metadata": {
"description": "DASHBOARD name for Azure kubernetes service cluster"
}
},
"resourceGroupName": {
"defaultValue": "aks-rg",
"type": "String",
"metadata": {
"description": "Azure resourceGroup name for Azure kubernetes service cluster"
}
},
"aksClusterName": {
"defaultValue": "aks",
"type": "String",
"metadata": {
"description": "Azure kubernetes service cluster name"
}
}
},
"variables": {
"dashboardName": "[concat(parameters('dashboardName'), '-DASHBOARD')]",
"aksResourceId": "[resourceId(parameters('resourceGroupName') , 'Microsoft.ContainerService/managedClusters', parameters('aksClusterName'))]"
},
"resources": [
{
"properties": {
"lenses": {
"0": {
"order": 0,
"parts": {
"1": {
"position": {
"x": 0,
"y": 3,
"colSpan": 5,
"rowSpan": 3
},
"metadata": {
"inputs": [
{
"name": "resourceTypeMode",
"isOptional": true
},
{
"name": "ComponentId",
"isOptional": true
},
{
"name": "Scope",
"value": {
"resourceIds": [
"/subscriptions/7777t-a1ae-4ca9-89bc-gfgf7577575/resourcegroups/rg-eus/providers/microsoft.operationalinsights/workspaces/aks""
]
},
"isOptional": true
},
{
"name": "PartId",
"value": "4ae7fd78-1a4f-4025-a091-7d57e08d6822",
"isOptional": true
},
{
"name": "Version",
"value": "2.0",
"isOptional": true
},
{
"name": "TimeRange",
"isOptional": true
},
{
"name": "DashboardId",
"isOptional": true
},
{
"name": "DraftRequestParameters",
"isOptional": true
},
{
"name": "Query",
"value": "[concat('let clustername = \"', parameters('aksClusterName'), '\"; Perf\n| where TimeGenerated > ago(2m)\n| where ObjectName == \"K8SNode\" \n| where CounterName == \"cpuAllocatableNanoCores\" \n| where InstanceName contains clusterName\n| summarize arg_max(TimeGenerated, *) by Computer\n| summarize TotalCores=sum(CounterValue), x=\"join\"\n| join kind = inner (\n KubePodInventory\n | where TimeGenerated > ago(2m)\n | where ClusterName contains clusterName\n | summarize arg_max(TimeGenerated, *) by ContainerName\n | project Name, ContainerName, Namespace\n | join kind = inner (\n Perf\n | where TimeGenerated > ago(2m)\n | where ObjectName == 'K8SContainer'\n | where CounterName == 'cpuUsageNanoCores'\n | where InstanceName contains clusterName\n | extend ContainerNameParts = split(InstanceName, '/')\n | extend ContainerNamePartCount = array_length(ContainerNameParts) \n | extend PodUIDIndex = ContainerNamePartCount - 2, ContainerNameIndex = ContainerNamePartCount - 1 \n | extend ContainerName = strcat(ContainerNameParts[PodUIDIndex], '/', ContainerNameParts[ContainerNameIndex])\n | summarize arg_max(TimeGenerated, *) by ContainerName\n | project ContainerName, CounterValue\n )\n on ContainerName\n | project Name, Namespace, CounterValue\n | summarize CoresPerNamespace=sum(CounterValue) by Namespace, x=\"join\"\n )\n on x\n| project UtcTime=now(), Namespace, CpuUtilizationPerNamespace=(CoresPerNamespace / TotalCores) * 100\n\n ')]",
"isOptional": true
},
{
"name": "ControlType",
"value": "FrameControlChart",
"isOptional": true
},
{
"name": "SpecificChart",
"value": "StackedColumn",
"isOptional": true
},
{
"name": "PartTitle",
"value": "Analytics",
"isOptional": true
},
{
"name": "PartSubTitle",
"value": "gaks-la1",
"isOptional": true
},
{
"name": "Dimensions",
"value": {
"xAxis": {
"name": "UtcTime",
"type": "datetime"
},
"yAxis": [
{
"name": "CpuUtilizationPerNamespace",
"type": "real"
}
],
"splitBy": [
{
"name": "Namespace",
"type": "string"
}
],
"aggregation": "Sum"
},
"isOptional": true
},
{
"name": "LegendOptions",
"value": {
"isEnabled": true,
"position": "Bottom"
},
"isOptional": true
},
{
"name": "IsQueryContainTimeRange",
"value": true,
"isOptional": true
}
],
"type": "Extension/Microsoft_OperationsManagementSuite_Workspace/PartType/LogsDashboardPart",
"settings": {}
}
}
}
}
},
"metadata": {
"model": {}
}
},
"name": "[variables('dashboardName')]",
"type": "Microsoft.Portal/dashboards",
"location": "[resourceGroup().location]",
"tags": {
"hidden-title": "AKS-Monitoring-Dashboard"
},
"apiVersion": "2015-08-01-preview"
}
]
}
Kindly assist me in sorting out this issue.
NOTE: To me it looks like the concat() function is generating the issue; the concat() expression does not seem to be set up correctly.
***** Update: adding the working query, after struggling a lot, so that other users can benefit *****
"[concat('let clustername = \"', parameters('aksClusterName'), '\"; Perf \r\n| where TimeGenerated > ago(2m) \r\n| where ObjectName == \"K8SNode\" \r\n| where CounterName == \"cpuAllocatableNanoCores\" \r\n| where InstanceName contains clustername \r\n| summarize arg_max(TimeGenerated, *) by Computer \r\n| summarize TotalCores=sum(CounterValue), x=''join'' \r\n| join kind = inner (\r\n KubePodInventory \r\n | where TimeGenerated > ago(2m) \r\n | where ClusterName contains clustername \r\n | summarize arg_max(TimeGenerated, *) by ContainerName \r\n | project Name, ContainerName, Namespace \r\n | join kind = inner (\r\n Perf \r\n | where TimeGenerated > ago(2m) \r\n | where ObjectName == ''K8SContainer'' \r\n | where CounterName == ''cpuUsageNanoCores'' \r\n | where InstanceName contains clustername \r\n | extend ContainerNameParts = split(InstanceName, ''/'') \r\n | extend ContainerNamePartCount = array_length(ContainerNameParts) \r\n | extend PodUIDIndex = ContainerNamePartCount - 2 \r\n| extend ContainerNameIndex = ContainerNamePartCount - 1 \r\n | extend ContainerName = strcat(ContainerNameParts[PodUIDIndex], ''/'', ContainerNameParts[ContainerNameIndex]) \r\n | summarize arg_max(TimeGenerated, *) by ContainerName\r\n | project ContainerName, CounterValue\r\n )\r\n on ContainerName\r\n | project Name, Namespace, CounterValue\r\n | summarize CoresPerNamespace=sum(CounterValue) by Namespace, x=''join''\r\n )\r\n on x \r\n| project UtcTime=now(), Namespace, CpuUtilizationPerNamespace=(CoresPerNamespace / TotalCores) * 100')]"
In your KQL query you have apostrophes (e.g. where CounterName == 'cpuUsageNanoCores') that you didn't escape. ARM treats them as the end of the string and expects a comma and another parameter for the concat() function, hence the error. As far as I recall, you escape an apostrophe with another one (''), so 'cpuUsageNanoCores' has to be written as ''cpuUsageNanoCores'' inside the concat() string.
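If you generate the template programmatically, a small helper along these lines (plain Python, simply doubling apostrophes) can do the escaping before the KQL is pasted into the concat() expression; escape_for_arm_concat is a hypothetical name:
def escape_for_arm_concat(kql: str) -> str:
    """Double every apostrophe so the KQL survives inside an ARM concat() string literal."""
    return kql.replace("'", "''")

print(escape_for_arm_concat("| where ObjectName == 'K8SContainer'"))
# | where ObjectName == ''K8SContainer''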

Replacing a value for a given key in Kusto

I am trying to use the .set-or-replace command to amend the "subject" entry below from sample/consumption/backups to sample/consumption/backup, but I am not having much luck in the world of Kusto.
I can't seem to reference the sub-entries within Records and data.
"source_": CustomEventRawRecords,
"Records": [
{
"metadataVersion": "1",
"dataVersion": "",
"eventType": "consumptionRecorded",
"eventTime": "1970-01-01T00:00:00.0000000Z",
"subject": "sample/consumption/backups",
"topic": "/subscriptions/1234567890id/resourceGroups/rg/providers/Microsoft.EventGrid/topics/webhook",
"data": {
"resourceId": "/subscriptions/1234567890id/resourceGroups/RG/providers/Microsoft.Compute/virtualMachines/vm"
},
"id": "1234567890id"
}
],
The command I've tried to get to work:
.set-or-replace [async] CustomEventRawRecords [with (subject = sample/consumption/backup [, ...])] <| QueryOrCommand
If you're already manipulating the data, why not turn it into a columnar representation? That way you can easily make the corrections you want, and you also get the full richness of the tabular operators, plus an IntelliSense experience that helps you formulate queries easily.
Here's an example query that will do that:
execute query in browser
datatable (x: dynamic)[dynamic({"source_": "CustomEventRawRecords",
"Records": [
{
"metadataVersion": "1",
"dataVersion": "",
"eventType": "consumptionRecorded",
"eventTime": "1970-01-01T00:00:00.0000000Z",
"subject": "sample/consumption/backups",
"topic": "/subscriptions/1234567890id/resourceGroups/rg/providers/Microsoft.EventGrid/topics/webhook",
"data": {
"resourceId": "/subscriptions/1234567890id/resourceGroups/RG/providers/Microsoft.Compute/virtualMachines/vm"
},
"id": "1234567890id"
}
]})]
| extend records = x.Records
| mv-expand record=records
| extend subject = tostring(record.subject)
| extend subject = iff(subject == "sample/consumption/backups", "sample/consumption/backup", subject)
| extend metadataVersion = tostring(record.metadataVersion)
| extend dataVersion = tostring(record.dataVersion)
| extend eventType = tostring(record.eventType)
| extend topic= tostring(record.topic)
| extend data = record.data
| extend id = tostring(record.id)
| project-away x, records, record
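If you then want to persist the corrected rows with .set-or-replace, here is a hedged sketch using the azure-kusto-data Python client; the cluster, database, and target table names are placeholders, and the column names in the query (x, Records, subject) follow the example above and may differ in your actual table:
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

# Hypothetical cluster/database names - replace with your own.
cluster = "https://<your-cluster>.kusto.windows.net"
database = "<your-database>"

kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(cluster)
client = KustoClient(kcsb)

# Corrective query adapted from the answer above; adjust 'x' to the column holding the payload.
correction_query = """
CustomEventRawRecords
| extend records = x.Records
| mv-expand record = records
| extend subject = tostring(record.subject)
| extend subject = iff(subject == 'sample/consumption/backups', 'sample/consumption/backup', subject)
"""

# .set-or-replace materializes the query result into the target table, replacing its contents.
command = f".set-or-replace FixedCustomEventRawRecords <| {correction_query}"
client.execute_mgmt(database, command)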

AWS CLI Query IF condition

Could I get help with getting the values of an Alias record into the CLI table output?
test.emea.example.com. is an alias.
By the way, if a DNS record is multi-valued, how do I flatten and concatenate the values?
Below is the example:
$ aws route53 list-resource-record-sets --hosted-zone-id Z34XXXXXXXX4EF --query "ResourceRecordSets[?Type=='A']"
[
{
"ResourceRecords": [
{
"Value": "10.189.134.12"
}
],
"Type": "A",
"Name": "dnsforwarder0.emea.example.com.",
"TTL": 300
},
{
"ResourceRecords": [
{
"Value": "10.189.134.47"
}
],
"Type": "A",
"Name": "dnsforwarder1.emea.example.com.",
"TTL": 300
},
{
"ResourceRecords": [
{
"Value": "10.189.134.78"
}
],
"Type": "A",
"Name": "dnsforwarder2.emea.example.com.",
"TTL": 300
},
{
"AliasTarget": {
"HostedZoneId": "Z32O12XQLNTSW2",
"EvaluateTargetHealth": false,
"DNSName": "dualstack.kubernetes-elb-k8fca-prod-emea-1745420721.eu-west-1.elb.amazonaws.com."
},
"Type": "A",
"Name": "test.emea.example.com."
}
]
[Tiger-Pengs-MacBook-Pro:~/aws/aws_fed]
tpeng $ aws route53 list-resource-record-sets --hosted-zone-id Z34XXXXXXXX4EF --query "ResourceRecordSets[?Type=='A'].[Name,Type,ResourceRecords[0].Value]" --output table --color off
-----------------------------------------------------------
| ListResourceRecordSets |
+----------------------------------+----+-----------------+
| dnsforwarder0.emea.example.com. | A | 10.189.134.12 |
| dnsforwarder1.emea.example.com. | A | 10.189.134.47 |
| dnsforwarder2.emea.example.com. | A | 10.189.134.78 |
| test.emea.example.com. | A | None |
+----------------------------------+----+-----------------+
Based on John and Alex's idea, I used a union as well as the join function to get what I want. However, it is too verbose.
$ aws route53 list-resource-record-sets --hosted-zone-id Z34XXXXXXXX4EF --query "[ResourceRecordSets[?Type=='A' && contains(keys(#), 'AliasTarget')].[Name,Type,AliasTarget.DNSName],ResourceRecordSets[?Type=='A' && contains(keys(#), 'ResourceRecords')].[Name,Type,join(', ', ResourceRecords[].Value)]]" --output table
-------------------------------------------------------------------------------------------------------------------------
| ListResourceRecordSets |
+-----------------------------+----+------------------------------------------------------------------------------------+
| test.example.com. | A | dualstack.kubernetes-elb-k8fca-prod-emea-1745420721.eu-west-1.elb.amazonaws.com. |
| dnsforwarder0.example.com. | A | 10.189.134.12 |
| dnsforwarder1.example.com. | A | 10.189.134.47 |
| dnsforwarder2.example.com. | A | 10.189.134.78 |
| privatezone.example.com. | A | 10.20.30.40, 10.10.30.40, 10.30.30.40 |
+-----------------------------+----+------------------------------------------------------------------------------------+
You can do this by combining a JMESPath And expression with the keys and contains functions. So this would work:
aws route53 list-resource-record-sets --hosted-zone-id Z34XXXXXXXX4EF \
--query 'ResourceRecordSets[?Type==`A` && contains(keys(#), `AliasTarget`)].Name'
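If the combined JMESPath expression still feels too verbose, a hedged alternative is to do the flattening in Python with boto3 (the hosted zone ID is the placeholder from the question, and pagination is ignored here for brevity):
import boto3

client = boto3.client("route53")
records = client.list_resource_record_sets(HostedZoneId="Z34XXXXXXXX4EF")["ResourceRecordSets"]

for r in records:
    if r["Type"] != "A":
        continue
    if "AliasTarget" in r:
        # Alias records carry their target in AliasTarget.DNSName instead of ResourceRecords.
        value = r["AliasTarget"]["DNSName"]
    else:
        # Multi-value records are flattened into a single comma-separated string.
        value = ", ".join(rr["Value"] for rr in r.get("ResourceRecords", []))
    print(f"{r['Name']}\t{r['Type']}\t{value}")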
