Print all VM names and private IPs from a subnet - Azure

I want to list all Virtual Machine names that have a private IP address in a specific subnet (e.g., one named "sub-a"). How do I do that?
I was hoping that this query in Azure Resource Graph Explorer would at least print all non-empty private IP addresses:
Resources
| where type =~ 'microsoft.compute/virtualmachines' and isnotempty(properties.privateIPAddress)

You need to look at the Network Interfaces and expand the properties to pull the Private IP address. Something like this should do the trick. I modified one of our examples to pull private IP instead of public IP.
Resources
| where type =~ 'microsoft.compute/virtualmachines'
| extend nics=array_length(properties.networkProfile.networkInterfaces)
| mv-expand nic=properties.networkProfile.networkInterfaces
| where nics == 1 or nic.properties.primary =~ 'true' or isempty(nic)
| project vmId = id, vmName = name, vmSize=tostring(properties.hardwareProfile.vmSize), nicId = tostring(nic.id)
| join kind=leftouter (
    Resources
    | where type =~ 'microsoft.network/networkinterfaces'
    | extend ipConfigsCount=array_length(properties.ipConfigurations)
    | extend subnet = tostring(properties.ipConfigurations[0].properties.subnet)
    | mv-expand ipconfig=properties.ipConfigurations
    | where ipConfigsCount == 1 or ipconfig.properties.primary =~ 'true'
    | project nicId = id, subnet, privateIp = tostring(ipconfig.properties.privateIPAddress))
    on nicId
| order by subnet asc
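If you only want the VMs whose NIC sits on a particular subnet (the "sub-a" from the question), one option is to project the subnet's resource id and filter on its name. A minimal sketch, assuming the subnet really is named sub-a (adjust the name and the projected columns as needed):
Resources
| where type =~ 'microsoft.compute/virtualmachines'
| mv-expand nic=properties.networkProfile.networkInterfaces
| project vmName = name, nicId = tostring(nic.id)
| join kind=inner (
    Resources
    | where type =~ 'microsoft.network/networkinterfaces'
    | mv-expand ipconfig=properties.ipConfigurations
    | project nicId = id,
              subnetId = tostring(ipconfig.properties.subnet.id),
              privateIp = tostring(ipconfig.properties.privateIPAddress))
    on nicId
| where subnetId endswith '/subnets/sub-a'
| project vmName, privateIp, subnetId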


Azure Graph Query in az graph query command

Resources
| where type has "microsoft.compute/disks"
| extend diskState = tostring(properties.diskState)
| where managedBy == ""
or diskState == 'Attached'
or diskState == 'Unattached'
| project name, diskState,managedBy,resourceGroup, location, subscriptionId, properties.diskSizeGB, properties.timeCreated
How do I convert this KQL query into an az graph query command?
I'm from the Microsoft for Founders Hub team. I was able to run this and it worked as intended:
az graph query -q
"Resources
| where type has 'microsoft.compute/disks'
| extend diskState = tostring(properties.diskState)
| where managedBy == ''
or diskState == 'Attached'
or diskState == 'Unattached'
| project name, diskState,managedBy,resourceGroup, location, subscriptionId, properties.diskSizeGB, properties.timeCreated"
Upon reviewing the code block you submitted:
az graph query -q “
Resources
| where type =~ ‘microsoft.compute/disks’
| extend diskState = tostring(properties.diskState)
| where managedBy == "" or diskState == 'Attached' or diskState == 'Unattached'
| project name, diskState,managedBy,resourceGroup, location, subscriptionId, diskSize=properties.diskSizeGB, timeCreation=properties.timeCreated
”
--query ‘
data[].{Disk_Name:name, Disk_State:diskState, Managed_By:managedBy, Resource_Group:resourceGroup, Location:location, Subscription_Id:subscriptionId, Disk_Size:diskSize, Time_of_Creation:timeCreation}
’
-o tsv
I noticed you have two "query" parameters and you have double quotes within your query. Please convert the double quotes to single quotes and only use one query parameter.
Please review this for more information: https://learn.microsoft.com/en-us/azure/governance/resource-graph/concepts/explore-resources
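Putting that feedback together, a rough sketch of the submitted command with the inner double quotes converted to single quotes and a single query string (keeping the asker's diskSize/timeCreation renames) would be:
az graph query -q
"Resources
| where type =~ 'microsoft.compute/disks'
| extend diskState = tostring(properties.diskState)
| where managedBy == '' or diskState == 'Attached' or diskState == 'Unattached'
| project name, diskState, managedBy, resourceGroup, location, subscriptionId, diskSize=properties.diskSizeGB, timeCreation=properties.timeCreated"
-o tsv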

Parse `key1=value1 key2=value2` in Kusto

I'm running Cilium inside an Azure Kubernetes cluster and want to parse the Cilium log messages in Azure Log Analytics. The log messages have a format like:
key1=value1 key2=value2 key3="if the value contains spaces, it's wrapped in quotation marks"
For example:
level=info msg="Identity of endpoint changed" containerID=a4566a3e5f datapathPolicyRevision=0
I couldn't find a matching parse_xxx method in the docs (e.g. https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/parsecsvfunction). Is it possible to write a custom function to parse this kind of log message?
Not a fun format to parse... But this should work:
let LogLine = "level=info msg=\"Identity of endpoint changed\" containerID=a4566a3e5f datapathPolicyRevision=0";
print LogLine
| extend KeyValuePairs = array_concat(
    extract_all("([a-zA-Z_]+)=([a-zA-Z0-9_]+)", LogLine),
    extract_all("([a-zA-Z_]+)=\"([a-zA-Z0-9_ ]+)\"", LogLine))
| mv-apply KeyValuePairs on
(
    extend p = pack(tostring(KeyValuePairs[0]), tostring(KeyValuePairs[1]))
    | summarize dict=make_bag(p)
)
The output will be:
| print_0 | dict |
|--------------------|-----------------------------------------|
| level=info msg=... | { |
| | "level": "info", |
| | "containerID": "a4566a3e5f", |
| | "datapathPolicyRevision": "0", |
| | "msg": "Identity of endpoint changed" |
| | } |
|--------------------|-----------------------------------------|
With the help of Slavik N, I came up with a query that works for me:
let containerIds = KubePodInventory
| where Namespace startswith "cilium"
| distinct ContainerID
| summarize make_set(ContainerID);
ContainerLog
| where ContainerID in (containerIds)
| extend KeyValuePairs = array_concat(
    extract_all("([a-zA-Z0-9_-]+)=([^ \"]+)", LogEntry),
    extract_all("([a-zA-Z0-9_]+)=\"([^\"]+)\"", LogEntry))
| mv-apply KeyValuePairs on
(
    extend p = pack(tostring(KeyValuePairs[0]), tostring(KeyValuePairs[1]))
    | summarize JSONKeyValuePairs=parse_json(make_bag(p))
)
| project TimeGenerated, Level=JSONKeyValuePairs.level, Message=JSONKeyValuePairs.msg, PodName=JSONKeyValuePairs.k8sPodName, Reason=JSONKeyValuePairs.reason, Controller=JSONKeyValuePairs.controller, ContainerID=JSONKeyValuePairs.containerID, Labels=JSONKeyValuePairs.labels, Raw=LogEntry
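As an aside: newer Kusto engines also ship a built-in parse-kv operator for exactly this key=value format. If it is available in your Log Analytics workspace, something along these lines may be simpler than the regex approach (the key list below is just an assumed subset of Cilium's fields):
ContainerLog
| where ContainerID in (containerIds)
| parse-kv LogEntry as (level: string, msg: string, reason: string) with (pair_delimiter=' ', kv_delimiter='=', quote='"')
| project TimeGenerated, Level=level, Message=msg, Reason=reason, Raw=LogEntry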

Nested query in Log Analytics

Hi, I'm trying to get to a log event by nesting a query in the "where" of another query. Is this possible?
AzureDiagnostics
| where resource_workflowName_s == "[Workflow Name]"
| where resource_runId_s == (AzureDiagnostics | where trackedProperties_PayloadID_g == "[GUID]" | distinct resource_runId_s)
Try:
AzureDiagnostics
| where resource_workflowName_s == "[Workflow Name]"
| where resource_runId_s in (
    toscalar(AzureDiagnostics
        | where trackedProperties_PayloadID_g == "[GUID]"
        | distinct resource_runId_s))
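Note that toscalar() only yields a single value, so if the inner query can return more than one run ID you may prefer to pass the tabular expression straight to the in() operator, which accepts one. A minimal sketch of that variant:
AzureDiagnostics
| where resource_workflowName_s == "[Workflow Name]"
| where resource_runId_s in (
    AzureDiagnostics
    | where trackedProperties_PayloadID_g == "[GUID]"
    | distinct resource_runId_s)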

Parse number out of field and put it in its own column

I have data that looks like the following
Public Name          | Internal Name
---------------------|--------------
Name of object1 (#1) | 1345312
Name of object2 (#2) | 1387924
...
object2000 (#2000)   | 6875238
And I'm hoping to parse the (#*) out into its own column, to look like the below:
Public Number | Public Name     | Internal Name
--------------|-----------------|--------------
(#1)          | Name of object1 | 1345312
(#2)          | Name of object2 | 1387924
...
(#2000)       | object2000      | 6875238
I have absolutely no idea how I would begin to do this. Thoughts?
SELECT
    -- everything from the first '(' onward, e.g. '(#1)'
    [Public Number] = CASE WHEN [Public Name] LIKE '%(#%'
        THEN SUBSTRING([Public Name], CHARINDEX('(', [Public Name]), 255)
        ELSE '' END,
    -- the name with the trailing ' (#n)' part removed
    [New Public Name] = CASE WHEN [Public Name] LIKE '%(#%'
        THEN RTRIM(LEFT([Public Name], CHARINDEX('(', [Public Name]) - 1))
        ELSE [Public Name] END,
    [Internal Name]
FROM dbo.table;
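Since most of the questions above are about Kusto, here is a rough equivalent for the case where the same data lives in a Log Analytics table rather than SQL; the inline datatable is just a stand-in for whatever table actually holds the Public Name / Internal Name columns:
datatable(PublicName: string, InternalName: string)
[
    "Name of object1 (#1)", "1345312",
    "Name of object2 (#2)", "1387924"
]
| project PublicNumber  = extract(@"\(#\d+\)", 0, PublicName),   // "(#1)"
          NewPublicName = trim_end(@"\s*\(#\d+\)", PublicName),  // "Name of object1"
          InternalName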

Understanding dequeue_rt_stack() for the RT scheduling class in Linux

The enqueue_task_rt function in ./kernel/sched/rt.c is responsible for queuing a task on the run queue. enqueue_task_rt contains a call to enqueue_rt_entity, which calls dequeue_rt_stack. Most of the code seems logical, but I am a bit lost with the function dequeue_rt_stack and unable to understand what it does. Can somebody explain the logic I am missing, or suggest some good reading?
Edit: The following is the code for the dequeue_rt_stack function:
static void dequeue_rt_stack(struct sched_rt_entity *rt_se)
{
    struct sched_rt_entity *back = NULL;

    /* the macro for_each_sched_rt_entity is defined as
       for (; rt_se; rt_se = rt_se->parent) */
    for_each_sched_rt_entity(rt_se) {
        rt_se->back = back;
        back = rt_se;
    }

    for (rt_se = back; rt_se; rt_se = rt_se->back) {
        if (on_rt_rq(rt_se))
            __dequeue_rt_entity(rt_se);
    }
}
More specifically, I do not understand why there is a need for this code:
for_each_sched_rt_entity(rt_se) {
    rt_se->back = back;
    back = rt_se;
}
What is its relevance?
When a task is to be added to some queue, it must first be removed from the queue that it currently is on, if any.
With the group scheduler, a task is always at the lowest level of the tree, and might have multiple ancestors:
NULL
^
|
+-----parent------+
| |
| top-level group |
| |
+-----------------+
^ ^_____________
| \
+-----parent------+ +-----parent------+
| | | |
| mid-level group | | other group | ...
| | | |
+-----------------+ +-----------------+
^ ^_____________
| \
+-----parent------+ +-----------------+
| | | |
| task | | other task | ...
| | | |
+-----------------+ +-----------------+
To remove the task from the tree, it must be removed from all groups' queues, and this must be done at the top-level group first (otherwise, the scheduler might try to run an already partially-removed task). Therefore, dequeue_rt_stack uses the back pointers to construct a list in the opposite direction:
NULL back
^ |
| V
+-parent----------+
| |
| top-level group |
| |
+----------back---+
^ | ^_____________
| V \
+-parent----------+ +-----parent------+
| | | |
| mid-level group | | other group | ...
| | | |
+----------back---+ +-----------------+
^ | ^_____________
| V \
+-parent----------+ +-----------------+
| | | |
| task | | other task | ...
| | | |
+----------back---+ +-----------------+
|
V
NULL
That back list can then be used to walk down the tree to remove the entities in the correct order.
I am new to kernel hacking, and this is my first time answering a Linux kernel question, but maybe this helps.
I read the source code, and I think it relates to group scheduling. When the kernel is built with:
#ifdef CONFIG_RT_GROUP_SCHED
several scheduling entities can be collected into one scheduling group.
static void enqueue_rt_entity(struct sched_rt_entity *rt_se, bool head)
{
    dequeue_rt_stack(rt_se);
    for_each_sched_rt_entity(rt_se)
        __enqueue_rt_entity(rt_se, head);
}
dequeue_rt_stack(rt_se) first removes the entity and every ancestor group entity from their run queues; the for_each_sched_rt_entity loop then enqueues them again, walking from the entity up through its parents.
Hierarchical group I/O scheduling
CFS group scheduling
