Disk query in Log Analytics on Azure

I was wondering if I could get some help with Log Analytics. I'm new to this, so bear with me.
I'm trying to create a query that will provide information on disk utilisation in Azure. I've got two commands (below), however I'm not able to merge them; I would like one query that gives me % free space, the overall size of the disk, the name of the VM and the name of the disk. Anything else I can get in terms of disk usage would be great, I'm not overly concerned with IOPS at the moment.
The commands are:
This command below provides info on free space:
search ObjectName == "LogicalDisk" and CounterName == "% Free Space"
This command below provides information on free megabytes remaining:
search ObjectName == "LogicalDisk" and CounterName == "Free Megabytes"
I have tried this, which helps, but again the information is quite limited:
search ObjectName == "LogicalDisk" and CounterName == "Free Megabytes" and TimeGenerated > ago(1d)
| summarize FreeSpace = min(CounterValue) by Computer, InstanceName
| where strlen(InstanceName) ==2 and InstanceName contains ":"
Thanks in advance :)

You can use the query below against the Log Analytics workspace:
// % Disk free space
Perf | where ObjectName == "LogicalDisk" and CounterName == "% Free Space" and InstanceName != "_Total"
| summarize CounterValue = min(CounterValue) by Computer, InstanceName, CounterName
| order by CounterValue asc nulls first
To limit the output to disks with less than 20% free space, just add an extra condition:
| where CounterValue < 20
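To get everything you asked for in one result set (% free space, overall disk size, VM name and disk name), you could join the two counters on computer and disk. This is only a sketch based on the standard Perf schema used above; the total size is derived from free MB and % free, so it is approximate:
// % free space, free GB and approximate total size per VM and disk
let freePct = Perf
| where TimeGenerated > ago(1d)
| where ObjectName == "LogicalDisk" and CounterName == "% Free Space" and InstanceName != "_Total"
| summarize PercentFree = avg(CounterValue) by Computer, InstanceName;
let freeMB = Perf
| where TimeGenerated > ago(1d)
| where ObjectName == "LogicalDisk" and CounterName == "Free Megabytes" and InstanceName != "_Total"
| summarize FreeMB = avg(CounterValue) by Computer, InstanceName;
freePct
| join kind=inner (freeMB) on Computer, InstanceName
| extend FreeGB = round(FreeMB / 1024, 2)
| extend TotalGB = round(FreeMB / 1024 * 100 / PercentFree, 2)
| project Computer, Disk = InstanceName, PercentFree = round(PercentFree, 1), FreeGB, TotalGB
| order by PercentFree asc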

You could use the following command:
Perf
| where ObjectName == "LogicalDisk" and CounterName == "Free Megabytes"
| summarize arg_max(TimeGenerated, *) by Computer
| sort by TimeGenerated desc
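Note that summarize arg_max(TimeGenerated, *) by Computer keeps only the single most recent counter record per machine; if you want the latest reading per disk, you could group by the instance as well, for example:
Perf
| where ObjectName == "LogicalDisk" and CounterName == "Free Megabytes"
| summarize arg_max(TimeGenerated, CounterValue) by Computer, InstanceName
| sort by Computer asc, InstanceName asc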
For more information about this, you can check this link.

Azure Graph Query in az graph query command

Resources
| where type has "microsoft.compute/disks"
| extend diskState = tostring(properties.diskState)
| where managedBy == ""
or diskState == 'Attached'
or diskState == 'Unattached'
| project name, diskState,managedBy,resourceGroup, location, subscriptionId, properties.diskSizeGB, properties.timeCreated
How do I convert this KQL query into an az graph query command?
I'm from the Microsoft for Founders Hub team. I was able to run this and it worked as intended:
az graph query -q
"Resources
| where type has 'microsoft.compute/disks'
| extend diskState = tostring(properties.diskState)
| where managedBy == ''
or diskState == 'Attached'
or diskState == 'Unattached'
| project name, diskState,managedBy,resourceGroup, location, subscriptionId, properties.diskSizeGB, properties.timeCreated"
Upon reviewing the code block you submitted:
az graph query -q “
Resources
| where type =~ ‘microsoft.compute/disks’
| extend diskState = tostring(properties.diskState)
| where managedBy == "" or diskState == 'Attached' or diskState == 'Unattached'
| project name, diskState,managedBy,resourceGroup, location, subscriptionId, diskSize=properties.diskSizeGB, timeCreation=properties.timeCreated
”
--query ‘
data[].{Disk_Name:name, Disk_State:diskState, Managed_By:managedBy, Resource_Group:resourceGroup, Location:location, Subscription_Id:subscriptionId, Disk_Size:diskSize, Time_of_Creation:timeCreation}
’
-o tsv
I noticed you have two "query" parameters and you have double quotes within your query. Please convert the double quotes to single quotes and only use one query parameter.
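For example, applying both changes to the command above would give something like this (just a sketch, kept on one line so the shell quoting stays simple):
az graph query -q "Resources | where type =~ 'microsoft.compute/disks' | extend diskState = tostring(properties.diskState) | where managedBy == '' or diskState == 'Attached' or diskState == 'Unattached' | project name, diskState, managedBy, resourceGroup, location, subscriptionId, diskSize=properties.diskSizeGB, timeCreation=properties.timeCreated" -o tsv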
Please review this for more information: https://learn.microsoft.com/en-us/azure/governance/resource-graph/concepts/explore-resources

Print all VM names and private IP from subnet

I want to list the names of all virtual machines that have a private IP address under a specific subnet (e.g., one named "sub-a"). How do I do that?
I was hoping that this query in Azure Resource Graph Explorer would at least print all non-empty private IP addresses:
Resources
| where type =~ 'microsoft.compute/virtualmachines' and isnotempty(properties.privateIPAddress)
You need to look at the Network Interfaces and expand the properties to pull the Private IP address. Something like this should do the trick. I modified one of our examples to pull private IP instead of public IP.
Resources
| where type =~ 'microsoft.compute/virtualmachines'
| extend nics=array_length(properties.networkProfile.networkInterfaces)
| mv-expand nic=properties.networkProfile.networkInterfaces
| where nics == 1 or nic.properties.primary =~ 'true' or isempty(nic)
| project vmId = id, vmName = name, vmSize=tostring(properties.hardwareProfile.vmSize), nicId = tostring(nic.id)
| join kind=leftouter (
Resources
| where type =~ 'microsoft.network/networkinterfaces'
| extend ipConfigsCount=array_length(properties.ipConfigurations)
| extend subnet = tostring(properties.ipConfigurations[0].properties.subnet)
| mv-expand ipconfig=properties.ipConfigurations
| where ipConfigsCount == 1 or ipconfig.properties.primary =~ 'true'
| project nicId = id, subnet, privateIp = tostring(ipconfig.properties.privateIPAddress))
on nicId
| order by subnet asc
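To answer the original question (only VMs whose NIC sits in a particular subnet, e.g. one named "sub-a"), you can append a filter on the subnet value the join returns; a sketch, assuming that string contains the subnet's resource id:
| where subnet contains "/subnets/sub-a"
| project vmName, privateIp, subnet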

Nested query in Log Analytics

Hi, I'm trying to get to a log event by nesting a query in the "where" of another query. Is this possible?
AzureDiagnostics
| where resource_workflowName_s == "[Workflow Name]"
| where resource_runId_s == (AzureDiagnostics | where trackedProperties_PayloadID_g == "[GUID]" | distinct resource_runId_s)
try:
AzureDiagnostics
| where resource_workflowName_s == "[Workflow Name]"
| where resource_runId_s in (
toscalar(AzureDiagnostics
| where trackedProperties_PayloadID_g == "[GUID]"
| distinct resource_runId_s))
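One thing to be aware of: toscalar() only keeps a single value, so if the payload GUID can map to more than one run you may prefer to pass the subquery straight to in, which accepts a tabular expression:
AzureDiagnostics
| where resource_workflowName_s == "[Workflow Name]"
| where resource_runId_s in (
    AzureDiagnostics
    | where trackedProperties_PayloadID_g == "[GUID]"
    | distinct resource_runId_s)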

Array placeholder in Gherkin syntax

Hi, I am trying to express a set of requirements in Gherkin syntax, but it requires a good deal of repetition. I saw here that I can use placeholders, which would be perfect for my task; however, some of the data in my Given and in my Then are collections. How would I go about representing collections in the Examples?
Given a collection of spaces <spaces>
And a <request> to allocate space
When I allocate the request
Then I should have <allocated_spaces>
Examples:
| spaces | request | allocated_spaces |
| ? | ? | ? |
A bit hacky, but you can delimit a string:
Given a collection of spaces <spaces>
And a <request> to allocate space
When I allocate the request
Then I should have <allocated_spaces>
Examples:
| spaces | request | allocated_spaces |
| a,b,c | ? | ? |
Given(/^a collection of spaces (.*?)$/) do |arg1|
collection = arg1.split(",") #=> ["a","b","c"]
end
You can use Data Tables. I have never tried to put a param in a data table before, but in theory it should work.
Given a collection of spaces:
| space1 |
| space2 |
| <space_param> |
And a <request> to allocate space
When I allocate the request
Then I should have <allocated_spaces>
Examples:
| space_param | request | allocated_spaces |
| ? | ? | ? |
The given data table would be an instance of Cucumber::Ast::Table; check out the RubyDoc for its API.
Here's an example, again using split, but without the regex:
Scenario Outline: To ensure proper allocation
Given a collection of spaces <spaces>
And a <request> to allocate space
When I allocate the request
Then I should have <allocated_spaces>
Examples:
| spaces | request | allocated_spaces |
| "s1, s2, s3" | 2 | 2 |
| "s1, s2, s3" | 3 | 3 |
| "s1, s2" | 3 | 2 |
I use cucumber-js so this is what the code may look like:
Given('a collection of spaces {stringInDoubleQuotes}', function (spaces, callback) {
// Write code here that turns the phrase above into concrete actions
this.availableSpaces = spaces.split(", ");
callback();
});
Given('a {int} to allocate space', function (numToAllocate, callback) {
this.numToAllocate = numToAllocate;
// Write code here that turns the phrase above into concrete actions
callback();
});
When('I allocate the request', function (callback) {
console.log("availableSpaces:", this.availableSpaces);
console.log("numToAllocate:", this.numToAllocate);
// Write code here that turns the phrase above into concrete actions
callback();
});
Then('I should have {int}', function (int, callback) {
// Write code here that turns the phrase above into concrete actions
callback(null, 'pending');
});

How to get overall CPU usage (e.g. 57%) on Linux [closed]

I am wondering how you can get the system CPU usage and present it as a percentage, using bash for example.
Sample output:
57%
In case there is more than one core, it would be nice if an average percentage could be calculated.
Take a look at cat /proc/stat
grep 'cpu ' /proc/stat | awk '{usage=($2+$4)*100/($2+$4+$5)} END {print usage "%"}'
EDIT: please read the comments before copy-pasting this or using it for any serious work. This was not tested or used; it's an idea for people who do not want to install a utility or who need something that works in any distribution. Some people think you can "apt-get install" anything.
NOTE: this is not the current CPU usage, but the overall CPU usage in all the cores since the system bootup. This could be very different from the current CPU usage. To get the current value top (or similar tool) must be used.
Current CPU usage can be potentially calculated with:
awk '{u=$2+$4; t=$2+$4+$5; if (NR==1){u1=u; t1=t;} else print ($2+$4-u1) * 100 / (t-t1) "%"; }' \
<(grep 'cpu ' /proc/stat) <(sleep 1;grep 'cpu ' /proc/stat)
You can try:
top -bn1 | grep "Cpu(s)" | \
sed "s/.*, *\([0-9.]*\)%* id.*/\1/" | \
awk '{print 100 - $1"%"}'
Try mpstat from the sysstat package:
> sudo apt-get install sysstat
Running mpstat then prints something like:
Linux 3.0.0-13-generic (ws025) 02/10/2012 _x86_64_ (2 CPU)
03:33:26 PM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %idle
03:33:26 PM all 2.39 0.04 0.19 0.34 0.00 0.01 0.00 0.00 97.03
Then use some cut or grep to parse the info you need:
mpstat | grep -A 5 "%idle" | tail -n 1 | awk -F " " '{print 100 - $12}'
Might as well throw up an actual response with my solution, which was inspired by Peter Liljenberg's:
$ mpstat | awk '$12 ~ /[0-9.]+/ { print 100 - $12"%" }'
0.75%
This will use awk to print out 100 minus the 12th field (idle), with a percentage sign after it. awk will only do this for a line where the 12th field has numbers and dots only ($12 ~ /[0-9.]+/).
You can also average five samples, one second apart:
$ mpstat 1 5 | awk 'END{print 100-$NF"%"}'
Test it like this:
$ mpstat 1 5 | tee /dev/tty | awk 'END{print 100-$NF"%"}'
EDITED: I noticed that in another user's reply %idle was field 12 instead of field 11. The awk has been updated to account for the %idle field being variable.
This should get you the desired output:
mpstat | awk '$3 ~ /CPU/ { for(i=1;i<=NF;i++) { if ($i ~ /%idle/) field=i } } $3 ~ /all/ { print 100 - $field }'
If you want a simple integer rounding, you can use printf:
mpstat | awk '$3 ~ /CPU/ { for(i=1;i<=NF;i++) { if ($i ~ /%idle/) field=i } } $3 ~ /all/ { printf("%d%%",100 - $field) }'
Do this to see the overall CPU usage. This calls python3 and uses the cross-platform psutil module.
printf "%b" "import psutil\nprint('{}%'.format(psutil.cpu_percent(interval=2)))" | python3
The interval=2 part says to measure the total CPU load over a blocking period of 2 seconds.
Sample output:
9.4%
The python program it contains is this:
import psutil
print('{}%'.format(psutil.cpu_percent(interval=2)))
Placing time in front of the call proves it takes the specified interval time of about 2 seconds in this case. Here is the call and output:
$ time printf "%b" "import psutil\nprint('{}%'.format(psutil.cpu_percent(interval=2)))" | python3
9.5%
real 0m2.127s
user 0m0.119s
sys 0m0.008s
To view the output for individual cores as well, let's use this python program below. First, I obtain a python list (array) of "per CPU" information, then I average everything in that list to get a "total % CPU" type value. Then I print the total and the individual core percents.
Python program:
import psutil
cpu_percent_cores = psutil.cpu_percent(interval=2, percpu=True)
avg = sum(cpu_percent_cores)/len(cpu_percent_cores)
cpu_percent_total_str = ('%.2f' % avg) + '%'
cpu_percent_cores_str = [('%.2f' % x) + '%' for x in cpu_percent_cores]
print('Total: {}'.format(cpu_percent_total_str))
print('Individual CPUs: {}'.format(' '.join(cpu_percent_cores_str)))
This can be wrapped up into an incredibly ugly 1-line bash script like this if you like. I had to be sure to use only single quotes (''), NOT double quotes ("") in the Python program in order to make this wrapping into a bash 1-liner work:
printf "%b" \
"\
import psutil\n\
cpu_percent_cores = psutil.cpu_percent(interval=2, percpu=True)\n\
avg = sum(cpu_percent_cores)/len(cpu_percent_cores)\n\
cpu_percent_total_str = ('%.2f' % avg) + '%'\n\
cpu_percent_cores_str = [('%.2f' % x) + '%' for x in cpu_percent_cores]\n\
print('Total: {}'.format(cpu_percent_total_str))\n\
print('Individual CPUs: {}'.format(' '.join(cpu_percent_cores_str)))\n\
" | python3
Sample output: notice that I have 8 cores, so there are 8 numbers after "Individual CPUs:":
Total: 10.15%
Individual CPUs: 11.00% 8.50% 11.90% 8.50% 9.90% 7.60% 11.50% 12.30%
For more information on how the psutil.cpu_percent(interval=2) python call works, see the official psutil.cpu_percent(interval=None, percpu=False) documentation here:
psutil.cpu_percent(interval=None, percpu=False)
Return a float representing the current system-wide CPU utilization as a percentage. When interval is > 0.0 compares system CPU times elapsed before and after the interval (blocking). When interval is 0.0 or None compares system CPU times elapsed since last call or module import, returning immediately. That means the first time this is called it will return a meaningless 0.0 value which you are supposed to ignore. In this case it is recommended for accuracy that this function be called with at least 0.1 seconds between calls. When percpu is True returns a list of floats representing the utilization as a percentage for each CPU. First element of the list refers to first CPU, second element to second CPU and so on. The order of the list is consistent across calls.
Warning: the first time this function is called with interval = 0.0 or None it will return a meaningless 0.0 value which you are supposed to ignore.
Going further:
I use the above code in my cpu_logger.py script in my eRCaGuy_dotfiles repo.
References:
Stack Overflow: How to get current CPU and RAM usage in Python?
Stack Overflow: Executing multi-line statements in the one-line command-line?
How to display a float with two decimal places?
Finding the average of a list
Related
https://unix.stackexchange.com/questions/295599/how-to-show-processes-that-use-more-than-30-cpu/295608#295608
https://askubuntu.com/questions/22021/how-to-log-cpu-load
