Trying to query Azure Resource Graph Explorer for NSGs with missing rules - azure

The following query fails with 2 ParserFailure errors, both on line 5. At least, that's where the query builder shows the red squiggly line.
The intention of this query is probably obvious to Azure KQL initiates, but I'll explain nonetheless just to make sure it's clear.
This query should list all NSGs that are missing either of the rules named "AllowThis" or "AllowThat".
Resources
| where type == "microsoft.network/networksecuritygroups"
| where isnotempty(properties.securityRules)
| where not(properties.securityRules
| where (tolower(tostring(properties.securityRules.ruleName)) =~ "allowthis|allowthat"))
| project NSGName = name
| order by NSGName asc
It would be even nicer if the table showed the actual missing rule(s) for the listed NSGs, but I have no idea where to start with that.
Does anyone have a working version of this type of query? Having to go through a lot of NSGs manually can't be the answer.
I have tried multiple variations of the query, but I couldn't find a single working version.

Below are my findings and observations on the query posted in the question.
Lines 1 to 3 look good and will give you a list of NSG resources that have values in the "securityRules" field.
For line number 4
| where not(properties.securityRules)
I am not sure what you are trying to achieve in this step. not() takes a bool value, as mentioned in the documentation.
For line number 5
| where (tolower(tostring(properties.securityRules.ruleName)) =~ "allowthis|allowthat")
There is no need to use tolower() when you are using =~, as it performs a case-insensitive match. Also, under "securityRules" in the NSG JSON object there is no field named "ruleName"; there is, however, a field named "name". Please find the documentation for the same - Link. You can use the same documentation to check which fields are available when querying NSG resource data.
When you are trying to write a condition for "AllowThis" or "AllowThat" in Azure Resource Graph Explorer, you should use the syntax properties.securityRules.name == "allowthis" or properties.securityRules.name == "allowthat"
Anything you write within quotes is treated as a single string. Hence, in your query "allowthis|allowthat" is compared as one literal string, not as two alternatives.
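For what it's worth, here is an untested sketch of one way to get both the non-compliant NSGs and the rule(s) each one is missing: expand the securityRules array with mv-expand, collect the rule names per NSG, and diff them against the expected set. Rule names are lower-cased on both sides to keep the match case-insensitive; whether Resource Graph supports all of these operators in your tenant is something you'd have to verify.
Resources
| where type == "microsoft.network/networksecuritygroups"
| where isnotempty(properties.securityRules)
| mv-expand rule = properties.securityRules          // one row per security rule
| extend ruleName = tolower(tostring(rule.name))     // the field is "name", not "ruleName"
| summarize existingRules = make_set(ruleName) by NSGName = name
| extend missingRules = set_difference(dynamic(["allowthis", "allowthat"]), existingRules)
| where array_length(missingRules) > 0               // keep only NSGs missing at least one rule
| project NSGName, missingRules
| order by NSGName asc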

Related

Display only specific resources by type with Kusto in Resource Graph Explorer

I have an issue with showing specific resources with an Azure Kusto query.
What I want is to write a Kusto query that shows only database resources and server resources in Azure.
I have written the following query regarding databases:
resources
| where type in ("microsoft.sql/servers/databases","microsoft.dbforpostgresql/servers","microsoft.azuredata/postgresinstances","microsoft.dbformariadb/servers","microsoft.dbformysql/flexibleservers","microsoft.dbformysql/servers","microsoft.dbforpostgresql/flexibleservers","microsoft.dbforpostgresql/servergroups","microsoft.kusto/clusters/databases","microsoft.sql/managedinstances/databases","microsoft.synapse/workspaces/sqldatabases","ravenhq.db/databases","microsoft.documentdb/databaseaccounts")
| summarize Amount=count() by type
But when I execute the query it shows me two databases even though I have only created one; the extra one is a "master" database, which should not be included because there is only one resource in the resource group.
I have also tried the following query:
resources
| where type contains "database" | distinct type
| summarize Amount=count() by type
But then the issue is that it doesn't include all the DBs that don't have the word "database" in the type name, for example "microsoft.azuredata/postgresinstances".
So the question is: how do I write a query that shows ALL the databases on my dashboard?
The second part of the question, which is similar to the database part, is how I show all the servers.
I have tried the following queries:
resources
| where split(type,"/")[array_length(split(type,"/"))] contains "servers"
It gave me no result even though I had a server. (Presumably because the last element of an array is at index array_length(...) - 1, the expression above always indexes past the end.)
Then I tried:
resources
| where type contains "/server" | distinct type
| summarize Amount=count() by type
That didn't work because it also returned all the database resources containing the word "server".
I have tried to look through Microsoft's documentation, but cannot figure out what to do.
If you don't want the master databases (which are the databases that store system-level data in SQL databases), you can simply filter them out:
resources
| where type in ("microsoft.sql/servers/databases","microsoft.dbforpostgresql/servers","microsoft.azuredata/postgresinstances","microsoft.dbformariadb/servers","microsoft.dbformysql/flexibleservers","microsoft.dbformysql/servers","microsoft.dbforpostgresql/flexibleservers","microsoft.dbforpostgresql/servergroups","microsoft.kusto/clusters/databases","microsoft.sql/managedinstances/databases","microsoft.synapse/workspaces/sqldatabases","ravenhq.db/databases","microsoft.documentdb/databaseaccounts")
| where type != "microsoft.sql/servers/databases" or name != "master"
| summarize Amount=count() by type
Regarding the 2nd question, this should work since the has operator will only match whole tokens (and a slash separates tokens):
resources | where type has "servers"
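One caveat (untested): type has "servers" will also match child resources such as "microsoft.sql/servers/databases", because "servers" is a whole token in that type string too. If you want the servers without their child databases, a sketch that additionally excludes the "databases" token:
resources
| where type has "servers" and type !has "databases"
| summarize Amount = count() by type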

Regular expression with if condition activity in Azure

I want to check if a file name contains a date pattern (dd-mmm-yyyy) in it using the If Condition activity in Azure Data Factory. For example: the file name I have is something like somestring_23-Apr-1984.csv, which has a date pattern in it.
I get the file name using the Get Metadata activity and pass it to the If Condition activity, where I want to check if the file name has the date pattern in it and, based on the result, perform different tasks. The only way I know to do this is by using regex to check if the pattern is present in the file name string, but Azure does not have a regex solution mentioned in the documentation.
Is there any other way to achieve my requirement in ADF? Your help is much appreciated.
Yes, there is no regex support in expressions. There is another way to do this, but it is very complex.
First, get the date string (23-Apr-1984) from the output of Get Metadata.
Then, split the date string and determine whether each part matches the date pattern.
Below is my test pipeline:
First Set variable:
name: fileName
value: @split(split(activity('MyGetMetadataActivity').output.itemName,'_')[1],'.csv')[0]
Second Set variable:
name: fileArray
value: @split(variables('fileName'),'-')
If Condition:
Expression: @and(contains(variables('DateArray'),variables('fileArray')[0]),contains(variables('MonthArray'),variables('fileArray')[1]))
By the way, I originally wanted to compare the date with 0 and 30, but greaterOrEquals() doesn't support nested properties, so I used contains().
Hope this can help you.
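The answer doesn't show how the DateArray and MonthArray variables are populated; presumably they are further Set variable activities holding the valid day and month strings. A hypothetical sketch, matching the dd-mmm-yyyy pattern from the question:
Third Set variable:
name: DateArray
value: @createArray('01','02','03','04','05','06','07','08','09','10','11','12','13','14','15','16','17','18','19','20','21','22','23','24','25','26','27','28','29','30','31')
Fourth Set variable:
name: MonthArray
value: @createArray('Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec')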

Understanding Kusto

I am trying to understand Kusto (Log Analytics Query Language in Azure).
According to the documentation:
To retrieve (project) name and resultCode from the dependencies table, I need to enter the following:
dependencies
| project name, resultCode
The machines I have subscribed to do not have this table.
I am using the heartbeat table and trying to retrieve computer and category like so:
Heartbeat
| Category, Computer , IsGatewayInstalled
I however get the following error:
Query could not be parsed at 'Category' on line [2,2]
Token: Category Line: 2 Position: 2
This seems trivial, and I would appreciate any pointers on this.
The error you're getting is due to the fact that there's no valid operator after the pipe (|); you should use the project operator before specifying the column names you want to retrieve, as shown below.
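Applied to the query from the question:
Heartbeat
| project Category, Computer, IsGatewayInstalled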

How to visualize a count of all values in an array field in Kibana

I am having trouble creating a particular type of visualization in Kibana. My events in Kibana are statistics on communications between two IP addresses. Two of the fields are lists of ports used by the particular IP address. An example of the fields would be:
ip1 = 192.168.101.2
ip2 = 192.168.101.3
ip2Ports = 80,443
ip1Ports = 80,57000,0
I would like to have a top count of all the values such as
port count
80 2
57000 1
443 1
I have been able to parse ip2Ports into ip2Ports_List.column1, ip2Ports_List.column2, etc., but I can only choose one term with the terms aggregation in the visualization. I can split the chart, but that leads to separate counts for each field. If I go by the original ip2Ports field, it is just aggregated as the string, such as "80,443".
Is it even possible to create a top-count visualization of fields with multiple values? If so, how would I do it? If not, is there a way to restructure my data so I can? Thank you!
My issue stemmed from the format of the values being sent in by Logstash. I had thought that the 'ip2Ports_List.column1' format, which was a result of using the csv filter, was part of an array. It wasn't. After analyzing it, 'ip2Ports_List.column1' didn't seem to be much different from a new field.
Elastic needed an array to give me the visualization I wanted. I wasn't sure what the best way to produce it was, so I just ended up using the ruby filter. This is what the code ended up looking like:
ruby {
  code => "fields = event.get('portsIp').split(',')
           event.set('portsIpArray',fields)"
}
Where 'portsIp' looked something like "80,443". Splitting it turned 'portsIp' into a Ruby array. I just set that array as the value for a new event field, 'portsIpArray'.
From there, when I tried to visualize the 'portsIpArray' field, it looked exactly how I wanted it to, treating each port as a separate value while still associating each port with the same event/field.
Extra:
Also, something I discovered: if you're writing your code like I was, directly in the Logstash conf file, Logstash doesn't like it if you use double quotes within the double-quoted code string. In hindsight it makes sense, but it doesn't give a clear error, so it's difficult to figure out.
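One untested way around that is to single-quote the code string so the double quotes can live inside it:
ruby {
  # single-quoted code string; double quotes are now safe inside
  code => 'fields = event.get("portsIp").split(",")
           event.set("portsIpArray", fields)'
}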

Using indexed types for ElasticSearch in Titan

I currently have a VM running Titan over a local Cassandra backend and would like the ability to use ElasticSearch to index strings using CONTAINS matches and regular expressions. Here's what I have so far:
After titan.sh is run, a Groovy script is used to load in the data from separate vertex and edge files. The first stage of this script loads the graph from Titan and sets up the ES properties:
config.setProperty("storage.backend","cassandra")
config.setProperty("storage.hostname","127.0.0.1")
config.setProperty("storage.index.elastic.backend","elasticsearch")
config.setProperty("storage.index.elastic.directory","db/es")
config.setProperty("storage.index.elastic.client-only","false")
config.setProperty("storage.index.elastic.local-mode","true")
The second part of the script sets up the indexed types:
g.makeKey("property").dataType(String.class).indexed("elastic",Edge.class).make();
The third part loads in the data from the CSV files, this has been tested and works fine.
My problem is, I don't seem to be able to use the ElasticSearch functions when I do a Gremlin query. For example:
g.E.has("property",CONTAINS,"test")
returns 0 results, even though I know this field contains the string "test" for that property at least once. Weirder still, when I change CONTAINS to something that isn't recognised by ElasticSearch, I get a "no such property" error. I can also perform exact string matches and any numerical comparisons, including greater or less than; however, I expect the default indexing method is being used instead of ElasticSearch in these instances.
Due to the lack of errors when I try to run a more advanced ES query, I am at a loss on what is causing the problem here. Is there anything I may have missed?
Thanks,
Adam
I'm not quite sure what's going wrong in your code. From your description everything looks fine. Can you try the following script (just paste it into your Gremlin REPL):
config = new BaseConfiguration()
config.setProperty("storage.backend","inmemory")
config.setProperty("storage.index.elastic.backend","elasticsearch")
config.setProperty("storage.index.elastic.directory","/tmp/es-so")
config.setProperty("storage.index.elastic.client-only","false")
config.setProperty("storage.index.elastic.local-mode","true")
g = TitanFactory.open(config)
g.makeKey("name").dataType(String.class).make()
g.makeKey("property").dataType(String.class).indexed("elastic",Edge.class).make()
g.makeLabel("knows").make()
g.commit()
alice = g.addVertex(["name":"alice"])
bob = g.addVertex(["name":"bob"])
alice.addEdge("knows", bob, ["property":"foo test bar"])
g.commit()
// test queries
g.E.has("property",CONTAINS,"test")
g.query().has("property",CONTAINS,"test").edges()
The last 2 lines should return something like e[1t-4-1w][4-knows-8]. If that works and you still can't figure out what's wrong in your code, it would be good if you could share your full code (e.g. on GitHub or in a Gist).
Cheers,
Daniel
