I have a Step Functions execution history in JSON format:
[{
"timestamp": "2022-07-18T13:03:03.346000+00:00",
"type": "ExecutionFailed",
"id": 3,
"previousEventId": 2,
"executionFailedEventDetails": {
"error": "States.Runtime",
"cause": "An error occurred while executing the state 'Workflow Choice state' (entered at the event id #2). Invalid path '$.contributor_id': The choice state's condition path references an invalid value."
}
}, {
"timestamp": "2022-07-18T13:03:03.306000+00:00",
"type": "ChoiceStateEntered",
"id": 2,
"previousEventId": 0,
"stateEnteredEventDetails": {
"name": "Workflow Choice state",
"input": "{\n \"Comment\": \"Insert your JSON here\"\n}",
"inputDetails": {
"truncated": false
}
}
}, {
"timestamp": "2022-07-18T13:03:03.252000+00:00",
"type": "ExecutionStarted",
"id": 1,
"previousEventId": 0,
"executionStartedEventDetails": {
"input": "{\n \"Comment\": \"Insert your JSON here\"\n}",
"inputDetails": {
"truncated": false
},
"roleArn": "arn:aws:iam::asdfg:role/step-all"
}
}]
We want to create a view like the one below.
The issue is that I am not able to get executionFailedEventDetails, stateEnteredEventDetails, and executionStartedEventDetails each as a new row; they come out in the first row only.
The Step column is the name in stateEnteredEventDetails.
This is what I am doing:
import json
import pandas as pd
from tabulate import tabulate
raw = r"""[{
"timestamp": "2022-07-18T13:03:03.346000+00:00",
"type": "ExecutionFailed",
"id": 3,
"previousEventId": 2,
"executionFailedEventDetails": {
"error": "States.Runtime",
"cause": "An error occurred while executing the state 'Workflow Choice state' (entered at the event id #2). Invalid path '$.contributor_id': The choice state's condition path references an invalid value."
}
}, {
"timestamp": "2022-07-18T13:03:03.306000+00:00",
"type": "ChoiceStateEntered",
"id": 2,
"previousEventId": 0,
"stateEnteredEventDetails": {
"name": "Workflow Choice state",
"input": "{\n \"Comment\": \"Insert your JSON here\"\n}",
"inputDetails": {
"truncated": false
}
}
}, {
"timestamp": "2022-07-18T13:03:03.252000+00:00",
"type": "ExecutionStarted",
"id": 1,
"previousEventId": 0,
"executionStartedEventDetails": {
"input": "{\n \"Comment\": \"Insert your JSON here\"\n}",
"inputDetails": {
"truncated": false
},
"roleArn": "arn:aws:iam::asdfg:role/step-all"
}
}]"""
data = json.loads(raw, strict=False)
data = pd.json_normalize(data)
# print(data.to_csv(index=False))
print(tabulate(data, headers='keys', tablefmt='psql'))
data.to_csv('file.csv', encoding='utf-8', index=False)
and the output is
+----+----------------------------------+--------------------+------+-------------------+-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------+----------------------------------------+---------------------------------------------------+----------------------------------------+-------------------------------------------------------+----------------------------------------+
| | timestamp | type | id | previousEventId | executionFailedEventDetails.error | executionFailedEventDetails.cause | stateEnteredEventDetails.name | stateEnteredEventDetails.input | stateEnteredEventDetails.inputDetails.truncated | executionStartedEventDetails.input | executionStartedEventDetails.inputDetails.truncated | executionStartedEventDetails.roleArn |
|----+----------------------------------+--------------------+------+-------------------+-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------+----------------------------------------+---------------------------------------------------+----------------------------------------+-------------------------------------------------------+----------------------------------------|
| 0 | 2022-07-18T13:03:03.346000+00:00 | ExecutionFailed | 3 | 2 | States.Runtime | An error occurred while executing the state 'Workflow Choice state' (entered at the event id #2). Invalid path '$.contributor_id': The choice state's condition path references an invalid value. | nan | nan | nan | nan | nan | nan |
| 1 | 2022-07-18T13:03:03.306000+00:00 | ChoiceStateEntered | 2 | 0 | nan | nan | Workflow Choice state | { | 0 | nan | nan | nan |
| | | | | | | | | "Comment": "Insert your JSON here" | | | | |
| | | | | | | | | } | | | | |
| 2 | 2022-07-18T13:03:03.252000+00:00 | ExecutionStarted | 1 | 0 | nan | nan | nan | nan | nan | { | 0 | arn:aws:iam::asdfg:role/step-all |
| | | | | | | | | | | "Comment": "Insert your JSON here" | | |
| | | | | | | | | | | } | | |
+----+----------------------------------+--------------------+------+-------------------+-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------+----------------------------------------+---------------------------------------------------+----------------------------------------+-------------------------------------------------------+----------------------------------------+
The Details column (the 5th column) is dynamic, like all the columns; currently I have given an example of only 3 events, but it can go up to any number.
Final Expected Output
Given file.json:
[{
"timestamp": "2022-07-18T13:03:03.346000+00:00",
"type": "ExecutionFailed",
"id": 3,
"previousEventId": 2,
"executionFailedEventDetails": {
"error": "States.Runtime",
"cause": "An error occurred while executing the state 'Workflow Choice state' (entered at the event id #2). Invalid path '$.contributor_id': The choice state's condition path references an invalid value."
}
}, {
"timestamp": "2022-07-18T13:03:03.306000+00:00",
"type": "ChoiceStateEntered",
"id": 2,
"previousEventId": 0,
"stateEnteredEventDetails": {
"name": "Workflow Choice state",
"input": "{\n \"Comment\": \"Insert your JSON here\"\n}",
"inputDetails": {
"truncated": false
}
}
}, {
"timestamp": "2022-07-18T13:03:03.252000+00:00",
"type": "ExecutionStarted",
"id": 1,
"previousEventId": 0,
"executionStartedEventDetails": {
"input": "{\n \"Comment\": \"Insert your JSON here\"\n}",
"inputDetails": {
"truncated": false
},
"roleArn": "arn:aws:iam::asdfg:role/step-all"
}
}]
Doing
import pandas as pd
df = pd.read_json('file.json')
df = df.melt(['timestamp', 'type', 'id', 'previousEventId'], var_name='step', value_name='details').dropna()
print(df.to_markdown(index=False))
Output (Markdown was just the easiest way for me to show all the columns):
| timestamp | type | id | previousEventId | step | details |
|:---------------------------------|:-------------------|-----:|------------------:|:-----------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2022-07-18 13:03:03.346000+00:00 | ExecutionFailed | 3 | 2 | executionFailedEventDetails | {'error': 'States.Runtime', 'cause': "An error occurred while executing the state 'Workflow Choice state' (entered at the event id #2). Invalid path '$.contributor_id': The choice state's condition path references an invalid value."} |
| 2022-07-18 13:03:03.306000+00:00 | ChoiceStateEntered | 2 | 0 | stateEnteredEventDetails | {'name': 'Workflow Choice state', 'input': '{\n "Comment": "Insert your JSON here"\n}', 'inputDetails': {'truncated': False}} |
| 2022-07-18 13:03:03.252000+00:00 | ExecutionStarted | 1 | 0 | executionStartedEventDetails | {'input': '{\n "Comment": "Insert your JSON here"\n}', 'inputDetails': {'truncated': False}, 'roleArn': 'arn:aws:iam::asdfg:role/step-all'} |
I am not getting the timeseries object in the API call below for virtual machine memory usage.
I tried this:
Method: GET
URL: https://management.azure.com/subscriptions/XXXXXXXXXXXXXXXXXXXX/resourceGroups/XXXXXXXXXXXX/providers/Microsoft.Compute/virtualMachines/XXXXXXX/providers/microsoft.insights/metrics?timespan=2019-03-31T11:30:00.000Z/2020-09-14T11:00:00.000Z&interval=P1D&metricnames=\Memory\% Committed Bytes In Use&aggregation=Average&api-version=2018-01-01&metricnamespace=azure.vm.windows.guestmetrics
Authentication: Bearer token
**Response:**
{
"cost": 0,
"timespan": "2020-08-14T11:00:00Z/2020-09-14T11:00:00Z",
"interval": "P1D",
"value": [
{
"id": "/subscriptions/xxxxxxxxxxxxxxxxxx/resourceGroups/xxxxxxxxxxxxx/providers/Microsoft.Compute/virtualMachines/xxxxxxx/providers/Microsoft.Insights/metrics/\Memory\% Committed Bytes In Use",
"type": "Microsoft.Insights/metrics",
"name": {
"value": "\Memory\% Committed Bytes In Use",
"localizedValue": "\Memory\% Committed Bytes In Use"
},
"unit": "Unspecified",
"timeseries": [],
"errorCode": "Success"
}
],
"namespace": "azure.vm.windows.guestmetrics",
"resourceregion": "westus2"
}
Try this as a query against the Log Analytics Resource.
Reference: https://learn.microsoft.com/en-us/rest/api/loganalytics/dataaccess/query/get
let usedMemory = Perf
| where ObjectName == 'Memory' and CounterName contains 'Committed Bytes'
| summarize UsedMemory = avg(CounterValue) by Computer;
let AvailMemory = InsightsMetrics
| extend localTimestamp = TimeGenerated - 7h
| where TimeGenerated > ago(1d)
| where Namespace == 'Memory' and Name == 'AvailableMB'
| extend AvailableMem = Val
| summarize arg_max(TimeGenerated, *) by Computer;
AvailMemory
| join kind=leftouter usedMemory on Computer
| extend FreeMemoryGB = round(AvailableMem/1024)
| parse Tags with * ':' TotalMemoryMB '}' Err
| project Computer, FreeMemoryGB, UsedMemory, TotalMemoryMB, localTimestamp, Namespace, Tags, AgentId, _ResourceId
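For completeness, a minimal sketch of running a query like the one above through the Log Analytics query endpoint from Node (the endpoint shape is from the reference above; the workspace ID, bearer token, and the shortened KQL string are placeholders):

```javascript
// Sketch: calling the Log Analytics Data API
// (GET https://api.loganalytics.io/v1/workspaces/{workspaceId}/query).
// The workspace ID and bearer token are placeholders you must replace.
const workspaceId = "00000000-0000-0000-0000-000000000000"; // placeholder
const kql =
  "InsightsMetrics | where Namespace == 'Memory' and Name == 'AvailableMB' | take 10";

// The query goes in the `query` query-string parameter, URL-encoded.
const url =
  "https://api.loganalytics.io/v1/workspaces/" +
  workspaceId +
  "/query?query=" +
  encodeURIComponent(kql);

// fetch(url, { headers: { Authorization: "Bearer <token>" } })
//   .then((r) => r.json())
//   .then((d) => console.log(d.tables[0].rows)); // rows of the result table
console.log(url);
```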
I have a stored procedure which returns multiple result sets, but when I retrieve it via the npm mssql package I only get the first result.
In my T-SQL script:
CREATE PROCEDURE usp_myStoreProcedure @param1 varchar(3), @param2 varchar(3)
AS
BEGIN
    select * from firstTable where name = @param1;
    select * from secondTable where name = @param2;
END
When I run this:
result1:
| Name | Subject | Mark|
|----------------------|
| Alice| Maths | 96 |
result2:
| Name | Subject | Mark|
|----------------------|
| Bob | Science | 93 |
In my Node.js code, using the npm mssql package:
let conn = await mssql.connect(config);
let output= await conn
.request()
.input("param1", mssql.VarChar(10), "Alice")
.input("param2", mssql.VarChar(10), "Bob")
.execute("usp_myStoreProcedure");
mssql.close();
console.log(output);
current result:
{
"recordsets":
[
[
{
"Name": "Alice",
"Subject": "Maths",
"Mark": 96
}
],
[]
],
"recordset":
[
{
"Name": "Alice",
"Subject": "Maths",
"Mark": 96
}
],
"output": {},
"rowsAffected": [1,0],
"returnValue": 0
}
result2 below is missing from the output:
| Name | Subject | Mark|
|----------------------|
| Bob | Science | 93 |
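For what it's worth, the mssql package does return every result set: result.recordsets is an array with one entry per SELECT, and result.recordset is just an alias for the first entry. A minimal sketch using a mock object shaped like the output above (assuming both SELECTs match a row):

```javascript
// Mock of the object conn.request().execute() resolves to; the real call
// needs a live SQL Server connection, so only the shape is reproduced here.
const output = {
  recordsets: [
    [{ Name: "Alice", Subject: "Maths", Mark: 96 }],  // first SELECT
    [{ Name: "Bob", Subject: "Science", Mark: 93 }],  // second SELECT
  ],
  rowsAffected: [1, 1],
  returnValue: 0,
};

// Each SELECT in the procedure lands in its own entry of `recordsets`.
const [result1, result2] = output.recordsets;
console.log(result1[0].Name); // "Alice"
console.log(result2[0].Name); // "Bob"
```

Note that an empty recordsets[1] with rowsAffected [1, 0], as in the output above, means the second SELECT simply matched no rows rather than being dropped by the driver; it is worth checking the parameter values (the varchar(3) declarations in the procedure will truncate longer inputs).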
I have an attribute with a null value, and some rows have a boolean value. Now I want to write a filter expression to select the rows where the attribute has a null value. I am using the DynamoDB document client.
If you look at the docs, for a scan you can do:
ScanFilter: {
'<AttributeName>': {
ComparisonOperator: EQ | NE | IN | LE | LT | GE | GT | BETWEEN | NOT_NULL | NULL | CONTAINS | NOT_CONTAINS | BEGINS_WITH, /* required */
AttributeValueList: [
someValue /* "str" | 10 | true | false | null | [1, "a"] | {a: "b"} */,
/* more items */
]
},
/* '<AttributeName>': ... */
}
If you are using query:
QueryFilter: {
'<AttributeName>': {
ComparisonOperator: EQ | NE | IN | LE | LT | GE | GT | BETWEEN | NOT_NULL | NULL | CONTAINS | NOT_CONTAINS | BEGINS_WITH, /* required */
AttributeValueList: [
someValue /* "str" | 10 | true | false | null | [1, "a"] | {a: "b"} */,
/* more items */
]
},
/* '<AttributeName>': ... */
},
They both support NULL and NOT_NULL comparison operators (note that in this legacy API, NULL tests that the attribute does not exist and NOT_NULL that it exists, regardless of the stored value type).
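With the document client's non-legacy expression API, an attribute stored as the DynamoDB NULL type can also be matched directly with the attribute_type function in a FilterExpression; a minimal sketch (the table and attribute names are hypothetical):

```javascript
// Sketch of scan parameters using attribute_type to match attributes
// stored as the DynamoDB NULL type. "myTable" and "myAttribute" are
// hypothetical names; substitute your own.
const params = {
  TableName: "myTable",
  FilterExpression: "attribute_type(#attr, :nullType)",
  ExpressionAttributeNames: { "#attr": "myAttribute" },
  ExpressionAttributeValues: { ":nullType": "NULL" },
};

// With the AWS SDK v2 document client this would run as:
// docClient.scan(params).promise().then((data) => console.log(data.Items));
console.log(params.FilterExpression);
```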
My search query is:
query: {
match: {
name: "le sul"
}
},
I expect to see the output as:
.-------------------------------------------------.
| ID | Name | Score |
|-------|-----------------------------|-----------|
| 9 | le sultan | ... |
| 467 | le sultan | ... |
| 23742 | LE DUONG | 1.1602057 |
| 11767 | LE VICTORIA | 0.9554229 |
| 11758 | LE CANONNIER | 0.9554229 |
| 23762 | PHA LE XANH | 0.9281646 |
| 15795 | LE SURCOUF HOTEL & SPA | 0.9281646 |
| 33066 | LE CORAL HIDEAWAY BEYOND | 0.8695703 |
| 11761 | LE MERIDIEN MAURITIUS | 0.8682439 |
| 11871 | LE RELAX HOTEL & RESTAURANT | 0.8682439 |
'-------------------------------------------------'
But what I see is:
.-------------------------------------------------.
| ID | Name | Score |
|-------|-----------------------------|-----------|
| 23742 | LE DUONG | 1.1602057 |
| 9 | le sultan | 1.0869629 | <----
| 11767 | LE VICTORIA | 0.9554229 |
| 11758 | LE CANONNIER | 0.9554229 |
| 467 | le sultan | 0.9554229 | <----
| 23762 | PHA LE XANH | 0.9281646 |
| 15795 | LE SURCOUF HOTEL & SPA | 0.9281646 |
| 33066 | LE CORAL HIDEAWAY BEYOND | 0.8695703 |
| 11761 | LE MERIDIEN MAURITIUS | 0.8682439 |
| 11871 | LE RELAX HOTEL & RESTAURANT | 0.8682439 |
'-------------------------------------------------'
As you can see, "le sultan" is not the first element of the result set.
Where am I going wrong?
Your query results are not in the order you expect because Elasticsearch orders results by _score.
In your case you want to search against an analyzed field but sort on a not-analyzed copy of it.
So you should define your mapping as given below:
PUT your_index_name
{
"mappings": {
"your_type_name": {
"properties": {
"name": {
"type": "string",
"analyzer": "english",
"fields": {
"your_temporary_sort_filed_name": {
"type": "string",
"index": "not_analyzed"
}
}
}
}
}
}
}
And then
GET /your_index_name/your_type_name/_search
{
"sort": [
{
"name.your_temporary_sort_filed_name":{
"order": "desc"
}
}
],
"query": {
"match": {
"name": "le sul"
}
}
}
If you want to get "le sultan" as the top result, use the following query:
{
"query": {
"query_string": {
"default_field": "name",
"query": "le sul*"
}
}
}