I am having trouble parsing JSON in LiveCode

I'm trying to create a simple mobile app that queries an API and parses the response to display certain values.
The mobile app has two controls:
A button to query the API
A large text box to display the contents
In my LiveCode stack, I have the following inclusions:
JSON Library
mergJSON
tsNet
The API response is as follows:
{
"data": [
{
"id": 1,
"date_created": "2021-11-08T17:12:03Z",
"date_updated": "2021-11-22T16:08:55Z",
"first_name": "John",
"last_name": "Doe",
"email": "john.doe#unknown.com",
"phone": "9876543210",
"dob": "1980-01-01",
"password": "xxxxxxxxx",
"plan_start": "2021-11-22T16:07:46Z",
"plan_expiry": "2021-12-21T16:06:25Z"
}
]
}
I want to parse the JSON to display the email field value in the textbox.
In my livecode stack:
The button is named "getdata"
The text box is named "flddata"
In the button script, I've added the following code:
put "<api url endpoint>" into tUrl
put "Authorization: Bearer xxxxxxxxx" into tHeaders
put tsNetGetSync(tUrl, tHeaders, tRecvHeaders, tResult, tBytes) into tData
put JSONToArray(tData) into tDataArray
put tDataArray["email"] into field "flddata"
But this doesn't work. Nothing happens. For the life of me, I can't figure out what's wrong. Any help would be appreciated. Thanks a ton!

To access the "email" key of the array that is built from the JSON you shared. You must first access the "data" key and then key 1. So the last line of your code would be as follows:
put tDataArray ["data"] [1] ["email"] into field "flddata"
Tips:
Put a breakpoint on that line. This will let you inspect the contents of the variables and see the structure of the array.

It looks like it might be a multidimensional array. Here's a simple way to get a look at how it's structured:
Drag a Tree View widget onto your card.
Set the arrayData property of the widget to your array tDataArray. Like this:
set the arrayData of widget "Tree View" to tDataArray
You should see the structure of the array in your Tree View widget. Depending on how the JSON was converted, the record may sit at a different level of nesting; if, for example, it ends up at the top level, the last line would instead be:
put tDataArray[1]["email"] into field "flddata"
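For reference, here is a minimal sketch of the whole button script with that fix applied (a sketch only; the endpoint and bearer token are the placeholders from the question):

on mouseUp
   local tUrl, tHeaders, tRecvHeaders, tResult, tBytes, tData, tDataArray
   put "<api url endpoint>" into tUrl
   put "Authorization: Bearer xxxxxxxxx" into tHeaders
   -- fetch the response body; tsNet fills tRecvHeaders, tResult and tBytes by reference
   put tsNetGetSync(tUrl, tHeaders, tRecvHeaders, tResult, tBytes) into tData
   -- convert the JSON text into a nested LiveCode array
   put JSONToArray(tData) into tDataArray
   -- drill down: top-level "data" key, then element 1, then "email"
   put tDataArray["data"][1]["email"] into field "flddata"
end mouseUp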

Related

Rest API - GET request help - Need help in constructing a URI with parameters

I am new to constructing REST API GET queries.
I need help with a REST API GET request that passes fields via sysparm_fields.
I am trying to query a table via the REST API by passing only a limited number of the table's columns using sysparm_fields.
However, a few of the column names contain spaces and parentheses, so the JSON result set excludes those columns and their data when I do a GET request.
So if I query with the column names "assigned_to (need the empid)" and "Number", I only get "Number" data.
If I don't pass sysparm_fields, it returns the result set with all the columns, including "assigned_to (need the empid)".
My endpoint looks like this:
https://MyInstance.service-now.com/api/now/table/ticket?sysparm_exclude_reference_link=true&sysparm_fields=number,assigned_to (need the empid)&sysparm_query=sys_updated_onBETWEENjavascript:gs.dateGenerate('2020-09-01','00:00:00')@javascript:gs.dateGenerate('2020-09-01','23:59:59')
The result is only
{
"result": [
{
"number": "TK00001"
}
]
}
If my URI is
https://MyInstance.service-now.com/api/now/table/ticket?sysparm_exclude_reference_link=true&sysparm_query=sys_updated_onBETWEENjavascript:gs.dateGenerate('2020-09-01','00:00:00')@javascript:gs.dateGenerate('2020-09-01','23:59:59')
Then I get the result
{
"result": [
{
"number": "TK00001",
"assigned_to (need the empid)": "MYQ001",
"Other_Field1": "Other Value 1",
:
"Other_FieldN": "Other Value N"
}
]
}
So how do I pass column names that contain spaces and parentheses, like "assigned_to (need the empid)"?
Thanks for the help in advance!
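One thing worth trying is to let the HTTP client percent-encode the parameter values so that the spaces and parentheses survive URL construction. Here is a hedged sketch using Python's requests library; the instance URL and credentials are placeholders. Note that sysparm_fields generally expects the underlying column names rather than display labels, so it is also worth confirming the field's real name in your instance.

import requests

# Placeholder instance URL and credentials; not taken from the original question.
base_url = "https://MyInstance.service-now.com/api/now/table/ticket"
params = {
    "sysparm_exclude_reference_link": "true",
    # requests percent-encodes the space and parentheses in this value
    "sysparm_fields": "number,assigned_to (need the empid)",
    "sysparm_query": (
        "sys_updated_onBETWEEN"
        "javascript:gs.dateGenerate('2020-09-01','00:00:00')"
        "@javascript:gs.dateGenerate('2020-09-01','23:59:59')"
    ),
}

response = requests.get(base_url, params=params, auth=("user", "password"))
print(response.json())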

How to Create Azure Resource Graph Explorer Scheduled Reports and Email Alerts

I have a Kusto query taken from this example that looks like this:
Resources
| where type =~ 'microsoft.compute/virtualmachines'
| extend vmPowerState = tostring(properties.extended.instanceView.powerState.code)
| summarize count() by vmPowerState
I would like to create a weekly alert that sends the result by e-mail as a CSV attachment.
The Logic App is organized in five steps:
One: a Recurrence trigger that fires weekly.
Two: an HTTP action with:
URL: https://management.azure.com/providers/Microsoft.ResourceGraph/resources
Body:
{
"query": "Resources | where type =~ 'microsoft.compute/virtualmachines' | extend vmPowerState = tostring(properties.extended.instanceView.powerState.code) | summarize count() by vmPowerState"
}
Three:
Where I parse the Body; here is an extract of the JSON used to generate the schema:
{
"count": 3,
"data": [
{
"count_": 3,
"vmPowerState": "PowerState/stopped"
},
{
"count_": 29,
"vmPowerState": "PowerState/deallocated"
},
{
"count_": 118,
"vmPowerState": "PowerState/running"
}
],
"skip_token": null,
"total_records": 3
}
Here I have a few doubts, because I found a guide saying that I should use an array variable instead. I'm not sure about that, because I cannot see the details in the example. Anyway, this is what I do:
Four: a Create CSV table action.
Five:
Where I create the attachment from the CSV.
The e-mail arrives in the end, but the attachment is not a CSV; it's a JSON file.
What the heck am I doing wrong?
If you want to use "Create CSV table" with Columns set to "Automatic", do pass it the "body" of "Parse JSON".
You don't need to use the array variable, but whatever you pass in needs to return an array.
The body of the JSON parser in your example has many other JSON nodes enveloping that array; you should see the option "data", as there is an array in there called "data". If you want to cut it short, try "data".
You can also change Columns to "Custom", which lets you remove redundant data or reformat values (like the "PowerState" prefix in "PowerState/stopped").
You can also add .csv to the attachment file name.
The above worked for me, but it can be enhanced.
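For intuition, the transformation that "Create CSV table" performs on the "data" array is roughly equivalent to this Python sketch (illustration only, not part of the Logic App):

import csv
import io

# Rows shaped like the "data" array from the Resource Graph response
data = [
    {"count_": 3, "vmPowerState": "PowerState/stopped"},
    {"count_": 29, "vmPowerState": "PowerState/deallocated"},
    {"count_": 118, "vmPowerState": "PowerState/running"},
]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=list(data[0].keys()))
writer.writeheader()      # "Automatic" columns: the keys become the header row
writer.writerows(data)
print(buffer.getvalue())  # this text is what ends up attached as the .csv file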
The suggestion posted by @BrunoLucasAzure really helped me understand how Logic Apps works.
However, I would like to reply to my own question with the right solution: I had to paste a sample of the JSON output by pressing the Use sample payload to generate schema button.
Then follow the workflow and everything will be fine.
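For illustration, the schema that the generator produces from the payload above looks along these lines (abridged sketch; the real output also includes "required" lists):

{
  "type": "object",
  "properties": {
    "count": { "type": "integer" },
    "data": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "count_": { "type": "integer" },
          "vmPowerState": { "type": "string" }
        }
      }
    },
    "skip_token": {},
    "total_records": { "type": "integer" }
  }
}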
The next problem I need to fix is pagination but apparently there is a solution for that too: https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/logic-app-http-pagination-deeper-look-build-custom-paging/ba-p/2907605

Only one custom field gets populated out of many

I have a document that I'd like to pre-populate. In the document the person's name is repeated multiple times, so I've set up text fields, given them all the same label, e.g. "CandidateName", and set them all to not required and read only.
The reason all of them are set to read only is that I populate them programmatically when I call DocuSign via the API (TextTabs). As a result, only the final field is populated; all the previous ones are blank.
Cheers
If your template contains multiple fields that have the same tabLabel and you want to populate all of those fields with the same value by using the API, you need to prefix the tabLabel value with \\*.
For example, here's the JSON for the tabs portion of a CreateEnvelope request that would populate every field which has the label CandidateName with the value John Smith.
"tabs":
{
"textTabs": [
{
"tabLabel": "\\*CandidateName",
"value": "John Smith"
}
]
}
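For reference, here is a hedged sketch of the same tabs payload built with the DocuSign Python SDK (docusign_esign); the template id, role name, and recipient are placeholders, not values from the question:

from docusign_esign import EnvelopeDefinition, Tabs, TemplateRole, Text

# The JSON "\\*" is a literal backslash followed by an asterisk; in a normal
# Python string that is also written as "\\*".
tabs = Tabs(text_tabs=[
    Text(tab_label="\\*CandidateName", value="John Smith"),
])

envelope = EnvelopeDefinition(
    template_id="<template-id>",            # placeholder
    template_roles=[TemplateRole(
        email="candidate@example.com",      # placeholder recipient
        name="John Smith",
        role_name="Candidate",              # placeholder role name
        tabs=tabs,
    )],
    status="sent",
)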

Azure Stream Processing upsert to DocumentDB with array

I'm using Azure Stream Analytics to copy my JSON over to DocumentDB, using upsert to overwrite the document with the latest data. This is great for my base data, but I would love to be able to append to the list data, as unfortunately I can only send one list item at a time.
In the example below, the document is matched on id, and all items are updated, but I would like the "myList" array to keep growing with the "myList" data from each document (with the same id). Is this possible? Is there any other way to use Stream Analytics to update this list in the document?
I'd rather steer clear of using a tumbling window if possible, but is that an option that would work?
Sample documents:
{
"id": "1234",
"otherData": "example",
"myList": [{"listitem": 1}]
}
{
"id": "1234",
"otherData": "example 2",
"myList": [{"listitem": 2}]
}
Desired output:
{
"id": "1234",
"otherData": "example 2",
"myList": [{"listitem": 1}, {"listitem": 2}]
}
My current query:
SELECT id, otherData, myList INTO [myoutput] FROM [myinput]
Currently, arrays are not merged; this is the existing behavior of the DocumentDB output from ASA, as also mentioned in this article. I doubt using a tumbling window would help here.
Note that changes in the values of array properties in your JSON document result in the entire array getting overwritten, i.e. the array is not merged.
You could unpack the input array (myList) into one row per element using the GetArrayElements function.
Your query might look something like this:
SELECT i.id, i.otherData, listItemFromArray.ArrayValue AS listItem
INTO myoutput
FROM myinput i
CROSS APPLY GetArrayElements(i.myList) AS listItemFromArray
cheers!
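If the merge really has to happen per document, one alternative, outside of Stream Analytics entirely, is a small consumer that does the read-merge-upsert itself. A hedged sketch with the azure-cosmos Python SDK follows; the endpoint, key, and database/container names are placeholders, and a production version would use ETags to guard against concurrent writers:

from azure.cosmos import CosmosClient, exceptions

# Placeholder endpoint, key, and database/container names.
client = CosmosClient("https://<account>.documents.azure.com", credential="<key>")
container = client.get_database_client("<db>").get_container_client("<container>")

def merge_upsert(incoming: dict) -> None:
    """Upsert that appends to myList instead of overwriting it."""
    try:
        existing = container.read_item(item=incoming["id"],
                                       partition_key=incoming["id"])
        # keep the old list items and append the new ones
        incoming["myList"] = existing.get("myList", []) + incoming["myList"]
    except exceptions.CosmosResourceNotFoundError:
        pass  # first document with this id: plain insert
    container.upsert_item(incoming)

merge_upsert({"id": "1234", "otherData": "example 2",
              "myList": [{"listitem": 2}]})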

How to update embedded document?

How do I update the text of the second comment to "new content"?
{
name: 'Me',
comments: [{
"author": "Joe S.",
"text": "I'm Thirsty"
},
{
"author": "Adder K.",
"text": "old content"
}]
}
Updating the embedded array basically involves two steps:
1.
You create a modified version of the whole array. There are multiple operations that you can use to modify an array, and they are listed here: http://www.rethinkdb.com/api/#js:document_manipulation-insert_at
In your example, if you know that the document that you want to update is the second element of the array, you would write something like
oldArray.changeAt(1, oldArray.nth(1).merge({text: "new content"}))
to generate the new array. Here, 1 is the index of the second element, since indexes start at 0. If you do not know the index, you can use the indexesOf function to search for a specific entry in the array (see the sketch after the full query below).
Multiple things are happening here: changeAt replaces an element of the array. The element at index 1 is replaced by the result of oldArray.nth(1).merge({text: "new content"}). In that expression, we first pick the element that we want to base the new element on, using oldArray.nth(1). This gives us the JSON object
{
"author": "Adder K.",
"text": "old content"
}
By using merge, we can replace the text field of this object by the new value.
2.
Now that we can construct the new object, we still have to actually store it in the original row. For this, we use update and just set the "comments" field to the new array. We can access the value of the old array in the row through the ReQL r.row variable. Overall, the query will look as follows:
r.table(...).get(...).update({
comments: r.row('comments').changeAt(1,
r.row('comments').nth(1).merge({text: "new content"}))
}).run(conn, callback)
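If the index of the comment is not known up front, a hedged variant first locates it with indexesOf (the table name and docId are placeholders; the author string comes from the example above):

r.table('posts').get(docId).update(function(post) {
  // indexesOf returns the matching positions; take the first one
  return post('comments').indexesOf(function(c) {
    return c('author').eq('Adder K.');
  }).nth(0).do(function(idx) {
    return {
      comments: post('comments').changeAt(
        idx,
        post('comments').nth(idx).merge({text: 'new content'}))
    };
  });
}).run(conn, callback)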
Daniel's solution is correct. However, there are several open issues on GitHub for planned enhancements, including:
generic object and array modification (https://github.com/rethinkdb/rethinkdb/issues/895)
being able to specify optional arguments for merge and update (https://github.com/rethinkdb/rethinkdb/issues/872)
being able to specify a conflict resolution function for merge (https://github.com/rethinkdb/rethinkdb/issues/873)
...among other related issues. Until those are introduced into ReQL (particularly #895), Daniel's approach is the correct one.
