I have a response from a REST API with ~40k lines, and I need to verify that its values match those in an existing local file. The response contains one dynamic element whose value is based on the date and time the response was received.
I need to compare the response with the local JSON file and make sure the values match, while ignoring that specific date-and-time element.
This is an example snippet of the response/JSON file:
[
{
"code" : "SJM",
"valuesCount" : [
{
"code" : "SJM",
"description" : "LAST_NAME FIRST_NAME-SJM (LMM/null)",
"period" : "November 2020",
"printedOn" : "31/01/2022 09:39:39",
"name" : "null",
"logo" : { },
"standardId" : true,
"dailyValues" : [
{
"index" : "odd",
"day" : "01",
"description" : "",
"time" : "23:59",
"cms" : "",
"hcv" : "",
"nm" : "",
"hcp" : "",
"hcz" : "",
"synDc" : "",
"jiJnm" : "",
"c1" : "",
"c2" : "",
"proHvs" : "",
"dcd" : "",
"dcs" : "",
"d1" : "",
"d2" : "",
"nbdays" : ""
},
]
}
}
]
]
The file contains hundreds of code entries, each with its own valuesCount maps, in which the printedOn element appears; that's what I'm trying to ignore.
I've been trying to do this in Groovy, and the only thing I've managed to find in the Groovy docs is the removeAll() method. I thought I could perhaps remove that key/value pair entirely, but I don't think I'm using it right.
import groovy.json.*
def groovyUtils = new com.eviware.soapui.support.GroovyUtils(context)
def projectDir = groovyUtils.projectPath
def response = new JsonSlurper().parseText(context.expand( '${Request#Response}' ))
File jsonFile = new File(projectDir, "/Project/Resources/jsonFile.json")
def defaultValues = new JsonSlurper().parse(jsonFile)
var1 = response.removeAll{"printedOn"}
var2 = defaultValues.removeAll{"printedOn"}
if (var1 == var2) {
log.info "The request matches the default values."
} else {
throw new Exception("The request does not match the default values.");
}
This just returns true in both cases.
Could anyone point me in the right direction?
Much appreciated.
You need to traverse into the hierarchy of lists/maps to alter the inner maps and remove the printedOn key/value pair. (As written, your removeAll closure always returns the truthy string "printedOn", so it empties both lists and returns true both times, which is why your comparison always passes.)
Also, your JSON is broken, with missing/extra brackets.
The following code:
import groovy.json.*
def data1 = '''\
[
{
"code": "SJM",
"valuesCount": [
{
"code": "SJM",
"description": "LAST_NAME FIRST_NAME-SJM (LMM/null)",
"period": "November 2020",
"printedOn": "31/01/2022 09:39:39",
"name": "null",
"logo": {},
"standardId": true,
"dailyValues": [
{
"index": "odd",
"day": "01",
"description": "",
"time": "23:59",
"cms": "",
"hcv": "",
"nm": "",
"hcp": "",
"hcz": "",
"synDc": "",
"jiJnm": "",
"c1": "",
"c2": "",
"proHvs": "",
"dcd": "",
"dcs": "",
"d1": "",
"d2": "",
"nbdays": ""
}
]
}
]
}
]'''
def data2 = '''\
[
{
"code": "SJM",
"valuesCount": [
{
"code": "SJM",
"description": "LAST_NAME FIRST_NAME-SJM (LMM/null)",
"period": "November 2020",
"printedOn": "25/11/2021 09:39:39",
"name": "null",
"logo": {},
"standardId": true,
"dailyValues": [
{
"index": "odd",
"day": "01",
"description": "",
"time": "23:59",
"cms": "",
"hcv": "",
"nm": "",
"hcp": "",
"hcz": "",
"synDc": "",
"jiJnm": "",
"c1": "",
"c2": "",
"proHvs": "",
"dcd": "",
"dcs": "",
"d1": "",
"d2": "",
"nbdays": ""
}
]
}
]
}
]'''
def parser = new JsonSlurper()
def json1 = parser.parseText(data1)
def json2 = parser.parseText(data2)
def clean1 = removeNoise(json1)
def clean2 = removeNoise(json2)
def out1 = JsonOutput.toJson(clean1)
def out2 = JsonOutput.toJson(clean2)
assert out1 == out2
if(out1 == out2) println "the two documents are identical"
else println "the two documents differ"
def removeNoise(json) {
    json.collect { codeMap ->
        codeMap.valuesCount = codeMap.valuesCount.collect { valuesMap ->
            valuesMap.remove('printedOn')   // drop the dynamic timestamp before comparing
            valuesMap
        }
        codeMap
    }
}
will produce the following output when executed:
─➤ groovy solution.groovy
the two documents are identical
─➤
The JSON in the above code has been corrected to be parseable. I manually changed the date strings so that data1 and data2 differ on the printedOn property.
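If printedOn could also appear at other nesting depths, a recursive variant would handle that too. A minimal sketch (deepRemove is my own name, not part of the Groovy API; it mutates the parsed structure in place):
def deepRemove(obj, String key) {
    if (obj instanceof Map) {
        obj.remove(key)                           // strip the key at this level
        obj.values().each { deepRemove(it, key) } // then recurse into nested values
    } else if (obj instanceof List) {
        obj.each { deepRemove(it, key) }
    }
    obj
}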
edit - it seems that you can actually just do:
assert clean1 == clean2
// or
if (clean1 == clean2) ...
for the comparison. It cuts out the extra serialization step to JSON, but it also leaves me feeling a tad uncomfortable trusting Groovy to do the right thing when comparing a nested data structure like this.
I would probably still go with the serialization.
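For what it's worth, Groovy delegates == on lists and maps to Java's equals, which does compare nested structures element by element, so the direct comparison should be sound; a quick sanity check:
assert [[a: 1, b: [2, 3]]] == [[a: 1, b: [2, 3]]]
assert [[a: 1, b: [2, 3]]] != [[a: 1, b: [2, 4]]]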
I have a scenario: I want to build an Azure Logic App that gets documents from various folders in SharePoint and sends an email notification. My confusion is how I can provide multiple input folder paths.
I'm going to make an assumption about your architecture in my answer: that you want to process multiple files in different sites within the same SharePoint tenant, not across tenants.
To achieve what you're asking for, I created a Parse JSON action which takes in the following structure (as an example, obviously the structure is the key point here, not the data) ...
Scenario 1 - Specific Files
[
{
"SiteName": "ExampleSolution",
"FileName": "/Shared Documents/General/Book.xlsx"
},
{
"SiteName": "TestSite",
"FileName": "/Shared Documents/Test Folder/Document.docx"
}
]
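In the Parse JSON action, the schema can be generated from a sample payload ("Use sample payload to generate schema"); for the structure above it would look roughly like this (a sketch covering just these two string properties):
{
    "type": "array",
    "items": {
        "type": "object",
        "properties": {
            "SiteName": { "type": "string" },
            "FileName": { "type": "string" }
        },
        "required": [ "SiteName", "FileName" ]
    }
}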
You'll need to authenticate to the SharePoint tenant with the appropriate user.
Then, in a For Each action, loop through each item and retrieve the contents of each document using the Get file content using path action.
Site Address = concat('https://yourtenant.sharepoint.com/sites/', items('For_each')?['SiteName'])
File Path = File Name (from Dynamic Content)
It will then retrieve the contents dynamically using those expressions.
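For the first item above, for example, the Site Address expression evaluates to https://yourtenant.sharepoint.com/sites/ExampleSolution and the File Path to /Shared Documents/General/Book.xlsx.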
File 1 (Excel Document)
File 2 (Word Document)
Scenario 2 - All Files
If you want to do it for all files, just change it up slightly ...
[
{
"FolderName": "/Shared Documents/General",
"SiteName": "ExampleSolution"
},
{
"FolderName": "/Shared Documents/Test Folder",
"SiteName": "TestSite"
}
]
Site Address = concat('https://yourtenant.sharepoint.com/sites/', items('For_each')?['SiteName'])
File Identifier = Folder Name (from Dynamic Content)
Output - Folder 1
[
{
"Id": "%252fShared%2bDocuments%252fGeneral%252fBook.xlsx",
"Name": "Book.xlsx",
"DisplayName": "Book.xlsx",
"Path": "/Shared Documents/General/Book.xlsx",
"LastModified": "2021-12-24T02:56:14Z",
"Size": 15330,
"MediaType": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
"IsFolder": false,
"ETag": "\"{23948609-0DA0-43E0-994C-2703FEEC8567},7\"",
"FileLocator": "dataset=aHR0cHM6Ly9icmFka2RpeG9uLnNoYXJlcG9pbnQuY29tL3NpdGVzL0V4YW1wbGVTb2x1dGlvbg==,id=JTI1MmZTaGFyZWQlMmJEb2N1bWVudHMlMjUyZkdlbmVyYWwlMjUyZkJvb2sueGxzeA==",
"LastModifiedBy": null
},
{
"Id": "%252fShared%2bDocuments%252fGeneral%252fTest%2bDocument.docx",
"Name": "Test Document.docx",
"DisplayName": "Test Document.docx",
"Path": "/Shared Documents/General/Test Document.docx",
"LastModified": "2021-12-30T11:49:28Z",
"Size": 17959,
"MediaType": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
"IsFolder": false,
"ETag": "\"{7A3C7133-02FC-4A63-9A58-E11A815AB351},8\"",
"FileLocator": "dataset=aHR0cHM6Ly9icmFka2RpeG9u etc",
"LastModifiedBy": null
},
{
"Id": "%252fShared%2bDocuments%252fGeneral%252fHierarchy.xlsx",
"Name": "Hierarchy.xlsx",
"DisplayName": "Hierarchy.xlsx",
"Path": "/Shared Documents/General/Hierarchy.xlsx",
"LastModified": "2022-01-07T02:49:38Z",
"Size": 41719,
"MediaType": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
"IsFolder": false,
"ETag": "\"{C919454C-48AB-4897-AD8C-E3F873B52E50},72\"",
"FileLocator": "dataset=aHR0cHM6Ly9icmFka2RpeG9uL etc",
"LastModifiedBy": null
}
]
Output - Folder 2
[
{
"Id": "%252fShared%2bDocuments%252fTest%2bFolder%252fTest.xlsx",
"Name": "Test.xlsx",
"DisplayName": "Test.xlsx",
"Path": "/Shared Documents/Test Folder/Test.xlsx",
"LastModified": "2022-01-09T11:08:31Z",
"Size": 17014,
"MediaType": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
"IsFolder": false,
"ETag": "\"{CCF71CE7-89E7-4F89-B5CB-0F078E22C951},163\"",
"FileLocator": "dataset=aHR0cHM6Ly9icmFka2RpeG9u etc",
"LastModifiedBy": null
},
{
"Id": "%252fShared%2bDocuments%252fTest%2bFolder%252fDocument.docx",
"Name": "Document.docx",
"DisplayName": "Document.docx",
"Path": "/Shared Documents/Test Folder/Document.docx",
"LastModified": "2022-01-09T11:08:16Z",
"Size": 17293,
"MediaType": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
"IsFolder": false,
"ETag": "\"{317C5767-04EC-4264-A58B-27A3FA8E4DF3},3\"",
"FileLocator": "dataset=aHR0cHM6Ly9icmFka2RpeG etc",
"LastModifiedBy": null
}
]
From here, just process each file individually using one of the files actions like in the first scenario above.
Note: You'll need to work through subfolders and recursion yourself; there doesn't appear to be a way to do that easily.
You've provided very little information, but this should be enough for you to adapt the approach accordingly.
Also, I strongly recommend you use a means other than a hardcoded JSON document in the action itself. There are far better options for housing that information which wouldn't require updating the action every time you want to add or delete a file.
The concept of the loop and the expressions is the most important part to grasp, as they will give you what you want.
I'm scraping a JS-loaded website using requests. To do so, I open the website's inspector and the Network console and look for the XHR calls to find out where the website requests its data from and how. The process is as follows:
Go to the link https://www.888sport.es/futbol/#/event/1006276426 in Chrome. Once it has loaded, you can click on many items, each with a unique ID. After doing so, a pop-up window with information appears. From the XHR call mentioned above you get a direct link to this information, as follows:
import requests
url='https://eu-offering.kambicdn.org/offering/v2018/888es/betoffer/outcome.json?lang=es_ES&market=ES&client_id=2&channel_id=1&ncid=1586874367958&id=2740660278'
#ncid is the date in timestamp format, and id is the unique id of the node clicked
response=requests.get(url=url,headers=headers)
The problem is that this isn't user-friendly and requires Python. If I put this last URL into Chrome, I get the information, but as plain text I can't interact with. Is there any way to get a workable link from the request, so that manually entering it in Chrome loads that pop-up window directly, as on the regular website?
You have to consume the response with .json() so that you receive a dict, which you can then access by its keys.
import requests
import json
def main(url):
    r = requests.get(url).json()
    print(r.keys())
    hview = json.dumps(r, indent=4)
    print(hview)  # pretty-printed view of the whole structure
main("https://eu-offering.kambicdn.org/offering/v2018/888es/betoffer/outcome.json?lang=es_ES&market=ES&client_id=2&channel_id=1&ncid=1586874367958&id=2740660278")
Output:
dict_keys(['betOffers', 'events', 'prePacks'])
{
"betOffers": [
{
"id": 2210856430,
"closed": "2020-04-17T14:30:00Z",
"criterion": {
"id": 1001159858,
"label": "Final del partido",
"englishLabel": "Full Time",
"order": [],
"occurrenceType": "GOALS",
"lifetime": "FULL_TIME"
},
"betOfferType": {
"id": 2,
"name": "Partido",
"englishName": "Match"
},
"eventId": 1006276426,
"outcomes": [
{
"id": 2740660278,
"label": "1",
"englishLabel": "1",
"odds": 1150,
"participant": "FC Lokomotiv Gomel",
"type": "OT_ONE",
"betOfferId": 2210856430,
"changedDate": "2020-04-14T09:11:55Z",
"participantId": 1003789012,
"oddsFractional": "1/7",
"oddsAmerican": "-670",
"status": "OPEN",
"cashOutStatus": "ENABLED"
},
{
"id": 2740660284,
"label": "X",
"englishLabel": "X",
"odds": 6750,
"type": "OT_CROSS",
"betOfferId": 2210856430,
"changedDate": "2020-04-14T09:11:55Z",
"oddsFractional": "23/4",
"oddsAmerican": "575",
"status": "OPEN",
"cashOutStatus": "ENABLED"
},
{
"id": 2740660286,
"label": "2",
"englishLabel": "2",
"odds": 11000,
"participant": "Khimik Svetlogorsk",
"type": "OT_TWO",
"betOfferId": 2210856430,
"changedDate": "2020-04-14T09:11:55Z",
"participantId": 1001024009,
"oddsFractional": "10/1",
"oddsAmerican": "1000",
"status": "OPEN",
"cashOutStatus": "ENABLED"
}
],
"tags": [
"OFFERED_PREMATCH",
"MAIN"
],
"cashOutStatus": "ENABLED"
}
],
"events": [
{
"id": 1006276426,
"name": "FC Lokomotiv Gomel - Khimik Svetlogorsk",
"nameDelimiter": "-",
"englishName": "FC Lokomotiv Gomel - Khimik Svetlogorsk",
"homeName": "FC Lokomotiv Gomel",
"awayName": "Khimik Svetlogorsk",
"start": "2020-04-17T14:30:00Z",
"group": "1\u00aa Divisi\u00f3n",
"groupId": 2000053499,
"path": [
{
"id": 1000093190,
"name": "F\u00fatbol",
"englishName": "Football",
"termKey": "football"
},
{
"id": 2000051379,
"name": "Bielorrusa",
"englishName": "Belarus",
"termKey": "belarus"
},
{
"id": 2000053499,
"name": "1\u00aa Divisi\u00f3n",
"englishName": "1st Division",
"termKey": "1st_division"
}
],
"nonLiveBoCount": 6,
"sport": "FOOTBALL",
"tags": [
"MATCH"
],
"state": "NOT_STARTED",
"groupSortOrder": 1999999000000000000
}
],
"prePacks": []
}
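From here you can walk the dict with keys directly instead of reading the dump. A small sketch (print_odds is a hypothetical helper; the field names are taken from the output above, not from any documented API):
def print_odds(data):
    # betOffers -> outcomes, as seen in the sample output
    for offer in data["betOffers"]:
        for outcome in offer["outcomes"]:
            print(outcome["label"], outcome["odds"])
Calling print_odds(r) inside main() would print the three outcome labels with their odds.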
Starting from the following kind of string:
const json = '{"list":"[{"additionalInformation": {"source": "5f645d7d94-c6ktd"}, "alarmName": "data", "description": "Validation Error. Fetching info has been skipped.", "eventTime": "2020-01-27T14:42:44.143200 UTC", "expires": 2784, "faultyResource": "Data", "name": "prisco", "severity": "Major"}]"}'
How can I handle this as JSON? The following approach doesn't work:
const obj = JSON.parse(json );
It gives an unexpected result. How can I parse it correctly?
In conclusion, I should extract the part corresponding to the first list item and then parse the JSON it contains.
Your JSON is invalid. The following is the valid version of your JSON:
const json= {
"list": [ {
"additionalInformation": {
"source": "5f645d7d94-c6ktd"
},
"alarmName": "data",
"description": "Validation Error. Fetching info has been skipped.",
"eventTime": "2020-01-27T14:42:44.143200 UTC",
"expires": 2784,
"faultyResource": "Data",
"name": "prisco",
"severity": "Major"
}
]
}
The above is already a JavaScript object, and parsing it as JSON again throws an error.
JSON.parse() parses a string and turns it into a JavaScript object. The string must be in valid JSON format or it will throw an error.
Update:
Create a function to clean your string and prepare it for JSON.parse():
function cleanString(str) {
str = str.replace('"[', '[');
str = str.replace(']"', ']');
return str;
}
And use it like:
json = cleanString(json);
console.log(JSON.parse(json));
Demo:
let json = '{"list":"[{"additionalInformation": {"source": "5f645d7d94-c6ktd"}, "alarmName": "data", "description": "Validation Error. Fetching info has been skipped.", "eventTime": "2020-01-27T14:42:44.143200 UTC", "expires": 2784, "faultyResource": "Data", "name": "prisco", "severity": "Major"}]"}';
json = cleanString(json);
console.log(JSON.parse(json));
function cleanString(str) {
str = str.replace('"[', '[');
str = str.replace(']"', ']');
return str;
}
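Note that String.prototype.replace with a plain-string pattern only replaces the first occurrence. That is fine here because there is a single quoted array, but if the payload could contain several, a global regex would be the safer cleanup (a sketch, assuming the same quoting problem throughout):
function cleanString(str) {
    return str.replace(/"\[/g, '[').replace(/\]"/g, ']');
}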
Remove the double quotes from around the array brackets to make the JSON valid:
const json = '{"list":[{"additionalInformation": {"source": "5f645d7d94-c6ktd"}, "alarmName": "data", "description": "Validation Error. Fetching info has been skipped.", "eventTime": "2020-01-27T14:42:44.143200 UTC", "expires": 2784, "faultyResource": "Data", "name": "prisco", "severity": "Major"}]}'
I am not getting output, and I get an error:
------Exception-------
Class: Kitchen::ActionFailed
Message: 1 actions failed.
JSON file (cookbook/test/integration/nodes):
{
"id": "hive server",
"chef_type": "node",
"environment": "dev",
"json_class": "Chef::Node",
"run_list": [],
"automatic": {
"hostname": "test.net",
"fqdn": "127.0.0.1",
"name": "test.net",
"ipaddress": "127.0.0.1",
"node_zone": "green",
"roles": []
},
"attributes": {
"hiveserver": "true"
}
}
Recipe
hiveNodes = search(:node, "hiveserver:true AND environment:node.environment AND node_color:node["node_color"])
# hiveserverList = ""
# hiveNodes.each |hnode| do
# hiveserverList += hnode
#end
#file '/tmp/test.txt' do
# content '#{hiveserverList}'
#end
I think you mean to be using "hiveserver:true AND chef_environment:#{node.chef_environment} AND node_color:#{node["node_color"]}" as your search string. The #{} syntax is how you embed a Ruby expression's value in a string. Also, for reasons of complex backwards compatibility, the environment on a node is called chef_environment.
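A minimal sketch of the corrected recipe (the file resource mirrors the commented-out code from the question; map(&:name) is just one way to render the matched nodes):
# search for matching nodes using the corrected query
hive_nodes = search(:node, "hiveserver:true AND chef_environment:#{node.chef_environment} AND node_color:#{node['node_color']}")

# write one node name per line, as the commented-out code seems to intend
file '/tmp/test.txt' do
  content hive_nodes.map(&:name).join("\n")
end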