How to add extra data in a JSON request - Groovy

In my SoapUI project I have two steps: a Groovy script step and a REST request step for a POST CRUD method.
In the Groovy script I create a test case property named 'adults' whose value is a random number between 2 and 5:
testRunner.testCase.setPropertyValue('adults', String.valueOf((int)(Math.random() * 4) + 2));
Below is my REST request body for the POST:
{
"xxx": "xxx",
"ratePlanCode": "xxx"
"roomOccupancies": [
{
"passengersInformation": [
{
"firstName": "Test",
"lastName": "Tester",
"isLeadPassenger": true,
"age": 30
},
]
}
],
"xxx": "xxx"
}
Now this request is fixed at 1 adult passenger, but if I have multiple adults I need a corresponding entry for each of them under "passengersInformation". So for every extra adult I need to add:
{
"firstName": "Test",
"lastName": "Tester",
"isLeadPassenger": false,
"age": 30
},
Since duplicate names are not allowed, my idea is to append a number to the end of the first and last name for each extra passenger; the other two fields can stay the same.
So my question is: how do I add the additional passenger details to the request based on the number of adults randomly selected in the Groovy script?
Thank you,

Here's one way to replicate the passenger. Note that I had to fix a couple of commas (one extra, one missing) in the JSON string.
import groovy.json.*
def jsonData = '''{
"hotelArrivalDate": "2017-06-01T18:15:00",
"ratePlanCode": "xxx=",
"roomOccupancies": [
{
"passengersInformation": [
{
"firstName": "Test",
"lastName": "Tester",
"isLeadPassenger": true,
"age": 30
}
]
}
],
"holidaysBookingReference": "TestRef"
}'''
def n = 1
def data = new JsonSlurper().parseText(jsonData)

// copy the first passenger, appending n to firstName and lastName
def newPerson = data.roomOccupancies[0].
    passengersInformation[0].
    collectEntries { k, v ->
        ['firstName', 'lastName'].contains(k) ? [k, v + n] : [k, v]
    }

// append the copy and serialise the result back to JSON
data.roomOccupancies[0].passengersInformation << newPerson
jsonData = new JsonBuilder(data).toPrettyString()
Result:
{
"hotelArrivalDate": "2017-06-01T18:15:00",
"ratePlanCode": "xxx=",
"roomOccupancies": [
{
"passengersInformation": [
{
"firstName": "Test",
"lastName": "Tester",
"isLeadPassenger": true,
"age": 30
},
{
"firstName": "Test1",
"lastName": "Tester1",
"isLeadPassenger": true,
"age": 30
}
]
}
],
"holidaysBookingReference": "TestRef"
}
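To tie this back to the original question, the same idea can be driven by the random 'adults' property: loop once per extra adult, number the names, and set isLeadPassenger to false for the copies (the snippet above leaves it true). A minimal sketch, assuming jsonData holds the template string shown above and that the REST step reads its body from a test case property I'm calling requestBody (a placeholder name):
import groovy.json.*

// 'adults' was set by the earlier Groovy step (random value between 2 and 5)
def adults = testRunner.testCase.getPropertyValue('adults') as int

def data = new JsonSlurper().parseText(jsonData)   // jsonData = the template string shown above
def passengers = data.roomOccupancies[0].passengersInformation
def template = passengers[0]

// add one passenger per extra adult, numbering the names to keep them unique
(1..<adults).each { n ->
    def extra = template.collectEntries { k, v ->
        ['firstName', 'lastName'].contains(k) ? [k, v + n] : [k, v]
    }
    extra.isLeadPassenger = false
    passengers << extra
}

// hand the finished body to the REST step, e.g. via the requestBody property
testRunner.testCase.setPropertyValue('requestBody', new JsonBuilder(data).toPrettyString())
The REST request step can then use ${#TestCase#requestBody} as its body instead of the hard-coded JSON.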

Related

JMeter: how to construct payloads of the same size from a CSV file on each iteration

I have a CSV file that contains 10 user IDs, and the requirement is to construct a payload for every 5 users in the CSV. Below is my code:
def start = (vars.get('__jm__Thread Group__idx') as int)
def offset = 5
def payload = [:]
def data = []
def file = 'C:/path/dataset.csv'
start.upto(offset, { index ->
    def lineFromCsv = new File(file).readLines().get(index)
    data.add(['userId': lineFromCsv.split(',')[0], 'groupId': lineFromCsv.split(',')[1]])
})
payload.put('data', data)
log.info("%%%The Payload is%%%:" + payload)
vars.put('payload', new groovy.json.JsonBuilder(payload).toPrettyString())
My 1st question is why there were 6 items in the first payload (1st iteration) when I was expecting 5, while the 2nd payload (2nd iteration) had 5 items as expected. Every payload is supposed to have the same number of items.
My 2nd question is how to make the 2nd payload start parsing from where the 1st payload left off, so that it contains the next 5 users in the CSV. There should not be any overlapping items between payloads.
Below is the payload:
1st payload:
POST data:
{
"data": [
{
"userId": "fakeUser3k0000002",
"groupId": "1"
},
{
"userId": "fakeUser3k0000003",
"groupId": "2"
},
{
"userId": "fakeUser3k0000004",
"groupId": "2"
},
{
"userId": "fakeUser3k0000005",
"groupId": "3"
},
{
"userId": "fakeUser3k0000006",
"groupId": "4"
},
{
"userId": "fakeUser3k0000007",
"groupId": "5"
}
]
}
2nd payload:
POST data:
{
"data": [
{
"userId": "fakeUser3k0000003",
"groupId": "2"
},
{
"userId": "fakeUser3k0000004",
"groupId": "2"
},
{
"userId": "fakeUser3k0000005",
"groupId": "3"
},
{
"userId": "fakeUser3k0000006",
"groupId": "4"
},
{
"userId": "fakeUser3k0000007",
"groupId": "5"
}
]
}
def start = (vars.get('__jm__Thread Group__idx') as int)
def offset = 5
I guess Thread Group is your JMeter loop name.
You have to build the loop like this:
(start * offset).step((start + 1) * offset, 1) { index ->
    println index
}
so for start = 5 you'll have index 25 to 29.
In order to get 5 items in the first payload you need to set the offset to 4, because the __jm__Thread Group__idx variable value is 0 during the first iteration (so 0.upto(5) runs 6 times). You can check it using the Debug Sampler and View Results Tree listener combination.
In order to start the 2nd iteration from where the first one left off, you need to store the offset value into a JMeter Variable after constructing the first payload and read it back during the 2nd iteration.
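Putting the two points together, here is a rough sketch (not the answerers' exact code) that reads a non-overlapping window of 5 lines per iteration, assuming the JSR223 element runs inside the loop whose counter is __jm__Thread Group__idx and the CSV path is the placeholder from the question:
def iteration = vars.get('__jm__Thread Group__idx') as int   // 0 on the first loop pass
def windowSize = 5
def file = 'C:/path/dataset.csv'                             // placeholder path from the question

// take only the block of 5 lines belonging to this iteration, so payloads never overlap
def lines = new File(file).readLines()
def data = lines.drop(iteration * windowSize).take(windowSize).collect { line ->
    def cols = line.split(',')
    ['userId': cols[0], 'groupId': cols[1]]
}

vars.put('payload', new groovy.json.JsonBuilder(['data': data]).toPrettyString())
log.info('Payload for iteration ' + iteration + ' has ' + data.size() + ' items')
With 10 users in the CSV this yields two payloads of 5 users each, with no overlap between them.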

How to extract selected key and value from nested dictionary object in a list?

I have a list example_list that contains two dict objects; it looks like this:
[
{
"Meta": {
"ID": "1234567",
"XXX": "XXX"
},
"bbb": {
"ccc": {
"ddd": {
"eee": {
"fff": {
"xxxxxx": "xxxxx"
},
"www": [
{
"categories": {
"ppp": [
{
"content": {
"name": "apple",
"price": "0.111"
},
"xxx": "xxx"
}
]
},
"date": "A2020-01-01"
}
]
}
}
}
}
},
{
"Meta": {
"ID": "78945612",
"XXX": "XXX"
},
"bbb": {
"ccc": {
"ddd": {
"eee": {
"fff": {
"xxxxxx": "xxxxx"
},
"www": [
{
"categories": {
"ppp": [
{
"content": {
"name": "banana",
"price": "12.599"
},
"xxx": "xxx"
}
]
},
"date": "A2020-01-01"
}
]
}
}
}
}
}
]
Now I want to filter the items and only keep the "ID" and the corresponding "price" value. The expected result can be something similar to:
[{"ID": "1234567", "price": "0.111"}, {"ID": "78945612", "price": "12.599"}]
or something like {"1234567":"0.111", "78945612":"12.599" }
Here's what I've tried:
map_list = []
map_dict = {}
for item in example_list:
    # get 'ID' for each item from 'Meta'
    map_dict['ID'] = item['Meta']['ID']
    # get 'price'
    data_list = item['bbb']['ccc']['ddd']['eee']['www']
    for data in data_list:
        for dataitem in data['categories']['ppp']:
            map_dict['price'] = dataitem["content"]["price"]
    map_list.append(map_dict)
print(map_list)
The result doesn't look right; it feels like the items aren't being iterated properly. It gives me this result:
[{"ID": "78945612", "price": "12.599"}, {"ID": "78945612", "price": "12.599"}]
It gave me a duplicated result for the second ID, but where is the first ID?
Can someone take a look for me please, thanks.
Update:
From some comments on another question, I understand that the output keeps being overwritten because the key names in the dict are always the same, but I'm not sure how to fix this because the key and value need to be extracted at different levels of the for loops. Any help would be appreciated, thanks.
As #Scott Hunter has mentioned, you need to create a new map_dict every time. Here is a quick fix to your solution (I am sadly not able to test it right now, but it seems right to me):
map_list = []
for item in example_list:
    # get 'price'
    data_list = item['bbb']['ccc']['ddd']['eee']['www']
    for data in data_list:
        for dataitem in data['categories']['ppp']:
            map_dict = {}
            map_dict['ID'] = item['Meta']['ID']
            map_dict['price'] = dataitem["content"]["price"]
            map_list.append(map_dict)
print(map_list)
But what you are doing here is basically just "forcing" your way through... I recommend you take a break and check out some kind of tutorial, which will help you understand how this really works under the hood. This is how I would have written it:
list_dicts = []
for item in example_list:
    for www in item['bbb']['ccc']['ddd']['eee']['www']:
        for www_item in www['categories']['ppp']:
            list_dicts.append({
                'ID': item['Meta']['ID'],
                'price': www_item['content']['price'],
            })
Good luck with this problem and hope it helps :)
You need to create a new dictionary for map_dict for each ID.

PySpark DataFrame to JSON - grouping data

We are trying to create a JSON document from a DataFrame. Please find the DataFrame below:
+----------+--------------------+----------+--------------------+-----------------+--------------------+---------------+--------------------+---------------+--------------------+--------------------+
| CustId| TIN|EntityType| EntityAttributes|AddressPreference| AddressDetails|EmailPreference| EmailDetails|PhonePreference| PhoneDetails| MemberDetails|
+----------+--------------------+----------+--------------------+-----------------+--------------------+---------------+--------------------+---------------+--------------------+--------------------+
|1234567890|XXXXXXXXXXXXXXXXXX...| Person|[{null, PRINCESS,...| Alternate|[{Home, 460 M XXX...| Primary|[{Home, HEREBY...| Alternate|[{Home, {88888888...|[{7777777, 999999...|
|1234567890|XXXXXXXXXXXXXXXXXX...| Person|[{null, PRINCESS,...| Alternate|[{Home, 460 M XXX...| Primary|[{Home, HEREBY...| Primary|[{Home, {88888888...|[{7777777, 999999...|
|1234567890|XXXXXXXXXXXXXXXXXX...| Person|[{null, PRINCESS,...| Primary|[{Home, PO BOX 695020...| Primary|[{Home, HEREBY...| Alternate|[{Home, {88888888...|[{7777777, 999999...|
|1234567890|XXXXXXXXXXXXXXXXXX...| Person|[{null, PRINCESS,...| Primary|[{Home, PO BOX 695020...| Primary|[{Home, HEREBY...| Primary|[{Home, {88888888...|[{7777777, 999999...|
+----------+--------------------+----------+--------------------+-----------------+--------------------+---------------+--------------------+---------------+--------------------+--------------------+
So the initial columns CustId, TIN, EntityType and EntityAttributes will be the same for a particular customer, say 1234567890 in our example, but the customer might have multiple addresses/phones/emails. Could you please help us with how to group them under one JSON?
Expected Structure :
{
"CustId": 1234567890,
"TIN": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"EntityType": "Person",
"EntityAttributes": [
{
"FirstName": "PRINCESS",
"LastName": "XXXXXX",
"BirthDate": "xxxx-xx-xx",
"DeceasedFlag": "False"
}
],
"Address": [
{
"AddressPreference": "Alternate",
"AddressDetails": {
"AddressType": "Home",
"Address1": "460",
"City": "XXXX",
"State": "XXX",
"Zip": "XXXX"
}
},
{
"AddressPreference": "Primary",
"AddressDetails": {
"AddressType": "Home",
"Address1": "PO BOX 695020",
"City": "XXX",
"State": "XXXX",
"Zip": "695020"
}
}
],
"Phone": [
{
"PhonePreference": "Primary",
"PhoneDetails": {
"PhoneType": "Home",
"PhoneNumber": "xxxxx",
"FormatPhoneNumber": "xxxxxx"
}
},
{
"PhonePreference": "Alternate",
"PhoneDetails": {
"PhoneType": "Home",
"PhoneNumber": "xxxx",
"FormatPhoneNumber": "xxxxx"
}
}
],
"Email": [
{
"EmailPreference": "Primary",
"EmailDetails": {
"EmailType": "Home",
"EmailAddress": "xxxxxxx#GMAIL.COM"
}
}
]
}
UPDATE
Tried with the group-by method recommended below; it ended up giving 1 customer's details, but the email is repeated 4 times in the list, whereas ideally it should contain only 1 email. Also, in the address preference, Alternate has 1 address and Primary has 1 address, but Alternate shows 2 entries and Primary shows 2. Could you please help with an ideal solution?
This should probably work. Here id is like the CustId in your example, i.e. the column with repeating values.
>>> df.show()
+----+------------+----------+
| id| address| email|
+----+------------+----------+
|1001| address-a| email-a|
|1001| address-b| email-b|
|1002|address-1002|email-1002|
|1003|address-1003|email-1002|
|1002| address-c| email-2|
+----+------------+----------+
Aggregate on those repeating columns and then convert to JSON
>>> from pyspark.sql.functions import collect_list
>>> results = df.groupBy("id").agg(collect_list("address").alias("address"),collect_list("email").alias("email")).toJSON().collect()
>>> for i in results: print(i)
...
{"id":"1003","address":["address-1003"],"email":["email-1002"]}
{"id":"1002","address":["address-1002","address-c"],"email":["email-1002","email-2"]}
{"id":"1001","address":["address-a","address-b"],"email":["email-a","email-b"]}

How to find the common structure of all documents in a collection?

I have an array of documents that have more or less the same structure, but I need to find the fields that are present in all documents. Something like:
{
"name": "Jow",
"salary": 7000,
"age": 25,
"city": "Mumbai"
},
{
"name": "Mike",
"backname": "Brown",
"sex": "male",
"city": "Minks",
"age": 30
},
{
"name": "Piter",
"hobby": "footbol",
"age": 25,
"location": "USA"
},
{
"name": "Maria",
"age": 22,
"city": "Paris"
}
All docs have name and age. How can I find them with ArangoDB?
You could do the following:
Retrieve the attribute names of each document
Get the intersection of those attributes
i.e.
LET attrs = (FOR item IN test RETURN ATTRIBUTES(item, true))
RETURN APPLY("INTERSECTION", attrs)
APPLY is necessary so each list of attributes in attrs can be passed as a separate parameter to INTERSECTION.
Documentation:
ATTRIBUTES: https://www.arangodb.com/docs/stable/aql/functions-document.html#attributes
INTERSECTION: https://www.arangodb.com/docs/stable/aql/functions-array.html#intersection
APPLY: https://www.arangodb.com/docs/stable/aql/functions-miscellaneous.html#apply

Merge documents by fields

I have two types of docs: main docs and additional info docs for them.
{
"id": "371",
"name": "Mike",
"location": "Paris"
},
{
"id": "371-1",
"age": 20,
"lastname": "Piterson"
}
I need to merge them by id to get the resulting doc. The result should look like:
{
"id": "371",
"name": "Mike",
"location": "Paris",
"age": 20,
"lastname": "Piterson"
}
Using COLLECT / INTO, SPLIT(), and MERGE():
FOR doc IN collection
COLLECT id = SPLIT(doc.id, '-')[0] INTO groups
RETURN MERGE(MERGE(groups[*].doc), {id})
Result:
[
{
"id": "371",
"location": "Paris",
"name": "Mike",
"lastname": "Piterson",
"age": 20
}
]
This will:
Split each id attribute at any - and return the first part
Group the results into separate arrays (groups)
Merge #1: Merge all objects into one
Merge #2: Merge the id into the result
See REMOVE & INSERT or REPLACE for write operations.
