How to use fixtures inside class - missing 2 required positional arguments - apache-spark

I want to use fixtures to test different cases in my transformation tests.
What I try is:
class Name_Of_Class(BaseTest):
    enter_data = [
        {
            "Col1": "Value1",
            "Col2": "Value2"
        }
    ]
    expected_data = [
        {
            "Col1": "Expected1",
            "Col2": "Expected1"
        }
    ]
    @pytest.mark.parametrize("test_input, test_expected",
        [
            (enter_data, expected_data)
        ]
    )
    def test_fixture(self, enter_data, expected_data):
        to_be_transformed_df = self.create_df(enter_data)
        expected_df = self.create_df(expected_data)
        ...
I try to follow the documentation: https://docs.pytest.org/en/stable/how-to/parametrize.html
And as a result, I receive the following error:
    def _callTestMethod(self, method):
>       method()
E       TypeError: test_fixture() missing 2 required positional arguments: 'enter_data' and 'expected_data'
I don't understand why test_fixture() doesn't see those arguments coming from the fixture.
Does anyone have some suggestions?
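Not an answer from the thread, but some context: the _callTestMethod frame in the traceback comes from unittest, and pytest documents that @pytest.mark.parametrize is not supported on unittest.TestCase subclasses, so the parametrized arguments are never injected. A minimal sketch of the same test on a plain pytest class (BaseTest and create_df here are hypothetical stand-ins, since the real ones are not shown):
import pytest

class BaseTest:
    # stand-in for the real base class, assumed NOT to inherit from unittest.TestCase
    def create_df(self, rows):
        return rows  # placeholder for the real Spark DataFrame helper

class TestTransformation(BaseTest):
    enter_data = [{"Col1": "Value1", "Col2": "Value2"}]
    expected_data = [{"Col1": "Expected1", "Col2": "Expected1"}]

    # The argument names declared here must match the test signature exactly.
    @pytest.mark.parametrize("test_input, test_expected", [(enter_data, expected_data)])
    def test_fixture(self, test_input, test_expected):
        to_be_transformed_df = self.create_df(test_input)
        expected_df = self.create_df(test_expected)
        # ... compare the two DataFrames here
Note also that the original decorator names the parameters test_input and test_expected while the function signature asks for enter_data and expected_data; with plain pytest that mismatch alone is reported as a collection error.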

Related

Groovy to round all array decimal places down and return unique number count?

Thank you aspok for your help!
My goal is to get my list to be [3, 3, 4] and then get a count of unique values within it. Can anyone point me in the right direction for doing this?
My script consumes a JSON and puts all F4211_LNID values into a list: [3.1, 3.9, 4]. I now need to round all the decimal places down.
I'm not sure if it's doable, but I am trying to use Math.floor(intListItems) to round my array values down. When I try this I receive the following error: Exception No signature of method: static java.lang.Math.floor() is applicable for argument types: (ArrayList) values: [[3.1, 3.9, 4]] Possible solutions: floor(double), log(double), find(), macro(groovy.lang.Closure), acos(double), cos(double)
I see my simplified list in the error, but I can't get it to round down and not sure what the error means.
(UPDATED) My Working Groovy
import groovy.json.JsonSlurper

// Read Input Values
String aInputJson = aInputMap.InputJson ?: "{}"
// Initialize Output Values
def intListItems = []
def uniqueCount = 0
// Parse JSON
def json = new JsonSlurper().parseText( aInputJson )
// Determine Row Numbers
def rowset = json?.fs_DATABROWSE_F4211?.data?.gridData?.rowset
// Floor each line number, then de-duplicate in place and count
intListItems = rowset.collect{ Math.floor(it.F4211_LNID) }
intListItems.unique()
uniqueCount = intListItems.size()
The JSON I am using:
{
"fs_DATABROWSE_F4211": {
"title": "Data Browser - F4211 [Sales Order Detail File]",
"data": {
"gridData": {
"id": 58,
"fullGridId": "58",
"rowset": [
{
"F4211_LNTY": "S",
"F4211_CPNT": 0,
"F4211_MCU": " 114000",
"F4211_DSC2": "NAS133N3EK166",
"F4211_NXTR": "580",
"F4211_LNID": 3.1,
"F4211_DOCO": 2845436
},
{
"F4211_LNTY": "S",
"F4211_CPNT": 0,
"F4211_MCU": " 114000",
"F4211_DSC2": "NAS133N3EK166",
"F4211_NXTR": "580",
"F4211_LNID": 3.9,
"F4211_DOCO": 2845436
},
{
"F4211_LNTY": "S",
"F4211_CPNT": 0,
"F4211_MCU": " 114000",
"F4211_DSC2": "NAS133N3EK166",
"F4211_NXTR": "580",
"F4211_LNID": 4,
"F4211_DOCO": 2845436
}
],
"summary": {
"records": 1,
"moreRecords": false
}
}
},
"errors": [],
"warnings": []
},
"currentApp": "DATABROWSE_F4211",
"timeStamp": "2000-06-01:09.42.02",
"sysErrors": []
}
You are getting the error Exception No signature of method: static java.lang.Math.floor() is applicable for argument types: (ArrayList) because there is no version of Math.floor() that accepts a List as a parameter.
Instead, you need to call Math.floor() on each individual item in the list. The easiest way to do this is in the collect { } call you are already doing.
def flooredList = rowset.collect { Math.floor(it.F4211_LNID) }
assert flooredList == [3.0, 3.0, 4.0]
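To get the unique-value count the question asks for, a small follow-up on top of flooredList (a sketch, not part of the original answer):
// unique(false) returns a de-duplicated copy without mutating flooredList
def uniqueCount = flooredList.unique(false).size()
assert uniqueCount == 2   // [3.0, 4.0]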

Pydantic: Define an attribute of type List with nested lists and multiple sub-models

I have to deal with a schema like this:
{
"method": "Data.get",
"params": [
[
{ # name this "Fetch"
"select_": "DowntimeMan",
"filter_string": "cluster='A' AND nodename%('s2')",
},
{ # name this "Output"
"fields": [
"DowntimeMan",
"rowquality"
],
}
],
{ # Name this "Repeat"
"repeat_timefilter": [
{ # Name this "Time"
"time_start": "20220503.1600",
"time_end": "420m",
}
],
}
],
"context": {
"nwid": "abc"
}
}
I wish to create a pydantic model, so my effort is:
from typing import List, Optional
from pydantic import BaseModel

class Fetch(BaseModel):
    select_: str
    filter_string: str

class Output(BaseModel):
    fields: List

class Time(BaseModel):
    time_start: str
    time_end: str

class Repeat(BaseModel):
    repeat_timefilter: List[Time]

class Context(BaseModel):
    nwid: str

class Payload(BaseModel):
    method: str = "Data.get"
    params: list[list[Fetch, Output], Repeat]
    context: Context
So in the code I would like to set values like:
_payload = Payload(
    params=[
        [
            Fetch(select_="DowntimeMan",
                  filter_string="cluster='A' AND nodename%('s2')"),
            Output(fields=['DowntimeMan', 'rowquality'])
        ],
        Repeat(repeat_timefilter=[Time(time_start="20220503.1600",
                                       time_end="420m")])
    ],
    context=Context(nwid="abc")
)
But I get:
File "pydantic\main.py", line 331, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 2 validation errors for Payload
params -> 0 -> 1 -> select_
field required (type=value_error.missing)
params -> 0 -> 1 -> filter_string
field required (type=value_error.missing)
So the question is: can I define multiple models that are part of a list in pydantic and still get validation?
Note that I would not like to use Union, since it implies an 'or' relationship. Here my response is always going to look like this: a list of a list with Fetch and Output, plus Repeat.
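There is no answer in this excerpt, but one possible direction, sketched under the assumption of pydantic v1 (which the traceback suggests): typing.Tuple validates fixed positions, so it can express "exactly a Fetch followed by an Output, then a Repeat" without a Union:
from typing import List, Tuple
from pydantic import BaseModel

class Fetch(BaseModel):
    select_: str
    filter_string: str

class Output(BaseModel):
    fields: List[str]

class Time(BaseModel):
    time_start: str
    time_end: str

class Repeat(BaseModel):
    repeat_timefilter: List[Time]

class Context(BaseModel):
    nwid: str

class Payload(BaseModel):
    method: str = "Data.get"
    # Each position is validated against its own model; incoming lists are coerced to tuples.
    params: Tuple[Tuple[Fetch, Output], Repeat]
    context: Context

payload = Payload(
    params=[[{"select_": "DowntimeMan", "filter_string": "cluster='A' AND nodename%('s2')"},
             {"fields": ["DowntimeMan", "rowquality"]}],
            {"repeat_timefilter": [{"time_start": "20220503.1600", "time_end": "420m"}]}],
    context={"nwid": "abc"},
)
One caveat: tuples serialize back to JSON arrays, so .json() still produces the nested-list shape shown in the schema.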

How to create JSON array and values using Groovy JsonBuilder

I'm trying to create the below JSON structure with Groovy JsonBuilder in JMeter, using a JSR223 sampler:
{
"Catalogues":[
{
"Catalogue":{
"itemName":"XYZ",
"Level":"Class1",
"Color":"Orange",
"Id":"232f2d6820822840"
},
"sales":[
[
"7:19 PM 3-31-2022",
"gadfma53742w3585657er43"
],
[
"5:02 PM 3-30-2022",
"iytyvh53742w3585657er41"
]
]
}
]
}
I have tried the below Groovy script:
def json = new groovy.json.JsonBuilder()
json {
Catalogues(
[{
Catalogue {
itemName('XYZ')
Level('Class1')
Color('Orange')
Id('232f2d6820822840')
},
{
sales(
('7:19 PM 3-31-2022'), ('gadfma53742w3585657er43')
)
}
}]
)
}
log.info '\n\n\n' + json.toPrettyString() + '\n\n\n'
Output:
{
"Catalogues":[
{
"Catalogue":[
{
"itemName":"XYZ",
"Level":"Class1",
"Color":"Orange",
"Id":"232f2d6820822840"
},
{
"sales":[
"7:19 PM 3-31-2022",
"gadfma53742w3585657er43"
]
}
]
}
]
}
Problems:
If I remove the '{' before sales (and the matching '}' after it), the sales values get added into Catalogue.
I'm unable to include the second set of sales values.
I'm suggesting another way to use the builder, because it's easier to understand.
To declare an array in Groovy, use [1, 2, 3].
To declare a map, use [a:1, b:2, c:3].
So, if you replace { with [ in the original JSON, you get a valid Groovy object that corresponds to the parsed JSON:
def data = [
"Catalogues":[
[
"Catalogue":[
"itemName":"XYZ",
"Level":"Class1",
"Color":"Orange",
"Id":"232f2d6820822840"
],
"sales":[
[
"7:19 PM 3-31-2022",
"gadfma53742w3585657er43"
],
[
"5:02 PM 3-30-2022",
"iytyvh53742w3585657er41"
]
]
]
]
]
//then you could convert it to json:
def json = new groovy.json.JsonBuilder(data).toPrettyString()
log.info(json)
JsonBuilder translates Map to JSON Object and List to JSON Array
So I would recommend for better readability and clarity amending your code to look like:
def body = [:]
def Catalogues = []
def Catalogue = [:]
def entry = [:]
Catalogue.put('itemName', 'XYZ')
Catalogue.put('Level', 'Class1')
Catalogue.put('Color', 'Orange')
Catalogue.put('Id', '232f2d6820822840')
def sales = []
sales.add(['7:19 PM 3-31-2022', 'gadfma53742w3585657er43'])
sales.add(['5:02 PM 3-30-2022', 'iytyvh53742w3585657er41'])
entry.put('Catalogue', Catalogue)
entry.put('sales', sales)
Catalogues.add(entry)
body.put('Catalogues', Catalogues)
def json = new groovy.json.JsonBuilder(body)
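As in the question's own script, converting the builder to a string should then reproduce the target structure (assuming the same JMeter/JSR223 context where log is available):
log.info '\n\n\n' + json.toPrettyString() + '\n\n\n'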
More information:
JsonBuilder
Apache Groovy - Parsing and producing JSON
Apache Groovy - Why and How You Should Use It

Groovy (GPars) and MissingMethodException when calling eachParallel()

When I run the following code in the console (groovy 2.1.3):
strings = [ "butter", "bread", "dragon", "table" ]
strings.eachParallel{ println "$it" }
I get:
groovy.lang.MissingMethodException: No signature of method: java.util.ArrayList.eachParallel() is applicable for argument types: (ConsoleScript40$_run_closure1) values: [ConsoleScript40$_run_closure1@a826f5]
Can anyone tell me what I am doing wrong?
I think you are missing the setup. Try:
@Grab(group='org.codehaus.gpars', module='gpars', version='1.0.0')
import groovyx.gpars.GParsPool
GParsPool.withPool {
def strings = [ "butter", "bread", "dragon", "table" ]
strings.eachParallel { println it }
}
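The parallel collection methods (eachParallel, collectParallel, and so on) are only mixed into collections inside a GParsPool.withPool block, which is why the plain ArrayList has no eachParallel. A small follow-up sketch under the same assumed GPars 1.0.0 setup, this time collecting results:
@Grab(group='org.codehaus.gpars', module='gpars', version='1.0.0')
import groovyx.gpars.GParsPool

GParsPool.withPool {
    def strings = [ "butter", "bread", "dragon", "table" ]
    // collectParallel maps in parallel and returns results in the original order
    def lengths = strings.collectParallel { it.size() }
    assert lengths == [6, 5, 6, 5]
}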

get this JSON representation of your neo4j objects

I want to get data from this array of JSON objects:
[
{
"outgoing_relationships": "http://myserver:7474/db/data/node/4/relationships/out",
"data": {
"family": "3",
"batch": "/var/www/utils/batches/d32740d8-b4ad-49c7-8ec8-0d54fcb7d239.resync",
"name": "rahul",
"command": "add",
"type": "document"
},
"traverse": "http://myserver:7474/db/data/node/4/traverse/{returnType}",
"all_typed_relationships": "http://myserver:7474/db/data/node/4/relationships/all/{-list|&|types}",
"property": "http://myserver:7474/db/data/node/4/properties/{key}",
"self": "http://myserver:7474/db/data/node/4",
"properties": "http://myserver:7474/db/data/node/4/properties",
"outgoing_typed_relationships": "http://myserver:7474/db/data/node/4/relationships/out/{-list|&|types}",
"incoming_relationships": "http://myserver:7474/db/data/node/4/relationships/in",
"extensions": {},
"create_relationship": "http://myserver:7474/db/data/node/4/relationships",
"paged_traverse": "http://myserver:7474/db/data/node/4/paged/traverse/{returnType}{?pageSize,leaseTime}",
"all_relationships": "http://myserver:7474/db/data/node/4/relationships/all",
"incoming_typed_relationships": "http://myserver:7474/db/data/node/4/relationships/in/{-list|&|types}"
}
]
What I tried is:
def messages=[];
for ( i in families) {
messages?.add(i);
}
How can I get families.data.name into the messages array?
Here is what I tried:
def messages=[];
for ( i in families) {
def map = new groovy.json.JsonSlurper().parseText(i);
def msg=map*.data.name;
messages?.add(i);
}
return messages;
and I get this error:
javax.script.ScriptException: groovy.lang.MissingMethodException: No signature of method: groovy.json.JsonSlurper.parseText() is applicable for argument types: (com.tinkerpop.blueprints.pgm.impls.neo4j.Neo4jVertex) values: [v[4]]
Possible solutions: parseText(java.lang.String), parse(java.io.Reader)
Or use Groovy's native JSON parsing:
def families = new groovy.json.JsonSlurper().parseText( jsonAsString )
def messages = families*.data.name
Since you edited the question to give us the information we needed, you can try:
def messages=[];
families.each { i ->
def map = new groovy.json.JsonSlurper().parseText( i.toString() )
messages.addAll( map*.data.name )
}
messages
Though it should be said that the toString() method in com.tinkerpop.blueprints.pgm.impls.neo4j.Neo4jVertex makes no guarantees to be valid JSON... You should probably be using the getProperty( name ) function of Neo4jVertex rather than relying on a side-effect of toString()
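A rough sketch of that getProperty() approach (hypothetical: it assumes each i really is a Blueprints Neo4jVertex and that the vertex carries the 'name' property shown in the data block above):
def messages = []
for (i in families) {
    // read the property straight off the vertex instead of re-parsing its toString() output
    messages.add(i.getProperty('name'))
}
return messages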
What are you doing to generate the first bit of text (which you state is JSON but make no mention of how it's created)?
Use JSON-lib.
GJson.enhanceClasses()
def families = json_string as JSONArray
def messages = families.collect {it.data.name}
If you are using Groovy 1.8, you don't need JSON-lib anymore as a JsonSlurper is included in the GDK.
import groovy.json.JsonSlurper
def families = new JsonSlurper().parseText(json_string)
def messages = families.collect { it.data.name }
