Passing input from Excel to test class

I have multiple test cases in a single class. The test Excel file has three different worksheets/tabs: 1, 2 and 3, and there are three test cases in my test class.
I looked into the @DataProvider annotation; what I understood is that it executes the same test case for the whole object array passed in. In my case it would run test case 1 first for all rows from tab 1, then test case 2 for all rows from tab 2, and so on.
What I am looking for is the following:
for each row i in the Excel file:
    Execute test 1 with row i from tab 1
    Execute test 2 with row i from tab 2
    Execute test 3 with row i from tab 3
(form i complete, proceed to the next form's data)
What I could do is read the whole Excel file into an Object[][], create a data provider for each test case, and let them run in a for loop. For example:
CLASS
{
    for loop
    {
        data provider 1, 2, 3;
        @Test
        function testcase1()
        @Test
        function testcase2()
        @Test
        function testcase3()
    }
}
Is this a valid approach, or does it defeat the purpose of TestNG?

This approach is called data-driven testing. Your particular case is one of the typical data sources used for it:
Data pools
ODBC sources
CSV files
Excel files
DAO objects
ADO objects
IMHO the TestNG feature of parameterized testing uses the same approach. It allows you to run the same test over and over again with different values.
I'm not a big fan of static test data (see the Pesticide paradox); my advice is to either generate your Excel files per run (as a Fresh Fixture) or pass dynamically generated test data to your tests.
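As a rough illustration of that second option (a sketch only; the class name, column meanings and generated values here are invented for the example), a @DataProvider can build its rows in code on every run instead of loading a static sheet:

import java.util.UUID;
import java.util.concurrent.ThreadLocalRandom;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class FreshFixtureTest {

    // Builds a fresh set of rows on every run instead of reading a static Excel file.
    @DataProvider(name = "freshRows")
    public Object[][] freshRows() {
        Object[][] rows = new Object[3][2];
        for (int i = 0; i < rows.length; i++) {
            rows[i][0] = "user-" + UUID.randomUUID();                  // a unique login per run
            rows[i][1] = ThreadLocalRandom.current().nextInt(1, 100);  // an arbitrary quantity
        }
        return rows;
    }

    @Test(dataProvider = "freshRows")
    public void testcase1(String login, int quantity) {
        // exercise the application with the generated data
    }
}

Because the data is rebuilt every run, the tests never settle into always exercising the same static rows.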

A @DataProvider method can also take a java.lang.reflect.Method argument, so you can return objects based on the test method being invoked, i.e. if the test method's name is testcase1, read tab 1, build the object array and return it, and so on.
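A minimal sketch of that idea (assumptions: the readTab() helper, the sheet names and the row layout are made up here; in practice you would parse the workbook with something like Apache POI):

import java.lang.reflect.Method;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class OrderFormTests {

    // One provider serves all three tests; TestNG injects the test method being run.
    @DataProvider(name = "excelData")
    public Object[][] excelData(Method testMethod) {
        switch (testMethod.getName()) {
            case "testcase1": return readTab("Tab1");
            case "testcase2": return readTab("Tab2");
            case "testcase3": return readTab("Tab3");
            default: throw new IllegalArgumentException("No sheet mapped for " + testMethod.getName());
        }
    }

    // Hypothetical helper: reads one worksheet and returns one Object[] per data row,
    // each wrapped so that a whole row arrives as a single test parameter.
    private Object[][] readTab(String sheetName) {
        return new Object[0][];  // placeholder; real code would read the sheet
    }

    @Test(dataProvider = "excelData")
    public void testcase1(Object[] row) { /* fill form step 1 with this row */ }

    @Test(dataProvider = "excelData")
    public void testcase2(Object[] row) { /* fill form step 2 with this row */ }

    @Test(dataProvider = "excelData")
    public void testcase3(Object[] row) { /* fill form step 3 with this row */ }
}

Note that this still runs each test for all of its rows before moving to the next test; if the three tests must share the same row index as one scenario, one common TestNG approach is a @Factory that creates one test-class instance per row.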

Update a parameter value in Brightway

It seems to be a simple question, but I have a hard time finding an answer to it. I already have a project with several parameters (project and database parameters). I would like to obtain the LCA results for several scenarios, with my parameters having different values each time. I was thinking of the following simple procedure:
change the parameters' value,
update the exchanges in my project,
calculate the LCA results.
I know that the answer should be in the documentation somewhere, but I have a hard time understanding how I should apply it to my ProjectParameters, DatabaseParameters and ActivityParameters.
Thanks in advance!
EDIT: Thanks to @Nabla, I was able to come up with this:
For ProjectParameter
import brightway2 as bw
from bw2data.parameters import ProjectParameter

for pjparam in ProjectParameter.select():
    if pjparam.name == 'my_param_name':
        break
pjparam.amount = 3
pjparam.save()
bw.parameters.recalculate()
For DatabaseParameter
for dbparam in DatabaseParameter.select():
    if dbparam.name == 'my_param_name':
        break
dbparam.amount = 3
dbparam.save()
bw.parameters.recalculate()
For ActivityParameter
for param in ActivityParameter.select():
    if param.name == 'my_param_name':
        break
param.amount = 3
param.save()
param.recalculate_exchanges(param.group)
You could import DatabaseParameter and ActivityParameter, iterate until you find the parameter you want to change, update the value, save it, and recalculate the exchanges. I think you need to do it in tiers: first you update the project parameters (if any), then the database parameters that may depend on project parameters, and then the activity parameters that depend on them.
A simplified case without project parameters:
from bw2data.parameters import ActivityParameter, DatabaseParameter

# find the database parameter to be updated
for dbparam in DatabaseParameter.select():
    if (dbparam.database == uncertain_db.name) and (dbparam.name == 'foo'):
        break
dbparam.amount = 3
dbparam.save()
# there is also this method if formulas depend on something else
# dbparam.recalculate(uncertain_db.name)

# here we update the exchanges of a particular activity (act)
for param in ActivityParameter.select():
    if param.group == ":".join(act.key):
        param.recalculate_exchanges(param.group)
You may want to update all the activities in the project instead of a single one as in the example; you just need to change the condition when looping through the activity parameters.

How to append array data to array variable in logic app?

I'm calling an API using the HTTP connector and getting an array of results, inside an Until loop, so on every iteration I get some records in the result array.
Now I want to append all the records together so that I have all of them.
For example, the 1st time I get 2 records (as below) and the 2nd time 1 record; I want to append them so there are 3 in total.
1st iteration result -
"results":[
{"id":"2","name":"t1"},{"id":"3","name":"t4"}
]
2nd iteration result -
"results":[
{"id":"66","name":"i7"}]
I want to append all data so that final result will be like -
[{"id":"2","name":"t1"},{"id":"3","name":"t4"}, {"id":"66","name":"i7"}]
Instead of For each I tried using the Append to array variable action, but it throws the error below:
its a type of array need to be string to append.
I am able to achieve it using For each, but it doesn't make sense to use For each just to add values; if there is a way to append an array directly, that would be great.
You can use inline JavaScript code to implement your requirement. I did some tests on my side, posting two arrays (result1 and result2) to the logic app and composing them using JS:
Please note that if you want to use this feature, you should create an integration account and associate it with your logic app in the "Workflow settings" blade.
The above solution works only if you have an integration account.
Another simple option is to use the union function inside a Compose action to append the two array collections:
union(variables('ResponseArray'),body('Response'))
https://learn.microsoft.com/en-us/azure/logic-apps/workflow-definition-language-functions-reference#union

Mocking Postgresql now() function for testing

I have the following stack
Node/Express backend
Postgresql 10 database
Mocha for testing
Sinon for mocking
I have written a bunch of end-to-end tests to test all my web services. The problem is that some of them are time-dependent (as in "give me the records modified in the last X seconds").
Sinon is pretty good at mocking all the time/date related stuff in Node; however, I have a modified field in my PostgreSQL tables that is populated by a trigger:
CREATE FUNCTION update_modified_column()
RETURNS TRIGGER AS $$
BEGIN
    NEW.modified = now();
    RETURN NEW;
END;
$$ LANGUAGE 'plpgsql';
The problem, of course, is that Sinon can't override that now() function.
Any idea how I could solve this? The problem is not setting a specific date at the start of the test, but advancing time faster than real time (in one of my tests I want to change some stuff in the database, advance the 'current time' by one day, change some more stuff in the database, and make webservice calls to see the result).
I can figure out a few solutions myself, but they all involve changing the application code and making it less elegant. I don't think your application code should be impacted by the fact that you want to test it.
Here's an idea: create your own mock_now() backed by a mock_dates table:
create table mock_dates (
    id serial PRIMARY KEY,
    mock_date timestamptz not null
);

create or replace function mock_now()
returns timestamptz
as $$
declare
    RET timestamptz;
begin
    -- Delete first added date and assign it to RET
    delete from mock_dates where id in (
        select id from mock_dates order by id asc limit 1
    )
    returning mock_dates.mock_date into RET;

    -- If no deletion happened just return the current timestamp
    if RET is null then
        return now();
    end if;

    -- Otherwise return the mocked date
    return RET;
end;
$$
language plpgsql;
And insert some mocked dates
insert into mock_dates (mock_date) values ('2001-03-11 02:34:00'::timestamptz);
insert into mock_dates (mock_date) values ('2002-05-22 01:49:00'::timestamptz);
and use mock_now() instead of now(). It will return each timestamp inserted into the mock_dates table exactly once (first in, first out).
When the table is empty it will work like the default now().
Just ensure the mock_dates table is empty in production 😁
Or you could even define a different function for production which does not even try to read the mock_dates table.
I found a very neat gist which provides a fake NOW() function that lives in a separate schema. You load it into your test database and then modify the search path of each testing session to search that override schema before pg_catalog. Two functions, freeze_time and unfreeze_time, are provided to enable and disable frozen time.
To be honest, DB internal stuff is always hard to test from application code. In my experience, the best thing to do is to just verify the record's state.
So, rather than testing specifically that the now() function is called internally, write a test that creates a new record, then verify that the record has its created and modified fields set. At that point they should equal each other, and probably be within the last second or two, so that's all stuff you can write assertions around.
Then you write another test that changes some value in the record, and write assertions that the modified stamp is different from the created stamp, is more recent, and is probably within the last second or two.

Transfer Groovy script response to properties in SOAP UI 5.21

Does anyone know how to transfer a Groovy script response into the Properties step of SoapUI? I am trying to generate random numbers using a Groovy script, and once I get the randomly generated numbers, how do I transfer that value to a property in SoapUI which can be used by the test cases as a parameterized value?
TIA
To make it simple, use the code below to store any value on:
test case level custom properties:
testRunner.testCase.setPropertyValue("propertyName","value");
test suite level custom properties:
testRunner.testCase.testSuite.setPropertyValue("propertyName","value");
project level custom properties:
testRunner.testCase.testSuite.project.setPropertyValue("propertyName","value");
Use the code below to check whether the value was stored successfully at runtime:
test case level:
log.info testRunner.testCase.getPropertyValue("propertyName");
test suite level:
log.info testRunner.testCase.testSuite.getPropertyValue("propertyName");
project level:
log.info testRunner.testCase.testSuite.project.getPropertyValue("propertyName");
Use the property expansion syntax below to use the property value anywhere on:
test case level:
${#TestCase#propertyName}
test suite level:
${#TestSuite#propertyName}
project level:
${#Project#propertyName}
global level:
${#Global#propertyName}
Here you go:
The Groovy script snippet below will generate a random number and set the value into a test-case-level custom property, say PROPERTY_NAME.
Groovy Script
context.testCase.setPropertyValue('PROPERTY_NAME', (Math.abs(new Random().nextInt()) + 1).toString())
In the same test case, it can be accessed in any test requests as ${#TestCase#PROPERTY_NAME}
EDIT: Based on the change you wanted (though the original code above works as well):
def a = 9
def AccountName = ''
(0..a).each { AccountName = AccountName + new Random().nextInt(a) }
context.testCase.setPropertyValue('Property_Name', AccountName.toString())
You can even achieve the same thing using the line below (the first answer's code with just an updated value passed to nextInt()):
context.testCase.setPropertyValue('PROPERTY_NAME', (Math.abs(new Random().nextInt(999999998)) + 1).toString())

Cucumber multiple assertions in a step

I am trying to validate one block of JSON data that I receive from the server.
The JSON consists of information about a bunch of orders. Each order includes the cost of each part, taxes and a total. It is a fairly strict requirement that each order contains exactly 4 parts, and each order has three kinds of taxes and a total.
I have a step which looks like this
And "standardorder" includes parts "1..4", taxes "1..3" and total
and the step implementation is like the following. Here @jsonhelper.json is shared state (the JSON for one order) passed from a previous step.
And /^"([^"]*)" includes parts "([^"]*)", taxes "([^"]*)" and total$/ do |arg1, arg2, arg3|
json = #jsonhelper.json
validkeys = ["total"]
parts = arg2.split('..').map{|d| Integer(d)}
(parts[0]..parts[1]).each do |i|
validkeys.push "p#{i}"
end
taxes = arg3.split('..').map{|d| Integer(d)}
(taxes[0]..taxes[1]).each do |i|
validkeys.push "t#{i}"
end
validkeys.each do |key|
json[arg1].keys.include?(key).should be_true
end
end
Now this script works fine, except that if any one key is missing it doesn't state which one; it simply passes or fails as the assertions are iterated for each key.
I would like to know if there is any way of sending the keys that are found OK to the result stream. My intention is to know which keys are OK, which failed and which were skipped. The order of keys in the JSON is not expected to matter.
Thanks in advance.
It's probably best to split the step definitions first:
And "standardorder" should be received
And the order should include parts 1 to 4
And the order should include taxes 1 to 3
And the order should include the total
Then you can re-use the steps elsewhere.
The 'order' check is easy to implement, as you're just checking one element.
For the other two, you are really just checking the presence of items in an array, e.g.:
actual_values.should == expected_values
If that fails, RSpec will give you a report showing how the arrays differ.
