Recommended way to achieve `rescue-ensure` kind of functionality with Kiba? - kiba-etl

We have a Kiba pipeline where we need to do some task after the job has ended, whether or not there were errors (the whole pipeline doesn't fail; we just have a couple of validation errors or similar).
This is what the documentation says:
:warning: Post-processors won't get called if an error occurred before them.
https://github.com/thbar/kiba/wiki/Implementing-pre-and-post-processors
Would this be the recommended way to do this:
Kiba.run(
  Kiba.parse do
    source(...)
    transform(...)
    destination(...)
    post_process do
      # we cannot do it here, because it won't get called
    end
  end
)
# is this the location to do it?
Job.some_special_cleanup_task
Thanks!
PS: what does this mean exactly:
Post-processors won't get called if an error occurred before them.
Does this mean an error that occurred and wasn't rescued from?

Indeed post_process will not be called in case of errors, as documented and as you pointed out!
At this point, the best solution is to use a form of ensure statement:
A common way to structure that is:
module ETL
  module Job
    module_function

    def setup(config)
      Kiba.parse do
        source(...)
        transform(...)
        destination(...)
      end
    end

    def some_special_cleanup_task
      # ...
    end

    def run(job)
      Kiba.run(job)
    ensure
      Job.some_special_cleanup_task
    end
  end
end
Doing so allows you to keep the code for the always-run task close to the ETL job, which is nice.
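For example (the config hash here is purely illustrative), a caller such as a Rake task could then do:
config = { some_option: "value" } # hypothetical configuration handed to setup
job = ETL::Job.setup(config)
ETL::Job.run(job) # some_special_cleanup_task runs even if Kiba.run raises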
If your task is instead very independent from the job, and you want to encourage reuse between jobs, you can also create a generic block-form component:
module ETL
  module Middlewares
    module CommonTask
      module_function

      def with_common_task
        yield
      ensure
        do_the_common_task
      end
    end
  end
end
Which you would use like this:
ETL::Middlewares::CommonTask.with_common_task do
  Kiba.run(...)
end
This second form is used by Kiba Pro FileLock, for instance.
In the future, Kiba ETL will introduce a form of middleware to make this even easier.
Hope this helps, and please mark the question as answered if it properly solves your problem!

Related

How to print a user's error message when lr script fails

I am new to LoadRunner. I know the error handling part. But recently I came across a situation in which I use a parameter with the "unique each iteration" setting. This data is exhaustible in nature, i.e. I cannot use the values again.
Now my script is getting an error where it shouldn't be, probably at the breaking point of the application. So I want to print that parameter whenever the script fails anywhere.
Any idea how to achieve this?
Thanks in advance.

How to stop the whole test execution but with PASS status in RobotFramework?

Is there any way I can stop the whole robot test execution with PASS status?
For some specific reasons, I need to stop the whole test but still get a GREEN report.
Currently I am using FATAL ERROR, which raises an assertion error and returns FAIL in the report.
I was trying to create a user keyword to do this, but I am not really familiar with the Robot Framework error handling process; could anyone help?
There's an attribute ROBOT_EXIT_ON_FAILURE in BuiltIn.py, and I am thinking about creating another attribute like ROBOT_EXIT_ON_SUCCESS, but have no idea how to.
Environment: robotframework==3.0.2 with Python 3.6.5
There is nothing built-in to support this. By design, a fatal error will cause all remaining tests and suites to have a FAIL status.
Just about your only choice is to write a keyword that sets a global variable, and then have every test include a setup that uses Pass Execution If to skip the test if the flag is set.
If I understood you correctly, you need to pass the test execution forcefully and return a green status for that test, is that right? There is a built-in keyword Pass Execution for that. Did you try using it?

How do I export runtime datatable into excel if any error occurs due to data?

I want to know if I can export a datatable into excel when I get an error due to data while running the scripts.
If I have 5 records in a sheet and 2 records are processed well, but while running the third record my script encounters an error, am I able to export to Excel at that moment?
Errors may occur at any place because of the data.
Your question doesn't explicitly say QTP, but I'm assuming QTP because you used the tag HP-UFT.
I'm not sure what you mean by "when we get error", so I'll explore two possibilities.
1) You're getting an error in the application you are testing; QTP itself is still executing the script.
In this situation, your script should have validation checks (if statements that check to make sure that what you expected to happen did indeed just happen), and if those checks fail, you could immediately do a DataTable.Export(filename) to save the data to disk before QTP ends. Then, the script could continue, or you can add an ExitTest to fail out and stop the test.
Based on your question, I think it's more likely that:
2) You're getting an error in QTP itself. When QTP crashes, it drops any dynamic changes to the DataTable (i.e. if you had done a DataTable.Import(filename) or updated any fields, it would lose that and go back to its design-time DataTable instead).
In this situation, your script is encountering something that is causing QTP itself to stop the script. Perhaps it's hitting an error where an object cannot be found, or some kind of syntax error. You should consider adding defensive statements to check on things before your code reaches the point that this kind of error would occur... For example, perhaps add...
If Not Browser("ie").Page("page").WebTable("table").Exists Then
  FailTestBecause "Can't find table"
End If
...
Function FailTestBecause(reason)
  Print "Test Failed Because: " & reason
  Reporter.ReportEvent micFail, Environment("ActionName"), reason
  DataTable.Export(filename)
  ExitTest
End Function
Or, you could just use an On Error Resume Next and put in a command to DataTable.Export(filename) immediately after where it is failing...

How to pass parameters into your ETL job?

I am building an ETL job which will be run on different sources, selected by a variable.
How can I execute my job (rake task)
Kiba.run(Kiba.parse(IO.read(etl_file),etl_file))
and pass in parameters for my etl_file to then use for its sources?
source MySourceClass(variable_from_rake_task)
Author of Kiba here.
EDIT: the solution below still applies, but if you need more flexibility, you can pass a block to Kiba.parse and use regular Ruby variables inside it. See https://github.com/thbar/kiba/wiki/Considerations-for-running-Kiba-jobs-programmatically-(from-Sidekiq,-Faktory,-Rake,-...) for a detailed explanation.
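A minimal sketch of that block form (the variable name is illustrative; Customers stands for whatever source class you use):
customer_ids = [10, 11, 12]
job = Kiba.parse do
  # locals from the enclosing scope are visible inside the block
  source Customers, ids: customer_ids
  # transform(...) / destination(...) as usual
end
Kiba.run(job)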
Since you are using a Rake task (and not calling Kiba in a parallel environment, like Resque or Sidekiq), what you can do right now is leverage ENV variables, like this:
CUSTOMER_IDS=10,11,12 bundle exec kiba etl/upsert-customers.etl
Or, if you are using a rake task you wrote, you can do:
task :upsert_customers => :environment do
  ENV['CUSTOMER_IDS'] = [10, 11, 12].join(',')
  etl_file = 'etl/upsert-customers.etl'
  Kiba.run(Kiba.parse(IO.read(etl_file), etl_file))
end
Then in upsert-customers.etl:
# quick parsing
ids = ENV['CUSTOMER_IDS'].split(',').map { |c| Integer(c) }
source Customers, ids: ids
As I stated before, this will only work for command line mode, where ENV can be leveraged safely.
For parallel executions, please indeed track https://github.com/thbar/kiba/issues/18 since I'm going to work on it.
Let me know if this properly answers your need!
Looks like this is tracked here https://github.com/thbar/kiba/issues/18 and already asked here Pass Parameters to Kiba run Method

Load Steps into Console?

Is it possible to load the step definitions I have defined into the calabash-android console?
I would like to be able to use them when navigating the app within the console.
Thanks
No, from the console you cannot run a single step definition.
But you can start execution of a test at a specific line by appending a parameter to the call that starts your test:
:<linenumber>
This will start execution of your feature file from that specific line, and it will run from there to the end of the file.
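For example, assuming a feature file at features/my_feature.feature (name illustrative), and given that calabash-android forwards extra arguments to cucumber, the call could look like:
calabash-android run my_app.apk features/my_feature.feature:25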
So while it is not what you are looking for, at least it is something.
Did you try the step('<step_name>') method?
To be honest I'm not sure if this will work. I know it works inside Ruby methods and step definitions - I wanted to post a comment but I can't with 28 points of reputation ;)
You can also try making Ruby methods out of the code from within the step definition:
Then /^I do something$/ do
  # some code goes here
end

def do_something
  # same code as in the step definition
end
or just use the step method:
def do_something
  step('I do something')
end
and then call it in a calabash console (I prefer using binding.pry inside some script rather than calling a "pure" calabash console - that way I am sure that I have all the needed methods included).
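A minimal sketch of that binding.pry approach, under a few assumptions: the calabash-android gem exposes its API via Calabash::Android::Operations (as its console does), and the helper only uses plain calabash calls rather than step:
require 'pry'
require 'calabash-android/operations'
include Calabash::Android::Operations

def do_something
  # same calabash calls you would otherwise put in the step definition,
  # e.g. touch("* marked:'OK'")
end

binding.pry # interactive session with do_something and the calabash API in scope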
