Accessing the test interface shutdown callback method inside the DUT class? - origen-sdk

Is it possible to access the same shutdown method found in the test interface from within the DUT class? I see the test program generation callbacks here, but none of them reach the end of the Origen generation command. We would like to be able to do this without having to point the Gemfile at a local path for the test interface gem and set a breakpoint there.
# This will be called at the end of every flow or sub-flow (at the end of every
# Flow.create block).
# Any options passed to Flow.create will be passed in here.
# The options will contain top_level: true, whenever this is called at the end of a
# top-level flow file.
def shutdown(options = {})
  binding.pry
  # Write the tests disabled/removed to the .tf file
  render "\n"
  [:defined, :enabled, :disabled, :removed].each do |category|
    test_list = Origen.top_level.test_modules(options[:test_module]).send("tests_#{category}".to_sym)
    render "-- #{category.to_s.capitalize} test count: #{test_list.size}"
    unless test_list.empty?
      render "-- #{category.to_s.capitalize} Tests: #{test_list.to_csv}" if category.smatch(/remove|disable/)
    end
  end
end
The existing 'on_flow_end' callback is not equivalent to the 'shutdown' callback in the test interface shown above.
thx

There is no automatic hookup for that, but it's easy enough to implement in your application:
# my/interface.rb
def shutdown(options = {})
  dut.some_shutdown_method
end
You may also consider the on_origen_shutdown callback if you want to target the very end of the Origen generate command:
http://origen-sdk.org/origen/guides/misc/callbacks/#Environment_Teardown
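For example, a minimal sketch of hooking that from the DUT side (the class and method body are assumptions; it assumes the DUT is instantiated by the target and is therefore registered as a callback listener, per the callbacks guide linked above):
# Hypothetical DUT hooking the end of the Origen command
module MyApp
  class DUT
    include Origen::TopLevel

    # Runs once at the very end of the Origen command, after all flows
    # (and their interface shutdown callbacks) have completed
    def on_origen_shutdown
      # final reporting / cleanup here
    end
  end
end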

Related

Jmeter: how to disable listener by code (groovy)

I've tried to disable View Results Tree by Groovy code. The code runs, and correctly shows and changes the name and enabled property (as reported by the log), but the info in the GUI doesn't stop appearing and the listener still writes to its file (in both GUI and non-GUI mode). Listeners are processed at the end, so IMHO code executed in the setUp thread should affect the logging of the other threads. What does the enabled property actually do?
I've seen a workaround that edits the JMeter plan file in place (JMeter: how can I disable a View Results Tree element from the command line?), but I'd like an internal JMeter/Groovy solution.
The code (interestingly, each listener is processed twice: the first pass prints View Results Tree, the second already prints foo):
import org.apache.jmeter.engine.StandardJMeterEngine
import org.apache.jorphan.collections.HashTree
import org.apache.jorphan.collections.SearchByClass
import org.apache.jmeter.reporters.ResultCollector
def engine = ctx.getEngine()
def test = engine.getClass().getDeclaredField("test")
test.setAccessible(true)
def testPlanTreeRC = (HashTree) test.get(engine)
def rcSearch = new SearchByClass<>(ResultCollector.class)
testPlanTreeRC.traverse(rcSearch)
def rcTrees = rcSearch.getSearchResults()
for (rc in rcTrees) {
    log.error(rc.getName())
    if (rc.isEnabled()) {log.error("yes")} else {log.error("no")}
    rc.setEnabled(false)
    if (rc.isEnabled()) {log.error("yes")} else {log.error("no")}
    if (rc.getName() == "View Results Tree") {rc.setName("foo")}
}
ADDED: when a listener is disabled in the test plan via the GUI, it is not found by the traverse code above.
The disabled property is used/checked by JMeter on startup, so this would require a change in JMeter's code.
I opened an enhancement request, Add option to disable View Results Tree/Listeners in non GUI, which you can vote on.
There are other options: execute JMeter externally using the Taurus tool, or execute JMeter from Java and disable the listeners there:
// Assumes the usual JMeter API imports (SaveService, HashTree, SearchByClass,
// ResultCollector, TestElement, StandardJMeterEngine) and that the JMeter
// properties have been loaded (e.g. via JMeterUtils.loadJMeterProperties)
StandardJMeterEngine jmeter = new StandardJMeterEngine();
HashTree testPlanTree = SaveService.loadTree(new File("/path/to/your/testplan"));
SearchByClass<ResultCollector> listenersSearch = new SearchByClass<>(ResultCollector.class);
testPlanTree.traverse(listenersSearch);
Collection<ResultCollector> listeners = listenersSearch.getSearchResults();
// disable every ResultCollector-based listener before running the plan
listeners.forEach(listener -> listener.setProperty(TestElement.ENABLED, false));
jmeter.configure(testPlanTree);
jmeter.run();

JMeter timer does not wait for the time configured in the sampler

I'm trying to create a scenario where I put a user-defined delay in my test.
At the start of the test I created a JSR223 sampler that creates a variable called
vertica_results_delay and puts the value 400000 in it.
Then I created a timer and put ${vertica_results_delay} in it, since I want the delay to be configured at the start of the test. The problem is that JMeter ignores my value and does not wait.
If I use a User Defined Variables field and put vertica_results_delay = 4000 it works, but then all the tests get the same delay. I do not want to create a hard-coded delay; I want to set all of the test's properties at the start of the test using JSR223.
String vertica_results_delay = "400000";
vars.put("vertica_results_delay", vertica_results_delay);
log.error("vertica_results_delay " + vertica_results_delay);
Check JMeter order of execution
Configuration elements
Pre-Processors
Timers
Sampler
Your sampler is executed after the Timer, so you need to set the value before it.
Add a JSR223 PreProcessor outside the Thread Group with your code, and the delay value will be set before the Timer is executed.
A Timer is a scoped element which is executed before each sampler, so what happens in your case is that the JSR223 Sampler is executed after the Timer.
See:
http://jmeter.apache.org/usermanual/test_plan.html#scoping_rules
http://jmeter.apache.org/usermanual/test_plan.html#executionorder
To fix your issue, set the delay value in a setUp Thread Group, or, if you only want to set it from outside of JMeter, just use the __P function
and pass the value on the command line:
-Jkey=value
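For instance, a minimal sketch using the names from this question (only the property name is an assumption): put
${__P(vertica_results_delay,4000)}
in the timer's delay field, where 4000 is the default used when no property is passed, and run JMeter with:
jmeter -n -t test_plan.jmx -Jvertica_results_delay=400000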

Invoking Transaction controller or HTTP sampler from Bean shell/JSR223

Problem statement:
I have a set of transactions (1000+) and need to call or reuse them (without duplicating them in different If/Switch controllers) by invoking them from Beanshell or JSR223.
In SoapUI we have a Groovy script option to break sequential execution and divert control to any request using the command below:
if (Math.random() > 0.5)
    testRunner.runTestStepByName("Request 1")
else
    testRunner.runTestStepByName("Request 2")
// do something else
....
The same functionality is available in LoadRunner (run-time settings with different actions) and NeoLoad too.
Do we have any built-in objects or functions to execute a Transaction Controller or Sampler by name from JSR223/Beanshell, without using an If/While/Switch controller?
For example:
The script contains 10 transactions, and I want to use the same script for different scenarios by setting a JMeter property during execution through Jenkins or the command prompt:
__P(Flow,RoomBooking)
Then from a JSR223/Beanshell sampler:
if(Flow=="RoomBooking"){
invoke Login
invoke BookRoom
invoke Logout
} else if(Flow=="RoomBookingNBookItinerary")
invoke Login
invoke BookRoom
invoke BookItinerary
invoke Logout
}else if(Flow=="RoomBookingNcancel")
invoke Login
invoke BookRoom
Invoke ParkTicket
invoke CancelRoom
invoke Logout
}Like different flows with different thread and throughput
In this case I can mix and match different flows and reuse the same script for each flow.
This would help to reduce script rework effort during application changes.
You are right, JMeter doesn't have a JSR223 Logic Controller at all.
I think it could also help to change the If Controller this way.
I suggest you open an enhancement request for JMeter (choose Severity: enhancement).
EDIT
There's a new Bug 61711 - Add JSR223 Logic Controller that you can vote on.
If you are looking for a way to execute a previous sampler one more time from the JSR223 Script it would be something like:
ctx.getPreviousSampler().sample(null)
where ctx stands for JMeterContext; see its documentation for all available methods and fields.
However, a better idea would be to use JMeter's Module Controller, which allows executing a part of the JMeter test plan defined somewhere else; this way you can implement a form of a goto statement in JMeter.
You can possibly do it with a Switch Controller.
Any step will be a Transaction Controller,
and in a JSR223 Sampler you'll set which step you want:
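For instance, a minimal sketch of that idea (the myStep variable, the Flow property, and the step names are assumptions; the Switch Controller's Switch Value would be set to ${myStep}, with the correspondingly named Transaction Controllers placed underneath it):
// JSR223 Sampler (Groovy): decide which step the Switch Controller should run next
def flow = props.get('Flow') ?: 'RoomBooking'   // read the -JFlow property, with a default
if (flow == 'RoomBooking') {
    vars.put('myStep', 'Login')     // the Switch Controller will run the child named 'Login'
} else {
    vars.put('myStep', 'BookRoom')
}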

test program flow generation losing options from main flow file on subsequent inits of test interface

I pass in some parameters via the options hash in my main flow file:
Flow.create(environment: :ws, interface: 'MyTestLibrary::Interface', lib_version: Origen.top_level.lib_version) do
import "components/bist"
import "components/func"
pass 1, softbin: 55
end
The problem is that the options do not stay persistent when other sub-flows are called. Here is the pry session from the first time the test interface is called:
14: def initialize(options = {})
15: options = {
16: lib_version: nil
18: }.merge!(options)
=> 19: binding.pry
[1] pry(#<MyTestLibrary::Interface>)> options
=> {:lib_version=>"3.14", :environment=>:ws, :interface=>"MyTestLibrary::Interface"}
However, here is the pry session from the second time the same breakpoint is hit:
[1] pry(#<MyTestLibrary::Interface>)> options
=> {:lib_version=>nil}
I guess I have a couple of questions:
Aren't the main flow options supposed to persist into sub-flows, with no added work from the user?
Why is the interface being re-initialized at all? Seems like that should only occur once per generation command.
thx in advance
EDIT
@Ginty, you said the following in your answer:
As far as the options passed into the top-level flow go, there is not really any guarantee about passing them into initialize. Rather the interface should create startup and shutdown methods if it wants to intercept them:
But in the docs, I see the following stated:
For an interface to run it must implement all of the methods that will be called by your flow. It is also customary to create an initialize method that will capture any options that are passed in to Flow.create (such as declaring the environment as probe in our flow example).
Also, the startup method looks like a callback that gets run after the interface is initialized. The information I am passing using the options hash is required before the interface completes initialization. Isn't this creating a brittle run-order dependency the downstream user shouldn't need to worry about?
regards
Say we have two top-level flows, and a flow component:
# program/prb1.rb
# program/prb1.rb
Flow.create interface: 'MyApp::Interface', temperature: 25 do
  import 'components/ram'
end

# program/prb2.rb
Flow.create interface: 'MyApp::Interface', temperature: 125 do
  import 'components/ram'
end

# program/components/_ram.rb
Flow.create do |options|
end
And this interface:
module MyApp
  class Interface
    include OrigenTesters::ProgramGenerators

    def initialize(options = {})
      puts "New interface!"
      puts "The temperature is: #{options[:temperature]}"
      super
    end
  end
end
If we then generate both flows by running the program generator on the program directory, origen p program, we see the interface get instantiated twice, once per top-level flow:
$ origen p program
[INFO] 0.006[0.006] || **********************************************************************
[INFO] 0.010[0.004] || Generating... prb1.rb
New interface!
The temperature is: 25
[INFO] 0.024[0.014] || Generating... prb2.rb
New interface!
The temperature is: 125
[INFO] 0.052[0.028] || Writing... prb1.tf
[INFO] 0.053[0.001] || *** NEW FILE *** To save it: cp output/testflow/mfh.testflow.group/prb1.tf .ref/prb1.tf
[INFO] 0.054[0.000] || **********************************************************************
[INFO] 0.058[0.004] || Writing... prb2.tf
[INFO] 0.059[0.001] || *** NEW FILE *** To save it: cp output/testflow/mfh.testflow.group/prb2.tf .ref/prb2.tf
[INFO] 0.059[0.000] || **********************************************************************
Referenced pattern list written to: list/referenced.list
[INFO] 0.061[0.002] || *** NEW FILE *** To save it: cp list/referenced.list .ref/referenced.list
[INFO] 0.061[0.000] || **********************************************************************
So, from the output we can see that two instances of the interface get created, one per top-level flow that is generated, and the options passed to Flow.create are passed into the interface's initialize method.
Note that no new interface is instantiated when the top-level flow imports a sub-flow/component.
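As a hedged sketch of the startup/shutdown interception route quoted in the question's EDIT (the @temperature handling and the empty bodies are assumptions, and it assumes startup receives the same options that Flow.create was given):
module MyApp
  class Interface
    include OrigenTesters::ProgramGenerators

    # Called at the start of every Flow.create block with its options
    def startup(options = {})
      @temperature = options[:temperature] if options[:temperature]
    end

    # Called at the end of every Flow.create block; options[:top_level] will be
    # true at the end of a top-level flow file
    def shutdown(options = {})
    end
  end
end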
Originally, a new interface instance was created every time Flow.create was encountered, which is the same time that the target is re-loaded. We did that because we had seen issues from an earlier implementation when the target was persisted for a whole flow. This led to some flow generation order dependencies starting to creep into some applications e.g. the output from prb1.rb was different when you generated it standalone vs. generating it at the same time as other flows.
So by starting from a clean slate each time, it eliminated the possibility of unintentionally changing the output of a flow depending on what your target had done earlier.
Ultimately though, we found that within the context of generating a complete top-level flow, we really needed some persistent state to be available for things like tracking the test number count. So to compromise, we kept the target refresh on every Flow.create but refreshed the interface only upon encountering a new top-level Flow.create.
So far that has been working well in practice. However, if you feel that you need an interface that persists for a whole Origen program generation command, then perhaps you are coming across a use case that we haven't envisaged, or maybe there is another way to do what you are trying to achieve.
Open another question to give more details on that if required.

Writing a persistent perl script

I am trying to write a persistent/cached script. The code would look something like this:
...
memoize('process_file');   # via the Memoize module
print process_file($ARGV[0]);
...
sub process_file {
    my $filename = shift;
    my ($a, $b, $c) = extract_values_from_file($filename);
    if (exists $my_hash{$a}{$b}{$c}) {
        return $my_hash{$a}{$b}{$c};
    }
    return $default;
}
Which would be called from a shell script in a loop as follows
value=`perl my_script.pl`;
Is there a way I could call this script so that it keeps its state from call to call? Let's assume that both initializing %my_hash and calling extract_values_from_file are expensive operations.
Thanks
This is kind of dark magic, but you can store state after your script's __DATA__ token and persist it.
use Data::Dumper;    # or JSON, YAML, or any other data serializer
package MyPackage;
my $DATA_ptr;
our $state;
INIT {
    $DATA_ptr = tell DATA;
    $state = eval join "", <DATA>;
}

...
# manipulate $MyPackage::state in this and other scripts
...

END {
    open DATA, '+<', $0;         # $0 is the name of this script
    seek DATA, $DATA_ptr, 0;
    print DATA Data::Dumper::Dumper($state);
    truncate DATA, tell DATA;    # in case new data is shorter than old data
    close DATA;
}
__DATA__
$VAR1 = {
'foo' => 123,
'bar' => 42,
...
}
In the INIT block, store the position of the beginning of your file's __DATA__ section and deserialize your state. In the END block, you reserialize the current state and overwrite the __DATA__ section of your script. Of course, the user running the script needs to have write permission on the script.
Edited to use INIT block instead of BEGIN block -- the DATA block is not set up during the compile phase.
If %my_hash in your example has a moderate size in its final initialized state, you can simply use one of the serialization modules like Storable, JSON::XS or Data::Dumper to keep your data in pre-assembled form between runs. Generate a new file when it is absent and just reload the ready content from there when it is present.
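A minimal sketch of that idea with Storable (the cache file name and the build_my_hash helper, which should return a hash reference, are assumptions for illustration):
use Storable qw(store retrieve);

my $cache_file = 'my_hash.cache';
my $my_hash;
if (-e $cache_file) {
    $my_hash = retrieve($cache_file);    # reload the pre-assembled hash ref
} else {
    $my_hash = build_my_hash();          # the expensive initialization
    store($my_hash, $cache_file);        # persist it for the next run
}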
Also, you've mentioned that you would call this script in loops. A good strategy would be not to call the script right away inside the loop, but to build a queue of arguments instead and then pass all of them to the script after the loop in a single execution. The script would set up its environment and then loop over the arguments, doing its easy work without needing to redo the setup steps for each of them.
You can't get the script to keep state. As soon as the process exits, any information not written to disk is gone.
There are a few ways you can accomplish this though:
Write a daemon which listens on a network or Unix socket. The daemon can populate my_hash and answer questions sent from a very simple my_script.pl. It'd only have to open a connection to the daemon, send the question and return an answer (see the client sketch after this list).
Create an efficient look-up file format. If you need the information often it'll probably stay in the VFS cache anyway.
Set up a shared memory region. The first time your script starts you save the information there, then re-use it later. That might be tricky from a Perl script though.
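A hedged sketch of the thin client half of the first (daemon) option mentioned above (the socket path and the line-based protocol are assumptions; the daemon holding %my_hash is not shown):
use IO::Socket::UNIX;

# connect to the long-running daemon that keeps %my_hash in memory
my $sock = IO::Socket::UNIX->new(
    Type => SOCK_STREAM(),
    Peer => '/tmp/my_hash_daemon.sock',
) or die "Cannot connect to daemon: $!";

print $sock "$ARGV[0]\n";    # send the question (the filename)
my $answer = <$sock>;        # read the daemon's one-line answer
print $answer;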
No, not directly, but it can be achieved in many ways:
1) I understand **extract_values_from_file()** parses a given file and returns a hash.
2) Step 1 can be made into a script that dumps the parsed hash into a file using **Data::Dumper**.
3) When running my_script.pl, ensure that the file generated in step 2 is newer than the config file. You can achieve this via **make**.
3.1) **use** the file generated in step 2 to retrieve the values.
The same can be achieved via freeze/thaw.
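For completeness, a tiny sketch of the freeze/thaw variant via Storable (where the serialized scalar is written and read back is up to you, e.g. the file from step 2):
use Storable qw(freeze thaw);

my $frozen = freeze(\%my_hash);       # serialize the hash to a scalar
my %restored = %{ thaw($frozen) };    # rebuild the hash from that scalar on a later run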
