post test execution callbacks available? - origen-sdk

I am looking to apply a callback after each test execution that will check for an alarm flag. I don't see any listed here, so I then checked the test interface and only found what looks like a flow-level callback:
# This will be called at the end of every flow or sub-flow (at the end of every
# Flow.create block).
# Any options passed to Flow.create will be passed in here.
# The options will contain top_level: true, whenever this is called at the end of a
# top-level flow file.
def shutdown(options = {})
end
We need the ability to check the alarm flags after every test but still apply a common group ID to a list of tests like this:
group "func tests", id: :func do
[:minvdd, :maxvdd].each do |cond|
func :bin1_1200, ip: :cpu, testmode: :speed, cond: cond
end
end
Here is an example of the V93K alarm flow flag:
thx!

It is common when writing interfaces to funnel all test generation methods through a single common method that adds them to the flow:
def func(name, options = {})
t = test_suites.add(name)
t.test_method = test_methods.origen.functional_test(options)
add_to_flow(t, options)
end
def para(name, options = {})
t = test_suites.add(name)
t.test_method = test_methods.origen.parametric_test(options)
add_to_flow(t, options)
end
def add_to_flow(test_obj, options = {})
# Here you can do anything you want before adding each test to the flow
flow.test(test_obj, options)
# Here you can do anything you want after adding each test to the flow
end
So while there is no per-test callback, you can generally achieve whatever you wanted to do with one via the above interface architecture.
EDIT:
With reference to the alarm flag flow structure you want to create, you would code it like this:
func :some_func_test, id: :sft1
if_failed :sft1 do
bin 10, if_flag: "Alarm"
bin 11, unless_flag: "Alarm"
end
Or, if you prefer, this is equivalent:
func :some_func_test, id: :sft1
bin 10, if_flag: "Alarm", if_failed: :sft1
bin 11, unless_flag: "Alarm", if_failed: :sft1
At the time of writing, that will generate something logically correct but with a sub-optimal branch structure.
In the next release that will be fixed, see the test case that has been added here and the output it generates here.
You can call all of the flow control methods from the interface the same way you can from within the flow, so you can inject such conditions in the add_to_flow method if you want.
Note also that in the test case both if_flag and if_enable are used. if_enable should generally be used if the flag is something that would be set at the start of the flow (e.g. by the operator) and would not change. if_flag should be used if it is a flag that is subject to modification by the flow at runtime.
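To make the funnel pattern concrete, here is a toy, self-contained sketch of the idea; the Flow class and method names below are hypothetical stand-ins, not the real Origen API:

```ruby
# Toy stand-in for a flow: it just records what gets added to it.
# NOT the real Origen API -- all names here are illustrative only.
class Flow
  attr_reader :lines

  def initialize
    @lines = []
  end

  def test(obj, options = {})
    @lines << [:test, obj, options]
  end

  def bin(number, options = {})
    @lines << [:bin, number, options]
  end
end

class MyInterface
  attr_reader :flow

  def initialize
    @flow = Flow.new
  end

  def func(name, options = {})
    add_to_flow(name, options)
  end

  # The funnel: every test generation method ends up here, so a
  # post-test alarm check can be appended after every single test
  # without touching any of the flow files.
  def add_to_flow(test_obj, options = {})
    flow.test(test_obj, options)
    flow.bin 10, if_flag: "Alarm"
  end
end

iface = MyInterface.new
iface.func(:bin1_1200, cond: :minvdd)
iface.func(:bin1_1200, cond: :maxvdd)
```

Each call to func produces the test plus the trailing alarm-flag bin check, which is the per-test behavior the question asks for.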

Related

Setting the current test insertion within the DUT model

We have evolved our Origen usage such that we have a params file and a flow file for each test module (scan, mbist, etc.). We are now at the point where we need to take into account the test insertion when handling the DUT model and the test flow generation. I can see here that using a job flag is the preferred method for specifying test insertion specifics into the flow file. And this video shows how to specify a test insertion when simulating the test flow. My question is how can a test insertion be specified when not generating a flow, only loading params files into the DUT model? Take this parameter set that defines some test conditions for a scan/ATPG test module.
scan.define_params :test_flows do |p|
p.flows.ws1.chain = [:vmin, :vmax]
p.flows.ft1.chain = [:vmin, :vmax]
p.flows.ws1.logic = [:vmin, :vmax]
p.flows.ft1.logic = [:vmin]
p.flows.ws1.delay = [:pmax]
p.flows.ft1.delay = [:pmin]
end
You can see in the parameter set hierarchy that there are two test insertions defined: 'ws1' and 'ft1'. Am I right to assume that the --job option only sets a flag somewhere when used with the origen testers:run command? Or can this option be applied to origen i, such that just loading some parameter sets will have access to the job selected?
thx
There's no built-in way to do what you want here, but given that you are using parameters in this example the way I would do it would be to align your parameter contexts to the job name:
scan.define_params :ws1 do |p|
p.flows.chain = [:vmin, :vmax]
p.flows.logic = [:vmin, :vmax]
p.flows.delay = [:pmax]
end
scan.define_params :ft1 do |p|
p.flows.chain = [:vmin, :vmax]
p.flows.logic = [:vmin]
p.flows.delay = [:pmin]
end
There are various ways to actually set the current context, one way would be to have a target setup per job:
# target/ws1.rb
MyDUT.new
dut.params = :ws1
# target/ft1.rb
MyDUT.new
dut.params = :ft1
This assumes that the scan object is configured to track the parameter context of the top-level DUT - http://origen-sdk.org/origen//guides/models/parameters/#Tracking_the_Context_of_Another_Object

Custom type/provider not ensurable

I'm trying to create a new custom type/provider but not ensurable.
I've already checked the exec and augeas types, but I couldn't clearly figure out how exactly the integration between type and provider works when we don't define the ensurable mode.
Type:
Puppet::Type.newtype(:ptemplates) do
newparam(:name) do
desc ""
isnamevar
end
newproperty(:run) do
defaultto 'now'
# Actually execute the command.
def sync
provider.run
end
end
end
Provider:
require 'logger'
Puppet::Type.type(:ptemplates).provide(:ptemplates) do
desc ""
def run
log = Logger.new(STDOUT)
log.level = Logger::INFO
log.info("x.....................................")
end
But I don't know why the provider is being executed twice
root@puppet:/# puppet apply -e "ptemplates { '/tmp': }" --environment=production
Notice: Compiled catalog for puppet.localhost in environment production in 0.12 seconds
I, [2017-07-30T11:00:15.827103 #800] INFO -- : x.....................................
I, [2017-07-30T11:00:15.827492 #800] INFO -- : x.....................................
Notice: /Stage[main]/Main/Ptemplates[/tmp]/run: run changed 'true' to 'now'
Notice: Applied catalog in 4.84 seconds
Also, I had to define the defaultto to force the execution of the provider.run method.
What am I missing ?
Best Regards.
First you should spend some time reading this blog http://garylarizza.com/blog/2013/11/25/fun-with-providers/ and the two posts by Gary Larizza that follow it. They give a very good introduction to Puppet types/providers.
Your log is being executed twice: once when the def sync in your type calls the provider's run method, and a second time when Puppet tries to determine the current value of your run property.
In order to write a type/provider that is not ensurable you need to do something like:
Type:
Puppet::Type.newtype(:ptemplates) do
@doc = ""
newparam(:name, :namevar => true) do
desc ""
end
newproperty(:run) do
desc ""
newvalues(:now, :notnow)
defaultto :now
end
end
Provider:
Puppet::Type.type(:ptemplates).provide(:ruby) do
desc ""
def run
#Do something to determine if run value and is now or notnow and return it
end
def run= value
#Do something to set the value of run
end
end
Note that all type providers must be able to determine the value of the property and be able to set it. The difference between an ensurable and a non-ensurable type/provider is that the ensurable type/provider is able to create and destroy the resource, e.g. add or remove a user. A type/provider that is not ensurable cannot create or destroy the resource, e.g. selinux: you can set its value, but you cannot remove selinux.

Test an infinite loop in minitest

I have a job named ActivityJob which fetches an user's github public activities.
class ActivityJob < ActiveJob::Base
queue_as :git
def perform(user)
begin
#fetch github activities for a user using user's auth_token
activities = ActivitiesFetcher.new(user).fetch
# process the retrieved activities.
rescue Github::Error::NotFound
#user is not present. ignore and complete the job.
rescue Github::Error::Unauthorized
#auth token is invalid. re-assign a new token on next sign in
user.auth_token = nil
user.save
# refresh_gh_client gets a new client using a random user's auth_token
user.refresh_gh_client
retry
rescue Github::Error::Forbidden
# Probably hit the Rate-limit, use another token
user.refresh_gh_client
retry
end
end
end
The method refresh_gh_client gets a random user from the db and uses its auth_token to create a new gh_client. The new gh_client is assigned to the current user. This is working fine. In test cases, I am using mocha to stub the method calls.
class ActivityJobTest < ActiveJob::TestCase
def setup
super
@user = create :user, github_handle: 'prasadsurase', auth_token: 'somerandomtoken'
@round = create :round, :open
clear_enqueued_jobs
clear_performed_jobs
assert_no_performed_jobs
assert_no_enqueued_jobs
end
test 'perform with Github::Error::Unauthorized exception' do
User.any_instance.expects(:refresh_gh_client).returns(nil)
ActivitiesFetcher.any_instance.expects(:fetch).raises(Github::Error::Unauthorized, {})
ActivityJob.perform_now(@user, 'all', @round)
@user.reload
assert_nil @user.auth_token
end
end
The problem is that the 'retry' in the job calls the ActivitiesFetcher breaking the expectation that it was supposed to be called only once.
The failure output:
unexpected invocation: #<AnyInstance:User>.refresh_gh_client()
unsatisfied expectations:
- expected exactly once, invoked twice: #<AnyInstance:User>.refresh_gh_client(any_parameters)
- expected exactly once, invoked twice: #<AnyInstance:ActivitiesFetcher>.fetch
Make an interface for the ActivitiesFetcher behavior you need. Then create an implementation of that interface, used only for testing, that deals with the idiosyncrasies of testing (such as raising an exception only on the first call).
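A self-contained sketch of that idea follows; the FakeFetcher, perform_with, and stub error classes are hypothetical names used for illustration, standing in for the real job and fetcher:

```ruby
# Stub of the error hierarchy the job rescues (illustrative only).
module Github
  module Error
    class Unauthorized < StandardError; end
  end
end

# A fake fetcher that raises Unauthorized on the first call and
# succeeds afterwards, so the job's retry path runs exactly once
# and then terminates -- no mocha any_instance stubbing needed.
class FakeFetcher
  attr_reader :calls

  def initialize
    @calls = 0
  end

  def fetch
    @calls += 1
    raise Github::Error::Unauthorized if @calls == 1
    [] # no activities
  end
end

# The job logic under test, with the fetcher injected rather than
# constructed internally -- the key change that makes this testable.
def perform_with(fetcher)
  begin
    fetcher.fetch
  rescue Github::Error::Unauthorized
    retry
  end
end

fetcher = FakeFetcher.new
perform_with(fetcher)
```

Because the fake controls exactly when the exception is raised, the test can assert that fetch was called twice (the original call plus one retry) instead of fighting mocha's invocation-count expectations.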

Include monotonically increasing value in logstash field?

I know there's no built-in "line count" functionality while processing files through logstash (for various understandable and documented reasons). But there should be a mechanism, within any given logstash instance, to have a monotonically increasing variable/count for every parsed line.
I don't want to go the metrics route since it's a continuous polling mechanism (every n-seconds). Alternatives include pre-processing of log files which given my particular use case - is unacceptable.
Again, let me reiterate: I need the ability to generate/read a monotonically increasing variable that I can store in a logstash filter.
Thoughts?
There's nothing built into logstash to do it.
You can build a filter to do it pretty easily.
Just drop something like this into lib/logstash/filters/seq.rb
# encoding: utf-8
require "logstash/filters/base"
require "logstash/namespace"
require "set"
#
# This filter will adds a sequence number to a log entry
#
# The config looks like this:
#
# filter {
# seq {
# field => "seq"
# }
# }
#
# The `field` is the field you want added to the event.
class LogStash::Filters::Seq < LogStash::Filters::Base
config_name "seq"
milestone 1
config :field, :validate => :string, :required => false, :default => "seq"
public
def register
# Nothing
end # def register
public
def initialize(config = {})
super
@threadsafe = false
# This filter needs to keep state.
@seq = 1
end # def initialize
public
def filter(event)
return unless filter?(event)
event[@field] = @seq
@seq = @seq + 1
filter_matched(event)
end # def filter
end # class LogStash::Filters::Seq
This will start at 1 every time Logstash is restarted, but for most situations, this would be ok. If you need something that is persistent across restarts, you need to do a bit more work to persist it somewhere
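The persistence part can be as simple as reading and writing a small state file; here is a plain-Ruby sketch of a file-backed counter (the class name and state-file location are illustrative, and writing on every event has an obvious throughput cost):

```ruby
require 'tmpdir'

# File-backed sequence counter: the kind of state you could load
# when a filter registers and bump on each event, so the sequence
# survives a restart. The state file path is just an example.
class PersistentSeq
  def initialize(path)
    @path = path
    # Resume from the last persisted value, or start at 1.
    @seq = File.exist?(path) ? File.read(path).to_i : 1
  end

  def next
    current = @seq
    @seq += 1
    File.write(@path, @seq.to_s) # persist after every increment
    current
  end
end

path = File.join(Dir.tmpdir, "seq_state.txt")
File.delete(path) if File.exist?(path)

counter = PersistentSeq.new(path)
first = counter.next
second = counter.next
# A "restarted" instance resumes where the old one left off.
resumed = PersistentSeq.new(path).next
```

In practice you would batch the writes or persist only on shutdown, since an fsync per event would dominate the filter's cost.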
For anyone finding this in 2018+: logstash now has a ruby filter that makes this much simpler. Put the following in a file somewhere:
# encoding: utf-8
def register(params)
@seq = 1
end
def filter(event)
event.set("seq", @seq)
@seq += 1
return [event]
end
And then configure it like this in your logstash.conf (substitute in the filename you used):
ruby {
path => "/usr/local/lib/logstash/seq.rb"
}
It would be pretty easy to make the field name configurable from logstash.conf, but I'll leave that as an exercise for the reader.
I suspect this isn't thread-safe, so I'm running only a single logstash worker.
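If you do want to run more than one worker, guarding the counter with a mutex is straightforward. A plain-Ruby sketch of the idea (not logstash-specific; the class name is illustrative):

```ruby
# Thread-safe counter: each call to next returns a unique,
# monotonically increasing value even under concurrent access.
# Mutex is part of core Ruby, no require needed on modern versions.
class SafeSeq
  def initialize
    @mutex = Mutex.new
    @seq = 1
  end

  def next
    @mutex.synchronize do
      current = @seq
      @seq += 1
      current
    end
  end
end

seq = SafeSeq.new
results = []
results_mutex = Mutex.new

# Hammer the counter from several threads to show no value is
# skipped or duplicated.
threads = 4.times.map do
  Thread.new do
    250.times do
      v = seq.next
      results_mutex.synchronize { results << v }
    end
  end
end
threads.each(&:join)
```

With the synchronize block in place, the read-increment-return sequence is atomic, so 4 threads x 250 calls yield exactly the values 1..1000 with no gaps.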
Here is another way to solve the problem; it works for me (thanks to the previous answer for the note about thread safety). I use the seq field to sort my results in descending order.
This is my configuration:
logstash.conf
filter {
ruby {
code => 'event.set("seq", Time.now.strftime("%N").to_i)'
}
}
logstash.yml
pipeline.batch.size: 200
pipeline.batch.delay: 60
pipeline.workers: 1
pipeline.output.workers: 1

Groovy CliBuilder: only last LongOpt is taken in account

I'm trying to use the groovy CliBuilder to parse command line options. I'm trying to use multiple long options without a short option.
I have the following processor:
def cli = new CliBuilder(usage: 'Generate.groovy [options]')
cli.with {
h longOpt: "help", "Usage information"
r longOpt: "root", args: 1, type: GString, "Root directory for code generation"
x args: 1, type: GString, "Type of processor (all, schema, beans, docs)"
_ longOpt: "dir-beans", args: 1, argName: "directory", type: GString, "Custom location for grails bean classes"
_ longOpt: "dir-orm", args: 1, argName: "directory", type: GString, "Custom location for grails domain classes"
}
options = cli.parse(args)
println "BEANS=${options.'dir-beans'}"
println "ORM=${options.'dir-orm'}"
if (!options || options.h) {
cli.usage()
System.exit(0)
}
According to the groovy documentation, I should be able to use multiple "_" values for an option when I want it to ignore the short option name and use a long option name only:
Another example showing long options (partial emulation of arg
processing for 'curl' command line):
def cli = new CliBuilder(usage:'curl [options] <url>')
cli._(longOpt:'basic', 'Use HTTP Basic Authentication')
cli.d(longOpt:'data', args:1, argName:'data', 'HTTP POST data')
cli.G(longOpt:'get', 'Send the -d data with a HTTP GET')
cli.q('If used as the first parameter disables .curlrc')
cli._(longOpt:'url', args:1, argName:'URL', 'Set URL to work with')
Which has the following usage message:
usage: curl [options] <url>
--basic Use HTTP Basic Authentication
-d,--data <data> HTTP POST data
-G,--get Send the -d data with a HTTP GET
-q If used as the first parameter disables .curlrc
--url <URL> Set URL to work with
This example shows a common convention. When mixing short and long
names, the short names are often one
character in size. One character
options with arguments don't require a
space between the option and the
argument, e.g. -Ddebug=true. The
example also shows the use of '_' when
no short option is applicable.
Also note that '_' was used multiple times. This is supported but
if any other shortOpt or any longOpt is repeated, then the behavior is undefined.
http://groovy.codehaus.org/gapi/groovy/util/CliBuilder.html
When I use the "_" it only accepts the last one in the list (last one encountered). Am I doing something wrong or is there a way around this issue?
Thanks.
Not sure what you mean by "it only accepts the last one", but this should work...
def cli = new CliBuilder().with {
x 'something', args:1
_ 'something', args:1, longOpt:'dir-beans'
_ 'something', args:1, longOpt:'dir-orm'
parse "-x param --dir-beans beans --dir-orm orm".split(' ')
}
assert cli.x == 'param'
assert cli.'dir-beans' == 'beans'
assert cli.'dir-orm' == 'orm'
I learned that my original code works correctly. What was not working is the function that takes all of the options built in the with closure and prints a detailed usage. The function built into CliBuilder that prints the usage is:
cli.usage()
The original code above prints the following usage line:
usage: Generate.groovy [options]
--dir-orm <directory> Custom location for grails domain classes
-h,--help Usage information
-r,--root Root directory for code generation
-x Type of processor (all, schema, beans, docs)
This usage output makes it look like I'm missing options. I made the mistake of not printing each individual item separately from this usage function call. That's what made it look like only the last _ item in the with closure was taken into account. I added this code to prove that values were being passed:
println "BEANS=${options.'dir-beans'}"
println "ORM=${options.'dir-orm'}"
I also discovered that you must use = between a long option and its value (--long-option=some_value) or it will not parse the command line options correctly.
