We have evolved our Origen usage such that we have a params file and a flow file for each test module (scan, mbist, etc.). We are now at the point where we need to take the test insertion into account when handling the DUT model and the test flow generation. I can see here that using a job flag is the preferred method for specifying test-insertion specifics in the flow file. And this video shows how to specify a test insertion when simulating the test flow. My question is: how can a test insertion be specified when not generating a flow, but only loading params files into the DUT model? Take this parameter set that defines some test conditions for a scan/ATPG test module:
scan.define_params :test_flows do |p|
  p.flows.ws1.chain = [:vmin, :vmax]
  p.flows.ft1.chain = [:vmin, :vmax]
  p.flows.ws1.logic = [:vmin, :vmax]
  p.flows.ft1.logic = [:vmin]
  p.flows.ws1.delay = [:pmax]
  p.flows.ft1.delay = [:pmin]
end
You can see in the parameter set hierarchy that there are two test insertions defined: 'ws1' and 'ft1'. Am I right to assume that the --job option just sets a flag somewhere when used with the origen testers:run command? Or can the option also be applied to origen i, such that just loading some parameter sets gives access to the selected job?
Thanks!
There's no built-in way to do what you want here, but given that you are using parameters in this example the way I would do it would be to align your parameter contexts to the job name:
scan.define_params :ws1 do |p|
  p.flows.chain = [:vmin, :vmax]
  p.flows.logic = [:vmin, :vmax]
  p.flows.delay = [:pmax]
end

scan.define_params :ft1 do |p|
  p.flows.chain = [:vmin, :vmax]
  p.flows.logic = [:vmin]
  p.flows.delay = [:pmin]
end
There are various ways to actually set the current context, one way would be to have a target setup per job:
# target/ws1.rb
MyDUT.new
dut.params = :ws1

# target/ft1.rb
MyDUT.new
dut.params = :ft1
This assumes that the scan object is configured to track the parameter context of the top-level DUT - http://origen-sdk.org/origen/guides/models/parameters/#Tracking_the_Context_of_Another_Object
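For reference, a minimal sketch of what that configuration might look like (based on the linked guide section; treat the exact parameters_context call as an assumption to verify against your Origen version):

class Scan
  include Origen::Model

  def initialize(options = {})
    # Assumed API from the guide: make this object's parameter
    # context always mirror the top-level DUT's context
    parameters_context :top
  end
end

With that in place, loading the ws1 target sets dut.params = :ws1, and scan.params will then resolve against the :ws1 parameter set defined above.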
I am working on a testing tool for nvme-cli (written in C, runs on Linux).
For SSD validation purposes, I am looking for a custom command (e.g. an I/O command that does a write, then reads the same data back, and finally compares whether both are the same).
For reads, the ioctl() function is used as shown in the code below.
struct nvme_user_io io = {
    .opcode   = opcode,
    .flags    = 0,
    .control  = control,
    .nblocks  = nblocks,
    .rsvd     = 0,
    .metadata = (__u64)(uintptr_t) metadata,
    .addr     = (__u64)(uintptr_t) data,
    .slba     = slba,
    .dsmgmt   = dsmgmt,
    .reftag   = reftag,
    .appmask  = appmask,
    .apptag   = apptag,
};

err = ioctl(fd, NVME_IOCTL_SUBMIT_IO, &io);
Can I trace where exactly the control of execution goes, in order to understand the read path?
Also, I want to have another command that looks like
err = ioctl(fd, NVME_IOCTL_WRITE_AND_COMPARE_IO, &io);
so that I can internally do a write, then read the same location, and finally compare the two to ensure that the disk contains exactly the data I wanted to write.
Since I am new to NVMe and ioctl(), please correct me if there are any mistakes.
nvme_io() is the main command handler; it accepts as a parameter the NVMe opcode that you want to send to your device. According to the standard, there are separate commands (opcodes) for read, write and compare. You could either send those commands separately, or add a vendor-specific command that computes what you need.
I am looking to apply a callback after test execution that will check for an alarm flag. I don't see any listed here, so I then checked the test interface and only see what looks like a flow-level callback:
# This will be called at the end of every flow or sub-flow (at the end of every
# Flow.create block).
# Any options passed to Flow.create will be passed in here.
# The options will contain top_level: true, whenever this is called at the end of a
# top-level flow file.
def shutdown(options = {})
end
We need the ability to check the alarm flags after every test, but still apply a common group ID to a list of tests, like this:
group "func tests", id: :func do
[:minvdd, :maxvdd].each do |cond|
func :bin1_1200, ip: :cpu, testmode: :speed, cond: cond
end
end
Here is an example of the V93K alarm flow flag (screenshot not reproduced here).
Thanks!
It is common when writing interfaces to funnel all test generation methods through a single common method that adds them to the flow:
def func(name, options = {})
  t = test_suites.add(name)
  t.test_method = test_methods.origen.functional_test(options)
  add_to_flow(t, options)
end

def para(name, options = {})
  t = test_suites.add(name)
  t.test_method = test_methods.origen.parametric_test(options)
  add_to_flow(t, options)
end

def add_to_flow(test_obj, options = {})
  # Here you can do anything you want before adding each test to the flow
  flow.test(test_obj, options)
  # Here you can do anything you want after adding each test to the flow
end
So while there is no per-test callback, you can generally achieve whatever you wanted to do with one via the above interface architecture.
EDIT:
With reference to the alarm flag flow structure you want to create, you would code it like this:
func :some_func_test, id: :sft1
if_failed :sft1 do
  bin 10, if_flag: "Alarm"
  bin 11, unless_flag: "Alarm"
end
Or, if you prefer, this is equivalent:
func :some_func_test, id: :sft1
bin 10, if_flag: "Alarm", if_failed: :sft1
bin 11, unless_flag: "Alarm", if_failed: :sft1
At the time of writing, that will generate something logically correct but with a sub-optimal branch structure.
In the next release that will be fixed; see the test case that has been added here and the output it generates here.
You can call all of the flow control methods from the interface the same way you can from within the flow, so you can inject such conditions in the add_to_flow method if you want.
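For example, here is a sketch of injecting the alarm check from within add_to_flow (the :check_alarms opt-in option is my own convention, not an Origen API, and it assumes each test is given an :id as in the examples above):

def add_to_flow(test_obj, options = {})
  flow.test(test_obj, options)
  # Hypothetical convention: callers opt in to a per-test alarm check
  if options[:check_alarms] && options[:id]
    if_failed options[:id] do
      bin 10, if_flag: "Alarm"
      bin 11, unless_flag: "Alarm"
    end
  end
end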
Note also that in the test case both if_flag and if_enable are used. if_enable should generally be used if the flag is something that would be set at the start of the flow (e.g. by the operator) and would not change. if_flag should be used if it is a flag that is subject to modification by the flow at runtime.
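For instance (a small sketch re-using names from this thread; the test names are made up):

# Set by the operator before execution and constant for the whole run:
func :room_temp_test, if_enable: :room
# Set and cleared by the flow itself while it executes:
func :retest_on_alarm, if_flag: "Alarm"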
When implementing Origen::Parameters, I understood the importance of defining a :default set. But, in essence, my real default is named something different, so I implemented a hack of a parameter alias:
Origen.top_level.define_params :default do |params|
  params.tconds.override = 1
  params.tconds.override_lev_equ_set = 1
  params.tconds.override_lev_spec_set = 1
  params.tconds.override_levset = 1
  params.tconds.override_seqlbl = 'my_pattern'
  params.tconds.override_testf = 'tm_3'
  params.tconds.override_tim_spec_set = 'bist_xxMhz'
  params.tconds.override_timset = '1,1,1,1,1,1,1,1'
  params.tconds.site_control = 'parallel:'
  params.tconds.site_match = 2
end

Origen.top_level.define_params :cpu_mbist_hr, inherit: :default do |params|
  # way of aliasing parameter names
end
Is there a proper method of parameter aliasing that is just not documented?
There is no other way to do this currently, though I would be open to a PR to enable something like:
default_params = :cpu_mbist_hr
If you don't want them to be called :default in this case though, then maybe you don't really want them to be the default anyway.
For example, adding this immediately after you define them would effectively give you an alternative default, and would do pretty much the same job as the proposed API above:
# self is required here to help Ruby know that you are calling the params= API
# and not defining a local variable called params
self.params = :cpu_mbist_hr
I have a job named ActivityJob which fetches a user's GitHub public activities.
class ActivityJob < ActiveJob::Base
  queue_as :git

  def perform(user)
    begin
      # Fetch GitHub activities for the user using the user's auth_token
      activities = ActivitiesFetcher.new(user).fetch
      # Process the retrieved activities.
    rescue Github::Error::NotFound
      # User is not present. Ignore and complete the job.
    rescue Github::Error::Unauthorized
      # Auth token is invalid. Re-assign a new token on next sign-in.
      user.auth_token = nil
      user.save
      # refresh_gh_client gets a new client using a random user's auth_token
      user.refresh_gh_client
      retry
    rescue Github::Error::Forbidden
      # Probably hit the rate limit; use another token
      user.refresh_gh_client
      retry
    end
  end
end
The method refresh_gh_client gets a random user from the DB and uses that user's auth_token to create a new gh_client, which is assigned to the current user. This is working fine.
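For context, refresh_gh_client looks roughly like this (a sketch; the random-user query and the Github.new call are assumptions based on the description above):

def refresh_gh_client
  # Assumption: borrow a token from any other user that still has one
  donor = User.where.not(auth_token: nil).order('RANDOM()').first
  # Build a new github gem client from the donor's OAuth token
  self.gh_client = Github.new(oauth_token: donor.auth_token)
end

In the test cases, I am using mocha to stub the method calls: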
class ActivityJobTest < ActiveJob::TestCase
  def setup
    super
    @user = create :user, github_handle: 'prasadsurase', auth_token: 'somerandomtoken'
    @round = create :round, :open
    clear_enqueued_jobs
    clear_performed_jobs
    assert_no_performed_jobs
    assert_no_enqueued_jobs
  end

  test 'perform with Github::Error::Unauthorized exception' do
    User.any_instance.expects(:refresh_gh_client).returns(nil)
    ActivitiesFetcher.any_instance.expects(:fetch).raises(Github::Error::Unauthorized, {})
    ActivityJob.perform_now(@user)
    @user.reload
    assert_nil @user.auth_token
  end
end
The problem is that the retry in the job calls the ActivitiesFetcher again, breaking the expectation that it was supposed to be called only once. The failure output is:
unexpected invocation: #<AnyInstance:User>.refresh_gh_client()
unsatisfied expectations:
- expected exactly once, invoked twice: #<AnyInstance:User>.refresh_gh_client(any_parameters)
- expected exactly once, invoked twice: #<AnyInstance:ActivitiesFetcher>.fetch
Make an interface for the ActivitiesFetcher behavior you need, then create an implementation of that interface, used only for testing, that deals with the idiosyncrasies of testing.
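For example, a minimal sketch (FakeActivitiesFetcher is a hypothetical test-only class; the job would also need a seam, such as a constructor argument or a factory, through which the fake is injected in place of ActivitiesFetcher):

class FakeActivitiesFetcher
  def initialize(user)
    @user = user
    @calls = 0
  end

  # Simulate the failure mode under test: raise Unauthorized on the
  # first call, then succeed on the retry
  def fetch
    @calls += 1
    raise Github::Error::Unauthorized.new({}) if @calls == 1
    []
  end
end

With the fake standing in for ActivitiesFetcher, the retry path runs deterministically and the exactly-once mocha expectations are no longer needed.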
I would like to use the threads library (or perhaps parallel) for loading/preprocessing data into a queue, but I am not entirely sure how it works. In summary:
1. Load data (tensors), pre-process the tensors (this takes time, hence why I am here) and put them in a queue. I would like to have as many threads as possible doing this, so that the model is not waiting, or at least not waiting for long.
2. For the tensor at the top of the queue, extract it, forward it through the model, and remove it from the queue.
I don't really understand the example at https://github.com/torch/threads well enough. A hint or an example of where I would load data into the queue and train from it would be great.
EDIT 14/03/2016
In this low-level threads example, https://github.com/torch/threads/blob/master/test/test-low-level.lua, does anyone know how I can extract data from those threads back into the main thread?
Look at this multi-threaded data provider:
https://github.com/soumith/dcgan.torch/blob/master/data/data.lua
It runs this file in the thread:
https://github.com/soumith/dcgan.torch/blob/master/data/data.lua#L18
by calling it here:
https://github.com/soumith/dcgan.torch/blob/master/data/data.lua#L30-L43
And afterwards, if you want to queue a job into the thread, you provide two functions:
https://github.com/soumith/dcgan.torch/blob/master/data/data.lua#L84
The first one runs inside the thread, and the second one runs in the main thread after the first one completes.
Hopefully that makes it a bit more clear.
If Soumith's examples in the previous answer are not very easy to use, I suggest you build your own pipeline from scratch. Here is an example of two synchronized threads: one for writing data and one for reading it.
local t = require 'threads'
t.Threads.serialization('threads.sharedserialize')
local tds = require 'tds'

-- Shared state: upvalues passed to a thread must be local and
-- shared-serializable (plain tables or tds structures such as tds.Hash)
local dict = tds.Hash()
dict[1] = torch.zeros(4)

-- Two mutexes used as semaphores that the threads signal to each other:
-- m1 = "data ready to read", m2 = "data consumed, OK to write"
local m1 = t.Mutex()
local m2 = t.Mutex()
local m1id = m1:id()
local m2id = m2:id()
m1:lock() -- the reader must wait until the writer has produced something

local pool = t.Threads(
  1,
  function(threadIdx)
    -- no per-thread initialization needed
  end
)

-- Writer thread
pool:addjob(
  function()
    local t = require 'threads'
    local m1 = t.Mutex(m1id) -- rebuild the mutexes from their ids
    local m2 = t.Mutex(m2id)
    while true do
      m2:lock() -- wait until the previous value has been consumed
      dict[1] = torch.randn(4)
      m1:unlock() -- signal the reader that new data is available
      print('W ===> ')
      print(dict[1])
      collectgarbage()
      collectgarbage()
    end
    return __threadid
  end,
  function(id)
  end
)

-- Reader: code executing on the master thread
local a = 1
while true do
  m1:lock() -- wait for the writer to produce
  a = dict[1]
  m2:unlock() -- signal the writer that the value has been consumed
  print('R --> ')
  print(a)
end