Can test_ids configs as blocks allow access to test metadata?

I see that the Origen test_ids gem allows users to specify blocks as a config:
TestIds.configure :final_test do |config|
  config.numbers do |bin, softbin|
    (softbin * 10) + bin
  end
end
Is it possible to reference any of the metadata passed to the test? For example, here are some test insertions in my flow file:
func :mytest, mode: :chain
func :mytest, mode: :jtag
Here is what I would like to do in the TestIds config:
TestIds.configure :final_test do |config|
  config.numbers do |test_meta|
    case test_meta[:mode]
    when :chain
      (softbin * 10) + bin
    when :jtag
      (softbin * 20) + bin
    else
      (softbin * 30) + bin
    end
  end
end
thx!

I'll try to answer your question (my first ever on Stack Overflow, hope it makes sense). Feel free to email me if you have more questions.
@rchitect-of-info Yes, you can. A few changes will be required to the test_ids plugin to support your needs.
Please look at the allocate method here in the test_ids plugin:
https://github.com/Origen-SDK/test_ids/blob/master/lib/test_ids/allocator.rb#L115
We will need to pass the options from the flow to the allocate_number method:
number['number'] ||= allocate_number(bin: bin['number'], softbin: softbin['number'], size: number_size, options: options)
number['size'] ||= number_size
Then, please look at the allocate_number method here in the test_ids plugin:
https://github.com/Origen-SDK/test_ids/blob/master/lib/test_ids/allocator.rb#L547
These callback options are what gets passed to config.numbers.
To access your metadata you would just pass the options along with bin and softbin here:
https://github.com/Origen-SDK/test_ids/blob/master/lib/test_ids/allocator.rb#L548
So the new callback would be:
elsif callback = config.numbers.callback
  callback.call(bin, softbin, options)
You would then be able to configure TestIds as:
TestIds.configure :final_test do |config|
  config.numbers do |bin, softbin, options|
    case options[:mode]
    when :chain
      (softbin * 10) + bin
    when :jtag
      (softbin * 20) + bin
    else
      (softbin * 30) + bin
    end
  end
end
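With the flow insertions from the question, the metadata then reaches the callback through options. For illustration (a sketch, assuming the flow options are passed through to the allocator unchanged):
func :mytest, mode: :chain # callback sees options[:mode] == :chain, so number = (softbin * 10) + bin
func :mytest, mode: :jtag  # callback sees options[:mode] == :jtag, so number = (softbin * 20) + bin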
I am working on a similar update to test_ids; hopefully it will be ready for review soon. My branch is currently a work in progress at https://github.com/priyavadan/test_ids/commits/change_config

Why do we consider "skipping the steps" as "failed steps" with karate.abort() in the Karate report?

For my test scenarios I am using the karate.abort() function, which skips the steps beneath it if the condition is fulfilled.
But this marks my complete test as failed because of the skipped steps.
Is there any way to mark the test case as PASSED if karate.abort() is called and the next steps are skipped?
Example:
Scenario Outline: Lambda API registration when ARN is invalid
  Given url ApiAdminURL
  And path AdminPath
  And header apigateway-apikey = apiGatewayKey
  And header apigateway-basepath = 'lambda-migration'
  * json myReq = read('swagger-lambda.json')
  * set myReq.apiConf.subscriptionTiers = <subscriptionTiers>
  * set myReq.swagger.info.title = 'REGTEST_AUTO_Regression_Lambda_Quote_Function'
  * set myReq.swagger.basePath = 'lambda-migration'
  * set myReq.swagger.info.version = 'v1'
  * set myReq.swagger.x-lambda-arn = '<arn>'
  And request myReq
  When method post
  Then status <responseCode>
  * eval if (responseStatus == 400) karate.abort()
  * call read('Lambda-Sleep.feature')
  * call read('Lambda-APIDefinition.feature')
  * def responsefromsubscriber = call read('Lambda-Subscriber.feature') { accessTokenforInvokation: '#(accessTokenforInvokation)', applicationId: '#(applicationId)', subscribeToken: '#(subscribeToken)' }
  * def AccessTokenforInvokation = responsefromsubscriber.accessTokenforInvokation
  * def ApplicationId = responsefromsubscriber.applicationId
  * def SubscribeToken = responsefromsubscriber.subscribeToken
This is a bug that was fixed in a patch release: https://github.com/intuit/karate/issues/464
Can you upgrade your Karate version to 0.8.0.1 and try again?

Can callbacks be used to configure remotes?

I was reading the Origen documentation on remotes and had a question: when do the remote files get retrieved relative to the Origen callbacks? The reason I ask is that the files we want to retrieve would be used to construct our DUT model, and there are some order dependencies.
I have tried to configure the remotes from all of the existing callbacks, with no success:
def pre_initialize
  Origen.app.config.remotes = [
    {
      dir: 'product',
      vault: 'ssh://git@stash.com/myproduct/origen.git',
      version: '0.1.0'
    }
  ]
end
If I add the remotes config to the application file it works:
config.remotes = [
  {
    dir: 'product',
    vault: 'ssh://git@stash.com/myproduct/origen.git',
    version: '0.1.0'
  }
]
The problem with using the config/application.rb file is that we can't keep product-specific information anywhere in our application space. We use symbolic links to map to source files that are stored in test program stash repositories. I think we may need a new callback; please advise.
thx
** EDIT **
So I defined the remotes in another file and called the method that actually sets them from the boot.rb file. Then I placed the remote manager require call in the on_create callback, but got no remotes fetched:
284: def on_create
285:   binding.pry
=> 286:   Origen.remote_manager.require!
287: end
[1] pry(#<PPEKit::Product>)> Origen.config.remotes
=> [{:dir=>"remote_checkout", :rc_url=>"ssh://git@stash.us.com:7999/osdk/us-calendar.git", :version=>"v0.1.0"}]
[2] pry(#<PPEKit::Product>)>
It seems like Origen.remote_manager.require! is not working. So I checked the remote manager source and I don't see how the require! method could ever work from a callback: it checks whether the remotes are dirty, which can never be true for a remote definition that was set after application.rb was loaded. So I created a resolve_remotes! method that seems to work:
def resolve_remotes!
  resolve_remotes
  remotes.each do |_name, remote|
    dir = workspace_of(remote)
    rc_url = remote[:rc_url] || remote[:vault]
    tag = Origen::VersionString.new(remote[:version])
    version_file = dir.to_s + '/.current_version'
    begin
      if File.exist?("#{dir}/.initial_populate_successful")
        FileUtils.rm_f(version_file) if File.exist?(version_file)
        rc = RevisionControl.new remote: rc_url, local: dir
        rc.checkout version: prefix_tag(tag), force: true
        File.open(version_file, 'w') do |f|
          f.write tag
        end
      else
        rc = RevisionControl.new remote: rc_url, local: dir
        rc.checkout version: prefix_tag(tag), force: true
        FileUtils.touch "#{dir}/.initial_populate_successful"
        File.open(version_file, 'w') do |f|
          f.write tag
        end
      end
    rescue Origen::GitError, Origen::DesignSyncError => e
      # If Git failed in the remote, it's usually easy to see what the problem is, but not *where* it is.
      # This will prepend the failing remote along with the error from the revision control system,
      # then rethrow the error
      e.message.prepend "When updating remotes for #{remote[:importer].name}: "
      raise e
    end
  end
end
The resolve_remotes! method just forces all known remotes to be fetched. Would a PR be accepted for this solution?
thx
Remotes are currently required at application load time, which means this occurs before any of the application callback points.
The content of config.remotes can still be made dynamic by assigning it in a block:
config.remotes do
  r = [{
    dir: 'product',
    vault: 'ssh://git@stash.com/myproduct/origen.git',
    version: '0.1.0'
  }]
  if some_condition
    r << { dir: ... }
  end
  r
end
However, the config.remotes attribute will be evaluated before the target is loaded, so you won't be able to reference dut, for example; though maybe that is good enough.
Alternatively, you could implement a post-target require of the remotes within your application pretty easily.
Make the remotes return an empty array if the dut is not available yet; that will make it work OK when it is called by Origen during application load:
config.remotes do
  if dut
    # As above example
  else
    []
  end
end
Then within the callback handler of your choice:
Origen.remote_manager.require!
That should cause it to re-evaluate config.remotes and fetch any remotes that are missing.
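Putting it together, a minimal sketch (the choice of on_create is illustrative only; any callback handler that runs after the target has loaded should do):
def on_create
  # By this point the target is loaded, so the config.remotes block above
  # sees dut and returns the real list; this call re-evaluates the config
  # and fetches any remotes that are missing.
  Origen.remote_manager.require!
end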

Post test execution callbacks available?

I am looking to apply a callback post test execution that will check for an alarm flag. I don't see any listed here, so I then checked the test interface and only see what looks like a flow-level callback:
# This will be called at the end of every flow or sub-flow (at the end of every
# Flow.create block).
# Any options passed to Flow.create will be passed in here.
# The options will contain top_level: true, whenever this is called at the end of a
# top-level flow file.
def shutdown(options = {})
end
We need the ability to check the alarm flags after every test but still apply a common group ID to a list of tests like this:
group "func tests", id: :func do
[:minvdd, :maxvdd].each do |cond|
func :bin1_1200, ip: :cpu, testmode: :speed, cond: cond
end
end
Here is an example of the V93K alarm flow flag: [screenshot not reproduced]
thx!
It is common when writing interfaces to funnel all test generation methods through a single common method that adds them to the flow:
def func(name, options = {})
  t = test_suites.add(name)
  t.test_method = test_methods.origen.functional_test(options)
  add_to_flow(t, options)
end

def para(name, options = {})
  t = test_suites.add(name)
  t.test_method = test_methods.origen.parametric_test(options)
  add_to_flow(t, options)
end

def add_to_flow(test_obj, options = {})
  # Here you can do anything you want before adding each test to the flow
  flow.test(test_obj, options)
  # Here you can do anything you want after adding each test to the flow
end
So while there is no per-test callback, you can generally achieve whatever you wanted to do with one via the above interface architecture.
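For example, a per-test alarm flag check could be injected there. A sketch (the :alarm_check option name and the bin number are invented for illustration, and it assumes the flow's bin helper is reachable as flow.bin from the interface, per the note on flow control methods below):
def add_to_flow(test_obj, options = {})
  flow.test(test_obj, options)
  if options[:alarm_check]
    # Hypothetical: after this test, bin out if the tester's "Alarm" flag is set
    flow.bin 10, if_flag: "Alarm"
  end
end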
EDIT:
With reference to the alarm flag flow structure you want to create, you would code it like this:
func :some_func_test, id: :sft1
if_failed :sft1 do
  bin 10, if_flag: "Alarm"
  bin 11, unless_flag: "Alarm"
end
Or, if you prefer, this is equivalent:
func :some_func_test, id: :sft1
bin 10, if_flag: "Alarm", if_failed: :sft1
bin 11, unless_flag: "Alarm", if_failed: :sft1
At the time of writing, that will generate something logically correct but with a sub-optimal branch structure.
In the next release that will be fixed; see the test case that has been added here and the output it generates here.
You can call all of the flow control methods from the interface the same way you can from within the flow, so you can inject such conditions in the add_to_flow method if you want.
Note also that in the test case both if_flag and if_enable are used. if_enable should generally be used if the flag is something that would be set at the start of the flow (e.g. by the operator) and would not change. if_flag should be used if it is a flag that is subject to modification by the flow at runtime.
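In flow terms the distinction looks like this (a sketch; the enable word and flag names are invented for illustration):
func :mytest, if_enable: :room_temp # enable word: set once up front (e.g. by the operator), static for the run
func :mytest, if_flag: "Alarm"      # flag: may be set or cleared by earlier tests at runtime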

How to disable Create Project permission for users by default in GitLab?

I am using the Omnibus GitLab CE system with LDAP authentication.
Because of LDAP authentication, anyone in my company can sign in to GitLab and a new GitLab user account associated with this user is created (according to my understanding).
I want to modify it so that by default this new user (who can automatically sign in based on his LDAP credentials) cannot create new projects.
Then, I as the admin, will probably handle most new project creation.
I might give the Create Project permission to a few special users.
In newer versions of GitLab (>= v7.8):
This is not a setting in config/gitlab.yml but rather in the GUI for admins.
Simply navigate to https://___[your GitLab URL]___/admin/application_settings/general#js-account-settings, and set Default projects limit to 0.
You can then access individual users' project limits at https://___[your GitLab URL]___/admin/users.
See GitLab's update docs for more settings changed between v7.7 and v7.8.
git diff origin/7-7-stable:config/gitlab.yml.example origin/7-8-stable:config/gitlab.yml.example
For all new users:
Refer to Nick Merrill's answer.
For all existing users:
This is the quickest way to change the project limits:
$ gitlab-rails runner "User.where(projects_limit: 10).each { |u| u.projects_limit = 0; u.save }"
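Note that the query above only matches users whose limit is still the old default of 10. To zero the limit for every existing user regardless of their current value, something like this should also work (an untested sketch using plain ActiveRecord):
$ gitlab-rails runner "User.update_all(projects_limit: 0)"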
(Update: this applies to versions <= 7.7.)
The default permissions are set in gitlab.yml
In omnibus, that is /opt/gitlab/embedded/service/gitlab-rails/config/gitlab.yml
Look for
## User settings
default_projects_limit: 10
# default_can_create_group: false # default: true
Setting default_projects_limit to zero, and default_can_create_group to false may be what you want.
Then an admin can change the limits for individual users.
Update:
This setting was included in the admin GUI in version 7.8 (see the answer by @Nick M). At least with Omnibus on CentOS 7, an upgrade retains the setting.
Note that the setting default_can_create_group is still in gitlab.yml.
Here's my quick-and-dirty Python script which you can use if you already have some users created and want to change all existing users to make them unable to create projects on their own:
#!/usr/bin/env python
import requests

gitlab_url = "https://<your_gitlab_host_and_domain>/api/v3"
headers = {'PRIVATE-TOKEN': '<private_token_of_a_user_with_admin_rights>'}

def set_user_projects_limit_to_zero(user):
    user_id = str(user['id'])
    put = requests.put(gitlab_url + "/users/" + user_id + "?projects_limit=0", headers=headers)
    if put.status_code != 200:
        print "!!! change failed with user id=%s, status code=%s" % (user_id, put.status_code)
        exit(1)
    else:
        print "user with id=%s changed!" % user_id

users_processed = 0
page_no = 1
total_pages = 1
print "processing 1st page of users..."
while page_no <= total_pages:
    users = requests.get(gitlab_url + "/users?page=" + str(page_no), headers=headers)
    total_pages = int(users.headers['X-Total-Pages'])
    for user in users.json():
        set_user_projects_limit_to_zero(user)
        users_processed = users_processed + 1
    print "processed page %s/%s..." % (page_no, total_pages)
    page_no = page_no + 1
print "no of processed users=%s" % users_processed
Tested & working with GitLab CE 8.4.1 052b38d, YMMV.

Jenkins Build Test Case pass/fail count using Groovy script

I want to fetch the total test case PASS and FAIL counts for a build using a Groovy script. I am using JUnit test results. I am using a multi-configuration project, so is there any way to find this information on a per-configuration basis?
If you use the Groovy Postbuild plugin (https://wiki.jenkins-ci.org/display/JENKINS/Groovy+Postbuild+Plugin), you can access the Jenkins TestResultAction directly from the build, i.e.:
import hudson.model.*
def build = manager.build
int total = build.getTestResultAction().getTotalCount()
int failed = build.getTestResultAction().getFailCount()
int skipped = build.getTestResultAction().getSkipCount()
// can also be accessed like build.testResultAction.failCount
manager.listener.logger.println('Total: ' + total)
manager.listener.logger.println('Failed: ' + failed)
manager.listener.logger.println('Skipped: ' + skipped)
manager.listener.logger.println('Passed: ' + (total - failed - skipped))
API for additional TestResultAction methods/properties: http://javadoc.jenkins-ci.org/hudson/tasks/test/AbstractTestResultAction.html
If you want to access a matrix build from another job, you can do something like:
def job = Jenkins.instance.getItemByFullName('MyJobName/MyAxisName=MyAxisValue');
def build = job.getLastBuild()
...
For the Pipeline (Workflow) job type, the logic is slightly different from AlexS' answer, which works for most other job types:
build.getActions(hudson.tasks.junit.TestResultAction).each { action ->
    action.getTotalCount()
    action.getFailCount()
    action.getSkipCount()
}
(See http://hudson-ci.org/javadoc/hudson/tasks/junit/TestResultAction.html)
Pipeline jobs don't have a getTestResultAction() method.
I use this logic to disambiguate:
if (build.respondsTo('getTestResultAction')) {
    // normal logic, see answer by AlexS
} else {
    // pipeline logic above
}
I think you might be able to do it with something like this:
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
File fXmlFile = new File("junit-tests.xml");
DocumentBuilderFactory dbFactory = DocumentBuilderFactory.newInstance();
DocumentBuilder dBuilder = dbFactory.newDocumentBuilder();
Document doc = dBuilder.parse(fXmlFile);
doc.getDocumentElement().normalize();
println("Total : " + doc.getDocumentElement().getAttribute("tests"))
println("Failed : " +doc.getDocumentElement().getAttribute("failures"))
println("Errors : " +doc.getDocumentElement().getAttribute("errors"))
I also harvest JUnit XML test results and use the Groovy Postbuild plugin (https://wiki.jenkins-ci.org/display/JENKINS/Groovy+Postbuild+Plugin) to check pass/fail/skip counts. Make sure the order of your post-build actions puts the JUnit XML harvesting before the Groovy post-build script.
This example script shows how to get the test result, set the build description with the result counts and version info from a VERSION.txt file, and how to change the overall build result to UNSTABLE in case all tests were skipped.
def currentBuild = Thread.currentThread().executable
// must be run as a Groovy post-build action AFTER the JUnit XML harvesting
testResult1 = currentBuild.testResultAction
currentBuild.setDescription(currentBuild.getDescription() + "\n pass:"+testResult1.result.passCount.toString()+", fail:"+testResult1.result.failCount.toString()+", skip:"+testResult1.result.skipCount.toString())
// if no pass, no fail, all skip then set result to unstable
if (testResult1.result.passCount == 0 && testResult1.result.failCount == 0 && testResult1.result.skipCount > 0) {
    currentBuild.result = hudson.model.Result.UNSTABLE
}
currentBuild.setDescription(currentBuild.getDescription() + "\n" + currentBuild.result.toString())
def ws = manager.build.workspace.getRemote()
myFile = new File(ws + "/VERSION.txt")
desc = myFile.readLines()
currentBuild.setDescription(currentBuild.getDescription() + "\n" + desc)
