I'm trying to create a new custom type/provider that is not ensurable.
I've already checked the exec and augeas types, but I couldn't clearly figure out how the integration between type and provider works when the ensurable mode is not defined.
Type:
Puppet::Type.newtype(:ptemplates) do
  newparam(:name) do
    desc ""
    isnamevar
  end
  newproperty(:run) do
    defaultto 'now'
    # Actually execute the command.
    def sync
      provider.run
    end
  end
end
Provider:
require 'logger'

Puppet::Type.type(:ptemplates).provide(:ptemplates) do
  desc ""

  def run
    log = Logger.new(STDOUT)
    log.level = Logger::INFO
    log.info("x.....................................")
  end
end
But I don't know why the provider is being executed twice:
root@puppet:/# puppet apply -e "ptemplates { '/tmp': }" --environment=production
Notice: Compiled catalog for puppet.localhost in environment production in 0.12 seconds
I, [2017-07-30T11:00:15.827103 #800] INFO -- : x.....................................
I, [2017-07-30T11:00:15.827492 #800] INFO -- : x.....................................
Notice: /Stage[main]/Main/Ptemplates[/tmp]/run: run changed 'true' to 'now'
Notice: Applied catalog in 4.84 seconds
Also, I had to define defaultto to force the execution of the provider's run method.
What am I missing ?
Best Regards.
First, you should spend some time reading this blog post http://garylarizza.com/blog/2013/11/25/fun-with-providers/ and the two that follow it by Gary Larizza. They give a very good introduction to Puppet types and providers.
Your log statement is executed twice: first when your def sync in the type calls the provider's run method, and second when Puppet tries to determine the current value of your run property.
In order to write a type/provider that is not ensurable you need to do something like:
Type:
Puppet::Type.newtype(:ptemplates) do
  #doc = ""
  newparam(:name, :namevar => true) do
    desc ""
  end
  newproperty(:run) do
    desc ""
    newvalues(:now, :notnow)
    defaultto :now
  end
end
Provider:
Puppet::Type.type(:ptemplates).provide(:ruby) do
  desc ""

  def run
    # Determine the current value of the run property (:now or :notnow) and return it
  end

  def run=(value)
    # Do something to set the value of run
  end
end
Note that all type providers must be able to determine the value of the property and be able to set it. The difference between an ensurable and a non-ensurable type/provider is that an ensurable type/provider is also able to create and destroy the resource, e.g. add or remove a user. A non-ensurable type/provider cannot create or destroy the resource, e.g. SELinux: you can set its value, but you cannot remove SELinux.
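For contrast, a hedged sketch of what the same provider would additionally need if the type were ensurable (the method names follow Puppet's standard ensurable contract; the bodies here are placeholders, not a real implementation):

Puppet::Type.type(:ptemplates).provide(:ruby) do
  desc ""

  # The ensurable contract adds these three methods on top of the
  # property getter/setter pair shown above
  def exists?
    # Return true if the resource is currently present on the system
  end

  def create
    # Bring the resource into existence
  end

  def destroy
    # Remove the resource from the system
  end
end

The type would also need to declare ensurable in its newtype block.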
I was reading the Origen documentation on remotes and had a question. When do the remote files get retrieved relative to the Origen callbacks? The reason I ask is that the files we want to retrieve would be used to construct our DUT model, and there are some order dependencies.
I have tried all of the existing callbacks, in an attempt to configure the remotes, with no success.
def pre_initialize
  Origen.app.config.remotes = [
    {
      dir: 'product',
      vault: 'ssh://git@stash.com/myproduct/origen.git',
      version: '0.1.0'
    }
  ]
end
If I add the remotes config to the application file it works:
config.remotes = [
  {
    dir: 'product',
    vault: 'ssh://git@stash.com/myproduct/origen.git',
    version: '0.1.0'
  }
]
The problem with using the config/application.rb file is that we can't keep product-specific information anywhere in our application space. We use symbolic links to map to source files that are stored in test program stash repositories. I think we may need a new callback; please advise.
thx
** EDIT **
So I defined the remotes in another file and call the method that sets them from the boot.rb file. Then I placed the remote manager require! call in the on_create callback, but no remotes were fetched.
284: def on_create
285:   binding.pry
=> 286:   Origen.remote_manager.require!
287: end
[1] pry(#<PPEKit::Product>)> Origen.config.remotes
=> [{:dir=>"remote_checkout", :rc_url=>"ssh://git@stash.us.com:7999/osdk/us-calendar.git", :version=>"v0.1.0"}]
[2] pry(#<PPEKit::Product>)>
origen(main):001:0>
It seems like Origen.remote_manager.require! is not working. So I checked the remote manager file, and I don't see how the require! method could ever work from a callback: it checks whether the remotes are dirty, which can never be the case for a remote definition that was set after the application.rb file was loaded. So I created a resolve_remotes! method that seems to work:
def resolve_remotes!
  resolve_remotes
  remotes.each do |_name, remote|
    dir = workspace_of(remote)
    rc_url = remote[:rc_url] || remote[:vault]
    tag = Origen::VersionString.new(remote[:version])
    version_file = dir.to_s + '/.current_version'
    begin
      if File.exist?("#{dir}/.initial_populate_successful")
        FileUtils.rm_f(version_file) if File.exist?(version_file)
        rc = RevisionControl.new remote: rc_url, local: dir
        rc.checkout version: prefix_tag(tag), force: true
        File.open(version_file, 'w') do |f|
          f.write tag
        end
      else
        rc = RevisionControl.new remote: rc_url, local: dir
        rc.checkout version: prefix_tag(tag), force: true
        FileUtils.touch "#{dir}/.initial_populate_successful"
        File.open(version_file, 'w') do |f|
          f.write tag
        end
      end
    rescue Origen::GitError, Origen::DesignSyncError => e
      # If Git failed in the remote, it's usually easy to see what the problem is, but not *where* it is.
      # This will prepend the failing remote along with the error from the revision control system,
      # then rethrow the error
      e.message.prepend "When updating remotes for #{remote[:importer].name}: "
      raise e
    end
  end
end
The resolve_remotes! method just forces all known remotes to be fetched. Would a PR be accepted for this solution?
thx
Remotes are currently required at application load time, which means it occurs before any of the application callback points.
The content of config.remotes can still be made dynamic by assigning it in a block:
config.remotes do
  r = [{
    dir: 'product',
    vault: 'ssh://git@stash.com/myproduct/origen.git',
    version: '0.1.0'
  }]
  if some_condition
    r << { dir: ... }
  end
  r
end
The config.remotes attribute will be evaluated before the target is loaded, however, so you won't be able to reference dut, for example, though maybe that is good enough.
Alternatively, you could implement a post target require of the remotes within your application pretty easily.
Make the remotes block return an empty array if the dut is not available yet; that will make it work when it is called by Origen during application load:
config.remotes do
  if dut
    # As above example
  else
    []
  end
end
Then within the callback handler of your choice:
Origen.remote_manager.require!
That should cause it to re-evaluate config.remotes and fetch any remotes that are missing.
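For example, a minimal sketch using the on_create callback mentioned earlier in this thread (any callback that runs after the target is loaded should work the same way):

# In your callback handler of choice, e.g. on_create
def on_create
  # Re-evaluates config.remotes (dut is available by now) and
  # fetches any remotes that are missing from the workspace
  Origen.remote_manager.require!
end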
I'm trying to create a Perforce custom type devtrack but am stuck at the prefetch stage. There I am trying to use my instances class method to find the correct provider:
def self.prefetch(resources)
  instances.each do |prov|
    if resource = resources[prov.name]
      resource.provider = prov
    end
  end
end
and in the instances class method I try to find all clients on the current host by using the command
p4 workspaces -u
with the code below:
def self.get_list_of_workspaces_on_host(host)
  ws_strs = p4(['workspaces', '-u', <USERNAME>]).split("\n")
  ws_strs.select { |str| str.include?(host) }.map { |ws| ws.split[1] }
end

def self.get_workspace_properties(ws)
  md = /^(\w*)_.*_(main|\d{2})_managed$/.match(ws)
  ws_props = {}
  ws_props[:ensure] = :present
  ...
  ws_props
end

def self.instances
  host = `hostname`.strip
  get_list_of_workspaces_on_host(host).collect do |ws|
    ws_props = get_workspace_properties(ws)
    new(ws_props)
  end
end
and the p4 command is defined like this:

has_command(:p4, "/usr/bin/p4") do
  environment :P4PORT => <PERFORCE SERVER>, :P4USER => <USERNAME>
end
The problem I have is that for any p4 command to work I need access to the server, which is specified in the type:
devtrack { '36': source => '<PERFORCE SERVER>' }
but how can I access this value from prefetch? The problem being that prefetch is a class method and thus cannot access the @property_hash or the resource hash. Is there a way to get around this? Am I designing this completely wrong?
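One possible angle, sketched here with no guarantee it fits the design: Puppet passes self.prefetch a hash mapping resource titles to the catalog's resource objects, so parameters such as :source are readable from that argument (the single-server assumption below is illustrative):

def self.prefetch(resources)
  # The resources argument maps titles to the devtrack resources in the
  # catalog, so parameters declared in the manifest can be read here.
  # Assumption: all devtrack resources target the same server.
  source = resources.values.first[:source]
  instances(source).each do |prov|
    if resource = resources[prov.name]
      resource.provider = prov
    end
  end
end

Note that giving instances an argument departs from the usual zero-argument convention, so commands like puppet resource would no longer be able to call it.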
I am looking to apply a callback after each test's execution that will check for an alarm flag. I don't see any listed here, so I then checked the test interface and only see what looks like a flow-level callback:
# This will be called at the end of every flow or sub-flow (at the end of every
# Flow.create block).
# Any options passed to Flow.create will be passed in here.
# The options will contain top_level: true, whenever this is called at the end of a
# top-level flow file.
def shutdown(options = {})
end
We need the ability to check the alarm flags after every test but still apply a common group ID to a list of tests like this:
group "func tests", id: :func do
[:minvdd, :maxvdd].each do |cond|
func :bin1_1200, ip: :cpu, testmode: :speed, cond: cond
end
end
Here is an example of the V93K alarm flow flag (the original screenshot is not reproduced here):
thx!
It is common when writing interfaces to funnel all test generation methods through a common single method to add them to the flow:
def func(name, options = {})
  t = test_suites.add(name)
  t.test_method = test_methods.origen.functional_test(options)
  add_to_flow(t, options)
end

def para(name, options = {})
  t = test_suites.add(name)
  t.test_method = test_methods.origen.parametric_test(options)
  add_to_flow(t, options)
end

def add_to_flow(test_obj, options = {})
  # Here you can do anything you want before adding each test to the flow
  flow.test(test_obj, options)
  # Here you can do anything you want after adding each test to the flow
end
So while there is no per-test callback, you can generally achieve whatever you wanted to do with one via the above interface architecture.
EDIT:
With reference to the alarm flag flow structure you want to create, you would code it like this:
func :some_func_test, id: :sft1

if_failed :sft1 do
  bin 10, if_flag: "Alarm"
  bin 11, unless_flag: "Alarm"
end
Or, if you prefer, this is equivalent:
func :some_func_test, id: :sft1
bin 10, if_flag: "Alarm", if_failed: :sft1
bin 11, unless_flag: "Alarm", if_failed: :sft1
At the time of writing, that will generate something logically correct but with a sub-optimal branch structure.
In the next release that will be fixed, see the test case that has been added here and the output it generates here.
You can call all of the flow control methods from the interface the same way you can from within the flow, so you can inject such conditions in the add_to_flow method if you want.
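As a hedged sketch of that idea (the bin numbers and the :id option are illustrative, mirroring the snippets above), the alarm check could be appended to every test from the interface:

def add_to_flow(test_obj, options = {})
  flow.test(test_obj, options)
  # Illustrative: check the alarm flag after every test that was given an id
  if options[:id]
    if_failed options[:id] do
      bin 10, if_flag: "Alarm"
      bin 11, unless_flag: "Alarm"
    end
  end
end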
Note also that in the test case both if_flag and if_enable are used. if_enable should generally be used if the flag is something that would be set at the start of the flow (e.g. by the operator) and would not change. if_flag should be used if it is a flag that is subject to modification by the flow at runtime.
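As a short illustration of that distinction (the enable word name here is made up):

# Set by the operator before the flow starts and never changes at runtime
func :some_func_test, if_enable: :debug_mode

# Subject to modification by the flow itself while it executes
bin 10, if_flag: "Alarm"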
When implementing Origen::Parameters, I understood the importance of defining a 'default' set. But, in essence, my real default is named something different, so I implemented a hack of a parameter alias:
Origen.top_level.define_params :default do |params|
  params.tconds.override = 1
  params.tconds.override_lev_equ_set = 1
  params.tconds.override_lev_spec_set = 1
  params.tconds.override_levset = 1
  params.tconds.override_seqlbl = 'my_pattern'
  params.tconds.override_testf = 'tm_3'
  params.tconds.override_tim_spec_set = 'bist_xxMhz'
  params.tconds.override_timset = '1,1,1,1,1,1,1,1'
  params.tconds.site_control = 'parallel:'
  params.tconds.site_match = 2
end

Origen.top_level.define_params :cpu_mbist_hr, inherit: :default do |params|
  # way of aliasing parameter names
end
Is there a proper method of parameter aliasing that is just not documented?
There is no other way to do this currently, though I would be open to a PR to enable something like:
default_params = :cpu_mbist_hr
If you don't want them to be called :default in this case though, then maybe you don't really want them to be the default anyway.
e.g. adding this immediately after you define them would effectively give you an alternative default and would do pretty much the same job as the proposed API above:
# self is required here to help Ruby know that you are calling the params= API
# and not defining a local variable called params
self.params = :cpu_mbist_hr
I have a job named ActivityJob which fetches a user's public GitHub activities.
class ActivityJob < ActiveJob::Base
  queue_as :git

  def perform(user)
    begin
      # fetch github activities for a user using the user's auth_token
      activities = ActivitiesFetcher.new(user).fetch
      # process the retrieved activities.
    rescue Github::Error::NotFound
      # user is not present. ignore and complete the job.
    rescue Github::Error::Unauthorized
      # auth token is invalid. re-assign a new token on next sign in
      user.auth_token = nil
      user.save
      # refresh_gh_client gets a new client using a random user's auth_token
      user.refresh_gh_client
      retry
    rescue Github::Error::Forbidden
      # Probably hit the rate limit, use another token
      user.refresh_gh_client
      retry
    end
  end
end
The refresh_gh_client method gets a random user from the db and uses their auth_token to create a new gh_client. The new gh_client is assigned to the current user. This is working fine.
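For context, a purely hypothetical sketch of what such a method might look like based on that description (the User scope and the Github.new call are assumptions, not the actual code):

# Hypothetical sketch only; the real implementation is not shown in the question
def refresh_gh_client
  donor = User.where.not(auth_token: nil).order('RANDOM()').first
  self.gh_client = Github.new(oauth_token: donor.auth_token)
end

In the test cases, I am using mocha to stub the method calls.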
class ActivityJobTest < ActiveJob::TestCase
  def setup
    super
    @user = create :user, github_handle: 'prasadsurase', auth_token: 'somerandomtoken'
    @round = create :round, :open
    clear_enqueued_jobs
    clear_performed_jobs
    assert_no_performed_jobs
    assert_no_enqueued_jobs
  end

  test 'perform with Github::Error::Unauthorized exception' do
    User.any_instance.expects(:refresh_gh_client).returns(nil)
    ActivitiesFetcher.any_instance.expects(:fetch).raises(Github::Error::Unauthorized, {})
    ActivityJob.perform_now(@user)
    @user.reload
    assert_nil @user.auth_token
  end
end
The problem is that the retry in the job calls ActivitiesFetcher again, breaking the expectation that it was supposed to be called only once. The test fails with:
unexpected invocation: #<AnyInstance:User>.refresh_gh_client()
unsatisfied expectations:
- expected exactly once, invoked twice: #<AnyInstance:User>.refresh_gh_client(any_parameters)
- expected exactly once, invoked twice: #<AnyInstance:ActivitiesFetcher>.fetch
Make an interface for the ActivitiesFetcher behaviour you need, then create an implementation of that interface used only for testing, one that deals with the idiosyncrasies of testing.
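A minimal sketch of that idea, assuming the fetcher class can be injected into the job (the fetcher_class seam and FakeFetcher are illustrative names, not part of the question's code):

class ActivityJob < ActiveJob::Base
  queue_as :git

  # Illustrative seam: lets tests substitute the fetcher implementation
  class_attribute :fetcher_class
  self.fetcher_class = ActivitiesFetcher

  def perform(user)
    fetcher = self.class.fetcher_class.new(user)
    begin
      activities = fetcher.fetch
      # process the retrieved activities.
    rescue Github::Error::Unauthorized
      user.auth_token = nil
      user.save
      user.refresh_gh_client
      retry
    end
  end
end

# Test-only implementation: raises Unauthorized on the first call and returns
# an empty result on the retry, so the recovery path is exercised without
# any_instance expectations having to count invocations.
class FakeFetcher
  def initialize(user)
    @user = user
    @raised = false
  end

  def fetch
    unless @raised
      @raised = true
      # Mirrors the question's raises(Github::Error::Unauthorized, {})
      raise Github::Error::Unauthorized.new({})
    end
    []
  end
end

In the test, ActivityJob.fetcher_class = FakeFetcher swaps the real fetcher out, and the assertions on @user.auth_token can stay exactly as they are.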