How to init and kill DUT between rspec files for a gem? - origen-sdk

I have a gem that supports two different libraries with a generic DUT model:
module MyGemLibrary
  module RSpec
    class DUT
      include Origen::TopLevel
      attr_accessor :force_import, :nick, :revision

      def initialize(options = {})
        options = {
          force_import: false,
          nick: :rspec,
          revision: :a0
        }.update(options)
        @force_import = options[:force_import]
        @nick = options[:nick]
        @revision = options[:revision]
      end
    end
  end
end
I have two rspec files:
myserver:ondie_devices $ ls spec
ringosc_spec.rb spec_helper.rb testtrans_spec.rb
What is the best known method for initializing the DUT and closing the DUT in each spec file? When I used a local variable to store the DUT I got this error:
myserver:ondie_devices $ origen specs
[INFO] 0.010[0.010] || OndieDevices: Loading ringosc model
Attempt to set an instance of OndieDevices::RSpec::DUT as the top level when there is already an instance of OndieDevices::RSpec::DUT defined as the top-level!
I have seen various ways of doing this but none seem to accomplish what I need done:
Prevent the error above.
Ensure Origen.top_level is initialized correctly such that the instance attributes are readily available from within the gem code library, e.g.:
An error occurred while loading ./spec/ringosc_spec.rb.
Failure/Error: Pathname.new("#{tmpdir}/#{Origen.top_level.nick}_#{Origen.top_level.revision}_ringosc_#{data_type}.bin")
NoMethodError:
undefined method `nick' for nil:NilClass

You can't manually instantiate multiple top-level objects; it needs to be done within the context of a target load/change.
The easiest way to do this for spec tests is by using anonymous targets - https://origen-sdk.org/origen/guides/runtime/programming/#Anonymous_Targets
Origen.target.temporary = -> do
  DUT.new
end
Origen.load_target
That will also safely unload any existing target (and its DUT), which will resolve your issue.
It is common to do something like the above within a before :each block to reload the target between tests, or in a before :all block to have the same target loaded for all tests in a spec file; a sketch follows below.
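For example, a minimal sketch of a before :each hook in spec_helper.rb, using the DUT model from the question:
# spec/spec_helper.rb - a sketch, not verbatim from the docs
RSpec.configure do |config|
  config.before :each do
    # an anonymous target; loading it safely unloads any previous DUT
    Origen.target.temporary = -> do
      MyGemLibrary::RSpec::DUT.new
    end
    Origen.load_target
  end
end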

Related

Using a defined type of a yet unreleased Puppet module

We are currently developing three Puppet modules.
One contains a Defined Type which the other two shall use. This module, let's call it ModuleA, is not released yet into our local forge/repository and will not be until it has been successfully implemented and tested in at least one of the other two modules (company procedure).
The Defined Type is used in the other two modules to create a resource and is referenced via 'include'.
In the metadata.json ModuleA is added as dependency.
When I run pdk test unit it fails because the Defined Type is unknown.
Currently there is only a single it { is_expected.to compile.with_all_deps } test in the other two modules, nothing complex.
How can the other two modules be tested if ModuleA is not released yet?
Your tests for the other modules can provide stub implementations of the required defined type. This can be done with a :pre_condition:
describe 'Module_b::Some_class' do
  let(:pre_condition) { 'define module_a::some_type($param1, $param2) { }' }
  # ...
end
Do be sure that the number and names of the stub's parameters match those of the real defined type.

Puppet Run Stages with sub modules

I am learning Puppet and have taken their "Getting Started with Puppet" class but it did not cover Run Stages, and their documentation on Run Stages is thin.
I need to make sure that two things happen before anything else that Puppet does. I have been advised by the instructor of my "Getting Started with Puppet" class to look at Run Stages.
In my investigation of Run Stages, I have learned that the puppetlabs-stdlib module sets up some "standard" Run Stages, one of them being "setup". As shown in the snippet below, I have implemented stage => 'setup' as per https://puppet.com/docs/puppet/5.5/lang_run_stages.html. However, I am getting errors from Puppet:
root@server:~# puppet agent -t
Info: Using configured environment 'dev_branch'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Retrieving locales
Info: Loading facts
Error: Could not retrieve catalog from remote server: Error 500 on SERVER:
Server Error: Evaluation Error: Error while evaluating a Resource Statement, Could not find stage setup specified by
Class[Vpn::Roles::Vpn::Client] (file:
/etc/puppetlabs/code/environments/wal_prod1910_dev/modules/bh/manifests/roles/snaplocker.pp, line: 5, column: 3) on node server
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
Looking at the error message and the Puppet documentation, I have added quotation marks around the various string values and replaced my initial -> with the correct =>, but I still get the same error.
class bh::roles::snaplocker()
{
  # stage => setup takes advantage of the setup run stage introduced by
  # puppetlabs-stdlib, which is pulled in by puppet-control-bh/Puppetfile
  class { 'vpn::roles::vpn::client': stage => 'setup' }
  class { 'bh::profiles::move_archives': stage => 'setup' }
  #...
}
Looking more closely at the error message, I believe the cause is that puppetlabs-stdlib is introduced by the Puppetfile of the project that calls the module I am working on. I have deliberately avoided pulling puppetlabs-stdlib into the class I am working on to avoid duplication, but apparently I need it... The module I am working on does not have a Puppetfile. Do I need to somehow include puppetlabs-stdlib in my sub-module? If so, how should I do that? If not, how do I tell my sub-module to use the instance declared in the parent module's Puppetfile?
Usually, you don't need stages at all if your class/resource dependencies are correct.
From the "Run stages" documentation:
CAUTION: Due to these limitations, use stages with the simplest of classes, and only when absolutely necessary. A valid use case is mass dependencies like package repositories.
In your case, if you really want stages, you should add include stdlib::stages or explicitly declare the stage, like stage { 'setup': }; see the sketch below.
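A minimal sketch of the explicit route, using the class name from the question (stages also need to be ordered relative to main, which stdlib::stages otherwise handles for you):
# site.pp (or another top-scope manifest) - a sketch, not a verified fix
stage { 'setup':
  before => Stage['main'],   # run everything assigned to 'setup' first
}

class { 'vpn::roles::vpn::client':
  stage => 'setup',
}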

Does Origen.load_target allow initialization options to be augmented?

I am using load_target in my spec tests like so:
describe "MBIST Test Module" do
before :all do
Origen.app.unload_target!
Origen.load_target 'prod_a0'
end
And I see that the load_target method accepts an optional options hash. However, when I use it as shown below, I do not see the options being applied to the initialization options hash. Is this expected behavior?
describe "MBIST Test Module" do
before :all do
Origen.app.unload_target!
Origen.load_target 'prod_a0', force_import: true
end
thx
Such options are not passed automatically into the initialization of the DUT model; rather, they are available within the scope of the target file. This is described in this section of the docs - https://origen-sdk.org/origen/guides/runtime/programming/#Configurable_Targets
If you want them to be available within your DUT initialization, you can simply pass them through:
# target/my_dut.rb
MyApp::MyDut.new(options)
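Putting it together, a minimal sketch (assuming the DUT's initialize stores the option, as in the model at the top of this page):
# target/prod_a0.rb - 'options' here is the hash passed to load_target
MyApp::MyDut.new(options)

# in the spec file
Origen.app.unload_target!
Origen.load_target 'prod_a0', force_import: true
Origen.top_level.force_import  # => true, once initialize stores it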

When should I use require() and when to use define()?

I have been playing around with requirejs for the last few days. I am trying to understand the differences between define and require.
define seems to allow for module separation and for dependency ordering to be adhered to, but it downloads all the files it needs up front, whilst require only loads what you need when you need it.
Can the two be used together, and for what purposes should each of them be used?
With define you register a module in require.js that you can then depend on in other module definitions or require statements.
With require you "just" load/use a module or javascript file that can be loaded by require.js.
For examples have a look at the documentation
My rule of thumb:
Define: If you want to declare a module other parts of your application will depend on.
Require: If you just want to load and use stuff.
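For instance, a minimal sketch of that rule of thumb (the 'logger' module name is hypothetical):
// define: declare a reusable module
define('logger', [], function() {
  return {
    log: function(msg) { console.log(msg); }
  };
});

// require: just load and use stuff
require(['logger'], function(logger) {
  logger.log('hello');
});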
From the require.js source code (line 1902):
/**
* The function that handles definitions of modules. Differs from
* require() in that a string for the module should be the first argument,
* and the function to execute after dependencies are loaded should
* return a value to define the module corresponding to the first argument's
* name.
*/
The define() function accepts two optional parameters (a string that represents a module ID and an array of required modules) and one required parameter (a factory method).
The factory method MUST return the implementation for your module (in the same way that the Module Pattern does).
The require() function doesn't have to return the implementation of a new module.
Using define() you are asking something like "run the function that I am passing as a parameter and assign whatever it returns to the ID that I am passing, but first check that these dependencies are loaded".
Using require() you are saying something like "the function that I pass has the following dependencies; check that these dependencies are loaded before running it".
The require() function is where you use your defined modules, in order to be sure that the modules are defined; you are not defining new modules there.
General rules:
You use define when you want to define a module that will be reused
You use require to simply load a dependency
//sample1.js file: module definition
define(function() {
  var sample1 = {};
  //do your stuff
  return sample1;
});

//sample2.js file: module definition that also depends on jQuery and sample1.js
define(['jquery', 'sample1'], function($, sample1) {
  var sample2 = {
    getSample1: sample1.getSomeData()
  };
  var selectSomeElement = $('#someElementId');
  //do your stuff....
  return sample2;
});

//calling in any file (mainly in an entry file)
require(['sample2'], function(sample2) {
  // sample1 will be loaded as well
});
Hope this helps you.
"define" method for facilitating module definition
and
"require" method for handling dependency loading
define is used to define named or unnamed modules based on the proposal using the following signature:
define(
  module_id /*optional*/,
  [dependencies] /*optional*/,
  definition function /*function for instantiating the module or object*/
);
require, on the other hand, is typically used to load code in a top-level JavaScript file, or within a module should you wish to dynamically fetch dependencies (see the sketch below the link).
Refer to https://addyosmani.com/writing-modular-js/ for more information.
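A minimal sketch of that dynamic fetching from inside a module (the 'dialog' module is hypothetical):
define(['require'], function(require) {
  return {
    onClick: function() {
      // fetch 'dialog' only when it is actually needed
      require(['dialog'], function(dialog) {
        dialog.open();
      });
    }
  };
});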
require() and define() are both used to load dependencies, but there is a major difference between the two methods.
It's very simple:
require(): used to run immediate functionality.
define(): used to define modules for use in multiple locations (reuse).

Understanding Node.js modules: multiple requires return the same object?

I have a question related to the node.js documentation on module caching:
Modules are cached after the first time they are loaded. This means
(among other things) that every call to require('foo') will get
exactly the same object returned, if it would resolve to the same
file.
Multiple calls to require('foo') may not cause the module code to be
executed multiple times. This is an important feature. With it,
"partially done" objects can be returned, thus allowing transitive
dependencies to be loaded even when they would cause cycles.
What is meant by may?
I want to know if require will always return the same object. So if I require a module A in app.js and change the exports object within app.js (the one that require returns), and then require a module B in app.js that itself requires module A, will I always get the modified version of that object, or a new one?
// app.js
var a = require('./a');
a.b = 2;
console.log(a.b); //2
var b = require('./b');
console.log(b.b); //2
// a.js
exports.a = 1;
// b.js
module.exports = require('./a');
If both app.js and b.js reside in the same project (and in the same directory) then both of them will receive the same instance of A. From the node.js documentation:
... every call to require('foo') will get exactly the same object returned, if it would resolve to the same file.
The situation is different when a.js, b.js and app.js are in different npm modules. For example:
[APP] --> [A], [B]
[B] --> [A]
In that case the require('a') in app.js would resolve to a different copy of a.js than require('a') in b.js and therefore return a different instance of A. There is a blog post describing this behavior in more detail.
node.js has caching implemented which prevents node from reading files thousands of times while executing large server projects.
This cache is listed in the require.cache object. I have to note that this object is readable and writable, which gives you the ability to delete modules from the cache without killing the process.
http://nodejs.org/docs/latest/api/globals.html#require.cache
To answer the question: after doing some tests, node.js does cache the module.exports, so require returns a reference to the same cached object rather than a new instance, and modifying the exported object does carry over to later require() calls. Modifying require.cache[{module}].exports ends up in the new, modified object being returned, and editing the file after deleting its cache entry does change the exported object.
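A minimal sketch of that cache manipulation, using the a.js module from the question:
// evicting a module from require.cache forces a fresh load
var path = require.resolve('./a');
var first = require('./a');      // loads and caches a.js
delete require.cache[path];      // remove the cached entry
var second = require('./a');     // a.js executes again: a brand-new exports object
console.log(first === second);   // false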
Since the question was posted, the document has been updated to make it clear why "may" was originally used. It now answers the question itself by making things explicit (my emphasis to show what's changed):
Modules are cached after the first time they are loaded. This means
(among other things) that every call to require('foo') will get
exactly the same object returned, if it would resolve to the same
file.
Provided require.cache is not modified, multiple calls to
require('foo') will not cause the module code to be executed multiple
times. This is an important feature. With it, "partially done" objects
can be returned, thus allowing transitive dependencies to be loaded
even when they would cause cycles.
From what I have seen, if the module name resolves to a file previously loaded, the cached module will be returned; otherwise the new file will be loaded separately.
That is, caching is based on the actual file name that gets resolved. This is because, in general, there can be different versions of the same package installed at different levels of the file hierarchy, and they must be loaded accordingly.
What I am not sure about is whether there are cases of cache invalidation not under the programmer's control or awareness, which might make it possible to accidentally reload the very same package file multiple times.
In case the reason why you want require(x) to return a fresh object every time is just because you modify that object directly - which is a case I ran into - just clone it, and modify and use only the clone, like this:
var a = require('./a');
a = JSON.parse(JSON.stringify(a));
try drex: https://github.com/yuryb/drex
drex is watching a module for updates and cleanly re-requires the
module after the update. New code is being require()d as if the new
code is a totally different module, so require.cache is not a problem.
When you require an object, you are requiring its reference address, and by requiring the object twice, you will get the same address! To have copies of the same object, you should copy (clone) it.
var obj = require('./obj');
var a = JSON.parse(JSON.stringify(obj));
var b = JSON.parse(JSON.stringify(obj));
var c = JSON.parse(JSON.stringify(obj));
Cloning can be done in multiple ways; see this for further information.
