How to handle different pin names based on package - origen-sdk

We're importing our pins into the dut model by parsing XML design documents, which list different pin names based on the package:
<pin name="mypin" direction="input">
  <package_list>
    <package package="pkg_a" name="mypin_a" location="XX" direction="input"/>
    <package package="pkg_b" name="mypin_b" location="XX" direction="input"/>
  </package_list>
</pin>
This is the same pin, just a different name depending on its package.
In my Ruby flow files I set the package depending on the test insertion, so that I get the correct pins when accessing the dut model's pins.
Is there currently a way to access the same pin but change its name depending on the package? As in, when using the above example, how would I get this behavior?
Package   Pin Name
-------   --------
nil       mypin
pkg_a     mypin_a
pkg_b     mypin_b
I see in the Origen documentation that there are package-scoped and function-scoped attributes, but there doesn't seem to be a way to rename a pin based on the package, assuming I'm reading the docs correctly.
Would this functionality require an extension to the package-scoped attributes feature? Or is there a simpler way via some fancy aliasing?
Thanks

Even though this is the same physical pin, does it need to be modeled that way if the different names are mutually exclusive?
e.g. while this will be modeled as 3 separate pin objects, only one of them would be available in each of the 3 package options, so it should behave like you want:
add_package :none
add_package :pkg_a
add_package :pkg_b
add_pin :mypin, package: :none, direction: :input
add_pin :mypin_a, package: :pkg_a, direction: :input
add_pin :mypin_b, package: :pkg_b, direction: :input
I guess what you probably want, though, is to be able to call dut.pin(:mypin) in each package scope and have it return the pin with the correct ID for the current package; however, I couldn't find a way to get such an alias to work correctly.
Such a package-scoped alias is something that perhaps we need to consider adding to Origen.
In the meantime, perhaps you could get by with a helper method in your application?
def find_pin(id)
  dut.pins("#{id}_#{dut.package.id.to_s.sub('pkg_', '')}")
end
dut.package = :pkg_a
pin = find_pin(:mypin)
pin.id # => :mypin_a
dut.package = :pkg_b
pin = find_pin(:mypin)
pin.id # => :mypin_b

Another option is to use the pin metadata hash - each pin has a metadata hash in which you can store whatever you like:
add_package :pkg_a
add_package :pkg_b
add_pin :mypin, package: :all, direction: :input,
        meta: { names: { pkg_a: "mypin_a", pkg_b: "mypin_b" } }
Then you get the single pin defined and you can retrieve the package-specific name pretty easily in your application code:
origen(main):002:0> dut.package = :pkg_a
=> :pkg_a
origen(main):007:0> dut.pin(:mypin).meta[:names][dut.package.id]
=> "mypin_a"
origen(main):008:0> dut.package = :pkg_b
=> :pkg_b
origen(main):009:0> dut.pin(:mypin).meta[:names][dut.package.id]
=> "mypin_b"

Related

Is DiffSuppressFunc or being more restrictive when saving to TF state preferable in Terraform SDKv2?

Context: I'm adding a new resource to a TF Provider (using SDKv2) with roughly the following schema:
resource "player" "football" {
type = "FOOTBALL"
...
config = {
"dribbling" = "50"
"speed" = "90"
"position" = "GOALKEEPER"
}
}
that I represent as:
"config": {
Type: schema.TypeMap,
Elem: &schema.Schema{
Type: schema.TypeString,
},
Required: true,
ForceNew: true,
},
The important detail here is that different player types have different sets of required attributes (dribbling, speed, position for football; height, can_dunk, arm_span for basketball) -- all players share the same API endpoint, so I introduced just one resource to cover them all.
I'd like to support importing players, and apparently the READ response includes a bunch of fields that are optional on create (and I suspect most users won't have them in their Terraform configuration file), which means I get a state difference when saving the whole config like:
d.Set("config", player.GetConfig()) // GetConfig includes a bunch of new attributes (optional on create or even computed)
So I've got a question: which of the following two options is preferable?
Implement a DiffSuppressFunc for the config attribute where I'll ignore these optional fields (the downside is an implicit drift between main.tf and the TF state file).
Be more restrictive when writing config to the TF state file, i.e. instead of
d.Set("config", player.GetConfig())
do something like
// filtered config will match config in main.tf
filteredConfig = ...
d.Set("config", filteredConfig)
In some other Terraform providers that deal with similar situations (where a particular argument has a mixture of configuration-provided and remote-system-provided nested values), the resource type implementation takes a compromise position of effectively exposing the same data in two different attributes, where one of them represents what the user configured and the other represents the full data returned by the remote system. For example, you might have config to be set in the configuration, and expanded_config representing the full set of elements the server decided on.
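A rough sketch of what that could look like in the SDKv2 schema (the expanded_config name is only illustrative, not something this answer prescribes): keep the config attribute from the question as-is, and add a computed sibling that holds everything the server returns:
// Computed counterpart to "config"; Read would populate it with the full
// map returned by the remote system, while "config" keeps only user-managed keys.
"expanded_config": {
  Type:     schema.TypeMap,
  Elem:     &schema.Schema{Type: schema.TypeString},
  Computed: true,
},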
There is a challenge with that approach in that you'll probably need a special rule in your Read function to somehow decide if a change you detect in the remote system constitutes "drift" relative to the configuration or if it's just an additional element added by the server.
From what you described it seems like the rule could be that any key that's present in config in the prior state (that is, the values visible to d.Get inside Read before you call d.Set) would have its value overwritten by what the server returned, but any keys that were not present before are ignored entirely. This would create the effect then that any key the author specified in the configuration is considered "managed by Terraform" while any other key is only read by Terraform and not directly managed.
If you adopt that strategy then it's worth keeping in mind what will happen in a situation where the user has changed the configuration to include a new key or to remove a previously-present key. The Read operation is in terms of the previous state rather than the configuration, so that function will see the keys that were present at the end of the last apply, not the keys currently present in the configuration. In particular this means that if an author adds a new key that the server was already tracking then it will appear in the subsequent plan as being added, even though it might technically be more appropriate to show it as an in-place update ~ or a no-op. This is an example of the compromises we sometimes need to make in order to adapt remote APIs to fit within Terraform's model of resource instances.
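As a hedged illustration of that prior-state rule (not part of the original answer; the "config" attribute and player.GetConfig() are taken from the question), the Read function could filter the server's map down to the keys that were already in state before calling d.Set:
// Assumes: import "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
// filterToPriorKeys keeps only the keys already present in the prior state for
// "config", so keys added by the server are read but not reported as drift.
func filterToPriorKeys(d *schema.ResourceData, remote map[string]string) map[string]interface{} {
  prior := d.Get("config").(map[string]interface{})
  filtered := map[string]interface{}{}
  for k := range prior {
    if v, ok := remote[k]; ok {
      filtered[k] = v
    }
  }
  return filtered
}
// Inside Read (sketch):
//   remote := player.GetConfig()
//   if err := d.Set("config", filterToPriorKeys(d, remote)); err != nil { ... }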

Avoid triggering an update if Resource.Schema attribute changes in Terraform

I'm trying to figure out if it is possible to prevent resource updates when one of the Resource.Schema attributes changes.
Essentially I'm building a provider that manages infrastructure. I've got a resource that updates firmware. Something like:
resource "redfish_simple_update" "update" {
transfer_protocol = "HTTP"
target_firmware_image = "/home/mikeletux/BIOS_FXC54_WN64_1.15.0.EXE"
}
As you can see, target_firmware_image refers to the full path of my firmware package. I want to be able to change directories without triggering an update, i.e. changing the target_firmware_image above to /home/mikeletux/Downloads/BIOS_FXC54_WN64_1.15.0.EXE for instance.
I don't know if this is possible. I've done my own research and found the CustomDiff functions that can be added to the schema, but I don't think that matches my scenario.
Do you think of something else I could do?
Thanks!
Just posting here how I finally did it.
To avoid triggering an update when the path changes but not the filename, I've found that the DiffSuppressFunc function comes in very handy here:
"target_firmware_image": {
Type: schema.TypeString,
Required: true,
Description: "Target firmware image used for firmware update on the redfish instance. " +
"Make sure you place your firmware packages in the same folder as the module and set it as follows: \"${path.module}/BIOS_FXC54_WN64_1.15.0.EXE\"",
// DiffSuppressFunc will allow moving fw packages through the filesystem without triggering an update if so.
// At the moment it uses filename to see if they're the same. We need to strengthen that by somehow using hashing
DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool {
if filepath.Base(old) == filepath.Base(new) {
return true
}
return false
},
}
By checking the old and new values using filepath.Base(), I can figure out whether the filename is the same, no matter what path the file is placed in.
I'd like to improve that behavior in the future by implementing file hashing, so even the filename doesn't matter, but that's something I'll leave for a new version.
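A rough sketch of that hashing idea (my own assumption of how it could look, not part of the answer above): suppress the diff when the files at the old and new paths have the same SHA-256 digest, falling back to the filename comparison when either file can't be read:
// Assumes: import ("crypto/sha256"; "encoding/hex"; "io"; "os"; "path/filepath")
// fileSHA256 returns the hex-encoded SHA-256 digest of the file at path.
func fileSHA256(path string) (string, error) {
  f, err := os.Open(path)
  if err != nil {
    return "", err
  }
  defer f.Close()
  h := sha256.New()
  if _, err := io.Copy(h, f); err != nil {
    return "", err
  }
  return hex.EncodeToString(h.Sum(nil)), nil
}
// Candidate DiffSuppressFunc body: identical content means no update is needed.
func sameFirmwareImage(old, new string) bool {
  oldHash, errOld := fileSHA256(old)
  newHash, errNew := fileSHA256(new)
  if errOld != nil || errNew != nil {
    return filepath.Base(old) == filepath.Base(new) // fall back to the filename check
  }
  return oldHash == newHash
}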
Thanks!

Clear session or session variables on Bixby

Is there a way to specify that a session should be ended, or to clear out the memory of previous actions? In my testing (simulator only) I'm seeing a couple cases where Bixby is remembering a previous entry that isn't relevant anymore.
Example utterances
remove wet diaper
wet diaper
In this case there are two possible enums that can be said: "actionType", which is optional ("remove" in this case), and "statType" ("wet diaper" in this case).
What is happening is that on the second phrase it's caching the actionType, so my JavaScript still receives the "remove" even though it's not included.
I haven't tried this on an actual device (only the simulator) so it's possible this is just a simulation quirk.
This is kind of related to this question. There was a follow-up comment from the OP related to session management.
How does Bixby retain data from a previous NL input?
So, if you read that link: is there a way I can signal to Bixby that the conversation is over, or at least tell it not to remember previous entries for the action?
One way would be to use the transient feature. Here is more information.
For example, alter your input type so it doesn't carry over across executions.
name (ActionType) {
  features {
    transient
  }
}
Also make sure all input types are NL-friendly. name/enum concepts are meant for NL and you can attach vocabulary to them.
I used to have a similar issue to yours; in my case, the problem was related to the type of the 'requires' property inside the input-group declared in my action.model.bxb.
You need to handle these two input cases separately in different action.model.bxb files:
In one of them you might have something like (model 1):
input-group (removeWeaper) {
  requires (OneOrMoreOf)
  collect {
    input (ActionType) {
      type (Type)
      min (Optional)
    }
    input (StatType) {
      type (Type)
      min (Optional)
    }
  }
}
Here, Bixby will know that at least one of these properties will appear in your input and will be waiting for an input with that structure.
In the other file you might have (model 2):
input-group (Weaper) {
  requires (OneOf)
  collect {
    input (StatType) {
      type (Type)
      min (Optional)
    }
  }
}
Here, Bixby will be waiting to catch an input that contains only one of the indicated values.
(model 1) This would be OK only if you run 'wet diaper' the first time; when you then run 'remove wet diaper' it might also work. The problem is when you run 'wet diaper' again, because Bixby stores your previous approach including "remove". I'm not sure if there is something to clear the stored values, but this is where (model 2) helps you catch only the input 'wet diaper' as a different statement.
I share this workaround from my own experience, and I hope it helps you solve the problem or at least gives you another perspective on how to handle it.

Retrieve superior hash-key name in Hiera

Hello, I am building a data structure in Hiera / Puppet for creating MySQL config files. My goal is to have some default values which can be overwritten with a merge. It works up to this point.
Because we have different MySQL instances on many hosts, I want to automatically configure some paths to be unique for every instance. I have the instance name as a hash (name) of hashes in the namespace our_mysql::configure_db::dbs:
In my case I want to look up the instance names like 'sales_db' or 'hr_db' in paths like datadir, but I cannot find a way to look up the superior key name.
Hiera data from "our_mysql" module represents some default values:
our_mysql::configure_db::dbs:
  'defaults':
    datadir: /var/lib/mysql/"%{lookup('lookup to superior hash-key name')}"
    log_error: /var/log/mysql/"%{lookup('lookup to superior hash-key name')}".log
    logbindir: /var/lib/mysql/"%{lookup('lookup to superior hash-key name')}"
    db_port: 3306
    ...: ...
    KEY_N: VALUE_N
Hiera data from node definition:
our_mysql::configure_db::dbs:
  'sales_db':
    db_port: "3317"
    innodb_buffer_pool_size: "1"
    innodb_log_file_size: 1GB
    innodb_log_files_in_group: "2"
    server_id: "1"
  'hr_db':
    db_port: "3307"
I know how to do simple lookups or how to iterate with
.each | String $key, Hash $value | { ... }
but I have no clue how to reference a key from a certain hierarchy level. Searching all related Puppet and Hiera topics didn't help.
Is it possible in any way, and if yes, how?
As I understand the question, I think what you hope to achieve is that, for example, when you look up the our_mysql::configure_db::dbs.sales_db key, you get a merge of the data for that (sub)key and those for the our_mysql::configure_db::dbs.defaults subkey, AND that the various %{lookup ...} tokens in the latter somehow resolve to the string sales_db.
I'm afraid that's not going to happen. The interpolation tokens don't even factor in here -- Hiera simply won't perform such a merge at all. I guess you have a hash-merge lookup in mind, but that merges only identical keys and subkeys, so not our_mysql::configure_db::dbs.sales_db and our_mysql::configure_db::dbs.defaults. Hiera provides for defaults for particular keys in the form of data recorded for those specific keys at a low-priority level of the data hierarchy. The "defaults" subkey you present, on the other hand, has no special meaning to the standard Hiera data providers.
You can still address this problem, just not entirely within the data. For example, consider this:
$dbs = lookup('our_mysql::configure_db::dbs', Hash, 'deep')
$dbs.filter |$dbname, $dbparams| { $dbname != 'defaults' }.each |$dbname, $dbparams| {
  # Declare a database using a suitable resource type. "my_mysql::database" is
  # a dummy resource name for the purposes of this example only
  my_mysql::database {
    $dbname:
      * => $dbparams;
    default:
      datadir   => "/var/lib/mysql/${dbname}",
      log_error => "/var/log/mysql/${dbname}.log",
      logbindir => "/var/lib/mysql/${dbname}",
      *         => $dbs['defaults'];
  }
}
That supposes data of the form presented in the question, and it uses the data from the defaults subkey where those do not require knowledge of the specific DB name, but it puts the patterns for the various directory names into the resource declaration instead of into the data. The most important things to recognize are the use of the splat * parameter wildcard for obtaining multiple parameters from a hash, and the use of per-expression resource property defaults via the default keyword in a resource declaration.
If you wanted to do so, you could push more details of the directory names back into the data with a little more effort (and one or more new keys).

Generating a unique key for dynamodb within a lambda function

DynamoDB does not have the option to automatically generate a unique key for you.
In examples I see people creating a uid out of a combination of fields, but is there a way to create a unique ID for data which does not have any combination of values that can act as a unique identifier? My question is specifically aimed at Lambda functions.
One option I see is to create a uuid based on the timestamp with a counter at the end, insert it (or check if it exists) and in case of duplication retry with an increment until success. But, this would mean that I could potentially run over the execution time limit of the lambda function without creating an entry.
If you are using Node.js 8.x, you can use the uuid module.
var AWS = require('aws-sdk'),
    uuid = require('uuid'),
    documentClient = new AWS.DynamoDB.DocumentClient();
[...]
Item: {
  "id": uuid.v1(),
  "Name": "MyName"
},
If you are using Node.js 10.x, you can use awsRequestId without the uuid module.
var AWS = require('aws-sdk'),
    documentClient = new AWS.DynamoDB.DocumentClient();
[...]
Item: {
  "id": context.awsRequestId,
  "Name": "MyName"
},
The UUID package available on NPM does exactly that.
https://www.npmjs.com/package/uuid
You can choose between 4 different generation algorithms:
V1 Timestamp
V3 Namespace
V4 Random
V5 Namespace (again)
This will give you:
"A UUID [that] is 128 bits long, and can guarantee uniqueness across
space and time." - RFC4122
The generated UUID will look like this: 1b671a64-40d5-491e-99b0-da01ff1f3341
If it's too long, you can always encode it in Base64 to get G2caZEDVSR6ZsAAA2gH/Hw but you'll lose the ability to manipulate your data through the timing and namespace information contained in the raw UUID (which might not matter to you).
awsRequestId looks like it's actually a V4 (Random) UUID; code snippet below:
exports.handler = function(event, context, callback) {
  console.log('remaining time =', context.getRemainingTimeInMillis());
  console.log('functionName =', context.functionName);
  console.log('AWSrequestID =', context.awsRequestId);
  callback(null, context.functionName);
};
In case you want to generate this yourself, you can still use https://www.npmjs.com/package/uuid or Ulide (slightly better in performance) to generate different versions of UUID based on RFC-4122
For Go developers, you can use packages such as Google's UUID, Pborman, or Satori. Pborman is better in performance; check these articles and benchmarks for more details.
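For example, a minimal sketch using Google's github.com/google/uuid package to generate a random (version 4) UUID:
package main

import (
  "fmt"

  "github.com/google/uuid"
)

func main() {
  // uuid.New() returns a random (version 4) UUID.
  id := uuid.New().String()
  fmt.Println(id) // e.g. 1b671a64-40d5-491e-99b0-da01ff1f3341
}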
More info about the Universally Unique Identifier specification can be found here.
We use the idgen npm package to create IDs. The length you need depends on the expected count; you can increase or decrease the size accordingly.
https://www.npmjs.com/package/idgen
We prefer this over UUIDs or GUIDs since those are effectively just numbers. With DynamoDB it is all characters for a guid/uuid anyway; using idgen you can create more IDs with fewer collisions using fewer characters, since each character has a larger range.
Hope it helps.
EDIT1:
Note! As of idgen 1.2.0, IDs of 16+ characters will include a 7-character prefix based on the current millisecond time, to reduce likelihood of collisions.
If you are using the Node.js runtime, you can use this:
const crypto = require("crypto")
const uuid = crypto.randomUUID()
or
import { randomUUID } from 'crypto'
const uuid = randomUUID()
Here is a better solution.
This logic can be built without using any library, because importing a Lambda function layer can sometimes get difficult. Below you can find the link to the code, which generates the unique IDs and saves them in an SQS queue rather than a DB, which would incur costs for writing, fetching, and deleting the IDs.
There is also a CloudFormation template provided, which you can deploy in your account to set up the whole application. A detailed explanation is provided in the link.
Please refer to the link below.
https://github.com/tanishk97/UniqueIdGeneration_AWS_CFT/wiki
