Configure ESLint to error when objects are defined with certain keys

I know of the no-restricted-properties option that allows setting up rules to error when accessing certain object keys (to discourage use of deprecated APIs and the like), but I cannot find a rule to disallow setting of certain keys.
Is this possible in ESLint?
To explain further, our project uses the Sequelize ORM, which uses the keyword allowNull for nullable columns, and we often copy our Sequelize model definitions directly into node-pg-migrate migration files, where the subtly different keyword notNull is used.
I always forget to change the object key in a definition from allowNull to notNull and would like a way to check this in the linter, in a directory-specific .eslintrc file.

I found that the similarly named no-restricted-syntax rule allows you to forbid pretty much anything you can match with a JavaScript AST selector. Using the very helpful AST Explorer web tool, I was able to add a .eslintrc file in the directory with our database migrations with a single rule that errors when objects have the key allowNull:
{
  "rules": {
    "no-restricted-syntax": [
      "error",
      "Identifier[name='allowNull']"
    ]
  }
}
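If you want the rule to fire only where allowNull appears as an object key (rather than on any identifier with that name), no-restricted-syntax also accepts an object form with a custom message. A sketch using that form, with the selector written in esquery's attribute syntax and a message of your choosing:

{
  "rules": {
    "no-restricted-syntax": [
      "error",
      {
        "selector": "Property[key.name='allowNull']",
        "message": "node-pg-migrate uses notNull, not Sequelize's allowNull."
      }
    ]
  }
}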

What's the strategy for implementing an attribute of type schema.TypeMap in Terraform Provider SDKv2?

Context: we're developing a TF Provider using TF Provider SDKv2.
Consider a resource that has an attribute of type schema.TypeMap that should support updates. Semantically, it represents a collection of settings for the resource.
"settings": {
Type: schema.TypeMap,
Optional: true,
ForceNew: false,
Elem: schema.TypeString,
},
resource "tv" "example" {
...
settings = {
"brightest" = "23"
"contrast" = "56"
}
}
Now let's say tv has 50 settings, each with a default value, and we don't want to make a user specify all 50; we only want a user to specify the settings they want to override (for example, the default value for contrast is 50, but the user wants 56, which is why they added "contrast" = "56" under the settings attribute).
We can see 2 possible implementation options that would support Update for the settings attribute:
Option 1: Don't use DiffSuppressFunc and store only the overridden settings in TF state. Since hardcoding default setting values in the client (the TF Provider) is undesirable, this option requires the API to expose something like overridden = true to indicate whether a value is the default or was overridden.
Option 2: Save all 50 settings in TF state and use DiffSuppressFunc to suppress the diff between the empty tv.example.settings in main.tf and the 50 settings saved in TF state. However, in this scenario it's a little unclear how Update would be implemented.
Which option is used typically?
The only example I've found is airflow_config_overrides attribute from GCP TF Provider which solves this issue in a different way.
In order to create predictable behavior when arguments to one resource use values derived from arguments to another, Terraform has some specific constraints on what providers are allowed to do with attributes during plan and apply which I'll summarize here:
If an argument is set (present and not null) in the configuration, the provider must set the value to exactly what the module author specified when generating a plan.
If an argument is known (not "known after apply") at the planning phase, then the final value during the apply phase must exactly match the value from the planning phase.
(There are more details in the internal documentation Resource Instance Change Lifecycle, but that is intended to be read by developers of SDKs rather than developers of providers using the SDK, so it uses some terminology that the SDK normally abstracts away.)
Since you are defining settings as a single argument of a map type, the rules above apply to that argument as a whole, rather than to individual elements. Therefore it is not possible to mix both author-provided keys and provider-defaulted keys in the same map. Terraform prohibits this so that a module author can write tv.example.settings somewhere else in their module and know it will evaluate to exactly the value they wrote.
However, if you declare an argument that cannot be set in the configuration -- Computed: true without Optional: true -- then your provider can freely set that argument to whatever it wants as long as it still upholds the second rule above, of consistency between plan and apply.
Therefore the compromise I would suggest here is to declare a pair of attributes where one is author-settable and the other one is not:
"settings": {
Type: schema.TypeMap,
Optional: true,
Elem: schema.TypeString,
},
"all_settings": {
Type: schema.TypeMap,
Computed: true,
Elem: schema.TypeString,
},
(all_settings might not be the best name. Other names I considered were final_settings, combined_settings, etc.)
With this approach, you can set a value for all_settings either during plan (CustomizeDiff) or apply (Create and Update), depending on whether you have all of the information you need during planning or not.
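For example, assuming the remote API returns the full merged settings map after a write (the Client type and its methods here are hypothetical, not part of the SDK), an Update along these lines would uphold both rules:

func resourceTvUpdate(d *schema.ResourceData, meta interface{}) error {
    client := meta.(*Client) // hypothetical API client

    // Send only the author-specified overrides to the API.
    overrides := d.Get("settings").(map[string]interface{})

    // Hypothetical call that applies the overrides and returns the
    // full merged map of all 50 settings.
    allSettings, err := client.UpdateTvSettings(d.Id(), overrides)
    if err != nil {
        return err
    }

    // Record the merged result in the Computed attribute only; the
    // author-settable "settings" stays exactly as configured.
    return d.Set("all_settings", allSettings)
}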
The one operation that will require some extra special care with this design is the one the SDK calls Read, because in that case you'll need to decide whether and how to incorporate potential changes made outside of Terraform into the settings value. If you don't update it at all, then Terraform will not detect and "repair" changes made outside of Terraform, because it will not see the settings as having changed.
Given the intended meaning you described in your question, I can imagine two possible implementations depending on what the provider has access to:
If the provider itself knows what all of the default values are, Read could first update all_settings and then copy from there to settings the subset of values that aren't the defaults.
This is the most robust solution because it allows Terraform to recognize when a change outside of Terraform has changed one of the settings from its default to a non-default value, and so to propose to reconcile it by "removing" that setting. (That is, it will show in Terraform's UI as a removal, but your implementation will internally interpret that as "set it back to the default".)
If only the remote system knows the defaults and the provider cannot know them then I would probably compromise by first updating all_settings and then copying into settings only the subset of keys that match what was previously set in settings.
This will allow Terraform to detect when a setting the author configured was changed outside of Terraform, but it won't detect situations where a setting that wasn't managed in Terraform (i.e. originally set to the default) changes to a different value.
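A sketch of that second approach (again with a hypothetical API client):

func resourceTvRead(d *schema.ResourceData, meta interface{}) error {
    client := meta.(*Client) // hypothetical API client

    // Hypothetical call returning all settings as they exist remotely.
    allSettings, err := client.GetTvSettings(d.Id())
    if err != nil {
        return err
    }
    if err := d.Set("all_settings", allSettings); err != nil {
        return err
    }

    // Keep under "settings" only the keys the author was already
    // managing, so drift in author-managed settings is detected.
    previous := d.Get("settings").(map[string]interface{})
    managed := make(map[string]interface{}, len(previous))
    for key := range previous {
        if value, ok := allSettings[key]; ok {
            managed[key] = value
        }
    }
    return d.Set("settings", managed)
}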
If you don't foresee a need for module authors to refer elsewhere in the module to the settings they didn't explicitly set (that is, if they are unlikely to want to write tv.example.all_settings somewhere), then you could potentially ignore this part entirely, in which case this reduces to what you described as option 1, except that the question is no longer whether the author set a value to the default, but whether the author set the value at all. In that case, you're essentially saying that setting a particular key in the map represents an intent for Terraform to manage that setting, while leaving it unset represents an intent to let the server choose it and to let it drift arbitrarily in future.
Any other option you can think of which conforms to the requirements I summarized at the start of this answer would be fine too. However, I don't think that your option 2 meets those requirements because it suggests that settings would have a different value after apply than was set in the configuration, which would violate at least one of the requirements depending on whether you deal with it during the planning phase or the apply phase.

Shall I set an empty string computed string attribute for Terraform resource?

context: I'm adding a new resource to TF Provider.
I've got an API that optionally returns a string attribute, so I represent it as:
"foo": {
Type: schema.TypeString,
Computed: true,
Optional: true,
},
Question: if the API returns no value / an empty string for response.foo, shall I still set an empty string for the foo attribute in my resource schema, or should I not set any value at all (e.g., null)?
(Hello! I'm the same person who wrote the answer you included in your screenshot.)
If both approaches -- returning null or returning an empty string -- were equally viable from a technical standpoint then I would typically prefer to use null to represent the absence of a value, since that is clearly distinct from an empty string which for some situations would otherwise be a valid present value for the attribute.
However, since it seems like you are using the old SDK ("SDKv2") here, you will probably be constrained from a technical standpoint: SDKv2 was designed for Terraform v0.11 and earlier, so it predates the idea of attributes being null, and there is no way in its API to specify that. You may be able to "trick" the SDK into effectively returning null by not calling d.Set("foo", ...) at all in your Create function, but there is no API provided to unset an attribute, so once you've set it to something non-null there would typically be no way to get it to go back to being null again.
Given that, I'd suggest it's better to be consistent and always use "" when using the old SDK, because that way users of the provider won't have to deal with the value sometimes being null and sometimes being "" in this case.
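Concretely, that just means always writing the field, for example in Read (response here is a hypothetical API response struct whose Foo field is "" whenever the API omits the value):

// Always set foo, even when the API omitted it; response.Foo is ""
// in that case, so the attribute is consistently an empty string.
if err := d.Set("foo", response.Foo); err != nil {
    return err
}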
When using the modern Terraform Plugin Framework this limitation doesn't apply, because that framework was designed with the modern Terraform language in mind. You aren't using that framework and so this part of the answer probably won't help you right now, but I'm mentioning it just in case someone else finds this answer in future who might already be using or be considering use of the new framework.

No code suggestions for globally defined variables in VSCode in a node.js server project

I have to deal with a node.js server project that uses global variables for common APIs. For instance in the entry point server.js there is a Firebase variable for the real-time database that is stored like this:
fireDB = admin.database();
I wasn't aware that this was possible, and I would consider it a bad approach, but now I have to deal with it.
I'm not really interested in rewriting the many calls to this variable across all those files; rather, I'd like a way to make fireDB show suggestions, whether by changing this variable or by installing an extension.
I tried to define it at the top of the file as var fireDB, but then suggestions only work in the same file, not in others.
When I type a dot after admin.database(), suggestions work; when I write fireDB. I get no suggestions, yet the call seems to be possible. Suggestions need to work in other files, too. How can I get this to work?
WARNING: MAKE SURE YOU UNDERSTAND THE PROBLEMS WITH GLOBALS BEFORE USING THEM IN A PROJECT
The above warning/disclaimer is mostly for anyone starting a new project that might happen across this answer.
With that out of the way, create a new .d.ts file and put it somewhere with a descriptive name. For example, globals.d.ts at the top level of the directory. Then just populate it with the following (I don't have any experience with firebase, so I had to make some assumptions about which module you're using, etc.):
globals.d.ts
import { database } from "firebase-admin";

declare global {
    var fireDB: database.Database;
}
IntelliSense should then recognize fireDB as a global of the appropriate type in the rest of your JavaScript project.
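For example, in any other .js file of the project you should now get completions on fireDB (the Realtime Database calls shown are the standard firebase-admin ones):

// some-other-file.js
fireDB.ref("users/123").once("value").then((snapshot) => {
    console.log(snapshot.val());
});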
Why does this work? IntelliSense uses TypeScript even if you're working with a JS project. Many popular JS packages include a .d.ts file where typings are declared, which allows IntelliSense to suggest something useful when you type require('firebase-admin').database(), for example.
IntelliSense will also automatically create typings internally when you do something "obvious", e.g. with literals:
const MY_OBJ = { a: 1, b: "hello"};
MY_OBJ. // IntelliSense can already autocomplete properties "a" and "b" here
Global autocompletion isn't one of those "obvious" things, however, probably because of all the problems with global variables. I'd also guess it'd be difficult to efficiently know what order your files will run in (and hence when a global might be declared). Thus, you need to explicitly declare your global typings.
If you're interested in further augmenting the capabilities of IntelliSense within your JS project, you can also use comments to explicitly create typings:
/**
 * @param {String[]} arrayOfStrings
 */
function asAnExample(arrayOfStrings) {
    arrayOfStrings. // IntelliSense recognizes this as an array and will provide suggestions for it
}
See this TypeScript JSDoc reference for more on that.

SCIM PATCH library

I am implementing SCIM provisioning for my current project, and I am trying to implement the PATCH method, which seems not that easy.
What I read in the RFC is that SCIM PATCH is almost like JSON PATCH, but when I look deeper it seems a bit different in how the path is described, which doesn't allow me to use json-patch libraries.
example:
"path":"addresses[type eq \"work\"]"
"path":"members[value eq
\"2819c223-7f76-453a-919d-413861904646\"]"
Do you know any library that is doing SCIM PATCH out of the box?
My project is currently a Node project, but I don't care about the language; I can rewrite it in JavaScript if needed.
Edit
I have finally created my own library for this; it is called scim-patch and it is available on npm: https://www.npmjs.com/package/scim-patch
I implemented the SCIM PATCH operation in my own library. Please take a look here and here. It is currently a work in progress for v2, but the CRUD capability required by patch operations has matured.
First of all, you need a way to parse the SCIM path, which can optionally include a filter. I implemented a finite state machine to parse the path and filter. A scanner goes through each byte of the text and points out interesting events, and a parser uses the scanner to break the text into meaningful tokens. For instance, emails[value eq "foo@bar.com"].type can be broken down into emails, [, value, eq, "foo@bar.com", ] and type. Finally, a compiler takes these tokens and assembles them into an abstract syntax tree. On paper, it looks something like the following:
emails -> eq -> type
         /  \
     value  "foo@bar.com"
Next, you need a way to traverse the resource data structure according to the abstract syntax tree. I designed my property model to carry a reference to the SCIM attribute. Consider the following resource:
{
  "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
  "userName": "imulab",
  "emails": [
    {
      "value": "foo@bar.com",
      "type": "work"
    },
    {
      "value": "bar@foo.com",
      "type": "home"
    }
  ]
}

I start traversing from the root of the resource and find the child called emails, which will return a multiValued property of complex type. I see my next token (eq) is the root of a filter, so I perform the filter operations on the two elements of emails. For each element, I go down the value child and evaluate its value. Since only the first element matches the filter, I finally go down the type child of that complex property and arrive at the target property. From there, you are free to perform Add, Replace and Remove operations.
There are two things I recommend watching out for.
One is that your traversal path will split when you hit a multiValued property. In the above example, only one element matched the filter. In reality, there may be many matches, or there could be no filter at all, forcing you to traverse all elements.
The other is the syntax of the SCIM path. The specification mandates that it is possible to prefix the schema URN in front of the actual path and delimit them with a :. In that representation, emails.type and urn:ietf:params:scim:schemas:core:2.0:User:emails.type are equivalent. Note that the schema URN contains a dot (.) in the 2.0 part. This creates a further complication: you cannot simply split the text on . and hope to get the correct tokens. I used a Trie data structure to record all schema URNs as reserved words. Whenever I start a new segment of the path, I try to match it in the Trie and do not rely solely on . to terminate the segment.
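If you only need to support a fixed set of schemas, a simpler alternative to the Trie is to check known URN prefixes before splitting on dots. A sketch (the KNOWN_URNS list is an assumption; populate it with whatever schemas your server supports):

// Strip a known schema URN prefix before splitting the rest on dots.
const KNOWN_URNS = [
    "urn:ietf:params:scim:schemas:core:2.0:User",
    "urn:ietf:params:scim:schemas:core:2.0:Group",
];

function splitPath(path: string): string[] {
    for (const urn of KNOWN_URNS) {
        if (path.startsWith(urn + ":")) {
            return [urn, ...path.slice(urn.length + 1).split(".")];
        }
    }
    return path.split(".");
}

splitPath("urn:ietf:params:scim:schemas:core:2.0:User:emails.type");
// => ["urn:ietf:params:scim:schemas:core:2.0:User", "emails", "type"]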
Hope it will help your work.
Have a look at scim2-filter-parser: https://github.com/15five/scim2-filter-parser
It is a library mainly used by the authors' django-scim2 library: https://github.com/15five/django-scim2
It relies on Python AST objects, but I think you should get some takeaways from there.
Since I did not find any TypeScript library implementing SCIM PATCH operations, I implemented my own library.
You can find it here: https://www.npmjs.com/package/scim-patch

Use a puppet module multiple times

I'm using a puppet module from Puppet Forge - https://forge.puppet.com/creativeview/mssql_system_dsn
The documentation indicates to use it like this:
class { 'mssql_system_dsn':
  dsn_name     => 'vcenter',
  db_name      => 'vcdb',
  db_server_ip => '192.168.35.20',
  sql_version  => '2012',
  dsn_64bit    => true,
}
I need to create multiple odbc data sources.
However, if I simply duplicate this snippet twice and change the parameters I get a multiple declaration error.
How can I declare this module multiple times?
You cannot do so without modifying the module. Although it is possible to declare the same class multiple times if you use include-like syntax, that does not afford a means to use different parameters with different declarations. This is all connected to the fact that Puppet classes are singletons. I can confirm based on a quick review of the module's code that its design does not support defining multiple data sources.
I'd encourage you to file an enhancement request with the module author. If that does not quickly bear fruit, then you have the option of modifying the module yourself. It looks like that would be feasible, but not as simple as just changing a class keyword to define.
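For reference, the difference matters because a defined type, unlike a class, can be declared any number of times with distinct titles. A conversion would need to look roughly like this sketch (the type name and parameter handling are hypothetical, not the module's actual code):

# Hypothetical defined type; each declaration's $title is the DSN name.
define odbc_dsn (
  String  $db_name,
  String  $db_server_ip,
  String  $sql_version,
  Boolean $dsn_64bit = true,
) {
  # ... the resource declarations from the module body would go here,
  # keyed off $title so each instance is unique
}

odbc_dsn { 'vcenter':
  db_name      => 'vcdb',
  db_server_ip => '192.168.35.20',
  sql_version  => '2012',
}

odbc_dsn { 'another_dsn':
  db_name      => 'otherdb',
  db_server_ip => '192.168.35.21',
  sql_version  => '2012',
}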
As the author didn't respond to my request and hadn't merged a pull request from another contributor, I created my own module:
https://forge.puppet.com/garfieldmoore/odbc_data_source
If anyone is interested enough to review my module's code and offer improvements, or to let me know where I have not followed best practices, I would appreciate it.
