I have some files, like,
test/test1,
test/test2,
test/test3
and I want to rename their paths to
newtest/test1,
newtest/test2,
newtest/test3
so that if we try to require any of the above files, it will point to the new path.
In require, a one-to-one mapping is available, but I'm not sure whether something like this is achievable:
require.map = {
"test/*": "newtest/*",
}
Any help :)
The map configuration option supports mapping module prefixes. Here's an example:
require.config({
map: {
"*": {
"foo": "bar"
}
}
});
define("bar/x", function () {
return "bar/x";
})
require(["foo/x"], console.log);
The last require call that tries to load foo/x will load bar/x.
This is as good as it gets for pattern matching with map. RequireJS does not support putting arbitrary patterns in there. Using "test/*": "newtest/*" would not work. The "*" I used in map is not a pattern. It is a hardcoded value that means "in all modules", and happens to look like a glob pattern. So the map above means "in all modules, when foo is required, load bar instead".
I wonder if you really need map. You can probably just use paths. Quoting from the map documentation:
This sort of capability is really important for larger projects which
may have two sets of modules that need to use two different versions
of 'foo', but they still need to cooperate with each other.
Also see the paths documentation:
Using the above sample config, "some/module"'s script tag will be
src="/another/path/some/v1.0/module.js".
Here is how paths can be used to map the directory from test to newtest for your example:
paths: {
"test": "newtest"
}
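As a minimal sketch of what that looks like in a full config (the baseUrl value is illustrative):
require.config({
    baseUrl: "js",
    paths: {
        "test": "newtest"
    }
});
// "test/test1" now resolves to js/newtest/test1.js
require(["test/test1"], function (test1) { /* ... */ });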
Let's say I have my NixOS configuration.nix set up as follows:
{config, pkgs, ...}:
{
services.openssh.enable = true;
}
I now want to have a second file called networking.nix which sets my hostname based on an argument.
{config, pkgs, hostname, ...}:
{
networking.hostName = hostname;
}
Is this possible? How can I include the file? I already tried doing it by using imports = [ ./networking.nix { hostname = "helloworld"; } ]; but that didn't work.
Thanks.
A 'NixOS configuration file' is simply a module that doesn't define options, so there is really no distinction: configuration.nix is just a module, and since it typically does not define any options it can be written in the abbreviated form.
Defining options is the normal way for NixOS modules to pass information around, so that's the most idiomatic way to go about it.
However, if you really must, for some very special reason, because you're doing very unusual things with NixOS, you can put arbitrary functions in imports. But you shouldn't, because it doesn't work well with the module system's custom error messages and potentially other aspects that rely on knowing where a module is defined. If you do so, do make sure it is an actual function. In your case, that would imply modifying the first line of networking.nix to make it a curried function:
hostname: {config, pkgs, ...}:
Not very pretty in my opinion. Although it is very explicit about what is going on, it deviates from what is to be expected of a NixOS module.
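If you did go that route anyway, here is a rough, untested sketch of how the curried module could be wired up from configuration.nix, using the question's hostname value:
{ config, pkgs, ... }:
{
  imports = [
    # networking.nix now starts with: hostname: {config, pkgs, ...}:
    (import ./networking.nix "helloworld")
  ];
  services.openssh.enable = true;
}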
I encountered this problem today and came up with a fairly simple solution recommended in the manual.
foobar.nix
{ lib, withFoo ? "bar", ... }:
# simple error checking to ensure garbage isn't passed in
assert lib.asserts.assertOneOf "withFoo" withFoo [
"bar"
"baz"
# other valid choices ...
];
{
# ...
}
configuration.nix
args@{ ... }:
{
imports = [
# ...
(
import ./foobar.nix (
args
// { withFoo = "baz"; }
)
)
# ...
];
}
This is ideal for 'one off' options in your configurations.
You should be able to use the _module.args option [1] to do that. So your configuration.nix would be like:
{config, pkgs, ...}:
{
_module.args.hostname = "ahostname";
services.openssh.enable = true;
}
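Note that networking.nix can keep the signature from the question; it just needs to be listed in imports. A sketch of the resulting configuration.nix (untested):
{ config, pkgs, ... }:
{
  imports = [ ./networking.nix ];
  _module.args.hostname = "ahostname";
  services.openssh.enable = true;
}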
However, where the values are very simple it will probably be much easier to just set them directly, e.g. just define networking.hostName in configuration.nix. This section of the manual regarding merging and priorities may also be helpful [2].
Further discussion:
The value of _module.args is indeed applied to all imported configurations (though the value will only be used in modules that directly refer to it, such as the pkgs value; the ... represents all the values that aren't referenced).
For passing arguments to modules it seems a good approach to me, but from your comments perhaps a different approach might be more suitable.
Another solution could be to flip the relationship in the imports: rather than a single common config that passes multiple different arguments, multiple different configs import the common configuration. E.g.
$ cat ./common.nix
{ services.openssh.enable = true; }
$ cat ./ahostname.nix
{ imports = [ ./common.nix ]; networking.hostName = "ahostname"; }
The NixOS config in this Reddit comment looks like it uses this approach. There are quite a few other NixOS configurations that people have shared publicly online so you might find some useful ideas there. The points in the answer from Robert Hensing are very useful to bear in mind as well.
However it's hard to say what might be a better solution in your case without knowing a bit more about the context in which you want to use it. You could create a new SO question with some more information on that which might make it easier to see a more appropriate solution.
I need a specific instance of Map, generated at compile time, to be made available to the code that I'm bundling with webpack. My use case is for an instance of Map, but the question holds for any variable, really.
webpack.DefinePlugin won't work because my instance of Map doesn't survive stringification. I could deserialize/serialize, but that seems inefficient.
options.externals won't work because I'm not requiring an external module to be ignored by the bundler.
I've been able to get this working by setting a value on global, but I'm not sure this is the best way to go about this.
Here's a pathetically-contrived example:
webpack.config.js:
const myVar = new Map([
['a test value', () => {}]
]);
// is there a better way?
global.myVar = myVar;
module.exports = {
entry: './entry.js',
output: {
filename: './output.js'
},
target: 'node',
}
entry.js:
console.log(global.myVar) //=> Map { 'a test value' => [Function] }
In my real-world app, my Map value crawls the file system and builds a mapping of regexes and handlers. This should happen at compile time and is a constant mapping, so I want to consume it in my code-to-be-bundled as a constant.
I tried building the Map inside entry.js, but got all sorts of weird "Cannot find module" errors, I think mostly because the function I'm using to build the Map in the first place heavily abuses dynamic requires in a directory tree outside webpack's purview.
Would appreciate any insight.
I have a requirejs module which is used as a wrapper to an API that comes from a different JS file:
apiWrapper.js
define([], function () {
return {
funcA: apiFuncA,
funcB: apiFuncB
};
});
It works fine but now I have some new use cases where I need to replace the implementation, e.g. instead of apiFuncA invoke my own function. But I don't want to touch other places in my code, where I call the functions, like apiWrapper.funcA(param).
I can do something like the following:
define([], function () {
return {
funcA: function(){
if(regularUseCase){
return apiFuncA(arguments);
} else {
return (function myFuncAImplementation(params){
//my code, instead of the external API
})(arguments);
}
},
funcB: apiFuncB
};
});
But I feel like it doesn't look nice. What's a more elegant alternative? Is there a way to replace the module (apiWrapper) dynamically? Currently it's defined in my require.config paths definition. Can this path definition be changed at runtime so that I'll use a different file as a wrapper?
Well, first of all, if you use Require.js, you probably want to build it before production. As such, it is important that you don't update paths dynamically at runtime or depend on runtime variables to define paths, as this will prevent you from running r.js successfully.
There are a lot of tools (RequireJS plugins) out there that can help you dynamically change the path to a module or conditionally load a dependency.
First, you could use require.replace, which allows you to change parts (or all) of a module URL depending on a check you make, without breaking the build.
If you're looking for polyfilling, there's the requirejs feature plugin.
And there's a lot more listed here: https://github.com/jrburke/requirejs/wiki/Plugins
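As another option, grounded in the map example from earlier in this thread: if the alternative implementation lives in its own module (the name apiWrapperCustom below is illustrative), the map config can point apiWrapper at it at startup, without touching any call sites:
require.config({
    map: {
        "*": {
            // every module that asks for "apiWrapper" gets "apiWrapperCustom"
            "apiWrapper": "apiWrapperCustom"
        }
    }
});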
I have a bunch of unit tests in this folder: src/app/tests/. Do I have to list them individually in intern.js or is there a way to use a wildcard? I've tried
suites: [ 'src/app/tests/*' ]
but that just causes the test runner to try to load src/app/tests/*.js. Do I really have to list each test suite individually?
The common convention is to have an all module which collects your test modules, e.g.:
define([
'./module1',
'./module2',
// ...
], function(){});
Then you simply list the all module in the suites array, like this:
suites: [ 'src/app/tests/all' ],
Generally this is no different from the standard practice with DOH in Dojo 1.x either, other than being under a different module name. AMD loaders do not support globbing in module IDs, so this isn't really a direct limitation of Intern.
It may seem onerous, but ordinarily you would add each module to all.js as you create it, so it's not really that much additional work.
I agree that the verbosity and inflexibility of this configuration is annoying and hard to scale.
While it's not the same as a wildcard, here is how I solve that problem.
Modified intern.js config file:
define(
    [ // dependencies...
        'test/all'
    ],
    function (testSuites) {
        // return the Intern config object, pulling suite lists from test/all
        return {
            suites: testSuites.unit,
            functionalSuites: testSuites.functional
        };
    }
);
The power in this comes from the fact that the test/all module can return whatever it wants to. Simply give it some nicely named properties which are arrays of module ID strings and you are ready to rock.
Specifying test modules in the define() dependency array of a module given to suites or functionalSuites does work. But that is not very flexible. It still requires you to cherry-pick test suites and be careful about commas and which ones are commented out, etc. What you really want are named collections that can be exported. I do that like so...
test/all:
define(
    [ // dependencies...
        './unitsuitelist', // arrays of paths, generated by hand or Grunt, etc.
        './funcsuitelist'
    ],
    function (unitSuites, funcSuites) {
        // placeholders for whatever collections you want to build
        var experiments,
            funTests,
            usefulTests,
            oldTests;
        // any logic you want to construct great collections of test suites...
        var myFavoriteUnitSuites = funTests.concat(experiments);
        var myFavoriteFunctionalSuites = usefulTests.concat(oldTests);
        return {
            unit: myFavoriteUnitSuites,
            functional: myFavoriteFunctionalSuites
        };
    }
);
Just make the necessary logic one time with a few reasonable collections. Then swap them out in the returned object during development. And if you prefer to change lists of module IDs instead of code, this pattern can still help you. It's easy to auto-generate a list of all test suite file locations within their directories using bash, Grunt, or other tools. This can be automatically fed into the intern.js configuration file with a similar pattern to the one above. Just remove the logic and it can effectively be a wildcard. If each category of test suite (unit and functional) lives in its own directory, it is very easy to generate path lists of all files contained within them.
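For instance, a small Node script could regenerate such a list before each test run. This is a hypothetical helper, not part of Intern, and the file names and paths below are chosen just for this example:
// generate-suites.js -- writes an AMD module that returns all unit suite IDs
var fs = require('fs');
var ids = fs.readdirSync('src/app/tests')
    .filter(function (f) { return /\.js$/.test(f); })
    .map(function (f) { return "'src/app/tests/" + f.replace(/\.js$/, '') + "'"; });
fs.writeFileSync(
    'src/app/tests/unitsuitelist.js',
    'define([], function () { return [' + ids.join(', ') + ']; });'
);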
I would like to iterate over an array that is stored as a Facter fact, and for each element of the array create a new system user and a directory, and finally make API calls to AWS.
Example of the fact: my_env => [shared1,shared2,shared3]
How can I iterate over an array in Puppet?
This might work, depending on what you are doing
# Assuming fact my_env => [ shared1, shared2, shared3 ]
define my_resource {
file { "/var/tmp/$name":
ensure => directory,
mode => '0600',
}
user { $name:
ensure => present,
}
}
my_resource { $my_env: }
It will work if your requirements are simple, if not, Puppet makes this very hard to do. The Puppet developers have irrational prejudices against iteration based on a misunderstanding about how declarative languages work.
If this kind of resource doesn't work for you, perhaps you could give a better idea of which resource properties you are trying to set from your array?
EDIT:
With Puppet 4, this lamentable flaw was finally fixed. The current state of affairs is documented here. As the documentation says, you'll find examples of the above solution in a lot of old code.
As of puppet 3.2 this is possible using the "future" parser like so:
$my_env = [ 'shared1', 'shared2', 'shared3', ]
each($my_env) |$value| {
  file { "/var/tmp/$value":
    ensure => directory,
    mode   => '0600',
  }
  user { $value:
    ensure => present,
  }
}
See also: http://docs.puppetlabs.com/puppet/3/reference/lang_experimental_3_2.html#background-the-puppet-future-parser
Puppet 3.7, released earlier this month, has the new DSL; one of its features is iteration. Check the following URL: https://docs.puppetlabs.com/puppet/latest/reference/experiments_lambdas.html#enabling-lambdas-and-iteration
These new features can be enabled with either of the following:
Setting parser = future in your puppet.conf file
or adding the command-line switch --parser=future
Hope that helps.
As of latest Puppet (6.4.2), and since Puppet 4, iteration over arrays is supported in a few ways:
$my_arr = ['foo', 'bar', 'baz']
Each function:
$my_arr.each |$v| {
notice($v)
}
Each function alternative syntax:
each($my_arr) |$v| {
notice($v)
}
To get the index:
Pass a second argument to each:
$my_arr.each |$i, $v| {
notice("Index: $i, value: $v")
}
Comparison with Ruby:
Note that this grammar is inspired by Ruby but slightly different, and it's useful to show the two side by side to avoid confusion. Ruby would allow:
my_arr.each do |v|
notice(v)
end
Or:
my_arr.each { |v|
notice(v)
}
Other iteration functions:
Note that Puppet provides a number of other iteration functions (a short example follows this list):
each - Repeats a block of code a number of times, using a collection of values to provide different parameters each time.
slice - Repeats a block of code a number of times, using groups of values from a collection as parameters.
filter - Uses a block of code to transform a data structure by removing non-matching elements.
map - Uses a block of code to transform every value in a data structure.
reduce - Uses a block of code to create a new value, or data structure, by combining values from a provided data structure.
with - Evaluates a block of code once, isolating it in its own local scope. It doesn’t iterate, but has a family resemblance to the iteration functions.
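For example, map and filter in use (values are illustrative; upcase is built into Puppet 6, and comes from stdlib in older versions):
$my_arr = ['foo', 'bar', 'baz']
$upper  = $my_arr.map |$v| { upcase($v) }     # ['FOO', 'BAR', 'BAZ']
$only_b = $my_arr.filter |$v| { $v =~ /^b/ }  # ['bar', 'baz']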
Puppet 3 and earlier:
If you have inherited old code still using Puppet 3, the accepted answer is still correct:
define my_type {
notice($name)
}
my_type { $my_arr: }
Note however that this is usually considered bad style in modern Puppet.
itsbruce's answer is probably the best for now, but there is an iteration proposal going through Puppet Labs' armatures process for possible implementation in the future.
There is a create_resources() function in Puppet that will be very helpful when iterating over a list of items.
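A hedged sketch of how create_resources can combine with a hash built from the fact (the resource parameters here are illustrative):
# Hypothetical: build a hash keyed by the values in $my_env, then
# declare one user resource per entry in a single call.
$users = {
  'shared1' => { 'ensure' => 'present' },
  'shared2' => { 'ensure' => 'present' },
  'shared3' => { 'ensure' => 'present' },
}
create_resources('user', $users)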