I know that Puppet has first-class support for referencing module templates with the template function and now I find myself trying to find support for referencing module files. I was expecting that file('foomodule/barfile') or even file('foomodule/files/barfile') would correctly reference modules/foomodule/files/barfile, wherever the module may be located, but I seem to be forced into providing the fully qualified path to the file.
The reference documentation for the file function unfortunately doesn't provide any hints on how this can be done, though I'm assuming there is some elegant way to accomplish it.
Do you mean this?
file { "some_file" :
source =>"puppet:///modules/foomodule/barfile",
owner => "root",
.....
}
puppet:///modules/... is checked against the modulepath (/etc/puppet/module in my case). The barfile will then be located at
/etc/puppet/module/foomodule/files/barfile
I have a file structure looking somewhat like the following:
src/
--clients/
----queue_client/
------mod.rs
--data_evaluator/
----data_evaluator.rs
In data_evaluator, I want to use the queue_client module, but when I write mod queue_client in data_evaluator.rs, I get the following error: "File not found for module queue_client". It only finds the module if I move it into the data_evaluator folder.
My question is, how do I correctly use modules that are outside of the consumer code's directory? Apologies if there is an easy way to do this, I did try searching for quite a while and couldn't find a way.
You seem to be a bit confused.
In Rust, you build the module tree.
You use mod to register a module as a submodule of your current module.
You use use to use a module within your current module.
This article may clear some things up: http://www.sheshbabu.com/posts/rust-module-system/
Aside from that, to use a module that's higher in the tree than your current module, you use crate to get to the root of your module tree.
So in your case, crate::clients::queue_client.
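A minimal sketch of how that tree could be declared, assuming main.rs is the crate root and that the mod.rs files named below exist (file and function names here are only illustrative):

// src/main.rs (crate root): the module tree is declared here, not inferred
// from the directory layout alone
mod clients;          // loads src/clients/mod.rs (or src/clients.rs)
mod data_evaluator;   // loads src/data_evaluator/mod.rs

fn main() {
    data_evaluator::evaluate();
}

// src/clients/mod.rs
pub mod queue_client; // loads src/clients/queue_client/mod.rs

// src/clients/queue_client/mod.rs
pub fn connect() {}

// src/data_evaluator/mod.rs
use crate::clients::queue_client; // reach a sibling branch via the crate root

pub fn evaluate() {
    queue_client::connect();
}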
In the RequireJS documentation (http://requirejs.org/docs/api.html#modulename), I couldn't understand this sentence:
You can explicitly name modules yourself, but it makes the modules less portable
My questions are:
Why does explicitly naming a module make it less portable?
When is explicitly naming a module needed?
Why does explicitly naming a module make it less portable?
If you do not give the module a name explicitly, RequireJS is free to name it whichever way it wants, which gives you more freedom regarding the name you can use to refer to the module. Let's say you have a module in the file bar.js. You could give RequireJS this path:
paths: {
  "foo": "bar"
}
And you could load the module under the name "foo". If you had given a name to the module in the define call, then you'd be forced to use that name. An excellent example of this problem is jQuery. It so happens that the jQuery developers have decided (for no good reason I can discern) to hardcode the module name "jquery" in the code of jQuery. Once in a while someone comes on SO complaining that their code won't work, and their paths configuration has this:
paths: {
  jQuery: "path/to/jquery"
}
This does not work because of the hardcoded name. The paths configuration has to use the name "jquery", all lower case. (A map configuration can be used to alias "jQuery" to "jquery".)
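As a rough sketch (the path is made up), a configuration that works with the hardcoded name, plus the optional map alias, looks like this:

require.config({
  paths: {
    // the key must be "jquery" because that id is hardcoded inside jQuery
    jquery: "path/to/jquery"
  },
  map: {
    // optional: code that asks for "jQuery" receives the "jquery" module
    "*": { jQuery: "jquery" }
  }
});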
When is explicitly naming a module needed?
It is needed when there is no other way to name the module. A good example is r.js when it concatenates multiple modules together into one file. If the modules were not named during concatenation, there would be no way to refer to them. So r.js adds explicit names to all the modules it concatenates (unless you tell it not to do it or unless the module is already named).
Sometimes I use explicit naming for what I call "glue" or "utility" modules. For instance, suppose that jQuery is already loaded through a script element before RequireJS, but I also want my RequireJS modules to be able to require the module jquery to access jQuery rather than rely on the global $. That way, if I ever want to run my code in a context where there is no global jQuery, I don't have to modify those modules. I might have a main file like this:
define('jquery', function () {
  return $;
});

require.config({ ... });
The jquery module is there only to satisfy modules that need jQuery. There's nothing gained by putting it into a separate file, and to be referred to properly, it has to be named explicitly.
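A consumer of that glue module might look like this (the module body is only an illustration):

// Any module that lists "jquery" as a dependency now works whether jQuery
// came from a plain script element or from a real AMD build.
define(["jquery"], function ($) {
  return function highlight(el) {
    $(el).addClass("highlight");
  };
});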
Here's why named modules are less portable, from SitePen's "AMD, The Definitive Source":
AMD is also “anonymous”, meaning that the module does not have to hard-code any references to its own path, the module name relies solely on its file name and directory path, greatly easing any refactoring efforts.
http://www.sitepen.com/blog/2012/06/25/amd-the-definitive-source/
And from Addy Osmani's "Writing Modular JavaScript":
When working with anonymous modules, the idea of a module's identity is DRY, making it trivial to avoid duplication of filenames and code. Because the code is more portable, it can be easily moved to other locations (or around the file-system) without needing to alter the code itself or change its ID. The module_id is equivalent to folder paths in simple packages and when not used in packages. Developers can also run the same code on multiple environments just by using an AMD optimizer that works with a CommonJS environment such as r.js.
http://addyosmani.com/writing-modular-js/
Why one would need an explicitly named module, again from Addy Osmani's "Writing Modular JavaScript":
The module_id is an optional argument which is typically only required when non-AMD concatenation tools are being used (there may be some other edge cases where it's useful too).
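To make the contrast concrete, here is a sketch of the two forms (the ids and dependencies are made up):

// Anonymous module: its id comes from its path, so the file can be moved or
// remapped through the paths configuration without touching the code.
define(["some/dep"], function (dep) {
  return {};
});

// Explicitly named module: the id "app/widget" is now fixed no matter where
// the file lives. Normally only build tools such as r.js emit this form.
define("app/widget", ["some/dep"], function (dep) {
  return {};
});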
I'm just getting started with the Component package manager. I understand that I can require other local modules by adding the module to the local key in the component.json file, but what if I don't want to treat every file as a module?
In the (very minimal) documentation for Component, its developer TJ says that I can add any other relevant scripts (that live in the same directory) to the scripts array. And yet, on doing so, I'm unable to require or reference any of the peripheral scripts' methods from my main file.
The require call fails when trying to load the script, and any attempt to reference that script's methods or variables from the 'bootstrap' file is futile. My build.js shows that the script has been compiled in, but I just can't figure out the correct way to reference it from other scripts...
Help?
I just thought I'd post the answer to this question so anybody with the same problem can find it quickly/painlessly.
The answer is to reference the script with a path relative to its current directory, like so:
var script = require('./script.js');
Note the ./ at the beginning of the file name.
An easy mistake to make/rectify.
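For completeness, a minimal sketch of the setup (file names are illustrative): scripts listed in component.json get bundled into the build, and local files are required with a relative path.

// component.json
{
  "name": "my-component",
  "main": "index.js",
  "scripts": ["index.js", "script.js"]
}

// index.js (the "bootstrap" file)
var script = require('./script.js'); // relative path, not a component name
script.doSomething();                // hypothetical method exported by script.js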
I have an Erlang program, compiled with rebar. After the new Debian release it won't compile anymore, complaining about these lines:
-import(erl_scan).
-import(erl_parse).
-import(io_lib).
saying:
bad import declaration
I don't know erlang, I am just trying to compile this thing.
Apparently something bad happened to -import recently: http://erlang.org/pipermail/erlang-questions/2013-March/072932.html
Is there an easy way to fix this?
Well, -import() does work, but it does NOT do what you are expecting it to do. It does NOT "import" the module into your module, nor does it go out, find the module, get all the exported functions, and allow you to use them without the module name. You use -import like this:
-import(lists, [map/2,foldl/3,foldr/3]).
Then you can call the explicitly imported functions without the module name, and the compiler syntactically transforms the call by adding the module name. So the compiler will transform:
map(MyFun, List) ===> lists:map(MyFun, List)
Note that this is ALL it does. There are no checks for whether the module exists or whether the function is exported; it is a purely naive syntactic transformation. All it gives you is slightly shorter code. For this reason it is seldom used, and most people advise against using it.
Note also that the unit of code for all operations is the module, so the compiler does not do any inter-module checking or optimisation at all. Everything between modules, like checking a module's existence or which functions it exports, is done at run-time when you call a function in the other module.
No, there is no easy way to fix this. The source code has to be updated, and every reference to imported functions prefixed with the module in question. For example, every call to format should be replaced with io_lib:format, though you'd have to know which function was imported from which module.
You could start by removing the -import directives. The compilation should then fail, complaining about undefined functions. That is where you need to provide the correct module name. Look at the documentation pages for io_lib, erl_scan and erl_parse to see which functions are in which module.
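A minimal before/after sketch of that mechanical change (the module name and the chosen functions are just an example):

%% Before (rejected by newer compilers):
%%   -import(erl_scan).
%%   -import(erl_parse).
%%   ...
%%   {ok, Tokens, _} = string(Source),
%%   parse_exprs(Tokens).

%% After: drop the -import attributes and qualify each call with its module.
-module(no_import_example).
-export([scan_and_parse/1]).

scan_and_parse(Source) ->
    {ok, Tokens, _EndLocation} = erl_scan:string(Source),
    erl_parse:parse_exprs(Tokens).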
Your problem is that you were using the experimental -import(Mod) directive, which is part of parameterized modules. These are gone in R16B and onwards.
I often advise against using import. It hurts quick searches and unique naming of foreign calls. Get an editor which can quickly expand names.
Start by looking at what is stored in the location $ERL_LIBS; typically this points to /usr/lib/erlang/lib.
I am creating an Eclipse RCP application in which I am using SAXParser to parse an XML document. The "EventsDefinition.xsd" which I am using to validate the XML document has the following import:
<xs:import namespace="http://www.w3.org/XML/1998/namespace"
schemaLocation="xml.xsd"/>
I keep the "EventsDefinition.xsd" & "xml.xsd" in the eclipse folder of the exported rcp product.
For accessing the "EventsDefinition.xsd", I use the following code which works.
URL fileURL = new URL(Platform.getInstallLocation().getURL() + "EventsDefinition.xsd");
File eventsDefinitionFile = new File(fileURL.getPath());
parser.setProperty("http://java.sun.com/xml/jaxp/properties/schemaSource", eventsDefinitionFile);
With this, the parser is able to access "EventsDefinition.xsd" but not the "xml.xsd" referenced by it, because it tries to find xml.xsd relative to the directory from which the RCP application is executed.
Is there a similar way to tell the parser to find "xml.xsd" in the eclipse folder rather than in the present working directory?
I tried specifying schemaLocation="http://www.w3.org/2001/xml.xsd" in EventsDefinition.xsd, but it fails to read the schema. So I have to use the local copy of "xml.xsd" present at the exported product's eclipse folder.
Any suggestions will be extremely helpful.
I think that the problem is with the import declaration. First, although permitted, it is not recommended to use "namespace" as a namespace prefix. Second, the problem arises from the fact that you use "http://www.w3.org/XML/1998/namespace" as the namespace name, which is prohibited. Take a look here: http://www.w3.org/TR/REC-xml-names/#dt-prefix , precisely here:
The prefix xml is by definition bound to the namespace name http://www.w3.org/XML/1998/namespace. It MAY, but need not, be declared, and MUST NOT be bound to any other namespace name. Other prefixes MUST NOT be bound to this namespace name, and it MUST NOT be declared as the default namespace.
Try to rename the namespace name to something else (and the namespace prefix too). Hope it helps.