I have a file structure looking somewhat like the following:
src/
--clients/
----queue_client/
------mod.rs
--data_evaluator/
----data_evaluator.rs
In data_evaluator.rs, I want to use the queue_client module, but when I write mod queue_client in data_evaluator.rs, I get the following error: file not found for module queue_client. It only finds the module if I move it into the data_evaluator folder.
My question is, how do I correctly use modules that are outside of the consumer code's directory? Apologies if there is an easy way to do this, I did try searching for quite a while and couldn't find a way.
You seem to be a bit confused.
In Rust, you build the module tree.
You use mod to register a module as a submodule of your current module.
You use use to bring a module or item into scope in your current module.
This article may clear some things up: http://www.sheshbabu.com/posts/rust-module-system/
Aside from that, to use a module that's higher in the tree than your current module, you use crate to get to the root of your module tree.
So in your case, crate::clients::queue_client.
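A minimal sketch of how that module tree might be declared, assuming a binary crate rooted at src/main.rs (the mod.rs files beyond the one you listed are assumptions):
// src/main.rs (the crate root builds the module tree)
mod clients;            // looks for src/clients/mod.rs
mod data_evaluator;     // looks for src/data_evaluator/mod.rs

// src/clients/mod.rs
pub mod queue_client;   // looks for src/clients/queue_client/mod.rs

// src/data_evaluator/mod.rs
pub mod data_evaluator; // looks for src/data_evaluator/data_evaluator.rs

// src/data_evaluator/data_evaluator.rs
use crate::clients::queue_client; // reach a sibling subtree via the crate root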
Related
I am trying to create a Python package. You can have a look at my awful attempt here.
I have a module called imguralbum.py. It lives in a directory called ImgurAlbumDownloader which I understand is the name of the package -- in terms of what you type in an import statement, e.g.
import ImgurAlbumDownloader
My module contains two classes ImgurAlbumDownloader and ImgurAlbumException. I need to be able to use both of these classes in another module (script). However, I cannot for the life of me work out what I am supposed to put in my __init__.py file to make this so. I realize that this duplicates a lot of previously answered questions, but the advice seems very conflicting.
I still have to figure out why (I have some ideas), but this is now working:
from ImgurAlbumDownloader.imguralbum import ImgurAlbumDownloader, ImgurAlbumException
The trick was adding the package name to the module name.
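For reference, putting that re-export into the package's __init__.py is what makes the short import style work (a sketch following the layout described above):
# ImgurAlbumDownloader/__init__.py
# Re-export the classes so callers can import them from the package itself.
from ImgurAlbumDownloader.imguralbum import ImgurAlbumDownloader, ImgurAlbumException

# In another script:
from ImgurAlbumDownloader import ImgurAlbumDownloader, ImgurAlbumException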
It sure sounds to me like you don't actually want a package. It's OK to just use a single module, if your code just does one main thing and all its parts are closely related. Packages are useful when you have distinct parts of your code that might not all be needed at the same time, or when you have so much code that a single module would be very large and hard to find things in.
I am building a custom binary of NodeJS from the latest code base for an embedded system. I have a couple of modules that I would like to ship as standard with the binary - or even run a custom script that is compiled into the binary and can be invoked through a command line option.
So two questions:
1) I vaguely remember that node allowed including custom modules at build time, but I went through the latest 5.9.0 configure script and I can't see anything related - or maybe I am missing it.
2) Did someone already do something similar? If yes, what were the best practices you came up with?
I am not looking for something like Electron or other binary bundlers but actually building into the node binary.
Thanks,
Andy
So I guess I figured it out much faster than I thought.
For anyone else: you can add any NPM module to it by adding the actual source files to the node.gyp configuration file.
Compile it and run the custom binary. It's all in there now.
> var cmu = require("cmu");
undefined
> cmu
{ version: [Function] }
> cmu.version()
'It worked!'
After studying this for quite a while, I have to say that flyandi's answer is not quite true. You cannot add just any NPM module by adding it to node.gyp.
You can only add pure JavaScript modules this way. Embedding a C++ module (I deliberately don't use the word "native", because that one is quite ambiguous in nodeJS terminology - just look at the sources) takes considerably more work, as described below.
To summarize this:
To embed a JS module in your custom nodejs, just add it to the library_files section of the node.gyp file. Also note that it should be placed within the lib folder, otherwise you'll have trouble requiring the module. That's because the name/path listed in node.gyp / library_files is used to encode the id of the module in the node_javascript.cc intermediate file, which is then used when searching for the built-in modules.
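For example, the relevant section of node.gyp might look like this (a sketch; lib/mymodule.js is a hypothetical file, and the existing entries are elided):
'library_files': [
  # ... the existing built-in JS files ...
  'lib/mymodule.js', # becomes the built-in module 'mymodule'
],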
To embed a native module is much more difficult. The best way I have found so far is to build the module as a static library instead of a dynamic one, which for a cmake(-js) based module you can achieve by changing the SHARED parameter to STATIC like this:
add_library(${PROJECT_NAME} STATIC ${SRC})
instead of:
add_library(${PROJECT_NAME} SHARED ${SRC})
And also changing the suffix:
set_target_properties(
  ${PROJECT_NAME}
  PROPERTIES
  PREFIX ""
  SUFFIX ".lib") # instead of .node
Then you can link it from node.gyp by adding this section:
'link_settings': {
  'libraries': [
    'path/to/my/library.lib',
    # ...add other static dependencies
  ],
},
(How to do this with a node-gyp based project should be quite easy to google.)
This allows you to build the module, but you won't be able to require it, because the require() function in node can only load built-in JS modules, external JS modules or external dynamic node modules. But now we have a built-in C++ module. Well, a lot of node's integrated modules are C++, but they always have a JS wrapper in /lib, and those wrappers use process.binding() to load the C++ module. That is, process.binding() is sort of a require() function for integrated C++ modules.
That said, we also need to call process.binding() instead of require() to load our integrated module. To be able to do that, we have to make our module "built-in" first.
We can do that by replacing
NODE_MODULE(mymodule, InitAll)
in the module definition with
NODE_BUILTIN_MODULE_CONTEXT_AWARE(mymodule, InitAll)
which will register it as internal module and from now on we can process.binding() it.
Note that NODE_BUILTIN_MODULE_CONTEXT_AWARE is not defined in node.h as NODE_MODULE but in node_internals.h so you either have to include that one, or copy the macro definition over to your cpp file (the first one is of course better because the nodejs API tends to change quite often...).
The last thing we need to do is to list our newly integrated module among the others so that node knows to initialize them (that is, includes them in the list of modules searched when process.binding() is called). In node_internals.h there is this macro:
#define NODE_BUILTIN_STANDARD_MODULES(V) \
V(async_wrap) \
V(buffer) \
V(cares_wrap) \
...
So just add your module to the list the same way as the others: V(mymodule).
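Putting the pieces together, the /lib JS wrapper for our integrated module might look like this (a sketch; the name mymodule follows the example above):
// lib/mymodule.js - built-in JS wrapper around the integrated C++ module
'use strict';
const binding = process.binding('mymodule'); // load the built-in C++ module
module.exports = binding; // re-export whatever InitAll registered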
I might have forgotten some step, so ask in the comments if you think I have missed something.
If you wonder why anyone would even want to do this... You can come up with several reasons, but here's the one most important to me: those packagers that pack your project into one executable (like pkg or nexe) work only with node-gyp based modules. If you, like me, need to use a cmake based module, the final executable won't work...
I have a puppet module that uses gini-archive. Recently I changed my module to depend on biemond-wildfly, which depends on nanliu-archive.
However, I can't install nanliu-archive, because both of these archive modules install into a directory called archive. This, I believe, violates the puppet module requirements, as they should both install into directories called <username>-archive.
However, even if I put them in different directories, I still have a problem. Both classes are called archive (actually one is a class and one is a define, but I don't think that's too important right now), so when my module says include archive, puppet isn't going to know which one I want.
Note I have a Java background, where every class is in a package hierarchy which prevents these kinds of issues, but I can't see any equivalent for puppet.
I know I could have a whole load of different module directories (/etc/puppet/modules, /etc/puppet/modules2, etc.), but puppet still seems to look through these in order, meaning it will always load the archive class from the first module directory in the list.
Is there any way of solving this or have I reached the limit of what puppet can do? I'd rather not have to fork every single module and change the class names, that seems to defeat the point of the forge.
Thanks.
The name of the directory the module is in must be archive; the username is only used for the purpose of distributing and packaging modules, and is not used by puppet while autoloading. Basically, what you are seeing is correct.
There seem to be two ways of handling this:
1. Fork one of the two archive modules and rename it so that it does not collide.
2. Fork one of the modules that depends on an archive module, and migrate it to use the same archive module as the other one. Since the two archive modules do almost the same thing, I prefer this method.
I just did this, so I'm going to expand a bit on option (1) in #ChrisPitman's answer by including more details, using a module I just forked & renamed as an example.
(Unfortunately) the simplest solution is to fork one of the modules and rename it. Below is an example using puppet/selinux and thias/selinux which have a namespace collision at selinux. The following steps were taken to re-namespace the thias/selinux module into the namespace selinux_thias:
1. Fork the module. In this example I have created USF-IMaRS/puppet-selinux from thias/puppet-selinux.
2. Install the module into modules/$NEW_NAME. Using git submodules this is: git submodule add https://github.com/USF-IMARS/puppet-selinux modules/selinux_thias
3. Rename the module class(es). Here is a commit demonstrating what this basically looks like (see the sketch after this list).
4. Modify modules that use thias/selinux to refer to the new name selinux_thias instead of selinux.
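In practice the rename in step 3 amounts to something like this (a sketch; only the class name matters, the body stays whatever thias/selinux had):
# modules/selinux_thias/manifests/init.pp - forked module, class renamed
class selinux_thias {
  # ... original resources from thias/selinux ...
}

# in a module that previously used thias/selinux:
include selinux_thias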
I have an erlang program, compiled with rebar. After the new Debian release, it won't compile anymore, complaining about this:
-import(erl_scan).
-import(erl_parse).
-import(io_lib).
saying:
bad import declaration
I don't know erlang, I am just trying to compile this thing.
Apparently something bad happened to -import recently http://erlang.org/pipermail/erlang-questions/2013-March/072932.html
Is there an easy way to fix this?
Well, -import(). works, but it does NOT do what you are expecting it to do. It does NOT "import" the module into your module, nor does it go out, find the module, get all the exported functions and allow you to use them without the module name. You use -import like this:
-import(lists, [map/2,foldl/3,foldr/3]).
Then you can call the explicitly imported functions without module name and the compiler syntactically transforms the call by adding the module name. So the compiler will transform:
map(MyFun, List) ===> lists:map(MyFun, List)
Note that this is ALL it does. There are no checks for whether the module exists or whether the function is exported; it is a pure, naive syntactic transformation. All it gives you is slightly shorter code. For this reason it is seldom used, and most people advise against using it.
Note also that the unit of code for all operations is the module, so the compiler does not do any inter-module checking or optimisation at all. Everything between modules, like checking a module's existence or which functions it exports, is done at run-time when you call a function in the other module.
No, there is no easy way to fix this. The source code has to be updated, and every reference to imported functions prefixed with the module in question. For example, every call to format should be replaced with io_lib:format, though you'd have to know which function was imported from which module.
You could start by removing the -import directives. The compilation should then fail, complaining about undefined functions. That is where you need to provide the correct module name. Look at the documentation pages for io_lib, erl_scan and erl_parse to see which functions are in which module.
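A sketch of the mechanical fix (the pairing of format with io_lib here is just an example):
%% Before: relies on the removed -import(Mod). form
-import(io_lib).
tokens_to_string(Str) ->
    {ok, Tokens, _} = erl_scan:string(Str),
    format("~p~n", [Tokens]).

%% After: drop the directive and prefix the call with its module
tokens_to_string(Str) ->
    {ok, Tokens, _} = erl_scan:string(Str),
    io_lib:format("~p~n", [Tokens]).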
Your problem is that you were using the experimental -import(Mod) directive which is part of parameterized modules. These are gone in R16B and onwards.
I often advise against using import. It hurts quick searches and unique naming of foreign calls. Get an editor which can quickly expand names.
Start by looking at what is stored in the location $ERL_LIBS; typically this points to /usr/lib/erlang/lib.
I'm working on a large Node project. Naturally, I want to break this into multiple source files. There are many modules from the standard lib that I use in a majority of my source files, and there are also quite a few of my own files that I want to use almost everywhere.
I've been making this work by including a huge require block at the beginning of each source file, but this feels awfully redundant. Is there a better way to do this? Or is this an intended consequence of Node's admirable module system?
You can use a container module to load a series of modules. For example, given the following project structure:
lib/
index.js
module1.js
module2.js
main.js
You can have index.js import the other modules in the library.
// index.js
module.exports.module1 = require('./module1');
module.exports.module2 = require('./module2');
Then main.js need only import a single module:
// main.js
var lib = require('./lib');
lib.module1.doSomething();
lib.module2.doSomethingElse();
This technique can be expanded, reducing redundant imports.
I'd say generally that a require block is better practice than using global in Node.
You need to remember that requires are cached, so when you put them in all of your code modules, you will always get the same instance, not a new one each time.
Doing it this way ensures that you get the appropriate code with the expected namespaces exactly where you want it, whereas using global will include things you don't need. Doing it the Node way with require will also tend to make your code slightly more portable.
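A quick illustration of that caching (counter.js is a hypothetical module):
// counter.js
var count = 0;
module.exports.increment = function () { return ++count; };

// a.js
var counter = require('./counter');
counter.increment(); // returns 1

// b.js
var counter = require('./counter');
counter.increment(); // returns 2 - same cached instance, state is shared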
Well, a couple of things, here.
First, if so many of your files are requiring the same libraries over and over again, you might want to step back and determine if you're really breaking your code up in the proper way. Perhaps there's a better organization where certain libraries are only needed by subsets of your source files?
Second, remember that the global object is shared between all of your required files in your Node.js app. Your "root" source file, say index.js, can do things like global.fs = require('fs'); and then it's accessible from all of your various files. This would eliminate the need to require a file full of requires. (In Node.js, you have to explicitly state that you're accessing a global variable by prepending global., unlike in the browser.)
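A minimal sketch of that trick (weigh it against the caveats that follow):
// index.js - the entry point assigns shared libraries to global
global.fs = require('fs');

// any other file in the app, with no require block:
global.fs.readFile('config.json', function (err, data) { /* ... */ });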
This can be a good idea for CRUD-type Express apps where you have lots of code for controllers that are all almost the same but have to be slightly different for each view and you just want to split them apart not for any particular organization structure, but just to make it easier to debug (error in this file, not that file). If the structure of the app is more complex than that, take the usual warnings against global variables to heart before using that trick.
You can require more than one file without absolute paths through require-file-directory:
1. It can require more than one file in a single statement.
2. It can require files with only their name.
Visit for the solution: https://www.npmjs.com/package/require-file-directory