I am new to the Linux kernel. I have read a little about EXPORT_SYMBOL, but I am still confused. I know it is used to export a variable or function defined in one module so that another module can use it. Does that mean that by using it, we do not need to include a header file declaring that variable or function? Or are both needed? If both are needed, why do we need EXPORT_SYMBOL? Thanks.
Header files are for the compiler. EXPORT_SYMBOL is for the module loader. This allows for proper separation of module code from kernel code.
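Both are needed, but they operate at different stages. A minimal sketch with hypothetical file and symbol names: the header gives the compiler a declaration when it builds the second module, while EXPORT_SYMBOL places the symbol in the kernel's symbol table so the module loader can resolve the reference at insmod time.

/* mymod_a.c -- defines and exports the symbol */
#include <linux/module.h>

int shared_counter;
EXPORT_SYMBOL(shared_counter);  /* for the module loader */
MODULE_LICENSE("GPL");

/* mymod.h -- declaration, for the compiler */
extern int shared_counter;

/* mymod_b.c -- uses the symbol */
#include <linux/module.h>
#include "mymod.h"

static int __init b_init(void)
{
    shared_counter++;  /* resolved at load time via the export */
    return 0;
}
module_init(b_init);
MODULE_LICENSE("GPL");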
In the RequireJS documentation (http://requirejs.org/docs/api.html#modulename), I couldn't understand this sentence:
You can explicitly name modules yourself, but it makes the modules less portable
My questions are:
Why does explicitly naming a module make it less portable?
When is explicitly naming a module needed?
Why does explicitly naming a module make it less portable?
If you do not give the module a name explicitly, RequireJS is free to name it whichever way it wants, which gives you more freedom regarding the name you can use to refer to the module. Let's say you have a module in the file bar.js. You could give RequireJS this path:
paths: {
    "foo": "bar"
}
And you could load the module under the name "foo". If you had given a name to the module in the define call, then you'd be forced to use that name. An excellent example of this problem is jQuery. It so happens that the jQuery developers have decided (for no good reason I can discern) to hardcode the module name "jquery" in the code of jQuery. Once in a while someone comes on SO complaining that their code won't work, and their paths configuration has this:
paths: {
    jQuery: "path/to/jquery"
}
This does not work because of the hardcoded name. The paths configuration has to use the name "jquery", all lower case. (A map configuration can be used to map "jquery" to "jQuery".)
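For example, a configuration along these lines (a sketch; the path is hypothetical) keeps the required "jquery" name while letting the rest of the code ask for "jQuery":

require.config({
    paths: {
        jquery: "path/to/jquery"
    },
    map: {
        "*": {
            jQuery: "jquery"  // requests for "jQuery" resolve to "jquery"
        }
    }
});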
When is explicitly naming a module needed?
It is needed when there is no other way to name the module. A good example is r.js when it concatenates multiple modules together into one file. If the modules were not named during concatenation, there would be no way to refer to them. So r.js adds explicit names to all the modules it concatenates (unless you tell it not to do it or unless the module is already named).
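The concatenated output then looks roughly like this (the module names here are hypothetical; r.js derives them from file paths):

define('utils/ajax', ['jquery'], function ($) { /* ... */ });
define('main', ['utils/ajax'], function (ajax) { /* ... */ });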
Sometimes I use explicit naming for what I call "glue" or "utility" modules. For instance, suppose jQuery is already loaded through a script element before RequireJS, but I also want my RequireJS modules to be able to require the module jquery to access jQuery rather than rely on the global $. Then, if I ever want to run my code in a context where there is no global jQuery, I don't have to modify it. I might have a main file like this:
define('jquery', function () {
    return $;
});

require.config({ ... });
The jquery module is there only to satisfy modules that need jQuery. There's nothing gained by putting it into a separate file, and to be referred to properly, it has to be named explicitly.
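Any module that depends on jquery then works unchanged, whether jQuery came from the script element (through the glue module above) or from a real AMD file. A hypothetical consumer:

define(['jquery'], function ($) {
    return {
        hide: function (selector) {
            $(selector).hide();
        }
    };
});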
Here's why named modules are less portable, from SitePen's "AMD, The Definitive Source":
AMD is also “anonymous”, meaning that the module does not have to hard-code any references to its own path, the module name relies solely on its file name and directory path, greatly easing any refactoring efforts.
http://www.sitepen.com/blog/2012/06/25/amd-the-definitive-source/
And from Addy Osmani's "Writing Modular JavaScript":
When working with anonymous modules, the idea of a module's identity is DRY, making it trivial to avoid duplication of filenames and code. Because the code is more portable, it can be easily moved to other locations (or around the file-system) without needing to alter the code itself or change its ID. The module_id is equivalent to folder paths in simple packages and when not used in packages. Developers can also run the same code on multiple environments just by using an AMD optimizer that works with a CommonJS environment such as r.js.
http://addyosmani.com/writing-modular-js/
And here is why one would need an explicitly named module, again from Addy Osmani's "Writing Modular JavaScript":
The module_id is an optional argument which is typically only required when non-AMD concatenation tools are being used (there may be some other edge cases where it's useful too).
I'm trying to export the per-cpu symbol "x86_cpu_to_logical_apicid" from the kernel so that my kernel module can access it. In "arch/x86/kernel/apic/x2apic_cluster.c", I did
//static DEFINE_PER_CPU(u32, x86_cpu_to_logical_apicid);
DEFINE_PER_CPU(u32, x86_cpu_to_logical_apicid);   // I removed static
EXPORT_PER_CPU_SYMBOL(x86_cpu_to_logical_apicid); // I added this
And after I recompile the kernel, /proc/kallsyms shows
0000000000011fc0 V x86_cpu_to_logical_apicid
0000000000012288 V x86_cpu_to_node_map
ffffffff8187df50 r __ksymtab_x86_cpu_to_apicid
Then I try to access "x86_cpu_to_logical_apicid" in my kernel module, using
int apicid = per_cpu(x86_cpu_to_logical_apicid, 2);
However, when I load the module, it fails with "Unknown symbol in module". The flag "V" means a weak object, but I'm not sure whether that is why the export fails. Can anyone give me some suggestions? Thank you!
I realize that the OP is perhaps no longer interested in the answer, but today I had a similar issue, and I thought this might help others as well.
Before using an exported per_cpu variable in a module, you have to declare it first. For your case:
DECLARE_PER_CPU(u32, x86_cpu_to_logical_apicid);
Then you can use get_cpu_var and put_cpu_var to safely access the current processor's copy of the variable. You can read more here.
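Putting it together, a minimal module sketch might look like this (it assumes a kernel rebuilt with the EXPORT_PER_CPU_SYMBOL change from the question):

#include <linux/module.h>
#include <linux/percpu.h>

DECLARE_PER_CPU(u32, x86_cpu_to_logical_apicid);

static int __init demo_init(void)
{
    /* get_cpu_var() disables preemption and returns this CPU's copy */
    u32 apicid = get_cpu_var(x86_cpu_to_logical_apicid);
    put_cpu_var(x86_cpu_to_logical_apicid);

    pr_info("logical APIC id on this CPU: %u\n", apicid);
    return 0;
}

static void __exit demo_exit(void) { }

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");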
I have an Erlang program, compiled with rebar. After the new Debian release, it won't compile anymore, complaining about these lines:
-import(erl_scan).
-import(erl_parse).
-import(io_lib).
saying:
bad import declaration
I don't know Erlang; I am just trying to compile this thing.
Apparently something bad happened to -import recently: http://erlang.org/pipermail/erlang-questions/2013-March/072932.html
Is there an easy way to fix this?
Well, -import() works, but it does NOT do what you are expecting it to do. It does NOT "import" the module into your module, nor does it go out, find the module, and make all of its exported functions callable without the module name. You use -import like this:
-import(lists, [map/2,foldl/3,foldr/3]).
Then you can call the explicitly imported functions without module name and the compiler syntactically transforms the call by adding the module name. So the compiler will transform:
map(MyFun, List) ===> lists:map(MyFun, List)
Note that this is ALL it does. There are no checks for whether the module exists or whether the function is exported; it is a purely naive syntactic transformation. All it gives you is slightly shorter code. For this reason it is seldom used, and most people advise against using it.
Note also that the unit of code for all operations is the module, so the compiler does not do any inter-module checking or optimisation at all. Everything between modules, like checking a module's existence or which functions it exports, is done at run-time when you call a function in the other module.
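A minimal sketch of what -import actually buys you (the module name demo is hypothetical):

-module(demo).
-export([doubled/1]).
-import(lists, [map/2]).

doubled(L) ->
    map(fun(X) -> X * 2 end, L).  % compiled as lists:map(fun ..., L)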
No, there is no easy way to fix this. The source code has to be updated, and every reference to imported functions prefixed with the module in question. For example, every call to format should be replaced with io_lib:format, though you'd have to know which function was imported from which module.
You could start by removing the -import directives. The compilation should then fail, complaining about undefined functions. That is where you need to provide the correct module name. Look at the documentation pages for io_lib, erl_scan and erl_parse to see which functions are in which module.
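For example, with the -import(io_lib). directive removed, a call like the first line below no longer compiles and has to be module-qualified by hand (the variable names are illustrative):

%% before: relied on -import(io_lib).
Str = format("~p~n", [Term]),

%% after: explicitly module-qualified
Str = io_lib:format("~p~n", [Term]),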
Your problem is that you were using the experimental -import(Mod) directive, which was part of parameterized modules. These are gone in R16B and onwards.
I often advise against using import. It hurts quick searches and unique naming of foreign calls. Get an editor which can quickly expand names.
Start by looking at what is stored in the location $ERL_LIBS points to; typically this is /usr/lib/erlang/lib.
Suppose I have a top level file that I pass to my compiler that has:
`include "my_defines.sv"
`include "my_component.sv"
Inside "my_component.sv" file, I am using some defines from "my_defines.sv", like this:
my_variable = `CONSTANT_FROM_MY_DEFINES;
The question is the following: do I need to have `include "my_defines.sv" inside "my_component.sv"? Perhaps this requirement is compiler-specific?
If your "my_defines.sv" has an "include" guard, then it is safe and better to include "my_defines.sv" in all your other files. The "include" guard at the top of "my_defines.sv" will look like this:
`ifndef MY_DEFINES_SV
`define MY_DEFINES_SV
// put your own defines here ...
`endif
`include directives like that are like copying and pasting the included file at the point where the `include appears. The compiler:
Reads the file you give it.
When it encounters an `include, it reads that file.
When it's finished with that file, it continues with the original file.
The result is that the compiler sees one big flat file.
In your example you can use stuff from my_defines in my_component because it appears earlier.
The problem with doing a lot of this is that eventually you'll end up with conflicts. Maybe two things reference each other (which `include comes first?), two things use the same name (clashing definitions), or multiple things have the same `include statement (multiple definitions of the same thing).
Packages solve those problems. Once things start getting a little more complex, look into them.
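For instance, a package version of the `define-based setup from the question might look like this (a sketch; the names are hypothetical):

package my_defines_pkg;
    parameter int CONSTANT_FROM_MY_DEFINES = 42;
endpackage

module my_component;
    import my_defines_pkg::*;  // scoped import; no `include ordering issues
    int my_variable = CONSTANT_FROM_MY_DEFINES;
endmodule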
It is dependent upon the order in which your source files are compiled. Because you are referring specifically to `define macros, which are global, the macro definitions must be compiled before the macros are used. In your case, you do not need to include "my_defines.sv" inside "my_component.sv", since "my_defines.sv" was already compiled in your top file.
Macro definitions persist across files, but only to the end of the translation unit. Simulators must support two different methods of assigning source files to translation units, and it's hard to get `include files full of `defines to compile correctly in both methods.
It is better to use parameters or const variables for constants. Since parameters and constants follow normal scoping rules, you can safely include them in every file/scope that needs them. Then it doesn't matter how the code is broken into translation units; it always compiles. I also think it makes the definitions easier to find when you're browsing the code, because the declaration is in the same file or scope instead of off in some other unrelated file.
You have to include `include "my_defines.sv" in my_component.sv.
Best practice is to collect all your defines in one package and include that package in each file.
I'm just looking for a simple, concise explanation of the difference between these two. MSDN doesn't go into a hell of a lot of detail here.
__declspec( dllexport ) - The class or function so tagged will be exported from the DLL it is built in. If you're building a DLL and you want an API, you'll need to use this or a separate .DEF file that defines the exports (MSDN). This is handy because it keeps the definition in one place, but the .DEF file provides more options.
__declspec( dllimport ) - The class or function so tagged will be imported from a DLL. This is not actually required - you need an import library anyway to make the linker happy. But when properly marked with dllimport, the compiler and linker have enough information to optimize the call; without it, you get normal static linking to a stub function in the import library, which adds unnecessary indirection.
__declspec(dllexport) tells the linker that you want this object to be made available for other modules to import. It is used when creating a DLL that others can link to.
__declspec(dllimport) imports the implementation from a DLL so your application can use it.
I'm only a novice C/C++ developer, so perhaps someone's got a better explanation than I.
Two different use cases:
1) You are defining a class implementation within a DLL and want another program to use the class. Here you use dllexport, as you are creating a class that you wish the DLL to expose.
2) You are using a function provided by a DLL and include a header supplied with the DLL. Here the header uses dllimport to bring in the implementation for use by the current program.
Often the same header file is used in both cases, with a macro defined. The build configuration defines the macro as import or export, depending on which it needs.
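That pattern typically looks something like this (the macro names are hypothetical):

// mylib.h -- shared by the DLL build and every consumer
#ifdef MYLIB_EXPORTS  // defined only when building the DLL itself
#define MYLIB_API __declspec(dllexport)
#else
#define MYLIB_API __declspec(dllimport)
#endif

MYLIB_API int add(int a, int b);

// mylib.cpp -- compiled with /D MYLIB_EXPORTS into the DLL
#include "mylib.h"
int add(int a, int b) { return a + b; }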
dllexport marks a function as exported: you implement the function in your DLL and export it so it becomes available to anyone using your DLL.
dllimport is the opposite: it marks a function as being imported from a DLL. In this case you only declare the function's signature and link your code against the import library.