How to get plain text files in Doxygen documentation?

I cannot include any text file in my Doxygen documentation. The only exception is a README.md file that I set as the main page.
In particular, I would like to see the Changelog.txt file in the documentation. I tried to add it explicitly in the INPUT field and in the FILE_PATTERNS field, without success. In the generated HTML documentation, I cannot find it either in the file list or through search.
The only trace is in Doxygen's log file:
Preprocessing C:/Source/Changelog.txt...
Parsing file C:/Source/Changelog.txt...
...
Parsing code for file Changelog.txt...
If I change the extension of the file from txt to md, the file is added to the documentation.

You need EXTENSION_MAPPING=txt=md, otherwise the .txt file is handled as a C/C++ source file; since it contains no comment markers, it produces no output.
From the documentation:
EXTENSION_MAPPING: Doxygen selects the parser to use depending on the extension of the files it parses. With this tag you can assign which parser to use for a given extension. Doxygen has a built-in mapping, but you can override or extend it using this tag. The format is ext=language, where ext is a file extension and language is one of the parsers supported by doxygen: IDL, Java, Javascript, C#, C, C++, D, PHP, Objective-C, Python, Fortran (fixed format Fortran: FortranFixed, free formatted Fortran: FortranFree, unknown formatted Fortran: Fortran; in the latter case the parser tries to guess whether the code is fixed or free formatted, which is the default for Fortran type files), VHDL. For instance, to make doxygen treat .inc files as Fortran files (default is PHP) and .f files as C (default is Fortran), use: inc=Fortran f=C. Note: for files without an extension you can use no_extension as a placeholder. Note that for custom extensions you also need to set FILE_PATTERNS, otherwise the files are not read by doxygen.
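Putting that together, a minimal Doxyfile sketch might look like this (paths are illustrative; if your project already sets FILE_PATTERNS for source files, append *.txt instead of replacing the list):
INPUT             = README.md Changelog.txt
# make sure doxygen reads .txt files at all
FILE_PATTERNS     = *.md *.txt
# parse .txt with the Markdown parser instead of the default C/C++ one
EXTENSION_MAPPING = txt=md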

Related

Haxe compiling to C++ and JS source

I am trying to write source code in one language and have it converted to both native C++ and JS source. Ideally the converted source should be human readable and resemble the original source as closely as possible. I was hoping Haxe could solve this problem for me: I code in Haxe and have it converted to the corresponding C++ and JS source. However, the examples of Haxe I'm finding seem to create the final application for you. So with C++ it will use msbuild (or whatever compiler it finds) and create the final exe from the generated C++ code. Does Haxe also create the C++ and JS source code for you to view, or is it all done internally to Haxe and not accessible? If it is accessible, is it possible to remove the building side of Haxe so it simply creates the source code and stops?
Thanks
When you target C++, all the intermediate files are generated and kept wherever you decide to put your output (the path given with -cpp pathToOutput). The fact that you get an executable is probably because you are using the -main switch. That implies an entry point to your application, but it is not really required; you can instead pass on the command line a list of the types you want built into your output.
For JS it is very similar: a single JS file is generated, and it only has an entry point if you used -main.
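As a sketch (the package/class name is made up, and -D no-compilation is, to my knowledge, the hxcpp define that skips the native build step; check your hxcpp version):
haxe -cp src -cpp out/cpp -D no-compilation my.pack.MyLib
haxe -cp src -js out/lib.js my.pack.MyLib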
Regarding the other topic, whether your Haxe code resembles the generated code: the answer is yes, but... some of the types (like Enum and Abstract) only exist in Haxe, so they generate code that works functionally but may look quite different. Also, Haxe has an always-on optimizer/analyzer that might mangle your code in unexpected ways (the analyzer can be disabled). I still find that it is not that difficult to recognize the Haxe source in the generated code. The JS target supports source maps, which are really useful for debugging. So in the end, Haxe doesn't do anything to obfuscate your generated code, but it doesn't do much to preserve it too strictly either.

irun, ncverilog does not determine header file

irun does not recognize my define header file (m_def.h). When I use irun like this:
irun -f xxx.f
I get an error message like this:
irun: E.FMUK the type of the file m_def.h could not be determined.
The file consists entirely of `define xxxx lines. How can I solve this problem?
You can use the irun command-line option -vlog_ext to add new file extensions to irun.
Add extensions to the list of built-in, predefined extensions by using a plus sign ( + ) before the list of extensions to add. For example, the following option adds .rtl and .vh.
-vlog_ext +.rtl,.vh
Rename m_def.h to m_def.vh (or m_def.v).
The .h file extension is for C/C++ header files. Verilog header files more often use the .vh extension; if not, then .v. SystemVerilog header files should use the .svh extension.
Many Verilog/SystemVerilog simulators allow overriding or extending the accepted file extensions. Refer to the manual for your specific simulator. Note that some simulators accept C/C++, Verilog, SystemVerilog, VHDL, and other languages, so it is recommended not to map a file extension to one language when it is already used by another.
In this case .h is already used by C/C++, so don't add .h to the allowed Verilog/SystemVerilog extensions. If .vh is not supported by default, you may add it to the allowed Verilog file extension list.
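Putting the two answers together, an invocation might look like this (assuming the header is renamed to m_def.vh and referenced that way in xxx.f, and that your irun version supports -vlog_ext as shown above):
irun -f xxx.f -vlog_ext +.vh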

Why there are verilog verification files not in the form of module?

Why are there Verilog verification files not in the form of a module?
The files I see start with just initial begin, and some file names use the .inc extension.
It is common to include files of arbitrary content into Verilog modules. This is done using the `include compiler directive, as described in IEEE Std 1800-2012, section "22.4 `include":
The file inclusion (include) compiler directive is used to insert the entire contents of a source file in another file during compilation. The result is as though the contents of the included source file appear in place of the `include compiler directive.
It can be useful for sharing common code between different modules: parameters, `define macros, tasks, functions, etc.
In general, the .inc file extension is not special. It may be a convention used by certain simulation tools.
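A hedged sketch of what such a file might look like (file names are made up): a fragment that is not a module by itself, pulled into one with `include:
// stimulus.inc -- not a module, just statements meant to be `include'd
initial begin
  $display("running shared stimulus");
end

// tb_top.sv -- the contents of stimulus.inc appear here as if typed in place
module tb_top;
  `include "stimulus.inc"
endmodule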

How to structure a project in Visual C++ 2008 Express

I am using Visual C++ 2008 Express for the first time for a project, and I can't seem to split the .h and .cpp files for the classes I'm writing. I was under the impression that you add a header file and declare the class in there, and then you add a .cpp file with the implementation to your source files directory. Then, when you include the .h, it would automatically include the .cpp implementation files. Is this correct, or am I missing something?
Not sure if this is the same in the Express version, but you can also add a new C++ class with a header (.h) and source (.cpp) at the same time by right-clicking on the project -> Add -> Class...
Including the .h file with #include does not mean the actual implementation (in another .cpp file) is also included in your source file. Only the content of the .h file, i.e. the class and method declarations, is included. These declarations allow you to use the classes declared in the header file without pulling in the real implementation code.
Each source file (.cpp) is first compiled into an object file. All these object files are then linked together to create a single executable. The symbols referenced in each object file are resolved to their implementations during this linking step (http://www.cprogramming.com/compilingandlinking.html)
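A minimal sketch of that split (the class name is made up):
// Widget.h -- declarations only
#ifndef WIDGET_H
#define WIDGET_H
class Widget {
public:
    int value() const;   // declared here, defined in Widget.cpp
};
#endif

// Widget.cpp -- compiled into its own object file
#include "Widget.h"
int Widget::value() const { return 42; }

// main.cpp -- includes only the declarations; the linker later matches
// the call below to the definition in Widget.cpp's object file
#include "Widget.h"
int main() {
    Widget w;
    return w.value();
}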
I don't remember the rules, but sometimes the IDE assumes you're putting all your code in the header file. This is legal, but not a common preference.

The compilation process

Can anyone explain how compilation works?
I can't seem to figure out how compilation works..
To be more specific, here's an example.. I'm trying to write some code in MSVC++ 6 to load a Lua state..
I've already:
set the additional directories for the library and include files to the right directories
used extern "C" (because Lua is C only or so I hear)
include'd the right header files
But I'm still getting errors in MSVC++ 6 about unresolved external symbols (for the Lua functions that I used).
As much as I'd like to know how to solve this problem and move on, I think it would be much better for me if I came to understand the underlying processes involved, so could anyone perhaps write a nice explanation for this? What I'm looking to know is the process.. It could look like this:
Step 1:
Input: Source code(s)
Process: Parsing (perhaps add more detail here)
Output: whatever is output here..
Step 2:
Input: Whatever was output from step 1, plus maybe whatever else is needed (libraries? DLLs? .so? .lib? )
Process: whatever is done with the input
Output: whatever is output
and so on..
Thanks..
Maybe this will explain what symbols are, what exactly "linking" is, what "object" code or whatever is..
Thanks.. Sorry for being such a noob..
P.S. This doesn't have to be language specific.. But feel free to express it in the language you're most comfortable in.. :)
EDIT: So anyway, I was able to get the errors resolved, it turns out that I have to manually add the .lib file to the project; simply specifying the library directory (where the .lib resides) in the IDE settings or project settings does not work..
However, the answers below have somewhat helped me understand the process better. Many thanks!.. If anyone still wants to write up a thorough guide, please do.. :)
EDIT: Just for additional reference, I found two articles by one author (Mike Diehl) to explain this quite well.. :)
Examining the Compilation Process: Part 1
Examining the Compilation Process: Part 2
From source to executable is generally a two-stage process for C and associated languages, although the IDE probably presents it as a single process.
1/ You code up your source and run it through the compiler. The compiler at this stage needs your source and the header files of the other stuff that you're going to link with (see below).
Compilation consists of turning your source files into object files. Object files have your compiled code and enough information to know what other stuff they need, but not where to find that other stuff (e.g., the LUA libraries).
2/ Linking, the next stage, is combining all your object files with libraries to create an executable. I won't cover dynamic linking here since that will complicate the explanation with little benefit.
Not only do you need to specify the directories where the linker can find the other code, you need to specify the actual library containing that code. The fact that you're getting unresolved externals indicates that you haven't done this.
As an example, consider the following simplified C code (xx.c) and command.
#include <bob.h>
int x = bob_fn(7);
cc -c -o xx.obj xx.c
This compiles the xx.c file to xx.obj. The bob.h contains the prototype for bob_fn() so that compilation will succeed. The -c instructs the compiler to generate an object file rather than an executable and the -o xx.obj sets the output file name.
But the actual code for bob_fn() is not in the header file but in /bob/libs/libbob.so, so to link, you need something like:
cc -o xx.exe xx.obj -L/bob/libs;/usr/lib -lbob
This creates xx.exe from xx.obj, using libraries (searched for in the given paths) of the form libbob.so (the lib and .so are added by the linker usually). In this example, -L sets the search path for libraries. The -l specifies a library to find for inclusion in the executable if necessary. The linker usually takes the "bob" and finds the first relevant library file in the search path specified by -L.
A library file is really a collection of object files (sort of how a zip file contains multiple other files, but not necessarily compressed) - when the first relevant occurrence of an undefined external is found, the object file is copied from the library and added to the executable just like your xx.obj file. This generally continues until there are no more unresolved externals. The 'relevant' library is a modification of the "bob" text, it may look for libbob.a, libbob.dll, libbob.so, bob.a, bob.dll, bob.so and so on. The relevance is decided by the linker itself and should be documented.
How it works depends on the linker but this is basically it.
1/ All of your object files contain a list of unresolved externals that they need to have resolved. The linker puts together all these objects and fixes up the links between them (resolves as many externals as possible).
2/ Then, for every external still unresolved, the linker combs the library files looking for an object file that can satisfy the link. If it finds it, it pulls it in - this may result in further unresolved externals as the object pulled in may have its own list of externals that need to be satisfied.
3/ Repeat step 2 until there are no more unresolved externals or no possibility of resolving them from the library list (this is where your development was at, since you hadn't included the LUA library file).
The complication I mentioned earlier is dynamic linking. That's where you link with a stub of a routine (sort of a marker) rather than the actual routine, which is later resolved at load time (when you run the executable). Things such as the Windows common controls are in these DLLs so that they can change without having to relink the objects into a new executable.
Step 1 - Compiler:
Input: Source code file[s]
Process: Parsing source code and translating into machine code
Output: Object file[s], which consist[s] of:
The names of symbols which are defined in this object, and which this object file "exports"
The machine code associated with each symbol that's defined in this object file
The names of symbols which are not defined in this object file, but on which the software in this object file depends and to which it must subsequently be linked, i.e. names which this object file "imports"
Step 2 - Linking:
Input:
Object file[s] from step 1
Libraries of other objects (e.g. from the O/S and other software)
Process:
For each object that you want to link
Get the list of symbols which this object imports
Find these symbols in other libraries
Link the corresponding libraries to your object files
Output: a single executable file, which includes the machine code from all your objects, plus the objects from libraries which were imported (linked) into your objects.
The two main steps are compilation and linking.
Compilation takes single compilation units (simply source files, with all the headers they include) and creates object files. In those object files there are a lot of functions (and other stuff, like static data) defined at specific locations (addresses). For the next step, linking, a bit of extra information about these functions is also needed: their names. So these are stored too. A single object file can reference functions (because it wants to call them when the code is run) that are actually in other object files, but since we are dealing with a single object file here, only symbolic references (their 'names') to those other functions are stored in the object file.
Next comes linking (let's restrict ourselves to static linking here). Linking is where the object files that were created in the first step (either directly, or after they have been thrown together into a .lib file) are taken together and an executable is created.
In the linking step, all those symbolic references from one object file or lib to another are resolved (if they can be), by looking up the names in the correct object, finding the address of the function, and putting the addresses in the right place.
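As a small sketch reusing the bob_fn example from the earlier answer (and assuming, for illustration, that its implementation sits in a local bob.c rather than a library):
cc -c -o xx.obj xx.c          # xx.obj records an unresolved reference to bob_fn
cc -c -o bob.obj bob.c        # bob.obj contains the definition of bob_fn
cc -o xx.exe xx.obj bob.obj   # the linker matches the reference to its definition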
Now, to explain something about the 'extern "C"' thing you need:
C does not have function overloading. A function is always recognizable by its name. Therefore, when you compile code as C code, only the real name of the function is stored in the object file.
C++, however, has something called 'function / method overloading'. This means that the name of a function is no longer enough to identify it. C++ compilers therefore create 'names' for functions that include the prototypes of the function (since the name plus the prototype will uniquely identify a function). This is known as 'name mangling'.
The 'extern "C"' specification is needed when you want to use a library that has been compiled as 'C' code (for example, the pre-compiled Lua binaries) from a C++ project.
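For the pre-compiled Lua case this usually means wrapping the C headers; a sketch (newer Lua versions also ship a lua.hpp that does exactly this wrapping for you):
// tell the C++ compiler these are C symbols, so their names are not mangled
extern "C" {
#include "lua.h"
#include "lualib.h"
#include "lauxlib.h"
}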
For your exact problem: if it still does not work, these hints might help:
* have the Lua binaries been compiled with the same version of VC++?
* can you simply compile Lua yourself, either within your VC solution, or as a separate project as C++ code?
* are you sure you have all the 'extern "C"' things correct?
You have to go into the project settings and add the directory containing the Lua library's *.lib files somewhere on the "Linker" tab. The setting is called "including libraries" or something similar; sorry, I can't look it up.
The reason you get "unresolved external symbols" is that compilation in C++ works in two stages. First the code gets compiled, each .cpp file into its own .obj file; then the linker runs and joins all those .obj files into an .exe file. A .lib file is just a bunch of .obj files merged together to make distributing libraries a little simpler.
So by adding all the #includes and extern declarations you told the compiler that code with those signatures exists somewhere, but the linker can't find that code because it doesn't know where the .lib files containing it are placed.
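As an aside, with MSVC you can also request a library directly from source; a sketch (the exact .lib name is a placeholder and depends on the Lua distribution you have, and its directory still has to be on the linker's search path):
// MSVC-specific pragma: asks the linker to pull in this library
#pragma comment(lib, "lua51.lib")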
Make sure you have read the README of the library; it usually has a rather detailed explanation of what you have to do to include it in your code.
You might also want to check this out: COMPILER, ASSEMBLER, LINKER AND LOADER: A BRIEF STORY.

Resources