This question already has answers here:
How can I build multiple binaries with Cargo?
(3 answers)
Package with both a library and a binary?
(4 answers)
Closed 3 years ago.
I'd like to create a Rust package with two binary crates and a library containing the shared code. I know how to do this for a simple program by putting the source files for the binaries in a src/bin/ subdirectory (e.g. src/bin/firstbin.rs and src/bin/secondbin.rs) and the library code in src/lib.rs.
However, if the binaries have a substantial amount of non-shared code which does not belong in the library, and I want to split their source into multiple files, I'm not sure how to lay out the source files. I'm thinking of something along the lines of src/bin/firstbin/ for the files which belong only to the first binary, and src/bin/secondbin/ for the second binary. However, I'm not sure how to reference these files from firstbin.rs and secondbin.rs.
So is this the right approach, and if so, how do I reference the files? If not, what's the best layout?
You can put your fn main() into src/bin/firstbin/main.rs and add more files for submodules in the same directory. This is documented in this section of the Cargo manual (note that the gray box in that section is wrong).
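For example, a layout along these lines should work; the submodule and function names below are hypothetical, and mypackage stands in for the package name from Cargo.toml:

    src
    |--lib.rs            (shared library code)
    |
    |--bin
       |--firstbin
       |  |--main.rs     (entry point of the first binary)
       |  |--parser.rs   (code used only by the first binary)
       |
       |--secondbin
          |--main.rs
          |--render.rs

    // src/bin/firstbin/main.rs
    mod parser; // resolves to src/bin/firstbin/parser.rs

    fn main() {
        // the shared library crate is reachable under the package name
        mypackage::shared_hello();
        parser::parse("some input");
    }

    // src/bin/firstbin/parser.rs
    pub fn parse(input: &str) {
        println!("firstbin-only parsing of {input}");
    }

    // src/lib.rs
    pub fn shared_hello() {
        println!("hello from the shared library");
    }

With this layout, cargo build produces both binaries, and each binary's private modules stay out of the library.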
I currently have one main application, parts of which can be "live-patched". E.g. some functions with a predefined name and signature can be updated at runtime. Currently, repatching is performed in these two steps (sketched after the notes below):
use std::process::Command to call rustc and compile a cdylib from the source; each output file gets a new name in order to make sure that dlopen does not use the cached old file
use libloading to load and run the newly patched function
This is obviously not ideal, for a couple of reasons. My question is whether there is a better way to achieve this, e.g. doing the compilation from within Rust.
Additional info:
The patch files do not require any external crates
The patch files need to be aware of some common lib modules, which do not get live patched
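For reference, a minimal sketch of the two-step flow described above, assuming the libloading crate; the file names and the exported symbol patched_fn are hypothetical, and the patch source is assumed to define #[no_mangle] pub extern "C" fn patched_fn():

    use std::process::Command;
    use libloading::{Library, Symbol};

    fn recompile_and_load(src: &str, out: &str) -> Library {
        // step 1: shell out to rustc to build a cdylib from the patch source
        let status = Command::new("rustc")
            .args(["--crate-type", "cdylib", "-o", out, src])
            .status()
            .expect("failed to spawn rustc");
        assert!(status.success(), "rustc reported an error");

        // step 2: dlopen the freshly built library
        unsafe { Library::new(out).expect("failed to load cdylib") }
    }

    fn main() {
        // a unique output name per patch avoids dlopen returning a
        // cached handle to the previous version of the library
        let lib = recompile_and_load("patch.rs", "./patch_v2.so");
        unsafe {
            let f: Symbol<unsafe extern "C" fn()> =
                lib.get(b"patched_fn").expect("symbol not found");
            f();
        }
    }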
I have compiled the sources of wget (from the FTP server, https://ftp.gnu.org/gnu/wget/) because I want to link my own program against one of the object files produced by the build. But running nm -u on the desired file (to be specific, src/http.o) gives me a whole lot of names that need to be resolved at link time.
Question #1
Is there a tool to find which other object files need to be present for the linker to resolve all the symbols? Manually testing every possible combination of object files does not even seem reasonable.
Question #2
When I try to link my program with every possible object file obtained from compiling the project, I get a multiple definition error. Does this imply that, in general, I need to select only a meaningful subset of the object files that I get after compiling some project, and then build my executable with them?
Is there a tool to find which other object files need to be present for the linker to resolve all the symbols?
No. Constructing such a tool would not be difficult (you want to find connected components in the symbol dependency graph), but the problem is not common enough for one to exist.
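For illustration, a rough sketch of such a tool; it assumes GNU nm's default output format (U marks an undefined symbol; T, D, B and R mark defined ones), and the object file paths are hypothetical:

    use std::collections::{HashMap, HashSet};
    use std::process::Command;

    // Collect the (defined, undefined) symbols of one object file via `nm`.
    fn symbols(obj: &str) -> (HashSet<String>, HashSet<String>) {
        let out = Command::new("nm").arg(obj).output().expect("failed to run nm");
        let (mut defined, mut undefined) = (HashSet::new(), HashSet::new());
        for line in String::from_utf8_lossy(&out.stdout).lines() {
            // lines look like "0000000000000000 T http_loop" or "         U strlen"
            let mut fields = line.split_whitespace().rev();
            if let (Some(name), Some(kind)) = (fields.next(), fields.next()) {
                match kind {
                    "U" => { undefined.insert(name.to_owned()); }
                    "T" | "D" | "B" | "R" => { defined.insert(name.to_owned()); }
                    _ => {}
                }
            }
        }
        (defined, undefined)
    }

    // Starting from one object, repeatedly pull in whichever object
    // defines a still-missing symbol.
    fn closure(start: &str, objects: &[&str]) -> HashSet<String> {
        let mut provider = HashMap::new(); // symbol -> object defining it
        let mut needs = HashMap::new();    // object -> its undefined symbols
        for &obj in objects {
            let (def, undef) = symbols(obj);
            for sym in def {
                provider.insert(sym, obj.to_owned());
            }
            needs.insert(obj.to_owned(), undef);
        }
        let mut linked = HashSet::from([start.to_owned()]);
        let mut queue = vec![start.to_owned()];
        while let Some(obj) = queue.pop() {
            for sym in &needs[&obj] {
                if let Some(p) = provider.get(sym) {
                    if linked.insert(p.clone()) {
                        queue.push(p.clone());
                    }
                }
            }
        }
        linked
    }

    fn main() {
        // hypothetical paths into the wget build tree
        let objects = ["src/http.o", "src/url.o", "src/retr.o"];
        for obj in closure("src/http.o", &objects) {
            println!("{obj}");
        }
    }

Whatever symbols remain unresolved after the walk must come from external libraries rather than the project's own object files.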
Manually testing every possible combination of object files does not even seem reasonable.
It looks like wget consists of about 100 source files. Trying every possible combination of the resulting object files would mean on the order of 2^100 link attempts, which is indeed a few too many to try.
As @kaylum commented, the developers didn't intend wget as a reusable library, so there is no guarantee that there is a solution even if you do try every possible combination.
Also note that linking in wget sources imposes licence restrictions on your final program (you would have to release it under GPLv3).
When I try to link my program with every possible object file obtained from compiling the project, I get a multiple definition error.
That is expected: both your own program and wget/src/main.c define the main function.
Does this imply that, in general, I need to select only a meaningful subset of the object files that I get after compiling some project, and then build my executable with them?
Yes. And in general there is no guarantee that a subset satisfying your requirements even exists.
This question already has an answer here:
What is the recommended directory structure for a Rust project?
(1 answer)
Closed 4 years ago.
Where in the project structure do benchmark tests for a library go? I tried putting them in a file in the tests folder of the library, but they fail to run with cargo bench.
The documentation for Cargo's project layout says:
Benchmarks go in the benches directory.
That's the convention, but you can configure targets in your Cargo.toml, and one of those is the [[bench]] target. Doing that lets you place the benchmarks wherever you'd like.
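For example, a hypothetical [[bench]] target pointing at a non-default location (the name and path here are made up):

    [[bench]]
    name = "parsing"
    path = "custom_benches/parsing.rs"

With this in Cargo.toml, cargo bench will compile and run custom_benches/parsing.rs as a benchmark target.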
This question already has answers here:
Possible Duplicate: Common GNU makefile directory path
Closed 10 years ago.
After reading Recursive Make Considered Harmful, I decided to use include directives in the Makefile for my next project. I have a main Makefile that includes two sub-makefiles that live in different directories. The problem is that the paths inside each sub-makefile are relative to its own directory, so when I include it from the main Makefile, make can't find the files. Is there a way to solve this problem without changing the paths?
Although the article is right about recursive make and the dependency DAG, I read it about half a year ago, tried the approach it describes, and found the "classic" recursive-make approach much more convenient. Consider this:
big_project
|--Makefile
|
|--sub_project_1
| |--...
| |--Makefile
|
|--sub_project_2
|--...
|--Makefile
It's wonderful when you're running make from the big_project directory, but if you do things as the article recommends, there are no Makefiles in the sub_project_x directories, so you can't build each sub-project separately.
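A minimal sketch of the top-level Makefile for that classic layout (the directory names match the tree above; note that the recipe line must be indented with a tab):

    # delegate the real work to each sub-project's own Makefile
    SUBDIRS = sub_project_1 sub_project_2

    .PHONY: all $(SUBDIRS)
    all: $(SUBDIRS)

    $(SUBDIRS):
    	$(MAKE) -C $@

Each sub-project keeps its own Makefile, so you can still cd into sub_project_1 and run make there on its own.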
This question already has answers here:
Assembly code vs Machine code vs Object code?
(10 answers)
Closed 9 years ago.
I'm in the middle of my A-levels and I'm doing some revision for my Computing exam.
I was wondering if someone could tell me what the difference is between machine code and object code.
Keep it simple, please.
Object code is the output of the compiler. It contains instructions derived from your source code, but in a compact, machine-oriented (and often optimized) format, and it can also contain other things such as debugger symbols. Usually, object code is then processed by the linker, which connects the object code from each compilation unit to form an executable (or a library, such as a DLL). The executable or library contains machine code, which can be executed directly by the processor and is specific to the machine's architecture and instruction set.
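To make the two stages concrete, here is how they typically look with a C toolchain (foo.c is a hypothetical source file):

    # compile only: produces object code (foo.o); the CPU cannot run
    # this yet, because references to other files are still unresolved
    gcc -c foo.c -o foo.o

    # link: the linker combines object files (and libraries) into an
    # executable containing machine code the processor runs directly
    gcc foo.o -o foo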