I was playing with Rust when I encountered a bug inside a Cargo package's source code. I changed the code to fix the bug and recompiled the project, but it was still using the old code, which made me wonder:
Is Rust compiling the source code of a Cargo package on my machine, or does the result come from the cloud? If it is compiled on my machine, is it only done once, and where are the results? If packages are compiled in the cloud, how is compatibility maintained between Rust versions if the binary is static? Or is there a binary for each Rust version?
Cargo only compiles code on the local machine; there is no built-in support for downloading pre-built Rust binaries from the Internet. You can find the source code of the dependencies you have used in ~/.cargo/registry/src (Linux path). Cargo places all of the generated binary files in your project's target directory; it doesn't even reuse dependencies compiled for other projects on the same machine.
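For example, on a typical Linux setup you can see both locations yourself (the registry folder names and hashes will differ per machine):

    ls ~/.cargo/registry/src/       # cached sources of downloaded dependencies
    cargo build                     # compile the current project
    ls target/debug/                # resulting binaries
    ls target/debug/deps/           # per-crate artifacts (.rlib files etc.)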
However, by installing and using sccache, you can share compiled dependencies between local projects, or even via cloud storage.
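A minimal sketch of enabling it, assuming sccache is already installed and on your PATH: add this to ~/.cargo/config.toml (setting the RUSTC_WRAPPER=sccache environment variable works too):

    [build]
    rustc-wrapper = "sccache"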
For your particular case of editing a dependency locally, you want the Overriding Dependencies technique to patch the source code. Cargo doesn't check the cached sources for changes, so edits made in place under ~/.cargo/registry/src won't cause anything to be rebuilt.
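For example, a sketch of such a patch, assuming the dependency is called foo and you have a local checkout containing your fix (the name and path are placeholders): in your project's Cargo.toml, add

    [patch.crates-io]
    foo = { path = "../foo" }

Cargo tracks changes in path dependencies, so subsequent edits in ../foo will actually trigger a rebuild.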
Finally, the per-crate .rlib files Cargo generates in target/debug/deps are equivalent to C object files: they all need to be linked together to produce a valid executable or library. The ABI between them depends on the Rust version, which is why switching toolchains causes them all to be rebuilt.
Related
I have a Cargo workspace with a library and multiple binaries. The binaries enable different features in the library, and I would like to build the library only once (because it's big and building it takes a lot of time).
However, if I invoke cargo run / cargo test / cargo clippy in the directory of one of the packages, the library is rebuilt with only the features enabled by the package I'm building.
Is it possible to tell cargo to apply feature unification as if the whole workspace is being built, when building only one package?
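For reference, a minimal sketch of the layout I mean (all names are placeholders):

    # Cargo.toml (workspace root)
    [workspace]
    members = ["biglib", "bin_a", "bin_b"]

    # bin_a/Cargo.toml
    [dependencies]
    biglib = { path = "../biglib", features = ["feature_a"] }

    # bin_b/Cargo.toml
    [dependencies]
    biglib = { path = "../biglib", features = ["feature_b"] }

With this layout, cargo build at the workspace root builds biglib once with feature_a and feature_b unified, while cargo build inside bin_a rebuilds it with only feature_a.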
https://github.com/experiment9123/rust_wasm32_emscripten_pthreads
I have created this minimal example project, which demonstrates the compile problem I'm having. It requires the emscripten SDK to be installed and on the path; compile with cargo build --target=wasm32-unknown-emscripten.
I get this error:
wasm-ld: error: --shared-memory is disallowed by std-e42ff3047517e183.std.70dd5b92-cgu.0.rcgu.o because it was not compiled with 'atomics' or 'bulk-memory' features.
I have a Rust project which runs fine in the browser, compiled to wasm32 with the emscripten toolchain & SDK; but when I attempt to enable pthreads, I get a linker error, which I'm guessing indicates the std lib needs to be recompiled with the same options.
Could anyone confirm that this is what I'm really seeing, and moreover, what would the remedy be?
I think cargo brings in source for most dependencies and builds them from source; can it be made to do this for the stdlib? How would you go about injecting this option?
Is it something that would require major work within the Rust stdlib itself (e.g. another target, or some way of adding options)?
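For what it's worth, the closest mechanism I've found is nightly Cargo's -Z build-std flag, which rebuilds the stdlib from source with your RUSTFLAGS. I imagine something like the following would be the starting point (the target-feature flags are taken from the linker error above; I haven't verified that this combination works for the emscripten target):

    RUSTFLAGS="-C target-feature=+atomics,+bulk-memory" \
        cargo +nightly build -Z build-std=std,panic_abort \
        --target=wasm32-unknown-emscripten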
While exploring the platform setup for OpenCASCADE, I came to know about WOK commands, which aren't needed when using the CMake build system with OpenCASCADE.
However, another option is the genproj tool (for which I haven't yet found any EXE, only DLLs), to be used with MSVC and its built-in compiler so that we don't need any GCC installation.
What's the difference between the two, and which one is better and easier?
Also, please suggest how to download, install, and set up genproj on Windows.
The OCCT project provides the following build systems:
CMake. This is the main build system since OCCT 7.0.0.
It allows building OCCT for almost every supported target platform.
WOK. This was an in-house build system used by OCCT before the 7.0.0 release.
The tool handled classes defined in CDL (CAS.CADE definition language) files (WOK generated C++ header files from CDL) and supported building in a distributed environment (e.g. a local WOK setup would build only modified source files and reuse unmodified binary/object files from the local network). WOK support has been discontinued since OCCT 7.5.0, and it is unlikely to be able to build up-to-date OCCT sources (although the project structure remains compatible with WOK).
genproj. This is a Tcl script that generates projects for building OCCT with Visual Studio (2010+), Code::Blocks, Xcode, and Qt Creator. The script was initially extracted from the WOK package (where it was implemented as the command wgenproj in its shell) and is now maintained independently of it.
qmake. The experimental adm/qmake solution can be opened directly from Qt Creator without a CMake plugin (the project files will be generated recursively by qmake). However, header file generation (filling the inc folder) still has to be done using genproj (qmake's scripting capabilities were found to be too limited for this task).
genproj doesn't require any DLLs or EXE files; it comes with OCCT itself and needs only a Tcl interpreter. On Windows it can be executed via the genconf.bat and genproj.bat batch scripts in the root of the OCCT source code folder. At first launch it will ask for the path to tclsh.exe.
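In practice the Windows flow is roughly this, run from the OCCT source root (the path is an example, and prompts may vary between OCCT versions):

    cd /d D:\occt
    rem Tcl/Tk GUI to configure paths to 3rd-party libraries:
    genconf.bat
    rem Generate the IDE project files:
    genproj.bat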
While CMake is the main build tool for the OCCT project, genproj remains maintained and used by (some) developers, mostly due to personal habits and hatred of CMake. Differences of genproj from CMake that could be considered advantages in some cases:
Generated project files can be moved to another location/computer without having to regenerate them.
A simplified 3rd-party dependency search tool, genconf, with a GUI based on Tcl/Tk.
Batch-script environment/configuration files (env.bat and custom.bat), although the CMake script in OCCT emulates similar files.
The generated Visual Studio solution contains Debug+Release and 32-bit/64-bit configurations.
Draw Harness and regression tests can be started directly from Visual Studio (without building any INSTALL target).
No problems with CMakeCache.txt.
Limitations of genproj:
No CMake configuration files. Other CMake-based projects would not be able to reuse configuration files to simplify 3rd-party setup.
Regeneration of project files has to be invoked explicitly.
Out-of-source builds are not supported (however, each configuration is put into a dedicated sub-folder).
No INSTALL target.
No PCH (precompiled header) generation.
It should be noted that several attempts have been made to keep compiler/linker flags consistent between CMake and genproj, but in practice they may differ.
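For comparison, the CMake route is a standard out-of-source configure-and-build; a sketch (the generator name and the 3RDPARTY_DIR location are examples, check the OCCT build documentation for the exact cache variables):

    mkdir occt-build && cd occt-build
    cmake -G "Visual Studio 16 2019" -D 3RDPARTY_DIR=D:/occt-3rdparty D:/occt
    cmake --build . --config Release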
I am working on a project where a dependency requires a specific nightly feature. I need to use this lib, but I am afraid that if I compile the project with nightly, I could pull in other libraries that depend on another unstable feature without being aware of it.
Is it possible to compile the library that I need using nightly (pinning the nightly version to one whose changes have already been merged into the release branch) into some kind of "lib.a" file, and compile the whole project on the stable release while linking to that "lib.a"?
No, you cannot do this: rustc's ABI and crate metadata are not stable across toolchains, so a library built by a nightly compiler cannot be linked into a build produced by a stable one.
See also:
Is there any way to get unstable features on the compiler versions in stable or beta?
Is there a way to use unstable modules from Rust stable?
For a cross-platform software project that builds on Linux and Windows, we have distinct ways of handling third-party libraries. On Linux we build and link against the versions distributed with the CentOS/RHEL distribution, which means we link against release builds. On Windows we maintain our own third-party library "packages" and build two versions of every library: a release version that links msvcr100 and msvcp100, and a debug version that links msvcr100d and msvcp100d.
My question is simply whether it is necessary to build the debug versions of the third-party dependencies on Windows, or whether we can simply use /nodefaultlib:msvcr100 when building debug builds of our own software.
A follow-up question: where can I learn about good practices in this regard? I've read the MSDN pages about the MSVC runtime, but there is very little there in terms of recommendations.
EDIT:
Let me rephrase the question more concisely: with VS2010, what is the problem with using /nodefaultlib:msvcr100 to link an executable built with /MDd against libraries that are compiled with /MD?
My motivation for this is to avoid having to build both release and debug versions of the third-party libraries I use. Also, I want my debug build to run faster.
From the documentation for /MD, /MT, /LD (Use Run-Time Library):
/MD: Causes your application to use the multithread- and DLL-specific version of the run-time library. Defines _MT and _DLL and causes the compiler to place the library name MSVCRT.lib into the .obj file.
Applications compiled with this option are statically linked to MSVCRT.lib. This library provides a layer of code that allows the linker to resolve external references. The actual working code is contained in MSVCR100.DLL, which must be available at run time to applications linked with MSVCRT.lib.
/MDd: Defines _DEBUG, _MT, and _DLL and causes your application to use the debug multithread- and DLL-specific version of the run-time library. It also causes the compiler to place the library name MSVCRTD.lib into the .obj file.
So there is no documentation of any difference in the generated code other than _DEBUG being defined.
You only use the Debug build of the CRT to debug your app. It contains lots of asserts to help you catch mistakes in your code. You never ship the debug build of your project, always the Release build. Nor can you: the license forbids shipping msvcr100d.dll. So building your project correctly automatically avoids the dependency on the debug version of the CRT.
The /nodefaultlib linker option was intended to allow linking your program with a custom CRT implementation. That's quite rare, but some programmers care a lot about building small programs, and the standard CRT isn't exactly small.
Some programmers use /nodefaultlib as a hack to work around a link problem, induced when they link code that was built with Debug configuration settings against code built with Release configuration settings, or link code with incompatible CRT choices (/MD vs /MT). This can work, with no guarantee, but of course it only sweeps the real problem under the floor mat.
So no, it is not the proper choice; fixing the core problem should be your goal. Ensure that all your .obj and .lib files are built with the same compiler options and you won't have this problem. If that means you have to pester a library owner for a proper build, then pester first; hack around it only once you've decided you don't want a dependency on that .lib anymore but don't yet have the time to find an alternative.
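If you are unsure what a given third-party .lib was actually built with, dumpbin can tell you: each object records the CRT it expects as a /DEFAULTLIB directive (the library name below is a placeholder):

    dumpbin /directives thirdparty.lib | findstr /i defaultlib
    rem /MD  objects request MSVCRT.lib  (release DLL CRT)
    rem /MDd objects request MSVCRTD.lib (debug DLL CRT)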