I'm on Windows 10 x64 with:
rustup 1.25.1 (bb60b1e89 2022-07-12)
rustc 1.66.1 (90743e729 2023-01-10)
MSVC v143 installed with Visual Studio Installer from Microsoft.
If I open a Rust project and change a single character in the code (e.g. a variable value from 1 to 2), it re-builds the project (using watchexec) in 12 seconds.
I installed llvm and used this in global cargo config file (C:/Users/<username>/.cargo/config)
[target.x86_64-pc-windows-msvc]
linker = "lld-link.exe"
After a cargo clean and an initial full re-build (debug mode, 2 minutes, the same as without lld), the incremental build time is unchanged (maybe even worse by a second).
So no change with or without LLD.
Can you confirm or am I wrong?
How to get faster incremental (development) builds?
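For reference, one commonly suggested tweak independent of the linker is reducing the amount of debug info generated in the dev profile, since debug info tends to dominate link time. A minimal sketch, assuming a standard Cargo.toml (the exact values are a matter of taste):

```toml
# Cargo.toml — sketch only: less debug info usually means faster linking,
# at the cost of a poorer debugging experience
[profile.dev]
debug = 0          # 0 = none, 1 = line tables only, 2 = full (the default)
```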
Good Morning.
I am compiling Clang, following the instructions here Getting Started: Building and Running Clang
I am on linux and the compilation goes smoothly. But I think I am missing out something...
I want to compile ONLY clang, not all the related libraries. The option -DLLVM_ENABLE_PROJECTS=clang seems to do what I want (see LLVM_ENABLE_PROJECTS here).
If I use the instructions written there, I can compile, but I think I am compiling too much... a build directory of 70 GB seems excessive to me.
I tried downloading the official Debian source and building the Debian package (same source code, just using the "Debian way" of creating a package), just to compare. That compilation goes smoothly, is very fast, and the build directory is much, much smaller, as I expected.
I noticed in the first link I provided the phrase "This builds both LLVM and Clang for debug mode."...
So, does anyone know if my problem is due to the fact that I am compiling a "debug mode" version? If so, how can I compile the default version? And is there a way to compile ONLY clang without LLVM?
Yes, debug mode binaries are typically much larger than release mode binaries.
CMake normally uses CMAKE_BUILD_TYPE to determine the build type. It can be set from the command line with -DCMAKE_BUILD_TYPE="Release" or -DCMAKE_BUILD_TYPE="Debug" (sometimes there are other build types as well).
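As a sketch, a Release configuration of just clang might look like the following (assuming an out-of-tree build directory next to the llvm source; paths are placeholders):

```shell
# hedged sketch: configure a Release build with only the clang project enabled
mkdir build && cd build
cmake -DLLVM_ENABLE_PROJECTS=clang -DCMAKE_BUILD_TYPE=Release ../llvm
# build just the clang target rather than everything
cmake --build . --target clang
```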
I am trying to build a c++/cuda extension with Pytorch following the tutorial here, (with instructions how to use pytorch with c++ here). My environment details are:
Using Microsoft Visual Studio 2019 version 16.6.5
Windows 10
libtorch C++ debug 1.7.0 with CUDA 11.0, installed from the PyTorch website
I am using this cmake code where I set the include directory for python 3.6 and the library for python36.lib
cmake_minimum_required (VERSION 3.8)
project ("DAConvolution")
find_package(Torch REQUIRED)
# Add source to this project's executable.
add_executable (DAConvolution "DAConvolution.cpp" "DAConvolution.h")
include_directories("C:/Users/James/Anaconda3/envs/masters/include")
target_link_libraries(DAConvolution "${TORCH_LIBRARIES}" "C:/Users/James/Anaconda3/envs/masters/libs/python36.lib")
if (MSVC)
  file(GLOB TORCH_DLLS "${TORCH_INSTALL_PREFIX}/lib/*.dll")
  add_custom_command(TARGET DAConvolution
                     POST_BUILD
                     COMMAND ${CMAKE_COMMAND} -E copy_if_different
                     ${TORCH_DLLS}
                     $<TARGET_FILE_DIR:DAConvolution>)
endif (MSVC)
I set the CMake command arguments to -DCMAKE_PREFIX_PATH=C:\libtorch (my path to the libtorch debug build mentioned above). I am building with the x64-Debug option in MSVC (building with the x64-Release option gives me a torch-NOTFOUND error).
The example DAConvolution.cpp file is:
#ifdef _DEBUG
#undef _DEBUG
#include <Python.h>
#define _DEBUG
#else
#include <Python.h>
#endif
#include <torch/extension.h>
Where I have undefined the _DEBUG flag so that the linker does not look for the python36_d.lib file (which I do not have).
I am getting a linking error:
Simply including torch.h works fine, but when I want to include the extension header, that's when I get these problems, as it uses Pybind11 I believe. Any insights much appreciated. I have tried to include all the info I can, but would be happy to give more information.
On Windows with Visual Studio, you are better off working with Visual Studio directly rather than with CMake.
Just create a simple Console Application, go to the project's Properties, change the Configuration Type to Dynamic Library (.dll), configure the include and library directories, add the required entries to your linker under Linker > Input (such as torch.lib, torch_cpu.lib, etc.), and you are good to go. Click Build, and if you have done everything correctly you'll get a dll that you can use (e.g. load it using torch.classes.load_library from Python).
The Python debug version is not shipped with Anaconda or the normal Python distribution, but if you install the Microsoft Python distribution (which I believe can be downloaded/installed from the Visual Studio installer), it is available.
Also, starting from Python 3.8, I believe the debug binaries are shipped as well.
In case they are not, see this.
For the CMake part you can follow something like the following. This is a stripped-down version of the CMake file I made for my own Python extension some time ago.
Read it and change it based on your own requirements; it should be straightforward:
# NOTE:
# TORCH_LIB_DIRS needs to be set. When calling cmake you can specify them like this:
# cmake -DCMAKE_PREFIX_PATH="somewhere/libtorch/share/cmake" -DTORCH_LIB_DIRS="/somewhere/lib" ..
cmake_minimum_required(VERSION 3.1 FATAL_ERROR)
project(DAConvolution)
find_package(Torch REQUIRED)
# we are using C++17; if you are not, change this or remove it altogether
set(CMAKE_CXX_STANDARD 17)
# define where your headers and libs are, e.g. where your DAConvolution.h resides
include_directories(somewhere/Yourinclude_dir ${TORCH_INCLUDE_DIRS})
set(DAConvolution_SRC ./DAConvolution.cpp )
LINK_DIRECTORIES(${TORCH_LIB_DIRS})
add_library(
DAConvolution
SHARED
${DAConvolution_SRC}
)
# if you use some custom libs, you previously built, specify its location here
# target_link_directories(DAConvolution PRIVATE somewhere/your_previously_built_stuff/libs)
target_link_libraries(DAConvolution ${TORCH_LIB_DIRS}/libc10.so)
target_link_libraries(DAConvolution ${TORCH_LIB_DIRS}/libtorch_cpu.so)
install(TARGETS DAConvolution LIBRARY DESTINATION lib )
Side note:
I made the CMake file for Linux only, so under Windows I always use Visual Studio (2019 to be exact), in the same way I explained earlier. It's by far the easiest approach, IMHO. Suit yourself and choose whichever best fits your problem.
I have set up static library builds of zlib and libpng. Both compile fine into .lib files. I am using MSVC 2010.
With this setup, to use libpng.lib you also need to link against zlib.lib. To avoid this, I'm trying to use lib.exe to merge zlib into libpng directly. My invocation looks like:
call "C:/Program Files (x86)/Microsoft Visual Studio 10.0/VC/bin/lib.exe" /OUT:x64\Release\libpng2.lib x64\Release\libpng.lib ..\zlib\x64\Release\zlib.lib /LTCG
In both of their project settings, I explicitly set Librarian > General > Target Machine to MachineX64. And, using dumpbin, I can check that the relevant zlib.lib and libpng.lib are both compiled for x64.
Additionally, "General->Whole Program Optimization" and "C/C++->Optimization->Whole Program Optimization" have identical values.
The problem only occurs for x64 Release configurations. x86 Debug, x86 Release, and x64 Debug all work fine.
EDIT: Specifically, the problem is that I get a C1905/LNK1257 error:
C1905: Front end and back end not compatible (must target same processor).
LNK1257: code generation failed
I ran into this problem with VS2012. The lib.exe you're calling is part of the x86 tools. In the amd64 subfolder in VC/bin you will find the x64 versions. Opening a Visual Studio x64 Win64 Command Prompt will set your PATH correctly or you can call the x64 lib.exe directly, specifying its full path as you are doing now.
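Concretely, the fix might look like the following (the amd64 subfolder path is an assumption based on a default VS2010 install; adjust to your layout):

```shell
rem use the x64 librarian instead of the x86 one
call "C:/Program Files (x86)/Microsoft Visual Studio 10.0/VC/bin/amd64/lib.exe" /OUT:x64\Release\libpng2.lib x64\Release\libpng.lib ..\zlib\x64\Release\zlib.lib /LTCG
```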
Hi, I have been compiling LLVM and Clang in my Cygwin environment using the CC=gcc-4 and CXX=g++-4 flags, as gcc 3.4.x doesn't seem to compile LLVM/Clang at all. But my question is about the extremely long compilation time. I have been compiling since 8 pm in the evening, and right now it's 1:35 am. Also, the size of my build directory has grown above 8 gigabytes. And still I see
llvm[5]: Linking Debug+Asserts executable clang-format
Is this normal? Can i somehow make this faster?
Here are some stats
Compiler: GCC 4.5.3
Clang, LLVM: 3.2
A Debug+Asserts build took me around 8 hours, with a total build size over 11 gigabytes.
A Release+Asserts build took a mere 1 hour, with only 800 megabytes of build output.
Also, for the Release build (configured with --enable-optimized) I used make with -j 4. But I highly doubt the long compilation time was mainly due to the debug build, despite the warning from the build process itself:
Note: Debug build can be 10 times slower than an optimized build
I suspect this is because of Cygwin. You should be able to build them with MS Visual Studio, and some have done it with Mingw.
What you're seeing is pretty much expected. LLVM / Clang are written in C++, so there are tons of debug information there. The linker has a really hard time trying to merge everything together.
On Linux the usual suggestion is to try gold instead of ld. This usually speeds everything up tenfold.
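For a modern CMake-based LLVM checkout, switching to gold can be requested at configure time; note the LLVM_USE_LINKER option is an assumption that holds for newer LLVM releases, not for the 3.2-era autoconf build discussed above:

```shell
# newer CMake builds of LLVM: ask the build to link with gold
cmake -DCMAKE_BUILD_TYPE=Release -DLLVM_USE_LINKER=gold ../llvm
# for older autoconf-based builds, the usual trick is to make "ld" on
# your PATH resolve to ld.gold instead of the default bfd linker
```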
I have been trying for a few days to build a project based on the UIMA C++ framework (http://uima.apache.org/). I am currently using the version 2.4.0 release candidate 6, which comes with Linux and Windows binaries so that all dependencies are easily bundled.
In particular, it comes with binary libraries for ICU (3.6 I believe).
In my project, I am building a C++ UIMA annotator and my code makes use of Boost C++ library v1.51.0.
Everything compiles fine, but at runtime I get Access Violation exceptions when I start using, say, operator<<(ostream&, const icu::UnicodeString&). It may be a problem of version incompatibility between Boost and UIMA C++.
So, I'm trying to recompile Boost on my machine, telling it to reuse the ICU that comes along with UIMA C++, but there seems to be a problem with MSVC toolset because I always get messages telling me there is no ICU available when building Boost:
c:\Users\Sylvain\boost_1_51_0>b2 toolset=msvc-10.0 variant=release -sICU_LINK=c:\users\sylvain\apache-uima\uimacpp
Building the Boost C++ Libraries.
Performing configuration checks
- 32-bit : yes
- x86 : yes
- has_icu builds : no
warning: Graph library does not contain MPI-based parallel components.
note: to enable them, add "using mpi ;" to your user-config.jam
- iconv (libc) : no
- iconv (separate) : no
- icu : no
- icu (lib64) : no
- gcc visibility : no
- long double support : yes
Has anyone managed to build Boost with the -sICU_PATH options and MSVC?
Thanks,
Sylvain
Just had to build Boost with ICU (and succeeded). Since this question is one of the first results on google (and not of very much help right now), I decided to share what I learned.
I was doing an x64 build of Boost 1.56 with MSVC11 (Visual Studio 2012), linking against a custom build of ICU 4.8.1.
First of all, Boost's directory detection for ICU seems a little weird. Here is what my final layout for the ICU directory looked like:
my_icu_root
+- bin
+- bin64
+- include
+- layout
+- unicode
+- lib
+- lib64
I copied all ICU dlls (both Debug and Release versions) to bin, all libs (again Debug and Release) to lib and all header files to include. To make bjam happy, I also had to copy the full bin and lib directories to bin64 and lib64 respectively. Without both directories, either detection of ICU or compilation of Boost.Locale would fail on my machine.
With this layout, all I had to do was to add -sICU_PATH=<my_icu_root> to my usual bjam command line to get it to build.
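A full command line matching that setup might look roughly like this (the toolset version and root path are assumptions; substitute your own):

```shell
rem x64 build of Boost with ICU detection, MSVC11 toolset
b2 toolset=msvc-11.0 address-model=64 -sICU_PATH=C:\my_icu_root stage
```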
You know that Boost successfully detected ICU if you get both
- has_icu builds : yes
[...]
- icu : yes
during configuration.
Here is some advice if, for some reason, it does not work right away.
Bjam caches configure information to bin.v2/project-cache.jam. If you try to re-run Bjam after a failed configuration, be sure to delete that file first. Otherwise bjam might decide to just skip ICU detection altogether (you will see lots of (cached) in the console output).
If configuration fails, take a look at bin.v2/config.log to get detailed information on what went wrong. Most likely, it was unable to compile the test program at libs/regex/build/has_icu_test.cpp. This log contains the command line of the build command, which is usually enough to find out what went wrong. If the log seems suspiciously empty, you probably forgot to delete the project-cache.jam.
Finally, a successful configure run is no guarantee for a successful build. On my machine, I managed to configure everything correctly but still had Boost.locale fail during build because of missing lib files. So be sure to check the build output for failed or skipped targets.
Good luck!
Take a look at boost/libs/regex/build/has_icu_test.cpp. I can't remember the fix/issue off the top of my head, but you should be able to cheat and simply return 0 from main() there.
Maybe Boost doesn't work with a six-year-old ICU. Can you rebuild UIMA instead?
My command line is as follows:
bjam -sICU_PATH=c:\icu --toolset=msvc-10.0 variant=release stage
Just look into \bin.v2\config.log — it contains the exact error. In my case it was the absence of a specific library at link time:
...found 10 targets...
...found 3 targets...
...found 66 targets...
...updating 2 targets...
msvc.link bin.v2\libs\regex\build\msvc-10.0\debug\threading-multi\has_icu.exe
LINK : fatal error LNK1181: cannot open input file 'icuind.lib'
The problem is that Boost's build looks for the debug library even when variant=release is requested.
I'm experiencing the same problem, and the way I chose to work around it is to make a copy of icuin.lib named icuind.lib, and likewise for the other libs. Then bjam says it has found ICU.