How do I build with a custom libstd? - rust

I want to make some changes in libstd and then test them with a toy program. It looks like I can build libstd.so by going to rust/src/libstd and doing a (nightly) cargo build. Once I've done that, how do I get a toy program to build with that libstd instead of the regular version installed on my system?

There are two possibilities in my mind.
Build the compiler from source every time
Download the Rust source
Make your changes to std
Follow the steps for building from source
Pass an option to rustc that modifies its search path
Run rustc --help
The first two options (--cfg SPEC or -L [KIND=]PATH) are probably where you would point rustc to your version of std.
I am not entirely sure how this would work. Ideally someone more knowledgeable could answer this part, because I think it is the preferred solution and much easier.
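For what it's worth, here is a rough sketch of the first approach as it can look with the in-tree build system (assuming x.py and rustup; the toolchain name my-std is made up, and the stage1 path depends on your host triple and tree layout):

./x.py build library                                        # build a stage1 compiler plus your modified std
rustup toolchain link my-std build/<host-triple>/stage1     # register it with rustup under a custom name
cd ../toy-program
cargo +my-std build                                         # compile the toy program against the modified std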

Related

error: linker `x86_64-w64-mingw32-gcc` not found

I am using macOS Big Sur and I am trying to cross compile to Windows, but the problem is that this error, "error: linker `x86_64-w64-mingw32-gcc` not found", prevents me from doing that. Here are my cargo dependencies:
[dependencies]
rand = "0.8.4"
macroquad = "0.3.13"
perlin_rust = "0.1.0"
libm = "0.2.2"
I have tried cargo clean/update, and I have tried the msvc target instead of gnu.
TL;DR:
Besides installing a cross target with rustup, you need to install an actual cross linker and tell cargo about it, using a cargo config file or an environment variable.
It seems you are attempting to cross compile your package.
You can read more about cross compilation here.
In a nutshell, a compiler is a program that takes your text source code and produces something that your operating system and CPU can understand.
When you are building software for the platform you are developing on, everything is easy: you already have all the tools. But when you want to target another platform or OS, you need a compiler that runs on your machine yet outputs a binary meant to work on the target platform/OS.
So, in your case, you need to install a cross toolchain for macOS that targets mingw, because Rust does not ship a cross linker itself. Once you have a cross toolchain, all you need to do is tell cargo how to find it.
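For example, on macOS one way to get such a toolchain and the matching Rust target might look like this (assuming Homebrew and its mingw-w64 formula):

brew install mingw-w64                    # provides x86_64-w64-mingw32-gcc and friends
rustup target add x86_64-pc-windows-gnu   # the Rust side of the cross target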
Here is a project that aims to make cross compilation less painful.
I also strongly advise you to read the cargo book.
Here you can see one of the ways of telling cargo about the cross linker.
Another way is to use an environment variable (which I like better and find easier to use with makefiles),
and below you can see an example of that from one of my makefiles.
Again, the cargo book refers to it.
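To sketch both mechanisms (the target triple and linker name below assume the mingw setup described above):

# .cargo/config.toml
[target.x86_64-pc-windows-gnu]
linker = "x86_64-w64-mingw32-gcc"

# or, as an environment variable, e.g. from a makefile
CARGO_TARGET_X86_64_PC_WINDOWS_GNU_LINKER=x86_64-w64-mingw32-gcc cargo build --target x86_64-pc-windows-gnu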
Overall, cross compiling is painful; it took me quite some time to understand the mechanics of it, but it was worth it compared to copy-pasting commands I found on blogs.
I also feel like it badly lacks documentation. The cargo book doesn't tell you anything about finding a linker, assumes you already know this, and pictures cross compiling as something that just works out of the box after installing a target toolchain with rustup.
I had the same problem; the reason was that there were special characters, such as accents (á, í, etc.), in the directory where I had the project.
So I changed them to regular characters and the problem stopped showing up.

Makefile explanation. Understanding someone else's Makefile

I am relatively new to programming on Linux.
I understand that Makefiles are used to ease the compiling process when compiling several files.
Rather than writing "g++ main.cpp x.cpp y.cpp -o executable" every time you need to compile and run your program, you can throw it into a Makefile and run make in that directory.
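For instance, that single g++ command might turn into a minimal Makefile like this sketch (file names come from the example above, not from tmrh20's project; recipe lines must start with a literal tab):

executable: main.cpp x.cpp y.cpp
	g++ main.cpp x.cpp y.cpp -o executable

clean:
	rm -f executable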
I am trying to get an RPi and an Arduino to communicate with each other using the nRF24L01 radios, using tmrh20's library here. I have been successful using tmrh20's Makefile to build the executable needed (on the RPi). I would like, however, to use tmrh20's library to build my own executables.
I have watched several tutorial videos on Makefiles but still cannot seem to piece together what is happening in tmrh20's.
The Makefile (1) in question is here. I believe it is somehow referencing a second Makefile (2) (for filenames?) here. (Why is this necessary?)
If it helps anyone understand (it took me a while) I had to build using SPIDEV (the instructions here) the Makefile (3) in the RF24 directory which produced several object files which I think are relevant to Makefile (1)&(2).
How do I find out what files I need to make my own Makefile, from tmrh20's Makefile (if that makes sense?) He seems to use variables in his Makefile that are not defined? Or are perhaps defined elsewhere?
Apologies for my poor explanation.
The canonical sequence is not just make and make install. There is an initial ./configure step (such a file is here) that sets up everything and generates several files used in the make steps.
You only need to run this configure script successfully once, unless you want to change build parameters. I say "successfully" because the first execution will usually complain that you are missing libraries or header files. But once ./configure runs without errors, make and make install should run without errors too.
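In other words, the usual sequence from the top of the source tree is roughly:

./configure          # check for tools/libraries and generate the Makefiles
make                 # build
sudo make install    # copy the results into system directories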
PS: I didn't try to compile it, but since the project has a rather comprehensive configure, it is likely complete, and you shouldn't need to tweak makefiles if you follow the usual procedure.
The reason for splitting the Makefiles in the way you've mentioned and linked to here is to separate the definition of the variables from the implementation. This way you could have multiple base Makefiles that define their PROGRAM variable differently, but all do the same thing based on the value of that variable.
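A stripped-down illustration of that pattern (file and variable names here are made up rather than taken from tmrh20's tree, and it assumes the RF24 library is installed as librf24; recipe lines must start with a tab):

# Makefile: defines what to build, then pulls in the shared rules
PROGRAM = gettingstarted
include Makefile.common

# Makefile.common: generic rules that only rely on $(PROGRAM)
$(PROGRAM): $(PROGRAM).cpp
	g++ $(PROGRAM).cpp -o $(PROGRAM) -lrf24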
In my own personal opinion, I see some value here, but there are very many ways to skin this proverbial cat.
Having learned GNU Make the hard way, I can only recommend you do the same. The learning curve is a bit steep at the beginning, but once you get the main concepts down, following other people's Makefiles gets pretty easy.
Good luck: https://www.gnu.org/software/make/manual/html_node/index.html

Handling autoconf with Android after NDK16

I'm trying to update an existing configuration we have; we are cross compiling for a number of targets, and the question here is specifically about Android. More specifically, we are building code using cmake and the hunter package manager. However, we are building ICU using a link that uses autoconf/configure, called from cmake. Not sure that is specifically important, except that we have less control over the use of configure than is generally the case.
OK: we have a version that builds against an old NDK but I am updating and have hit a problem identified by https://android.googlesource.com/platform/ndk/+/master/docs/UnifiedHeaders.md: with NDK16 and later, the value of the sysroot parameter needs to vary between compilation and linkage. As it stands the configure script tries to build a small program conftest.c - the program fails to link. Manually I can compile the code in two stages using -c and then linking the subsequent .o, but that is not what configure is trying to do.
Now the reality is that when I build this code, I don't actually need to link the code - I am generating a library which is used elsewhere. However that is not currently the way that configure sees it.
I may look to redo the configuration script to just check that the code can be compiled when cross compiling. However I am curious to know if anybody has managed to handle this sort of thing by keeping the existing config files and just changing the parameters by which the scripts are called.
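For reference, the manual two-stage workaround mentioned above looks roughly like this with unified headers (the API level, arch and target triple are illustrative, and $CC stands for the NDK clang for your target):

# compile step: the unified-headers sysroot, plus the per-arch include dir and API define
$CC -c conftest.c --sysroot $NDK/sysroot -isystem $NDK/sysroot/usr/include/arm-linux-androideabi -D__ANDROID_API__=21
# link step: the old per-API platform sysroot
$CC -o conftest conftest.o --sysroot $NDK/platforms/android-21/arch-arm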
When r19 releases to stable this problem will go away on its own (https://github.com/android-ndk/ndk/issues/780), but since that's still in beta it's not a good solution just yet.
Prior to r19 (this isn't really unique to r16+, this has always been the case and it was just asymptomatic previously), autoconf builds should be done using a standalone toolchain.
You however should not use a standalone toolchain for CMake, so odds are something about your configuration will need to change until r19 is released. Depending on the effort involved, it may make sense to keep to r15 until r19 is available.
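For what it's worth, prior to r19 the standalone-toolchain route for autoconf builds usually looks something like this (arch, API level and paths are illustrative):

$NDK/build/tools/make_standalone_toolchain.py --arch arm --api 21 --install-dir /tmp/android-toolchain
export PATH=/tmp/android-toolchain/bin:$PATH
./configure --host=arm-linux-androideabi CC=arm-linux-androideabi-clang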

best practice for building with/out included libraries

A project ships with a copy of library foo, in a filesystem layout like:
myproject/
myproject/src/ # sources of my project
myproject/libfoo/ # import of "foo" library
The standard (autotools-based) build system builds libfoo, then builds myproject, which dynamically links against libfoo.
libfoo is basically unmodified (with some minor amendments to properly fit into the build system). libfoo uses autotools itself, so I'm usually calling configure recursively using AC_CONFIG_SUBDIRS.
However, libfoo is already packaged for various distributions, so I would like to avoid building against the imported library on these systems and rather use the system-wide installation; this way I get the benefits of a better-maintained version of libfoo (fewer bugs, security issues, ...).
On the other hand, I want to keep libfoo in my source tree, so that I have a fallback for building on systems that do not ship that library (without requiring the user to separately fetch the sources and build the lib themselves).
I can think of a number of configure flags I could introduce, so the user can select whether they want to build the project with the system-installed library, with the local copy, or without the library at all (it's an optional dependency).
Disabling the "local foo" should completely disable building of libfoo (and probably also configuring it).
E.g. something like:
./configure --enable-foo=no # aka "--disable-foo": build without foo
./configure --enable-foo # use system-wide foo
./configure --enable-foo=local # use local copy of foo
alternatively:
./configure --disable-foo
./configure --enable-foo --disable-local-foo
./configure --enable-foo --enable-local-foo
But I'd like to do this in a standard-conformant way.
What's the best practice for selecting, via autoconf, whether to use a local copy of a library, a system-wide copy, or no library at all?
Pointers to projects that use such a mechanism are most welcome.
I have a similar setup in my project, where I use the included version of the BuDDy library when (1) the library isn't already installed, or (2) it is installed but does not have the interface I expect, or (3) configure was run with --with-included-buddy.
You can see the configure macro here. After that I just use $(BUDDY_CPPFLAGS) and $(BUDDY_LDFLAGS) in the Makefile.ams, and the top-level Makefile.am only includes the buddy directory conditionally in SUBDIRS.
I prefer --with-foo when dealing with external software, but it's just a preference. The examples and documentation at the link might help you decide how you want to do it. I'd go with your first example that uses only one flag rather than the second one that uses two flags for easier documentation/maintenance.
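A rough configure.ac/Makefile.am sketch of that idea, transposed to the foo example (all names are illustrative, and it assumes foo ships pkg-config metadata):

dnl configure.ac
AC_ARG_WITH([foo],
  [AS_HELP_STRING([--with-foo@<:@=system|local|no@:>@], [use libfoo (default: system)])],
  [], [with_foo=system])
AS_IF([test "x$with_foo" = xsystem], [PKG_CHECK_MODULES([FOO], [foo])])
AM_CONDITIONAL([LOCAL_FOO], [test "x$with_foo" = xlocal])

# top-level Makefile.am: only descend into the bundled copy when it was selected
SUBDIRS = src
if LOCAL_FOO
SUBDIRS += libfoo
endif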
I really don't think you want to do this
You're going to make the build machinery a lot more complicated, and autotools are already considered black magic by most. It'll make things a lot more complicated for the developer, a little more complicated for a potential distro packager and ever-so-slightly easier for the end user.
If you're conditionally configuring then you make the process of building distribution tarballs (make dist/make distcheck) more brittle.
This is the sort of trouble you can cause.
But if you must...
You may be able to adapt the code in my recent answer.

Where did the first make binary come from?

I'm having to build gnu make from source for reasons too complicated to explain here.
I noticed to build it I require the make command itself, in the traditional fashion:
./configure
make install
So what if I didn't have the make binary already? Where did the first ever make binary come from?
From the same place the first gcc binary came from.
The first make was probably created using a shell script to do the build. After that, make would "make" itself.
It's a notable achievement in systems development when the platform becomes "self-hosting". That is the platform can build itself.
Things like "make make" and "gcc gcc.c".
Many language writers will create their language in another language (say, C) and when they have moved it far enough along, they will use that original bootstrap compiler to write a new compiler in the original language. Finally, they discard the original.
Back in the day, a friend was working on a debugger for OS/2, notable for being a multi-tasking operating system at the time. And he would regale us with stories about the times when they were debugging the debugger and found a bug. So, they would debug the debugger debugging the debugger. It's a novel concept and goes to the heart of computing and abstraction.
Inevitably, it all boils back to when someone keyed in something through a hardwire key pad or some other switches to get an initial program loaded. Then they leveraged that program to do other work, and it all just grows from there.
Stuart Feldman, then at AT&T, wrote the source code for make around the time of 7th Edition UNIX™, and used manual compilation (or maybe a shell script) until make was working well enough to be used to build itself. You can find the UNIX Programmer's Manual for 7th Edition online, and in particular, the original paper describing the original version of make, dated August 1978.
make is just one convenience tool. It is still possible to invoke cc, ld, etc. manually or via other scripting tools.
If you're building GNU make, have a look at build.sh in the source tree after running configure:
# Shell script to build GNU Make in the absence of any `make' program.
# build.sh. Generated from build.sh.in by configure.
Compiling C programs is not the only way to produce an executable file. The first make executable (or more notably the C compiler itself) could for example be an assembly program, or it could be hand coded in machine code. It could also be cross compiled on a completely different system.
The essence of make is that it is a simplified way of running some commands.
To make the first make, the author had to manually act as make, and run gcc or whatever toolset was available, rather than having it run automatically.
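Concretely, "acting as make" just means typing the recipe out by hand; for a small C program it would be something like this (file names are illustrative):

cc -c main.c              # compile each translation unit
cc -c util.c
cc -o program main.o util.o   # link the objects into the final executable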

Resources