Can rust-toolchain.toml be a development environment descriptor?

Currently, rust-toolchain.toml allows specification of the release channel, target platforms, and associated tooling (compiler, package manager, etc.). Unfortunately, the components key, which accepts additional tools, does not accommodate cargo-watch or trunk (a bundler for WASM crates). As a newcomer, their rejection seems strange, and their exclusion limits the considerable utility of rust-toolchain.toml for automating the tooling of a development environment.
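For reference, here is a minimal sketch of what the file does accept today (the channel, components, and targets below are illustrative values, not recommendations):
cat > rust-toolchain.toml <<'EOF'
[toolchain]
channel = "stable"
components = ["rustfmt", "clippy", "rust-src"]
targets = ["wasm32-unknown-unknown"]
EOF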
The question is: am I missing something? Is there a way to integrate these tools into rust-toolchain.toml, is there some other way for them to be specified (apart from a shell script), or are they redundant?
Presently I install them manually: cargo install cargo-watch trunk. Yes, this is easy and simple, but it is also undocumented, forgettable, and clumsy.
I must say, Rust and its tooling are impressive.

As far as I know, the components key is specifically for the toolchain's internal components. These components are also toolchain-specific; e.g., a rustup +stable component add rust-src is different from a rustup +nightly component add rust-src.
On the other hand, crates from crates.io (which is what cargo install can install) are essentially toolchain-independent. So it makes sense to me that crates in general cannot be specified in a rust-toolchain.toml file, which is more about pinning the toolchain to a specific version.
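To make the distinction concrete (the component and crate names are just examples):
rustup +stable component add rust-src    # installs stable's copy of the component
rustup +nightly component add rust-src   # installs a different, nightly-specific copy
cargo install cargo-watch trunk          # builds crates from crates.io once, independent of the active toolchain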
However, regarding cargo plugins specifically, maybe you can find a compelling way to propose this as a new feature to cargo (e.g., allowing cargo plugins to be specified in config.toml).

No, the components of a rust-toolchain.toml are a specific set of tools developed and distributed by the Rust language team; the key is only used to augment the built-in cargo commands.

Related

Should Cargo.lock be committed when the crate is both a Rust library and an executable?

I've read https://doc.rust-lang.org/cargo/guide/cargo-toml-vs-cargo-lock.html
If I understand correctly, when I commit Cargo.lock into the repository of my crate (which is both a library and an executable) and also publish it to crates.io, downstream crates will ignore it and build their own snapshot, right?
Yes, crates that depend on your library will ignore your Cargo.lock. The Cargo FAQ provides more details:
Why do binaries have Cargo.lock in version control, but not libraries?
The purpose of a Cargo.lock is to describe the state of the world at the time
of a successful build. It is then used to provide deterministic builds across
whatever machine is building the package by ensuring that the exact same
dependencies are being compiled.
This property is most desirable from applications and packages which are at the
very end of the dependency chain (binaries). As a result, it is recommended that
all binaries check in their Cargo.lock.
For libraries the situation is somewhat different. A library is not only used by
the library developers, but also any downstream consumers of the library. Users
dependent on the library will not inspect the library’s Cargo.lock (even if it
exists). This is precisely because a library should not be deterministically
recompiled for all users of the library.
If a library ends up being used transitively by several dependencies, it’s
likely that just a single copy of the library is desired (based on semver
compatibility). If Cargo used all of the dependencies' Cargo.lock files,
then multiple copies of the library could be used, and perhaps even a version
conflict.
In other words, libraries specify semver requirements for their dependencies but
cannot see the full picture. Only end products like binaries have a full
picture to decide what versions of dependencies should be used.
I found the best practice in the excellent ripgrep project, which splits itself into several crates. For the binary crate in the root, they track Cargo.lock, but for the library crates that provide functionality for the application (for example, pcre2), they don't.
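Roughly, that convention looks like this (the crate names and paths below are hypothetical, not ripgrep's actual tree):
cd myapp       # crate that builds a binary: keep the lockfile under version control
git add Cargo.lock
cd ../mylib    # pure library crate: downstream users ignore its lockfile anyway
echo Cargo.lock >> .gitignore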

Can I get Kronos-Haskell to use my Haskell platform?

Kronos-Haskell installs as a self-contained application and can exist (as near as I can tell) alongside an installation of the Haskell Platform without any issues or interactions. This is a nice feature, however I would like to use a current version of Haskell, along with some additional packages I've installed to the Haskell Platform.
Is there a way to get Kronos-Haskell to use my installation of the Haskell Platform?
You can build IHaskell yourself using this recipe from the README:
git clone http://www.github.com/gibiansky/IHaskell
cd IHaskell
./macos-install.sh
Note: it might take a while, as there are a lot of dependencies.

Best practice: deploying dependencies on Linux

What is the best practice for deploying dependencies on Linux when shipping your own application?
Some SO posts recommend including all dependencies in the package (utilizing LD_LIBRARY_PATH); other posts recommend shipping only the binary and using the "dependency" feature of DEB/RPM packages instead. I tried the second approach, but immediately ran into the problem that one dependency (libicu52) does not seem to be available in certain Linux distributions yet. For example, in my openSUSE test installation, only libicu51 is available in the package manager.
I initially thought that the whole idea of the packaging system is to avoid duplicate .so files in the system. But does it really work (see above), or should I rather ship all dependencies with my app to make sure it runs on all distributions?
For a custom application that "does not care" about distribution-specific packaging, versioning, upgrades, etc., I would recommend redistributing the dependencies manually.
You can use the RPATH linker option: by setting its value to $ORIGIN, you tell the dynamic linker to search for libraries in a directory relative to the binary itself, without needing to set LD_LIBRARY_PATH before execution:
gcc -Wl,-rpath,'$ORIGIN/../lib'
Example taken from here.
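Fleshing that out into a minimal sketch (the file and library names are hypothetical): with the executable in bin/ and a bundled libfoo.so in lib/, you link like this and can then verify the entry was embedded:
gcc main.c -o bin/tool -Llib -lfoo -Wl,-rpath,'$ORIGIN/../lib'   # single quotes keep $ORIGIN literal for the linker
readelf -d bin/tool | grep -E 'RPATH|RUNPATH'                    # confirm the path made it into the binary
The whole tree can then be copied anywhere on the target system, and the binary will still find its bundled dependencies.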

Tools to help manage sets of multiple versions of executables on Linux?

We are in a networked Linux environment and what I'm looking for is a FOSS or generic system level method for managing what versions of executables and libraries get used, per session. The executables would preferably be installed on the network. The executables will be in-house tools and installs of commercial packages like Houdini, Maya and Nuke.
The need for this is that we'd prefer to have multiple versions of the software installed and available for the artists but there needs to be an easy way to select which version to use. As an added benefit, I'd like to be able to track the version of software used to generate a given output as metadata. I've worked at studios that did this successfully but I was not 100% up to speed on how it was achieved. Every executable in a given set was assigned a single uber version for the set. That way, the "approved packages" of the studio tools were all collapsed into a single package of tools that were known to work together.
Due to the way they install, some programs make setting this up easy (It's as simple as adding their install directories to $PATH). Other programs don't make it quite so easy. I'm particularly worried about how to handle the libraries a program might install. What's needed is a generic access method I can use to wrap everything into a clean front end.
Does anyone know of such a system available in the wild or am I going to have to implement it from scratch? Google hasn't been very helpful in finding a solution.
Thanks!
Check out the "modules" system at http://modules.sourceforge.net/; it's quite widely used in HPC.
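To sketch how that looks per session (the package name and version below are hypothetical): each tool gets a small modulefile that prepends its install directory to PATH, and artists then pick versions explicitly:
module avail houdini          # list the versions installed on the network
module load houdini/19.5      # prepend that version's directories to PATH etc.
module list                   # shows everything loaded; can be captured as output metadata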
There is also eselect. I have only used it on Funtoo (an offshoot of Gentoo), but it seems to do what you need. It is also written entirely in Bash, so it should be quite possible to port it to other distros.

Bundling an SCons-based source

We are using SCons for all our build needs, and we would like to distribute a library as open source.
Since most software uses ./configure, make, and make install as its build mechanism, we were wondering how we should bundle our library.
We have the following solutions:
Just bundle like the way it is, requiring scons to build.
Add a dummy configure and makefile that just call scons.
Add autoconf and a makefile.
How is it perceived when a piece of software requires Python and SCons to build?
I think it depends largely on your target audience (i.e., users who can easily install SCons if they don't have it, or ones who can't), but if you are distributing source at all, then presumably your users are happy compiling things, and they can install SCons too (and Python, if for some obscure reason they don't have it already).
Also, if you are worried about people not being able to build it, you should probably be distributing a binary package anyway.
If your library is cross-platform and can be compiled on Windows too, then using scons is the right choice.
Another option would be to include the scons-local version in the package. This reduces the dependencies to just Python.
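As a sketch of that option (the version number is illustrative): unpack a scons-local release into the source root, and users need nothing beyond Python:
tar -xzf scons-local-4.7.0.tar.gz -C .    # adds scons.py and a scons-local-*/ directory
python scons.py                           # builds from your SConstruct with the bundled SCons
A two-line Makefile whose default target runs python scons.py would then also cover option 2 from the question, giving make-accustomed users their familiar entry point.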
